Sharma, Govind K; Kumar, Anish; Jayakumar, T; Purnachandra Rao, B; Mariyappa, N
2015-03-01
A signal processing methodology is proposed in this paper for effective reconstruction of ultrasonic signals in coarse-grained, highly scattering austenitic stainless steel. The proposed methodology comprises Ensemble Empirical Mode Decomposition (EEMD) processing of ultrasonic signals and application of a signal minimisation algorithm to selected Intrinsic Mode Functions (IMFs) obtained by EEMD. The methodology is applied to ultrasonic signals obtained from austenitic stainless steel specimens of different grain sizes, with and without defects. The influence of probe frequency and the data length of a signal on EEMD decomposition is also investigated. For a particular sampling rate and probe frequency, the same range of IMFs can be used to reconstruct the ultrasonic signal, irrespective of the grain size in the range of 30-210 μm investigated in this study. The methodology is successfully employed for detection of defects in 50 mm thick coarse-grained austenitic stainless steel specimens. A signal-to-noise ratio improvement of better than 15 dB is observed for the ultrasonic signal obtained from a 25 mm deep flat bottom hole in a 200 μm grain size specimen. For ultrasonic signals obtained from defects at different depths, a minimum of 7 dB additional enhancement in SNR is achieved as compared to the sum-of-selected-IMFs approach. The application of the minimisation algorithm to the EEMD-processed signal in the proposed methodology proves to be effective for adaptive signal reconstruction with improved signal-to-noise ratio. The methodology was further employed for successful imaging of defects in a B-scan. Copyright © 2014. Published by Elsevier B.V.
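The reconstruction and SNR steps described in the abstract above can be sketched in a few lines; `reconstruct_from_imfs` and `snr_db` are hypothetical names, and the IMFs are assumed to have already been produced by an EEMD routine (the EEMD and minimisation algorithms themselves are not reproduced here):

```python
import math

def reconstruct_from_imfs(imfs, selected):
    """Sum a chosen subset of IMFs (pre-computed EEMD output, one list per IMF)."""
    n = len(imfs[0])
    return [sum(imfs[k][i] for k in selected) for i in range(n)]

def snr_db(signal, noise):
    """SNR in dB from peak signal amplitude and RMS noise level."""
    peak = max(abs(s) for s in signal)
    rms = math.sqrt(sum(x * x for x in noise) / len(noise))
    return 20.0 * math.log10(peak / rms)
```

With a fixed sampling rate and probe frequency, the same `selected` index range would be reused across grain sizes, which is the adaptivity claim the abstract makes.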
Wang, Xiaohua; Li, Xi; Rong, Mingzhe; Xie, Dingli; Ding, Dan; Wang, Zhixiang
2017-01-01
The ultra-high frequency (UHF) method is widely used in insulation condition assessment. However, UHF signal processing algorithms are complicated and the size of the result is large, which hinders extracting features and recognizing partial discharge (PD) patterns. This article investigates the chromatic methodology, which is novel in PD detection. The principle of chromatic methodologies in color science is introduced. The chromatic processing represents UHF signals sparsely. The UHF signals obtained from PD experiments were processed using the chromatic methodology and characterized by three parameters in chromatic space (H, L, and S, representing dominant wavelength, signal strength, and saturation, respectively). The features of the UHF signals were studied hierarchically. The results showed that the chromatic parameters were consistent with conventional frequency-domain parameters. The global chromatic parameters can be used to distinguish UHF signals acquired by different sensors, and they reveal the propagation properties of the UHF signal in the L-shaped gas-insulated switchgear (GIS). Finally, typical PD defect patterns were recognized using the novel chromatic parameters in an actual GIS tank, and good recognition performance was achieved. PMID:28106806
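As a rough illustration of the chromatic idea, a magnitude spectrum can be reduced to three band energies treated as R, G, B and mapped to (H, L, S) with the standard HLS transform. This sketch splits the spectrum into non-overlapping thirds, whereas published chromatic processing typically uses overlapping Gaussian processors, so treat the band choice as an assumption:

```python
import colorsys

def chromatic_params(spectrum):
    """Map a magnitude spectrum to (H, L, S) chromatic parameters.

    The three equal bands act as the R, G, B chromatic processors;
    H ~ dominant wavelength (band), L ~ signal strength, S ~ saturation.
    """
    n = len(spectrum)
    third = n // 3
    r = sum(spectrum[:third])
    g = sum(spectrum[third:2 * third])
    b = sum(spectrum[2 * third:])
    m = max(r, g, b) or 1.0  # avoid division by zero on an empty spectrum
    return colorsys.rgb_to_hls(r / m, g / m, b / m)
```

A spectrum concentrated in the lowest band maps to H = 0 with full saturation, illustrating how the three parameters summarise a long UHF record sparsely.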
Arun, Mike W J; Yoganandan, Narayan; Stemper, Brian D; Pintar, Frank A
2014-12-01
While studies have used acoustic sensors to determine fracture initiation time in biomechanical studies, a systematic procedure has not been established to process acoustic signals. The objective of the study was to develop a methodology to condition distorted acoustic emission data using signal processing techniques to identify fracture initiation time. The methodology was developed from testing a human cadaver lumbar spine column. Acoustic sensors were glued to all vertebrae, high-rate impact loading was applied, load-time histories were recorded (load cell), and fracture was documented using CT. Compression fracture occurred at L1 while the other vertebrae were intact. FFTs of the raw voltage-time traces were used to determine an optimum frequency range associated with high decibel levels. Signals were bandpass filtered in this range. A bursting pattern was found in the fractured vertebra while signals from the other vertebrae were silent. The bursting time was associated with the time of fracture initiation. The force at fracture was determined using this time and the force-time data. The methodology is independent of selecting parameters a priori, such as fixing a voltage level(s) or bandpass frequency and/or using the force-time signal, and allows determination of force based on the time identified during signal processing. The methodology can be used for different body regions in cadaver experiments. Copyright © 2014 Elsevier Ltd. All rights reserved.
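After bandpass filtering, the burst onset can be located by comparing short-window signal energy against a quiet baseline. A minimal sketch; the fixed multiplier `k` below is an illustrative threshold, not the paper's rule (the paper deliberately avoids a priori voltage levels):

```python
def burst_onset(signal, window, k=5.0):
    """Return the first sample index where the windowed mean-square energy
    exceeds k times the baseline energy of the first window, or None."""
    baseline = sum(x * x for x in signal[:window]) / window
    for i in range(window, len(signal) - window):
        energy = sum(x * x for x in signal[i:i + window]) / window
        if energy > k * baseline:
            return i
    return None
```

The returned index, converted to time, would then be looked up in the load-cell force-time history to read off the force at fracture.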
Signal processing of bedload transport impact amplitudes on accelerometer instrumented plates
USDA-ARS?s Scientific Manuscript database
This work was performed to help establish a data processing methodology for relating accelerometer signals caused by impacts of gravel on steel plates to the mass and size of the transported material. Signal processing was performed on impact plate data collected in flume experiments at the Nationa...
Panigrahy, D; Sahu, P K
2017-03-01
This paper proposes a five-stage methodology to extract the fetal electrocardiogram (FECG) from the single-channel abdominal ECG using a differential evolution (DE) algorithm, an extended Kalman smoother (EKS) and an adaptive neuro-fuzzy inference system (ANFIS) framework. The heart rate of the fetus can easily be detected after estimation of the fetal ECG signal. The abdominal ECG signal contains the fetal ECG signal, a maternal ECG component, and noise. To estimate the fetal ECG signal from the abdominal ECG signal, removal of the noise and the maternal ECG component present in it is necessary. The pre-processing stage is used to remove the noise from the abdominal ECG signal. The EKS framework is used to estimate the maternal ECG signal from the abdominal ECG signal. Optimized parameters of the maternal ECG components are required to develop the state and measurement equations of the EKS framework. These optimized maternal ECG parameters are selected by the differential evolution algorithm. The relationship between the maternal ECG signal and the maternal ECG component available in the abdominal ECG signal is nonlinear. To estimate the actual maternal ECG component present in the abdominal ECG signal, and to capture this nonlinear relationship, ANFIS is used. Inputs to the ANFIS framework are the output of the EKS and the pre-processed abdominal ECG signal. The fetal ECG signal is computed by subtracting the output of ANFIS from the pre-processed abdominal ECG signal. The non-invasive fetal ECG database and set A of the 2013 PhysioNet/Computing in Cardiology Challenge database (PCDB) are used for validation of the proposed methodology. The proposed methodology shows a sensitivity of 94.21%, accuracy of 90.66%, and positive predictive value of 96.05% on the non-invasive fetal ECG database, and a sensitivity of 91.47%, accuracy of 84.89%, and positive predictive value of 92.18% on set A of the PCDB.
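The reported sensitivity, accuracy and positive predictive value follow the usual beat-detection definitions. A minimal helper (the name `detection_metrics` is illustrative, and accuracy is taken as TP/(TP+FP+FN), as is common for QRS detectors):

```python
def detection_metrics(tp, fp, fn):
    """Beat-detection statistics: sensitivity TP/(TP+FN),
    positive predictive value TP/(TP+FP), accuracy TP/(TP+FP+FN)."""
    se = tp / (tp + fn)
    ppv = tp / (tp + fp)
    acc = tp / (tp + fp + fn)
    return se, ppv, acc
```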
NASA Astrophysics Data System (ADS)
Schmidt, S.; Heyns, P. S.; de Villiers, J. P.
2018-02-01
In this paper, a fault diagnostic methodology is developed which is able to detect, locate and trend gear faults under fluctuating operating conditions when only vibration data from a single transducer, measured on a healthy gearbox are available. A two-phase feature extraction and modelling process is proposed to infer the operating condition and based on the operating condition, to detect changes in the machine condition. Information from optimised machine and operating condition hidden Markov models are statistically combined to generate a discrepancy signal which is post-processed to infer the condition of the gearbox. The discrepancy signal is processed and combined with statistical methods for automatic fault detection and localisation and to perform fault trending over time. The proposed methodology is validated on experimental data and a tacholess order tracking methodology is used to enhance the cost-effectiveness of the diagnostic methodology.
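The discrepancy-signal idea above — score each observation against a model of the healthy machine, then flag statistically large scores — can be sketched with a Gaussian stand-in for the paper's hidden Markov models; the median + k·MAD threshold is an assumption, not the paper's post-processing:

```python
import math
import statistics

def discrepancy(values, mean, std):
    """Negative log-likelihood of each feature under a Gaussian 'healthy'
    model (a simplified stand-in for HMM likelihoods)."""
    c = math.log(std * math.sqrt(2.0 * math.pi))
    return [c + 0.5 * ((v - mean) / std) ** 2 for v in values]

def flag_faults(disc, k=3.0):
    """Flag samples whose discrepancy exceeds median + k * MAD."""
    med = statistics.median(disc)
    mad = statistics.median(abs(d - med) for d in disc)
    return [d > med + k * mad for d in disc]
```

Trending the flagged discrepancy over time, per operating condition, is what localises and tracks the developing gear fault.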
An Interdisciplinary Approach for Designing Kinetic Models of the Ras/MAPK Signaling Pathway.
Reis, Marcelo S; Noël, Vincent; Dias, Matheus H; Albuquerque, Layra L; Guimarães, Amanda S; Wu, Lulu; Barrera, Junior; Armelin, Hugo A
2017-01-01
We present in this article a methodology for designing kinetic models of molecular signaling networks, which was exemplarily applied for modeling one of the Ras/MAPK signaling pathways in the mouse Y1 adrenocortical cell line. The methodology is interdisciplinary, that is, it was developed in a way that both dry and wet lab teams worked together along the whole modeling process.
Cui, De-Mi; Yan, Weizhong; Wang, Xiao-Quan; Lu, Lie-Min
2017-10-25
Low strain pile integrity testing (LSPIT), due to its simplicity and low cost, is one of the most popular NDE methods used in pile foundation construction. While performing LSPIT in the field is generally quite simple and quick, determining the integrity of the test piles by analyzing and interpreting the test signals (reflectograms) is still a manual process performed by experienced experts only. For foundation construction sites where the number of piles to be tested is large, it may take days before the expert can complete interpreting all of the piles and delivering the integrity assessment report. Techniques that can automate test signal interpretation, thus shortening the LSPIT's turnaround time, are of great business value and in great demand. Motivated by this need, in this paper, we develop a computer-aided reflectogram interpretation (CARI) methodology that can interpret a large number of LSPIT signals quickly and consistently. The methodology, built on advanced signal processing and machine learning technologies, can be used to assist the experts in performing both qualitative and quantitative interpretation of LSPIT signals. Specifically, the methodology can ease experts' interpretation burden by screening all test piles quickly and identifying a small number of suspected piles for experts to perform manual, in-depth interpretation. We demonstrate the methodology's effectiveness using the LSPIT signals collected from a number of real-world pile construction sites. The proposed methodology can potentially enhance LSPIT and make it even more efficient and effective in quality control of deep foundation construction.
A new methodology for vibration error compensation of optical encoders.
Lopez, Jesus; Artes, Mariano
2012-01-01
Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in position accuracy as the measurement signals depart from ideal conditions. If the encoder is working under vibration, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that adds to graduation, system and installation errors. Behavior can be improved with techniques that compensate the error through processing of the measurement signals. In this work a new "ad hoc" methodology is presented to compensate the error of the encoder when it is working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, giving as a result a compensation procedure that yields higher sensor accuracy.
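A minimal sketch of the compensation idea: estimate the offsets and amplitudes that distort the Lissajous figure of the quadrature signals, re-centre and re-scale them, and recover the phase with atan2. The min/max fit below is a crude stand-in for the least-squares ellipse fitting the paper uses, and the look-up-table stage is omitted:

```python
import math

def compensate_quadrature(sig_a, sig_b):
    """Re-centre and re-scale distorted quadrature encoder signals, then
    recover the interpolated phase from the corrected Lissajous figure."""
    off_a = (max(sig_a) + min(sig_a)) / 2.0
    off_b = (max(sig_b) + min(sig_b)) / 2.0
    amp_a = (max(sig_a) - min(sig_a)) / 2.0
    amp_b = (max(sig_b) - min(sig_b)) / 2.0
    return [math.atan2((b - off_b) / amp_b, (a - off_a) / amp_a)
            for a, b in zip(sig_a, sig_b)]
```

For offset-only distortion this recovers the true phase exactly; gain and phase imbalance would need the full ellipse fit.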
Goggins, Sean; Marsh, Barrie J; Lubben, Anneke T; Frost, Christopher G
2015-08-01
Signal transduction and signal amplification are both important mechanisms used within biological signalling pathways. Inspired by this process, we have developed a signal amplification methodology that utilises the selectivity and high activity of enzymes in combination with the robustness and generality of an organometallic catalyst, achieving a hybrid biological and synthetic catalyst cascade. A proligand enzyme substrate was designed to selectively self-immolate in the presence of the enzyme to release a ligand that can bind to a metal pre-catalyst and accelerate the rate of a transfer hydrogenation reaction. Enzyme-triggered catalytic signal amplification was then applied to a range of catalyst substrates demonstrating that signal amplification and signal transduction can both be achieved through this methodology.
Signal processing methodologies for an acoustic fetal heart rate monitor
NASA Technical Reports Server (NTRS)
Pretlow, Robert A., III; Stoughton, John W.
1992-01-01
Research and development of real-time signal processing methodologies for the detection of fetal heart tones within a noise-contaminated signal from a passive acoustic sensor is presented. A linear predictor algorithm is utilized for detection of the heart tone event, and additional processing derives the heart rate. The linear predictor is adaptively 'trained' in a least mean square error sense on generic fetal heart tones recorded from patients. A real-time monitor system is described which outputs to a strip chart recorder for plotting the time history of the fetal heart rate. The system is validated in the context of the fetal nonstress test. Comparisons are made with ultrasonic nonstress tests on a series of patients. The comparative data provide favorable indications of the feasibility of the acoustic monitor for clinical use.
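The 'trained in a least mean square error sense' step corresponds to a standard LMS adaptive predictor. A sketch (signal values assumed pre-normalised; the order and step size `mu` are illustrative):

```python
def lms_predictor(x, order=4, mu=0.01):
    """Adaptive linear predictor trained by the LMS rule.
    Returns the prediction-error sequence; a persistently large error marks
    an event the predictor has not learned, which is how detection is keyed."""
    w = [0.0] * order
    errors = []
    for n in range(order, len(x)):
        past = x[n - order:n]
        y = sum(wi * xi for wi, xi in zip(w, past))   # one-step prediction
        e = x[n] - y                                  # prediction error
        w = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, past)]  # LMS update
        errors.append(e)
    return errors
```

On a stationary input the error decays as the weights converge, so heart-tone events stand out against the adapted background.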
Signal Restoration of Non-stationary Acoustic Signals in the Time Domain
NASA Technical Reports Server (NTRS)
Babkin, Alexander S.
1988-01-01
Signal restoration is a method of transforming a nonstationary signal acquired by a ground-based microphone into an equivalent stationary signal. The benefit of signal restoration is a simplification of flight test requirements, because it can dispense with the need to acquire acoustic data with another aircraft flying in concert with the rotorcraft. Data quality is also generally improved because contamination of the signal by propeller and wind noise is not present. The restoration methodology can also be combined with other data acquisition methods, such as a multiple linear microphone array, for further improvement of the test results. The methodology and software are presented for performing signal restoration in the time domain. The method has no restrictions on flight path geometry or flight regimes. The only requirement is that the aircraft's spatial position be known relative to the microphone location and synchronized with the acoustic data. The restoration process assumes that the moving source radiates a stationary signal, which is then transformed into a nonstationary signal by various modulation processes. The restoration compensates only for the modulation due to the source motion.
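The core of the restoration is a change of time base: each received sample is assigned its retarded (emission) time, and the signal is resampled uniformly in that variable so the source-motion modulation drops out. A sketch under the stated assumption that the source-microphone distance is known at every reception time (function names are hypothetical):

```python
def emission_times(reception_times, distances, c=340.0):
    """Retarded times t_emit = t_rec - r(t_rec)/c, with c the speed of
    sound in m/s and r the source-microphone distance in metres."""
    return [t - r / c for t, r in zip(reception_times, distances)]

def resample_linear(times, values, new_times):
    """Linearly interpolate (times, values) at new_times; times ascending."""
    out = []
    j = 0
    for t in new_times:
        while j + 2 < len(times) and times[j + 1] < t:
            j += 1
        t0, t1 = times[j], times[j + 1]
        v0, v1 = values[j], values[j + 1]
        out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
    return out
```

Resampling the received signal at a uniform grid of emission times yields the equivalent stationary signal the text describes.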
DOT National Transportation Integrated Search
1969-04-01
Male subjects were tested after extensive training as two five-man 'crews' in an experiment designed to examine the effects of signal rate on the performance of a task involving the monitoring of a dynamic process. Performance was measured using thre...
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Miko, Joseph; Bradley, Damon; Heinzen, Katherine
2008-01-01
NASA Hubble Space Telescope (HST) and upcoming cosmology science missions carry instruments with multiple focal planes populated with many large sensor detector arrays. These sensors are passively cooled to low temperatures for low-level light (L3) and near-infrared (NIR) signal detection, and the sensor readout electronics circuitry must perform at extremely low noise levels to enable new required science measurements. Because we are at the technological edge of enhanced performance for sensors and readout electronics circuitry, as determined by thermal noise level at given temperature in analog domain, we must find new ways of further compensating for the noise in the signal digital domain. To facilitate this new approach, state-of-the-art sensors are augmented at their array hardware boundaries by non-illuminated reference pixels, which can be used to reduce noise attributed to sensors. There are a few proposed methodologies of processing in the digital domain the information carried by reference pixels, as employed by the Hubble Space Telescope and the James Webb Space Telescope Projects. These methods involve using spatial and temporal statistical parameters derived from boundary reference pixel information to enhance the active (non-reference) pixel signals. To make a step beyond this heritage methodology, we apply the NASA-developed technology known as the Hilbert-Huang Transform Data Processing System (HHT-DPS) for reference pixel information processing and its utilization in reconfigurable hardware on-board a spaceflight instrument or post-processing on the ground. The methodology examines signal processing for a 2-D domain, in which high-variance components of the thermal noise are carried by both active and reference pixels, similar to that in processing of low-voltage differential signals and subtraction of a single analog reference pixel from all active pixels on the sensor.
Heritage methods using the aforementioned statistical parameters in the digital domain (such as statistical averaging of the reference pixels themselves) zero out the high-variance components, and the counterpart components in the active pixels remain uncorrected. This paper describes how the new methodology was demonstrated through analysis of fast-varying noise components using the Hilbert-Huang Transform Data Processing System (HHT-DPS) tool developed at NASA and the high-level programming language MATLAB (trademark of MathWorks Inc.), as well as alternative methods for correcting the high-variance noise component, using HgCdTe sensor data. NASA Hubble Space Telescope data post-processing, as well as on-board instrument data processing from all sensor channels in future deep-space cosmology projects, would benefit from this effort.
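The contrast drawn above — per-sample reference correction versus frame-averaged statistics — can be illustrated with a toy row-wise correction, assuming each readout row ends in `n_ref` non-illuminated reference pixels (the layout is an assumption; real sensors place reference pixels at specific array boundaries):

```python
def correct_with_reference(rows, n_ref):
    """Subtract the mean of each row's trailing reference pixels from that
    row's active pixels. Per-row correction tracks fast-varying noise that
    frame-averaged reference statistics would zero out and leave uncorrected
    in the active pixels."""
    corrected = []
    for row in rows:
        ref_mean = sum(row[-n_ref:]) / n_ref
        corrected.append([p - ref_mean for p in row[:-n_ref]])
    return corrected
```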
Brady, S L; Kaufman, R A
2012-06-01
The use of metal-oxide-semiconductor field-effect transistor (MOSFET) detectors for patient dosimetry has increased by ~25% since 2005. Despite this increase, no standard calibration methodology has been identified, nor has calibration uncertainty been quantified, for the use of MOSFET dosimetry in CT. This work compares three MOSFET calibration methodologies proposed in the literature, and additionally investigates questions relating to optimal time for signal equilibration and exposure levels for maximum calibration precision. The calibration methodologies tested were (1) free in-air (FIA) with a radiographic x-ray tube, (2) FIA with a stationary CT x-ray tube, and (3) within a scatter phantom with a rotational CT x-ray tube. Each calibration was performed at absorbed dose levels of 10, 23, and 35 mGy. Times of 0 min or 5 min were investigated for signal equilibration before or after signal read out. Calibration precision was measured to be better than 5%-7%, 3%-5%, and 2%-4% for the 10, 23, and 35 mGy dose levels, respectively, and independent of calibration methodology. No correlation was demonstrated between precision and signal equilibration time when allowing 5 min before or after signal read out. Differences in average calibration coefficients were demonstrated between the FIA-with-CT calibration methodology (26.7 ± 1.1 mV cGy(-1)) and both the CT scatter phantom (29.2 ± 1.0 mV cGy(-1)) and FIA-with-x-ray (29.9 ± 1.1 mV cGy(-1)) methodologies. A decrease in MOSFET sensitivity was seen at an average change in read-out voltage of ~3000 mV. The best measured calibration precision was obtained by exposing the MOSFET detectors to 23 mGy. No signal equilibration time is necessary to improve calibration precision. A significant difference between calibration outcomes was demonstrated for FIA with CT compared to the other two methodologies.
If the FIA-with-CT calibration methodology were used to create calibration coefficients for eventual use in phantom dosimetry, a measurement error of ~12% would be reflected in the dosimetry results. The calibration process must emulate the eventual CT dosimetry process by matching or excluding scatter when calibrating the MOSFETs. Finally, the authors recommend that the MOSFETs be energy calibrated approximately every 2500-3000 mV. © 2012 American Association of Physicists in Medicine.
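The calibration arithmetic itself is simple: a coefficient in mV per unit dose from paired exposures, then dose recovered from a read-out voltage shift. A sketch with hypothetical helper names (units follow the abstract: mV and cGy):

```python
def calibration_coefficient(voltages_mv, doses_cgy):
    """Mean mV-per-cGy calibration coefficient from paired
    (voltage shift, delivered dose) readings."""
    coeffs = [v / d for v, d in zip(voltages_mv, doses_cgy)]
    return sum(coeffs) / len(coeffs)

def dose_from_voltage(delta_mv, coeff):
    """Absorbed dose in cGy from a MOSFET read-out voltage shift."""
    return delta_mv / coeff
```

The ~12% error the text warns about arises when `coeff` comes from one scatter geometry (FIA with CT) but `dose_from_voltage` is applied in another (phantom).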
Noise-assisted data processing with empirical mode decomposition in biomedical signals.
Karagiannis, Alexandros; Constantinou, Philip
2011-01-01
In this paper, a methodology is described for investigating the performance of empirical mode decomposition (EMD) in biomedical signals, especially in the case of the electrocardiogram (ECG). Synthetic ECG signals corrupted with white Gaussian noise are employed, and time series of various lengths are processed with EMD in order to extract the intrinsic mode functions (IMFs). A statistical significance test is implemented for the identification of IMFs with high-level noise components and their exclusion from denoising procedures. Simulation campaign results reveal that a decrease in processing time is accomplished with the introduction of a preprocessing stage prior to the application of EMD to biomedical time series. Furthermore, the variation in the number of IMFs according to the type of preprocessing stage is studied as a function of SNR and time-series length. The application of the methodology to MIT-BIH ECG records is also presented in order to verify the findings in real ECG signals.
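One way to approximate the IMF significance test is to compare each IMF's energy with the roughly halving energy-per-index behaviour reported for EMD of white noise; the factor-of-`tol` acceptance band below is an assumption standing in for the paper's statistical test:

```python
def noisy_imfs(imf_energies, tol=2.0):
    """Flag IMFs whose energy is consistent with a white-noise model in
    which energy approximately halves with each IMF index. An IMF within
    a factor `tol` of the model is treated as noise and excluded from
    reconstruction; an IMF far outside it is kept as signal."""
    e1 = imf_energies[0]
    flags = []
    for k, e in enumerate(imf_energies):
        model = e1 / (2.0 ** k)
        flags.append(model / tol <= e <= model * tol)
    return flags
```

An IMF whose energy greatly exceeds the noise model (here the fourth one) is the kind retained for denoised reconstruction.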
A methodology for cloud masking uncalibrated lidar signals
NASA Astrophysics Data System (ADS)
Binietoglou, Ioannis; D'Amico, Giuseppe; Baars, Holger; Belegante, Livio; Marinou, Eleni
2018-04-01
Most lidar processing algorithms, such as those included in EARLINET's Single Calculus Chain, can be applied only to cloud-free atmospheric scenes. In this paper, we present a methodology for masking clouds in uncalibrated lidar signals. First, we construct a reference dataset based on manual inspection and then train a classifier to separate clouds from cloud-free regions. Here we present details of this approach together with example cloud masks from an EARLINET station.
Floating-to-Fixed-Point Conversion for Digital Signal Processors
NASA Astrophysics Data System (ADS)
Menard, Daniel; Chillet, Daniel; Sentieys, Olivier
2006-12-01
Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
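The floating-to-fixed conversion trade-off can be made concrete with a Q-format quantiser and the accuracy metric that drives the format choice. The `word_bits`/`frac_bits` parameters and the saturation rule are illustrative, not the paper's methodology:

```python
import math

def to_fixed(values, frac_bits, word_bits=16):
    """Quantise floats to signed Q-format integers with saturation."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return [max(lo, min(hi, round(v * scale))) for v in values]

def quantisation_snr_db(values, frac_bits, word_bits=16):
    """Signal-to-quantisation-noise ratio: the accuracy metric that a
    format search would evaluate for each candidate frac_bits."""
    scale = 1 << frac_bits
    q = [f / scale for f in to_fixed(values, frac_bits, word_bits)]
    ps = sum(v * v for v in values)
    pe = sum((v - w) ** 2 for v, w in zip(values, q)) or 1e-300
    return 10.0 * math.log10(ps / pe)
```

A format search would sweep `frac_bits`, keep candidates meeting the accuracy constraint, and among those pick the one minimising execution time on the target DSP.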
Temporal Code-Driven Stimulation: Definition and Application to Electric Fish Signaling
Lareo, Angel; Forlim, Caroline G.; Pinto, Reynaldo D.; Varona, Pablo; Rodriguez, Francisco de Borja
2016-01-01
Closed-loop activity-dependent stimulation is a powerful methodology to assess information processing in biological systems. In this context, the development of novel protocols, their implementation in bioinformatics toolboxes and their application to different description levels open up a wide range of possibilities in the study of biological systems. We developed a methodology for studying biological signals by representing them as temporal sequences of binary events. A specific sequence of these events (code) is chosen to deliver a predefined stimulation in a closed-loop manner. The response to this code-driven stimulation can be used to characterize the system. This methodology was implemented in a real-time toolbox and tested in the context of electric fish signaling. We show that while there are codes that evoke a response that cannot be distinguished from a control recording without stimulation, other codes evoke a characteristic distinct response. We also compare the code-driven response to open-loop stimulation. The discussed experiments validate the proposed methodology and the software toolbox. PMID:27766078
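The event-sequence idea can be sketched directly: binarise the signal into events and deliver a stimulus whenever a chosen code appears in the stream. Threshold crossing is one simple event definition and is an assumption here, as are the function names:

```python
def binarize(signal, threshold):
    """Represent a signal as a binary event sequence
    (event = sample above threshold)."""
    return [1 if s > threshold else 0 for s in signal]

def stimulate_on_code(events, code):
    """Return the indices (positions just after a match) at which the target
    code occurs and a closed-loop stimulus would be delivered."""
    n = len(code)
    return [i + n for i in range(len(events) - n + 1)
            if events[i:i + n] == code]
```

Comparing responses at these trigger points against control recordings (and against open-loop stimulation) is what characterises the system's code sensitivity.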
System For Surveillance Of Spectral Signals
Gross, Kenneth C.; Wegerich, Stephan W.; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.
2004-10-12
A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation is carried out for the system signal and the reference signal, and a frequency domain difference function is established. The process is then repeated until a full range of data is accumulated over the time domain, and a Sequential Probability Ratio Test ("SPRT") methodology is applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.
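The SPRT stage accumulates a log-likelihood ratio over incoming samples and decides as soon as it crosses a bound. A sketch for the Gaussian mean-shift case (the patent applies the test to frequency-domain difference functions; the Gaussian model here is an assumption):

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald's Sequential Probability Ratio Test between Gaussian means
    mu0 (normal) and mu1 (degraded), with error rates alpha and beta.
    Returns ('H0'|'H1', n) at the first decision, else ('undecided', n)."""
    a = math.log(beta / (1.0 - alpha))   # accept-H0 bound
    b = math.log((1.0 - beta) / alpha)   # accept-H1 bound
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # per-sample log-likelihood ratio for a Gaussian mean shift
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= b:
            return 'H1', n
        if llr <= a:
            return 'H0', n
    return 'undecided', len(samples)
```

Running such a test on each frequency-domain difference function, bin by bin over time, is what builds up the three-dimensional surface plot the abstract describes.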
System For Surveillance Of Spectral Signals
Gross, Kenneth C.; Wegerich, Stephan; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.
2003-04-22
A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation carried out for the system signal and reference signal, a frequency domain difference function established. The process is then repeated until a full range of data is accumulated over the time domain and a Sequential Probability Ratio Test methodology applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.
System for surveillance of spectral signals
Gross, Kenneth C.; Wegerich, Stephan W.; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.
2006-02-14
A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal; a frequency domain transformation is carried out for the system signal and the reference signal; and a frequency domain difference function is established. The process is then repeated until a full range of data is accumulated over the time domain, and a Sequential Probability Ratio Test ("SPRT") methodology is applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.
System for surveillance of spectral signals
Gross, Kenneth C.; Wegerich, Stephan W.; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.
2001-01-01
A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal; a frequency domain transformation is carried out for the system signal and the reference signal; and a frequency domain difference function is established. The process is then repeated until a full range of data is accumulated over the time domain, and a Sequential Probability Ratio Test (SPRT) methodology is applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.
On the Use of EEG or MEG Brain Imaging Tools in Neuromarketing Research
Vecchiato, Giovanni; Astolfi, Laura; De Vico Fallani, Fabrizio; Toppi, Jlenia; Aloise, Fabio; Bez, Francesco; Wei, Daming; Kong, Wanzeng; Dai, Jounging; Cincotti, Febo; Mattia, Donatella; Babiloni, Fabio
2011-01-01
Here we present an overview of published papers of interest for marketing research employing electroencephalogram (EEG) and magnetoencephalogram (MEG) methods. The interest in these methodologies lies in their high temporal resolution, as opposed to the functional Magnetic Resonance Imaging (fMRI) methodology that is also widely used in marketing research. In addition, EEG and MEG technologies have greatly improved their spatial resolution in recent decades with the introduction of advanced signal processing methodologies. By presenting data gathered through MEG and high-resolution EEG, we show the kind of information that can be gathered with these methodologies while participants watch marketing-relevant stimuli. This information relates to the memorization and pleasantness of such stimuli. We noted that temporal and frequency patterns of brain signals can provide descriptors conveying information about the cognitive and emotional processes of subjects observing commercial advertisements. Such information could be unobtainable through the common tools used in standard marketing research. We also show an example of how an EEG methodology could be used to analyze cultural differences in the viewing of video commercials for carbonated beverages in Western and Eastern countries. PMID:21960996
Characterizing Postural Sway during Quiet Stance Based on the Intermittent Control Hypothesis
NASA Astrophysics Data System (ADS)
Nomura, Taishin; Nakamura, Toru; Fukada, Kei; Sakoda, Saburo
2007-07-01
This article illustrates a signal processing methodology for the time series of postural sway and the accompanying electromyograms from the lower limb muscles during quiet stance. It was shown that the proposed methodology is capable of identifying the underlying postural control mechanisms. A preliminary application of the methodology provided evidence supporting the intermittent control hypothesis as an alternative to the conventional stiffness control hypothesis during human quiet upright stance.
Detection and Processing Techniques of FECG Signal for Fetal Monitoring
2009-01-01
The fetal electrocardiogram (FECG) signal contains potentially precise information that could assist clinicians in making more appropriate and timely decisions during labor. The ultimate reason for the interest in FECG signal analysis is in clinical diagnosis and biomedical applications. The extraction and detection of the FECG signal from composite abdominal signals with powerful and advanced methodologies are becoming very important requirements in fetal monitoring. The purpose of this review paper is to illustrate the various methodologies and developed algorithms for FECG signal detection and analysis, to provide efficient and effective ways of understanding the FECG signal and its nature for fetal monitoring. A comparative study has been carried out to show the performance and accuracy of various methods of FECG signal analysis for fetal monitoring. Finally, this paper focuses on some of the hardware implementations using electrical signals for monitoring the fetal heart rate. This paper opens up a path for researchers, physicians, and end users to gain an excellent understanding of the FECG signal and its analysis procedures for fetal heart rate monitoring systems. PMID:19495912
Signal Processing for Determining Water Height in Steam Pipes with Dynamic Surface Conditions
NASA Technical Reports Server (NTRS)
Lih, Shyh-Shiuh; Lee, Hyeong Jae; Bar-Cohen, Yoseph
2015-01-01
An enhanced signal processing method based on the filtered Hilbert envelope of the auto-correlation function of the wave signal has been developed to monitor the height of condensed water through the steel wall of steam pipes with dynamic surface conditions. The developed signal processing algorithm can also be used to estimate the thickness of the pipe to determine the cut-off frequency for the low pass filter frequency of the Hilbert Envelope. Testing and analysis results by using the developed technique for dynamic surface conditions are presented. A multiple array of transducers setup and methodology are proposed for both the pulse-echo and pitch-catch signals to monitor the fluctuation of the water height due to disturbance, water flow, and other anomaly conditions.
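The envelope computation described — the Hilbert envelope of the signal's auto-correlation function — can be sketched in NumPy (a simplified illustration without the low-pass filtering stage; the test signal and sampling rate in the test are assumptions):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the standard frequency-domain (FFT) construction:
    zero the negative frequencies, double the positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def envelope_of_autocorrelation(x):
    """Hilbert envelope of the biased, positive-lag autocorrelation of x.
    For an echo train, peaks in this envelope mark round-trip delays."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    r = np.correlate(x, x, mode='full')[len(x) - 1:]   # lags 0..N-1
    r /= r[0]                                          # normalize: r[0] = 1
    return np.abs(analytic_signal(r))
```

For the biased estimator used here, the envelope of a pure tone's autocorrelation decays roughly triangularly with lag, which is why a peak-picking stage (as in the described method) follows the envelope computation.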
Weld defect identification in friction stir welding using power spectral density
NASA Astrophysics Data System (ADS)
Das, Bipul; Pal, Sukhomay; Bag, Swarup
2018-04-01
Power spectral density estimates are powerful tools for extracting useful information retained in a signal. In the current research work, the classical periodogram and Welch periodogram algorithms are used to estimate the power spectral density of the vertical force and transverse force signals acquired during the friction stir welding process. The estimated spectral densities reveal notable insights into the identification of defects in friction stir welded samples. It was observed that a higher spectral density in each process signal is a key indicator of possible internal defects in the welded samples. The developed methodology can offer preliminary information regarding the presence of internal defects in friction stir welded samples and can serve as a first level of safeguard in monitoring the friction stir welding process.
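A Welch periodogram of the kind used here can be sketched directly in NumPy (a generic PSD estimator, not the authors' exact settings; segment length and overlap are assumptions):

```python
import numpy as np

def welch_psd(x, fs, nperseg=256, overlap=0.5):
    """Welch PSD estimate: average the periodograms of Hann-windowed,
    overlapping segments. Returns (frequencies, PSD)."""
    x = np.asarray(x, dtype=float)
    step = int(nperseg * (1 - overlap))
    win = np.hanning(nperseg)
    scale = fs * np.sum(win**2)            # density scaling
    psds = []
    for start in range(0, len(x) - nperseg + 1, step):
        seg = x[start:start + nperseg] * win
        psds.append(np.abs(np.fft.rfft(seg))**2 / scale)
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, np.mean(psds, axis=0)
```

Comparing the spectral density level of force signals from defective and sound welds at selected bands is then a direct array comparison on the returned PSD.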
Techniques of EMG signal analysis: detection, processing, classification and applications
Hussain, M.S.; Mohd-Yasin, F.
2006-01-01
Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis to provide efficient and effective ways of understanding the signal and its nature. We further point out some of the hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human computer interaction. A comparative study is also given to show the performance of various EMG signal analysis methods. This paper provides researchers with a good understanding of the EMG signal and its analysis procedures. This knowledge will help them develop more powerful, flexible, and efficient applications. PMID:16799694
A CWT-based methodology for piston slap experimental characterization
NASA Astrophysics Data System (ADS)
Buzzoni, M.; Mucchi, E.; Dalpiaz, G.
2017-03-01
Noise and vibration control in mechanical systems has become ever more significant for the automotive industry, where the comfort of the passenger compartment represents a challenging issue for car manufacturers. The reduction of piston slap noise is pivotal for a good design of IC engines. In this scenario, a methodology has been developed for the vibro-acoustic assessment of IC diesel engines by means of design changes in piston-to-cylinder-bore clearance. Vibration signals have been analysed by means of advanced signal processing techniques, taking advantage of cyclostationarity theory. The procedure starts from the analysis of the Continuous Wavelet Transform (CWT) in order to identify a frequency band representative of the piston slap phenomenon. This frequency band is then used as the input in the subsequent signal processing analysis, which involves the envelope analysis of the second-order cyclostationary component of the signal. The second-order harmonic component has been used as the benchmark parameter of piston slap noise. An experimental procedure of vibrational benchmarking is proposed and verified at different operational conditions in real IC engines actually installed in cars. This study clearly underlines the crucial role of transducer positioning when differences among real piston-to-cylinder clearances are considered. In particular, the proposed methodology is effective for sensors placed on the outer cylinder wall in all the tested conditions.
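The first step — using the CWT to locate a representative frequency band — can be illustrated with a complex Morlet CWT (a generic sketch; the wavelet parameter w0 and the frequency grid are assumptions, not values from the paper):

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Continuous Wavelet Transform with a complex Morlet wavelet.
    Row i of the returned matrix is the transform at freqs[i]."""
    x = np.asarray(x, dtype=float)
    t = (np.arange(len(x)) - len(x) / 2) / fs      # wavelet time axis
    out = np.empty((len(freqs), len(x)), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                    # scale for frequency f
        psi = np.exp(1j * w0 * t / s) * np.exp(-t**2 / (2 * s**2))
        psi /= np.sqrt(s)                           # equal-energy scaling
        # correlate x with the wavelet (convolve with its reversed conjugate)
        out[i] = np.convolve(x, np.conj(psi[::-1]), mode='same')
    return out
```

Averaging |W|² over time gives an energy-versus-frequency profile whose dominant band can then be passed to the envelope-analysis stage.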
Ultrasonic sensor based defect detection and characterisation of ceramics.
Kesharaju, Manasa; Nagarajah, Romesh; Zhang, Tonzhua; Crouch, Ian
2014-01-01
Ceramic tiles, used in body armour systems, are currently inspected visually offline using an X-ray technique that is both time consuming and very expensive. The aim of this research is to develop a methodology to detect, locate and classify various manufacturing defects in Reaction Sintered Silicon Carbide (RSSC) ceramic tiles using an ultrasonic sensing technique. Defects such as free silicon, un-sintered silicon carbide material and conventional porosity are often difficult to detect using conventional X-radiography. An alternative inspection system was developed to detect defects in ceramic components using an Artificial Neural Network (ANN) based signal processing technique. The inspection methodology proposed focuses on pre-processing of signals, de-noising, wavelet decomposition, feature extraction and post-processing of the signals for classification purposes. This research contributes to developing an on-line inspection system that would be far more cost-effective than present methods and, moreover, would assist manufacturers in locating high-density areas and defects, enabling real-time quality control, including the implementation of accept/reject criteria. Copyright © 2013 Elsevier B.V. All rights reserved.
Vieira, Manuel; Fonseca, Paulo J; Amorim, M Clara P; Teixeira, Carlos J C
2015-12-01
The study of acoustic communication in animals often requires not only the recognition of species-specific acoustic signals but also the identification of individual subjects, all in a complex acoustic background. Moreover, when very long recordings are to be analyzed, automatic recognition and identification processes are invaluable tools to extract the relevant biological information. A pattern recognition methodology based on hidden Markov models is presented, inspired by successful results obtained with the most widely known and complex acoustic communication signal: human speech. This methodology was applied here for the first time to the detection and recognition of fish acoustic signals, specifically in a stream of round-the-clock recordings of Lusitanian toadfish (Halobatrachus didactylus) in their natural estuarine habitat. The results show that this methodology is able not only to detect the mating sounds (boatwhistles) but also to identify individual male toadfish, reaching an identification rate of ca. 95%. Moreover, this method also proved to be a powerful tool to assess signal durations in large data sets. However, the system failed to recognize other sound types.
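The decoding stage of an HMM-based recognizer of this kind rests on the Viterbi algorithm, sketched here for a toy two-state model (hypothetical "quiet"/"calling" states and probabilities; the authors' models for toadfish sounds are far richer):

```python
import numpy as np

def viterbi(obs, log_start, log_trans, log_emit):
    """Most likely hidden-state path for discrete observations.
    All model inputs are log-probabilities; log_emit[state, symbol]."""
    n_states = log_trans.shape[0]
    T = len(obs)
    delta = np.empty((T, n_states))          # best path score ending in state
    psi = np.zeros((T, n_states), dtype=int) # best predecessor
    delta[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans   # (from, to)
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(n_states)] + log_emit[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = np.argmax(delta[-1])
    for t in range(T - 2, -1, -1):           # backtrack
        path[t] = psi[t + 1][path[t + 1]]
    return path
```

In a recognizer, one such model is trained per sound type (or per individual), and the model with the highest decoded likelihood wins.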
Clustering for unsupervised fault diagnosis in nuclear turbine shut-down transients
NASA Astrophysics Data System (ADS)
Baraldi, Piero; Di Maio, Francesco; Rigamonti, Marco; Zio, Enrico; Seraoui, Redouane
2015-06-01
Empirical methods for fault diagnosis usually entail a process of supervised training based on a set of examples of signal evolutions "labeled" with the corresponding, known classes of fault. However, in practice, the signals collected during plant operation may very often be "unlabeled", i.e., the information on the corresponding type of occurred fault is not available. To cope with this practical situation, in this paper we develop a methodology for the identification of transient signals showing similar characteristics, under the conjecture that operational/faulty transient conditions of the same type lead to similar behavior in the evolution of the measured signals. The methodology is founded on a feature extraction procedure, which feeds a spectral clustering technique embedding the unsupervised fuzzy C-means (FCM) algorithm, which evaluates the functional similarity among the different operational/faulty transients. A procedure for validating the plausibility of the obtained clusters is also proposed, based on physical considerations. The methodology is applied to a real industrial case, on the basis of 148 shut-down transients of a Nuclear Power Plant (NPP) steam turbine.
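The fuzzy C-means step embedded in the clustering can be sketched as follows (a standard FCM implementation on synthetic 2-D features; the cluster count, fuzzifier m, and data are assumptions, not the paper's settings):

```python
import numpy as np

def fuzzy_cmeans(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-means clustering.
    Returns (centers, membership matrix U of shape (n_samples, n_clusters))."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)        # rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                # guard against zero distance
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```

Unlike hard k-means, each transient receives a graded membership in every cluster, which is what allows the plausibility of a partition to be inspected afterwards.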
Methodology for fault detection in induction motors via sound and vibration signals
NASA Astrophysics Data System (ADS)
Delgado-Arredondo, Paulo Antonio; Morinigo-Sotelo, Daniel; Osornio-Rios, Roque Alfredo; Avina-Cervantes, Juan Gabriel; Rostro-Gonzalez, Horacio; Romero-Troncoso, Rene de Jesus
2017-01-01
Nowadays, timely maintenance of electric motors is vital to keep up the complex processes of industrial production. There are currently a variety of methodologies for fault diagnosis. Usually, the diagnosis is performed by analyzing current signals at a steady-state motor operation or during a start-up transient. This method is known as motor current signature analysis, which identifies frequencies associated with faults in the frequency domain or by the time-frequency decomposition of the current signals. Fault identification may also be possible by analyzing acoustic sound and vibration signals, which is useful because sometimes this is the only information available. The contribution of this work is a methodology for detecting faults in induction motors in steady-state operation based on the analysis of acoustic sound and vibration signals. The proposed approach uses the Complete Ensemble Empirical Mode Decomposition for decomposing the signal into several intrinsic mode functions (IMFs). Subsequently, the frequency marginal of the Gabor representation is calculated to obtain the spectral content of the IMFs in the frequency domain. This proposal provides good fault detectability compared to other published works, in addition to identifying more frequencies associated with the faults. The faults diagnosed in this work are two broken rotor bars, mechanical unbalance and bearing defects.
Study of interhemispheric asymmetries in electroencephalographic signals by frequency analysis
NASA Astrophysics Data System (ADS)
Zapata, J. F.; Garzón, J.
2011-01-01
This study provides a new method for the detection of interhemispheric asymmetries in patients under continuous video-electroencephalography (EEG) monitoring in the Intensive Care Unit (ICU), using wavelet energy. We recorded EEG signals in 42 patients with different pathologies and then performed signal processing in Matlab, comparing the abnormalities recorded in the neurophysiologist's report, the images of each patient and the result of signal analysis with the Discrete Wavelet Transform (DWT). Conclusions: the abnormalities found in the signal processing correspond with the clinical reports of findings in the patients; accordingly, the methodology used can be a useful tool for diagnosis and early quantitative detection of interhemispheric asymmetries.
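Wavelet energy of the kind used here can be computed with an orthonormal Haar DWT (a minimal stand-in for whichever DWT family was actually used in Matlab; the per-level energy bookkeeping is the point, and comparing these energy vectors between homologous left/right channels would expose an asymmetry):

```python
import numpy as np

def haar_energies(x, levels):
    """Relative wavelet energy per detail level, orthonormal Haar DWT.
    len(x) must be divisible by 2**levels.
    Returns (detail energies for levels 1..levels, approximation energy),
    each as a fraction of the total signal energy."""
    a = np.asarray(x, dtype=float)
    detail_energy = []
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        d = (even - odd) / np.sqrt(2.0)     # detail coefficients
        a = (even + odd) / np.sqrt(2.0)     # approximation coefficients
        detail_energy.append(np.sum(d**2))
    total = np.sum(np.asarray(x, dtype=float)**2)
    return np.array(detail_energy) / total, np.sum(a**2) / total
```

Because the transform is orthonormal, the detail and approximation energies partition the total signal energy exactly, so the relative energies are directly comparable across channels.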
Signal and noise extraction from analog memory elements for neuromorphic computing.
Gong, N; Idé, T; Kim, S; Boybat, I; Sebastian, A; Narayanan, V; Ando, T
2018-05-29
Dense crossbar arrays of non-volatile memory (NVM) can potentially enable massively parallel and highly energy-efficient neuromorphic computing systems. The key requirements for the NVM elements are continuous (analog-like) conductance tuning capability and switching symmetry with acceptable noise levels. However, most NVM devices show non-linear and asymmetric switching behaviors. Such non-linear behaviors render separation of signal and noise extremely difficult with conventional characterization techniques. In this study, we establish a practical methodology based on Gaussian process regression to address this issue. The methodology is agnostic to switching mechanisms and applicable to various NVM devices. We show the tradeoff between switching symmetry and signal-to-noise ratio for HfO2-based resistive random access memory. Then, we characterize 1000 phase-change memory devices based on Ge2Sb2Te5 and separate the total variability into device-to-device variability and inherent randomness from individual devices. These results highlight the usefulness of our methodology to realize ideal NVM devices for neuromorphic computing.
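The Gaussian process regression at the heart of the methodology can be sketched with a plain RBF-kernel posterior mean, which plays the role of the smooth "signal" estimate from which the residual is read as noise (the synthetic conductance trend and hyperparameters below are assumptions, not the paper's data):

```python
import numpy as np

def gpr_posterior_mean(t, y, noise_var, length=1.0, amp=1.0):
    """Posterior mean of a GP with an RBF kernel: the smooth component of
    noisy observations y(t), given i.i.d. Gaussian noise of variance
    noise_var."""
    K = amp * np.exp(-0.5 * (t[:, None] - t[None, :])**2 / length**2)
    alpha = np.linalg.solve(K + noise_var * np.eye(len(t)), y)
    return K @ alpha

rng = np.random.default_rng(0)
t = np.linspace(0, 6, 80)
true = np.sin(t)                        # hypothetical conductance trend
y = true + rng.normal(0, 0.2, t.size)   # noisy device readout
smooth = gpr_posterior_mean(t, y, noise_var=0.04, length=1.0)
residual = y - smooth                   # the extracted "noise" component
```

The residual's statistics (e.g., its variance per device) then quantify inherent randomness separately from the underlying switching trajectory.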
An Overview Of Wideband Signal Analysis Techniques
NASA Astrophysics Data System (ADS)
Speiser, Jeffrey M.; Whitehouse, Harper J.
1989-11-01
This paper provides a unifying perspective for several narrowband and wideband signal processing techniques. It considers narrowband ambiguity functions and Wigner-Ville distributions, together with the wideband ambiguity function and several proposed approaches to a wideband version of the Wigner-Ville distribution (WVD). A unifying perspective is provided by the methodology of unitary representations and ray representations of transformation groups.
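A discrete Wigner-Ville distribution of the narrowband kind surveyed here can be sketched as follows (textbook construction for an analytic signal; note that the doubled-lag kernel halves the effective frequency axis, so bin k maps to frequency k·fs/(2N)):

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal x.
    Returns an (N, N) array: rows are time instants, columns are frequency
    bins on the halved axis f = k * fs / (2N)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        L = min(n, N - 1 - n)                    # largest usable lag at n
        tau = np.arange(-L, L + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[tau % N] = x[n + tau] * np.conj(x[n - tau])
        W[n] = np.real(np.fft.fft(kernel))       # FFT over the lag variable
    return W
```

For a complex exponential at normalized frequency f0 cycles per record, the energy concentrates at bin 2·f0, consistent with the halved frequency axis.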
An adaptive signal-processing approach to online adaptive tutoring.
Bergeron, Bryan; Cline, Andrew
2011-01-01
Conventional intelligent or adaptive tutoring online systems rely on domain-specific models of learner behavior based on rules, deep domain knowledge, and other resource-intensive methods. We have developed and studied a domain-independent methodology of adaptive tutoring based on domain-independent signal-processing approaches that obviate the need for the construction of explicit expert and student models. A key advantage of our method over conventional approaches is a lower barrier to entry for educators who want to develop adaptive online learning materials.
DOT National Transportation Integrated Search
1969-08-01
This study concerned the rate of presentation of stimuli on a task involving the monitoring of a static process of the kind represented by aircraft warning light indicators. The task was performed concurrently with various combinations of tasks requi...
Warren, Megan R; Sangiamo, Daniel T; Neunuebel, Joshua P
2018-03-01
An integral component in the assessment of vocal behavior in groups of freely interacting animals is the ability to determine which animal is producing each vocal signal. This process is facilitated by using microphone arrays with multiple channels. Here, we made important refinements to a state-of-the-art microphone array based system used to localize vocal signals produced by freely interacting laboratory mice. Key changes to the system included increasing the number of microphones as well as refining the methodology for localizing and assigning vocal signals to individual mice. We systematically demonstrate that the improvements in the methodology for localizing mouse vocal signals led to an increase in the number of signals detected as well as the number of signals accurately assigned to an animal. These changes facilitated the acquisition of larger and more comprehensive data sets that better represent the vocal activity within an experiment. Furthermore, this system will allow more thorough analyses of the role that vocal signals play in social communication. We expect that such advances will broaden our understanding of social communication deficits in mouse models of neurological disorders. Copyright © 2018 Elsevier B.V. All rights reserved.
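The localization step in such microphone-array systems typically starts from time differences of arrival (TDOA) estimated by cross-correlation, as in this sketch (synthetic broadband call with a known sample delay; the full system described assigns sources from many such pairwise estimates across the array):

```python
import numpy as np

def tdoa_samples(sig_a, sig_b):
    """Time difference of arrival, in samples, of sig_b relative to sig_a,
    taken from the peak of their full cross-correlation."""
    xc = np.correlate(sig_b - sig_b.mean(), sig_a - sig_a.mean(), mode='full')
    return np.argmax(xc) - (len(sig_a) - 1)

rng = np.random.default_rng(0)
call = rng.normal(size=500)                  # broadband vocal signal
delay = 7                                    # true propagation delay (samples)
mic_a = np.concatenate([call, np.zeros(delay)])
mic_b = np.concatenate([np.zeros(delay), call])

est = tdoa_samples(mic_a, mic_b)             # recovers the 7-sample delay
```

With the microphone geometry known, each pairwise delay constrains the source to a hyperbola, and intersecting the constraints from several pairs yields the emitting animal's position.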
Schwaibold, M; Schöchlin, J; Bolz, A
2002-01-01
For classification tasks in biosignal processing, several strategies and algorithms can be used. Knowledge-based systems allow prior knowledge about the decision process to be integrated, both by the developer and by self-learning capabilities. For the classification stages in a sleep stage detection framework, three inference strategies were compared regarding their specific strengths: a classical signal processing approach, artificial neural networks and neuro-fuzzy systems. Methodological aspects were assessed to attain optimum performance and maximum transparency for the user. Due to their effective and robust learning behavior, artificial neural networks could be recommended for pattern recognition, while neuro-fuzzy systems performed best for the processing of contextual information.
Brain-computer interface analysis of a dynamic visuo-motor task.
Logar, Vito; Belič, Aleš
2011-01-01
The area of brain-computer interfaces (BCIs) represents one of the more interesting fields in neurophysiological research, since it investigates the development of machines that transform the brain's "thoughts" into certain pre-defined actions. Experimental studies have reported some successful implementations of BCIs; however, much of the field still remains unexplored. According to some recent reports, the phase coding of informational content is an important mechanism in the brain's function and cognition, and has the potential to explain various mechanisms of the brain's data transfer, but it has yet to be scrutinized in the context of brain-computer interfaces. Therefore, if the mechanism of phase coding is plausible, one should be able to extract the phase-coded content, carried by brain signals, using appropriate signal-processing methods. In our previous studies we have shown that by using a phase-demodulation-based signal-processing approach it is possible to decode some relevant information on the current motor action in the brain from electroencephalographic (EEG) data. In this paper the authors would like to present a continuation of their previous work on the brain-information-decoding analysis of visuo-motor (VM) tasks. The present study shows that EEG data measured during more complex, dynamic visuo-motor (dVM) tasks carries enough information about the currently performed motor action to be successfully extracted by using the appropriate signal-processing and identification methods. The aim of this paper is therefore to present a mathematical model which, using the EEG measurements as its inputs, predicts the course of the wrist movements applied by each subject during the task in simulated or real time (BCI analysis). However, several modifications to the existing methodology are needed to achieve optimal decoding results and a real-time, data-processing ability.
The information extracted from the EEG could, therefore, be further used for the development of a closed-loop, non-invasive, brain-computer interface. For the case of this study two types of measurements were performed, i.e., the electroencephalographic (EEG) signals and the wrist movements were measured simultaneously, during the subject's performance of a dynamic visuo-motor task. Wrist-movement predictions were computed by using the EEG data-processing methodology of double brain-rhythm filtering, double phase demodulation and double principal component analyses (PCA), each with a separate set of parameters. For the movement-prediction model a fuzzy inference system was used. The results have shown that the EEG signals measured during the dVM tasks carry enough information about the subjects' wrist movements for them to be successfully decoded using the presented methodology. Reasonably high values of the correlation coefficients suggest that the validation of the proposed approach is satisfactory. Moreover, since the causality of the rhythm filtering and the PCA transformation has been achieved, we have shown that these methods can also be used in a real-time, brain-computer interface. The study revealed that using non-causal, optimized methods yields better prediction results in comparison with the causal, non-optimized methodology; however, taking into account that the causality of these methods allows real-time processing, the minor decrease in prediction quality is acceptable. The study suggests that the methodology that was proposed in our previous studies is also valid for identifying the EEG-coded content during dVM tasks, albeit with various modifications, which allow better prediction results and real-time data processing. The results have shown that wrist movements can be predicted in simulated or real time; however, the results of the non-causal, optimized methodology (simulated) are slightly better. 
Nevertheless, the study has revealed that these methods should be suitable for use in the development of a non-invasive, brain-computer interface. Copyright © 2010 Elsevier B.V. All rights reserved.
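The phase-demodulation core of the described pipeline — band-limit a rhythm, form the analytic signal, take the unwrapped instantaneous phase, and remove the carrier ramp — can be sketched as follows (synthetic carrier and modulator; the band edges and frequencies are assumptions, not the paper's brain-rhythm settings):

```python
import numpy as np

def instantaneous_phase(x, fs, f_lo, f_hi):
    """Band-limit x to [f_lo, f_hi] with an ideal FFT mask, form the
    analytic signal, and return its unwrapped instantaneous phase."""
    n = len(x)
    X = np.fft.fft(x)
    f = np.fft.fftfreq(n, d=1.0 / fs)
    mask = (f >= f_lo) & (f <= f_hi)          # keep the positive band only
    analytic = np.fft.ifft(np.where(mask, 2 * X, 0))
    return np.unwrap(np.angle(analytic))

fs, fc = 250.0, 10.0                          # sampling and carrier freq (Hz)
t = np.arange(0, 8, 1 / fs)
modulator = 0.8 * np.sin(2 * np.pi * 1.0 * t) # slow phase-coded content
x = np.cos(2 * np.pi * fc * t + modulator)    # phase-modulated rhythm

phase = instantaneous_phase(x, fs, 6.0, 14.0)
recovered = phase - 2 * np.pi * fc * t        # strip the carrier ramp
recovered -= recovered.mean()                 # remove the constant offset
```

The recovered slow phase signal is the kind of feature that, after dimensionality reduction, feeds the movement-prediction model.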
Imaging dynamic redox processes with genetically encoded probes.
Ezeriņa, Daria; Morgan, Bruce; Dick, Tobias P
2014-08-01
Redox signalling plays an important role in many aspects of physiology, including that of the cardiovascular system. Perturbed redox regulation has been associated with numerous pathological conditions; nevertheless, the causal relationships between redox changes and pathology often remain unclear. Redox signalling involves the production of specific redox species at specific times in specific locations. However, until recently, the study of these processes has been impeded by a lack of appropriate tools and methodologies that afford the necessary redox species specificity and spatiotemporal resolution. Recently developed genetically encoded fluorescent redox probes now allow dynamic real-time measurements, of defined redox species, with subcellular compartment resolution, in intact living cells. Here we discuss the available genetically encoded redox probes in terms of their sensitivity and specificity and highlight where uncertainties or controversies currently exist. Furthermore, we outline major goals for future probe development and describe how progress in imaging methodologies will improve our ability to employ genetically encoded redox probes in a wide range of situations. This article is part of a special issue entitled "Redox Signalling in the Cardiovascular System." Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Pototzky, Anthony; Wieseman, Carol; Hoadley, Sherwood Tiffany; Mukhopadhyay, Vivek
1991-01-01
Described here are the development and implementation of an on-line, near-real-time controller performance evaluation (CPE) capability. Briefly discussed are the structure of data flow, the signal processing methods used to process the data, and the software developed to generate the transfer functions. This methodology is generic in nature and can be used in any type of multi-input/multi-output (MIMO) digital controller application, including digital flight control systems, digitally controlled spacecraft structures, and actively controlled wind tunnel models. Results of applying the CPE methodology to evaluate (in near real time) MIMO digital flutter suppression systems being tested on the Rockwell Active Flexible Wing (AFW) wind tunnel model are presented to demonstrate the CPE capability.
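Transfer functions of the kind generated for CPE are commonly estimated with the H1 method, sketched here (a generic averaged cross-spectrum estimator, not NASA's implementation; the segment length and window are assumptions):

```python
import numpy as np

def h1_estimate(x, y, nperseg=256):
    """H1 frequency-response estimate H(f) = Sxy(f) / Sxx(f), with the
    cross- and auto-spectra averaged over Hann-windowed, half-overlapping
    segments of the input x and output y records."""
    win = np.hanning(nperseg)
    step = nperseg // 2
    Sxx = np.zeros(nperseg // 2 + 1)
    Sxy = np.zeros(nperseg // 2 + 1, dtype=complex)
    for start in range(0, len(x) - nperseg + 1, step):
        X = np.fft.rfft(x[start:start + nperseg] * win)
        Y = np.fft.rfft(y[start:start + nperseg] * win)
        Sxx += np.abs(X)**2
        Sxy += np.conj(X) * Y
    return Sxy / Sxx
```

For a MIMO controller the same averaging runs per input/output pair, and the resulting frequency responses feed the stability-margin evaluation.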
Understanding the impact of TV commercials: electrical neuroimaging.
Vecchiato, Giovanni; Kong, Wanzeng; Maglione, Anton Giulio; Wei, Daming
2012-01-01
Today, there is a greater interest in the marketing world in using neuroimaging tools to evaluate the efficacy of TV commercials. This field of research is known as neuromarketing. In this article, we illustrate some applications of electrical neuroimaging, a discipline that uses electroencephalography (EEG) and intensive signal processing techniques for the evaluation of marketing stimuli. We also show how the proper usage of these methodologies can provide information related to memorization and attention while people are watching marketing-relevant stimuli. We note that temporal and frequency patterns of EEG signals are able to provide possible descriptors that convey information about the cognitive process in subjects observing commercial advertisements (ads). Such information could be unobtainable through common tools used in standard marketing research. Evidence of this research shows how EEG methodologies could be employed to better design new products that marketers are going to promote and to analyze the global impact of video commercials already broadcast on TV.
Georgoulas, George; Georgopoulos, Voula C; Stylios, Chrysostomos D
2006-01-01
This paper proposes a novel integrated methodology to extract features and classify speech sounds with the intent of detecting the possible existence of a speech articulation disorder in a speaker. Articulation, in effect, is the specific and characteristic way in which an individual produces speech sounds. A methodology to process the speech signal, extract features and finally classify the signal to detect articulation problems in a speaker is presented. The use of support vector machines (SVMs) for the classification of speech sounds and detection of articulation disorders is introduced. The proposed method is implemented on a data set where different sets of features and different schemes of SVMs are tested, leading to satisfactory performance.
Myoelectric signal processing for control of powered limb prostheses.
Parker, P; Englehart, K; Hudgins, B
2006-12-01
Progress in myoelectric control technology has over the years been incremental, due in part to the alternating focus of the R&D between control methodology and device hardware. The technology has over the past 50 years or so moved from single muscle control of a single prosthesis function to muscle group activity control of multifunction prostheses. Central to these changes have been developments in the means of extracting information from the myoelectric signal. This paper gives an overview of the myoelectric signal processing challenge, a brief look at the challenge from an historical perspective, the state-of-the-art in myoelectric signal processing for prosthesis control, and an indication of where this field is heading. The paper demonstrates that considerable progress has been made in providing clients with useful and reliable myoelectric communication channels, and that exciting work and developments are on the horizon.
Feature Mining and Health Assessment for Gearboxes Using Run-Up/Coast-Down Signals
Zhao, Ming; Lin, Jing; Miao, Yonghao; Xu, Xiaoqiang
2016-01-01
Vibration signals measured in the run-up/coast-down (R/C) processes usually carry rich information about the health status of machinery. However, a major challenge in R/C signals analysis lies in how to exploit more diagnostic information, and how this information could be properly integrated to achieve a more reliable maintenance decision. Aiming at this problem, a framework of R/C signals analysis is presented for the health assessment of gearbox. In the proposed methodology, we first investigate the data preprocessing and feature selection issues for R/C signals. Based on that, a sparsity-guided feature enhancement scheme is then proposed to extract the weak phase jitter associated with gear defect. In order for an effective feature mining and integration under R/C, a generalized phase demodulation technique is further established to reveal the evolution of modulation feature with operating speed and rotation angle. The experimental results indicate that the proposed methodology could not only detect the presence of gear damage, but also offer a novel insight into the dynamic behavior of gearbox. PMID:27827831
Raut, Savita V; Yadav, Dinkar M
2018-03-28
This paper presents an fMRI signal analysis methodology using geometric mean curve decomposition (GMCD) and a mutual information-based voxel selection framework. Previously, fMRI signal analysis has been conducted using the empirical mean curve decomposition (EMCD) model and voxel selection on the raw fMRI signal. The former methodology loses frequency content, while the latter suffers from signal redundancy. Both challenges are addressed by our methodology, in which the frequency component is retained by decomposing the raw fMRI signal using the geometric mean rather than the arithmetic mean, and voxels are selected from the EMCD signal using GMCD components rather than the raw fMRI signal. The proposed methodologies are adopted for predicting the neural response. Experiments are conducted on the openly available fMRI data of six subjects, and comparisons are made with existing decomposition models and voxel selection frameworks. Subsequently, the effect of the number of selected voxels and the selection constraints is analyzed. The comparative results and the analysis demonstrate the superiority and reliability of the proposed methodology.
Simultaneous excitation system for efficient guided wave structural health monitoring
NASA Astrophysics Data System (ADS)
Hua, Jiadong; Michaels, Jennifer E.; Chen, Xin; Lin, Jing
2017-10-01
Many structural health monitoring systems utilize guided wave transducer arrays for defect detection and localization. Signals are usually acquired using the "pitch-catch" method, whereby each transducer is excited in turn and the response is received by the remaining transducers. When extensive signal averaging is performed, the data acquisition process can be quite time-consuming, especially for metallic components that require a low repetition rate to allow signals to die out. Such a long data acquisition time is particularly problematic if environmental and operational conditions are changing while data are being acquired. To reduce the total data acquisition time, proposed here is a methodology whereby multiple transmitters are simultaneously triggered, and each transmitter is driven with a unique excitation. The simultaneously transmitted waves are captured by one or more receivers, and their responses are processed by dispersion-compensated filtering to extract the response from each individual transmitter. The excitation sequences are constructed by concatenating a series of chirps whose start and stop frequencies are randomly selected from a specified range. The process is optimized using a Monte-Carlo approach to select sequences with impulse-like autocorrelations and relatively flat cross-correlations. The efficacy of the proposed methodology is evaluated by several metrics and is experimentally demonstrated with sparse array imaging of simulated damage.
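The excitation-design step described above can be sketched numerically: random-frequency linear chirps are concatenated, and candidate sequences are screened by their autocorrelation sidelobe level. This is a minimal numpy sketch under stated assumptions; the function names, segment duration, and frequency range are illustrative, and the paper's dispersion-compensated filtering step is not included.

```python
import numpy as np

def random_chirp_sequence(n_chirps, seg_dur, fs, f_range, rng):
    """Concatenate linear chirps whose start/stop frequencies are
    drawn at random from f_range (hypothetical helper)."""
    t = np.arange(int(seg_dur * fs)) / fs
    segs = []
    for _ in range(n_chirps):
        f0, f1 = rng.uniform(*f_range, size=2)
        # linear chirp: instantaneous frequency sweeps f0 -> f1 over seg_dur
        phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t ** 2 / seg_dur)
        segs.append(np.sin(phase))
    return np.concatenate(segs)

def autocorr_peak_sidelobe(x):
    """Ratio of the largest autocorrelation sidelobe to the zero-lag peak
    (smaller = more impulse-like)."""
    r = np.abs(np.correlate(x, x, mode="full"))
    mid = len(r) // 2
    return np.delete(r, mid).max() / r[mid]

rng = np.random.default_rng(0)
# Monte-Carlo selection: keep the candidate with the most impulse-like autocorrelation
candidates = [random_chirp_sequence(4, 5e-5, 5e6, (100e3, 400e3), rng)
              for _ in range(20)]
best = min(candidates, key=autocorr_peak_sidelobe)
print(round(autocorr_peak_sidelobe(best), 3))
```

A full design would also score cross-correlations between the sequences assigned to different transmitters, as the abstract describes.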
Review of current GPS methodologies for producing accurate time series and their error sources
NASA Astrophysics Data System (ADS)
He, Xiaoxing; Montillet, Jean-Philippe; Fernandes, Rui; Bos, Machiel; Yu, Kegen; Hua, Xianghong; Jiang, Weiping
2017-05-01
The Global Positioning System (GPS) is an important tool to observe and model geodynamic processes such as plate tectonics and post-glacial rebound. In the last three decades, GPS has seen tremendous advances in the precision of the measurements, which allow researchers to study geophysical signals through a careful analysis of daily time series of GPS receiver coordinates. However, the GPS observations contain errors, and the time series can be described as the sum of a real signal and noise. The signal itself can again be divided into station displacements due to geophysical causes and to disturbing factors. Examples of the latter are errors in the realization and stability of the reference frame and corrections due to ionospheric and tropospheric delays and GPS satellite orbit errors. There is an increasing demand for detecting millimeter to sub-millimeter level ground displacement signals in order to further understand regional scale geodetic phenomena, hence requiring further improvements in the sensitivity of the GPS solutions. This paper provides a review spanning over 25 years of advances in processing strategies, error mitigation methods and noise modeling for the processing and analysis of GPS daily position time series. The processing of the observations is described step-by-step, mainly with three different strategies, in order to explain the weaknesses and strengths of the existing methodologies. In particular, we focus on the choice of the stochastic model in the GPS time series, which directly affects the estimation of the functional model including, for example, tectonic rates, seasonal signals and co-seismic offsets. Moreover, the geodetic community continues to develop computational methods to fully automate all phases of GPS time series analysis.
This idea is greatly motivated by the large number of GPS receivers installed around the world for diverse applications ranging from surveying small deformations of civil engineering structures (e.g., subsidence of the highway bridge) to the detection of particular geophysical signals.
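The functional model mentioned above (a tectonic rate plus seasonal harmonics) can be illustrated with an ordinary least-squares fit on synthetic daily positions. The rate and amplitudes below are invented for the sketch; a real analysis would also estimate co-seismic offsets and use a colored-noise stochastic model rather than white noise.

```python
import numpy as np

# Ten years of synthetic daily coordinates (in mm): linear rate of 5 mm/yr
# plus annual and semi-annual terms, with white noise added.
t = np.arange(0, 10, 1 / 365.25)                 # epochs in years
rng = np.random.default_rng(1)
truth = 5.0 * t + 3.0 * np.sin(2 * np.pi * t) + 1.0 * np.cos(4 * np.pi * t)
y = truth + rng.normal(0, 2.0, t.size)

# Design matrix of the functional model:
# intercept, rate, annual and semi-annual harmonics
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                     np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(round(coef[1], 2))   # estimated rate, close to the simulated 5 mm/yr
```

Under white noise the rate uncertainty here is tiny; the review's point is that realistic (flicker/random-walk) noise models inflate that uncertainty substantially.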
Ortega-Martorell, Sandra; Ruiz, Héctor; Vellido, Alfredo; Olier, Iván; Romero, Enrique; Julià-Sapé, Margarida; Martín, José D.; Jarman, Ian H.; Arús, Carles; Lisboa, Paulo J. G.
2013-01-01
Background The clinical investigation of human brain tumors often starts with a non-invasive imaging study, providing information about the tumor extent and location, but little insight into the biochemistry of the analyzed tissue. Magnetic Resonance Spectroscopy can complement imaging by supplying a metabolic fingerprint of the tissue. This study analyzes single-voxel magnetic resonance spectra, which represent signal information in the frequency domain. Given that a single voxel may contain a heterogeneous mix of tissues, signal source identification is a relevant challenge for the problem of tumor type classification from the spectroscopic signal. Methodology/Principal Findings Non-negative matrix factorization techniques have recently shown their potential for the identification of meaningful sources from brain tissue spectroscopy data. In this study, we use a convex variant of these methods that is capable of handling negatively-valued data and generating sources that can be interpreted as tumor class prototypes. A novel approach to convex non-negative matrix factorization is proposed, in which prior knowledge about class information is utilized in model optimization. Class-specific information is integrated into this semi-supervised process by setting the metric of a latent variable space where the matrix factorization is carried out. The reported experimental study comprises 196 cases from different tumor types drawn from two international, multi-center databases. The results indicate that the proposed approach outperforms a purely unsupervised process by achieving near perfect correlation of the extracted sources with the mean spectra of the tumor types. It also improves tissue type classification. 
Conclusions/Significance We show that source extraction by unsupervised matrix factorization benefits from the integration of the available class information, so operating in a semi-supervised learning manner, for discriminative source identification and brain tumor labeling from single-voxel spectroscopy data. We are confident that the proposed methodology has wider applicability for biomedical signal processing. PMID:24376744
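The source-extraction idea underlying this work can be sketched with standard multiplicative-update NMF on synthetic spectra. Note this is the basic non-negative variant only, not the paper's convex, semi-supervised formulation (which additionally handles negatively-valued data and class information); all names and data below are invented for illustration.

```python
import numpy as np

def nmf(V, k, n_iter=500, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: V ~= W @ H,
    with W (mixing) and H (source spectra) kept non-negative."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update sources
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update mixing weights
    return W, H

# Two synthetic "tissue sources" mixed into six observed voxel spectra
rng = np.random.default_rng(2)
S = np.abs(rng.random((2, 50)))      # source spectra (k x channels)
M = np.abs(rng.random((6, 2)))       # per-voxel mixing weights
V = M @ S
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(round(err, 4))                 # small reconstruction error
```

In the paper's semi-supervised setting, class labels additionally shape the latent metric in which this factorization is carried out.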
Review of Vibration-Based Helicopters Health and Usage Monitoring Methods
2001-04-05
FM4, NA4, NA4*, NB4 and NB4* (Polyshchuk et al., 1998). The Wigner-Ville distribution (WVD) is a joint time-frequency signal analysis. The WVD is one... signal processing methodologies that are of relevance to vibration-based damage detection (e.g., Wavelet Transform and Wigner-Ville distribution) will be... operation cost, reduce maintenance flights, and increase flight safety. Key Words: HUMS; Wavelet Transform; Wigner-Ville distribution; O&S; Machinery
Wallace, Nathan D; Ceguerra, Anna V; Breen, Andrew J; Ringer, Simon P
2018-06-01
Atom probe tomography is a powerful microscopy technique capable of reconstructing the 3D position and chemical identity of millions of atoms within engineering materials, at the atomic level. Crystallographic information contained within the data is particularly valuable for the purposes of reconstruction calibration and grain boundary analysis. Typically, analysing this data is a manual, time-consuming and error prone process. In many cases, the crystallographic signal is so weak that it is difficult to detect at all. In this study, a new automated signal processing methodology is demonstrated. We use the affine properties of the detector coordinate space, or the 'detector stack', as the basis for our calculations. The methodological framework and the visualisation tools are shown to be superior to the standard method of crystallographic pole visualisation directly from field evaporation images and there is no requirement for iterations between a full real-space initial tomographic reconstruction and the detector stack. The mapping approaches are demonstrated for aluminium, tungsten, magnesium and molybdenum. Implications for reconstruction calibration, accuracy of crystallographic measurements, reliability and repeatability are discussed. Copyright © 2018 Elsevier B.V. All rights reserved.
Polcari, J.
2013-08-16
The signal processing concept of signal-to-noise ratio (SNR), in its role as a performance measure, is recast within the more general context of information theory, leading to a series of useful insights. Establishing generalized SNR (GSNR) as a rigorous information theoretic measure inherent in any set of observations significantly strengthens its quantitative performance pedigree while simultaneously providing a specific definition under general conditions. This directly leads to consideration of the log likelihood ratio (LLR): first, as the simplest possible information-preserving transformation (i.e., signal processing algorithm) and subsequently, as an absolute, comparable measure of information for any specific observation exemplar. Furthermore, the information accounting methodology that results permits practical use of both GSNR and LLR as diagnostic scalar performance measurements, directly comparable across alternative system/algorithm designs, applicable at any tap point within any processing string, in a form that is also comparable with the inherent performance bounds due to information conservation.
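As a toy illustration of the LLR-as-processing-algorithm idea, consider detecting a known signal in white Gaussian noise. The LLR then reduces to a matched filter, and the output "deflection" SNR equals the signal-energy-to-noise-power ratio. This is a standard textbook result, sketched here under those assumptions, not the paper's GSNR derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 256, 1.0
s = 0.5 * np.sin(2 * np.pi * 8 * np.arange(n) / n)   # known signal template

def llr(x, s, sigma):
    """log p(x | signal present) - log p(x | noise only)
    for a known signal s in white Gaussian noise."""
    return (x @ s - 0.5 * (s @ s)) / sigma ** 2

# Monte-Carlo: LLR statistic under both hypotheses
h1 = np.array([llr(s + rng.normal(0, sigma, n), s, sigma) for _ in range(4000)])
h0 = np.array([llr(rng.normal(0, sigma, n), s, sigma) for _ in range(4000)])

# Deflection SNR of the detector output
gsnr = (h1.mean() - h0.mean()) ** 2 / h0.var()
print(round(gsnr, 1))   # close to s @ s / sigma**2 (signal energy / noise power)
```

Here `s @ s / sigma**2` is 32, so the estimated deflection hovers near that value; any invertible transformation of the LLR would leave this detection-relevant information intact, which is the conservation idea the abstract invokes.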
NASA Astrophysics Data System (ADS)
Larnier, H.; Sailhac, P.; Chambodut, A.
2018-01-01
Atmospheric electromagnetic waves created by global lightning activity contain information about electrical processes of the inner and the outer Earth. Large signal-to-noise ratio events are particularly interesting because they convey information about electromagnetic properties along their path. We introduce a new methodology to automatically detect and characterize lightning-based waves using a time-frequency decomposition obtained through the application of the continuous wavelet transform. We focus specifically on three types of sources, namely atmospherics, slow tails and whistlers, which cover the frequency range 10 Hz to 10 kHz. Each wave type has distinguishable characteristics in the time-frequency domain due to source shape and dispersion processes. Our methodology allows automatic detection of each type of event in the time-frequency decomposition thanks to its specific signature. Horizontal polarization attributes are also recovered in the time-frequency domain. This procedure is first applied to synthetic extremely low frequency time series with different signal-to-noise ratios to test for robustness. We then apply it to real data: three stations of audio-magnetotelluric data acquired in Guadeloupe, in the overseas French territories. Most of the analysed atmospherics and slow tails display linear polarization, whereas the analysed whistlers are elliptically polarized. The diversity of lightning activity is finally analysed in an audio-magnetotelluric data processing framework, as used in subsurface prospecting, through estimation of the impedance response functions. We show that audio-magnetotelluric processing results depend mainly on the frequency content of the electromagnetic waves observed in the processed time series, with an emphasis on the difference between morning and afternoon acquisition.
Our new methodology, based on the time-frequency signature of lightning-induced electromagnetic waves, allows automatic detection and characterization of events in audio-magnetotelluric time series, providing the means to assess the quality of response functions obtained through processing.
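The time-frequency decomposition at the heart of this methodology can be sketched with a numpy-only continuous wavelet transform using a complex Morlet wavelet. The center frequency `w0`, the frequency grid, and the 50 Hz test tone are illustrative assumptions; the paper's event picking and polarization analysis are built on top of such a scalogram.

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Continuous wavelet transform via FFT with a complex Morlet wavelet.
    Returns a (len(freqs), len(x)) complex scalogram."""
    n = len(x)
    X = np.fft.fft(x)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1 / fs)
    W = np.empty((len(freqs), n), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                       # scale for frequency f
        # analytic Morlet in the frequency domain (positive frequencies only)
        psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (s * omega - w0) ** 2) * (omega > 0)
        W[i] = np.fft.ifft(X * np.conj(psi_hat)) * np.sqrt(s)
    return W

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)          # a 50 Hz "event"
freqs = np.linspace(10, 100, 46)
W = morlet_cwt(x, fs, freqs)
power = np.abs(W).mean(axis=1)
print(freqs[power.argmax()])            # ridge sits near 50 Hz
```

Dispersion of a real sferic or whistler would appear as a frequency-dependent delay of this ridge, which is the signature the detection step exploits.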
Cortical Signal Analysis and Advances in Functional Near-Infrared Spectroscopy Signal: A Review.
Kamran, Muhammad A; Mannan, Malik M Naeem; Jeong, Myung Yung
2016-01-01
Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging modality that measures the concentration changes of oxy-hemoglobin (HbO) and de-oxy hemoglobin (HbR) at the same time. It is an emerging cortical imaging modality with a good temporal resolution that is acceptable for brain-computer interface applications. Researchers have developed several methods in the last two decades to extract the neuronal activation-related waveform from the observed fNIRS time series, but there is still no standard method for the analysis of fNIRS data. This article presents a brief review of existing methodologies to model and analyze the activation signal. The purpose of this review article is to give a general overview of the variety of existing methodologies to extract useful information from measured fNIRS data, including pre-processing steps, effects of the differential path length factor (DPF), variations and attributes of the hemodynamic response function (HRF), extraction of the evoked response, removal of physiological, instrumentation, and environmental noises, and resting/activation state functional connectivity. Finally, the challenges in the analysis of the fNIRS signal are summarized.
PMID:27375458
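The hemodynamic response function (HRF) discussed in this review is often modeled as a difference of two gamma densities. The parameter values below are the commonly used defaults (stated here as an assumption, not taken from this article); the resulting curve peaks near 5 s with a late undershoot.

```python
import numpy as np
from math import gamma

def canonical_hrf(t, a1=6.0, a2=16.0, b1=1.0, b2=1.0, c=1 / 6):
    """Double-gamma HRF: a peak gamma density minus a scaled
    undershoot gamma density (widely used default parameters)."""
    return (b1 ** a1 * t ** (a1 - 1) * np.exp(-b1 * t) / gamma(a1)
            - c * b2 ** a2 * t ** (a2 - 1) * np.exp(-b2 * t) / gamma(a2))

t = np.arange(0.0, 30.0, 0.1)   # seconds
h = canonical_hrf(t)
print(round(t[h.argmax()], 1))  # peak latency, roughly 5 s
```

In an fNIRS analysis this kernel would be convolved with the stimulus timing to build the regressor for the evoked response, before the physiological-noise removal steps the review surveys.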
A Generalizable Methodology for Quantifying User Satisfaction
NASA Astrophysics Data System (ADS)
Huang, Te-Yuan; Chen, Kuan-Ta; Huang, Polly; Lei, Chin-Laung
Quantifying user satisfaction is essential, because the results can help service providers deliver better services. In this work, we propose a generalizable methodology, based on survival analysis, to quantify user satisfaction in terms of session times, i.e., the length of time users stay with an application. Unlike subjective human surveys, our methodology is based solely on passive measurement, which is more cost-efficient and better able to capture subconscious reactions. Furthermore, by using session times rather than a specific performance indicator, such as the level of distortion of voice signals, the effects of other factors, like loudness and sidetone, can also be captured by the developed models. Like survival analysis, our methodology is characterized by low complexity and a simple model development process. The feasibility of our methodology is demonstrated through case studies of ShenZhou Online, a commercial MMORPG in Taiwan, and the most prevalent VoIP application in the world, namely Skype. Through the model development process, we can also identify the most significant performance factors and their impacts on user satisfaction and discuss how they can be exploited to improve user experience and optimize resource allocation.
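The survival-analysis backbone of this methodology can be sketched with a small Kaplan-Meier estimator over session times. The durations below are invented; censoring (`observed = 0`) marks sessions still in progress when measurement ended, which is exactly the situation passive session-time measurement produces.

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Kaplan-Meier survival estimate for right-censored session times.
    Returns (event_times, survival_probabilities)."""
    order = np.argsort(durations)
    t = np.asarray(durations)[order]
    d = np.asarray(observed)[order]
    times, surv, s, at_risk = [], [], 1.0, len(t)
    for ti in np.unique(t):
        mask = t == ti
        deaths = int(d[mask].sum())        # sessions that ended at ti
        if deaths:
            s *= 1 - deaths / at_risk
            times.append(ti)
            surv.append(s)
        at_risk -= int(mask.sum())         # drop ended + censored from risk set
    return np.array(times), np.array(surv)

# Session lengths in minutes; two users were still online (censored)
durations = [5, 8, 8, 12, 20, 20, 30, 45]
observed  = [1, 1, 0, 1, 1, 1, 0, 1]
times, surv = kaplan_meier(durations, observed)
print(times.tolist(), np.round(surv, 3).tolist())
# → [5, 8, 12, 20, 45] [0.875, 0.75, 0.6, 0.3, 0.0]
```

Covariates such as network delay or loss would then enter through a regression model (e.g., Cox proportional hazards) on top of this estimator.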
Research in Stochastic Processes and their Applications
1993-01-01
goal is to learn how Gaussian and linear signal processing methodologies should be adapted to deal with non-Gaussian regimes. Part III continues the...
Wavelet methodology to improve single unit isolation in primary motor cortex cells
Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A.
2016-01-01
The proper isolation of action potentials recorded extracellularly from neural tissue is an active area of research in the fields of neuroscience and biomedical signal processing. This paper presents an isolation methodology for neural recordings using the wavelet transform (WT), a statistical thresholding scheme, and the principal component analysis (PCA) algorithm. The effectiveness of five different mother wavelets was investigated: biorthogonal, Daubechies, discrete Meyer, symmetric, and Coifman; along with three different wavelet coefficient thresholding schemes: fixed form threshold, Stein's unbiased estimate of risk, and minimax; and two different thresholding rules: soft and hard thresholding. The signal quality was evaluated using three different statistical measures: mean-squared error, root mean square, and signal-to-noise ratio. The clustering quality was evaluated using two different statistical measures: isolation distance and L-ratio. This research shows that the selection of the mother wavelet has a strong influence on the clustering and isolation of single unit neural activity, with the Daubechies 4 wavelet and minimax thresholding scheme performing the best. PMID:25794461
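The "fixed form" (universal) threshold and the soft/hard rules compared in this study can be sketched in a few lines. For simplicity the thresholding is applied to a synthetic noisy sequence standing in for one level of wavelet detail coefficients; the spike positions and amplitudes are invented.

```python
import numpy as np

def universal_threshold(c):
    """'Fixed form' (universal) threshold sigma * sqrt(2 log N),
    with sigma estimated from the median absolute deviation."""
    sigma = np.median(np.abs(c)) / 0.6745
    return sigma * np.sqrt(2 * np.log(len(c)))

def soft(c, thr):
    """Soft rule: shrink coefficients toward zero by thr."""
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

def hard(c, thr):
    """Hard rule: keep coefficients above thr unchanged, zero the rest."""
    return np.where(np.abs(c) > thr, c, 0.0)

rng = np.random.default_rng(0)
clean = np.zeros(1024)
clean[[100, 500, 900]] = [8.0, -6.0, 7.0]          # "action potentials"
noisy = clean + rng.normal(0, 1.0, clean.size)     # unit-variance noise

thr = universal_threshold(noisy)
for rule in (soft, hard):
    den = rule(noisy, thr)
    snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((den - clean) ** 2))
    print(rule.__name__, "SNR:", round(snr, 1), "dB")
```

Hard thresholding preserves spike amplitudes (soft shrinks them by `thr`), which is one reason the choice of rule measurably affects the isolation metrics the paper reports.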
An evaluation of the directed flow graph methodology
NASA Technical Reports Server (NTRS)
Snyder, W. E.; Rajala, S. A.
1984-01-01
The applicability of the Directed Graph Methodology (DGM) to the design and analysis of special purpose image and signal processing hardware was evaluated. A special purpose image processing system was designed and described using DGM. The design, suitable for very large scale integration (VLSI), implements a region labeling technique. Two computer chips were designed, both using metal-nitride-oxide-silicon (MNOS) technology, as well as a functional system utilizing those chips to perform real time region labeling. The system is described in terms of DGM primitives. As it is currently implemented, DGM is inappropriate for describing synchronous, tightly coupled, special purpose systems. The nature of the DGM formalism lends itself more readily to modeling networks of general purpose processors.
Advanced Quantitative Measurement Methodology in Physics Education Research
ERIC Educational Resources Information Center
Wang, Jing
2009-01-01
The ultimate goal of physics education research (PER) is to develop a theoretical framework to understand and improve the learning process. In this journey of discovery, assessment serves as our headlamp and alpenstock. It sometimes detects signals in student mental structures, and sometimes presents the difference between expert understanding and…
Classroom Social Signal Analysis
ERIC Educational Resources Information Center
Raca, Mirko; Dillenbourg, Pierre
2014-01-01
We present our efforts towards building an observational system for measuring classroom activity. The goal is to explore visual cues which can be acquired with a system of video cameras and automatically processed to enrich the teacher's perception of the audience. The paper will give a brief overview of our methodology, explored features, and…
Bayesian wavelet PCA methodology for turbomachinery damage diagnosis under uncertainty
NASA Astrophysics Data System (ADS)
Xu, Shengli; Jiang, Xiaomo; Huang, Jinzhi; Yang, Shuhua; Wang, Xiaofang
2016-12-01
Centrifugal compressors often suffer various defects, such as impeller cracking, resulting in forced outage of the entire plant. Damage diagnostics and condition monitoring of such turbomachinery systems have become an increasingly important and powerful tool to prevent potential failures in components and reduce unplanned forced outages and further maintenance costs, while improving the reliability, availability and maintainability of a turbomachinery system. This paper presents a probabilistic signal processing methodology for damage diagnostics using multiple time history data collected from different locations of a turbomachine, considering data uncertainty and multivariate correlation. The proposed methodology is based on the integration of three advanced state-of-the-art data mining techniques: discrete wavelet packet transform, Bayesian hypothesis testing, and probabilistic principal component analysis. The multiresolution wavelet analysis approach is employed to decompose a time series signal into different levels of wavelet coefficients. These coefficients represent multiple time-frequency resolutions of a signal. Bayesian hypothesis testing is then applied to each level of wavelet coefficients to remove possible imperfections. The ratio of posterior odds Bayesian approach provides a direct means to assess whether there is imperfection in the decomposed coefficients, thus avoiding over-denoising. Power spectral density estimated by the Welch method is utilized to evaluate the effectiveness of the Bayesian wavelet cleansing method. Furthermore, the probabilistic principal component analysis approach is developed to reduce the dimensionality of multiple time series and to address multivariate correlation and data uncertainty for damage diagnostics.
The proposed methodology and generalized framework is demonstrated with a set of sensor data collected from a real-world centrifugal compressor with impeller cracks, through both time series and contour analyses of vibration signal and principal components.
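The dimensionality-reduction step of this framework can be illustrated with classical PCA via the SVD on synthetic multi-channel features. Probabilistic PCA additionally estimates a per-channel noise floor and yields a likelihood model; this deterministic sketch (with invented data) shows only the shared core.

```python
import numpy as np

rng = np.random.default_rng(3)
latent = rng.normal(size=(500, 2))       # two underlying "health state" factors
mixing = rng.normal(size=(2, 8))         # eight correlated sensor channels
X = latent @ mixing + 0.1 * rng.normal(size=(500, 8))   # observations + noise

# PCA via SVD of the mean-centered data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
scores = Xc @ Vt[:2].T                   # project onto the first two PCs
print(round(explained[:2].sum(), 3))     # first two PCs dominate the variance
```

In the paper's setting, the retained principal components of multi-sensor vibration features are what feed the damage diagnosis, with the probabilistic formulation handling the data uncertainty.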
Extracellular Electrophysiological Measurements of Cooperative Signals in Astrocytes Populations
Mestre, Ana L. G.; Inácio, Pedro M. C.; Elamine, Youssef; Asgarifar, Sanaz; Lourenço, Ana S.; Cristiano, Maria L. S.; Aguiar, Paulo; Medeiros, Maria C. R.; Araújo, Inês M.; Ventura, João; Gomes, Henrique L.
2017-01-01
Astrocytes are neuroglial cells that exhibit functional electrical properties sensitive to neuronal activity and capable of modulating neurotransmission. Thus, electrophysiological recordings of astroglial activity are very attractive to study the dynamics of glial signaling. This contribution reports on the use of ultra-sensitive planar electrodes combined with low noise and low frequency amplifiers that enable the detection of extracellular signals produced by primary cultures of astrocytes isolated from mouse cerebral cortex. Recorded activity is characterized by spontaneous bursts comprised of discrete signals with pronounced changes on the signal rate and amplitude. Weak and sporadic signals become synchronized and evolve with time to higher amplitude signals with a quasi-periodic behavior, revealing a cooperative signaling process. The methodology presented herewith enables the study of ionic fluctuations of population of cells, complementing the single cells observation by calcium imaging as well as by patch-clamp techniques. PMID:29109679
Pfeifer, Mischa D; Scholkmann, Felix; Labruyère, Rob
2017-01-01
Even though research in the field of functional near-infrared spectroscopy (fNIRS) has been performed for more than 20 years, consensus on signal processing methods is still lacking. A significant knowledge gap exists between established researchers and those entering the field. One major issue regularly observed in publications from researchers new to the field is the failure to consider possible signal contamination by hemodynamic changes unrelated to neurovascular coupling (i.e., scalp blood flow and systemic blood flow). This might be due to the fact that these researchers use the signal processing methods provided by the manufacturers of their measurement device without an advanced understanding of the performed steps. The aim of the present study was to investigate how different signal processing approaches (including and excluding approaches that partially correct for the possible signal contamination) affect the results of a typical functional neuroimaging study performed with fNIRS. In particular, we evaluated one standard signal processing method provided by a commercial company and compared it to three customized approaches. We thereby investigated the influence of the chosen method on the statistical outcome of a clinical data set (task-evoked motor cortex activity). No short-channels were used in the present study and therefore two types of multi-channel corrections based on multiple long-channels were applied. The choice of the signal processing method had a considerable influence on the outcome of the study. While methods that ignored the contamination of the fNIRS signals by task-evoked physiological noise yielded several significant hemodynamic responses over the whole head, the statistical significance of these findings disappeared when accounting for part of the contamination using a multi-channel regression. 
We conclude that adopting signal processing methods that correct for physiological confounding effects might yield more realistic results in cases where multi-distance measurements are not possible. Furthermore, we recommend using manufacturers' standard signal processing methods only in case the user has an advanced understanding of every signal processing step performed.
REVIEW ARTICLE: Spectrophotometric applications of digital signal processing
NASA Astrophysics Data System (ADS)
Morawski, Roman Z.
2006-09-01
Spectrophotometry is increasingly the method of choice not only in analysis of (bio)chemical substances, but also in the identification of physical properties of various objects and their classification. The applications of spectrophotometry include such diversified tasks as monitoring of optical telecommunications links, assessment of eating quality of food, forensic classification of papers, biometric identification of individuals, detection of insect infestation of seeds and classification of textiles. In all those applications, large volumes of data generated by spectrophotometers are processed by various digital means in order to extract measurement information. The main objective of this paper is to review the state-of-the-art methodology for digital signal processing (DSP) when applied to data provided by spectrophotometric transducers and spectrophotometers. First, a general methodology of DSP applications in spectrophotometry, based on DSP-oriented models of spectrophotometric data, is outlined. Then, the most important classes of DSP methods for processing spectrophotometric data—the methods for DSP-aided calibration of spectrophotometric instrumentation, the methods for the estimation of spectra on the basis of spectrophotometric data, the methods for the estimation of spectrum-related measurands on the basis of spectrophotometric data—are presented. Finally, the methods for preprocessing and postprocessing of spectrophotometric data are overviewed. Throughout the review, the applications of DSP are illustrated with numerous examples related to broadly understood spectrophotometry.
Single-Molecule Imaging of Cellular Signaling
NASA Astrophysics Data System (ADS)
De Keijzer, Sandra; Snaar-Jagalska, B. Ewa; Spaink, Herman P.; Schmidt, Thomas
Single-molecule microscopy is an emerging technique for understanding the function of a protein in the context of its natural environment. In our laboratory this technique has been used to study the dynamics of signal transduction in vivo. A multitude of signal transduction cascades are initiated by interactions between proteins in the plasma membrane. These cascades start with the binding of a ligand to its receptor, thereby activating downstream signaling pathways which finally result in complex cellular responses. To fully understand these processes it is important to study the initial steps of the signaling cascades. Standard biological assays mostly call for overexpression of the proteins and high concentrations of ligand. This sets severe limits on the interpretation of, for instance, the time-course of the observations, given the large temporal spread caused by the diffusion-limited binding processes. Methods and limitations of single-molecule microscopy for the study of cell signaling are discussed using the example of the chemotactic signaling of the slime mold Dictyostelium discoideum. Single-molecule studies, as reviewed in this chapter, appear to be one of the essential methodologies for the full spatiotemporal clarification of cellular signaling, one of the ultimate goals in cell biology.
A new algorithm for epilepsy seizure onset detection and spread estimation from EEG signals
NASA Astrophysics Data System (ADS)
Quintero-Rincón, Antonio; Pereyra, Marcelo; D'Giano, Carlos; Batatia, Hadj; Risk, Marcelo
2016-04-01
Appropriate diagnosis and treatment of epilepsy is a major public health issue. Patients suffering from this disease often exhibit different physical manifestations, which result from the synchronous and excessive discharge of a group of neurons in the cerebral cortex. Extracting this information from EEG signals is an important problem in biomedical signal processing. In this work we propose a new algorithm for seizure onset detection and spread estimation in epilepsy patients. The algorithm is based on a multilevel 1-D wavelet decomposition that captures the physiological brain frequency bands, coupled with a generalized Gaussian model. Preliminary experiments with signals from 30 epileptic seizures recorded in 11 subjects suggest that the proposed methodology is a powerful tool for detecting the onset of epileptic seizures and estimating their spread across the brain.
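The multilevel 1-D wavelet decomposition step can be sketched with a plain Haar filterbank (chosen here for brevity; the mother wavelet actually used by the authors, and the generalized Gaussian modeling stage, are not reproduced):

```python
import numpy as np

def haar_dwt_multilevel(x, levels):
    """Multilevel 1-D Haar wavelet decomposition.

    Returns [cA_n, cD_n, ..., cD_1]: the final approximation followed
    by the detail coefficients, coarsest band first.
    """
    s = 1.0 / np.sqrt(2.0)
    approx, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        if len(approx) % 2:                      # pad to even length
            approx = np.append(approx, approx[-1])
        a = s * (approx[0::2] + approx[1::2])    # low-pass half-band
        d = s * (approx[0::2] - approx[1::2])    # high-pass half-band
        details.append(d)
        approx = a
    return [approx] + details[::-1]

# toy "EEG": 10 Hz alpha plus 40 Hz gamma sampled at 256 Hz
fs = 256
t = np.arange(512) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
coeffs = haar_dwt_multilevel(x, levels=4)
print([len(c) for c in coeffs])   # [32, 32, 64, 128, 256]
```

Each level halves the band, so at 256 Hz the detail bands roughly line up with gamma (64-128 Hz), beta (32-64 Hz), and so on; the orthonormal scaling preserves the total signal energy across the coefficient sets.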
Fabrication of novel plasmonics-active substrates
NASA Astrophysics Data System (ADS)
Dhawan, Anuj; Gerhold, Michael; Du, Yan; Misra, Veena; Vo-Dinh, Tuan
2009-02-01
This paper describes methodologies for fabricating highly efficient plasmonics-active SERS substrates - metallic nanowire structures with pointed geometries and sub-5 nm gaps between the nanowires that concentrate high EM fields in these regions - on a wafer scale by a reproducible process compatible with large-scale development of these substrates. Excitation of surface plasmons in these nanowire structures leads to substantial enhancement of the Raman scattering signal obtained from molecules lying in the vicinity of the nanostructure surface. The methodologies employed included metallic coating of silicon nanowires fabricated by deep UV lithography, as well as controlled growth of silicon germanium on silicon nanostructures to form diamond-shaped nanowire structures, followed by metallic coating. These SERS substrates were employed for detecting chemical and biological molecules of interest. In order to characterize the SERS substrates developed in this work, we obtained SERS signals from molecules such as p-mercaptobenzoic acid (pMBA) and cresyl fast violet (CFV) attached to or adsorbed on the metal-coated SERS substrates. It was observed that both gold-coated triangular-shaped and gold-coated diamond-shaped nanowire substrates provided very high SERS signals for nanowires having sub-15 nm gaps, and that the SERS signal depends on the closest spacing between the metal-coated silicon and silicon germanium nanowires. SERS substrates developed by the different processes were also employed for detection of biological molecules such as DPA (dipicolinic acid), an excellent marker for spores of bacteria such as those that cause anthrax.
Unsupervised pattern recognition methods in ciders profiling based on GCE voltammetric signals.
Jakubowska, Małgorzata; Sordoń, Wanda; Ciepiela, Filip
2016-07-15
This work presents a complete methodology of distinguishing between different brands of cider and ageing degrees, based on voltammetric signals, utilizing dedicated data preprocessing procedures and unsupervised multivariate analysis. It was demonstrated that voltammograms recorded on glassy carbon electrode in Britton-Robinson buffer at pH 2 are reproducible for each brand. By application of clustering algorithms and principal component analysis visible homogenous clusters were obtained. Advanced signal processing strategy which included automatic baseline correction, interval scaling and continuous wavelet transform with dedicated mother wavelet, was a key step in the correct recognition of the objects. The results show that voltammetry combined with optimized univariate and multivariate data processing is a sufficient tool to distinguish between ciders from various brands and to evaluate their freshness. Copyright © 2016 Elsevier Ltd. All rights reserved.
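The automatic baseline correction step of the preprocessing pipeline can be illustrated with a simple polynomial-subtraction sketch (a stand-in for the paper's dedicated procedure; the interval scaling and wavelet steps are not reproduced, and the voltammogram is synthetic):

```python
import numpy as np

def baseline_correct(y, degree=3):
    """Subtract a fitted low-order polynomial baseline from a signal
    (a simple stand-in for automatic baseline correction)."""
    x = np.arange(len(y))
    return y - np.polyval(np.polyfit(x, y, degree), x)

# synthetic voltammogram: one Gaussian peak riding a drifting baseline
x = np.arange(500)
peak = 1.0 * np.exp(-0.5 * ((x - 200) / 15.0) ** 2)
baseline = 0.4 + 0.002 * x + 1e-6 * x ** 2
y = peak + baseline

corrected = baseline_correct(y)
print(int(np.argmax(corrected)))   # peak position recovered near 200
```

A global polynomial fit slightly absorbs the peak itself; production procedures typically exclude peak regions from the fit or use asymmetric-loss baselines, but the peak position survives either way, which is what the subsequent multivariate analysis relies on.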
Exploring Listeners' Real-Time Reactions to Regional Accents
ERIC Educational Resources Information Center
Watson, Kevin; Clark, Lynn
2015-01-01
Evaluative reactions to language stimuli are presumably dynamic events, constantly changing through time as the signal unfolds, yet the tools we usually use to capture these reactions provide us with only a snapshot of this process by recording reactions at a single point in time. This paper outlines and evaluates a new methodology which employs…
ERIC Educational Resources Information Center
Quintans, C.; Colmenar, A.; Castro, M.; Moure, M. J.; Mandado, E.
2010-01-01
ADCs (analog-to-digital converters), especially Pipeline and Sigma-Delta converters, are designed using complex architectures in order to increase their sampling rate and/or resolution. Consequently, the learning of ADC devices also encompasses complex concepts such as multistage synchronization, latency, oversampling, modulation, noise shaping,…
NASA Technical Reports Server (NTRS)
Lih, Shyh-Shiuh; Bar-Cohen, Yoseph; Lee, Hyeong Jae; Takano, Nobuyuki; Bao, Xiaoqi
2013-01-01
An advanced signal processing methodology is being developed to monitor the height of condensed water through the wall of a steel pipe while operating at temperatures as high as 250°. Using existing techniques, a previous study indicated that, when the water height is low or there is disturbance in the environment, the predicted water height may not be accurate. In recent years, the use of autocorrelation and envelope techniques in signal processing has been demonstrated to be a very useful tool for practical applications. In this paper, various signal processing techniques, including autocorrelation, the Hilbert transform, and the Shannon energy envelope method, were studied and implemented to determine the water height in the steam pipe. The results have shown that the developed method provides a good capability for monitoring the height under regular conditions. For shallow-water or no-water conditions, an alternative solution is suggested: a hybrid method combining the Hilbert transform (HT) with a high-pass filter and an optimized windowing technique. Further development of the reported methods would provide a powerful tool for the identification of disturbances of the water height inside the pipe.
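Two of the envelope techniques named above can be sketched in numpy: the Hilbert envelope via the FFT-based analytic signal, and the Shannon energy envelope. The ultrasonic tone-burst "echo" below is invented for illustration.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (Hilbert transform method)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0            # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0                # keep Nyquist bin
    return np.fft.ifft(X * h)

def shannon_energy_envelope(x, eps=1e-12):
    """Shannon energy -x^2 log(x^2) of the normalized signal;
    emphasizes mid-range amplitudes over both extremes."""
    xn = x / (np.max(np.abs(x)) + eps)
    e = xn ** 2
    return -e * np.log(e + eps)

# synthetic ultrasonic echo: a 50 kHz tone burst centered at t = 400 us
fs, n = 1e6, 2048
t = np.arange(n) / fs
burst = np.sin(2 * np.pi * 50e3 * t) * np.exp(-((t - 400e-6) / 50e-6) ** 2)
x = burst + 0.05 * np.random.default_rng(0).normal(size=n)

env = np.abs(analytic_signal(x))       # Hilbert envelope
se = shannon_energy_envelope(x)        # Shannon energy envelope
print(int(np.argmax(env)))             # near sample 400, the burst center
```

The Hilbert envelope peaks at the echo arrival, which is what a time-of-flight water-height estimate keys on; the Shannon energy variant suppresses both low-level noise and saturating peaks.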
Challenges and Opportunities for Harmonizing Research Methodology: Raw Accelerometry.
van Hees, Vincent T; Thaler-Kall, Kathrin; Wolf, Klaus-Hendrik; Brønd, Jan C; Bonomi, Alberto; Schulze, Mareike; Vigl, Matthäus; Morseth, Bente; Hopstock, Laila Arnesdatter; Gorzelniak, Lukas; Schulz, Holger; Brage, Søren; Horsch, Alexander
2016-12-07
Raw accelerometry is increasingly being used in physical activity research, but diversity in sensor design, attachment and signal processing challenges the comparability of research results. Therefore, efforts are needed to harmonize the methodology. In this article we reflect on how increased methodological harmonization may be achieved. The authors of this work convened for a two-day workshop (March 2014) themed on methodological harmonization of raw accelerometry. The discussions at the workshop were used as a basis for this review. Key stakeholders were identified as manufacturers, method developers, method users (application), publishers, and funders. To facilitate methodological harmonization in raw accelerometry the following action points were proposed: i) Manufacturers are encouraged to provide a detailed specification of their sensors, ii) Each fundamental step of algorithms for processing raw accelerometer data should be documented, and ideally also motivated, to facilitate interpretation and discussion, iii) Algorithm developers and method users should be open about uncertainties in the description of data and the uncertainty of the inference itself, iv) All new algorithms which are pitched as "ready for implementation" should be shared with the community to facilitate replication and ongoing evaluation by independent groups, and v) A dynamic interaction between method stakeholders should be encouraged to facilitate a well-informed harmonization process. The workshop led to the identification of a number of opportunities for harmonizing methodological practice. The discussion as well as the practical checklists proposed in this review should provide guidance for stakeholders on how to contribute to increased harmonization.
Cardiac arrhythmia beat classification using DOST and PSO tuned SVM.
Raj, Sandeep; Ray, Kailash Chandra; Shankar, Om
2016-11-01
The increase in the number of deaths due to cardiovascular diseases (CVDs) has drawn significant attention to the study of electrocardiogram (ECG) signals. ECG signals are examined by experienced cardiologists for accurate and proper diagnosis, but this becomes difficult and time-consuming for long-term recordings. Various signal processing techniques have been studied for analyzing the ECG signal, but they bear limitations due to its non-stationary behavior. Hence, this study aims to improve the classification accuracy rate and provide an automated diagnostic solution for the detection of cardiac arrhythmias. The proposed methodology consists of four stages: filtering, R-peak detection, feature extraction and classification. In this study, a wavelet-based approach is used to filter the raw ECG signal, whereas the Pan-Tompkins algorithm is used for detecting the R-peaks. In the feature extraction stage, the discrete orthogonal Stockwell transform (DOST) is presented for an efficient time-frequency representation (i.e. morphological descriptors) of a time-domain signal; it retains absolute phase information to distinguish among ECG signals with various non-stationary behaviors. These morphological descriptors are further reduced to a lower-dimensional space using principal component analysis and combined with the dynamic features (i.e. based on the RR-intervals) of the input signal. This combination of two kinds of descriptors forms the feature set of an input signal, which is classified into the respective categories by employing particle swarm optimization (PSO) tuned support vector machines (SVM).
The proposed methodology is validated on the benchmark MIT-BIH arrhythmia database and evaluated under two assessment schemes, yielding an improved overall accuracy of 99.18% for sixteen classes in the category-based scheme and 89.10% for five classes (mapped according to the AAMI standard) in the patient-based assessment scheme, compared to state-of-the-art diagnosis. The reported results are further compared with existing methodologies in the literature. The proposed feature representation of cardiac signals, together with the PSO-based optimization of the SVM classifier, yielded improved classification accuracy in both assessment schemes evaluated on the benchmark MIT-BIH arrhythmia database, and hence can be utilized for automated computer-aided diagnosis of cardiac arrhythmia beats. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
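The R-peak detection stage can be illustrated with a stripped-down Pan-Tompkins-style pipeline: derivative, squaring, moving-window integration, then peak picking with a refractory period. The fixed-fraction threshold and the impulse-train "ECG" below are simplifications for the sketch; the full algorithm uses bandpass filtering and adaptive dual thresholds.

```python
import numpy as np

def detect_r_peaks(ecg, fs):
    """Simplified Pan-Tompkins: derivative -> squaring ->
    moving-window integration -> threshold with refractory period."""
    diff = np.diff(ecg)                        # emphasizes QRS slope
    sq = diff ** 2                             # rectify and boost large slopes
    win = max(1, int(0.15 * fs))               # ~150 ms integration window
    mwi = np.convolve(sq, np.ones(win) / win, mode="same")
    thr = 0.5 * np.max(mwi)                    # crude fixed-fraction threshold
    peaks, last = [], -fs
    for i in range(1, len(mwi) - 1):
        if mwi[i] > thr and mwi[i] >= mwi[i - 1] and mwi[i] > mwi[i + 1]:
            if i - last > 0.2 * fs:            # 200 ms refractory period
                peaks.append(i)
                last = i
    return peaks

# toy ECG: one narrow spike per second at fs = 250 Hz
fs = 250
x = np.zeros(5 * fs)
for k in range(5):
    x[k * fs + 100] = 1.0                      # "R peaks" 250 samples apart
peaks = detect_r_peaks(x, fs)
print(len(peaks))   # 5
```

On real ECG the integration window merges the Q-R-S complex into one lobe, and the refractory period prevents double detection of a single beat.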
On a Chirplet Transform Based Method for Co-channel Voice Separation
NASA Astrophysics Data System (ADS)
Dugnol, B.; Fernández, C.; Galiano, G.; Velasco, J.
We use signal and image processing algorithms to estimate the number of wolves emitting howls or barks in a given field recording, as an alternative to traditional trace-collecting methodologies for counting individuals. We proceed in two steps. First, we clean and enhance the signal by applying PDE-based image processing algorithms to the signal spectrogram. Second, assuming that the wolf chorus may be modelled as a sum of nonlinear chirps, we use the quadratic energy distribution corresponding to the Chirplet Transform of the signal to estimate the corresponding instantaneous frequencies, chirp rates and amplitudes at each instant of the recording. We finally establish suitable criteria to decide how such estimates are connected in time.
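The instantaneous-frequency tracking idea can be approximated with a plain spectrogram-ridge estimator, a simpler stand-in for the quadratic chirplet energy distribution used in the paper; the linear "howl" chirp below is invented for the example.

```python
import numpy as np

def stft_ridge(x, fs, win=256, hop=64):
    """Instantaneous-frequency estimate by tracking the peak
    (ridge) of a Hann-windowed spectrogram, frame by frame."""
    w = np.hanning(win)
    freqs = np.fft.rfftfreq(win, 1 / fs)
    ridge = []
    for start in range(0, len(x) - win, hop):
        mag = np.abs(np.fft.rfft(x[start:start + win] * w))
        ridge.append(freqs[np.argmax(mag)])    # dominant frequency per frame
    return np.array(ridge)

# synthetic "howl": linear chirp from 400 Hz to 800 Hz over 2 s
fs = 8000
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * (400 * t + 100 * t ** 2))   # f(t) = 400 + 200 t
ridge = stft_ridge(x, fs)
print(ridge[0], ridge[-1])    # rises from ~400 Hz toward ~800 Hz
```

A single ridge suffices for one voice; separating a chorus requires tracking several ridges simultaneously and the chirp-rate information that the chirplet transform adds.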
Szaciłowski, Konrad
2007-01-01
Analogies between photoactive nitric oxide generators and various electronic devices: logic gates and operational amplifiers are presented. These analogies have important biological consequences: application of control parameters allows for better targeting and control of nitric oxide drugs. The same methodology may be applied in the future for other therapeutic strategies and at the same time helps to understand natural regulatory and signaling processes in biological systems.
A Novel Signal Modeling Approach for Classification of Seizure and Seizure-Free EEG Signals.
Gupta, Anubha; Singh, Pushpendra; Karlekar, Mandar
2018-05-01
This paper presents a signal modeling-based new methodology for automatic seizure detection in EEG signals. The proposed method consists of three stages. First, a multirate filterbank structure is proposed that is constructed using the basis vectors of the discrete cosine transform. The proposed filterbank decomposes EEG signals into their respective brain rhythms: delta, theta, alpha, beta, and gamma. Second, these brain rhythms are statistically modeled with the class of self-similar Gaussian random processes, namely, fractional Brownian motion and fractional Gaussian noise. The statistics of these processes are modeled using a single parameter called the Hurst exponent. In the last stage, the value of the Hurst exponent and autoregressive moving average parameters are used as features to design a binary support vector machine classifier to classify pre-ictal, inter-ictal (epileptic with seizure-free interval), and ictal (seizure) EEG segments. The performance of the classifier is assessed via extensive analysis on two widely used data sets and is observed to provide good accuracy on both. Thus, this paper proposes a novel signal model for EEG data that best captures the attributes of these signals and hence helps boost the classification accuracy of seizure and seizure-free epochs.
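The Hurst exponent feature can be estimated in several standard ways; the aggregated-variance method is one of the simplest (the paper does not commit to this particular estimator, so this is an illustrative sketch):

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(4, 8, 16, 32, 64)):
    """Hurst exponent of a (fractional Gaussian noise-like) series
    via the aggregated-variance method:
    Var(mean over blocks of size m) ~ m^(2H - 2)."""
    x = np.asarray(x, dtype=float)
    logs_m, logs_v = [], []
    for m in block_sizes:
        nblocks = len(x) // m
        means = x[:nblocks * m].reshape(nblocks, m).mean(axis=1)
        logs_m.append(np.log(m))
        logs_v.append(np.log(np.var(means)))
    slope = np.polyfit(logs_m, logs_v, 1)[0]   # slope = 2H - 2
    return 1.0 + slope / 2.0

rng = np.random.default_rng(1)
h = hurst_aggvar(rng.normal(size=65536))
print(round(h, 2))   # close to 0.5 for white noise (fGn with H = 0.5)
```

White noise gives H near 0.5; persistent (long-memory) rhythms push H above 0.5, which is the kind of statistical signature the classifier exploits.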
Psycho-physiological training approach for amputee rehabilitation.
Dhal, Chandan; Wahi, Akshat
2015-01-01
Electromyography (EMG) signals are very noisy and difficult to acquire. Conventional techniques involve amplification and filtering through analog circuits, which makes the system very unstable. The surface EMG signal lies in the frequency range of 6 Hz to 600 Hz, with a dominant range from 20 Hz to 150 Hz. Our project aimed to analyze an EMG signal effectively over its complete frequency range. To remove these defects, we designed what we think is an easy, effective, and reliable signal processing technique. We performed spectrum analysis so that all the processing, such as amplification, filtering, and thresholding, could be done on an Arduino Uno board, removing the need for analog amplifier and filtering circuits, which have stability issues. The conversion of any signal from the time domain to the frequency domain yields a detailed description of the signal. Our main aim is to use these data for an alternative methodology for rehabilitation, a psychophysiological approach to prosthesis rehabilitation, which can reduce the cost of the myoelectric arm as well as increase its efficiency. This method allows the user to gain control over their muscle sets in a less stressful environment. Further, we have also described how our approach is viable and can benefit the rehabilitation process. We used our digitally processed EMG signals to play an online game and showed how this approach can be used in rehabilitation.
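The frequency-domain thresholding described above amounts to computing the signal power in the dominant 20-150 Hz band and comparing it against a decision level. A numpy sketch with an invented threshold and toy rest/contraction signals:

```python
import numpy as np

def emg_band_power(x, fs, band=(20.0, 150.0)):
    """Power of an EMG frame within the dominant 20-150 Hz band,
    computed from a Hann-windowed FFT."""
    X = np.fft.rfft(x * np.hanning(len(x)))
    f = np.fft.rfftfreq(len(x), 1 / fs)
    sel = (f >= band[0]) & (f <= band[1])
    return float(np.sum(np.abs(X[sel]) ** 2) / len(x))

fs = 1000
t = np.arange(0, 0.5, 1 / fs)
rest = 0.01 * np.random.default_rng(6).normal(size=len(t))       # baseline noise
contraction = rest + 0.5 * np.sin(2 * np.pi * 80 * t)            # toy 80 Hz burst

threshold = 0.05                         # invented decision level
print(emg_band_power(rest, fs) > threshold,
      emg_band_power(contraction, fs) > threshold)   # False True
```

On a microcontroller the same computation runs on short frames with a fixed-point FFT; the threshold would be calibrated per user during a rest recording.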
Majumdar, Angshul; Gogna, Anupriya; Ward, Rabab
2014-08-25
We address the problem of acquiring and transmitting EEG signals in Wireless Body Area Networks (WBAN) in an energy efficient fashion. In WBANs, the energy is consumed by three operations: sensing (sampling), processing and transmission. Previous studies only addressed the problem of reducing the transmission energy. For the first time, in this work, we propose a technique to reduce sensing and processing energy as well: this is achieved by randomly under-sampling the EEG signal. We depart from previous Compressed Sensing based approaches and formulate signal recovery (from under-sampled measurements) as a matrix completion problem. A new algorithm to solve the matrix completion problem is derived here. We test our proposed method and find that the reconstruction accuracy of our method is significantly better than state-of-the-art techniques; and we achieve this while saving sensing, processing and transmission energy. Simple power analysis shows that our proposed methodology consumes considerably less power compared to previous CS based techniques.
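The recovery step can be illustrated with a generic singular-value soft-thresholding (SoftImpute-style) iteration for matrix completion. This is a stand-in for illustration only; the paper derives its own algorithm, and the matrix, sampling rate and shrinkage parameter below are invented.

```python
import numpy as np

def complete_matrix(M, mask, tau=2.0, iters=300):
    """Matrix completion by iterated singular-value soft-thresholding.
    mask: boolean array, True where entries of M were observed."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)       # shrink singular values -> low rank
        X = (U * s) @ Vt
        X[mask] = M[mask]                  # re-enforce observed entries
    return X

# rank-2 test matrix with ~60% of entries observed
rng = np.random.default_rng(2)
A = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 40))
mask = rng.random(A.shape) < 0.6
Ahat = complete_matrix(A, mask)
err = np.linalg.norm(Ahat - A) / np.linalg.norm(A)
print(err < 0.2)    # low-rank structure lets the missing 40% be recovered
```

The connection to the paper's setting: multichannel EEG segments are approximately low rank across channels, so randomly skipped samples play the role of the unobserved entries.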
Lewis, George K; Lewis, George K; Olbricht, William
2008-01-01
This paper explains the circuitry and signal processing used to perform electrical impedance spectroscopy on piezoelectric materials and ultrasound transducers. Here, we measure and compare the impedance spectra of 2-5 MHz piezoelectrics, but the methodology applies to 700 kHz-20 MHz ultrasonic devices as well. Using a 12 ns wide, 5 volt pulsing circuit as an impulse, we determine the electrical impedance curves experimentally using Ohm's law and the fast Fourier transform (FFT), and compare the results with mathematical models. The method allows rapid impedance measurement over a range of frequencies using a narrow input pulse, a digital oscilloscope and FFT techniques. The technique compares well to current methodologies such as network and impedance analyzers while providing additional versatility in the electrical impedance measurement. The technique is theoretically simple, easy to implement, and can be completed with ordinary laboratory instrumentation at minimal cost. PMID:19081773
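The measurement principle is Ohm's law in the frequency domain, Z(f) = V(f)/I(f), applied to the FFTs of the excitation pulse and the current response. A numpy sketch with a simulated series-RC load standing in for the transducer (component values and sampling rate are invented):

```python
import numpy as np

# Simulate a narrow 5 V pulse driving a series RC load
fs, n = 100e6, 8192                      # 100 MS/s digitizer, 8192 samples
dt = 1 / fs
R, C = 50.0, 1e-9                        # assumed 50 ohm, 1 nF test load

v = np.zeros(n)
v[:2] = 5.0                              # ~20 ns excitation pulse
i = np.zeros(n)
q = 0.0                                  # capacitor charge
for k in range(n):                       # explicit Euler of v = R*i + q/C
    i[k] = (v[k] - q / C) / R
    q += i[k] * dt

# Ohm's law in the frequency domain: Z(f) = V(f) / I(f)
f = np.fft.rfftfreq(n, dt)
Z = np.fft.rfft(v) / np.fft.rfft(i)

k1 = np.argmin(np.abs(f - 1e6))          # inspect the estimate near 1 MHz
Z_model = R + 1 / (1j * 2 * np.pi * f[k1] * C)
print(abs(Z[k1] - Z_model) / abs(Z_model) < 0.05)
```

A single pulse excites all frequencies at once, which is why one capture plus one FFT replaces a swept-sine analyzer; the few-percent residual here comes from the Euler discretization of the toy circuit, not from the method.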
Wavelet methodology to improve single unit isolation in primary motor cortex cells.
Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A
2015-05-15
The proper isolation of action potentials recorded extracellularly from neural tissue is an active area of research in the fields of neuroscience and biomedical signal processing. This paper presents an isolation methodology for neural recordings using the wavelet transform (WT), a statistical thresholding scheme, and the principal component analysis (PCA) algorithm. The effectiveness of five different mother wavelets was investigated: biorthogonal, Daubechies, discrete Meyer, symmetric, and Coifman; along with three different wavelet coefficient thresholding schemes: fixed form threshold, Stein's unbiased estimate of risk, and minimax; and two different thresholding rules: soft and hard thresholding. The signal quality was evaluated using three different statistical measures: mean squared error, root mean square, and signal-to-noise ratio. The clustering quality was evaluated using two different statistical measures: isolation distance and L-ratio. This research shows that the selection of the mother wavelet has a strong influence on the clustering and isolation of single-unit neural activity, with the Daubechies 4 wavelet and minimax thresholding scheme performing the best. Copyright © 2015. Published by Elsevier B.V.
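The thresholding rules compared in the paper can be sketched directly: the soft and hard rules, plus the fixed-form (universal) threshold with a median-absolute-deviation noise estimate. The coefficient vector below is synthetic, with one planted "spike" coefficient.

```python
import numpy as np

def soft_threshold(c, thr):
    """Soft rule: shrink coefficients toward zero by thr."""
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

def hard_threshold(c, thr):
    """Hard rule: zero out coefficients below thr, keep the rest intact."""
    return np.where(np.abs(c) >= thr, c, 0.0)

def fixed_form_threshold(c):
    """Fixed-form (universal) threshold sigma * sqrt(2 log n), with
    sigma estimated by the median absolute deviation."""
    sigma = np.median(np.abs(c)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(len(c)))

rng = np.random.default_rng(3)
c = rng.normal(size=1024)            # pure-noise detail coefficients
c[10] = 8.0                          # one genuine spike coefficient
thr = fixed_form_threshold(c)
kept_hard = np.flatnonzero(hard_threshold(c, thr))
kept_soft = np.flatnonzero(soft_threshold(c, thr))
print(kept_hard.tolist() == kept_soft.tolist())   # same support survives
```

Both rules keep the planted coefficient and discard essentially all noise; they differ in that the soft rule also shrinks the survivors, which biases amplitudes but reduces variance, the trade-off the minimax scheme tunes.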
A Systematic Software, Firmware, and Hardware Codesign Methodology for Digital Signal Processing
2014-03-01
Software and hardware partitioning is a very difficult challenge in the field of embedded system design.
Functional relationship-based alarm processing
Corsberg, D.R.
1987-04-13
A functional relationship-based alarm processing system and method analyzes each alarm as it is activated and determines its relative importance with other currently activated alarms and signals in accordance with the relationships that the newly activated alarm has with other currently activated alarms. Once the initial level of importance of the alarm has been determined, that alarm is again evaluated if another related alarm is activated. Thus, each alarm's importance is continuously updated as the state of the process changes during a scenario. Four hierarchical relationships are defined by this alarm filtering methodology: (1) level precursor (usually occurs when there are two alarm settings on the same parameter); (2) direct precursor (based on causal factors between two alarms); (3) required action (system response or action expected within a specified time following activation of an alarm or combination of alarms and process signals); and (4) blocking condition (alarms that are normally expected and are not considered important). 11 figs.
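The four hierarchical relationships can be sketched as a small rule-driven filter that re-ranks the active alarm set each time it changes. The alarm names and rule tables below are invented for the example; a real system would load plant-specific relationship tables.

```python
# Hypothetical rule tables illustrating the relationship types from the
# patent (direct precursor, level precursor, blocking condition).
DIRECT_PRECURSORS = {"PUMP_TRIP": {"LOW_FLOW"}}    # PUMP_TRIP explains LOW_FLOW
LEVEL_PRECURSORS = {"LEVEL_HI_HI": {"LEVEL_HI"}}   # two settings, one parameter
BLOCKING = {"STARTUP_EXPECTED"}                    # normally expected -> suppress

def prioritize(active):
    """Re-rank the currently active alarm set (re-run on every change)."""
    ranks = {}
    for alarm in active:
        if alarm in BLOCKING:
            ranks[alarm] = 0                       # blocking condition
        elif any(alarm in DIRECT_PRECURSORS.get(other, set()) |
                 LEVEL_PRECURSORS.get(other, set())
                 for other in active):
            ranks[alarm] = 1                       # explained by a precursor
        else:
            ranks[alarm] = 2                       # most important: unexplained
    return ranks

ranks = prioritize({"PUMP_TRIP", "LOW_FLOW", "STARTUP_EXPECTED"})
print(ranks["LOW_FLOW"], ranks["PUMP_TRIP"])   # 1 2
```

Because `prioritize` is re-run whenever an alarm activates or clears, an alarm's importance is continuously updated as the scenario evolves, which is the core of the patented filtering behavior. The required-action relationship would add a timer per rule and is omitted for brevity.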
Research and development of an electrochemical biocide reactor
NASA Technical Reports Server (NTRS)
See, G. G.; Bodo, C. A.; Glennon, J. P.
1975-01-01
An alternate disinfecting process to chemical agents, heat, or radiation in aqueous media has been studied. The process is called an electrochemical biocide and employs cyclic, low-level voltages at chemically inert electrodes to pass alternating current through water and, in the process, to destroy microorganisms. The paper describes experimental hardware, methodology, and results with a tracer microorganism (Escherichia coli). The results presented show the effects on microorganism kill of operating parameters, including current density (15 to 55 mA/sq cm (14 to 51 ASF)), waveform of the applied electrical signal (square, triangular, sine), frequency of the applied electrical signal (0.5 to 1.5 Hz), process water flow rate (100 to 600 cc/min (1.6 to 9.5 gph)), and reactor residence time (0 to 4 min). Comparisons are made between the disinfecting property of the electrochemical biocide and chlorine, bromine, and iodine.
Dual sensitivity mode system for monitoring processes and sensors
Wilks, Alan D.; Wegerich, Stephan W.; Gross, Kenneth C.
2000-01-01
A method and system for analyzing a source of data. The system and method involves initially training a system using a selected data signal, calculating at least two levels of sensitivity using a pattern recognition methodology, activating a first mode of alarm sensitivity to monitor the data source, activating a second mode of alarm sensitivity to monitor the data source and generating a first alarm signal upon the first mode of sensitivity detecting an alarm condition and a second alarm signal upon the second mode of sensitivity detecting an associated alarm condition. The first alarm condition and second alarm condition can be acted upon by an operator and/or analyzed by a specialist or computer program.
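The dual-sensitivity idea can be sketched as two thresholds run in parallel over the residual between a signal and its trained baseline. Sigma-multiple thresholds here stand in for the patent's pattern recognition methodology, which the abstract does not specify; all signals and levels are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
normal = rng.normal(0.0, 1.0, size=300)                 # training segment
faulty = rng.normal(0.0, 1.0, size=200) + np.linspace(0.0, 6.0, 200)
signal = np.concatenate([normal, faulty])               # slow drift fault

sigma = np.std(signal[:300])                            # "training" the model
sensitive_alarm = np.flatnonzero(np.abs(signal) > 3 * sigma)  # first mode
robust_alarm = np.flatnonzero(np.abs(signal) > 5 * sigma)     # second mode

print(sensitive_alarm[0] < robust_alarm[0])   # sensitive mode fires earlier
```

Running both modes at once gives an operator an early, possibly spurious warning from the sensitive mode, later confirmed (or not) by the robust mode, which is the operational value of the two-level design.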
A robust color signal processing with wide dynamic range WRGB CMOS image sensor
NASA Astrophysics Data System (ADS)
Kawada, Shun; Kuroda, Rihito; Sugawa, Shigetoshi
2011-01-01
We have developed a robust color reproduction methodology based on a simple calculation with a new color matrix, using the formerly developed wide dynamic range WRGB lateral overflow integration capacitor (LOFIC) CMOS image sensor. The image sensor was fabricated in a 0.18 μm CMOS technology and has a 45 degree oblique pixel array, a 4.2 μm effective pixel pitch and W pixels. A W pixel was formed by replacing one of the two G pixels in the Bayer RGB color filter. The W pixel has high sensitivity throughout the visible waveband. An emerald green and yellow (EGY) signal is generated from the difference between the W signal and the sum of the RGB signals. This EGY signal mainly comprises emerald green and yellow light. These colors are difficult to reproduce accurately with the conventional simple linear matrix because their wavelengths lie in the valleys of the spectral sensitivity characteristics of the RGB pixels. A new linear matrix based on the EGY-RGB signal was developed. Using this simple matrix, highly accurate color processing with a large margin against sensitivity fluctuations and noise has been achieved.
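The EGY signal and the extended linear matrix can be sketched per pixel as follows. The signal values and the 3x4 matrix coefficients below are hypothetical; real coefficients would come from calibration against a color chart.

```python
import numpy as np

# Per-pixel signals from a WRGB sensor (illustrative values, 0..1 scale)
R, G, B, W = 0.20, 0.35, 0.10, 0.80

# EGY signal: difference between the wide-band W pixel and the RGB sum,
# capturing light in the spectral "valleys" between the R, G, B responses
EGY = W - (R + G + B)

# Hypothetical 3x4 linear matrix mapping (R, G, B, EGY) to corrected RGB
M = np.array([[ 1.6, -0.4, -0.1, -0.1],
              [-0.3,  1.5, -0.2,  0.0],
              [ 0.0, -0.4,  1.5, -0.1]])
rgb_corrected = M @ np.array([R, G, B, EGY])
print(rgb_corrected.round(3))
```

The point of the fourth input column is that emerald-green and yellow light, invisible to a plain 3x3 RGB matrix, contributes through EGY and can therefore be reproduced without over-amplifying the noisy R, G, B channels.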
Anticipatory Reward Processing in Addicted Populations: A Focus on the Monetary Incentive Delay Task
Balodis, Iris M.; Potenza, Marc N.
2014-01-01
Advances in brain imaging techniques have allowed neurobiological research to temporally analyze signals coding for the anticipation of rewards. In addicted populations, both hypo- and hyper-responsiveness of brain regions (e.g., ventral striatum) implicated in drug effects and reward system processing have been reported during anticipation of generalized reward. Here, we discuss the current state of knowledge of reward processing in addictive disorders from a widely used and validated task: the Monetary Incentive Delay Task (MIDT). The current paper constrains review to those studies applying the MIDT in addicted and at-risk adult populations, with a focus on anticipatory processing and striatal regions activated during task performance, as well as the relationship of these regions with individual difference (e.g., impulsivity) and treatment outcome variables. We further review drug influences in challenge studies as a means to examine acute influences on reward processing in abstinent, recreationally using and addicted populations. Here, we discuss that generalized reward processing in addicted and at-risk populations is often characterized by divergent anticipatory signaling in the ventral striatum. Although methodological/task variations may underlie some discrepant findings, anticipatory signaling in the ventral striatum may also be influenced by smoking status, drug metabolites and treatment status in addicted populations. Divergent results across abstinent, recreationally using and addicted populations demonstrate complexities in interpreting findings. Future studies will benefit from focusing on characterizing how impulsivity and other addiction-related features relate to anticipatory striatal signaling over time. Additionally, identifying how anticipatory signals recover/adjust following protracted abstinence will be important in understanding recovery processes. PMID:25481621
Santos, Sara; Almeida, Inês; Oliveiros, Bárbara; Castelo-Branco, Miguel
2016-01-01
Faces play a key role in signaling social cues such as signals of trustworthiness. Although several studies identify the amygdala as a core brain region in social cognition, quantitative approaches evaluating its role are scarce. This review aimed to assess the role of the amygdala in the processing of facial trustworthiness, by analyzing its amplitude BOLD response polarity to untrustworthy versus trustworthy facial signals under fMRI tasks through a Meta-analysis of effect sizes (MA). Activation Likelihood Estimation (ALE) analyses were also conducted. Articles were retrieved from MEDLINE, ScienceDirect and Web-of-Science in January 2016. Following the PRISMA statement guidelines, a systematic review of original research articles in English language using the search string "(face OR facial) AND (trustworthiness OR trustworthy OR untrustworthy OR trustee) AND fMRI" was conducted. The MA concerned amygdala responses to facial trustworthiness for the contrast Untrustworthy vs. trustworthy faces, and included whole-brain and ROI studies. To prevent potential bias, results were considered even when at the single study level they did not survive correction for multiple comparisons or provided non-significant results. ALE considered whole-brain studies, using the same methodology to prevent bias. A summary of the methodological options (design and analysis) described in the articles was finally used to get further insight into the characteristics of the studies and to perform a subgroup analysis. Data were extracted by two authors and checked independently. Twenty fMRI studies were considered for systematic review. An MA of effect sizes with 11 articles (12 studies) showed high heterogeneity between studies [Q(11) = 265.68, p < .0001; I2 = 95.86%, 94.20% to 97.05%, with 95% confidence interval, CI]. Random effects analysis [RE(183) = 0.851, .422 to .969, 95% CI] supported the evidence that the (right) amygdala responds preferentially to untrustworthy faces. 
Moreover, two ALE analyses performed with 6 articles (7 studies) identified the amygdala, insula and medial dorsal nuclei of thalamus as structures with negative correlation with trustworthiness. Six articles/studies showed that posterior cingulate and medial frontal gyrus present positive correlations with increasing facial trustworthiness levels. Significant effects considering subgroup analysis based on methodological criteria were found for experiments using spatial smoothing, categorization of trustworthiness in 2 or 3 categories and paradigms which involve both explicit and implicit tasks. Significant heterogeneity between studies was found in MA, which might have arisen from inclusion of studies with smaller sample sizes and differences in methodological options. Studies using ROI analysis / small volume correction methods were more often devoted specifically to the amygdala region, with some results reporting uncorrected p-values based on mainly clinical a priori evidence of amygdala involvement in these processes. Nevertheless, we did not find significant evidence for publication bias. Our results support the role of amygdala in facial trustworthiness judgment, emphasizing its predominant role during processing of negative social signals in (untrustworthy) faces. This systematic review suggests that little consistency exists among studies' methodology, and that larger sample sizes should be preferred.
Ambient Noise Correlation Amplitudes and Local Site Response
NASA Astrophysics Data System (ADS)
Bowden, D. C.; Tsai, V. C.; Lin, F. C.
2014-12-01
We investigate amplitudes from ambient noise cross correlations in a spatially dense array. Our study of wave propagation effects and ambient noise is focused on the Long Beach Array, with more than 5000 single component geophones in an area of about 100 square kilometers, providing high resolution imaging of shallow crustal features. The method allows for observations of ground properties like site response and attenuation, which can well supplement traditional velocity models and simulations for seismic hazard. Traditional ambient noise cross correlations have proven to be an effective means of measuring velocity information about surface waves, but the amplitudes of such signals in traditional processing are often distorted. We discuss a method of signal processing which preserves relative amplitudes of signals within an array, and the subsequent processing to track wave motion across the array. Previous work showed promising correlation to known local structure, but did not represent a thorough application of tomographic methods. Now we extend the methodology to more robustly consider wavefront focusing and defocusing, interference with higher modes, and discuss the differing effects of local site response, attenuation, and scattering. Application of Helmholtz tomography and determination of local site amplification has previously been demonstrated using earthquake data on the continental scale with USArray, but the exploitation of the ambient noise field is required both for the higher frequencies needed by seismic hazard studies and for the short deployment time of a Long Beach scale array. We outline both the successes and shortcomings of the methodology, and show how it can be extended for use on future arrays.
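The amplitude-preserving correlation step can be sketched by cross-correlating two synthetic stations that record a shared noise source, deliberately omitting the one-bit and spectral-whitening normalizations of traditional processing, since those distort relative amplitudes. The geometry, delay and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
fs, n = 100.0, 8192
src = rng.normal(size=n)                 # shared ambient noise wavefield
delay = 25                               # propagation delay in samples
a = src + 0.5 * rng.normal(size=n)                  # station A record
b = np.roll(src, delay) + 0.5 * rng.normal(size=n)  # station B, delayed

# cross-correlation via FFT, with no amplitude normalization applied,
# so relative amplitudes between station pairs are preserved
A, B = np.fft.fft(a), np.fft.fft(b)
xcorr = np.real(np.fft.ifft(np.conj(A) * B)) / n
lag = int(np.argmax(xcorr))
print(lag)   # 25
```

The peak lag gives the inter-station travel time as usual; keeping the raw correlation amplitude, rather than its sign alone, is what lets the subsequent Helmholtz-style analysis separate site amplification from focusing and attenuation.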
A Soft Sensor for Bioprocess Control Based on Sequential Filtering of Metabolic Heat Signals
Paulsson, Dan; Gustavsson, Robert; Mandenius, Carl-Fredrik
2014-01-01
Soft sensors are the combination of robust on-line sensor signals with mathematical models for deriving additional process information. Here, we apply this principle to a microbial recombinant protein production process in a bioreactor by exploiting bio-calorimetric methodology. Temperature sensor signals from the cooling system of the bioreactor were used to estimate the metabolic heat of the microbial culture, and from that the specific growth rate and active biomass concentration were derived. By applying sequential digital signal filtering, the soft sensor was made more robust for industrial practice with cultures generating low metabolic heat in environments with high noise levels. The estimated specific growth rate signal obtained from the three-stage sequential filter allowed controlled feeding of substrate during the fed-batch phase of the production process. The biomass and growth rate estimates from the soft sensor were also compared with an alternative sensor, an on-line capacitance probe, for the same variables. The comparison showed similar or better sensitivity and lower variability for the metabolic heat soft sensor, suggesting that using the permanent temperature sensors of a bioreactor is a realistic and inexpensive alternative for monitoring and control. However, both alternatives are easy to implement in a soft sensor, alone or in parallel. PMID:25264951
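The sequential-filtering idea can be sketched in a few lines of Python. This is not the authors' three-stage filter; it is a minimal illustration, under the assumption of a hypothetical noisy exponential metabolic-heat signal, of how cascaded smoothing stages yield a usable specific-growth-rate estimate:

```python
import math
import random

def moving_average(x, w):
    """Stage 1: simple moving average with window w (shorter at the edges)."""
    return [sum(x[max(0, i - w + 1):i + 1]) / (i - max(0, i - w + 1) + 1)
            for i in range(len(x))]

def exp_smooth(x, alpha):
    """Stage 2: first-order exponential smoothing."""
    y = [x[0]]
    for v in x[1:]:
        y.append(alpha * v + (1 - alpha) * y[-1])
    return y

def growth_rate(q, dt):
    """Stage 3: specific growth rate as the finite-difference d ln(Q)/dt."""
    return [(math.log(q[i + 1]) - math.log(q[i])) / dt for i in range(len(q) - 1)]

random.seed(0)
dt, mu_true = 0.1, 0.3                      # hours and 1/h, illustrative values
t = [i * dt for i in range(200)]
heat = [math.exp(mu_true * ti) * (1 + 0.05 * random.gauss(0, 1)) for ti in t]

smoothed = exp_smooth(moving_average(heat, 10), 0.2)
mu_est = growth_rate(smoothed, dt)
mu_bar = sum(mu_est[50:]) / len(mu_est[50:])   # settles near mu_true
```

In a real loop the filtered growth-rate estimate would drive the substrate feed controller; the window length and smoothing factor here are tuning choices, not values from the paper.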
Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.
2013-01-01
A methodology that combines information from several nonstationary biological signals is presented. The methodology is based on time-frequency coherence, which quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distributions, is used to combine information from several nonstationary biomedical signals. To evaluate this methodology, the respiratory rate is estimated from the photoplethysmographic (PPG) signal. Respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal, which suggests that combining information from these sources will improve the accuracy of the respiratory rate estimate. Another aim of this paper is to implement an algorithm that provides a robust estimation. Therefore, the respiratory rate was estimated only in those intervals where the features extracted from the PPG signal are linearly coupled. In 38 spontaneously breathing subjects, 7 of whom were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with median errors of {0.00; 0.98} mHz ({0.00; 0.31}%) and interquartile range errors of {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was considerably lower than that obtained without combining the different PPG features related to respiration. PMID:24363777
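The gating idea, fusing the PPG-derived rate estimates only where they agree, can be illustrated with a toy rule. The simple spread test below is a stand-in for the paper's time-frequency coherence criterion, and the three rate series (from pulse interval, amplitude, and width) are hypothetical:

```python
import statistics

def fuse_rates(est_lists, tol=0.02):
    """Median-fuse several rate estimates; reject instants where they
    disagree by more than tol (the features are not coupled there)."""
    fused = []
    for vals in zip(*est_lists):
        fused.append(statistics.median(vals) if max(vals) - min(vals) <= tol
                     else None)
    return fused

# hypothetical respiratory-rate estimates in Hz at four instants
r_interval = [0.25, 0.26, 0.25, 0.40]   # pulse-interval feature (last one off)
r_amplitude = [0.25, 0.25, 0.26, 0.25]  # pulse-amplitude feature
r_width = [0.26, 0.25, 0.25, 0.26]      # pulse-width feature
fused = fuse_rates([r_interval, r_amplitude, r_width])
```

The last instant is rejected rather than averaged, mirroring the paper's choice to estimate only in coupled intervals.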
Anomaly Detection of Electromyographic Signals.
Ijaz, Ahsan; Choi, Jongeun
2018-04-01
In this paper, we provide a robust framework to detect anomalous electromyographic (EMG) signals and identify contamination types. As a first step for feature selection, an optimally selected Lawton wavelet transform is applied. Robust principal component analysis (rPCA) is then performed on the wavelet coefficients to obtain features in a lower dimension. The rPCA-based features are used to construct a self-organizing map (SOM). Finally, hierarchical clustering is applied to the SOM; this separates out anomalous signals, which reside in the smaller clusters, and breaks them into logical units for contamination identification. The proposed methodology is tested using synthetic and real-world EMG signals. The synthetic EMG signals are generated using a heteroscedastic process mimicking desired experimental setups, and a subset of these signals is injected with anomalies. These results are followed by real EMG signals injected with synthetic anomalies. Finally, a heterogeneous real-world data set with known quality issues is used under an unsupervised setting. The framework provides recall of 90% (±3.3) and precision of 99% (±0.4).
NASA Astrophysics Data System (ADS)
Sokolov, M. A.
This handbook treats the design and analysis of pulsed radar receivers, with emphasis on elements (especially IC elements) that implement optimal and suboptimal algorithms. The design methodology is developed from the viewpoint of statistical communications theory. Particular consideration is given to the synthesis of single-channel and multichannel detectors, the design of analog and digital signal-processing devices, and the analysis of IF amplifiers.
Erdeniz, Burak; Rohe, Tim; Done, John; Seidler, Rachael D
2013-01-01
Conventional neuroimaging techniques provide information about condition-related changes of the BOLD (blood-oxygen-level-dependent) signal, indicating only where and when the underlying cognitive processes occur. Recently, with the help of a new approach called "model-based" functional magnetic resonance imaging (fMRI), researchers are able to visualize changes in the internal variables of a time-varying learning process, such as the reward prediction error or the predicted reward value of a conditional stimulus. However, despite being extremely beneficial to the imaging community in understanding the neural correlates of decision variables, a model-based approach to brain imaging data is also methodologically challenging due to the multicollinearity problem in statistical analysis. There are multiple sources of multicollinearity in functional neuroimaging, including investigations of closely related variables and/or experimental designs that do not account for this. The source of multicollinearity discussed in this paper arises from correlation between different subjective variables that are calculated very close in time. Here, we review methodological approaches to analyzing such data by discussing the special case of separating the reward prediction error signal from reward outcomes.
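The collinearity problem can be made concrete: two model regressors computed close in time (say, predicted value and prediction error) are often strongly correlated, and one common remedy is to orthogonalize one with respect to the other. A minimal sketch with hypothetical, synthetically correlated regressors:

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def orthogonalize(y, x):
    """Residual of y after regressing out x (Gram-Schmidt on centred data)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    return [(b - my) - beta * (a - mx) for a, b in zip(x, y)]

random.seed(0)
value = [random.gauss(0, 1) for _ in range(500)]        # predicted reward value
error = [0.8 * v + random.gauss(0, 0.3) for v in value]  # collinear prediction error
r_before = pearson(value, error)                   # strongly correlated regressors
r_after = pearson(value, orthogonalize(error, value))   # ~0 by construction
```

Note the well-known caveat: orthogonalization assigns all shared variance to the regressor that is kept intact, which is itself a modelling choice that must be justified.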
2016-01-01
A novel method of extracting heart rate and oxygen saturation from a video-based biosignal is described. The method comprises a novel modular continuous wavelet transform approach which includes: performing the transform, undertaking running wavelet archetyping to enhance the pulse information, extraction of the pulse ridge time–frequency information [and thus a heart rate (HRvid) signal], creation of a wavelet ratio surface, projection of the pulse ridge onto the ratio surface to determine the ratio of ratios from which a saturation trending signal is derived, and calibrating this signal to provide an absolute saturation signal (SvidO2). The method is illustrated through its application to a video photoplethysmogram acquired during a porcine model of acute desaturation. The modular continuous wavelet transform-based approach is advocated by the author as a powerful methodology to deal with noisy, non-stationary biosignals in general. PMID:27382479
Filter design for cancellation of baseline-fluctuation in needle EMG recordings.
Rodríguez-Carreño, I; Malanda-Trigueros, A; Gila-Useros, L; Navallas-Irujo, J; Rodríguez-Falces, J
2006-01-01
Appropriate cancellation of the baseline fluctuation (BLF) is an important issue when recording EMG signals, as the BLF may degrade signal quality and distort qualitative and quantitative analysis. We present a novel filter-design approach for automatic cancellation of the BLF based on several signal processing techniques used sequentially. The methodology estimates the spectral content of the BLF and then uses this estimate to design a high-pass FIR filter that cancels the BLF present in the signal. Two merit figures are devised for measuring the degree of BLF present in an EMG record. These figures are used to compare our method with the conventional approach, which naively treats the baseline as a constant potential shift without any fluctuation. Application of the technique to real and simulated EMG signals shows the superior performance of our approach in terms of both visual inspection and the merit figures.
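A toy version of this idea, without the paper's data-driven estimation of the BLF spectrum, fixes the cutoff by hand and builds the high-pass FIR by spectral inversion of a moving-average low-pass. The signal, sampling rate and tap count below are illustrative assumptions:

```python
import math

def highpass_fir(n):
    """High-pass FIR by spectral inversion of an n-tap moving-average low-pass."""
    h = [-1.0 / n] * n
    h[n // 2] += 1.0            # delta at the centre tap minus the low-pass
    return h

def fir_filter(x, h):
    """Direct-form FIR filtering, zero-padded at the start."""
    return [sum(h[k] * x[i - k] for k in range(len(h)) if 0 <= i - k)
            for i in range(len(x))]

fs = 1000.0                                                 # Hz, illustrative
t = [i / fs for i in range(1000)]
drift = [0.5 * math.sin(2 * math.pi * 1.0 * ti) for ti in t]   # 1 Hz baseline
emg = [0.2 * math.sin(2 * math.pi * 120.0 * ti) for ti in t]   # 120 Hz activity
x = [d + e for d, e in zip(drift, emg)]

y = fir_filter(x, highpass_fir(51))
in_power = sum(v * v for v in x[100:900]) / 800
out_power = sum(v * v for v in y[100:900]) / 800   # ~EMG power, drift removed
```

With 51 taps at 1 kHz the moving-average cutoff sits near 20 Hz, so the 1 Hz drift is attenuated by roughly two orders of magnitude while the 120 Hz component passes almost unchanged.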
NASA Astrophysics Data System (ADS)
Nath, Nayani Kishore
2017-08-01
Throat back-up liners are used to protect the nozzle structural members from the severe thermal environment in solid rocket nozzles. The liners are made from E-glass phenolic prepregs by a tape winding process. The objective of this work is to demonstrate the optimization of tape winding process parameters to achieve better insulative resistance using Taguchi's robust design methodology. Four control factors (machine speed, roller pressure, tape tension, and tape temperature) were investigated for the tape winding process. The work also studies the cogency and acceptability of Taguchi's methodology in the manufacture of throat back-up liners. The quality characteristic identified was back-wall temperature. Experiments were carried out using an L9 (3^4) orthogonal array with three levels of the four control factors. The test results were analyzed using the smaller-the-better criterion for the signal-to-noise ratio in order to optimize the process. The experimental results were confirmed and successfully used to achieve the minimum back-wall temperature of the throat back-up liners. The enhancement in performance of the liners was observed by carrying out oxy-acetylene tests. The influence of back-wall temperature on the performance of the liners was verified by a ground firing test.
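The smaller-the-better criterion used here has a standard closed form, S/N = -10 * log10((1/n) * sum of y squared), which can be checked with a short sketch; the temperature values are hypothetical:

```python
import math

def sn_smaller_the_better(ys):
    """Taguchi smaller-the-better S/N ratio: -10 * log10(mean of y squared)."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

# hypothetical back-wall temperatures (deg C) from three repeats of one L9 run
sn = sn_smaller_the_better([82.0, 85.0, 84.0])
```

The factor-level combination with the highest (least negative) S/N is preferred, since a lower back-wall temperature means better insulation.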
NASA Astrophysics Data System (ADS)
Alperovich, Leonid; Averbuch, Amir; Eppelbaum, Lev; Zheludev, Valery
2013-04-01
Karst areas occupy about 14% of the world's land. Karst terranes of different origins create difficult conditions for construction, industrial activity and tourism, and are a source of heightened danger to the environment. Mapping of karst (sinkhole) hazards will clearly be one of the most significant problems of engineering geophysics in the 21st century. Given the complexity of geological media, some unfavourable environments and the known ambiguity of geophysical data analysis, examination by a single geophysical method might be insufficient. Wavelet methodology as a whole has a significant impact on cardinal problems of geophysical signal processing, such as denoising, signal enhancement, distinguishing signals with closely related characteristics, and integrated analysis of different geophysical fields (satellite, airborne, earth-surface or underground observations). We developed a three-phase approach to the integrated geophysical localization of subsurface karsts (the same approach could be used for subsequent monitoring of karst dynamics). The first phase consists of modeling to compute the various geophysical effects characterizing karst phenomena. The second phase develops signal processing approaches for analyzing profile or areal geophysical observations. Finally, the third phase integrates these methods to create a new method for the combined interpretation of different geophysical data. Our combined geophysical analysis builds on modern developments in wavelet techniques for signal and image processing. The development of this integrated methodology will enable recognition of karst terranes even at a small "useful signal to noise" ratio in complex geological environments. For analyzing the geophysical data, we used a technique based on an algorithm that characterizes a geophysical image by a limited number of parameters.
This set of parameters serves as a signature of the image and is used to discriminate images containing a karst cavity (K) from images not containing karst (N). The algorithm consists of the following main phases: (a) collection of the database, (b) characterization of geophysical images, and (c) dimensionality reduction. Each image is then characterized by the histogram of its coherency directions. As a result of these steps we obtain two sets, K and N, of signature vectors for images from sections containing a karst cavity and from non-karst subsurface, respectively.
The Influence of Measurement Methodology on the Accuracy of Electrical Waveform Distortion Analysis
NASA Astrophysics Data System (ADS)
Bartman, Jacek; Kwiatkowski, Bogdan
2018-04-01
The present paper reviews documents that specify measurement methods for voltage waveform distortion. It also presents measurement stages for waveform components that are not covered in the classical fundamentals of electrotechnics and signal theory, including the process of forming groups and subgroups of harmonics and interharmonics. Moreover, the paper discusses selected distortion factors of periodic waveforms and presents analyses comparing the values of these distortion indices. The measurements were carried out in cycle-by-cycle mode and the measurement methodology complies with the IEC 61000-4-7 standard. The studies showed significant discrepancies between the values of the analyzed parameters.
NASA Astrophysics Data System (ADS)
Li, Tanghua; Wu, Patrick; Wang, Hansheng; Jia, Lulu; Steffen, Holger
2018-03-01
The Gravity Recovery and Climate Experiment (GRACE) satellite mission measures the combined gravity signal of several overlapping processes. A common approach to separating the hydrological signal in previously ice-covered regions is to apply numerical models to simulate the glacial isostatic adjustment (GIA) signals related to the vanished ice load and then remove them from the observed GRACE data. However, the results of this method are strongly affected by the uncertainties of the ice and viscosity models of GIA. To avoid this, Wang et al. (Nat Geosci 6(1):38-42, 2013. https://doi.org/10.1038/NGEO1652; Geodesy Geodyn 6(4):267-273, 2015) followed the theory of Wahr et al. (Geophys Res Lett 22(8):977-980, 1995) and isolated water storage changes from GRACE in North America and Scandinavia with the help of Global Positioning System (GPS) data. Lambert et al. (Postglacial rebound and total water storage variations in the Nelson River drainage basin: a gravity GPS study, Geological Survey of Canada Open File, 7317, 2013a; Geophys Res Lett 40(23):6118-6122, https://doi.org/10.1002/2013GL057973, 2013b) did a similar study for the Nelson River basin in North America but applied GPS and absolute gravity measurements. However, the results of the two studies in the Nelson River basin differ considerably, especially in the magnitude of the hydrology signal, which differs by about 35%. Through detailed comparison and analysis of the input data, data post-processing techniques, methods and results of these two works, we find that the different GRACE data post-processing techniques may account for this difference. Also, the GRACE input has a larger effect on the hydrology signal amplitude than the GPS input in the Nelson River basin, due to the relatively small uplift signal in this region.
Meanwhile, the influence of the value of α, which represents the ratio between the GIA-induced uplift rate and the GIA-induced gravity rate of change (before the correction for surface uplift), is more obvious in areas with high vertical uplift but smaller in the Nelson River basin. From Gaussian filtering of simulated data, we found that the magnitude of the peak gravity signal can decrease significantly when a filter with a large averaging radius is applied, but the effect in the Nelson River basin is rather small. More work is needed to understand the effect of amplitude restoration in the post-processing of the GRACE g-dot signal. However, it is encouraging that the methodologies of Wang et al. (2013, 2015) and Lambert et al. (2013a, b) produce very similar results when their inputs are the same. This means that their methodologies can be applied to study hydrology in other areas affected by GIA, provided that the effects of post-processing of their inputs are under control.
NASA Astrophysics Data System (ADS)
Rappleye, Devin Spencer
The development of electroanalytical techniques for multianalyte molten salt mixtures, such as those found in used-nuclear-fuel electrorefiners, would enable in situ, real-time concentration measurements. Such measurements are beneficial for process monitoring, optimization and control, as well as for international safeguards and nuclear material accountancy. Electroanalytical work in molten salts has been limited to single-analyte mixtures, with a few exceptions. This work builds upon the knowledge of molten salt electrochemistry by performing electrochemical measurements on a molten eutectic LiCl-KCl salt mixture containing two analytes, developing techniques for quantitatively analyzing the measured signals even in the presence of an additional signal from another analyte, correlating signals to concentration, and identifying improvements in experimental and analytical methodologies. (Abstract shortened by ProQuest.)
Rapid development of xylanase assay conditions using Taguchi methodology.
Prasad Uday, Uma Shankar; Bandyopadhyay, Tarun Kanti; Bhunia, Biswanath
2016-11-01
The present investigation is mainly concerned with the rapid development of extracellular xylanase assay conditions using Taguchi methodology. The extracellular xylanase was produced from Aspergillus niger (KP874102.1), a new strain isolated from a soil sample of the Baramura forest, Tripura West, India. Four physical parameters, including temperature, pH, buffer concentration and incubation time, were considered as key factors for xylanase activity and were optimized using Taguchi robust design methodology for enhanced xylanase activity. The main effects, interaction effects and optimal levels of the process factors were determined using the signal-to-noise (S/N) ratio. The Taguchi method recommends the use of the S/N ratio to measure quality characteristics. Based on analysis of the S/N ratio, optimal levels of the process factors were determined. Analysis of variance (ANOVA) was performed to evaluate the statistically significant process factors. ANOVA results showed that temperature contributed the maximum impact (62.58%) on xylanase activity, followed by pH (22.69%), buffer concentration (9.55%) and incubation time (5.16%). Predicted results showed that enhanced xylanase activity (81.47%) can be achieved with pH 2, temperature 50°C, buffer concentration 50 mM and incubation time 10 min.
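The percent-contribution figures quoted come from a routine Taguchi ANOVA step: each factor's sum of squares, computed from its level means, divided by the total. A sketch with hypothetical S/N level means (dB) for a balanced L9 design:

```python
def factor_ss(level_means, grand_mean, runs_per_level):
    """Factor sum of squares from level means (balanced orthogonal array)."""
    return runs_per_level * sum((m - grand_mean) ** 2 for m in level_means)

# hypothetical S/N level means; each factor has 3 levels x 3 runs in an L9
levels = {
    "temperature": [-40.10, -38.20, -39.00],
    "pH":          [-39.40, -38.90, -39.00],
    "buffer":      [-39.30, -39.05, -38.95],
    "time":        [-39.15, -39.10, -39.05],
}
grand = sum(levels["temperature"]) / 3      # same for every factor (balanced)
ss = {k: factor_ss(v, grand, 3) for k, v in levels.items()}
total = sum(ss.values())
contrib = {k: 100.0 * v / total for k, v in ss.items()}
```

With these made-up means, temperature dominates the contribution ranking, mirroring the ordering reported in the abstract.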
Porcar-Castell, Albert; Tyystjärvi, Esa; Atherton, Jon; van der Tol, Christiaan; Flexas, Jaume; Pfündel, Erhard E; Moreno, Jose; Frankenberg, Christian; Berry, Joseph A
2014-08-01
Chlorophyll a fluorescence (ChlF) has been used for decades to study the organization, functioning, and physiology of photosynthesis at the leaf and subcellular levels. ChlF is now measurable from remote sensing platforms. This provides a new optical means to track photosynthesis and gross primary productivity of terrestrial ecosystems. Importantly, the spatiotemporal and methodological context of the new applications is dramatically different compared with most of the available ChlF literature, which raises a number of important considerations. Although we have a good mechanistic understanding of the processes that control the ChlF signal over the short term, the seasonal link between ChlF and photosynthesis remains obscure. Additionally, while the current understanding of in vivo ChlF is based on pulse amplitude-modulated (PAM) measurements, remote sensing applications are based on the measurement of the passive solar-induced chlorophyll fluorescence (SIF), which entails important differences and new challenges that remain to be solved. In this review we introduce and revisit the physical, physiological, and methodological factors that control the leaf-level ChlF signal in the context of the new remote sensing applications. Specifically, we present the basis of photosynthetic acclimation and its optical signals, we introduce the physical and physiological basis of ChlF from the molecular to the leaf level and beyond, and we introduce and compare PAM and SIF methodology. Finally, we evaluate and identify the challenges that still remain to be answered in order to consolidate our mechanistic understanding of the remotely sensed SIF signal. © The Author 2014. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Balodis, Iris M; Potenza, Marc N
2015-03-01
Advances in brain imaging techniques have allowed neurobiological research to temporally analyze signals coding for the anticipation of reward. In addicted populations, both hyporesponsiveness and hyperresponsiveness of brain regions (e.g., ventral striatum) implicated in drug effects and reward system processing have been reported during anticipation of generalized reward. We discuss the current state of knowledge of reward processing in addictive disorders from a widely used and validated task: the monetary incentive delay task. Only studies applying the monetary incentive delay task in addicted and at-risk adult populations are reviewed, with a focus on anticipatory processing and striatal regions activated during task performance as well as the relationship of these regions with individual difference (e.g., impulsivity) and treatment outcome variables. We further review drug influences in challenge studies as a means to examine acute influences on reward processing in abstinent, recreationally using, and addicted populations. Generalized reward processing in addicted and at-risk populations is often characterized by divergent anticipatory signaling in the ventral striatum. Although methodologic and task variations may underlie some discrepant findings, anticipatory signaling in the ventral striatum may also be influenced by smoking status, drug metabolites, and treatment status in addicted populations. Divergent results across abstinent, recreationally using, and addicted populations demonstrate complexities in interpreting findings. Future studies would benefit from focusing on characterizing how impulsivity and other addiction-related features relate to anticipatory striatal signaling over time. Additionally, identifying how anticipatory signals recover or adjust after protracted abstinence will be important in understanding recovery processes. Copyright © 2015 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.
2017-02-01
Nowadays, the vibration analysis of rotating machine signals is a well-established methodology, rooted in the powerful tools offered, in particular, by the theory of cyclostationary (CS) processes. Among them, the squared envelope spectrum (SES) is probably the most popular for detecting random CS components, which are typical symptoms of, for instance, rolling element bearing faults. Recent research has shifted towards extending existing CS tools - originally devised for constant speed conditions - to variable speed conditions. Many of these works combine the SES with computed order tracking after some preprocessing steps. The principal object of this paper is to organize this dispersed research into a structured, comprehensive framework. Three original contributions are provided. First, a model of rotating machine signals is introduced which sheds light on the various components to be expected in the SES. Second, a critical comparison is made of three sophisticated methods - the improved synchronous average, cepstrum prewhitening, and the generalized synchronous average - used for suppressing the deterministic part. Also, a general envelope enhancement methodology which combines the latter two techniques with a time-domain filtering operation is revisited. All theoretical findings are experimentally validated on simulated and real-world vibration signals.
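The SES itself is easy to state: take the analytic signal, square its envelope, and look at the spectrum of that. The self-contained sketch below (a naive O(N²) DFT to stay dependency-free, with an amplitude-modulated tone standing in for a bearing signal) recovers the modulation frequency:

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def squared_envelope_spectrum(x):
    """SES: magnitude spectrum of the squared envelope of the analytic signal."""
    n = len(x)
    X = dft(x)
    gain = [1.0] + [2.0] * (n // 2 - 1) + [1.0] + [0.0] * (n // 2 - 1)
    analytic = idft([X[k] * gain[k] for k in range(n)])   # Hilbert via DFT
    env2 = [abs(v) ** 2 for v in analytic]
    return [abs(v) for v in dft(env2)]

N, f_mod, f_car = 128, 5, 30       # cycles per record, illustrative values
x = [(1 + 0.8 * math.cos(2 * math.pi * f_mod * n / N))
     * math.cos(2 * math.pi * f_car * n / N) for n in range(N)]
ses = squared_envelope_spectrum(x)
peak = max(range(1, N // 2), key=lambda k: ses[k])   # the modulation frequency
```

In the variable-speed setting discussed in the paper, this computation would be preceded by angular resampling (computed order tracking) so the peak appears at a fixed order rather than a smeared frequency.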
Hammerstein system representation of financial volatility processes
NASA Astrophysics Data System (ADS)
Capobianco, E.
2002-05-01
We show new modeling aspects of stock return volatility processes by first representing them through Hammerstein systems, and by then approximating the observed and transformed dynamics with wavelet-based atomic dictionaries. We thus propose a hybrid statistical methodology for volatility approximation and non-parametric estimation, and aim to use the information embedded in a bank of volatility sources obtained by decomposing the observed signal with multiresolution techniques. Scale-dependent information refers both to market activity inherent to different temporally aggregated trading horizons and to a variable degree of sparsity in representing the signal. A decomposition of the expansion coefficients into least dependent coordinates is then implemented through Independent Component Analysis. Based on the described steps, the features of volatility can be more effectively detected through global and greedy algorithms.
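The multiresolution bank of volatility sources can be sketched with the simplest wavelet, the Haar transform; the return series below is synthetic, and the orthonormal normalisation means the decomposition preserves the signal's energy across scales:

```python
import random

def haar_step(x):
    """One Haar DWT level: orthonormal approximation and detail coefficients."""
    s = 2 ** 0.5
    a = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return a, d

def haar_bank(x, levels):
    """Detail coefficients per scale plus the final approximation."""
    bank, a = [], list(x)
    for _ in range(levels):
        a, d = haar_step(a)
        bank.append(d)
    bank.append(a)
    return bank

random.seed(0)
returns = [random.gauss(0, 0.01) for _ in range(64)]   # synthetic daily returns
bank = haar_bank(returns, 3)   # scale-by-scale "volatility sources"
```

Each detail band corresponds to a different temporally aggregated trading horizon; in the paper's pipeline these bands would then be rotated into least dependent coordinates by ICA.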
DOT National Transportation Integrated Search
2013-11-01
The Highway Capacity Manual (HCM) has had a delay-based level of service methodology for signalized intersections since 1985. : The 2010 HCM has revised the method for calculating delay. This happened concurrent with such jurisdictions as NYC reviewi...
Sassi, Roberto; Cerutti, Sergio; Lombardi, Federico; Malik, Marek; Huikuri, Heikki V; Peng, Chung-Kang; Schmidt, Georg; Yamamoto, Yoshiharu
2015-09-01
Following the publication of the Task Force document on heart rate variability (HRV) in 1996, a number of articles have been published describing new HRV methodologies and their application in different physiological and clinical studies. This document presents a critical review of the new methods. Particular attention has been paid to methodologies that were not reported in the 1996 standardization document but have been more recently tested in sufficiently sized populations. The following methods were considered: long-range correlation and fractal analysis; short-term complexity; entropy and regularity; and nonlinear dynamical systems and chaotic behaviour. For each of these methods, technical aspects, clinical achievements, and suggestions for clinical application were reviewed. While the novel approaches have contributed to the technical understanding of the signal character of HRV, their success in developing new clinical tools, such as those for the identification of high-risk patients, has been rather limited. Available results obtained in selected patient populations by specialized laboratories are nevertheless of interest, but new prospective studies are needed. The investigation of new parameters, descriptive of the complex regulation mechanisms of heart rate, is to be encouraged because not all information in the HRV signal is captured by traditional methods. The new technologies could thus provide, after proper validation, additional physiological and clinical meaning. Multidisciplinary dialogue and specialized courses combining clinical cardiology and complex signal processing methods seem warranted for further advances in studies of cardiac oscillations and in the understanding of normal and abnormal cardiac control processes. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
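Of the method families listed, the entropy-and-regularity measures are the most compact to state. A sketch of sample entropy (SampEn) on toy series, not a clinical implementation (tolerance and series length are illustrative), shows a regular signal scoring lower than a random one:

```python
import math
import random

def sample_entropy(x, m, r):
    """SampEn(m, r): -ln(A/B), where B counts template pairs of length m
    within tolerance r (Chebyshev distance) and A those of length m + 1."""
    def pairs(mm):
        templ = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        return sum(1 for i in range(len(templ))
                   for j in range(i + 1, len(templ))
                   if max(abs(a - b) for a, b in zip(templ[i], templ[j])) <= r)
    return -math.log(pairs(m + 1) / pairs(m))

random.seed(0)
regular = [float(i % 2) for i in range(100)]            # perfectly periodic
irregular = [random.random() for _ in range(100)]       # uniform noise
se_reg = sample_entropy(regular, 2, 0.2)
se_irr = sample_entropy(irregular, 2, 0.2)
```

In HRV practice the tolerance r is usually set relative to the standard deviation of the RR series (e.g. 0.2 SD), a convention this toy example does not reproduce.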
NASA Astrophysics Data System (ADS)
Qarib, Hossein; Adeli, Hojjat
2015-12-01
In this paper the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative three-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as the multiple signal classification, matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on an output-only system identification technique. The third stage is optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and a genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method is successful in estimating the frequencies accurately, and it also estimates the damping exponents. The proposed adaptive filtration method does not include any frequency domain manipulation; consequently, the time domain signal is not affected by frequency domain and inverse transformations.
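A reduced version of the estimation problem, one damped mode and no noise stage, can be solved by a log-envelope line fit; this is a useful sanity check for the damping exponents the paper's hybrid method targets. The signal parameters below are illustrative:

```python
import math

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

fs = 1000.0                                   # Hz, illustrative
sigma_true, omega = 2.0, 2 * math.pi * 25.0   # damping exponent and frequency
y = [math.exp(-sigma_true * i / fs) * math.cos(omega * i / fs)
     for i in range(2000)]

# local maxima of |y| trace the decaying envelope exp(-sigma * t)
peaks = [(i / fs, abs(y[i])) for i in range(1, len(y) - 1)
         if abs(y[i]) > abs(y[i - 1]) and abs(y[i]) > abs(y[i + 1])]
slope, _ = linfit([t for t, _ in peaks], [math.log(v) for _, v in peaks])
sigma_est = -slope
```

This works because, for a single damped cosine, the local maxima of the rectified signal all sit at the same fixed fraction of the envelope, so the fitted log-slope equals the damping exponent.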
Improving EEG-Based Motor Imagery Classification for Real-Time Applications Using the QSA Method.
Batres-Mendoza, Patricia; Ibarra-Manzano, Mario A; Guerra-Hernandez, Erick I; Almanza-Ojeda, Dora L; Montoro-Sanjose, Carlos R; Romero-Troncoso, Rene J; Rostro-Gonzalez, Horacio
2017-01-01
We present an improvement to the quaternion-based signal analysis (QSA) technique to extract electroencephalography (EEG) signal features with a view to developing real-time applications, particularly in motor imagery (IM) cognitive processes. The proposed methodology (iQSA, improved QSA) extracts features such as the average, variance, homogeneity, and contrast of EEG signals related to motor imagery in a more efficient manner (i.e., by reducing the number of samples needed to classify the signal and improving the classification percentage) compared to the original QSA technique. Specifically, we can sample the signal in variable time periods (from 0.5 s to 3 s, in half-a-second intervals) to determine the relationship between the number of samples and their effectiveness in classifying signals. In addition, to strengthen the classification process a number of boosting-technique-based decision trees were implemented. The results show an 82.30% accuracy rate for 0.5 s samples and 73.16% for 3 s samples. This is a significant improvement compared to the original QSA technique that offered results from 33.31% to 40.82% without sampling window and from 33.44% to 41.07% with sampling window, respectively. We can thus conclude that iQSA is better suited to develop real-time applications.
NASA Astrophysics Data System (ADS)
Camacho-Navarro, Jhonatan; Ruiz, Magda; Villamizar, Rodolfo; Mujica, Luis; Moreno-Beltrán, Gustavo; Quiroga, Jabid
2017-05-01
Continuous monitoring for damage detection in structural assessment comprises the implementation of low-cost equipment and efficient algorithms. This work describes the stages involved in the design of a methodology with high feasibility for use in continuous damage assessment. Specifically, an algorithm based on a data-driven approach using principal component analysis, with the acquired signals pre-processed by means of cross-correlation functions, is discussed. A carbon steel pipe section and a laboratory tower were used as test structures in order to demonstrate the feasibility of the methodology to detect abrupt changes in the structural response when damage occurs. Two damage cases are studied: a crack and a leak, one for each structure respectively. Experimental results show that the methodology is promising for the continuous monitoring of real structures.
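The data-driven scoring step of such an approach can be sketched with a PCA model trained on baseline features and a Q-statistic (squared reconstruction error) as the damage index; the two-component retention and the toy feature vectors below are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

def pca_damage_index(baseline, test_signals):
    """Fit a PCA model to baseline feature vectors and score test vectors by
    reconstruction error outside the retained subspace (Q-statistic)."""
    mu = baseline.mean(axis=0)
    Xb = baseline - mu
    _, _, Vt = np.linalg.svd(Xb, full_matrices=False)
    P = Vt[:2].T                              # retain 2 principal directions (assumed)
    X = test_signals - mu
    residual = X - X @ P @ P.T                # part the PCA model cannot explain
    return np.sum(residual**2, axis=1)        # high Q suggests a structural change

rng = np.random.default_rng(1)
base = rng.normal(size=(50, 8))               # stand-in cross-correlation features
healthy = rng.normal(size=(5, 8))
damaged = healthy + 5.0                       # shifted response mimics damage
q_h = pca_damage_index(base, healthy)
q_d = pca_damage_index(base, damaged)
```

In practice the feature vectors would come from cross-correlation functions of the acquired signals, and an alarm threshold on Q would be set from the baseline distribution.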
DOT National Transportation Integrated Search
1995-01-01
Prepared ca. 1995. This paper illustrates the use of the simulation-optimization technique of response surface methodology (RSM) in traffic signal optimization of urban networks. It also quantifies the gains of using the common random number (CRN) va...
Oliveiros, Bárbara
2016-01-01
Background Faces play a key role in signaling social cues such as signals of trustworthiness. Although several studies identify the amygdala as a core brain region in social cognition, quantitative approaches evaluating its role are scarce. Objectives This review aimed to assess the role of the amygdala in the processing of facial trustworthiness, by analyzing its amplitude BOLD response polarity to untrustworthy versus trustworthy facial signals under fMRI tasks through a Meta-analysis of effect sizes (MA). Activation Likelihood Estimation (ALE) analyses were also conducted. Data sources Articles were retrieved from MEDLINE, ScienceDirect and Web-of-Science in January 2016. Following the PRISMA statement guidelines, a systematic review of original research articles in English language using the search string “(face OR facial) AND (trustworthiness OR trustworthy OR untrustworthy OR trustee) AND fMRI” was conducted. Study selection and data extraction The MA concerned amygdala responses to facial trustworthiness for the contrast Untrustworthy vs. trustworthy faces, and included whole-brain and ROI studies. To prevent potential bias, results were considered even when at the single study level they did not survive correction for multiple comparisons or provided non-significant results. ALE considered whole-brain studies, using the same methodology to prevent bias. A summary of the methodological options (design and analysis) described in the articles was finally used to get further insight into the characteristics of the studies and to perform a subgroup analysis. Data were extracted by two authors and checked independently. Data synthesis Twenty fMRI studies were considered for systematic review. An MA of effect sizes with 11 articles (12 studies) showed high heterogeneity between studies [Q(11) = 265.68, p < .0001; I2 = 95.86%, 94.20% to 97.05%, with 95% confidence interval, CI]. 
Random effects analysis [RE(183) = 0.851, .422 to .969, 95% CI] supported the evidence that the (right) amygdala responds preferentially to untrustworthy faces. Moreover, two ALE analyses performed with 6 articles (7 studies) identified the amygdala, insula and medial dorsal nuclei of thalamus as structures with negative correlation with trustworthiness. Six articles/studies showed that posterior cingulate and medial frontal gyrus present positive correlations with increasing facial trustworthiness levels. Significant effects considering subgroup analysis based on methodological criteria were found for experiments using spatial smoothing, categorization of trustworthiness in 2 or 3 categories and paradigms which involve both explicit and implicit tasks. Limitations Significant heterogeneity between studies was found in MA, which might have arisen from inclusion of studies with smaller sample sizes and differences in methodological options. Studies using ROI analysis / small volume correction methods were more often devoted specifically to the amygdala region, with some results reporting uncorrected p-values based on mainly clinical a priori evidence of amygdala involvement in these processes. Nevertheless, we did not find significant evidence for publication bias. Conclusions and implications of key findings Our results support the role of amygdala in facial trustworthiness judgment, emphasizing its predominant role during processing of negative social signals in (untrustworthy) faces. This systematic review suggests that little consistency exists among studies’ methodology, and that larger sample sizes should be preferred. PMID:27898705
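The heterogeneity statistics quoted above (Cochran's Q and I²) can be reproduced for any set of study-level effect sizes and variances; the four effects below are illustrative only, not data from the reviewed studies:

```python
import numpy as np

def heterogeneity(effects, variances):
    """Cochran's Q and the I^2 statistic under an inverse-variance pooling."""
    effects = np.asarray(effects, float)
    w = 1.0 / np.asarray(variances, float)        # inverse-variance weights
    theta = np.sum(w * effects) / np.sum(w)       # pooled effect estimate
    Q = np.sum(w * (effects - theta) ** 2)        # weighted squared deviations
    df = len(effects) - 1
    I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    return Q, I2

effects = np.array([0.8, 0.9, 0.2, 1.1])
variances = np.array([0.01, 0.02, 0.01, 0.03])
Q, I2 = heterogeneity(effects, variances)         # high I2 flags heterogeneity
```

An I² above roughly 75% is conventionally read as high heterogeneity, which is why the review's I² of 95.86% motivates the random-effects analysis.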
Risk-based Methodology for Validation of Pharmaceutical Batch Processes.
Wiles, Frederick
2013-01-01
In January 2011, the U.S. Food and Drug Administration published new process validation guidance for pharmaceutical processes. The new guidance debunks the long-held industry notion that three consecutive validation batches or runs are all that are required to demonstrate that a process is operating in a validated state. Instead, the new guidance now emphasizes that the level of monitoring and testing performed during process performance qualification (PPQ) studies must be sufficient to demonstrate statistical confidence both within and between batches. In some cases, three qualification runs may not be enough. Nearly two years after the guidance was first published, little has been written defining a statistical methodology for determining the number of samples and qualification runs required to satisfy Stage 2 requirements of the new guidance. This article proposes using a combination of risk assessment, control charting, and capability statistics to define the monitoring and testing scheme required to show that a pharmaceutical batch process is operating in a validated state. In this methodology, an assessment of process risk is performed through application of a process failure mode, effects, and criticality analysis (PFMECA). The output of PFMECA is used to select appropriate levels of statistical confidence and coverage which, in turn, are used in capability calculations to determine when significant Stage 2 (PPQ) milestones have been met. The achievement of Stage 2 milestones signals the release of batches for commercial distribution and the reduction of monitoring and testing to commercial production levels. Individuals, moving range, and range/sigma charts are used in conjunction with capability statistics to demonstrate that the commercial process is operating in a state of statistical control. The new process validation guidance published by the U.S. 
Food and Drug Administration in January of 2011 indicates that the number of process validation batches or runs required to demonstrate that a pharmaceutical process is operating in a validated state should be based on sound statistical principles. The old rule of "three consecutive batches and you're done" is no longer sufficient. The guidance, however, does not provide any specific methodology for determining the number of runs required, and little has been published to address this shortcoming. The paper titled "Risk-based Methodology for Validation of Pharmaceutical Batch Processes" describes a statistically sound methodology for determining when a statistically valid number of validation runs has been acquired, based on risk assessment and calculation of process capability.
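A minimal sketch of the charting and capability calculations such a methodology relies on, with illustrative batch data (the d2 = 1.128 constant converts the average moving range to a sigma estimate for an individuals chart; spec limits are assumed):

```python
import numpy as np

def individuals_limits(x):
    """Control limits for an individuals (I) chart from the average moving range."""
    x = np.asarray(x, float)
    mr_bar = np.mean(np.abs(np.diff(x)))      # average two-point moving range
    sigma = mr_bar / 1.128                    # d2 constant for subgroup size 2
    center = x.mean()
    return center - 3 * sigma, center, center + 3 * sigma

def cpk(x, lsl, usl):
    """Process capability index relative to lower/upper specification limits."""
    mu, s = np.mean(x), np.std(x, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * s)

rng = np.random.default_rng(0)
batch = rng.normal(10.0, 0.1, size=30)        # 30 in-spec measurements (synthetic)
lcl, cl, ucl = individuals_limits(batch)
capability = cpk(batch, lsl=9.0, usl=11.0)
```

A PPQ milestone might then be phrased as "Cpk exceeds a risk-derived threshold with all points inside the I-chart limits", with the threshold chosen from the PFMECA output.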
Practical remarks on the heart rate and saturation measurement methodology
NASA Astrophysics Data System (ADS)
Kowal, M.; Kubal, S.; Piotrowski, P.; Staniec, K.
2017-05-01
A surface reflection-based method for measuring heart rate and saturation has been introduced as one having a significant advantage over legacy methods, in that it lends itself to use in special applications where a person's mobility is of prime importance (e.g. during a miner's work), excluding the use of traditional clips. A complete ATmega1281-based microcontroller platform is then described for performing the computational tasks of signal processing and wireless transmission. The next section provides remarks regarding the basic signal processing rules, beginning with raw voltage samples of the converted optical signals, their acquisition, storage and smoothing. This section ends with practical remarks demonstrating an exponential dependence between the minimum measurable heart rate and the readout resolution at different sampling frequencies for different averaging depths (in bits). The following section is devoted strictly to heart rate and hemoglobin oxygenation (saturation) measurement with the use of the presented platform, referenced against measurements obtained with a stationary certified pulse oximeter.
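The smoothing-then-rate-estimation idea can be sketched as follows; the 5-point moving average, the mean-based threshold rule, and the synthetic 1.2 Hz pulse are assumptions for illustration, not the platform's actual firmware processing:

```python
import numpy as np

def heart_rate_bpm(ppg, fs):
    """Estimate heart rate from a PPG trace: smooth the raw samples, then count
    local maxima that rise above the signal mean."""
    kernel = np.ones(5) / 5                       # 5-point moving-average smoothing
    s = np.convolve(ppg, kernel, mode="same")
    thr = s.mean()
    peaks = [i for i in range(1, len(s) - 1)
             if s[i] > s[i - 1] and s[i] >= s[i + 1] and s[i] > thr]
    duration_min = len(ppg) / fs / 60.0
    return len(peaks) / duration_min

fs = 100                                          # Hz, assumed ADC sampling rate
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)                 # clean 1.2 Hz pulse, i.e. 72 bpm
bpm = heart_rate_bpm(ppg, fs)
```

On real reflective PPG data the smoothing depth interacts with the lowest measurable rate exactly as the abstract notes: heavier averaging suppresses noise but also flattens slow, low-amplitude pulses.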
Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.
Selvaraj, Lokesh; Ganesan, Balakrishnan
2014-01-01
Enhancing speech recognition is the primary intention of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature extraction, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). First, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel frequency cepstral coefficients (MFCC), mean, standard deviation, and the minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic-algorithm-based codebook generation in vector quantization. The initial populations are created by selecting random code vectors from the training set for the codebooks of the genetic algorithm process, and IP-HMM performs the recognition. The novelty at this stage lies in the crossover genetic operation. The proposed speech recognition technique offers 97.14% accuracy.
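Stage (i), median-filter denoising, can be sketched directly; the window length and the synthetic impulsive noise below are illustrative assumptions:

```python
import numpy as np

def median_denoise(x, k=5):
    """Median filtering with edge padding: suppresses impulsive (spiky) noise
    while largely preserving a smooth underlying waveform."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 400)
clean = np.sin(2 * np.pi * 5 * t)                 # smooth stand-in for speech
noisy = clean.copy()
spikes = rng.choice(400, size=8, replace=False)
noisy[spikes] += 3.0                              # isolated impulsive noise
den = median_denoise(noisy)
```

Unlike a moving average, the median filter removes an isolated spike entirely rather than smearing it across neighbouring samples, which is why it is the usual choice before feature extraction.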
Instantaneous and Frequency-Warped Signal Processing Techniques for Auditory Source Separation.
NASA Astrophysics Data System (ADS)
Wang, Avery Li-Chun
This thesis summarizes several contributions to the areas of signal processing and auditory source separation. The philosophy of Frequency-Warped Signal Processing is introduced as a means for separating the AM and FM contributions to the bandwidth of a complex-valued, frequency-varying sinusoid p(n), transforming it into a signal with slowly-varying parameters. This transformation facilitates the removal of p(n) from an additive mixture while minimizing the amount of damage done to other signal components. The average winding rate of a complex-valued phasor is explored as an estimate of the instantaneous frequency. Theorems are provided showing the robustness of this measure. To implement frequency tracking, a Frequency-Locked Loop algorithm is introduced which uses the complex winding error to update its frequency estimate. The input signal is dynamically demodulated and filtered to extract the envelope. This envelope may then be remodulated to reconstruct the target partial, which may be subtracted from the original signal mixture to yield a new, quickly-adapting form of notch filtering. Enhancements to the basic tracker are made which, under certain conditions, attain the Cramer-Rao bound for the instantaneous frequency estimate. To improve tracking, the novel idea of Harmonic-Locked Loop tracking, using N harmonically constrained trackers, is introduced for tracking signals, such as voices and certain musical instruments. The estimated fundamental frequency is computed from a maximum-likelihood weighting of the N tracking estimates, making it highly robust. The result is that harmonic signals, such as voices, can be isolated from complex mixtures in the presence of other spectrally overlapping signals. Additionally, since phase information is preserved, the resynthesized harmonic signals may be removed from the original mixtures with relatively little damage to the residual signal. 
Finally, a new methodology is given for designing linear-phase FIR filters which require a small fraction of the computational power of conventional FIR implementations. This design strategy is based on truncated and stabilized IIR filters. These signal-processing methods have been applied to the problem of auditory source separation, resulting in voice separation from complex music that is significantly better than previous results at far lower computational cost.
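The demodulate-filter-remodulate notch described above can be sketched for a single stationary partial; the fixed carrier frequency stands in for the adaptive Frequency-Locked Loop tracker, and the moving-average low-pass is an illustrative choice:

```python
import numpy as np

def remove_partial(x, fs, f0, bw=2.0):
    """Demodulate the partial at f0 to baseband, low-pass to isolate its
    envelope, remodulate, and subtract it from the mixture (notch by synthesis)."""
    n = np.arange(len(x))
    carrier = np.exp(-2j * np.pi * f0 * n / fs)
    base = x * carrier                            # shifts the partial to DC
    L = int(fs / bw)                              # crude low-pass matched to bw
    env = np.convolve(base, np.ones(L) / L, mode="same")
    partial = 2 * np.real(env * np.conj(carrier))  # resynthesized partial
    return x - partial, partial

fs = 1000
samples = np.arange(2000)
mix = np.sin(2 * np.pi * 50 * samples / fs) + 0.5 * np.sin(2 * np.pi * 120 * samples / fs)
residual, part = remove_partial(mix, fs, 50.0)    # removes the 50 Hz partial
```

Because phase is preserved through the demodulation, the subtraction leaves the spectrally distinct 120 Hz component essentially untouched away from the block edges.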
An enhanced methodology for spacecraft correlation activity using virtual testing tools
NASA Astrophysics Data System (ADS)
Remedia, Marcello; Aglietti, Guglielmo S.; Appolloni, Matteo; Cozzani, Alessandro; Kiley, Andrew
2017-11-01
Test planning and post-test correlation activity have been issues of growing importance in the last few decades and many methodologies have been developed to either quantify or improve the correlation between computational and experimental results. In this article the methodologies established so far are enhanced with the implementation of a recently developed procedure called Virtual Testing. In the context of fixed-base sinusoidal tests (commonly used in the space sector for correlation), there are several factors in the test campaign that affect the behaviour of the satellite and are not normally taken into account when performing analyses: different boundary conditions created by the shaker's own dynamics, non-perfect control system, signal delays etc. All these factors are the core of the Virtual Testing implementation, which will be thoroughly explained in this article and applied to the specific case of Bepi-Colombo spacecraft tested on the ESA QUAD Shaker. Correlation activity will be performed in the various stages of the process, showing important improvements observed after applying the final complete methodology.
NASA Technical Reports Server (NTRS)
Madrid, G. A.; Westmoreland, P. T.
1983-01-01
A progress report is presented on a program to upgrade the existing NASA Deep Space Network in terms of a redesigned computer-controlled data acquisition system for channelling tracking, telemetry, and command data between a California-based control center and three signal processing centers in Australia, California, and Spain. The methodology for the improvements is oriented towards single-subsystem development with consideration for a multi-system and multi-subsystem network of operational software. Details of the existing hardware configurations and data transmission links are provided. The program methodology includes data flow design, interface design and coordination, incremental capability availability, increased inter-subsystem developmental synthesis and testing, system- and network-level synthesis and testing, and system verification and validation. The software has thus far been implemented to a 65 percent completion level, and the methodology used to effect the changes, which will permit enhanced tracking of and communication with spacecraft, has proven to feature effective techniques.
Functional relationship-based alarm processing system
Corsberg, D.R.
1988-04-22
A functional relationship-based alarm processing system and method analyzes each alarm as it is activated and determines its relative importance with other currently activated alarms and signals in accordance with the functional relationships that the newly activated alarm has with other currently activated alarms. Once the initial level of importance of the alarm has been determined, that alarm is again evaluated if another related alarm is activated or deactivated. Thus, each alarm's importance is continuously updated as the state of the process changes during a scenario. Four hierarchical relationships are defined by this alarm filtering methodology: (1) level precursor (usually occurs when there are two alarm settings on the same parameter); (2) direct precursor (based on causal factors between two alarms); (3) required action (system response or action expected within a specified time following activation of an alarm or combination of alarms and process signals); and (4) blocking condition (alarms that are normally expected and are not considered important). The alarm processing system and method is sensitive to the dynamic nature of the process being monitored and is capable of changing the relative importance of each alarm as necessary. 12 figs.
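The four-relationship filtering idea can be sketched with a toy rule table; the alarm names and relationship entries below are hypothetical, and only the level-precursor, direct-precursor, and blocking-condition rules are shown:

```python
# Hypothetical relationship tables for a small plant model.
LEVEL_PRECURSOR = {"LEVEL_HI": ["LEVEL_HIHI"]}   # two settings on one parameter
DIRECT_PRECURSOR = {"PUMP_TRIP": ["FLOW_LOW"]}   # causal link between two alarms
BLOCKING = {"MAINT_MODE": ["VIB_HI"]}            # alarms expected during the condition

def prioritize(active):
    """Rank active alarms, demoting those suppressed by a functional relationship.
    Re-run this whenever an alarm is activated or deactivated."""
    suppressed = set()
    for a in active:
        # level precursor: the lower setting matters less once the higher fires
        if any(c in active for c in LEVEL_PRECURSOR.get(a, [])):
            suppressed.add(a)
        # direct precursor: a cause is demoted once its known effect is active
        if any(e in active for e in DIRECT_PRECURSOR.get(a, [])):
            suppressed.add(a)
        # blocking condition: demote alarms an active condition makes expected
        for blocker, blocked in BLOCKING.items():
            if blocker in active and a in blocked:
                suppressed.add(a)
    important = [a for a in active if a not in suppressed]
    return important + sorted(suppressed)

ranked = prioritize(["LEVEL_HI", "LEVEL_HIHI", "VIB_HI", "MAINT_MODE"])
```

Because importance is recomputed on every activation or deactivation, the ranking tracks the dynamic state of the process, which is the central claim of the patent.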
Functional relationship-based alarm processing
Corsberg, Daniel R.
1988-01-01
A functional relationship-based alarm processing system and method analyzes each alarm as it is activated and determines its relative importance with other currently activated alarms and signals in accordance with the relationships that the newly activated alarm has with other currently activated alarms. Once the initial level of importance of the alarm has been determined, that alarm is again evaluated if another related alarm is activated. Thus, each alarm's importance is continuously updated as the state of the process changes during a scenario. Four hierarchical relationships are defined by this alarm filtering methodology: (1) level precursor (usually occurs when there are two alarm settings on the same parameter); (2) direct precursor (based on causal factors between two alarms); (3) required action (system response or action expected within a specified time following activation of an alarm or combination of alarms and process signals); and (4) blocking condition (alarms that are normally expected and are not considered important). The alarm processing system and method is sensitive to the dynamic nature of the process being monitored and is capable of changing the relative importance of each alarm as necessary.
Functional relationship-based alarm processing system
Corsberg, Daniel R.
1989-01-01
A functional relationship-based alarm processing system and method analyzes each alarm as it is activated and determines its relative importance with other currently activated alarms and signals in accordance with the functional relationships that the newly activated alarm has with other currently activated alarms. Once the initial level of importance of the alarm has been determined, that alarm is again evaluated if another related alarm is activated or deactivated. Thus, each alarm's importance is continuously updated as the state of the process changes during a scenario. Four hierarchical relationships are defined by this alarm filtering methodology: (1) level precursor (usually occurs when there are two alarm settings on the same parameter); (2) direct precursor (based on causal factors between two alarms); (3) required action (system response or action expected within a specified time following activation of an alarm or combination of alarms and process signals); and (4) blocking condition (alarms that are normally expected and are not considered important). The alarm processing system and method is sensitive to the dynamic nature of the process being monitored and is capable of changing the relative importance of each alarm as necessary.
Monitoring Streambed Scour/Deposition Under Nonideal Temperature Signal and Flood Conditions
NASA Astrophysics Data System (ADS)
DeWeese, Timothy; Tonina, Daniele; Luce, Charles
2017-12-01
Streambed erosion and deposition are fundamental geomorphic processes in riverbeds, and monitoring their evolution is important for ecological system management and in-stream infrastructure stability. Previous research provided proof of concept that analysis of paired temperature signals of stream and pore waters can simultaneously monitor scour and deposition, the stream sediment thermal regime, and seepage velocity. However, it did not address challenges often associated with natural systems, including the effect of nonideal temperature variations (low-amplitude, nonsinusoidal signals and vertical thermal gradients) and natural flooding conditions on monitoring scour and deposition processes over time. Here we addressed this knowledge gap by testing the proposed thermal scour-deposition chain (TSDC) methodology, with laboratory experiments quantifying the impact of nonideal temperature signals under a range of seepage velocities, and with a field application during a pulse flood. Both analyses showed excellent agreement between surveyed and temperature-derived bed elevation changes, even under very low temperature signal amplitudes (less than 1°C), nonideal signal shape (sawtooth), and strong, changing vertical thermal gradients (4°C/m). Root-mean-square errors in predicting the change in streambed elevation were comparable to the median grain size of the streambed sediment. Future research should focus on improved techniques for temperature signal phase and amplitude extraction, as well as TSDC applications over long periods spanning entire hydrographs.
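The amplitude-ratio and phase-lag quantities that underpin such paired-temperature methods can be extracted by projecting each record onto sine and cosine components at the diel frequency; the synthetic stream and pore-water series below are illustrative, not field data:

```python
import numpy as np

def diel_amplitude_phase(temp, samples_per_day, cycles_per_day=1.0):
    """Amplitude and phase of the daily component of a temperature record,
    via least-squares projection onto cos/sin at the diel frequency."""
    n = np.arange(len(temp))
    w = 2 * np.pi * cycles_per_day * n / samples_per_day
    c = 2.0 / len(temp) * np.sum(temp * np.cos(w))
    s = 2.0 / len(temp) * np.sum(temp * np.sin(w))
    return np.hypot(c, s), np.arctan2(s, c)      # temp ~ A*cos(w - phase)

sp_day = 24                                       # hourly sampling
n = np.arange(24 * 10)                            # ten full days
stream = 10 + 1.0 * np.cos(2 * np.pi * n / 24 - 0.3)
pore = 10 + 0.4 * np.cos(2 * np.pi * n / 24 - 1.1)   # damped and lagged at depth
A_s, ph_s = diel_amplitude_phase(stream, sp_day)
A_p, ph_p = diel_amplitude_phase(pore, sp_day)
ratio, lag = A_p / A_s, ph_p - ph_s               # inputs to seepage/depth models
```

The amplitude ratio and phase lag between sensor pairs at known spacings are what the TSDC chain converts into sediment-interface elevation and seepage estimates.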
Puppo, A.; Chun, Jong T.; Gragnaniello, Giovanni; Garante, Ezio; Santella, Luigia
2008-01-01
Background When preparing for fertilization, oocytes undergo meiotic maturation during which structural changes occur in the endoplasmic reticulum (ER) that lead to a more efficient calcium response. During meiotic maturation and subsequent fertilization, the actin cytoskeleton also undergoes dramatic restructuring. We have recently observed that rearrangements of the actin cytoskeleton induced by actin-depolymerizing agents, or by actin-binding proteins, strongly modulate intracellular calcium (Ca2+) signals during the maturation process. However, the significance of the dynamic changes in F-actin within the fertilized egg has been largely unclear. Methodology/Principal Findings We have measured changes in intracellular Ca2+ signals and F-actin structures during fertilization. We also report the unexpected observation that the conventional antagonist of the InsP3 receptor, heparin, hyperpolymerizes the cortical actin cytoskeleton in postmeiotic eggs. Using heparin and other pharmacological agents that either hypo- or hyperpolymerize the cortical actin, we demonstrate that nearly all aspects of the fertilization process are profoundly affected by the dynamic restructuring of the egg cortical actin cytoskeleton. Conclusions/Significance Our findings identify important roles for subplasmalemmal actin fibers in the process of sperm-egg interaction and in the subsequent events related to fertilization: the generation of Ca2+ signals, sperm penetration, cortical granule exocytosis, and the block to polyspermy. PMID:18974786
Nested effects models for learning signaling networks from perturbation data.
Fröhlich, Holger; Tresch, Achim; Beissbarth, Tim
2009-04-01
Targeted gene perturbations have become a major tool to gain insight into complex cellular processes. In combination with the measurement of downstream effects via DNA microarrays, this approach can be used to gain insight into signaling pathways. Nested Effects Models were first introduced by Markowetz et al. as a probabilistic method to reverse engineer signaling cascades based on the nested structure of downstream perturbation effects. The basic framework was substantially extended later on by Fröhlich et al., Markowetz et al., and Tresch and Markowetz. In this paper, we present a review of the complete methodology with a detailed comparison of so far proposed algorithms on a qualitative and quantitative level. As an application, we present results on estimating the signaling network between 13 genes in the ER-alpha pathway of human MCF-7 breast cancer cells. Comparison with the literature shows a substantial overlap.
Ecological prediction with nonlinear multivariate time-frequency functional data models
Yang, Wen-Hsi; Wikle, Christopher K.; Holan, Scott H.; Wildhaber, Mark L.
2013-01-01
Time-frequency analysis has become a fundamental component of many scientific inquiries. Due to improvements in technology, the amount of high-frequency signals that are collected for ecological and other scientific processes is increasing at a dramatic rate. In order to facilitate the use of these data in ecological prediction, we introduce a class of nonlinear multivariate time-frequency functional models that can identify important features of each signal as well as the interaction of signals corresponding to the response variable of interest. Our methodology is of independent interest and utilizes stochastic search variable selection to improve model selection and performs model averaging to enhance prediction. We illustrate the effectiveness of our approach through simulation and by application to predicting spawning success of shovelnose sturgeon in the Lower Missouri River.
NASA Astrophysics Data System (ADS)
Torres-Arredondo, M.-A.; Sierra-Pérez, Julián; Cabanes, Guénaël
2016-05-01
The process of measuring and analysing the data from a distributed sensor network all over a structural system in order to quantify its condition is known as structural health monitoring (SHM). For the design of a trustworthy health monitoring system, a vast amount of information regarding the inherent physical characteristics of the sources and their propagation and interaction across the structure is crucial. Moreover, any SHM system which is expected to transition to field operation must take into account the influence of environmental and operational changes which cause modifications in the stiffness and damping of the structure and consequently modify its dynamic behaviour. On that account, special attention is paid in this paper to the development of an efficient SHM methodology where robust signal processing and pattern recognition techniques are integrated for the correct interpretation of complex ultrasonic waves within the context of damage detection and identification. The methodology is based on an acousto-ultrasonics technique where the discrete wavelet transform is evaluated for feature extraction and selection, linear principal component analysis for data-driven modelling and self-organising maps for a two-level clustering under the principle of local density. At the end, the methodology is experimentally demonstrated and results show that all the damages were detectable and identifiable.
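The clustering stage of such a pipeline can be illustrated with a minimal self-organising map; the grid size, decay schedules, and toy two-cluster feature set are assumptions, not the paper's configuration:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal SOM: for each sample, find the best-matching unit (BMU) and pull
    its grid neighbourhood toward the sample, with decaying rate and radius."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(grid[0] * grid[1], data.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        sigma = sigma0 * (1 - e / epochs) + 0.3
        for x in rng.permutation(data):
            bmu = np.argmin(np.sum((W - x) ** 2, axis=1))
            dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-dist2 / (2 * sigma ** 2))     # neighbourhood weighting
            W += lr * h[:, None] * (x - W)
    return W, coords

# two well-separated clusters of 2-D stand-in features (e.g. wavelet energies)
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.2, (30, 2)), rng.normal(3, 0.2, (30, 2))])
W, coords = train_som(data)
```

After training, damage classes would correspond to groups of map units; the paper's second clustering level (by local density) then merges units into such groups.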
Amezquita-Sanchez, Juan P.; Romero-Troncoso, Rene J.; Osornio-Rios, Roque A.; Garcia-Perez, Arturo
2014-01-01
This paper presents a new EEMD-MUSIC- (ensemble empirical mode decomposition-multiple signal classification-) based methodology to identify modal frequencies in structures ranging from free and ambient vibration signals produced by artificial and natural excitations and also considering several factors as nonstationary effects, close modal frequencies, and noisy environments, which are common situations where several techniques reported in literature fail. The EEMD and MUSIC methods are used to decompose the vibration signal into a set of IMFs (intrinsic mode functions) and to identify the natural frequencies of a structure, respectively. The effectiveness of the proposed methodology has been validated and tested with synthetic signals and under real operating conditions. The experiments are focused on extracting the natural frequencies of a truss-type scaled structure and of a bridge used for both highway traffic and pedestrians. Results show the proposed methodology as a suitable solution for natural frequencies identification of structures from free and ambient vibration signals. PMID:24683346
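The MUSIC half of such a pipeline can be sketched on a raw two-mode signal (the EEMD decomposition step is omitted here); the snapshot length m and the frequency scan grid are illustrative choices:

```python
import numpy as np

def music_spectrum(x, fs, n_sources, m=40):
    """MUSIC pseudospectrum of a 1-D signal: eigendecompose the sample
    correlation matrix and scan steering vectors against the noise subspace."""
    N = len(x)
    X = np.array([x[i:i + m] for i in range(N - m)])   # overlapping snapshots
    R = X.T @ X / (N - m)                              # sample correlation matrix
    vals, vecs = np.linalg.eigh(R)                     # eigenvalues ascending
    En = vecs[:, :m - 2 * n_sources]    # noise subspace: 2 eigvecs per real sinusoid
    freqs = np.linspace(1, fs / 2 - 1, 400)
    n = np.arange(m)
    P = np.array([1.0 / np.linalg.norm(En.T @ np.exp(2j * np.pi * fq * n / fs)) ** 2
                  for fq in freqs])                    # peaks at modal frequencies
    return freqs, P

fs = 200.0
t = np.arange(1000) / fs
sig = np.sin(2 * np.pi * 20 * t) + 0.5 * np.sin(2 * np.pi * 45 * t)  # two "modes"
f, P = music_spectrum(sig, fs, n_sources=2)
```

In the full methodology each retained IMF from EEMD would be passed through this step separately, which is what lets closely spaced modal frequencies be resolved.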
Estimation of effect of hydrogen on the parameters of magnetoacoustic emission signals
NASA Astrophysics Data System (ADS)
Skalskyi, Valentyn; Stankevych, Olena; Dubytskyi, Olexandr
2018-05-01
The features of magnetoacoustic emission (MAE) signals during magnetization of structural steels with different degrees of hydrogenation were investigated by the wavelet transform. The dominant frequency ranges of the MAE signals at different magnetic field strengths were determined using the discrete wavelet transform (DWT), and the energy and spectral parameters of the MAE signals were determined using the continuous wavelet transform (CWT). Characteristic differences among the local maxima of the signals in energy, bandwidth, duration, and frequency were found. A methodology for estimating the state of local degradation of materials from the wavelet-transform parameters of MAE signals is proposed. The methodology was validated on structural steels of oil and gas pipelines after long-term service.
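A crude stand-in for the DWT-based frequency-band analysis, using a hand-rolled Haar transform so the sketch stays dependency-free (a real analysis would use a proper wavelet library and the authors' chosen wavelets; the burst signal is synthetic):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, float)
    if len(x) % 2:
        x = x[:-1]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def band_energies(x, levels=3):
    """Relative energy per detail band: locates the dominant frequency range
    of a burst (band k covers roughly fs/2^(k+1) .. fs/2^k)."""
    energies, a = [], x
    for _ in range(levels):
        a, d = haar_dwt(a)
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))               # final approximation band
    total = sum(energies)
    return [e / total for e in energies]

fs = 1000
t = np.arange(1024) / fs
burst = np.sin(2 * np.pi * 400 * t) * np.exp(-5 * t)  # high-frequency damped burst
rel = band_energies(burst - burst.mean())
```

For this 400 Hz burst sampled at 1 kHz, most energy lands in the first detail band (250-500 Hz), mirroring how the paper localizes MAE energy by field strength.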
Application of Design Methodologies for Feedback Compensation Associated with Linear Systems
NASA Technical Reports Server (NTRS)
Smith, Monty J.
1996-01-01
The work that follows is concerned with the application of design methodologies for feedback compensation associated with linear systems. In general, the intent is to provide a well behaved closed loop system in terms of stability and robustness (internal signals remain bounded with a certain amount of uncertainty) and simultaneously achieve an acceptable level of performance. The approach here has been to convert the closed loop system and control synthesis problem into the interpolation setting. The interpolation formulation then serves as our mathematical representation of the design process. Lifting techniques have been used to solve the corresponding interpolation and control synthesis problems. Several applications using this multiobjective design methodology have been included to show the effectiveness of these techniques. In particular, the mixed H 2-H performance criteria with algorithm has been used on several examples including an F-18 HARV (High Angle of Attack Research Vehicle) for sensitivity performance.
A unified framework for physical print quality
NASA Astrophysics Data System (ADS)
Eid, Ahmed; Cooper, Brian; Rippetoe, Ed
2007-01-01
In this paper we present a unified framework for physical print quality. This framework includes a design for a testbed, testing methodologies and quality measures of physical print characteristics. An automatic belt-fed flatbed scanning system is calibrated to acquire L* data for a wide range of flat field imagery. Testing methodologies based on wavelet pre-processing and spectral/statistical analysis are designed. We apply the proposed framework to three common printing artifacts: banding, jitter, and streaking. Since these artifacts are directional, wavelet based approaches are used to extract one artifact at a time and filter out other artifacts. Banding is characterized as a medium-to-low frequency, vertical periodic variation down the page. The same definition is applied to the jitter artifact, except that the jitter signal is characterized as a high-frequency signal above the banding frequency range. However, streaking is characterized as a horizontal aperiodic variation in the high-to-medium frequency range. Wavelets at different levels are applied to the input images in different directions to extract each artifact within specified frequency bands. Following wavelet reconstruction, images are converted into 1-D signals describing the artifact under concern. Accurate spectral analysis using a DFT with Blackman-Harris windowing technique is used to extract the power (strength) of periodic signals (banding and jitter). Since streaking is an aperiodic signal, a statistical measure is used to quantify the streaking strength. Experiments on 100 print samples scanned at 600 dpi from 10 different printers show high correlation (75% to 88%) between the ranking of these samples by the proposed metrologies and experts' visual ranking.
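The windowed-DFT power measurement for a periodic artifact such as banding can be sketched as follows. The 4-term Blackman-Harris coefficients are the standard published ones; the profile length, scan resolution, and 12 cycles/inch banding frequency are made-up stand-ins, not values from the paper:

```python
import numpy as np

def blackman_harris(n):
    """4-term Blackman-Harris window (minimum-sidelobe variant)."""
    k = np.arange(n)
    a = (0.35875, 0.48829, 0.14128, 0.01168)
    return (a[0]
            - a[1] * np.cos(2 * np.pi * k / (n - 1))
            + a[2] * np.cos(4 * np.pi * k / (n - 1))
            - a[3] * np.cos(6 * np.pi * k / (n - 1)))

def peak_power(profile, fs):
    """Frequency and power of the strongest non-DC spectral peak."""
    n = len(profile)
    w = blackman_harris(n)
    spec = np.abs(np.fft.rfft((profile - profile.mean()) * w)) ** 2
    spec /= np.sum(w ** 2)                  # normalize out the window energy
    i = int(np.argmax(spec[1:])) + 1        # skip the DC bin
    return i * fs / n, float(spec[i])

fs = 600.0                                  # samples per inch (600 dpi scan)
x = np.arange(2048) / fs
profile = 0.8 * np.sin(2 * np.pi * 12.0 * x)    # synthetic "banding" signal
freq, power = peak_power(profile, fs)
print(f"peak at {freq:.2f} cycles/inch, power {power:.2f}")
```

The strong sidelobe suppression of the Blackman-Harris window is what makes the peak power a clean strength estimate for banding and jitter; an aperiodic artifact like streaking has no such peak, which is why the paper falls back on a statistical measure there.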
Quantitative evaluation of photoplethysmographic artifact reduction for pulse oximetry
NASA Astrophysics Data System (ADS)
Hayes, Matthew J.; Smith, Peter R.
1999-01-01
Motion artefact corruption of pulse oximeter output, causing both measurement inaccuracies and false alarm conditions, is a primary restriction in the current clinical practice and future applications of this useful technique. Artefact reduction in photoplethysmography (PPG), and therefore by application in pulse oximetry, is demonstrated using a novel non-linear methodology recently proposed by the authors. The significance of these processed PPG signals for pulse oximetry measurement is discussed, with particular attention to the normalization inherent in the artefact reduction process. Quantitative experimental investigation of the performance of PPG artefact reduction is then utilized to evaluate this technology for application to pulse oximetry. While the successfully demonstrated reduction of severe artefacts may widen the applicability of all PPG technologies and decrease the occurrence of pulse oximeter false alarms, the observed reduction of slight artefacts suggests that many such effects may go unnoticed in clinical practice. The signal processing and output averaging used in most commercial oximeters can incorporate these artefact errors into the output, while masking the true PPG signal corruption. It is therefore suggested that PPG artefact reduction should be incorporated into conventional pulse oximetry measurement, even in the absence of end-user artefact problems.
Mitochondrial redox and pH signaling occurs in axonal and synaptic organelle clusters.
Breckwoldt, Michael O; Armoundas, Antonis A; Aon, Miguel A; Bendszus, Martin; O'Rourke, Brian; Schwarzländer, Markus; Dick, Tobias P; Kurz, Felix T
2016-03-22
Redox switches are important mediators in neoplastic, cardiovascular and neurological disorders. We recently identified spontaneous redox signals in neurons at the single-mitochondrion level, where transients of glutathione oxidation go along with shortening and re-elongation of the organelle. We have now developed advanced image- and signal-processing methods to re-assess and extend previously obtained data. Here we analyze redox and pH signals of entire mitochondrial populations. In total, we quantified the effects of 628 redox and pH events in 1797 mitochondria from intercostal axons and neuromuscular synapses using optical sensors (mito-Grx1-roGFP2; mito-SypHer). We show that neuronal mitochondria can undergo multiple redox cycles exhibiting markedly different signal characteristics compared to single redox events. Redox and pH events occur more often in mitochondrial clusters (median cluster size: 34.1 ± 4.8 μm²). Local clusters possess higher mitochondrial densities than the rest of the axon, suggesting morphological and functional inter-mitochondrial coupling. We find that cluster formation is redox sensitive and can be blocked by the antioxidant MitoQ. In a nerve crush paradigm, mitochondrial clusters form sequentially adjacent to the lesion site and oxidation spreads between mitochondria. Our methodology combines optical bioenergetics and advanced signal processing and allows quantitative assessment of entire mitochondrial populations.
Effects of image processing on the detective quantum efficiency
NASA Astrophysics Data System (ADS)
Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na
2010-04-01
Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing methodologies for image quality characterization. However, as these methodologies have not been standardized, the results of such studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate how an image processing algorithm affects the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). Image performance parameters such as MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic technique. Computed radiography (CR) images of a hand posterior-anterior (PA) view for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a white image for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. All of the modified images considerably influenced the evaluated SNR, MTF, NPS, and DQE. Images modified by post-processing had a higher DQE than the MUSICA=0 image. This suggests that MUSICA values, applied as post-processing, affect the image when it is evaluated for image quality. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality in a consistent way. The results of this study could serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
Ma, Haijun; Russek-Cohen, Estelle; Izem, Rima; Marchenko, Olga V; Jiang, Qi
2018-03-01
Safety evaluation is a key aspect of medical product development. It is a continual and iterative process requiring thorough thinking, and dedicated time and resources. In this article, we discuss how safety data are transformed into evidence to establish and refine the safety profile of a medical product, and how the focus of safety evaluation, data sources, and statistical methods change throughout a medical product's life cycle. Some challenges and statistical strategies for medical product safety evaluation are discussed. Examples of safety issues identified in different periods, that is, premarketing and postmarketing, are discussed to illustrate how different sources are used in the safety signal identification and the iterative process of safety assessment. The examples highlighted range from commonly used pediatric vaccine given to healthy children to medical products primarily used to treat a medical condition in adults. These case studies illustrate that different products may require different approaches, and once a signal is discovered, it could impact future safety assessments. Many challenges still remain in this area despite advances in methodologies, infrastructure, public awareness, international harmonization, and regulatory enforcement. Innovations in safety assessment methodologies are pressing in order to make the medical product development process more efficient and effective, and the assessment of medical product marketing approval more streamlined and structured. Health care payers, providers, and patients may have different perspectives when weighing in on clinical, financial and personal needs when therapies are being evaluated.
An Assessment of Behavioral Dynamic Information Processing Measures in Audiovisual Speech Perception
Altieri, Nicholas; Townsend, James T.
2011-01-01
Research has shown that visual speech perception can assist accuracy in identification of spoken words. However, little is known about the dynamics of the processing mechanisms involved in audiovisual integration. In particular, architecture and capacity, measured using response time methodologies, have not been investigated. An issue related to architecture concerns whether the auditory and visual sources of the speech signal are integrated “early” or “late.” We propose that “early” integration most naturally corresponds to coactive processing whereas “late” integration corresponds to separate decisions parallel processing. We implemented the double factorial paradigm in two studies. First, we carried out a pilot study using a two-alternative forced-choice discrimination task to assess architecture, decision rule, and provide a preliminary assessment of capacity (integration efficiency). Next, Experiment 1 was designed to specifically assess audiovisual integration efficiency in an ecologically valid way by including lower auditory S/N ratios and a larger response set size. Results from the pilot study support a separate decisions parallel, late integration model. Results from both studies showed that capacity was severely limited for high auditory signal-to-noise ratios. However, Experiment 1 demonstrated that capacity improved as the auditory signal became more degraded. This evidence strongly suggests that integration efficiency is vitally affected by the S/N ratio. PMID:21980314
Chiarle, Alberto; Isaia, Marco
2013-07-01
In this study, we compare the courtship behaviours of Pardosa proxima and P. vlijmi, two species of wolf spiders regarded up to now as "ethospecies", by means of motion analysis methodologies. In particular, we investigate the features of the signals, aiming to understand the evolution of the courtship and its role in species delimitation and speciation processes. In our model, we highlight a modular structure of the behaviours and the presence of recurring units and phases. In line with other similar cases of animal communication, we observed one highly variable and one stereotyped phase for both species. The stereotyped phase is here regarded as a signal related to species identity, or an honest signal linked directly to the quality of the signaler. The variable phase, on the contrary, facilitates signal detection and assessment by the female, reducing choice costs or errors. Variable phases include cues arising from Fisherian runaway selection, female sensory exploitation, and remnants of past selection. Copyright © 2013 Elsevier B.V. All rights reserved.
Rangel-Magdaleno, Jose J; Romero-Troncoso, Rene J; Osornio-Rios, Roque A; Cabal-Yepez, Eduardo
2009-01-01
Monitoring of jerk, defined as the first derivative of acceleration, has become a major issue in computer numerical control (CNC) machines. Several works highlight the necessity of measuring jerk in a reliable way to improve production processes. Nowadays, jerk is computed by finite differences of the acceleration signal at the Nyquist rate, which leads to a low signal-to-quantization-noise ratio (SQNR) in the estimate. The novelty of this work is the development of a smart sensor for jerk monitoring from a standard accelerometer with improved SQNR. The proposal is based on oversampling techniques that give a better estimation of jerk than that produced by a Nyquist-rate differentiator. Simulations and experimental results are presented to show the overall performance of the methodology.
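The benefit of oversampling before differentiation can be illustrated numerically. This is not the smart sensor's implementation; the sampling rates, oversampling ratio, and quantizer step below are assumed values, and block-averaging stands in for whatever decimation filter the hardware uses:

```python
import numpy as np

fs_nyq = 1_000.0          # "Nyquist-rate" sampling of the accelerometer, Hz
osr = 64                  # oversampling ratio (assumed)
q = 0.01                  # quantizer step, m/s^2 (assumed)

t_hi = np.arange(int(0.5 * fs_nyq) * osr) / (fs_nyq * osr)
accel = np.sin(2 * np.pi * 5.0 * t_hi)                  # true acceleration
true_jerk = 2 * np.pi * 5.0 * np.cos(2 * np.pi * 5.0 * t_hi[::osr])

def quant(x):
    return q * np.round(x / q)                          # uniform quantizer

# (a) direct: quantize at the Nyquist rate, then take finite differences
jerk_direct = np.diff(quant(accel[::osr])) * fs_nyq

# (b) oversampled: quantize at the high rate, then average each block of
# `osr` samples; the averaging filters out most of the quantization noise
a_avg = quant(accel).reshape(-1, osr).mean(axis=1)
jerk_os = np.diff(a_avg) * fs_nyq

err_direct = float(np.std(jerk_direct - true_jerk[1:]))
err_os = float(np.std(jerk_os - true_jerk[1:]))
print(f"jerk RMS error  direct: {err_direct:.3f}   oversampled: {err_os:.3f}")
```

Differencing amplifies quantization noise by roughly the sampling rate, which is why the direct Nyquist-rate estimate degrades and why the abstract's oversampling approach pays off.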
Behavioral Signal Processing: Deriving Human Behavioral Informatics From Speech and Language
Narayanan, Shrikanth; Georgiou, Panayiotis G.
2013-01-01
The expression and experience of human behavior are complex and multimodal and characterized by individual and contextual heterogeneity and variability. Speech and spoken language communication cues offer an important means for measuring and modeling human behavior. Observational research and practice across a variety of domains from commerce to healthcare rely on speech- and language-based informatics for crucial assessment and diagnostic information and for planning and tracking response to an intervention. In this paper, we describe some of the opportunities as well as emerging methodologies and applications of human behavioral signal processing (BSP) technology and algorithms for quantitatively understanding and modeling typical, atypical, and distressed human behavior with a specific focus on speech- and language-based communicative, affective, and social behavior. We describe the three important BSP components of acquiring behavioral data in an ecologically valid manner across laboratory to real-world settings, extracting and analyzing behavioral cues from measured data, and developing models offering predictive and decision-making support. We highlight both the foundational speech and language processing building blocks as well as the novel processing and modeling opportunities. Using examples drawn from specific real-world applications ranging from literacy assessment and autism diagnostics to psychotherapy for addiction and marital well being, we illustrate behavioral informatics applications of these signal processing techniques that contribute to quantifying higher level, often subjectively described, human behavior in a domain-sensitive fashion. PMID:24039277
Przylibski, Tadeusz Andrzej; Wyłomańska, Agnieszka; Zimroz, Radosław; Fijałkowska-Lichwa, Lidia
2015-10-01
The authors present an application of spectral decomposition of ²²²Rn activity concentration signal series as a mathematical tool for distinguishing the processes determining temporal changes of radon concentration in cave air. The authors demonstrate that decomposition of a monitored signal such as the ²²²Rn activity concentration in cave air facilitates characterizing the processes affecting changes in the measured concentration of this gas. This, in turn, makes it possible to better correlate and characterize the influence of various processes on radon behaviour in cave air. Distinguishing and characterizing these processes enables the understanding of radon behaviour in the cave environment and may also enable and facilitate using radon as a precursor of geodynamic phenomena in the lithosphere. Through the conducted analyses, the authors confirmed the unquestionable influence of convective air exchange between the cave and the atmosphere on seasonal and short-term (diurnal) changes in ²²²Rn activity concentration in cave air. With the applied methodology of signal analysis and decomposition, the authors also identified a third process affecting ²²²Rn activity concentration changes in cave air. This is a deterministic process causing changes in radon concentration, with a distribution different from the Gaussian one. The authors consider these changes to be the effect of turbulent air movements caused by the movement of visitors in caves. This movement is heterogeneous in terms of the number of visitors per group and the number of groups visiting a cave per day and per year. Such a process perfectly elucidates the observed character of the registered changes in ²²²Rn activity concentration in one of the decomposed components of the analysed signal.
The obtained results encourage further research into the precise relationships between the registered ²²²Rn activity concentration changes and the factors causing them, as well as into using radon as a precursor of geodynamic phenomena in the lithosphere. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A
2017-01-15
Researchers often rely on simple methods to identify involvement of neurons in a particular motor task. The historical approach has been to inspect large groups of neurons and subjectively separate neurons into groups based on the expertise of the investigator. In cases where neuron populations are small it is reasonable to inspect these neuronal recordings and their firing rates carefully to avoid data omissions. In this paper, a new methodology is presented for automatic objective classification of neurons recorded in association with behavioral tasks into groups. By identifying characteristics of neurons in a particular group, the investigator can then identify functional classes of neurons based on their relationship to the task. The methodology is based on integration of a multiple signal classification (MUSIC) algorithm to extract relevant features from the firing rate and an expectation-maximization Gaussian mixture algorithm (EM-GMM) to cluster the extracted features. The methodology is capable of identifying and clustering similar firing rate profiles automatically based on specific signal features. An empirical wavelet transform (EWT) was used to validate the features found in the MUSIC pseudospectrum and the resulting signal features captured by the methodology. Additionally, this methodology was used to inspect behavioral elements of neurons to physiologically validate the model. This methodology was tested using a set of data collected from awake behaving non-human primates. Copyright © 2016 Elsevier B.V. All rights reserved.
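The MUSIC feature-extraction step can be sketched in a few lines. The abstract does not give the authors' model order, correlation-matrix size, or frequency grid, so those are illustrative assumptions here, and the synthetic "firing rate" stands in for real neuronal data; in the paper's pipeline the extracted features would then be clustered with an EM-fitted Gaussian mixture:

```python
import numpy as np

def music_peak(x, p=2, m=20, freqs=np.linspace(0.01, 0.5, 500)):
    """Dominant normalized frequency of x via a MUSIC pseudospectrum.
    p: assumed number of complex exponentials; m: correlation-matrix order."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    # sample correlation matrix from sliding windows of the signal
    X = np.lib.stride_tricks.sliding_window_view(x, m)
    R = X.T @ X / X.shape[0]
    w, v = np.linalg.eigh(R)            # eigenvalues in ascending order
    noise = v[:, : m - p]               # noise subspace (smallest eigenvalues)
    k = np.arange(m)
    pseudo = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * k)  # steering vector at frequency f
        pseudo.append(1.0 / np.sum(np.abs(noise.conj().T @ a) ** 2))
    return float(freqs[int(np.argmax(pseudo))])

rng = np.random.default_rng(1)
# synthetic firing-rate profile: a slow rhythm buried in noise
rate = np.sin(2 * np.pi * 0.1 * np.arange(400)) + 0.3 * rng.standard_normal(400)
print(f"dominant normalized frequency: {music_peak(rate):.3f}")
```

A real sinusoid contributes two complex exponentials, hence p=2; the pseudospectrum peaks where the steering vector is nearly orthogonal to the noise subspace, giving a sharp, data-adaptive feature even for short records.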
Quality control methodology for high-throughput protein-protein interaction screening.
Vazquez, Alexei; Rual, Jean-François; Venkatesan, Kavitha
2011-01-01
Protein-protein interactions are key to many aspects of the cell, including its cytoskeletal structure, the signaling processes in which it is involved, or its metabolism. Failure to form protein complexes or signaling cascades may sometimes translate into pathologic conditions such as cancer or neurodegenerative diseases. The set of all protein interactions between the proteins encoded by an organism constitutes its protein interaction network, representing a scaffold for biological function. Knowing the protein interaction network of an organism, combined with other sources of biological information, can unravel fundamental biological circuits and may help better understand the molecular basics of human diseases. The protein interaction network of an organism can be mapped by combining data obtained from both low-throughput screens, i.e., "one gene at a time" experiments and high-throughput screens, i.e., screens designed to interrogate large sets of proteins at once. In either case, quality controls are required to deal with the inherent imperfect nature of experimental assays. In this chapter, we discuss experimental and statistical methodologies to quantify error rates in high-throughput protein-protein interactions screens.
Co-activation patterns in resting-state fMRI signals.
Liu, Xiao; Zhang, Nanyin; Chang, Catie; Duyn, Jeff H
2018-02-08
The brain is a complex system that integrates and processes information across multiple time scales by dynamically coordinating activities over brain regions and circuits. Correlations in resting-state functional magnetic resonance imaging (rsfMRI) signals have been widely used to infer functional connectivity of the brain, providing a metric of functional associations that reflects a temporal average over an entire scan (typically several minutes or longer). Not until recently was the study of dynamic brain interactions at much shorter time scales (seconds to minutes) considered for inference of functional connectivity. One method proposed for this objective seeks to identify and extract recurring co-activation patterns (CAPs) that represent instantaneous brain configurations at single time points. Here, we review the development and recent advancement of CAP methodology and other closely related approaches, as well as their applications and associated findings. We also discuss the potential neural origins and behavioral relevance of CAPs, along with methodological issues and future research directions in the analysis of fMRI co-activation patterns. Copyright © 2018 Elsevier Inc. All rights reserved.
How Do You Determine Whether The Earth Is Warming Up?
NASA Astrophysics Data System (ADS)
Restrepo, J. M.; Comeau, D.; Flaschka, H.
2012-12-01
How does one determine whether the extreme summer temperatures in the northeastern US, or in Moscow during the summer of 2010, were an extreme weather fluctuation or the result of a systematic global climate warming trend? It is only under exceptional circumstances that one can determine whether an observational climate signal belongs to a particular statistical distribution. In fact, observed climate signals are rarely "statistical", and thus there is usually no way to rigorously obtain enough field data to produce a trend or tendency based upon data alone. Furthermore, this type of data is often multi-scale. We propose a trend or tendency methodology that does not make use of a parametric or statistical assumption. The most important feature of this trend strategy is that it is defined in very precise mathematical terms. The tendency is easily understood and practical, and its algorithmic realization is fairly robust. In addition to proposing a trend, the methodology can be adapted to generate surrogate statistical models, useful in reduced filtering schemes of time-dependent processes.
Fuzzy Logic-Based Audio Pattern Recognition
NASA Astrophysics Data System (ADS)
Malcangi, M.
2008-11-01
Audio and audio-pattern recognition is becoming one of the most important technologies for automatically controlling embedded systems. Fuzzy logic may be the most important enabling methodology due to its ability to model such applications rapidly and economically. An audio and audio-pattern recognition engine based on fuzzy logic has been developed for use in very low-cost, deeply embedded systems to automate human-to-machine and machine-to-machine interaction. This engine consists of simple digital signal-processing algorithms for feature extraction and normalization, and a set of pattern-recognition rules tuned manually or automatically by a self-learning process.
Infrasonic component of volcano-seismic eruption tremor
NASA Astrophysics Data System (ADS)
Matoza, Robin S.; Fee, David
2014-03-01
Air-ground and ground-air elastic wave coupling are key processes in the rapidly developing field of seismoacoustics and are particularly relevant for volcanoes. During a sustained explosive volcanic eruption, it is typical to record a sustained broadband signal on seismometers, termed eruption tremor. Eruption tremor is usually attributed to a subsurface seismic source process, such as the upward migration of magma and gases through the shallow conduit and vent. However, it is now known that sustained explosive volcanic eruptions also generate powerful tremor signals in the atmosphere, termed infrasonic tremor. We investigate infrasonic tremor coupling down into the ground and its contribution to the observed seismic tremor. Our methodology builds on that proposed by Ichihara et al. (2012) and involves cross-correlation, coherence, and cross-phase spectra between waveforms from nearly collocated seismic and infrasonic sensors; we apply it to datasets from Mount St. Helens, Tungurahua, and Redoubt Volcanoes.
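The seismo-acoustic comparison described above (coherence and cross-phase between nearly collocated channels) can be sketched with synthetic data. The shared 1-5 Hz "tremor" band, channel gains, noise levels, and Welch segment length are illustrative assumptions, not parameters from Ichihara et al. (2012) or this study:

```python
import numpy as np
from scipy import signal

fs = 50.0                                   # sample rate, Hz (assumed)
n = 20_000
rng = np.random.default_rng(2)

# a shared 1-5 Hz "tremor" source, plus independent noise on each channel
b, a = signal.butter(4, [1.0, 5.0], "bandpass", fs=fs)
tremor = signal.lfilter(b, a, rng.standard_normal(n))
infra = tremor + 0.3 * rng.standard_normal(n)           # infrasonic channel
seis = 0.8 * tremor + 0.3 * rng.standard_normal(n)      # ground-coupled seismic

f, coh = signal.coherence(seis, infra, fs=fs, nperseg=1024)
_, pxy = signal.csd(seis, infra, fs=fs, nperseg=1024)
phase = np.angle(pxy)                                   # cross-phase spectrum

band = (f > 1.0) & (f < 5.0)
print(f"mean coherence 1-5 Hz: {coh[band].mean():.2f}, "
      f"elsewhere: {coh[~band].mean():.2f}")
```

High coherence confined to the tremor band, with a consistent cross-phase there, is the kind of signature that points to a common source recorded on both sensors, i.e., an air-to-ground (or ground-to-air) coupled component of the tremor.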
NASA Astrophysics Data System (ADS)
Vidovič, Luka; Milanič, Matija; Majaron, Boris
2015-01-01
Pulsed photothermal radiometry (PPTR) allows noninvasive determination of laser-induced temperature depth profiles in optically scattering layered structures. The obtained profiles provide information on spatial distribution of selected chromophores such as melanin and hemoglobin in human skin. We apply the described approach to study time evolution of incidental bruises (hematomas) in human subjects. By combining numerical simulations of laser energy deposition in bruised skin with objective fitting of the predicted and measured PPTR signals, we can quantitatively characterize the key processes involved in bruise evolution (i.e., hemoglobin mass diffusion and biochemical decomposition). Simultaneous analysis of PPTR signals obtained at various times post injury provides an insight into the variations of these parameters during the bruise healing process. The presented methodology and results advance our understanding of the bruise evolution and represent an important step toward development of an objective technique for age determination of traumatic bruises in forensic medicine.
Smart fabrics: integrating fiber optic sensors and information networks.
El-Sherif, Mahmoud
2004-01-01
"Smart Fabrics" are defined as fabrics capable of monitoring their own "health" and sensing environmental conditions. They consist of special types of sensors, signal processing, and a communication network embedded into a textile substrate. Available conventional sensors and networking systems are not fully technologically mature for such applications, and new classes of miniature sensors, signal-processing and networking systems are urgently needed. The methodology for their integration into textile structures also has to be developed. In this paper, the development of smart fabrics with embedded fiber optic systems is presented for applications in health monitoring and diagnostics. Successful development of such smart fabrics with embedded sensors and networks depends mainly on the development of the proper miniature sensor technology and on the integration of these sensors into textile structures. The developed smart fabrics are discussed and samples of the results are presented.
Casaseca-de-la-Higuera, Pablo; Simmross-Wattenberg, Federico; Martín-Fernández, Marcos; Alberola-López, Carlos
2009-07-01
Discontinuation of mechanical ventilation is a challenging task that involves a number of subtle clinical issues. The gradual removal of the respiratory support (referred to as weaning) should be performed as soon as autonomous respiration can be sustained. However, the prediction rate of successful extubation is still below 25% based on previous studies. Construction of an automatic system that provides information on extubation readiness is thus desirable. Recent works have demonstrated that breathing pattern variability is a useful extubation readiness indicator, with performance improving when multiple respiratory signals are jointly processed. However, the existing methods for predictor extraction present several drawbacks when length-limited time series are to be processed in heterogeneous groups of patients. In this paper, we propose a model-based methodology for automatic readiness prediction. It is intended to deal with multichannel, nonstationary, short records of the breathing pattern. Results on experimental data yield 87.27% successful readiness prediction, which is in line with the best figures reported in the literature. A comparative analysis shows that our methodology overcomes the shortcomings of previously proposed methods when applied to length-limited records from heterogeneous groups of patients.
In situ photoacoustic characterization for porous silicon growing: Detection principles
NASA Astrophysics Data System (ADS)
Ramirez-Gutierrez, C. F.; Castaño-Yepes, J. D.; Rodriguez-García, M. E.
2016-05-01
There are only a few methodologies for monitoring the in situ formation of porous silicon (PS), one of which is photoacoustics. Previous works that used photoacoustics to study PS formation do not provide a physical explanation of the origin of the signal. In this paper, a physical explanation of the origin of the photoacoustic signal during PS etching is provided. The incident modulated radiation and changes in the reflectance are taken as thermal sources. A useful methodology is also proposed to determine the etching rate, porosity, and refractive index of a PS film through the determination of the sample thickness using scanning electron microscopy images. This method was developed by carrying out two different experiments under the same anodization conditions. The first experiment consisted of growing samples with different etching times to prove the periodicity of the photoacoustic signal, while the second consisted of growing samples using three different laser wavelengths, which correlate with the period of the photoacoustic signal. The latter experiment showed that the period of the photoacoustic signal is proportional to the laser wavelength.
Rehm, Markus; Prehn, Jochen H M
2013-06-01
Systems biology and systems medicine, i.e. the application of systems biology in a clinical context, is becoming of increasing importance in biology, drug discovery and health care. Systems biology incorporates knowledge and methods that are applied in mathematics, physics and engineering, but may not be part of classical training in biology. We here provide an introduction to basic concepts and methods relevant to the construction and application of systems models for apoptosis research. We present the key methods relevant to the representation of biochemical processes in signal transduction models, with a particular reference to apoptotic processes. We demonstrate how such models enable a quantitative and temporal analysis of changes in molecular entities in response to an apoptosis-inducing stimulus, and provide information on cell survival and cell death decisions. We introduce methods for analyzing the spatial propagation of cell death signals, and discuss the concepts of sensitivity analyses that enable a prediction of network responses to disturbances of single or multiple parameters. Copyright © 2013 Elsevier Inc. All rights reserved.
Technical solutions for simultaneous MEG and SEEG recordings: towards routine clinical use.
Badier, J M; Dubarry, A S; Gavaret, M; Chen, S; Trébuchon, A S; Marquis, P; Régis, J; Bartolomei, F; Bénar, C G; Carron, R
2017-09-21
The simultaneous recording of intracerebral EEG (stereotaxic EEG, SEEG) and magnetoencephalography (MEG) is a promising strategy that provides both local and global views of brain pathological activity. Yet, acquiring simultaneous signals poses difficult technical issues that hamper their use in routine clinical practice. Our objective was thus to develop a set of solutions for recording a high number of SEEG channels while preserving signal quality. We recorded data in a patient with drug-resistant epilepsy during presurgical evaluation. We used dedicated insertion screws and optically insulated amplifiers. We recorded 137 SEEG contacts on 10 depth electrodes (5-15 contacts each) and 248 MEG channels (magnetometers). Signal quality was assessed by comparing the distribution of RMS values in different frequency bands to a reference set of MEG acquisitions. The quality of the signals was excellent for both MEG and SEEG; for MEG, it was comparable to that of MEG signals recorded without concurrent SEEG. Discharges involving several structures on SEEG were visible on MEG, whereas discharges limited in space were not seen at the surface. SEEG can now be recorded simultaneously with whole-head MEG as a routine procedure. This opens new avenues, both methodologically, for understanding signals and improving signal processing methods, and clinically, for future combined analyses.
Use phase signals to promote lifetime extension for Windows PCs.
Hickey, Stewart; Fitzpatrick, Colin; O'Connell, Maurice; Johnson, Michael
2009-04-01
This paper proposes a signaling methodology for personal computers. Signaling may be viewed as an ecodesign strategy that can positively influence the consumer-to-consumer (C2C) market process. A number of parameters are identified that can provide the basis for signal implementation. These include operating time, operating temperature, operating voltage, power cycle counts, hard disk drive (HDD) self-monitoring, analysis, and reporting technology (SMART) attributes, and operating system (OS) event information. All these parameters are currently attainable or derivable via embedded technologies in modern desktop systems. A case study is presented detailing a technical implementation of how signals can be developed in personal computers running Microsoft Windows operating systems. Collation of lifetime temperature data from a system processor is demonstrated as a possible means of characterizing a usage profile for a desktop system. In addition, event log data are utilized to devise signals indicative of OS quality. The provision of lifetime usage data in the form of intuitive signals indicative of both hardware and software quality can, in conjunction with consumer education, facilitate an optimal remarketing strategy for used systems. This implementation requires no additional hardware.
Talio, María C; Zambrano, Karen; Kaplan, Marcos; Acosta, Mariano; Gil, Raúl A; Luconi, Marta O; Fernández, Liliana P
2015-10-01
A new environmentally friendly methodology based on fluorescent signal enhancement of rhodamine B dye is proposed for quantification of Pb(II) traces, using a preconcentration step based on the coacervation phenomenon. A cationic surfactant (cetyltrimethylammonium bromide, CTAB) and potassium iodide were chosen for this aim. The coacervate phase was collected on a filter paper disk and the solid surface fluorescence signal was determined in a spectrofluorometer. Experimental variables that influence the preconcentration step and fluorimetric sensitivity were optimized using univariate assays. The calibration graph using zeroth-order regression was linear from 7.4×10(-4) to 3.4 μg L(-1) with a correlation coefficient of 0.999. Under the optimal conditions, a limit of detection of 2.2×10(-4) μg L(-1) and a limit of quantification of 7.4×10(-4) μg L(-1) were obtained. The method showed good sensitivity, adequate selectivity with good tolerance to foreign ions, and was applied to the determination of trace amounts of Pb(II) in refill solutions for e-cigarettes, with satisfactory results validated by ICP-MS. The proposed method represents an innovative application of coacervation processes and of paper filters to solid surface fluorescence methodology. Copyright © 2015 Elsevier B.V. All rights reserved.
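The detection and quantification limits above follow from the calibration slope and the blank noise; a minimal sketch of that arithmetic, using hypothetical calibration points and an assumed blank standard deviation (not the paper's data; the 3.3σ/slope and 10σ/slope forms are one common IUPAC-style convention):

```python
import numpy as np

# Hypothetical calibration points: Pb(II) in ug/L vs. fluorescence intensity.
conc = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
sig = np.array([10.0, 160.2, 309.7, 610.4, 909.9])

slope, intercept = np.polyfit(conc, sig, 1)   # linear (zeroth-order) calibration
r = np.corrcoef(conc, sig)[0, 1]              # correlation coefficient of the fit

sd_blank = 0.02                               # assumed std. dev. of the blank signal
lod = 3.3 * sd_blank / slope                  # limit of detection
loq = 10.0 * sd_blank / slope                 # limit of quantification

print(round(slope, 1), round(r, 4))
```

With a slope near 300 counts per μg L⁻¹ and this blank noise, the sketch reproduces detection limits of the 10⁻⁴ μg L⁻¹ order reported above, but the numbers here are illustrative only.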
Bianchi-Smiraglia, Anna; Lipchick, Brittany C; Nikiforov, Mikhail A
2017-01-01
Activation of oncogenic signaling paradoxically results in the permanent withdrawal from the cell cycle and induction of senescence (oncogene-induced senescence (OIS)). OIS is a fail-safe mechanism used by cells to prevent uncontrolled tumor growth, and, as such, it is considered the first barrier against cancer. In order to progress, tumor cells thus need to first overcome the senescent phenotype. Despite the increasing attention gained by OIS in the past 20 years, this field is still rather young due to the continuous emergence of novel pathways and processes involved in OIS. Among the many factors contributing to incomplete understanding of OIS are the lack of unequivocal markers for senescence and the complexity of the phenotypes revealed by senescent cells in vivo and in vitro. OIS has been shown to play major roles at both the cellular and organismal levels in biological processes ranging from embryonic development to acting as a barrier to cancer progression. Here we briefly outline major advances in methodologies that are being utilized for induction, identification, and characterization of molecular processes in cells undergoing oncogene-induced senescence. The full description of such methodologies is provided in the corresponding chapters of the book.
Bed management team with Kanban web-based application.
Rocha, Hermano Alexandre Lima; Santos, Ana Kelly Lima da Cruz; Alcântara, Antônia Celia de Castro; Lima, Carmen Sulinete Suliano da Costa; Rocha, Sabrina Gabriele Maia Oliveira; Cardoso, Roberto Melo; Cremonin, Jair Rodrigues
2018-05-15
To measure the effectiveness of a bed management process that uses a web-based application with Kanban methodology to reduce the length of stay of hospitalized patients. A before-after study was performed between July 2013 and July 2017 at the Unimed Regional Hospital of Fortaleza, which has 300 beds, of which 60 are in the intensive care unit (ICU). It is accredited by the International Society for Quality in Healthcare. Patients hospitalized in the referred period were included. The intervention was bed management with an application that uses color logic to signal the stage of the discharge flow each patient has reached, with each patient represented as a card in classical Kanban theory. It has an automatic user signaling system for process movement, and a system for monitoring and analyzing discharge forecasts. Outcomes were length of hospital stay and number of customer complaints related to bed availability. After the intervention, the hospital's overall length of stay was reduced from 5.6 days to 4.9 days (P = 0.001). The units with the greatest reduction were the ICUs, with a reduction from 6.0 days to 2.0 days (P = 0.001). The relative percentage of complaints regarding bed availability in the hospital fell from 27% to 0%. We conclude that the use of an electronic tool based on Kanban methodology and accessed via the web by a bed management team is effective in reducing patients' length of hospital stay.
Kepler AutoRegressive Planet Search: Motivation & Methodology
NASA Astrophysics Data System (ADS)
Caceres, Gabriel; Feigelson, Eric; Jogesh Babu, G.; Bahamonde, Natalia; Bertin, Karine; Christen, Alejandra; Curé, Michel; Meza, Cristian
2015-08-01
The Kepler AutoRegressive Planet Search (KARPS) project uses statistical methodology associated with autoregressive (AR) processes to model Kepler lightcurves in order to improve exoplanet transit detection in systems with high stellar variability. We also introduce a planet-search algorithm to detect transits in time-series residuals after application of the AR models. One of the main obstacles in detecting faint planetary transits is the intrinsic stellar variability of the host star. The variability displayed by many stars may have autoregressive properties, wherein later flux values are correlated with previous ones in some manner. Auto-Regressive Moving-Average (ARMA) models, Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH) models, and related models are flexible, phenomenological methods used with great success to model stochastic temporal behaviors in many fields of study, particularly econometrics. Powerful statistical methods are implemented in the public statistical software environment R and its many packages. Modeling involves maximum likelihood fitting, model selection, and residual analysis. These techniques provide a useful framework to model stellar variability and are used in KARPS with the objective of reducing stellar noise to enhance opportunities to find as-yet-undiscovered planets. Our analysis procedure consists of three steps: pre-processing of the data to remove discontinuities, gaps and outliers; ARMA-type model selection and fitting; and transit signal search of the residuals using a new Transit Comb Filter (TCF) that replaces traditional box-finding algorithms. We apply the procedures to simulated Kepler-like time series with known stellar and planetary signals to evaluate the effectiveness of the KARPS procedures. The ARMA-type modeling is effective at reducing stellar noise, but also reduces and transforms the transit signal into ingress/egress spikes.
A periodogram based on the TCF is constructed to concentrate the signal of these periodic spikes. When a periodic transit is found, the model is displayed on a standard period-folded averaged light curve. We also illustrate the efficient coding in R.
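The noise-reduction idea behind the autoregressive modeling can be illustrated with a toy example; this is a sketch under assumed values (a least-squares AR(1) fit to a simulated lightcurve, standing in for the full ARMA/GARCH maximum-likelihood modeling in R described above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "stellar variability": an AR(1) process with an arbitrary phi.
n, phi = 2000, 0.9
flux = np.zeros(n)
for t in range(1, n):
    flux[t] = phi * flux[t - 1] + rng.normal()

# Least-squares AR(1) fit: regress flux[t] on flux[t-1].
x, y = flux[:-1], flux[1:]
phi_hat = np.dot(x, y) / np.dot(x, x)
resid = y - phi_hat * x          # residuals are what the transit search would scan

print(round(phi_hat, 2))
print(resid.var() < flux.var())  # autoregression removes most of the variance
```

As the abstract notes, the same whitening that suppresses stellar noise also reshapes a box-like transit into ingress/egress spikes, which is why the residuals are searched with a comb-style filter rather than a box-fitting algorithm.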
Brain-computer interface using wavelet transformation and naïve Bayes classifier.
Bassani, Thiago; Nievola, Julio Cesar
2010-01-01
The main purpose of this work is to establish an exploratory approach using the electroencephalographic (EEG) signal, analyzing the patterns in the time-frequency plane. This work also aims to optimize the EEG signal analysis through the improvement of classifiers and, eventually, of the BCI performance. In this paper a novel exploratory approach for data mining of the EEG signal based on continuous wavelet transformation (CWT) and wavelet coherence (WC) statistical analysis is introduced and applied. The CWT allows the representation of time-frequency patterns of the signal's information content by WC qualitative analysis. Results suggest that the proposed methodology is capable of identifying regions in the time-frequency spectrum during the specified BCI task. Furthermore, an example of a region is identified, and the patterns are classified using a Naïve Bayes Classifier (NBC). This innovative characteristic of the process justifies the feasibility of the proposed approach for other data mining applications. It can open new physiologic research directions in this field and in non-stationary time series analysis.
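The naïve Bayes classification step can be sketched self-containedly on toy two-dimensional "time-frequency" features; the feature values below are invented for illustration and are not real EEG:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy feature vectors for two mental conditions (hypothetical band powers).
rest = rng.normal([10.0, 4.0], 1.0, size=(50, 2))   # e.g. high alpha power
task = rng.normal([6.0, 8.0], 1.0, size=(50, 2))    # e.g. high beta power
X = np.vstack([rest, task])
y = np.array([0] * 50 + [1] * 50)

def fit_gnb(X, y):
    """Per-class feature means, variances, and priors for Gaussian naive Bayes."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(0), Xc.var(0) + 1e-9, len(Xc) / len(X))
    return stats

def predict_gnb(stats, X):
    """Class with the highest Gaussian log-likelihood plus log-prior."""
    scores = []
    for c, (mu, var, prior) in stats.items():
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)
        scores.append(ll + np.log(prior))
    return np.argmax(np.array(scores), axis=0)

stats = fit_gnb(X, y)
acc = (predict_gnb(stats, X) == y).mean()
print(acc)
```

The feature-independence assumption is what makes the classifier "naïve"; it is often adequate for band-power features even when they are mildly correlated.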
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yingchun; Yang, Feng; Fu, Yi
Abstract - Brain development and spinal cord regeneration require neurite sprouting and growth cone navigation in response to extension and collapsing factors present in the extracellular environment. These external guidance cues control neurite growth cone extension and retraction processes through intracellular protein phosphorylation of numerous cytoskeletal, adhesion, and polarity complex signaling proteins. However, the complex kinase/substrate signaling networks that mediate neuritogenesis have not been investigated. Here, we compare the neurite phosphoproteome under growth and retraction conditions using neurite purification methodology combined with mass spectrometry. More than 4000 non-redundant phosphorylation sites from 1883 proteins have been annotated and mapped to signaling pathways that control kinase/phosphatase networks, cytoskeleton remodeling, and axon/dendrite specification. Comprehensive informatics and functional studies revealed a compartmentalized ERK activation/deactivation cytoskeletal switch that governs neurite growth and retraction, respectively. Our findings provide the first system-wide analysis of the phosphoprotein signaling networks that enable neurite growth and retraction and reveal an important molecular switch that governs neuritogenesis.
Fringe pattern information retrieval using wavelets
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Patimo, Caterina; Manicone, Pasquale D.; Lamberti, Luciano
2005-08-01
Two-dimensional phase modulation is currently the basic model used in the interpretation of fringe patterns that contain displacement information: moire, holographic interferometry, speckle techniques. Another way to look at these two-dimensional signals is to consider them as frequency modulated signals. This alternative interpretation has practical implications similar to those that exist in radio engineering for handling frequency modulated signals. Utilizing this model it is possible to obtain frequency information by using the energy approach introduced by Ville in 1944. A natural complementary tool of this process is the wavelet methodology. The use of wavelets makes it possible to obtain the local values of the frequency in a one- or two-dimensional domain without the need for previous phase retrieval and differentiation. Furthermore, from the properties of wavelets it is also possible to obtain at the same time the phase of the signal, with the advantages of better noise removal capabilities and the possibility of developing simpler algorithms for phase unwrapping due to the availability of the derivative of the phase.
RASSP signal processing architectures
NASA Astrophysics Data System (ADS)
Shirley, Fred; Bassett, Bob; Letellier, J. P.
1995-06-01
The rapid prototyping of application specific signal processors (RASSP) program is an ARPA/tri-service effort to dramatically improve the process by which complex digital systems, particularly embedded signal processors, are specified, designed, documented, manufactured, and supported. The domain of embedded signal processing was chosen because it is important to a variety of military and commercial applications as well as for the challenge it presents in terms of complexity and performance demands. The principal effort is being performed by two major contractors, Lockheed Sanders (Nashua, NH) and Martin Marietta (Camden, NJ). For both, improvements in methodology are to be exercised and refined through the performance of individual 'Demonstration' efforts. The Lockheed Sanders' Demonstration effort is to develop an infrared search and track (IRST) processor. In addition, both contractors' results are being measured by a series of externally administered (by Lincoln Labs) six-month Benchmark programs that measure process improvement as a function of time. The first two Benchmark programs are designing and implementing a synthetic aperture radar (SAR) processor. Our demonstration team is using commercially available VME modules from Mercury Computer to assemble a multiprocessor system scalable from one to hundreds of Intel i860 microprocessors. Custom modules for the sensor interface and display driver are also being developed. This system implements either proprietary or Navy owned algorithms to perform the compute-intensive IRST function in real time in an avionics environment. Our Benchmark team is designing custom modules using commercially available processor chip sets, communication submodules, and reconfigurable logic devices. One of the modules contains multiple vector processors optimized for fast Fourier transform processing.
Another module is a fiberoptic interface that accepts high-rate input data from the sensors and provides video-rate output data to a display. This paper discusses the impact of simulation on choosing signal processing algorithms and architectures, drawing from the experiences of the Demonstration and Benchmark inter-company teams at Lockheed Sanders, Motorola, Hughes, and ISX.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jornet, N; Carrasco de Fez, P; Jordi, O
Purpose: To evaluate the accuracy of total scatter factor (Sc,p) determination for small fields using a commercial plastic scintillator detector (PSD). The manufacturer's spectral discrimination method to subtract Cerenkov light from the signal is discussed. Methods: Sc,p for field sizes ranging from 0.5 to 10 cm were measured using a PSD Exradin (Standard Imaging) connected to a two-channel electrometer measuring the signals in two different spectral regions to subtract the Cerenkov signal from the PSD signal. A PinPoint ionisation chamber 31006 (PTW) and a non-shielded semiconductor detector EFD (Scanditronix) were used for comparison. Measurements were performed for a 6 MV X-ray beam. The Sc,p were measured at 10 cm depth in water for SSD = 100 cm and normalized to a 10×10 cm² field size at the isocenter. All detectors were placed with their symmetry axis parallel to the beam axis. We followed the manufacturer's recommended calibration methodology to subtract the Cerenkov contribution to the signal, as well as a modified method using smaller field sizes. The Sc,p calculated using both calibration methodologies were compared. Results: Sc,p measured with the semiconductor and the PinPoint detectors agree within 1.5% for field sizes between 10×10 and 1×1 cm². Sc,p measured with the PSD using the manufacturer's calibration methodology were systematically 4% higher than those measured with the semiconductor detector for field sizes smaller than 5×5 cm². By using a modified calibration methodology for small fields and keeping the manufacturer's calibration methodology for fields larger than 5×5 cm², Sc,p matched semiconductor results within 2% for field sizes larger than 1.5 cm. Conclusion: The calibration methodology proposed by the manufacturer is not appropriate for dose measurements in small fields. The calibration parameters are not independent of the incident radiation spectrum for this PSD.
This work was partially financed by a 2012 grant of the Barcelona board of the AECC.
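The two-channel spectral discrimination discussed above amounts to a linear unmixing of scintillation and Cerenkov light; a minimal sketch with hypothetical charges and doses (a two-point calibration solving D = a·M1 + b·M2 for the channel gains, which is one common formulation of such chromatic Cerenkov removal, not necessarily this manufacturer's exact procedure):

```python
import numpy as np

# Hypothetical two-channel PSD calibration: measured charges (M1, M2) in two
# spectral windows for two calibration fields of known dose; values invented.
M = np.array([[12.0, 5.0],    # field A: (M1, M2)
              [20.0, 9.5]])   # field B: (M1, M2)
D = np.array([100.0, 160.0])  # known doses (arbitrary units)

# Solve D = a*M1 + b*M2 for the gains (a, b).
a, b = np.linalg.solve(M, D)

# Apply the calibration to a new, uncalibrated measurement.
m_new = np.array([16.0, 7.2])
dose_new = a * m_new[0] + b * m_new[1]
print(round(dose_new, 1))
```

The abstract's conclusion can be read in these terms: if the gains (a, b) drift with the incident spectrum, a calibration taken in large fields mis-subtracts Cerenkov light in small ones.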
Evaluating the Safety Profile of Non-Active Implantable Medical Devices Compared with Medicines.
Pane, Josep; Coloma, Preciosa M; Verhamme, Katia M C; Sturkenboom, Miriam C J M; Rebollo, Irene
2017-01-01
Recent safety issues involving non-active implantable medical devices (NAIMDs) have highlighted the need for better pre-market and post-market evaluation. Some stakeholders have argued that certain features of medicine safety evaluation should also be applied to medical devices. Our objectives were to compare the current processes and methodologies for the assessment of NAIMD safety profiles with those for medicines, identify potential gaps, and make recommendations for the adoption of new methodologies for the ongoing benefit-risk monitoring of these devices throughout their entire life cycle. A literature review served to examine the current tools for the safety evaluation of NAIMDs and those for medicines. We searched MEDLINE using these two categories. We supplemented this search with Google searches using the same key terms used in the MEDLINE search. Using a comparative approach, we summarized the new product design, development cycle (preclinical and clinical phases), and post-market phases for NAIMDs and drugs. We also evaluated and compared the respective processes to integrate and assess safety data during the life cycle of the products, including signal detection, signal management, and subsequent potential regulatory actions. The search identified a gap in NAIMD safety signal generation: no global program exists that collects and analyzes adverse events and product quality issues. Data sources in real-world settings, such as electronic health records, need to be effectively identified and explored as additional sources of safety information, particularly in some areas such as the EU and USA where there are plans to implement the unique device identifier (UDI). The UDI and other initiatives will enable more robust follow-up and assessment of long-term patient outcomes. The safety evaluation system for NAIMDs differs in many ways from those for drugs, but both systems face analogous challenges with respect to monitoring real-world usage. 
Certain features of the drug safety evaluation process could, if adopted and adapted for NAIMDs, lead to better and more systematic evaluations of the latter.
Simulation of flashing signal operations.
DOT National Transportation Integrated Search
1982-01-01
Various guidelines that have been proposed for the operation of traffic signals in the flashing mode were reviewed. The use of existing traffic simulation procedures to evaluate flashing signals was examined and a study methodology for simulating and...
Change in physiological signals during mindfulness meditation
Ahani, Asieh; Wahbeh, Helane; Miller, Meghan; Nezamfar, Hooman; Erdogmus, Deniz; Oken, Barry
2014-01-01
Mindfulness meditation (MM) is an inward mental practice, in which a resting but alert state of mind is maintained. An MM intervention was performed in a population of older people with high stress levels. This study assessed signal processing methodologies of electroencephalographic (EEG) and respiration signals during meditation and control conditions to aid in quantification of the meditative state. EEG and respiration data were collected and analyzed on 34 novice meditators after a 6-week meditation intervention. Collected data were analyzed with spectral analysis and support vector machine classification to evaluate an objective marker for meditation. We observed meditation and control condition differences in the alpha, beta and theta frequency bands. Furthermore, we established a classifier using EEG and respiration signals with a higher accuracy at discriminating between meditation and control conditions than one using the EEG signal only. The EEG- and respiration-based classifier is a viable objective marker for meditation ability. Future studies should quantify different levels of meditation depth and meditation experience using this classifier. Development of an objective physiological meditation marker will allow the mind-body medicine field to advance by strengthening the rigor of its methods. PMID:24748422
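The spectral-analysis step behind band-level comparisons like these can be sketched as relative band-power extraction from a signal's FFT; the synthetic "EEG" below (a dominant 10 Hz alpha rhythm plus noise) is illustrative only:

```python
import numpy as np

fs = 256                                  # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)              # 10 s of synthetic signal
rng = np.random.default_rng(2)
sig = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)

psd = np.abs(np.fft.rfft(sig)) ** 2       # simple periodogram
freqs = np.fft.rfftfreq(sig.size, 1 / fs)

def band_power(lo, hi):
    return psd[(freqs >= lo) & (freqs < hi)].sum()

theta = band_power(4, 8)
alpha = band_power(8, 13)
beta = band_power(13, 30)
rel_alpha = alpha / (theta + alpha + beta)
print(rel_alpha > 0.5)                    # alpha dominates by construction
```

Feature vectors of such band powers (per channel, per condition) are the kind of input a support vector machine classifier would be trained on.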
Rao, Ravella Sreenivas; Kumar, C Ganesh; Prakasham, R Shetty; Hobbs, Phil J
2008-04-01
Success in experiments and/or technology mainly depends on a properly designed process or product. The traditional method of process optimization involves the study of one variable at a time, which requires a number of combinations of experiments that are time, cost and labor intensive. The Taguchi method of design of experiments is a simple statistical tool involving a system of tabulated designs (arrays) that allows a maximum number of main effects to be estimated in an unbiased (orthogonal) fashion with a minimum number of experimental runs. It has been applied to predict the significant contribution of the design variable(s) and the optimum combination of each variable by conducting experiments on a real-time basis. The modeling that is performed essentially relates signal-to-noise ratio to the control variables in a 'main effect only' approach. This approach enables both multiple response and dynamic problems to be studied by handling noise factors. Taguchi principles and concepts have made extensive contributions to industry by bringing focused awareness to robustness, noise and quality. This methodology has been widely applied in many industrial sectors; however, its application in biological sciences has been limited. In the present review, the application and comparison of the Taguchi methodology has been emphasized with specific case studies in the field of biotechnology, particularly in diverse areas like fermentation, food processing, molecular biology, wastewater treatment and bioremediation.
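The signal-to-noise ratios at the heart of the Taguchi approach are simple closed forms; a sketch with invented replicate data (the "larger-is-better" and "smaller-is-better" definitions shown are the standard textbook forms, in decibels):

```python
import math

# Taguchi signal-to-noise ratios (dB) for replicated runs of one treatment.
def sn_larger_is_better(ys):
    return -10 * math.log10(sum(1 / y**2 for y in ys) / len(ys))

def sn_smaller_is_better(ys):
    return -10 * math.log10(sum(y**2 for y in ys) / len(ys))

# Hypothetical replicate yields (e.g. fermentation titers) from two runs
# of an orthogonal array: similar means, very different scatter.
run_a = [82.0, 85.0, 84.0]
run_b = [60.0, 95.0, 78.0]

print(sn_larger_is_better(run_a) > sn_larger_is_better(run_b))
```

Run A wins despite a barely higher mean because the S/N ratio rewards robustness: it penalizes run-to-run scatter, which is exactly the "noise" Taguchi designs aim to desensitize a process against.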
Structural health monitoring of plates with surface features using guided ultrasonic waves
NASA Astrophysics Data System (ADS)
Fromme, P.
2009-03-01
Distributed array systems for guided ultrasonic waves offer an efficient way for the long-term monitoring of the structural integrity of large plate-like structures. The measurement concept involving baseline subtraction has been demonstrated under laboratory conditions. For the application to real technical structures it needs to be shown that the methodology works equally well in the presence of structural and surface features. Problems employing this structural health monitoring concept can occur due to the presence of additional changes in the signal reflected at undamaged parts of the structure. The influence of the signal processing parameters and transducer placement on the damage detection and localization accuracy is discussed. The use of permanently attached, distributed sensors for the A0 Lamb wave mode has been investigated. Results are presented using experimental data obtained from laboratory measurements and Finite Element simulated signals for a large steel plate with a welded stiffener.
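The baseline-subtraction concept can be sketched numerically: a damage index grows when a defect adds scattered energy to the recorded signal. All waveform parameters below are invented for illustration and do not model a real Lamb wave:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 500)

# Toy baseline: a windowed tone burst; "scatter": a smaller, later arrival.
baseline = np.sin(2 * np.pi * 50 * t) * np.exp(-((t - 0.3) ** 2) / 0.002)
scatter = 0.2 * np.sin(2 * np.pi * 50 * t) * np.exp(-((t - 0.6) ** 2) / 0.002)

healthy = baseline + rng.normal(0, 0.01, t.size)   # only measurement noise
damaged = baseline + scatter + rng.normal(0, 0.01, t.size)

def damage_index(signal, ref):
    """RMS of the residual after baseline subtraction, normalized to the baseline."""
    resid = signal - ref
    return np.sqrt(np.mean(resid ** 2)) / np.sqrt(np.mean(ref ** 2))

print(damage_index(healthy, baseline) < damage_index(damaged, baseline))
```

The abstract's caveat maps directly onto this sketch: benign changes at welds or surface features also raise the residual, so thresholds and transducer placement must be chosen so that defect scattering, not feature variability, dominates the index.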
NASA Astrophysics Data System (ADS)
Mueller, A. V.; Hemond, H.
2009-12-01
The capability for comprehensive, real-time, in-situ characterization of the chemical constituents of natural waters is a powerful tool for the advancement of the ecological and geochemical sciences, e.g. by facilitating rapid high-resolution adaptive sampling campaigns and avoiding the potential errors and high costs related to traditional grab sample collection, transportation and analysis. Portable field-ready instrumentation also promotes the goals of large-scale monitoring networks, such as CUAHSI and WATERS, without the financial and human resources overhead required for traditional sampling at this scale. Problems of environmental remediation and monitoring of industrial waste waters would additionally benefit from such instrumental capacity. In-situ measurement of all major ions contributing to the charge makeup of natural fresh water is thus pursued via a combined multi-sensor/multivariate signal processing architecture. The instrument is based primarily on commercial electrochemical sensors, e.g. ion selective electrodes (ISEs) and ion selective field-effect transistors (ISFETs), to promote low cost as well as easy maintenance and reproduction. The system employs a novel architecture of multivariate signal processing to extract accurate information from in-situ data streams via an "unmixing" process that accounts for sensor non-linearities at low concentrations, as well as sensor cross-reactivities. Conductivity, charge neutrality and temperature are applied as additional mathematical constraints on the chemical state of the system. Including such non-ionic information assists in obtaining accurate and useful calibrations even in the non-linear portion of the sensor response curves, and measurements can be made without the traditionally-required standard additions or ionic strength adjustment.
Initial work demonstrates the effectiveness of this methodology at predicting inorganic cations (Na+, NH4+, H+, Ca2+, and K+) in a simplified system containing only a single anion (Cl-) in addition to hydroxide, thus allowing charge neutrality to be easily and explicitly invoked. Calibration of every probe relative to each of the five cations present is undertaken, and resulting curves are used to create a representative environmental data set based on USGS data for New England waters. Signal processing methodologies, specifically artificial neural networks (ANNs), are extended to use a feedback architecture based on conductivity measurements and charge neutrality calculations. The algorithms are then tuned to optimize performance at predicting actual concentrations from these simulated signals. Results are compared to use of component probes as stand-alone sensors. Future extension of this instrument for multiple anions (including carbonate and bicarbonate, nitrate, and sulfate) will ultimately provide rapid, accurate field measurements of the entire charge balance of natural waters at high resolution, improving sampling abilities while reducing costs and errors related to transport and analysis of grab samples.
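The charge-neutrality constraint invoked above is straightforward to state in code; a sketch with hypothetical concentrations (valence times concentration summed over cations and anions, with the relative imbalance serving as a sanity check on the multi-sensor "unmixing"):

```python
# Charge-neutrality check for estimated ion concentrations.
# Values are illustrative only (valence, concentration in mmol/L).
cations = {"Na+": (1, 0.40), "NH4+": (1, 0.02), "H+": (1, 1e-4),
           "Ca2+": (2, 0.15), "K+": (1, 0.05)}
anions = {"Cl-": (1, 0.75), "OH-": (1, 1e-4)}

pos = sum(z * c for z, c in cations.values())   # total positive charge
neg = sum(z * c for z, c in anions.values())    # total negative charge
imbalance = (pos - neg) / (pos + neg)           # relative charge-balance error

print(abs(imbalance) < 0.05)
```

In a feedback architecture like the one described, a large imbalance (or a conductivity mismatch) flags an inconsistent set of sensor outputs and can drive correction of the concentration estimates.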
Signorini, Maria G; Fanelli, Andrea; Magenes, Giovanni
2014-01-01
Monitoring procedures are the basis to evaluate the clinical state of patients and to assess changes in their conditions, thus providing necessary interventions in time. Both objectives can be achieved by integrating technological development with methodological tools, thus allowing accurate classification and extraction of useful diagnostic information. The paper is focused on monitoring procedures applied to fetal heart rate variability (FHRV) signals, collected during pregnancy, in order to assess fetal well-being. The use of linear time and frequency techniques as well as the computation of nonlinear indices can contribute to enhancing the diagnostic power and reliability of fetal monitoring. The paper shows how advanced signal processing approaches can contribute to developing new diagnostic and classification indices. Their usefulness is evaluated by comparing two selected populations: normal fetuses and intrauterine growth restricted (IUGR) fetuses. Results show that the computation of different indices on FHRV signals, both linear and nonlinear, gives helpful indications to describe pathophysiological mechanisms involved in the cardiovascular and neural system controlling the fetal heart. As a further contribution, the paper briefly describes how the introduction of wearable systems for fetal ECG recording could provide new technological solutions improving the quality and usability of prenatal monitoring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamo, Alberto; Gohar, Yousry
2016-06-01
This report describes different methodologies to calculate the effective neutron multiplication factor of subcritical assemblies by processing the neutron detector signals using MATLAB scripts. The subcritical assembly can be driven either by a spontaneous fission neutron source (e.g. californium) or by a neutron source generated from the interactions of accelerated particles with target materials. In the latter case, when the particle accelerator operates in a pulsed mode, the signals are typically stored into two files. One file contains the times when neutron reactions occur and the other contains the times when the neutron pulses start. In both files, the time is given by an integer representing the number of time bins since the start of the counting. These signal files are used to construct the neutron count distribution from a single neutron pulse. The built-in functions of MATLAB are used to calculate the effective neutron multiplication factor through the application of the prompt decay fitting or the area method to the neutron count distribution. If the subcritical assembly is driven by a spontaneous fission neutron source, then the effective multiplication factor can be evaluated either using the prompt neutron decay constant obtained from Rossi or Feynman distributions or the Modified Source Multiplication (MSM) method.
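The area (Sjöstrand) method mentioned above can be sketched on a synthetic pulsed-source count histogram; this is not the report's MATLAB code, and everything here is illustrative (assumed prompt decay constant and delayed baseline; in practice the baseline is estimated from the tail of the measured distribution):

```python
import numpy as np

# Synthetic pulsed-source histogram: prompt exponential decay on top of a
# flat delayed-neutron baseline. Parameters are assumed, not measured.
t = np.arange(0, 200.0, 1.0)          # time bins after the pulse
alpha, delayed = 0.05, 20.0           # prompt decay constant, delayed baseline
counts = 1000.0 * np.exp(-alpha * t) + delayed

a_total = counts.sum()
a_delayed = delayed * t.size          # delayed area (baseline x window)
a_prompt = a_total - a_delayed        # prompt area above the baseline

# Sjöstrand area-ratio: reactivity in dollars (negative when subcritical).
rho_dollars = -a_prompt / a_delayed
print(rho_dollars < 0)
```

Given an effective delayed neutron fraction βeff, the dollar reactivity converts to absolute reactivity ρ = ρ($)·βeff and hence keff = 1/(1 − ρ).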
Neuroimaging of Human Balance Control: A Systematic Review
Wittenberg, Ellen; Thompson, Jessica; Nam, Chang S.; Franz, Jason R.
2017-01-01
This review examined 83 articles using neuroimaging modalities to investigate the neural correlates underlying static and dynamic human balance control, with the aim of supporting future mobile neuroimaging research in the balance control domain. Furthermore, this review analyzed the mobility of the neuroimaging hardware and research paradigms, as well as the analytical methodology used to identify and remove movement artifact in the acquired brain signal. We found that the majority of static balance control tasks utilized mechanical perturbations to invoke feet-in-place responses (27 out of 38 studies), while cognitive dual-task conditions were commonly used to challenge balance in dynamic balance control tasks (20 out of 32 studies). Enhanced brain activation during static balance control was supported by frequency analysis and event-related potential characteristics, whereas in dynamic balance control studies it was supported by spatial and frequency analysis. Twenty-three of the 50 studies utilizing EEG used independent component analysis to remove movement artifacts from the acquired brain signals. Lastly, only eight studies used truly mobile neuroimaging hardware systems. This review provides evidence to support an increase in brain activation in balance control tasks, regardless of mechanical, cognitive, or sensory challenges. Furthermore, the current body of literature demonstrates the use of advanced signal processing methodologies to analyze brain activity during movement. However, the static nature of neuroimaging hardware and conventional balance control paradigms prevent full mobility and limit our knowledge of neural mechanisms underlying balance control. PMID:28443007
NASA Astrophysics Data System (ADS)
Imms, Ryan; Hu, Sijung; Azorin-Peris, Vicente; Trico, Michaël; Summers, Ron
2014-03-01
Non-contact imaging photoplethysmography (PPG) is a recent development in the field of physiological data acquisition, currently undergoing a large amount of research to characterize and define the range of its capabilities. Contact-based PPG techniques have been broadly used in clinical scenarios for a number of years to obtain direct information about the degree of oxygen saturation for patients. With the advent of imaging techniques, there is strong potential to enable access to additional information such as multi-dimensional blood perfusion and saturation mapping. The further development of effective opto-physiological monitoring techniques is dependent upon novel modelling techniques coupled with improved sensor design and effective signal processing methodologies. The biometric signal and imaging processing platform (bSIPP) provides a comprehensive set of features for extraction and analysis of recorded iPPG data, enabling direct comparison with other biomedical diagnostic tools such as ECG and EEG. Additionally, utilizing information about the nature of tissue structure has enabled the generation of an engineering model describing the behaviour of light during its travel through the biological tissue. This enables the estimation of the relative oxygen saturation and blood perfusion in different layers of the tissue to be calculated, which has the potential to be a useful diagnostic tool.
Design and evaluation of a parametric model for cardiac sounds.
Ibarra-Hernández, Roilhi F; Alonso-Arévalo, Miguel A; Cruz-Gutiérrez, Alejandro; Licona-Chávez, Ana L; Villarreal-Reyes, Salvador
2017-10-01
Heart sound analysis plays an important role in the auscultative diagnosis process to detect the presence of cardiovascular diseases. In this paper we propose a novel parametric heart sound model that accurately represents normal and pathological cardiac audio signals, also known as phonocardiograms (PCG). The proposed model considers that the PCG signal is formed by the sum of two parts: one of them is deterministic and the other one is stochastic. The first part contains most of the acoustic energy. This part is modeled by the Matching Pursuit (MP) algorithm, which performs an analysis-synthesis procedure to represent the PCG signal as a linear combination of elementary waveforms. The second part, also called residual, is obtained after subtracting the deterministic signal from the original heart sound recording and can be accurately represented as an autoregressive process using the Linear Predictive Coding (LPC) technique. We evaluate the proposed heart sound model by performing subjective and objective tests using signals corresponding to different pathological cardiac sounds. The results of the objective evaluation show an average Percentage of Root-Mean-Square Difference of approximately 5% between the original heart sound and the reconstructed signal. For the subjective test we conducted a formal methodology for perceptual evaluation of audio quality with the assistance of medical experts. Statistical results of the subjective evaluation show that our model provides a highly accurate approximation of real heart sound signals. We are not aware of any previous heart sound model evaluated as rigorously as our proposal. Copyright © 2017 Elsevier Ltd. All rights reserved.
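The abstract gives no implementation details for the LPC stage; as an illustrative numpy sketch (the model order and the synthetic test process are assumptions, not from the paper), the autoregressive fit of the residual can be done with the autocorrelation method and the Levinson-Durbin recursion:

```python
import numpy as np

def lpc(x, order):
    """LPC coefficients a (with a[0] = 1) and final prediction error,
    via the autocorrelation method and Levinson-Durbin recursion."""
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # reflection coefficient from the current prediction error
        k = -(r[i] + a[1:i] @ r[1:i][::-1]) / err
        a_prev = a.copy()
        a[1:i] = a_prev[1:i] + k * a_prev[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k
    return a, err

# Synthetic AR(2) "residual": x[t] = 0.5 x[t-1] - 0.25 x[t-2] + e[t]
rng = np.random.default_rng(0)
e = rng.standard_normal(20000)
x = np.zeros_like(e)
for t in range(2, len(x)):
    x[t] = 0.5 * x[t - 1] - 0.25 * x[t - 2] + e[t]

a, _ = lpc(x, order=2)  # expected approximately [1, -0.5, 0.25]
```

With the convention A(z)X(z) = E(z), the estimated coefficients recover the generating process with opposite signs on the prediction weights.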
Mortar radiocarbon dating: preliminary accuracy evaluation of a novel methodology.
Marzaioli, Fabio; Lubritto, Carmine; Nonni, Sara; Passariello, Isabella; Capano, Manuela; Terrasi, Filippo
2011-03-15
Mortars represent a class of building and art materials that are widespread at archeological sites from the Neolithic period on. After about 50 years of experimentation, the possibility of evaluating their absolute chronology by means of radiocarbon ((14)C) still remains uncertain. With the use of a simplified mortar production process in the laboratory environment, this study shows the overall feasibility of a novel physical pretreatment for the isolation of the atmospheric (14)CO(2) (i.e., binder) signal absorbed by the mortars during their setting. This methodology is based on the assumption that an ultrasonic attack in liquid phase isolates a suspension of binder carbonates from bulk mortars. Isotopic ((13)C and (14)C), % C, X-ray diffractometry (XRD), and scanning electron microscopy (SEM) analyses were performed to characterize the proposed methodology. The applied protocol allows suppression of the fossil carbon (C) contamination originating from the incomplete burning of the limestone during the quicklime production, providing unbiased dating for "laboratory" mortars produced operating at historically adopted burning temperatures.
Imai, Takeshi; Hayakawa, Masayo; Ohe, Kazuhiko
2013-01-01
Prediction of synergistic or antagonistic effects of drug-drug interaction (DDI) in vivo has been of considerable interest over the years. Formal representation of pharmacological knowledge, such as an ontology, is indispensable for machine reasoning about possible DDIs. However, current pharmacology knowledge bases are not sufficient to provide formal representation of DDI information. With this background, this paper presents: (1) a description framework for a pharmacodynamics ontology; and (2) a methodology to utilize the pharmacodynamics ontology to detect different types of possible DDI pairs with supporting information such as the underlying pharmacodynamics mechanisms. We also evaluated our methodology in the field of drugs related to the noradrenaline signal transduction process, and 11 different types of possible DDI pairs were detected. The main features of our methodology are its capability to explain the reason for possible DDIs and its ability to distinguish different types of DDIs. These features will not only be useful for providing supporting information to prescribers, but also for large-scale monitoring of drug safety.
Assessing Similarity Among Individual Tumor Size Lesion Dynamics: The CICIL Methodology
Girard, Pascal; Ioannou, Konstantinos; Klinkhardt, Ute; Munafo, Alain
2018-01-01
Mathematical models of tumor dynamics generally omit information on individual target lesions (iTLs), and consider the most important variable to be the sum of tumor sizes (TS). However, differences in lesion dynamics might be predictive of tumor progression. To exploit this information, we have developed a novel and flexible approach for the non‐parametric analysis of iTLs, which integrates knowledge from signal processing and machine learning. We called this new methodology ClassIfication Clustering of Individual Lesions (CICIL). We used CICIL to assess similarities among the TS dynamics of 3,223 iTLs measured in 1,056 patients with metastatic colorectal cancer treated with cetuximab combined with irinotecan, in two phase II studies. We mainly observed similar dynamics among lesions within the same tumor site classification. In contrast, lesions in anatomic locations with different features showed different dynamics in about 35% of patients. The CICIL methodology has also been implemented in a user‐friendly and efficient Java‐based framework. PMID:29388396
Logical Modeling and Dynamical Analysis of Cellular Networks
Abou-Jaoudé, Wassim; Traynard, Pauline; Monteiro, Pedro T.; Saez-Rodriguez, Julio; Helikar, Tomáš; Thieffry, Denis; Chaouiya, Claudine
2016-01-01
The logical (or logic) formalism is increasingly used to model regulatory and signaling networks. Complementing these applications, several groups contributed various methods and tools to support the definition and analysis of logical models. After an introduction to the logical modeling framework and to several of its variants, we review here a number of recent methodological advances to ease the analysis of large and intricate networks. In particular, we survey approaches to determine model attractors and their reachability properties, to assess the dynamical impact of variations of external signals, and to consistently reduce large models. To illustrate these developments, we further consider several published logical models for two important biological processes, namely the differentiation of T helper cells and the control of mammalian cell cycle. PMID:27303434
Robust detection-isolation-accommodation for sensor failures
NASA Technical Reports Server (NTRS)
Weiss, J. L.; Pattipati, K. R.; Willsky, A. S.; Eterno, J. S.; Crawford, J. T.
1985-01-01
This report presents the results of a one-year study to: (1) develop a theory for Robust Failure Detection and Identification (FDI) in the presence of model uncertainty; (2) develop a design methodology which utilizes the robust FDI theory; (3) apply the methodology to a sensor FDI problem for the F-100 jet engine; and (4) demonstrate the application of the theory to the evaluation of alternative FDI schemes. Theoretical results in statistical discrimination are used to evaluate the robustness of residual signals (or parity relations) in terms of their usefulness for FDI. Furthermore, optimally robust parity relations are derived through the optimization of robustness metrics. This result can be viewed as a decentralization of the FDI process. A general structure for decentralized FDI is proposed, and robustness metrics are used for determining various parameters of the algorithm.
Huang, Weilin; Wang, Runqiu; Li, Huijian; Chen, Yangkang
2017-09-20
The microseismic method is an essential technique for monitoring the dynamic status of hydraulic fracturing during the development of unconventional reservoirs. However, one of the challenges in microseismic monitoring is that the seismic signals generated by microseismicity have extremely low amplitudes. We develop a methodology to unveil the signals that are smeared in the strong ambient noise and thus facilitate a more accurate arrival-time picking that will ultimately improve the localization accuracy. In the proposed technique, we decompose the recorded data into several morphological multi-scale components. In order to unveil the weak signal, we propose an orthogonalization operator which acts as a time-varying weighting in the morphological reconstruction. The orthogonalization operator is obtained using an inversion process. This orthogonalized morphological reconstruction can be interpreted as a projection of a higher-dimensional vector. We first test the proposed technique using a synthetic dataset. Then the proposed technique is applied to a field dataset recorded in a project in China, in which the signals induced by hydraulic fracturing are recorded by twelve three-component (3-C) geophones in a monitoring well. The result demonstrates that the orthogonalized morphological reconstruction can make extremely weak microseismic signals detectable.
In situ photoacoustic characterization for porous silicon growing: Detection principles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramirez-Gutierrez, C. F.; Licenciatura en Ingeniería Física, Facultad de Ingeniería, Universidad Autónoma de Querétaro, C. P. 76010 Querétaro, Qro.; Castaño-Yepes, J. D.
There are few methodologies for monitoring the in-situ formation of Porous Silicon (PS); one of them is photoacoustics. Previous works that reported the use of photoacoustics to study PS formation did not provide a physical explanation of the origin of the signal. In this paper, a physical explanation of the origin of the photoacoustic signal during PS etching is provided. The incident modulated radiation and changes in the reflectance are taken as thermal sources. A useful methodology is also proposed to determine the etching rate, porosity, and refractive index of a PS film by the determination of the sample thickness, using scanning electron microscopy images. This method was developed by carrying out two different experiments using the same anodization conditions. The first experiment consisted of growing samples with different etching times to prove the periodicity of the photoacoustic signal, while the second one grew samples using three different wavelengths that are correlated with the period of the photoacoustic signal. The last experiment showed that the period of the photoacoustic signal is proportional to the laser wavelength.
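As an illustrative arithmetic sketch (all numerical values below are hypothetical; the abstract reports only the proportionality between period and wavelength), the etch rate follows from the standard normal-incidence thin-film relation of one interference oscillation per λ/(2n) of layer growth:

```python
# Hypothetical numbers, not taken from the paper:
wavelength_nm = 808.0  # assumed probe laser wavelength
n_eff = 1.8            # assumed effective refractive index of the PS layer
period_s = 120.0       # assumed photoacoustic oscillation period

# One oscillation corresponds to lambda / (2 n) of added layer thickness
thickness_per_period_nm = wavelength_nm / (2.0 * n_eff)
etch_rate_nm_per_s = thickness_per_period_nm / period_s
print(round(thickness_per_period_nm, 1), round(etch_rate_nm_per_s, 2))
```

Because the thickness increment per period scales with λ, the oscillation period also scales with λ at a fixed etch rate, which is consistent with the wavelength dependence reported in the abstract.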
NASA Astrophysics Data System (ADS)
Serra, Roger; Lopez, Lautaro
2018-05-01
Different approaches to the detection of damages based on dynamic measurement of structures have appeared in the last decades. They were based, amongst others, on changes in natural frequencies, modal curvatures, strain energy or flexibility. Wavelet analysis has also been used to detect the abnormalities in modal shapes induced by damages. However, the majority of previous work used signals not corrupted by noise. Moreover, the damage influence for each mode shape was studied separately. This paper proposes a new methodology based on a combined modal wavelet transform strategy to cope with noisy signals while, at the same time, being able to extract the relevant information from each mode shape. The proposed methodology is then compared with the most frequently used and widely studied methods from the bibliography. To evaluate the performance of each method, their capacity to detect and localize damage is analyzed in different cases. The comparison is done by simulating the oscillations of a cantilever steel beam with and without a defect as a numerical case. The proposed methodology proved to outperform classical methods when handling noisy signals.
DOT National Transportation Integrated Search
2017-05-01
In this project, Florida Atlantic University researchers developed a methodology and software tools that allow objective, quantitative analysis of the performance of signal systems. : The researchers surveyed the state of practice for traffic signal ...
EMD-Based Methodology for the Identification of a High-Speed Train Running in a Gear Operating State
Bustos, Alejandro; Rubio, Higinio; Castejón, Cristina; García-Prada, Juan Carlos
2018-03-06
Efficient maintenance is a key consideration in railway transport systems, especially in high-speed trains, in order to avoid accidents with catastrophic consequences. In this sense, having a method that allows for the early detection of defects in critical elements, such as the bogie mechanical components, is crucial for increasing the availability of rolling stock and reducing maintenance costs. The main contribution of this work is the proposal of a methodology that, based on classical signal processing techniques, provides a set of parameters for the fast identification of the operating state of a critical mechanical system. With this methodology, the vibratory behaviour of a very complex mechanical system is characterised through variable inputs, which will allow for the detection of possible changes in the mechanical elements. This methodology is applied to a real high-speed train in commercial service, with the aim of studying the vibratory behaviour of the train (specifically, the bogie) before and after a maintenance operation. The results obtained with this methodology demonstrated the usefulness of the new procedure and allowed for the disclosure of reductions between 15% and 45% in the spectral power of selected Intrinsic Mode Functions (IMFs) after the maintenance operation. PMID:29509690
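As an illustrative numpy sketch (the sampling rate, frequency band, and signals below are hypothetical, not the train data), the reported 15-45% reductions correspond to a simple band-power comparison of a selected IMF before and after maintenance:

```python
import numpy as np

def band_power_reduction(before, after, fs, band):
    """Percentage reduction in spectral power within a frequency band,
    comparing a signal (e.g. a selected IMF) before vs. after maintenance."""
    def band_power(x):
        f = np.fft.rfftfreq(len(x), 1.0 / fs)
        p = np.abs(np.fft.rfft(x)) ** 2
        m = (f >= band[0]) & (f <= band[1])
        return p[m].sum()
    pb, pa = band_power(np.asarray(before, float)), band_power(np.asarray(after, float))
    return 100.0 * (pb - pa) / pb

# Hypothetical example: a 50 Hz vibration component whose amplitude
# drops by 20% after maintenance -> power drops by 36%.
fs = 1000.0
t = np.arange(1000) / fs
before = np.sin(2 * np.pi * 50 * t)
after = 0.8 * before
reduction = band_power_reduction(before, after, fs, (45.0, 55.0))
```

Since spectral power scales with amplitude squared, an amplitude reduction of 20% appears as a 36% power reduction, illustrating how the reported percentages relate to vibration levels.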
Hydrogel design of experiments methodology to optimize hydrogel for iPSC-NPC culture.
Lam, Jonathan; Carmichael, S Thomas; Lowry, William E; Segura, Tatiana
2015-03-11
Bioactive signals can be incorporated in hydrogels to direct encapsulated cell behavior. Design of experiments methodology systematically varies the signals to determine the individual and combinatorial effects of each factor on cell activity. This approach enables the optimization of the concentrations of three ligands (RGD, YIGSR, IKVAV) for the survival and differentiation of neural progenitor cells. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
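As an illustrative stdlib sketch (the concentration levels below are hypothetical; the abstract does not list the values used), a full-factorial design over the three ligands can be enumerated with itertools:

```python
from itertools import product

# Hypothetical concentration levels (e.g. in µg/mL) for each ligand;
# the actual levels used in the study are not given in the abstract.
levels = {
    "RGD":   [0, 100, 500],
    "YIGSR": [0, 100, 500],
    "IKVAV": [0, 100, 500],
}

# Full-factorial design: one run per combination of the three ligand levels.
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
print(len(runs))  # → 27
```

A full-factorial layout makes both main effects and interaction (combinatorial) effects estimable, which is what lets the study separate individual from combined ligand contributions.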
Lirio, R B; Dondériz, I C; Pérez Abalo, M C
1992-08-01
The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
Telephone-quality pathological speech classification using empirical mode decomposition.
Kaleem, M F; Ghoraani, B; Guergachi, A; Krishnan, S
2011-01-01
This paper presents a computationally simple and effective methodology based on empirical mode decomposition (EMD) for the classification of telephone-quality normal and pathological speech signals. EMD is used to decompose continuous normal and pathological speech signals into intrinsic mode functions, which are analyzed to extract physically meaningful and unique temporal and spectral features. Using continuous speech samples from a database of 51 normal and 161 pathological speakers, which has been modified to simulate telephone-quality speech under different levels of noise, a linear classifier is used with the feature vector thus obtained to achieve high classification accuracy, thereby demonstrating the effectiveness of the methodology. The classification accuracy reported in this paper (89.7% for a signal-to-noise ratio of 30 dB) is a significant improvement over previously reported results for the same task, and demonstrates the utility of our methodology for cost-effective remote voice pathology assessment over telephone channels.
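As an illustrative numpy sketch of the decomposition step (a simplified sifting loop with linearly interpolated envelopes; production EMD uses cubic-spline envelopes and formal stopping criteria), EMD splits a signal into intrinsic mode functions that sum back to the original by construction:

```python
import numpy as np

def sift(x, n_sifts=10):
    """Extract one IMF by repeatedly subtracting the mean of the upper
    and lower envelopes (linear interpolation through local extrema)."""
    h = np.asarray(x, float).copy()
    t = np.arange(len(h))
    for _ in range(n_sifts):
        dh = np.diff(h)
        maxima = np.where((dh[:-1] > 0) & (dh[1:] <= 0))[0] + 1
        minima = np.where((dh[:-1] < 0) & (dh[1:] >= 0))[0] + 1
        if len(maxima) < 2 or len(minima) < 2:
            break  # too few extrema: h is a residual trend
        upper = np.interp(t, maxima, h[maxima])
        lower = np.interp(t, minima, h[minima])
        h = h - (upper + lower) / 2.0
    return h

def emd(x, n_imfs=3):
    """Decompose x into IMFs plus a residual; the parts sum to x exactly."""
    imfs, r = [], np.asarray(x, float)
    for _ in range(n_imfs):
        imf = sift(r)
        imfs.append(imf)
        r = r - imf
    return imfs, r
```

The first IMFs capture the fastest oscillations, which is why features extracted from them can separate normal from pathological voice characteristics.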
An ultrasonic methodology for muscle cross section measurement in support of space flight
NASA Astrophysics Data System (ADS)
Hatfield, Thomas R.; Klaus, David M.; Simske, Steven J.
2004-09-01
The number one priority for any manned space mission is the health and safety of its crew. The study of the short and long term physiological effects on humans is paramount to ensuring crew health and mission success. One of the challenges associated in studying the physiological effects of space flight on humans, such as loss of bone and muscle mass, has been that of readily attaining the data needed to characterize the changes. The small sampling size of astronauts, together with the fact that most physiological data collection tends to be rather tedious, continues to hinder elucidation of the underlying mechanisms responsible for the observed changes that occur in space. Better characterization of the muscle loss experienced by astronauts requires that new technologies be implemented. To this end, we have begun to validate a 360° ultrasonic scanning methodology for muscle measurements and have performed empirical sampling of a limb surrogate for comparison. Ultrasonic wave propagation was simulated using 144 stations of rotated arm and calf MRI images. These simulations were intended to provide a preliminary check of the scanning methodology and data analysis before its implementation with hardware. Pulse-echo waveforms were processed for each rotation station to characterize fat, muscle, bone, and limb boundary interfaces. The percentage error between MRI reference values and calculated muscle areas, as determined from reflection points for calf and arm cross sections, was -2.179% and +2.129%, respectively. These successful simulations suggest that ultrasound pulse scanning can be used to effectively determine limb cross-sectional areas. Cross-sectional images of a limb surrogate were then used to simulate signal measurements at several rotation angles, with ultrasonic pulse-echo sampling performed experimentally at the same stations on the actual limb surrogate to corroborate the results. 
The objective of the surrogate sampling was to compare the signal output of the simulation tool used as a methodology validation for actual tissue signals. The disturbance patterns of the simulated and sampled waveforms were consistent. Although only discussed as a small part of the work presented, the sampling portion also helped identify important considerations such as tissue compression and transducer positioning for future work involving tissue scanning with this methodology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, S; Larsen, S; Wagoner, J
Seismic imaging and tracking methods have intelligence and monitoring applications. Current systems, however, do not adequately calibrate or model the unknown geological heterogeneity. Current systems are also not designed for rapid data acquisition and analysis in the field. This project seeks to build the core technological capabilities coupled with innovative deployment, processing, and analysis methodologies to allow seismic methods to be effectively utilized in the applications of seismic imaging and vehicle tracking where rapid (minutes to hours) and real-time analysis is required. The goal of this project is to build capabilities in acquisition system design, utilization of full three-dimensional (3D) finite difference modeling, as well as statistical characterization of geological heterogeneity. Such capabilities coupled with a rapid field analysis methodology based on matched field processing are applied to problems associated with surveillance, battlefield management, finding hard and deeply buried targets, and portal monitoring. This project, in support of LLNL's national-security mission, benefits the U.S. military and intelligence community. Fiscal year (FY) 2003 was the final year of this project. In the 2.5 years this project has been active, numerous and varied developments and milestones have been accomplished. A wireless communication module for seismic data was developed to facilitate rapid seismic data acquisition and analysis. The E3D code was enhanced to include topographic effects. Codes were developed to implement the Karhunen-Loeve (K-L) statistical methodology for generating geological heterogeneity that can be utilized in E3D modeling. The matched field processing methodology applied to vehicle tracking and based on a field calibration to characterize geological heterogeneity was tested and successfully demonstrated in a tank tracking experiment at the Nevada Test Site.
A three-seismic-array vehicle tracking testbed was installed on site at LLNL for testing real-time seismic tracking methods. A field experiment was conducted over a tunnel at the Nevada Test Site that quantified the tunnel reflection signal and, coupled with modeling, identified key needs and requirements in experimental layout of sensors. A large field experiment was conducted at the Lake Lynn Laboratory, a mine safety research facility in Pennsylvania, over a tunnel complex in realistic, difficult conditions. This experiment gathered the necessary data for a full 3D attempt to apply the methodology. The experiment also collected data to analyze the capabilities to detect and locate in-tunnel explosions for mine safety and other applications. In FY03 specifically, a large and complex simulation experiment was conducted that tested the full modeling-based approach to geological characterization using E2D, the K-L statistical methodology, and matched field processing applied to tunnel detection with surface seismic sensors. The simulation validated the full methodology and the need for geological heterogeneity to be accounted for in the overall approach. The Lake Lynn site area was geologically modeled using the code Earthvision to produce a 32 million node 3D model grid for E3D. Model linking issues were resolved and a number of full 3D model runs were accomplished using shot locations that matched the data. E3D-generated wavefield movies showed the reflection signal would be too small to be observed in the data due to trapped and attenuated energy in the weathered layer. An analysis of the few sensors coupled to bedrock did not improve the reflection signal strength sufficiently because the shots, though buried, were within the surface layer and hence attenuated.
The ability to model a complex 3D geological structure and calculate synthetic seismograms that are in good agreement with actual data (especially for surface waves and below the complex weathered layer) was demonstrated. We conclude that E3D is a powerful tool for assessing the conditions under which a tunnel could be detected in a specific geological setting. Finally, the Lake Lynn tunnel explosion data were analyzed using standard array processing techniques. The results showed that single detonations could be detected and located, but simultaneous detonations would require a strategic placement of arrays.
Generalization of the Poincare sphere to process 2D displacement signals
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Lamberti, Luciano
2017-06-01
Traditionally, the multiple phase method has been considered an essential tool for phase information recovery. The in-quadrature phase method, which theoretically is an alternative pathway to achieve the same goal, failed in actual applications. In a previous paper dealing with 1D signals, the authors showed that, properly implemented, the in-quadrature method yields phase values with the same accuracy as the multiple phase method. The present paper extends the methodology developed in 1D to 2D. This extension is not a straightforward process and requires the introduction of a number of additional concepts and developments. The concept of the monogenic function provides the tools required for the extension process. The monogenic function has a graphic representation through the Poincare sphere, familiar in the field of photoelasticity, and, through the developments introduced in this paper, is connected to the analysis of displacement fringe patterns. The paper is illustrated with examples of application showing that the multiple phase method and the in-quadrature method are two aspects of the same basic theoretical model.
Time-varying bispectral analysis of visually evoked multi-channel EEG
NASA Astrophysics Data System (ADS)
Chandran, Vinod
2012-12-01
Theoretical foundations of higher order spectral analysis are revisited to examine the use of time-varying bicoherence on non-stationary signals using a classical short-time Fourier approach. A methodology is developed to apply this to evoked EEG responses where a stimulus-locked time reference is available. Short-time windowed ensembles of the response at the same offset from the reference are considered as ergodic cyclostationary processes within a non-stationary random process. Bicoherence can be estimated reliably with known levels at which it is significantly different from zero and can be tracked as a function of offset from the stimulus. When this methodology is applied to multi-channel EEG, it is possible to obtain information about phase synchronization at different regions of the brain as the neural response develops. The methodology is applied to analyze evoked EEG response to flash visual stimuli to the left and right eye separately. The EEG electrode array is segmented based on bicoherence evolution with time using the mean absolute difference as a measure of dissimilarity. Segment maps confirm the importance of the occipital region in visual processing and demonstrate a link between the frontal and occipital regions during the response. Maps are constructed using bicoherence at bifrequencies that include the alpha band frequency of 8 Hz as well as 4 and 20 Hz. Differences are observed between responses from the left eye and the right eye, and also between subjects. The methodology shows potential as a neurological functional imaging technique that can be further developed for diagnosis and monitoring using scalp EEG, which is less invasive and less expensive than magnetic resonance imaging.
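As an illustrative numpy sketch (the window, FFT length, and test signal are assumptions, not the paper's settings), bicoherence over an ensemble of stimulus-locked trials can be estimated and normalised so that values lie in [0, 1]:

```python
import numpy as np

def bicoherence(trials, nfft=128):
    """Ensemble bicoherence b(f1, f2) over stimulus-locked trials
    (one trial per row); Cauchy-Schwarz normalisation keeps values in [0, 1]."""
    X = np.fft.fft(trials * np.hanning(trials.shape[1]), n=nfft, axis=1)
    f = nfft // 4                     # principal bifrequency region
    X1 = X[:, :f, None]               # X(f1)
    X2 = X[:, None, :f]               # X(f2)
    idx = np.arange(f)[:, None] + np.arange(f)[None, :]
    X3 = X[:, :2 * f][:, idx]         # X(f1 + f2)
    B = np.mean(X1 * X2 * np.conj(X3), axis=0)        # bispectrum estimate
    den = (np.mean(np.abs(X1 * X2) ** 2, axis=0)
           * np.mean(np.abs(X3) ** 2, axis=0)) + 1e-12
    return np.abs(B) ** 2 / den
```

A quadratically phase-coupled triplet (f1, f2, f1+f2) produces bicoherence near 1 at that bifrequency even when the individual trial phases are random, which is what makes the statistic useful for detecting phase synchronization.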
NASA Astrophysics Data System (ADS)
Traversaro, Francisco; O. Redelico, Francisco
2018-04-01
In nonlinear dynamics, and to a lesser extent in other fields, a widely used measure of complexity is the Permutation Entropy. But there is still no known method to determine the accuracy of this measure, and there has been little research on the statistical properties of this quantity that characterizes time series. The literature describes some resampling methods for quantities used in nonlinear dynamics - such as the largest Lyapunov exponent - but these seem to fail. In this contribution, we propose a parametric bootstrap methodology using a symbolic representation of the time series to obtain the distribution of the Permutation Entropy estimator. We perform several time series simulations given by well-known stochastic processes: the 1/fα noise family, and show in each case that the proposed accuracy measure is as efficient as the one obtained by the frequentist approach of repeating the experiment. The complexity of brain electrical activity, measured by the Permutation Entropy, has been extensively used in epilepsy research for the detection of dynamical changes in the electroencephalogram (EEG) signal with no consideration of the variability of this complexity measure. An application of the parametric bootstrap methodology is used to compare normal and pre-ictal EEG signals.
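As an illustrative numpy sketch (the embedding order, delay, and bootstrap size are assumptions), Bandt-Pompe permutation entropy and a parametric bootstrap of its estimator can be written as:

```python
import numpy as np
from math import factorial

def ordinal_counts(x, order=3, delay=1):
    """Counts of Bandt-Pompe ordinal patterns in a 1-D series."""
    x = np.asarray(x, float)
    n = len(x) - (order - 1) * delay
    pats = np.array([np.argsort(x[i:i + order * delay:delay])
                     for i in range(n)])
    _, counts = np.unique(pats, axis=0, return_counts=True)
    return counts

def perm_entropy(counts, order=3):
    """Normalised permutation entropy (0 to 1) from pattern counts."""
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum() / np.log2(factorial(order)))

def pe_bootstrap(x, n_boot=500, order=3, delay=1, seed=0):
    """Parametric bootstrap sketch: resample pattern counts from the
    fitted multinomial and recompute the entropy for each replicate."""
    counts = ordinal_counts(x, order, delay)
    n, p = counts.sum(), counts / counts.sum()
    rng = np.random.default_rng(seed)
    reps = []
    for c in rng.multinomial(n, p, size=n_boot):
        reps.append(perm_entropy(c[c > 0], order))
    return np.array(reps)
```

The spread of the bootstrap replicates is the accuracy measure the abstract refers to: it estimates the sampling variability of the Permutation Entropy without repeating the experiment.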
Zhang, Jiwei; Zhang, Yanmei; Zhong, Yaohua; Qu, Yinbo; Wang, Tianhong
2012-01-01
Background The model cellulolytic fungus Trichoderma reesei (teleomorph Hypocrea jecorina) is capable of responding to environmental cues to compete for nutrients in its natural saprophytic habitat despite its genome encoding fewer degradative enzymes. Efficient signalling pathways for the perception and interpretation of environmental signals are indispensable in this process. Ras GTPases represent a kind of critical signal protein involved in signal transduction and the regulation of gene expression. In T. reesei the genome contains two Ras subfamily small GTPases, TrRas1 and TrRas2, homologous to Ras1 and Ras2 from S. cerevisiae, but their functions remain unknown. Methodology/Principal Findings Here, we have investigated the roles of the GTPases TrRas1 and TrRas2 during fungal morphogenesis and cellulase gene expression. We show that both TrRas1 and TrRas2 play important roles in cellular processes such as polarized apical growth, hyphal branch formation, sporulation and cAMP level adjustment, while TrRas1 is more dominant in these processes. Strikingly, we find that TrRas2 is involved in the modulation of cellulase gene expression. Deletion of TrRas2 results in considerably decreased transcription of cellulolytic genes upon growth on cellulose. Although the strain carrying a constitutively activated TrRas2G16V allele exhibits increased cellulase gene transcription, the cbh1 and cbh2 expression in this mutant still strictly depends on cellulose, indicating that TrRas2 does not directly mediate the transmission of the cellulose signal. In addition, our data suggest that the effect of TrRas2 on cellulase genes is exerted through regulation of the transcript abundance of cellulase transcription factors such as Xyr1, but the influence is independent of the cAMP signalling pathway. Conclusions/Significance Together, these findings elucidate the functions of Ras signalling in T. reesei in cellular morphogenesis, and especially in cellulase gene expression, which contributes to deciphering the powerful competitive ability of plant cell wall degrading fungi in nature. PMID:23152805
Imaging intraflagellar transport in mammalian primary cilia.
Besschetnova, Tatiana Y; Roy, Barnali; Shah, Jagesh V
2009-01-01
The primary cilium is a specialized organelle that projects from the surface of many cell types. Unlike its motile counterpart it cannot beat but does transduce extracellular stimuli into intracellular signals and acts as a specialized subcellular compartment. The cilium is built and maintained by the transport of proteins and other biomolecules into and out of this compartment. The trafficking machinery for the cilium is referred to as IFT or intraflagellar transport. It was originally identified in the green algae Chlamydomonas and has been discovered throughout the evolutionary tree. The IFT machinery is widely conserved and acts to establish, maintain, and disassemble cilia and flagella. Understanding the role of IFT in cilium signaling and regulation requires a methodology for observing it directly. Here we describe current methods for observing the IFT process in mammalian primary cilia through the generation of fluorescent protein fusions and their expression in ciliated cell lines. The observation protocol uses high-resolution time-lapse microscopy to provide detailed quantitative measurements of IFT particle velocities in wild-type cells or in the context of genetic or other perturbations. Direct observation of IFT trafficking will provide a unique tool to dissect the processes that govern cilium regulation and signaling. 2009 Elsevier Inc. All rights reserved.
Automatic detection of freezing of gait events in patients with Parkinson's disease.
Tripoliti, Evanthia E; Tzallas, Alexandros T; Tsipouras, Markos G; Rigas, George; Bougia, Panagiota; Leontiou, Michael; Konitsiotis, Spiros; Chondrogiorgi, Maria; Tsouli, Sofia; Fotiadis, Dimitrios I
2013-04-01
The aim of this study is to detect freezing of gait (FoG) events in patients suffering from Parkinson's disease (PD) using signals received from wearable sensors (six accelerometers and two gyroscopes) placed on the patients' body. For this purpose, an automated methodology has been developed which consists of four stages. In the first stage, missing values due to signal loss or degradation are replaced; then (second stage) low-frequency components of the raw signal are removed. In the third stage, the entropy of the raw signal is calculated. Finally (fourth stage), four classification algorithms have been tested (Naïve Bayes, Random Forests, Decision Trees and Random Tree) in order to detect the FoG events. The methodology has been evaluated using several different configurations of sensors in order to determine the set of sensors which produces optimal FoG episode detection. Signals were recorded from five healthy subjects, five patients with PD who presented the symptom of FoG, and six patients with PD who did not present FoG events. The signals included 93 FoG events with 405.6 s total duration. The results indicate that the proposed methodology is able to detect FoG events with 81.94% sensitivity, 98.74% specificity, 96.11% accuracy and 98.6% area under curve (AUC) using the signals from all sensors and the Random Forests classification algorithm. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Vital physical signals measurements using a webcam
NASA Astrophysics Data System (ADS)
Ouyang, Jianfei; Yan, Yonggang; Yao, Lifeng
2013-10-01
Non-contact and remote measurements of vital physical signals are important for reliable and comfortable physiological self-assessment. In this paper, we provide a new video-based methodology for remote and fast measurement of vital physical signals such as cardiac pulse and breathing rate. A webcam is used to capture color video of a human face or wrist, and a photoplethysmography (PPG) technique is applied to perform the measurements of the vital signals. A novel sequential blind signal extraction methodology is applied to the color video under normal lighting conditions, based on correlation analysis between the green trace and the source signals. The approach is successfully applied to the measurement of vital signals under different illumination conditions, in which the target signal can still be extracted accurately. To assess its advantages, the measurement time was recorded for a large number of cases. The experimental results show that it takes less than 30 seconds to measure the vital physical signals using the presented technique. The study indicates that the proposed approach is feasible for the PPG technique, and provides a way to study the relationship of the signal for different ROIs in future research.
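A core step in webcam PPG is recovering the pulse rate from the mean green-channel trace of the region of interest. The sketch below is an assumed simplification (the paper's sequential blind signal extraction is not reproduced): it simply locates the dominant DFT peak inside a plausible heart-rate band. The function name and band limits are illustrative.

```python
import math
import cmath

def dominant_frequency(trace, fs):
    """Return the frequency (Hz) of the largest DFT peak within a
    physiologic pulse band (0.7-3 Hz, i.e. 42-180 beats per minute)."""
    n = len(trace)
    mean = sum(trace) / n
    x = [v - mean for v in trace]          # remove the DC level first
    best_f, best_mag = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fs / n
        if not 0.7 <= f <= 3.0:
            continue
        coeff = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_f, best_mag = f, abs(coeff)
    return best_f
```

For a 30 fps webcam, ten seconds of video (300 frames) gives 0.1 Hz frequency resolution, i.e. 6 beats per minute, which motivates the sub-30-second measurement times reported above.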
Modeling Nonlinear Errors in Surface Electromyography Due To Baseline Noise: A New Methodology
Law, Laura Frey; Krishnan, Chandramouli; Avin, Keith
2010-01-01
The surface electromyographic (EMG) signal is often contaminated by some degree of baseline noise. It is customary for scientists to subtract baseline noise from the measured EMG signal prior to further analyses based on the assumption that baseline noise adds linearly to the observed EMG signal. The stochastic nature of both the baseline and EMG signal, however, may invalidate this assumption. Alternately, “true” EMG signals may be either minimally or nonlinearly affected by baseline noise. This information is particularly relevant at low contraction intensities when signal-to-noise ratios (SNR) may be lowest. Thus, the purpose of this simulation study was to investigate the influence of varying levels of baseline noise (approximately 2 – 40 % maximum EMG amplitude) on mean EMG burst amplitude and to assess the best means to account for signal noise. The simulations indicated baseline noise had minimal effects on mean EMG activity for maximum contractions, but increased nonlinearly with increasing noise levels and decreasing signal amplitudes. Thus, the simple baseline noise subtraction resulted in substantial error when estimating mean activity during low intensity EMG bursts. Conversely, correcting EMG signal as a nonlinear function of both baseline and measured signal amplitude provided highly accurate estimates of EMG amplitude. This novel nonlinear error modeling approach has potential implications for EMG signal processing, particularly when assessing co-activation of antagonist muscles or small amplitude contractions where the SNR can be low. PMID:20869716
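The contrast between linear baseline subtraction and a nonlinear correction can be illustrated numerically. This is a hedged sketch under the common assumption that EMG and baseline noise are independent zero-mean processes, so their powers (not amplitudes) add; it is not the authors' exact error model.

```python
import math
import random

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def linear_correction(meas_rms, noise_rms):
    """Customary approach: subtract the baseline amplitude directly."""
    return meas_rms - noise_rms

def quadrature_correction(meas_rms, noise_rms):
    """Nonlinear model: independent noise adds to the signal in power,
    so the true amplitude is recovered in quadrature."""
    return math.sqrt(max(meas_rms ** 2 - noise_rms ** 2, 0.0))

random.seed(42)
n = 20000
true_rms, noise_rms = 0.10, 0.08               # a low-SNR contraction
emg   = [random.gauss(0.0, true_rms) for _ in range(n)]
noise = [random.gauss(0.0, noise_rms) for _ in range(n)]
meas  = [e + b for e, b in zip(emg, noise)]

lin  = linear_correction(rms(meas), noise_rms)
quad = quadrature_correction(rms(meas), noise_rms)
```

At this SNR the linear subtraction underestimates the true amplitude by roughly half, while the quadrature correction recovers it closely, which mirrors the abstract's finding that simple subtraction breaks down for low-intensity bursts.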
Determination of smoke plume and layer heights using scanning lidar data
Vladimir A. Kovalev; Alexander Petkov; Cyle Wold; Shawn Urbanski; Wei Min Hao
2009-01-01
The methodology of using mobile scanning lidar data for investigation of smoke plume rise and high-resolution smoke dispersion is considered. The methodology is based on the lidar-signal transformation proposed recently [Appl. Opt. 48, 2559 (2009)]. In this study, similar methodology is used to create the atmospheric heterogeneity height indicator (HHI...
High-rate RTK and PPP multi-GNSS positioning for small-scale dynamic displacements monitoring
NASA Astrophysics Data System (ADS)
Paziewski, Jacek; Sieradzki, Rafał; Baryła, Radosław; Wielgosz, Pawel
2017-04-01
The monitoring of dynamic displacements and deformations of engineering structures such as buildings, towers and bridges is of great interest for several practical and theoretical reasons. The most important is to provide the information required for safe maintenance of these constructions. The high temporal resolution and precision of GNSS observations predestine this technology for the most demanding applications in terms of accuracy, availability and reliability. The GNSS technique, supported by an appropriate processing methodology, may meet the specific demands and requirements of ground and structure monitoring. Thus, high-rate multi-GNSS signals may be used as a reliable source of information on dynamic displacements of ground and engineering structures, also in real-time applications. In this study we present initial results of the application of precise relative GNSS positioning for the detection of small-scale (cm-level), high-temporal-resolution dynamic displacements. The methodology and algorithms applied in self-developed software allowing for relative positioning using high-rate dual-frequency phase and pseudorange GPS+Galileo observations are also given. Additionally, an attempt was made to apply the Precise Point Positioning technique to this task. The experiment used observations obtained from high-rate (20 Hz) geodetic receivers. The dynamic displacements were simulated using a specially constructed device moving a GNSS antenna with a dedicated amplitude and frequency. The obtained results indicate the possibility of detecting dynamic displacements of the GNSS antenna, even at the level of a few millimetres, using both relative and Precise Point Positioning techniques after suitable signal processing.
Multiscale analysis of neural spike trains.
Ramezan, Reza; Marriott, Paul; Chenouri, Shojaeddin
2014-01-30
This paper studies the multiscale analysis of neural spike trains, through both graphical and Poisson process approaches. We introduce the interspike interval plot, which simultaneously visualizes characteristics of neural spiking activity at different time scales. Using an inhomogeneous Poisson process framework, we discuss multiscale estimates of the intensity functions of spike trains. We also introduce the windowing effect for two multiscale methods. Using quasi-likelihood, we develop bootstrap confidence intervals for the multiscale intensity function. We provide a cross-validation scheme, to choose the tuning parameters, and study its unbiasedness. Studying the relationship between the spike rate and the stimulus signal, we observe that adjusting for the first spike latency is important in cross-validation. We show, through examples, that the correlation between spike trains and spike count variability can be multiscale phenomena. Furthermore, we address the modeling of the periodicity of the spike trains caused by a stimulus signal or by brain rhythms. Within the multiscale framework, we introduce intensity functions for spike trains with multiplicative and additive periodic components. Analyzing a dataset from the retinogeniculate synapse, we compare the fit of these models with the Bayesian adaptive regression splines method and discuss the limitations of the methodology. Computational efficiency, which is usually a challenge in the analysis of spike trains, is one of the highlights of these new models. In an example, we show that the reconstruction quality of a complex intensity function demonstrates the ability of the multiscale methodology to crack the neural code. Copyright © 2013 John Wiley & Sons, Ltd.
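A simple way to see the multiscale idea is a kernel estimate of the intensity function computed at several bandwidths. The sketch below is an illustrative inhomogeneous-Poisson-style estimator, not the paper's quasi-likelihood machinery; the Gaussian kernel and the bandwidth values are assumptions.

```python
import math

def gaussian_kernel_intensity(spikes, t, bandwidth):
    """Kernel estimate of the firing intensity (spikes/s) at time t."""
    c = 1.0 / (bandwidth * math.sqrt(2.0 * math.pi))
    return sum(c * math.exp(-0.5 * ((t - s) / bandwidth) ** 2)
               for s in spikes)

def multiscale_intensity(spikes, times, bandwidths):
    """One intensity curve per time scale (bandwidth): narrow bandwidths
    resolve bursts, wide bandwidths capture the slow rate trend."""
    return {b: [gaussian_kernel_intensity(spikes, t, b) for t in times]
            for b in bandwidths}
```

Each kernel integrates to one, so every scale's intensity curve integrates to the total spike count; the fine scale resolves a burst as a sharp peak while the coarse scale smears it into the background rate.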
NASA Technical Reports Server (NTRS)
Belcastro, C. M.
1984-01-01
A methodology was developed to assess the upset susceptibility/reliability of a computer system onboard an aircraft flying through a lightning environment. Upset error modes in a general-purpose microprocessor were studied. The upset tests involved the random input of analog transients, which model lightning-induced signals, onto interface lines of an 8080-based microcomputer, from which upset error data were recorded. The program code executed on the microprocessor during the tests is designed to exercise all of the machine cycles and memory addressing techniques implemented in the 8080 central processing unit. A statistical analysis is presented in which possible correlations are established between the probability of upset occurrence and transient signal inputs during specific processing states and operations. A stochastic upset susceptibility model for the 8080 microprocessor is presented. The susceptibility of this microprocessor to upset, once analog transients have entered the system, is determined analytically by calculating the state probabilities of the stochastic model.
Sun, Junfeng; Li, Zhijun; Tong, Shanbao
2012-01-01
Functional neural connectivity is drawing increasing attention in neuroscience research. To infer functional connectivity from observed neural signals, various methods have been proposed. Among them, phase synchronization analysis is an important and effective one, which examines the relationship of instantaneous phase between neural signals while neglecting the influence of their amplitudes. In this paper, we review advances in the methodology of phase synchronization analysis. In particular, we discuss the definitions of instantaneous phase, the indexes of phase synchronization and their significance tests, the issues that may affect the detection of phase synchronization, and the extensions of phase synchronization analysis. In practice, phase synchronization analysis may be affected by observational noise, insufficient samples of the signals, volume conduction, and the reference used in recording neural signals. We make comments and suggestions on these issues so as to better apply phase synchronization analysis to inferring functional connectivity from neural signals. PMID:22577470
Single Channel EEG Artifact Identification Using Two-Dimensional Multi-Resolution Analysis.
Taherisadr, Mojtaba; Dehzangi, Omid; Parsaei, Hossein
2017-12-13
As a diagnostic monitoring approach, electroencephalogram (EEG) signals can be decoded by signal processing methodologies for various health monitoring purposes. However, EEG recordings are contaminated by other interferences, particularly facial and ocular artifacts generated by the user. This is specifically an issue during continuous EEG recording sessions; separating such artifacts from useful EEG components is therefore a key step in using EEG signals for either physiological monitoring and diagnosis or brain-computer interfaces. In this study, we aim to design a new generic framework to process and characterize EEG recordings as multi-component and non-stationary signals, with the aim of localizing and identifying their components (e.g., artifacts). In the proposed method, we bring three complementary algorithms together to enhance the efficiency of the system: time-frequency (TF) analysis and representation, two-dimensional multi-resolution analysis (2D MRA), and feature extraction and classification. A combination of spectro-temporal and geometric features is then extracted by combining key instantaneous TF space descriptors, which enables the system to characterize the non-stationarities in the EEG dynamics. We fit a curvelet transform (as an MRA method) to the 2D TF representation of EEG segments to decompose the given space into various levels of resolution. Such a decomposition efficiently improves the analysis of TF spaces with different characteristics (e.g., resolution). Our experimental results demonstrate that the combination of expansion to TF space, analysis using MRA, extraction of a set of suitable features, and application of a proper predictive model is effective in enhancing EEG artifact identification performance. We also compare the performance of the designed system with another common EEG signal processing technique, namely the 1D wavelet transform. Our experimental results reveal that the proposed method outperforms the 1D wavelet transform.
Lumaca, Massimo; Baggio, Giosuè
2018-01-01
In their commentary on our work, Ravignani and Verhoef (2018) raise concerns about two methodological aspects of our experimental paradigm (the signaling game): (1) the use of melodic signals of fixed length and duration, and (2) the fact that signals are endowed with meaning. They argue that music is hardly a semantic system and that our methodological choices may limit the capacity of our paradigm to shed light on the emergence and evolution of a number of putative musical universals. We reply that musical systems are semantic systems and that the aim of our research is not to study musical universals as such, but to compare more closely the kinds of principles that organize meaning and structure in linguistic and musical systems.
CT radiation profile width measurement using CR imaging plate raw data
Yang, Chang‐Ying Joseph
2015-01-01
This technical note demonstrates computed tomography (CT) radiation profile measurement using computed radiography (CR) imaging plate raw data, showing that it is possible to perform the CT collimation width measurement in a single scan without saturating the imaging plate. Previously described methods require careful adjustments to the CR reader settings in order to avoid signal clipping in the CR processed image. CT radiation profile measurements were taken as part of routine quality control on 14 CT scanners from four vendors. CR cassettes were placed on the CT scanner bed, raised to isocenter, and leveled. Axial scans were taken at all available collimations, advancing the cassette for each scan. The CR plates were processed and raw CR data were analyzed using MATLAB scripts to measure collimation widths. The raw data approach was compared with previously established methodology. The quality control analysis scripts are released as open source under a Creative Commons license. A log-linear relationship was found between raw pixel value and air kerma, and raw data collimation width measurements were in agreement with CR-processed, bit-reduced data using the previously described methodology. The raw data approach, with its intrinsically wider dynamic range, allows improved measurement flexibility and precision. As a result, we demonstrate a methodology for CT collimation width measurements using a single CT scan, without the need for CR scanning parameter adjustments, which is more convenient for routine quality control work. PACS numbers: 87.57.Q‐, 87.59.bd, 87.57.uq PMID:26699559
NASA Astrophysics Data System (ADS)
Pal, Alok Ranjan; Saha, Diganta; Dash, Niladri Sekhar; Pal, Antara
2018-05-01
An attempt is made in this paper to report how a supervised methodology has been adopted, with necessary modifications, for the task of word sense disambiguation in Bangla. At the initial stage, the Naïve Bayes probabilistic model, adopted as a baseline method for sense classification, yields a moderate result of 81% accuracy when applied to a database of the 19 (nineteen) most frequently used ambiguous Bangla words. On an experimental basis, the baseline method is modified with two extensions: (a) inclusion of a lemmatization process into the system, and (b) bootstrapping of the operational process. As a result, the accuracy of the method improves slightly, to 84%, which is a positive signal for the whole process of disambiguation as it opens scope for further modification of the existing method for better results. The data sets used for this experiment include the Bangla POS-tagged corpus obtained from the Indian Languages Corpora Initiative, and the Bangla WordNet, an online sense inventory developed at the Indian Statistical Institute, Kolkata. The paper also reports the challenges and pitfalls of the work that have been closely observed and addressed to achieve the expected level of accuracy.
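The Naïve Bayes baseline for sense classification can be sketched as a bag-of-words classifier with add-one (Laplace) smoothing. The example below uses hypothetical English context words purely for illustration (the actual system works on Bangla corpus data) and omits the lemmatization and bootstrapping extensions described above.

```python
import math
from collections import Counter, defaultdict

def train_nb(labelled_contexts):
    """labelled_contexts: list of (context_word_list, sense_label) pairs."""
    sense_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, sense in labelled_contexts:
        sense_counts[sense] += 1
        for w in words:
            word_counts[sense][w] += 1
            vocab.add(w)
    return sense_counts, word_counts, vocab

def classify(context, model):
    """Pick the sense maximizing log P(sense) + sum log P(word|sense),
    with add-one smoothing so unseen words never zero out a sense."""
    sense_counts, word_counts, vocab = model
    total = sum(sense_counts.values())
    best, best_score = None, -math.inf
    for sense, sc in sense_counts.items():
        score = math.log(sc / total)
        denom = sum(word_counts[sense].values()) + len(vocab)
        for w in context:
            score += math.log((word_counts[sense][w] + 1) / denom)
        if score > best_score:
            best, best_score = sense, score
    return best
```

Bootstrapping, as used in the paper's second extension, would re-run `train_nb` after adding confidently classified unlabelled contexts to the training set.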
NASA Astrophysics Data System (ADS)
Huang, Weilin; Wang, Runqiu; Chen, Yangkang
2018-05-01
Microseismic signals are typically weak compared with the strong background noise. In order to effectively detect weak signals in microseismic data, we propose a mathematical morphology based approach. We decompose the initial data into several morphological multiscale components. For detection of the weak signal, a non-stationary weighting operator is proposed and introduced into the reconstruction of the data from the morphological multiscale components. The non-stationary weighting operator can be obtained by solving an inversion problem. The regularized non-stationary method can be understood as a non-stationary matching filtering method, where the matching filter has the same size as the data to be filtered. In this paper, we provide detailed algorithmic descriptions and analysis. The detailed algorithm framework, parameter selection and computational issues for the regularized non-stationary morphological reconstruction (RNMR) method are presented. We validate the presented method through a comprehensive analysis of different data examples. We first test the proposed technique using a synthetic data set. Then the proposed technique is applied to a field project, where the signals induced by hydraulic fracturing are recorded by 12 three-component geophones in a monitoring well. The result demonstrates that RNMR can improve the detectability of weak microseismic signals. Using the processed data, the short-term-average over long-term-average (STA/LTA) picking algorithm and Geiger's method are applied to obtain new locations of microseismic events. In addition, we show that the proposed RNMR method can be used not only on microseismic data but also on reflection seismic data to detect weak signals. We also discuss the extension of RNMR from 1-D to 2-D or higher-dimensional versions.
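The morphological multiscale decomposition at the heart of the method can be illustrated in 1-D with flat structuring elements: an opening at scale s removes positive peaks narrower than s, so differences of openings at successive scales isolate features by width. This is a minimal sketch of the decomposition step only; the paper's non-stationary weighted reconstruction is not reproduced.

```python
def erode(x, size):
    half = size // 2
    n = len(x)
    return [min(x[max(0, i - half):min(n, i + half + 1)]) for i in range(n)]

def dilate(x, size):
    half = size // 2
    n = len(x)
    return [max(x[max(0, i - half):min(n, i + half + 1)]) for i in range(n)]

def opening(x, size):
    """Opening (erosion then dilation) removes positive peaks narrower
    than the structuring element while preserving wider features."""
    return dilate(erode(x, size), size)

def multiscale_components(x, sizes):
    """Morphological decomposition: the difference between the signal and
    its opening at each successive scale isolates features of that width;
    the final residual holds the coarsest-scale content."""
    comps, prev = [], x
    for s in sorted(sizes):
        opened = opening(prev, s)
        comps.append([a - b for a, b in zip(prev, opened)])
        prev = opened
    comps.append(prev)
    return comps
```

By construction the components sum back to the original data, which is what makes a weighted re-summation (as in RNMR) a reconstruction rather than just a filter.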
Learning physical descriptors for materials science by compressed sensing
NASA Astrophysics Data System (ADS)
Ghiringhelli, Luca M.; Vybiral, Jan; Ahmetcik, Emre; Ouyang, Runhai; Levchenko, Sergey V.; Draxl, Claudia; Scheffler, Matthias
2017-02-01
The availability of big data in materials science offers new routes for analyzing materials properties and functions and achieving scientific understanding. Finding structure in these data that is not directly visible by standard tools and exploitation of the scientific information requires new and dedicated methodology based on approaches from statistical learning, compressed sensing, and other recent methods from applied mathematics, computer science, statistics, signal processing, and information science. In this paper, we explain and demonstrate a compressed-sensing based methodology for feature selection, specifically for discovering physical descriptors, i.e., physical parameters that describe the material and its properties of interest, and associated equations that explicitly and quantitatively describe those relevant properties. As showcase application and proof of concept, we describe how to build a physical model for the quantitative prediction of the crystal structure of binary compound semiconductors.
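Greedy sparse selection conveys the flavor of compressed-sensing descriptor discovery: repeatedly pick the candidate feature most correlated with the residual of the target property, then deflate the residual. This matching-pursuit sketch is a simplified stand-in for the LASSO-type methodology described above; the data layout (each candidate descriptor as a list of per-material values) is an assumption.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    s = dot(v, v) ** 0.5
    return [x / s for x in v]

def matching_pursuit(candidates, target, n_select):
    """Greedy sparse selection: at each step pick the candidate descriptor
    most correlated with the current residual, then subtract its
    contribution from the residual."""
    residual = list(target)
    chosen = []
    for _ in range(n_select):
        best_j, best_c = None, 0.0
        for j, col in enumerate(candidates):
            if j in chosen:
                continue
            c = dot(normalize(col), residual)
            if abs(c) > abs(best_c):
                best_j, best_c = j, c
        chosen.append(best_j)
        u = normalize(candidates[best_j])
        residual = [r - best_c * ui for r, ui in zip(residual, u)]
    return chosen
```

A real descriptor search would also orthogonalize the selected atoms (orthogonal matching pursuit) or solve an l1-regularized problem over a huge generated feature space, but the selection principle is the same.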
Estimation of multiple accelerated motions using chirp-Fourier transform and clustering.
Alexiadis, Dimitrios S; Sergiadis, George D
2007-01-01
Motion estimation in the spatiotemporal domain has been extensively studied and many methodologies have been proposed, which, however, cannot handle both time-varying and multiple motions. Extending previously published ideas, we present an efficient method for estimating multiple, linearly time-varying motions. It is shown that the estimation of accelerated motions is equivalent to the parameter estimation of superposed chirp signals. From this viewpoint, one can exploit established signal processing tools such as the chirp-Fourier transform. It is shown that accelerated motion results in energy concentration along planes in the 4-D space of spatial frequencies, temporal frequency, and chirp rate. Using fuzzy c-planes clustering, we estimate the plane/motion parameters. The effectiveness of our method is verified on both synthetic and real sequences, and its advantages are highlighted.
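The link between accelerated motion and chirp parameter estimation can be illustrated with a simple dechirping search: for each candidate chirp rate, multiply the signal by the conjugate chirp and check how sharply the spectrum concentrates; the correct rate collapses the chirp to a pure tone. The grid search below is an assumed toy version of the chirp-Fourier idea, not the fuzzy c-planes clustering used in the paper.

```python
import math
import cmath

def estimate_chirp_rate(x, rates, fs):
    """Dechirp search: for each candidate rate a, multiply by
    exp(-j*pi*a*t^2); the correct rate turns the chirp into a pure tone,
    maximizing the largest DFT magnitude."""
    n = len(x)
    best_rate, best_mag = None, -1.0
    for a in rates:
        d = [x[t] * cmath.exp(-1j * math.pi * a * (t / fs) ** 2)
             for t in range(n)]
        for k in range(n // 2):
            coeff = sum(d[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            if abs(coeff) > best_mag:
                best_mag, best_rate = abs(coeff), a
    return best_rate
```

In the motion-estimation setting the chirp rate plays the role of the acceleration parameter, so locating the spectral concentration amounts to estimating the time-varying velocity.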
Gear fault diagnosis based on the structured sparsity time-frequency analysis
NASA Astrophysics Data System (ADS)
Sun, Ruobin; Yang, Zhibo; Chen, Xuefeng; Tian, Shaohua; Xie, Yong
2018-03-01
Over the last decade, sparse representation has become a powerful paradigm in mechanical fault diagnosis due to its excellent capability and the high flexibility for complex signal description. The structured sparsity time-frequency analysis (SSTFA) is a novel signal processing method, which utilizes mixed-norm priors on time-frequency coefficients to obtain a fine match for the structure of signals. In order to extract the transient feature from gear vibration signals, a gear fault diagnosis method based on SSTFA is proposed in this work. The steady modulation components and impulsive components of the defective gear vibration signals can be extracted simultaneously by choosing different time-frequency neighborhood and generalized thresholding operators. Besides, the time-frequency distribution with high resolution is obtained by piling different components in the same diagram. The diagnostic conclusion can be made according to the envelope spectrum of the impulsive components or by the periodicity of impulses. The effectiveness of the method is verified by numerical simulations, and the vibration signals registered from a gearbox fault simulator and a wind turbine. To validate the efficiency of the presented methodology, comparisons are made among some state-of-the-art vibration separation methods and the traditional time-frequency analysis methods. The comparisons show that the proposed method possesses advantages in separating feature signals under strong noise and accounting for the inner time-frequency structure of the gear vibration signals.
Versatile Aggressive Mimicry of Cicadas by an Australian Predatory Katydid
Marshall, David C.; Hill, Kathy B. R.
2009-01-01
Background In aggressive mimicry, a predator or parasite imitates a signal of another species in order to exploit the recipient of the signal. Some of the most remarkable examples of aggressive mimicry involve exploitation of a complex signal-response system by an unrelated predator species. Methodology/Principal Findings We have found that predatory Chlorobalius leucoviridis katydids (Orthoptera: Tettigoniidae) can attract male cicadas (Hemiptera: Cicadidae) by imitating the species-specific wing-flick replies of sexually receptive female cicadas. This aggressive mimicry is accomplished both acoustically, with tegminal clicks, and visually, with synchronized body jerks. Remarkably, the katydids respond effectively to a variety of complex, species-specific Cicadettini songs, including songs of many cicada species that the predator has never encountered. Conclusions/Significance We propose that the versatility of aggressive mimicry in C. leucoviridis is accomplished by exploiting general design elements common to the songs of many acoustically signaling insects that use duets in pair-formation. Consideration of the mechanism of versatile mimicry in C. leucoviridis may illuminate processes driving the evolution of insect acoustic signals, which play a central role in reproductive isolation of populations and the formation of species. PMID:19142230
A Review of Classification Techniques of EMG Signals during Isotonic and Isometric Contractions
Nazmi, Nurhazimah; Abdul Rahman, Mohd Azizi; Yamamoto, Shin-Ichiroh; Ahmad, Siti Anom; Zamzuri, Hairi; Mazlan, Saiful Amri
2016-01-01
In recent years, there has been major interest in exposure to physical therapy during rehabilitation. Several publications have demonstrated its usefulness in clinical/medical and human machine interface (HMI) applications. An automated system will guide the user to perform the training during rehabilitation independently. Advances in engineering have extended electromyography (EMG) beyond the traditional diagnostic applications to also include applications in diverse areas such as movement analysis. This paper gives an overview of the numerous methods available to recognize motion patterns of EMG signals for both isotonic and isometric contractions. Various signal analysis methods are compared by illustrating their applicability in real-time settings. This paper will be of interest to researchers who would like to select the most appropriate methodology for classifying motion patterns, especially during different types of contractions. For feature extraction, the probability density function (PDF) of EMG signals is the main interest of this study. Following that, the different methods for pre-processing, feature extraction and classification of EMG signals are briefly explained and compared in terms of their performance. The crux of this paper is to review the most recent developments and research studies related to the issues mentioned above. PMID:27548165
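Typical time-domain EMG features used in such classification pipelines, such as mean absolute value (MAV), root mean square (RMS) and zero crossings (ZC), are easy to sketch. These particular features are standard in the EMG literature rather than taken from this review's specific recommendations, and the noise threshold is an illustrative parameter.

```python
def mav(x):
    """Mean absolute value: average rectified amplitude of the window."""
    return sum(abs(v) for v in x) / len(x)

def rms(x):
    """Root mean square: power-based amplitude estimate."""
    return (sum(v * v for v in x) / len(x)) ** 0.5

def zero_crossings(x, thresh=0.0):
    """Count sign changes whose step exceeds a noise threshold;
    serves as a cheap proxy for frequency content."""
    return sum(1 for a, b in zip(x, x[1:])
               if a * b < 0 and abs(a - b) > thresh)

def feature_vector(window):
    """Per-window feature vector fed to a classifier (e.g., LDA or SVM)."""
    return [mav(window), rms(window), zero_crossings(window)]
```

In practice the raw signal is segmented into overlapping windows, each window is mapped to such a feature vector, and the classifier predicts the motion pattern per window.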
Toward Expanding Tremor Observations in the Northern San Andreas Fault System in the 1990s
NASA Astrophysics Data System (ADS)
Damiao, L. G.; Dreger, D. S.; Nadeau, R. M.; Taira, T.; Guilhem, A.; Luna, B.; Zhang, H.
2015-12-01
The connection between tremor activity and active fault processes continues to expand our understanding of deep fault zone properties and deformation, the tectonic process, and the relationship of tremor to the occurrence of larger earthquakes. Compared to tremors in subduction zones, known tremor signals in California are ~5 to ~10 times smaller in amplitude and duration. These characteristics, in addition to scarce geographic coverage, lack of continuous data (e.g., before mid-2001 at Parkfield), and the absence of instrumentation sensitive enough to monitor these events, have stifled tremor detection. The continuous monitoring of these events over a relatively short time period in limited locations may lead to a parochial view of the tremor phenomenon and its relationship to fault, tectonic, and earthquake processes. To help overcome this, we have embarked on a project to expand the geographic and temporal scope of tremor observation along the Northern SAF system using available continuous seismic recordings from a broad array of hundreds of surface seismic stations from multiple seismic networks. Available data for most of these stations also extend back into the mid-1990s. Processing and analysis of the tremor signal from this large and low signal-to-noise dataset requires a heavily automated, data-science type approach and specialized techniques for identifying and extracting reliable data. We report here on the automated, envelope-based methodology we have developed. Finally, we compare our catalog results with pre-existing tremor catalogs in the Parkfield area.
Direct Administration of Nerve-Specific Contrast to Improve Nerve Sparing Radical Prostatectomy
Barth, Connor W.; Gibbs, Summer L.
2017-01-01
Nerve damage remains a major morbidity following nerve sparing radical prostatectomy, significantly affecting quality of life post-surgery. Nerve-specific fluorescence guided surgery offers a potential solution by enhancing nerve visualization intraoperatively. However, the prostate is highly innervated and only the cavernous nerve structures require preservation to maintain continence and potency. Systemic administration of a nerve-specific fluorophore would lower nerve signal to background ratio (SBR) in vital nerve structures, making them difficult to distinguish from all nervous tissue in the pelvic region. A direct administration methodology to enable selective nerve highlighting for enhanced nerve SBR in a specific nerve structure has been developed herein. The direct administration methodology demonstrated equivalent nerve-specific contrast to systemic administration at optimal exposure times. However, the direct administration methodology provided a brighter fluorescent nerve signal, facilitating nerve-specific fluorescence imaging at video rate, which was not possible following systemic administration. Additionally, the direct administration methodology required a significantly lower fluorophore dose than systemic administration, that when scaled to a human dose falls within the microdosing range. Furthermore, a dual fluorophore tissue staining method was developed that alleviates fluorescence background signal from adipose tissue accumulation using a spectrally distinct adipose tissue specific fluorophore. These results validate the use of the direct administration methodology for specific nerve visualization with fluorescence image-guided surgery, which would improve vital nerve structure identification and visualization during nerve sparing radical prostatectomy. PMID:28255352
Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.
Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel
2018-06-05
In the present work, we demonstrate a novel approach to improve the sensitivity of "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. Contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for a sampling frequency of 4.6 Hz and up to 22 times for a sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
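The migration velocity-adaptive window described above can be illustrated with a minimal numpy sketch. The `base_window` and `growth` parameters are hypothetical stand-ins for the window size the published method derives from each analyte's migration velocity:

```python
import numpy as np

def adaptive_moving_average(signal, base_window=3, growth=0.01):
    """Smooth with an averaging window that widens along the run.

    Later-migrating (lower-mobility) analytes give broader, lower-frequency
    peaks, so a larger window can be used without distorting them. The
    window law here is illustrative, not the paper's.
    """
    signal = np.asarray(signal, dtype=float)
    out = np.empty_like(signal)
    for i in range(len(signal)):
        w = int(base_window + growth * i)          # window grows with migration time
        lo, hi = max(0, i - w), min(len(signal), i + w + 1)
        out[i] = signal[lo:hi].mean()
    return out
```

Because the window adapts per data point, early narrow peaks are smoothed gently while late baseline noise is averaged aggressively.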
NASA Astrophysics Data System (ADS)
Chang, S. S. L.
State of the art technology in circuits, fields, and electronics is discussed. The principles and applications of these technologies to industry, digital processing, microwave semiconductors, and computer-aided design are explained. Important concepts and methodologies in mathematics and physics are reviewed, and basic engineering sciences and associated design methods are dealt with, including: circuit theory and the design of magnetic circuits and active filter synthesis; digital signal processing, including FIR and IIR digital filter design; transmission lines, electromagnetic wave propagation and surface acoustic wave devices. Also considered are: electronics technologies, including power electronics, microwave semiconductors, GaAs devices, and magnetic bubble memories; digital circuits and logic design.
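As a small illustration of the FIR digital filter design topic listed above, here is a windowed-sinc low-pass design in numpy, the classic textbook approach behind library routines such as `scipy.signal.firwin`; the function name and parameters are ours:

```python
import numpy as np

def firwin_lowpass(numtaps, cutoff):
    """Windowed-sinc low-pass FIR design with a Hamming window.

    cutoff is normalized to the Nyquist frequency (0 < cutoff < 1).
    """
    n = np.arange(numtaps) - (numtaps - 1) / 2.0
    h = cutoff * np.sinc(cutoff * n)   # ideal (truncated) low-pass impulse response
    h *= np.hamming(numtaps)           # taper to suppress Gibbs ripple
    return h / h.sum()                 # normalize for unity gain at DC
```

The Hamming window trades a wider transition band for roughly 53 dB of stopband attenuation, a standard compromise in FIR design.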
Narayanan, Shrikanth; Georgiou, Panayiotis G
2013-02-07
The expression and experience of human behavior are complex and multimodal and characterized by individual and contextual heterogeneity and variability. Speech and spoken language communication cues offer an important means for measuring and modeling human behavior. Observational research and practice across a variety of domains from commerce to healthcare rely on speech- and language-based informatics for crucial assessment and diagnostic information and for planning and tracking response to an intervention. In this paper, we describe some of the opportunities as well as emerging methodologies and applications of human behavioral signal processing (BSP) technology and algorithms for quantitatively understanding and modeling typical, atypical, and distressed human behavior with a specific focus on speech- and language-based communicative, affective, and social behavior. We describe the three important BSP components of acquiring behavioral data in an ecologically valid manner across laboratory to real-world settings, extracting and analyzing behavioral cues from measured data, and developing models offering predictive and decision-making support. We highlight both the foundational speech and language processing building blocks as well as the novel processing and modeling opportunities. Using examples drawn from specific real-world applications ranging from literacy assessment and autism diagnostics to psychotherapy for addiction and marital well being, we illustrate behavioral informatics applications of these signal processing techniques that contribute to quantifying higher level, often subjectively described, human behavior in a domain-sensitive fashion.
Brain Dynamics: Methodological Issues and Applications in Psychiatric and Neurologic Diseases
NASA Astrophysics Data System (ADS)
Pezard, Laurent
The human brain is a complex dynamical system generating the EEG signal. Numerical methods developed to study complex physical dynamics have been used to characterize EEG since the mid-eighties. This endeavor raised several issues related to the specificity of EEG. Firstly, theoretical and methodological studies should address the major differences between the dynamics of the human brain and physical systems. Secondly, this approach to the EEG signal should prove relevant for dealing with physiological or clinical problems. A set of studies performed in our group is presented here within the context of these two problematic aspects. After a discussion of methodological drawbacks, we review numerical simulations related to the high dimension and spatial extension of brain dynamics. Experimental studies in neurologic and psychiatric diseases are then presented. We conclude that although it is now clear that brain dynamics change in relation to clinical situations, methodological problems remain largely unsolved.
Harries, Bruce; Filiatrault, Lyne; Abu-Laban, Riyad B
2018-05-30
Quality improvement (QI) analytic methodology is rarely encountered in the emergency medicine literature. We sought to comparatively apply QI design and analysis techniques to an existing data set, and discuss these techniques as an alternative to standard research methodology for evaluating a change in a process of care. We used data from a previously published randomized controlled trial on triage-nurse initiated radiography using the Ottawa ankle rules (OAR). QI analytic tools were applied to the data set from this study and evaluated comparatively against the original standard research methodology. The original study concluded that triage nurse-initiated radiographs led to a statistically significant decrease in mean emergency department length of stay. Using QI analytic methodology, we applied control charts and interpreted the results using established methods that preserved the time sequence of the data. This analysis found a compelling signal of a positive treatment effect that would have been identified after the enrolment of 58% of the original study sample, and in the 6th month of this 11-month study. Our comparative analysis demonstrates some of the potential benefits of QI analytic methodology. We found that had this approach been used in the original study, insights regarding the benefits of nurse-initiated radiography using the OAR would have been achieved earlier, and thus potentially at a lower cost. In situations where the overarching aim is to accelerate implementation of practice improvement to benefit future patients, we believe that increased consideration should be given to the use of QI analytic methodology.
A new proof of the generalized Hamiltonian–Real calculus
Gao, Hua; Mandic, Danilo P.
2016-01-01
The recently introduced generalized Hamiltonian–Real (GHR) calculus comprises, for the first time, the product and chain rules that make it a powerful tool for quaternion-based optimization and adaptive signal processing. In this paper, we introduce novel dual relationships between the GHR calculus and multivariate real calculus, in order to provide a new, simpler proof of the GHR derivative rules. This further reinforces the theoretical foundation of the GHR calculus and provides a convenient methodology for generic extensions of real- and complex-valued learning algorithms to the quaternion domain.
PepsNMR for 1H NMR metabolomic data pre-processing.
Martin, Manon; Legat, Benoît; Leenders, Justine; Vanwinsberghe, Julien; Rousseau, Réjane; Boulanger, Bruno; Eilers, Paul H C; De Tullio, Pascal; Govaerts, Bernadette
2018-08-17
In the analysis of biological samples, control over experimental design and data acquisition procedures alone cannot ensure well-conditioned 1H NMR spectra with maximal information recovery for data analysis. A third major element affects the accuracy and robustness of results: the data pre-processing/pre-treatment, to which not enough attention is usually devoted, in particular in metabolomic studies. The usual approach is to use proprietary software provided by the analytical instruments' manufacturers to conduct the entire pre-processing strategy. This widespread practice has a number of advantages such as a user-friendly interface with graphical facilities, but it involves non-negligible drawbacks: a lack of methodological information and automation, a dependency on subjective human choices, only standard processing possibilities and an absence of objective quality criteria to evaluate pre-processing quality. This paper introduces PepsNMR to meet these needs, an R package dedicated to the whole processing chain prior to multivariate data analysis, including, among other tools, solvent signal suppression, internal calibration, phase, baseline and misalignment corrections, bucketing and normalisation. Methodological aspects are discussed and the package is compared to the gold standard procedure with two metabolomic case studies. The use of PepsNMR on these data shows better information recovery and predictive power based on objective and quantitative quality criteria. Other key assets of the package are workflow processing speed, reproducibility, reporting and flexibility, graphical outputs and documented routines. Copyright © 2018 Elsevier B.V. All rights reserved.
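A toy fragment of such a pre-processing chain (crude polynomial baseline removal followed by intensity normalisation) might look as follows. PepsNMR's actual routines (solvent suppression, phasing, warping, bucketing) are far more elaborate, and all names here are illustrative:

```python
import numpy as np

def preprocess_spectrum(ppm, intensity, baseline_deg=2):
    """Sketch of two pre-processing steps on a 1D spectrum.

    1. Fit and subtract a low-order polynomial baseline.
    2. Normalise to unit total absolute intensity so spectra from
       different samples become comparable.
    """
    coeffs = np.polyfit(ppm, intensity, baseline_deg)     # crude baseline model
    corrected = intensity - np.polyval(coeffs, ppm)
    return corrected / np.abs(corrected).sum()            # total-intensity normalisation
```

Real NMR baselines are rarely polynomial; dedicated methods (e.g. asymmetric least squares, as used in the package) are preferred in practice.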
A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, T. W.; Ting, C.F.; Qu, Jun
2007-01-01
Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel using two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions if the base wavelet, the decomposition level, and the GA parameters are properly selected.
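A minimal numpy sketch of the feature-extraction step: relative detail-band energies from a Haar decomposition. The paper tunes the base wavelet and decomposition level; Haar is used here purely for illustration, and the genetic clustering step is omitted:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def wavelet_energy_features(signal, levels=4):
    """Relative energy of each detail band of an AE signal segment.

    These per-band energies are the kind of discriminant feature that
    separates 'sharp' from 'dull' wheel states.
    """
    feats, a = [], signal
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(np.sum(d ** 2))
    feats = np.array(feats)
    return feats / feats.sum()       # normalise so features sum to one
```

The segment length should be divisible by 2**levels (e.g. 256 samples for four levels) for the pairwise Haar slicing to line up.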
A method to preserve trends in quantile mapping bias correction of climate modeled temperature
NASA Astrophysics Data System (ADS)
Grillakis, Manolis G.; Koutroulis, Aristeidis G.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.
2017-09-01
Bias correction of climate variables is a standard practice in climate change impact (CCI) studies. Various methodologies have been developed within the framework of quantile mapping. However, it is well known that quantile mapping may significantly modify the long-term statistics due to the time dependency of the temperature bias. Here, a method to overcome this issue without compromising the day-to-day correction statistics is presented. The methodology separates the modeled temperature signal into a normalized and a residual component relative to the modeled reference period climatology, in order to adjust the biases only for the former and preserve the signal of the latter. The results show that this method allows for the preservation of the originally modeled long-term signal in the mean, the standard deviation and higher and lower percentiles of temperature. To illustrate the improvements, the methodology is tested on daily time series obtained from five EURO-CORDEX regional climate models (RCMs).
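The core idea (bias-correct a detrended component, then restore the modeled change signal) can be sketched with empirical quantile mapping in numpy. This is a simplified, mean-only version of the normalized/residual split the paper describes, with illustrative names:

```python
import numpy as np

def quantile_map(model_ref, obs_ref, values):
    """Empirical quantile mapping: map each value through the
    reference-period model CDF onto the observed CDF."""
    q = np.linspace(0.0, 1.0, 101)
    m_q = np.quantile(model_ref, q)
    o_q = np.quantile(obs_ref, q)
    ranks = np.interp(values, m_q, q)   # rank in the model distribution
    return np.interp(ranks, q, o_q)     # same rank in the observed distribution

def trend_preserving_qm(model_ref, obs_ref, model_fut):
    """Correct anomalies about the modeled change, then add the
    (preserved) modeled long-term signal back."""
    change = model_fut.mean() - model_ref.mean()   # long-term change signal
    anomalies = model_fut - change                  # detrended component
    return quantile_map(model_ref, obs_ref, anomalies) + change
```

Plain quantile mapping alone would let the calibration distort `change`; adding it back after correction is what preserves the modeled trend.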
Dynamic systems and inferential information processing in human communication.
Grammer, Karl; Fink, Bernhard; Renninger, LeeAnn
2002-12-01
Research in human communication on an ethological basis is almost obsolete. The reasons for this are manifold and lie partially in methodological problems connected to the observation and description of behavior, as well as the nature of human behavior itself. In this chapter, we present a new, non-intrusive, technical approach to the analysis of human non-verbal behavior, which could help to solve the problem of categorization that plagues the traditional approaches. We utilize evolutionary theory to propose a new theory-driven methodological approach to the 'multi-unit multi-channel modulation' problem of human nonverbal communication. Within this concept, communication is seen as context-dependent (the meaning of a signal is adapted to the situation), as a multi-channel and a multi-unit process (a string of many events interrelated in 'communicative' space and time), and as related to the function it serves. Such an approach can be utilized to successfully bridge the gap between evolutionary psychological research, which focuses on social cognition adaptations, and human ethology, which describes everyday behavior in an objective, systematic way.
Oliveira, Bárbara L; Godinho, Daniela; O'Halloran, Martin; Glavin, Martin; Jones, Edward; Conceição, Raquel C
2018-05-19
Currently, breast cancer often requires invasive biopsies for diagnosis, motivating researchers to design and develop non-invasive and automated diagnosis systems. Recent microwave breast imaging studies have shown how backscattered signals carry relevant information about the shape of a tumour, and tumour shape is often used with current imaging modalities to assess malignancy. This paper presents a comprehensive analysis of microwave breast diagnosis systems which use machine learning to learn characteristics of benign and malignant tumours. The state-of-the-art, the main challenges still to overcome and potential solutions are outlined. Specifically, this work investigates the benefit of signal pre-processing on diagnostic performance, and proposes a new set of extracted features that capture the tumour shape information embedded in a signal. This work also investigates if a relationship exists between the antenna topology in a microwave system and diagnostic performance. Finally, a careful machine learning validation methodology is implemented to guarantee the robustness of the results and the accuracy of performance evaluation.
Clinical measurements analysis of multi-spectral photoplethysmograph biosensors
NASA Astrophysics Data System (ADS)
Asare, Lasma; Kviesis-Kipge, Edgars; Spigulis, Janis
2014-05-01
The developed portable multi-spectral photoplethysmograph (MS-PPG) optical biosensor device, intended for analysis of peripheral blood volume pulsations at different vascular depths, has been clinically verified. Multi-spectral monitoring was performed by means of four-wavelength (454 nm, 519 nm, 632 nm and 888 nm) light-emitting diodes and a photodiode with multi-channel signal output processing. Two such sensors can be operated in parallel and imposed on the patient's skin. The clinical measurements confirmed the ability to detect PPG signals at four wavelengths simultaneously and to record temporal differences in the signal shapes (corresponding to different penetration depths) in normal and pathological skin. This study analyzed, across wavelengths, the relations between systolic and diastolic peak differences at various tissue depths in normal and pathological skin. The difference between parameters of healthy and pathological skin at various skin depths could be explained by oxy- and deoxyhemoglobin dominance at the different wavelengths used in the sensor. The proposed methodology and potential clinical applications in dermatology for skin assessment are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valiev, Marat; Yang, Jie; Adams, Joseph
2007-11-29
Protein kinases catalyze the transfer of the γ-phosphoryl group from ATP, a key regulatory process governing signalling pathways in eukaryotic cells. The structure of the active site in these enzymes is highly conserved, implying a common catalytic mechanism. In this work we investigate the reaction process in cAPK protein kinase (PKA) using a combined quantum mechanics and molecular mechanics approach. The novel computational features of our work include reaction pathway determination with the nudged elastic band methodology and calculation of free energy profiles of the reaction process taking into account finite temperature fluctuations of the protein environment. We find that the transfer of the γ-phosphoryl group in the protein environment is an exothermic reaction with a reaction barrier of 15 kcal/mol.
Orini, M; Laguna, P; Mainardi, L T; Bailón, R
2012-03-01
In this study, a framework for the characterization of the dynamic interactions between RR variability (RRV) and systolic arterial pressure variability (SAPV) is proposed. The methodology accounts for the intrinsic non-stationarity of the cardiovascular system and includes the assessment of both the strength and the prevalent direction of local coupling. The smoothed pseudo-Wigner-Ville distribution (SPWVD) is used to estimate the time-frequency (TF) power, coherence, and phase-difference spectra with fine TF resolution. The interactions between the signals are quantified by time-varying indices, including the local coupling, phase differences, time delay, and baroreflex sensitivity (BRS). Every index is extracted from a specific TF region, localized by combining information from the different spectra. In 14 healthy subjects, a head-up tilt provoked an abrupt decrease in the cardiovascular coupling; a rapid change in the phase difference (from 0.37 ± 0.23 to -0.27 ± 0.22 rad) and time delay (from 0.26 ± 0.14 to -0.16 ± 0.16 s) in the high-frequency band; and a decrease in the BRS (from 23.72 ± 7.66 to 6.92 ± 2.51 ms mmHg(-1)). In the low-frequency range, during a head-up tilt, restoration of the baseline level of cardiovascular coupling took about 2 min and SAPV preceded RRV by about 0.85 s during the whole test. The analysis of the Eurobavar data set, which includes subjects with intact as well as impaired baroreflex, showed that the presented methodology represents an improved TF generalization of traditional time-invariant methodologies and can reveal dysfunctions in subjects with baroreflex impairment. Additionally, the results also suggest the use of non-stationary signal-processing techniques to analyze signals recorded under conditions that are usually supposed to be stationary.
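As a stationary, time-domain analogue of the paper's phase-difference and time-delay indices, a cross-correlation delay estimator between two cardiovascular series can be sketched as follows. The SPWVD-based time-frequency analysis above is considerably more refined; names and parameters here are ours:

```python
import numpy as np

def local_time_delay(x, y, fs):
    """Delay of y relative to x (seconds), from the peak of their
    cross-correlation. Positive means y lags x.

    A crude, stationary stand-in for the paper's time-varying
    phase-difference / time-delay estimates.
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    c = np.correlate(y, x, mode="full")
    lag = int(np.argmax(c)) - (len(x) - 1)   # zero lag sits at index len(x)-1
    return lag / fs
```

On band-limited oscillatory signals such as RRV and SAPV, the delay is only meaningful within one period of the dominant oscillation, which is why the paper resolves it per frequency band.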
Methodology for qualitative uncertainty assessment of climate impact indicators
NASA Astrophysics Data System (ADS)
Otto, Juliane; Keup-Thiel, Elke; Rechid, Diana; Hänsler, Andreas; Pfeifer, Susanne; Roth, Ellinor; Jacob, Daniela
2016-04-01
The FP7 project "Climate Information Portal for Copernicus" (CLIPC) is developing an integrated platform of climate data services to provide a single point of access for authoritative scientific information on climate change and climate change impacts. In this project, the Climate Service Center Germany (GERICS) has been in charge of developing a methodology for assessing the uncertainties related to climate impact indicators. Existing climate data portals mainly treat uncertainties in two ways: either they provide generic guidance and/or they express with statistical measures the quantifiable fraction of the uncertainty. However, none of the climate data portals gives users qualitative guidance on how confident they can be in the validity of the displayed data. The need for such guidance was identified in CLIPC user consultations. Therefore, we aim to provide an uncertainty assessment that gives users climate impact indicator-specific guidance on the degree to which they can trust the outcome. We will present an approach that provides information on the importance of different sources of uncertainty associated with a specific climate impact indicator and how these sources affect the overall 'degree of confidence' of this respective indicator. To meet users' requirements for the effective communication of uncertainties, their feedback has been incorporated during the development of the methodology. Assessing and visualising the quantitative component of uncertainty is part of the qualitative guidance. As a visual analysis method, we apply Climate Signal Maps (Pfeifer et al. 2015), which highlight only those areas with robust climate change signals. Here, robustness is defined as a combination of model agreement and the significance of the individual model projections. Reference: Pfeifer, S., Bülow, K., Gobiet, A., Hänsler, A., Mudelsee, M., Otto, J., Rechid, D., Teichmann, C. and Jacob, D.: Robustness of Ensemble Climate Projections Analyzed with Climate Signal Maps: Seasonal and Extreme Precipitation for Germany, Atmosphere (Basel), 6(5), 677-698, doi:10.3390/atmos6050677, 2015.
Data mining of atmospheric parameters associated with coastal earthquakes
NASA Astrophysics Data System (ADS)
Cervone, Guido
Earthquakes are natural hazards that pose a serious threat to society and the environment. A single earthquake can claim thousands of lives, cause damages for billions of dollars, destroy natural landmarks and render large territories uninhabitable. Studying earthquakes and the processes that govern their occurrence, is of fundamental importance to protect lives, properties and the environment. Recent studies have shown that anomalous changes in land, ocean and atmospheric parameters occur prior to earthquakes. The present dissertation introduces an innovative methodology and its implementation to identify anomalous changes in atmospheric parameters associated with large coastal earthquakes. Possible geophysical mechanisms are discussed in view of the close interaction between the lithosphere, the hydrosphere and the atmosphere. The proposed methodology is a multi-strategy data mining approach which combines wavelet transformations, evolutionary algorithms, and statistical analysis of atmospheric data to analyze possible precursory signals. One dimensional wavelet transformations and statistical tests are employed to identify significant singularities in the data, which may correspond to anomalous peaks due to the earthquake preparatory processes. Evolutionary algorithms and other localized search strategies are used to analyze the spatial and temporal continuity of the anomalies detected over a large area (about 2000 km2), to discriminate signals that are most likely associated with earthquakes from those due to other, mostly atmospheric, phenomena. Only statistically significant singularities occurring within a very short time of each other, and which track a rigorous geometrical path related to the geological properties of the epicentral area, are considered to be associated with a seismic event. A program called CQuake was developed to implement and validate the proposed methodology. 
CQuake is a fully automated, real time semi-operational system, developed to study precursory signals associated with earthquakes. CQuake can be used for the retrospective analysis of past earthquakes, and for detecting early warning information about impending events. Using CQuake, more than 300 earthquakes have been analyzed. In the case of coastal earthquakes with magnitude larger than 5.0, prominent anomalies are found up to two weeks prior to the main event. In the case of earthquakes occurring away from the coast, no strong anomaly is detected. The identified anomalies provide a potentially reliable means to mitigate earthquake risks in the future, and can be used to develop a fully operational forecasting system.
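The singularity-screening step can be caricatured in a few lines of numpy: flag level-1 Haar detail coefficients that are robust outliers. CQuake's actual pipeline adds evolutionary search and spatio-temporal continuity checks; the threshold and names here are illustrative:

```python
import numpy as np

def detect_singularities(x, thresh=5.0):
    """Return sample indices whose level-1 Haar detail coefficient is a
    robust outlier (median absolute deviation rule).

    A toy version of the wavelet singularity screening described above.
    """
    x = np.asarray(x, dtype=float)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)            # level-1 Haar details
    med = np.median(d)
    mad = np.median(np.abs(d - med)) + 1e-12         # robust scale estimate
    idx = np.where(np.abs(d - med) > thresh * 1.4826 * mad)[0]
    return 2 * idx                                   # map back to sample indices
```

The MAD rule keeps the threshold insensitive to the anomalies themselves, which matters when precursory peaks are large relative to the background.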
NASA Astrophysics Data System (ADS)
Torregrosa, A. J.; Broatch, A.; Margot, X.; García-Tíscar, J.
2016-08-01
An experimental methodology is proposed to assess the noise emission of centrifugal turbocompressors like those of automotive turbochargers. A step-by-step procedure is detailed, starting from the theoretical considerations of sound measurement in flow ducts and examining specific experimental setup guidelines and signal processing routines. Special care is taken regarding some limiting factors that adversely affect the measuring of sound intensity in ducts, namely calibration, sensor placement and frequency ranges and restrictions. In order to provide illustrative examples of the proposed techniques and results, the methodology has been applied to the acoustic evaluation of a small automotive turbocharger in a flow bench. Samples of raw pressure spectra, decomposed pressure waves, calibration results, accurate surge characterization and final compressor noise maps and estimated spectrograms are provided. The analysis of selected frequency bands successfully shows how different, known noise phenomena of particular interest such as mid-frequency "whoosh noise" and low-frequency surge onset are correlated with operating conditions of the turbocharger. Comparison against external inlet orifice intensity measurements shows good correlation and improvement with respect to alternative wave decomposition techniques.
Dynamic Bayesian wavelet transform: New methodology for extraction of repetitive transients
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2017-05-01
Building on recent research, the dynamic Bayesian wavelet transform is proposed in this short communication as a new methodology for the extraction of repetitive transients, to reveal fault signatures hidden in rotating machines. The main idea of the dynamic Bayesian wavelet transform is to iteratively estimate posterior parameters of the wavelet transform via artificial observations and dynamic Bayesian inference. First, a prior wavelet parameter distribution can be established by one of many fast detection algorithms, such as the fast kurtogram, the improved kurtogram, the enhanced kurtogram, the sparsogram, the infogram, continuous wavelet transform, discrete wavelet transform, wavelet packets, multiwavelets, empirical wavelet transform, empirical mode decomposition, local mean decomposition, etc. Second, artificial observations can be constructed based on one of many metrics, such as kurtosis, the sparsity measurement, entropy, approximate entropy, the smoothness index, a synthesized criterion, etc., which are able to quantify repetitive transients. Finally, given artificial observations, the prior wavelet parameter distribution can be posteriorly updated over iterations by using dynamic Bayesian inference. More importantly, the proposed new methodology can be extended to establish the optimal parameters required by many other signal processing methods for the extraction of repetitive transients.
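A sketch of how a metric such as kurtosis can serve as the "artificial observation" that scores candidate frequency bands, in the spirit of the fast kurtogram. The Bayesian updating loop itself is not reproduced, and all parameter names are ours:

```python
import numpy as np

def kurtosis(x):
    """Excess kurtosis: large for signals dominated by sharp transients,
    near zero for Gaussian noise."""
    x = x - np.mean(x)
    return np.mean(x ** 4) / (np.mean(x ** 2) ** 2) - 3.0

def best_band(signal, n_bands=8):
    """Split the spectrum into equal bands and return the index of the
    band whose reconstruction is most kurtotic, i.e. most likely to
    carry the repetitive fault transients."""
    spec = np.fft.rfft(signal)
    edges = np.linspace(0, len(spec), n_bands + 1, dtype=int)
    scores = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.zeros_like(spec)
        band[lo:hi] = spec[lo:hi]                     # keep one band only
        scores.append(kurtosis(np.fft.irfft(band, len(signal))))
    return int(np.argmax(scores))
```

In the proposed methodology, such a score would seed the prior wavelet parameter distribution, which dynamic Bayesian inference then refines over iterations.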
Sanchez-Lucas, Rosa; Mehta, Angela; Valledor, Luis; Cabello-Hurtado, Francisco; Romero-Rodríguez, M Cristina; Simova-Stoilova, Lyudmila; Demir, Sekvan; Rodriguez-de-Francisco, Luis E; Maldonado-Alconada, Ana M; Jorrin-Prieto, Ana L; Jorrín-Novo, Jesus V
2016-03-01
The present review is an update of the previous one published in Proteomics 2015 Reviews special issue [Jorrin-Novo, J. V. et al., Proteomics 2015, 15, 1089-1112] covering the July 2014-2015 period. It has been written on the bases of the publications that appeared in Proteomics journal during that period and the most relevant ones that have been published in other high-impact journals. Methodological advances and the contribution of the field to the knowledge of plant biology processes and its translation to agroforestry and environmental sectors will be discussed. This review has been organized in four blocks, with a starting general introduction (literature survey) followed by sections focusing on the methodology (in vitro, in vivo, wet, and dry), proteomics integration with other approaches (systems biology and proteogenomics), biological information, and knowledge (cell communication, receptors, and signaling), ending with a brief mention of some other biological and translational topics to which proteomics has made some contribution. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Resting-State Functional Connectivity in the Infant Brain: Methods, Pitfalls, and Potentiality.
Mongerson, Chandler R L; Jennings, Russell W; Borsook, David; Becerra, Lino; Bajic, Dusica
2017-01-01
Early brain development is characterized by rapid growth and perpetual reconfiguration, driven by a dynamic milieu of heterogeneous processes. Postnatal brain plasticity is associated with increased vulnerability to environmental stimuli. However, little is known regarding the ontogeny and temporal manifestations of inter- and intra-regional functional connectivity that comprise functional brain networks. Resting-state functional magnetic resonance imaging (rs-fMRI) has emerged as a promising non-invasive neuroinvestigative tool, measuring spontaneous fluctuations in blood oxygen level dependent (BOLD) signal at rest that reflect baseline neuronal activity. Over the past decade, its application has expanded to infant populations providing unprecedented insight into functional organization of the developing brain, as well as early biomarkers of abnormal states. However, many methodological issues of rs-fMRI analysis need to be resolved prior to standardization of the technique to infant populations. As a primary goal, this methodological manuscript will (1) present a robust methodological protocol to extract and assess resting-state networks in early infancy using independent component analysis (ICA), such that investigators without previous knowledge in the field can implement the analysis and reliably obtain viable results consistent with previous literature; (2) review the current methodological challenges and ethical considerations associated with emerging field of infant rs-fMRI analysis; and (3) discuss the significance of rs-fMRI application in infants for future investigations of neurodevelopment in the context of early life stressors and pathological processes. The overarching goal is to catalyze efforts toward development of robust, infant-specific acquisition, and preprocessing pipelines, as well as promote greater transparency by researchers regarding methods used. PMID:28856131
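While the ICA protocol above operates on voxelwise data, the underlying notion of inter-regional functional connectivity reduces, in its simplest form, to a correlation matrix of regional BOLD time series. A minimal sketch (not the manuscript's pipeline, which decomposes the data into spatial networks instead):

```python
import numpy as np

def connectivity_matrix(bold):
    """Pairwise Pearson correlation of regional BOLD time series.

    bold: array of shape (regions, timepoints). The resulting matrix is
    the simplest seed/ROI functional-connectivity estimate; ICA-based
    analyses extract whole spatial networks rather than ROI pairs.
    """
    return np.corrcoef(bold)
```

In practice the time series would first be motion-corrected, filtered and nuisance-regressed, which is precisely the preprocessing the manuscript argues needs infant-specific standardization.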
Sudarshan, Vidya K; Acharya, U Rajendra; Oh, Shu Lih; Adam, Muhammad; Tan, Jen Hong; Chua, Chua Kuang; Chua, Kok Poo; Tan, Ru San
2017-04-01
Identification of alarming features in the electrocardiogram (ECG) signal is extremely significant for the prediction of congestive heart failure (CHF). ECG signal analysis carried out using computer-aided techniques can speed up the diagnosis process and aid in the proper management of CHF patients. Therefore, in this work, a dual-tree complex wavelet transform (DTCWT)-based methodology is proposed for automated identification of ECG signals exhibiting CHF from normal ones. In the experiment, we performed a DTCWT on ECG segments of 2 s duration up to six levels to obtain the coefficients. From these DTCWT coefficients, statistical features are extracted and ranked using Bhattacharyya, entropy, minimum redundancy maximum relevance (mRMR), receiver-operating characteristics (ROC), Wilcoxon, t-test and reliefF methods. Ranked features are subjected to k-nearest neighbor (KNN) and decision tree (DT) classifiers for automated differentiation of CHF and normal ECG signals. We achieved 99.86% accuracy, 99.78% sensitivity and 99.94% specificity in the identification of CHF-affected ECG signals using 45 features. The proposed method is able to detect CHF patients accurately using only 2 s of ECG signal, thus providing sufficient time for clinicians to further investigate the severity of CHF and possible treatments. Copyright © 2017 Elsevier Ltd. All rights reserved.
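The subband-statistics idea can be sketched in a few lines. For simplicity this uses an ordinary Haar DWT as a stand-in for the dual-tree complex wavelet transform (which requires a dedicated pair of filter trees), and the three statistics per band are illustrative, not the paper's exact feature set.

```python
import numpy as np

def haar_subbands(x, levels=6):
    """Multi-level Haar decomposition (a plain DWT standing in for DTCWT)."""
    bands = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        if len(a) % 2:
            a = a[:-1]
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation coefficients
        bands.append(d)
    bands.append(a)
    return bands

def band_features(bands):
    """Per-subband statistics of the kind ranked before classification."""
    feats = []
    for b in bands:
        e = b ** 2
        p = e / e.sum() if e.sum() > 0 else np.full_like(e, 1 / len(e))
        feats += [b.mean(), b.std(), -(p * np.log(p + 1e-12)).sum()]
    return np.array(feats)

rng = np.random.default_rng(1)
ecg_like = rng.standard_normal(256)            # placeholder for a 2 s segment
feats = band_features(haar_subbands(ecg_like, levels=6))
```

The resulting feature vector (here 3 statistics x 7 subbands) is what a ranking method and then a KNN or DT classifier would consume.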
Statistical process control: separating signal from noise in emergency department operations.
Pimentel, Laura; Barrueto, Fermin
2015-05-01
Statistical process control (SPC) is a visually appealing and statistically rigorous methodology very suitable to the analysis of emergency department (ED) operations. We demonstrate that the control chart is the primary tool of SPC; it is constructed by plotting data measuring the key quality indicators of operational processes in rationally ordered subgroups such as units of time. Control limits are calculated using formulas reflecting the variation in the data points from one another and from the mean. SPC allows managers to determine whether operational processes are controlled and predictable. We review why the moving range chart is most appropriate for use in the complex ED milieu, how to apply SPC to ED operations, and how to determine when performance improvement is needed. SPC is an excellent tool for operational analysis and quality improvement for these reasons: 1) control charts make large data sets intuitively coherent by integrating statistical and visual descriptions; 2) SPC provides analysis of process stability and capability rather than simple comparison with a benchmark; 3) SPC allows distinction between special cause variation (signal), indicating an unstable process requiring action, and common cause variation (noise), reflecting a stable process; and 4) SPC keeps the focus of quality improvement on process rather than individual performance. Because data have no meaning apart from their context, and every process generates information that can be used to improve it, we contend that SPC should be seriously considered for driving quality improvement in emergency medicine. Copyright © 2015 Elsevier Inc. All rights reserved.
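The individuals/moving-range chart described above can be sketched in a few lines of numpy. The data are illustrative (a made-up daily ED metric with one special-cause point), and the constants are the standard I-MR chart factors for subgroups of size two (2.66 = 3/d2 with d2 = 1.128; D4 = 3.267).

```python
import numpy as np

def imr_limits(x):
    """Individuals / moving-range (I-MR) control limits.
    sigma is estimated as MR_bar / d2 with d2 = 1.128, giving
    individuals limits of x_bar +/- 2.66 * MR_bar."""
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))            # moving ranges of consecutive points
    mr_bar = mr.mean()
    center = x.mean()
    ucl = center + 2.66 * mr_bar       # upper control limit
    lcl = center - 2.66 * mr_bar       # lower control limit
    mr_ucl = 3.267 * mr_bar            # D4 limit for the MR chart
    return center, lcl, ucl, mr_ucl

def special_causes(x):
    """Indices of points outside the individuals limits (signal, not noise)."""
    center, lcl, ucl, _ = imr_limits(x)
    x = np.asarray(x, dtype=float)
    return np.where((x < lcl) | (x > ucl))[0]

daily_wait = [10, 11, 9, 10, 11, 10, 25, 10, 9, 11]   # illustrative metric
flagged = special_causes(daily_wait)
```

Points inside the limits reflect common-cause variation of a stable process; the flagged point is the special-cause signal that warrants managerial action.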
Venkata Mohan, S; Chandrasekhara Rao, N; Krishna Prasad, K; Murali Krishna, P; Sreenivas Rao, R; Sarma, P N
2005-06-20
The Taguchi robust experimental design (DOE) methodology has been applied to a dynamic anaerobic process treating complex wastewater in an anaerobic sequencing batch biofilm reactor (AnSBBR). To optimize the process as well as to evaluate the influence of different factors on it, the uncontrollable (noise) factors have been considered. The Taguchi methodology adopting a dynamic approach is the first of its kind for studying anaerobic process evaluation and optimization. The designed experimental methodology consisted of four phases--planning, conducting, analysis, and validation--connected in sequence to achieve the overall optimization. In the experimental design, five controllable factors, i.e., organic loading rate (OLR), inlet pH, biodegradability (BOD/COD ratio), temperature, and sulfate concentration, along with two uncontrollable (noise) factors, volatile fatty acids (VFA) and alkalinity, were considered at two levels for optimization of the anaerobic system. Thirty-two anaerobic experiments were conducted with different combinations of factors, and the results obtained in terms of substrate degradation rates were processed in Qualitek-4 software to study the main effects of individual factors, the interactions between individual factors, and the signal-to-noise (S/N) ratios. Attempts were also made to achieve optimum conditions. Studies on the influence of individual factors on process performance revealed the intensive effect of OLR. In multiple-factor interaction studies, biodegradability combined with other factors, such as temperature, pH, and sulfate, showed the maximum influence over process performance.
The optimum conditions for the efficient performance of the anaerobic system in treating complex wastewater by considering dynamic (noise) factors obtained are higher organic loading rate of 3.5 Kg COD/m3 day, neutral pH with high biodegradability (BOD/COD ratio of 0.5), along with mesophilic temperature range (40 degrees C), and low sulfate concentration (700 mg/L). The optimization resulted in enhanced anaerobic performance (56.7%) from a substrate degradation rate (SDR) of 1.99 to 3.13 Kg COD/m3 day. Considering the obtained optimum factors, further validation experiments were carried out, which showed enhanced process performance (3.04 Kg COD/m3-day from 1.99 Kg COD/m3 day) accounting for 52.13% improvement with the optimized process conditions. The proposed method facilitated a systematic mathematical approach to understand the complex multi-species manifested anaerobic process treating complex chemical wastewater by considering the uncontrollable factors. Copyright (c) 2005 Wiley Periodicals, Inc.
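The larger-the-better S/N ratio used in such Taguchi analyses is simple to compute; the replicate substrate degradation rate values below are illustrative, not the study's data.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio (dB) for a larger-the-better response such as the
    substrate degradation rate: S/N = -10 * log10(mean(1 / y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical replicate SDR values (kg COD/m3 day) at two factor settings:
low_olr = [1.8, 2.0, 1.9]
high_olr = [3.0, 3.2, 2.9]
# The setting with the higher S/N ratio is the more robust choice, since the
# ratio rewards both a high mean response and low replicate-to-replicate noise.
```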
LED traffic signal management system : final report.
DOT National Transportation Integrated Search
2016-06-01
This research originated from the opportunity to develop a methodology to assess when LED (Light Emitting Diode) traffic signal modules begin to fail to meet the Institute of Transportation Engineers (ITE) performance specification for luminous inten...
Real-time plasma control in a dual-frequency, confined plasma etcher
NASA Astrophysics Data System (ADS)
Milosavljević, V.; Ellingboe, A. R.; Gaman, C.; Ringwood, J. V.
2008-04-01
The physics issues of developing model-based control of plasma etching are presented. A novel methodology for incorporating real-time model-based control of plasma processing systems is developed. The methodology is developed for control of two dependent variables (ion flux and chemical densities) by two independent controls (27 MHz power and O2 flow). A phenomenological physics model of the nonlinear coupling between the independent controls and the dependent variables of the plasma is presented. By using a design of experiments, the functional dependencies of the response surface are determined. In conjunction with the physical model, the dependencies are used to deconvolve the sensor signals onto the control inputs, allowing compensation of the interaction between control paths. The compensated sensor signals and compensated set-points are then used as inputs to proportional-integral-derivative controllers to adjust radio frequency power and oxygen flow to yield the desired ion flux and chemical density. To illustrate the methodology, model-based real-time control is realized in a commercial semiconductor dielectric etch chamber. The dual-frequency symmetric diode operates with typical commercial fluorocarbon feed-gas mixtures (Ar/O2/C4F8). Key parameters for dielectric etching are known to include ion flux to the surface and surface flux of oxygen-containing species. Control is demonstrated using diagnostics of electrode-surface ion current, and chemical densities of O, O2, and CO measured by optical emission spectrometry and/or mass spectrometry. Using our model-based real-time control, the set-point tracking accuracy for changes in chemical species density and ion flux is enhanced.
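The decoupling idea, mapping sensor signals and set-points through the inverse of a control-to-output gain matrix so that each controller sees a single control path, can be sketched with a static toy model. The gain values, PI tuning, and static plant below are illustrative assumptions, not measured chamber responses.

```python
import numpy as np

# Hypothetical linearized response: dependent variables y (ion flux, O density)
# respond to controls u (27 MHz power, O2 flow) through a gain matrix G.
G = np.array([[1.0, 0.4],
              [0.3, 1.0]])      # illustrative cross-coupling gains
G_inv = np.linalg.inv(G)        # static decoupler

def decoupled_pi_step(y_meas, y_set, integ, kp=0.5, ki=0.2, dt=1.0):
    """One PI step in decoupled coordinates: the error is mapped through
    G_inv, so each loop acts on a single compensated control path."""
    e = G_inv @ (y_set - y_meas)           # compensated error
    integ = integ + e * dt
    u = kp * e + ki * integ                # control signal
    return u, integ

y_set = np.array([1.0, 0.5])    # target ion flux / O density (arbitrary units)
u = np.zeros(2)
integ = np.zeros(2)
for _ in range(300):
    y = G @ u                   # static plant stands in for the chamber
    u, integ = decoupled_pi_step(y, y_set, integ)
```

Despite the cross-coupling in G, both outputs settle on their set-points because the decoupler makes each loop effectively single-input single-output.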
A Constrained Genetic Algorithm with Adaptively Defined Fitness Function in MRS Quantification
NASA Astrophysics Data System (ADS)
Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; Graveron-Demilly, D.; van Ormondt, D.
MRS signal quantification is a rather involved procedure and has attracted the interest of the medical engineering community regarding the development of computationally efficient methodologies. Significant contributions based on Computational Intelligence tools, such as Neural Networks (NNs), have demonstrated good performance, but not without drawbacks already discussed by the authors. On the other hand, preliminary application of Genetic Algorithms (GA) has already been reported in the literature by the authors regarding the peak detection problem encountered in MRS quantification using the Voigt line shape model. This paper investigates a novel constrained genetic algorithm involving a generic and adaptively defined fitness function, which extends the simple genetic algorithm methodology to the case of noisy signals. The applicability of this new algorithm is scrutinized through experimentation on artificial MRS signals interleaved with noise, regarding its signal fitting capabilities. Although extensive experiments with real-world MRS signals are necessary, the performance shown herein illustrates the method's potential to be established as a generic MRS metabolite quantification procedure.
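A minimal version of such a constrained GA is sketched below. It fits a Gaussian line shape (a deliberate simplification of the Voigt model) to a synthetic noisy peak, with bound constraints enforced by clipping; population size, operators, and the residual-based fitness are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(t, params):
    """Gaussian line shape as a simple stand-in for the Voigt model."""
    amp, center, width = params
    return amp * np.exp(-0.5 * ((t - center) / width) ** 2)

def fitness(t, signal, params):
    """Negative residual energy: higher is better."""
    return -np.sum((signal - model(t, params)) ** 2)

def ga_fit(t, signal, bounds, pop=60, gens=120, mut=0.1):
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    P = rng.uniform(lo, hi, size=(pop, len(bounds)))       # random population
    for _ in range(gens):
        f = np.array([fitness(t, signal, p) for p in P])
        elite = P[np.argsort(f)[::-1][: pop // 2]]         # selection
        parents = elite[rng.integers(0, len(elite), (pop, 2))]
        w = rng.uniform(size=(pop, 1))
        P = w * parents[:, 0] + (1 - w) * parents[:, 1]    # blend crossover
        P += rng.normal(0, mut, P.shape) * (hi - lo) * 0.05  # mutation
        P = np.clip(P, lo, hi)                             # enforce constraints
    f = np.array([fitness(t, signal, p) for p in P])
    return P[np.argmax(f)]

t = np.linspace(0.0, 10.0, 200)
true_params = (2.0, 5.0, 0.8)
noisy = model(t, true_params) + rng.normal(0, 0.05, t.size)
best = ga_fit(t, noisy, bounds=[(0.5, 4.0), (0.0, 10.0), (0.2, 2.0)])
```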
Early Warning Signals of Ecological Transitions: Methods for Spatial Patterns
Brock, William A.; Carpenter, Stephen R.; Ellison, Aaron M.; Livina, Valerie N.; Seekell, David A.; Scheffer, Marten; van Nes, Egbert H.; Dakos, Vasilis
2014-01-01
A number of ecosystems can exhibit abrupt shifts between alternative stable states. Because of their important ecological and economic consequences, recent research has focused on devising early warning signals for anticipating such abrupt ecological transitions. In particular, theoretical studies show that changes in spatial characteristics of the system could provide early warnings of approaching transitions. However, the empirical validation of these indicators lags behind their theoretical development. Here, we summarize a range of currently available spatial early warning signals, suggest potential null models to interpret their trends, and apply them to three simulated spatial data sets of systems undergoing an abrupt transition. In addition to providing a step-by-step methodology for applying these signals to spatial data sets, we propose a statistical toolbox that may be used to help detect approaching transitions in a wide range of spatial data. We hope that our methodology together with the computer codes will stimulate the application and testing of spatial early warning signals on real spatial data. PMID:24658137
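Two of the simplest spatial indicators, spatial variance and near-neighbour spatial autocorrelation (Moran's I with rook weights), can be written directly in numpy. The two grids below are synthetic stand-ins for ecosystem snapshots far from and near a transition; they are illustrative, not the paper's simulated data sets.

```python
import numpy as np

def spatial_variance(grid):
    """Spatial variance of a snapshot; tends to rise near a transition."""
    return np.var(grid)

def morans_i(grid):
    """Moran's I spatial autocorrelation with 4-neighbour (rook) weights.
    Each horizontal/vertical pair is counted once in both numerator and W."""
    z = grid - grid.mean()
    num = (z[:, :-1] * z[:, 1:]).sum() + (z[:-1, :] * z[1:, :]).sum()
    W = z[:, :-1].size + z[:-1, :].size        # number of neighbour pairs
    return (z.size / W) * num / (z ** 2).sum()

rng = np.random.default_rng(2)
far = rng.standard_normal((64, 64)) * 0.1            # far from transition
ramp = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
near = ramp + rng.standard_normal((64, 64)) * 0.1    # spatially correlated
```

Rising variance and rising Moran's I across a sequence of such snapshots are exactly the trends these indicators are meant to flag.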
NASA Technical Reports Server (NTRS)
Potter, Christopher
2013-01-01
The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) methodology was applied to detect changes in perennial vegetation cover at marshland sites in Northern California reported to have undergone restoration between 1999 and 2009. Results showed extensive contiguous areas of restored marshland plant cover at 10 of the 14 sites selected. Gains from either woody shrub cover or recovery of herbaceous cover that remains productive and evergreen on a year-round basis could be mapped from the image results. However, LEDAPS may not be highly sensitive to changes in wetlands that have been restored mainly with seasonal herbaceous cover (e.g., vernal pools), due to the ephemeral nature of the plant greenness signal. Based on this evaluation, the LEDAPS methodology would be capable of fulfilling a pressing need for consistent, continual, low-cost monitoring of changes in the marshland ecosystems of the Pacific Flyway.
Seismic Characterization of EGS Reservoirs
NASA Astrophysics Data System (ADS)
Templeton, D. C.; Pyle, M. L.; Matzel, E.; Myers, S.; Johannesson, G.
2014-12-01
To aid in the seismic characterization of Engineered Geothermal Systems (EGS), we enhance the traditional microearthquake detection and location methodologies at two EGS systems. We apply the Matched Field Processing (MFP) seismic imaging technique to detect new seismic events using known discrete microearthquake sources. Events identified using MFP are typically smaller-magnitude events or events that occur within the coda of a larger event. Additionally, we apply a Bayesian multiple-event seismic location algorithm, called MicroBayesLoc, to estimate the 95% probability ellipsoids for events with high signal-to-noise ratios (SNR). Such probability ellipsoid information can provide evidence for determining whether a seismic lineation is real or simply within the anticipated error range. We apply this methodology to the Basel EGS dataset and compare it to another EGS dataset. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
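Matched Field Processing itself needs an array and a propagation model, but its single-channel cousin, normalized cross-correlation of a known waveform template against continuous data, captures the detection idea and is easy to sketch. The trace, template, and threshold below are all synthetic illustrations, not the Basel data.

```python
import numpy as np

def matched_filter_detect(trace, template, threshold=0.8):
    """Slide a waveform template along a continuous trace and compute the
    normalized cross-correlation; peaks above `threshold` flag candidates,
    including small events buried in noise or in the coda of larger ones."""
    n = len(template)
    t = template - template.mean()
    t /= np.linalg.norm(t)
    cc = np.empty(len(trace) - n + 1)
    for i in range(len(cc)):
        w = trace[i:i + n] - trace[i:i + n].mean()
        denom = np.linalg.norm(w)
        cc[i] = (w @ t) / denom if denom > 0 else 0.0
    return np.where(cc > threshold)[0], cc

rng = np.random.default_rng(3)
template = np.exp(-np.arange(200) / 40.0) * np.sin(0.3 * np.arange(200))
trace = rng.normal(0, 0.05, 2000)
trace[300:500] += 0.5 * template          # small buried event at sample 300
hits, cc = matched_filter_detect(trace, template, threshold=0.8)
```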
Uranga, Jon; Arrizabalaga, Haritz; Boyra, Guillermo; Hernandez, Maria Carmen; Goñi, Nicolas; Arregui, Igor; Fernandes, Jose A; Yurramendi, Yosu; Santiago, Josu
2017-01-01
This study presents a methodology for the automated analysis of commercial medium-range sonar signals for detecting the presence/absence of bluefin tuna (Thunnus thynnus) in the Bay of Biscay. The approach uses image processing techniques to analyze sonar screenshots. For each sonar image we extracted measurable regions and analyzed their characteristics. Scientific data was used to classify each region into a class ("tuna" or "no-tuna") and build a dataset to train and evaluate classification models by using supervised learning. The methodology performed well when validated with commercial sonar screenshots, and has the potential to automatically analyze high volumes of data at a low cost. This represents a first milestone towards the development of acoustic, fishery-independent indices of abundance for bluefin tuna in the Bay of Biscay. Future research lines and additional alternatives to inform stock assessments are also discussed. PMID:28152032
Methodological comparison on OLED and OLET fabrication
NASA Astrophysics Data System (ADS)
Suppiah, Sarveshvaran; Hambali, Nor Azura Malini Ahmad; Wahid, Mohamad Halim Abd; Retnasamy, Vithyacharan; Shahimin, Mukhzeer Mohamad
2018-02-01
The potential of organic semiconductor devices for light generation is demonstrated by the commercialization of display technologies based on the organic light emitting diode (OLED). In an OLED, organic materials emit light once current is passed through them. However, OLEDs have major drawbacks: they suffer from photon loss and exciton quenching. The organic light emitting transistor (OLET) has emerged as a new technology to compensate for the efficiency and brightness losses encountered in OLEDs. The structure combines the electronic switching capability of a field effect transistor (FET) with light generation. The aim of this study is to methodologically compare and contrast the fabrication processes and evaluate the feasibility of both the organic light emitting diode (OLED) and the organic light emitting transistor (OLET). The proposed light emitting layer in this study is poly[2-methoxy-5-(2'-ethyl-hexyloxy)-1,4-phenylene vinylene] (MEH-PPV).
Ogirala, Ajay; Stachel, Joshua R; Mickle, Marlin H
2011-11-01
Increasing density of wireless communication and the development of radio frequency identification (RFID) technology in particular have increased the susceptibility of patients equipped with cardiac rhythmic monitoring devices (CRMD) to environmental electromagnetic interference (EMI). Several organizations have reported observing CRMD EMI from different sources. This paper focuses on mathematically analyzing the energy as perceived by the implanted device, i.e., voltage. Radio frequency (RF) energy transmitted by RFID interrogators is considered as an example. A simplified front-end equivalent circuit of the CRMD sensing circuitry is proposed for the analysis, following extensive black-box testing of several commercial pacemakers and implantable defibrillators. Building on an understanding of how CRMD signal processing identifies the QRS complex of the heartbeat, a mitigation technique is proposed. The mitigation methodology introduced in this paper is logical in approach and simple to implement, and is therefore applicable to all wireless communication protocols.
Mitochondrial dysfunction and sarcopenia of aging: from signaling pathways to clinical trials
Marzetti, Emanuele; Calvani, Riccardo; Cesari, Matteo; Buford, Thomas W.; Lorenzi, Maria; Behnke, Bradley J.; Leeuwenburgh, Christiaan
2013-01-01
Sarcopenia, the age-related loss of muscle mass and function, imposes a dramatic burden on individuals and society. The development of preventive and therapeutic strategies against sarcopenia is therefore perceived as an urgent need by health professionals and has instigated intensive research on the pathophysiology of this syndrome. The pathogenesis of sarcopenia is multifaceted and encompasses lifestyle habits, systemic factors (e.g., chronic inflammation and hormonal alterations), local environment perturbations (e.g., vascular dysfunction), and intramuscular specific processes. In this scenario, derangements in skeletal myocyte mitochondrial function are recognized as major factors contributing to the age-dependent muscle degeneration. In this review, we summarize prominent findings and controversial issues on the contribution of specific mitochondrial processes – including oxidative stress, quality control mechanisms and apoptotic signaling – on the development of sarcopenia. Extramuscular alterations accompanying the aging process with a potential impact on myocyte mitochondrial function are also discussed. We conclude with presenting methodological and safety considerations for the design of clinical trials targeting mitochondrial dysfunction to treat sarcopenia. Special emphasis is placed on the importance of monitoring the effects of an intervention on muscle mitochondrial function and identifying the optimal target population for the trial. PMID:23845738
Rivera, José; Carrillo, Mariano; Chacón, Mario; Herrera, Gilberto; Bojorquez, Gilberto
2007-01-01
The development of smart sensors involves the design of reconfigurable systems capable of working with different input sensors. Reconfigurable systems ideally should spend the least possible amount of time on their calibration. An autocalibration algorithm for intelligent sensors should be able to fix major problems such as offset, variation of gain and lack of linearity, as accurately as possible. This paper describes a new autocalibration methodology for nonlinear intelligent sensors based on artificial neural networks (ANN). The methodology involves analysis of several network topologies and training algorithms. The proposed method was compared against the piecewise and polynomial linearization methods. The comparison was carried out using different numbers of calibration points and several nonlinearity levels of the input signal. This paper also shows that the proposed method turned out to have better overall accuracy than the other two methods. Besides the experimental results and analysis of the complete study, the paper describes the implementation of the ANN in a microcontroller unit (MCU). In order to illustrate the method's capability to build autocalibration and reconfigurable systems, a temperature measurement system was designed and tested. The proposed method is an improvement over classic autocalibration methodologies, because it impacts the design process of intelligent sensors, autocalibration methodologies and their associated factors, such as time and cost.
Fozooni, Tahereh; Ravan, Hadi; Sasan, Hosseinali
2017-12-01
Due to their unique properties, such as programmability, ligand-binding capability, and flexibility, nucleic acids can serve as analytes and/or recognition elements for biosensing. To improve the sensitivity of nucleic acid-based biosensing and hence detect even a few copies of a target molecule, different modern amplification methodologies, namely target- and signal-based amplification strategies, have already been developed. These recent signal amplification technologies, which are capable of amplifying the signal intensity without changing the target's copy number, have resulted in fast, reliable, and sensitive methods for nucleic acid detection. Working in cell-free settings, researchers have been able to optimize a variety of complex and quantitative methods suitable for deployment in live-cell conditions. In this study, a comprehensive review of the signal amplification technologies for the detection of nucleic acids is provided. We classify the signal amplification methodologies into enzymatic and non-enzymatic strategies, with a primary focus on the methods that enable a shift from in vitro detection to in vivo imaging. Finally, the future challenges and limitations of detection in cellular conditions are discussed.
Samanta, Dipanjan; Widom, Joanne; Borbat, Peter P; Freed, Jack H; Crane, Brian R
2016-12-09
Flagellated bacteria modulate their swimming behavior in response to environmental cues through the CheA/CheY signaling pathway. In addition to responding to external chemicals, bacteria also monitor internal conditions that reflect the availability of oxygen, light, and reducing equivalents, in a process termed "energy taxis." In Escherichia coli, the transmembrane receptor Aer is the primary energy sensor for motility. Genetic and physiological data suggest that Aer monitors the electron transport chain through the redox state of its FAD cofactor. However, direct biochemical data correlating FAD redox chemistry with CheA kinase activity have been lacking. Here, we test this hypothesis via functional reconstitution of Aer into nanodiscs. As purified, Aer contains fully oxidized FAD, which can be chemically reduced to the anionic semiquinone (ASQ). Oxidized Aer activates CheA, whereas ASQ Aer reversibly inhibits CheA. Under these conditions, Aer cannot be further reduced to the hydroquinone, in contrast to the proposed Aer signaling model. Pulse ESR spectroscopy of the ASQ corroborates a potential mechanism for signaling in that the resulting distance between the two flavin-binding PAS (Per-Arnt-Sim) domains implies that they tightly sandwich the signal-transducing HAMP domain in the kinase-off state. Aer appears to follow oligomerization patterns observed for related chemoreceptors, as higher loading of Aer dimers into nanodiscs increases kinase activity. These results provide a new methodological platform to study Aer function along with new mechanistic details into its signal transduction process. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
Background noise can enhance cortical auditory evoked potentials under certain conditions
Papesh, Melissa A.; Billings, Curtis J.; Baltzell, Lucas S.
2017-01-01
Objective To use cortical auditory evoked potentials (CAEPs) to understand neural encoding in background noise and the conditions under which noise enhances CAEP responses. Methods CAEPs from 16 normal-hearing listeners were recorded using the speech syllable /ba/ presented in quiet and in speech-shaped noise at signal-to-noise ratios of 10 and 30 dB. The syllable was presented binaurally and monaurally at two presentation rates. Results The amplitudes of N1 and N2 peaks were often significantly enhanced in the presence of low-level background noise relative to quiet conditions, while P1 and P2 amplitudes were consistently reduced in noise. P1 and P2 amplitudes were significantly larger during binaural compared to monaural presentations, while N1 and N2 peaks were similar between binaural and monaural conditions. Conclusions Methodological choices impact CAEP peaks in very different ways. Negative peaks can be enhanced by background noise in certain conditions, while positive peaks are generally enhanced by binaural presentations. Significance Methodological choices significantly impact CAEPs acquired in quiet and in noise. If CAEPs are to be used as a tool to explore signal encoding in noise, scientists must be cognizant of how differences in acquisition and processing protocols selectively shape CAEP responses. PMID:25453611
Experimental validation of a structural damage detection method based on marginal Hilbert spectrum
NASA Astrophysics Data System (ADS)
Banerji, Srishti; Roy, Timir B.; Sabamehr, Ardalan; Bagchi, Ashutosh
2017-04-01
Structural Health Monitoring (SHM) using dynamic characteristics of structures is crucial for early damage detection. Damage detection can be performed by capturing and assessing structural responses. Instrumented structures are monitored by analyzing the responses recorded by deployed sensors in the form of signals. Signal processing is an important tool for processing the collected data to diagnose anomalies in structural behavior. The vibration signature of a structure varies with damage. In order to attain effective damage detection, preservation of the non-linear and non-stationary features of real structural responses is important. Decomposition of the signals into Intrinsic Mode Functions (IMF) by Empirical Mode Decomposition (EMD) and application of the Hilbert-Huang Transform (HHT) address the time-varying instantaneous properties of the structural response. The energy distribution among different vibration modes of the intact and damaged structure, depicted by the Marginal Hilbert Spectrum (MHS), reveals the location and severity of the damage. The present work investigates damage detection analytically and experimentally by employing MHS. Testing this methodology on different damage scenarios of a frame structure resulted in accurate damage identification. The sensitivity of Hilbert Spectral Analysis (HSA) is assessed for varying frequencies and damage locations by calculating Damage Indices (DI) from the Hilbert spectrum curves of the undamaged and damaged structures.
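The Hilbert step of the method, turning each IMF into instantaneous amplitude and frequency and accumulating amplitude per frequency bin into the marginal spectrum, can be sketched with a numpy analytic signal. EMD itself (which produces the IMFs) is omitted, and the pure sine below stands in for one IMF; sampling rate and binning are illustrative.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert construction:
    zero the negative frequencies and double the positive ones."""
    N = len(x)
    F = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(F * h)

def instantaneous(imf, fs):
    """Instantaneous amplitude and frequency of one IMF (Hilbert step of HHT)."""
    z = analytic_signal(imf)
    amp = np.abs(z)
    freq = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
    return amp, freq

def marginal_spectrum(amp, freq, edges):
    """Marginal Hilbert spectrum: amplitude per frequency bin, summed over time."""
    h = np.zeros(len(edges) + 1)
    np.add.at(h, np.digitize(freq, edges), amp[:-1])
    return h[1:-1]

fs = 200.0
t = np.arange(0, 4, 1 / fs)
imf = np.sin(2 * np.pi * 5 * t)            # stand-in for one IMF
amp, freq = instantaneous(imf, fs)
edges = np.arange(0.5, 20.5, 1.0)          # 1 Hz bins centred on integers
ms = marginal_spectrum(amp, freq, edges)
```

Comparing `ms` curves for intact versus damaged responses is what yields the damage indices described above.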
Pattern recognition in volcano seismology - Reducing spectral dimensionality
NASA Astrophysics Data System (ADS)
Unglert, K.; Radic, V.; Jellinek, M.
2015-12-01
Variations in the spectral content of volcano seismicity can relate to changes in volcanic activity. Low-frequency seismic signals often precede or accompany volcanic eruptions. However, they are commonly manually identified in spectra or spectrograms, and their definition in spectral space differs from one volcanic setting to the next. Increasingly long time series of monitoring data at volcano observatories require automated tools to facilitate rapid processing and aid with pattern identification related to impending eruptions. Furthermore, knowledge transfer between volcanic settings is difficult if the methods to identify and analyze the characteristics of seismic signals differ. To address these challenges we evaluate whether a machine learning technique called Self-Organizing Maps (SOMs) can be used to characterize the dominant spectral components of volcano seismicity without the need for any a priori knowledge of different signal classes. This could reduce the dimensions of the spectral space typically analyzed by orders of magnitude, and enable rapid processing and visualization. Preliminary results suggest that the temporal evolution of volcano seismicity at Kilauea Volcano, Hawai`i, can be reduced to as few as 2 spectral components by using a combination of SOMs and cluster analysis. We will further refine our methodology with several datasets from Hawai`i and Alaska, among others, and compare it to other techniques.
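A minimal 1-D SOM over spectra shows how spectral space can be compressed to a handful of prototype shapes without labelled classes. The two synthetic spectral populations below (Gaussian peaks at different frequency bins) are toy stand-ins for distinct volcano-seismic signal types; all sizes and rates are illustrative choices.

```python
import numpy as np

def train_som(spectra, n_nodes=8, epochs=300, seed=0):
    """1-D Self-Organizing Map over power spectra (rows of `spectra`).
    Each node's weight vector becomes a prototype spectral shape."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(spectra.min(), spectra.max(), (n_nodes, spectra.shape[1]))
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)                   # decaying rate
        sigma = max(1e-3, n_nodes / 2 * (1 - epoch / epochs))
        for x in spectra[rng.permutation(len(spectra))]:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
            dist = np.abs(np.arange(n_nodes) - bmu)
            h = np.exp(-(dist ** 2) / (2 * sigma ** 2))   # neighborhood
            W += lr * h[:, None] * (x - W)
    return W

def map_to_bmu(spectra, W):
    return np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in spectra])

rng = np.random.default_rng(4)
bins = np.arange(32)
class_a = np.exp(-0.5 * ((bins - 5) / 2.0) ** 2)    # low-frequency type
class_b = np.exp(-0.5 * ((bins - 20) / 2.0) ** 2)   # high-frequency type
spectra = np.vstack([class_a + rng.normal(0, 0.02, 32) for _ in range(10)] +
                    [class_b + rng.normal(0, 0.02, 32) for _ in range(10)])
W = train_som(spectra, n_nodes=8)
bmus = map_to_bmu(spectra, W)
```

Tracking which node each successive spectrum maps to gives the kind of low-dimensional temporal summary described above; a cluster analysis of the node weights can then merge nodes into a few dominant components.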
New Perspectives on Assessing Amplification Effects
Souza, Pamela E.; Tremblay, Kelly L.
2006-01-01
Clinicians have long been aware of the range of performance variability with hearing aids. Despite improvements in technology, there remain many instances of well-selected and appropriately fitted hearing aids whereby the user reports minimal improvement in speech understanding. This review presents a multistage framework for understanding how a hearing aid affects performance. Six stages are considered: (1) acoustic content of the signal, (2) modification of the signal by the hearing aid, (3) interaction between sound at the output of the hearing aid and the listener's ear, (4) integrity of the auditory system, (5) coding of available acoustic cues by the listener's auditory system, and (6) correct identification of the speech sound. Within this framework, this review describes methodology and research on 2 new assessment techniques: acoustic analysis of speech measured at the output of the hearing aid and auditory evoked potentials recorded while the listener wears hearing aids. Acoustic analysis topics include the relationship between conventional probe microphone tests and probe microphone measurements using speech, appropriate procedures for such tests, and assessment of signal-processing effects on speech acoustics and recognition. Auditory evoked potential topics include an overview of physiologic measures of speech processing and the effect of hearing loss and hearing aids on cortical auditory evoked potential measurements in response to speech. Finally, the clinical utility of these procedures is discussed. PMID:16959734
Fluid flow measurements by means of vibration monitoring
NASA Astrophysics Data System (ADS)
Campagna, Mauro M.; Dinardo, Giuseppe; Fabbiano, Laura; Vacca, Gaetano
2015-11-01
The achievement of accurate fluid flow measurements is fundamental whenever the control and monitoring of certain physical quantities governing an industrial process are required. In such cases, non-intrusive devices are preferable, but these are often more sophisticated and expensive than the more common ones (such as nozzles, diaphragms, Coriolis flowmeters and so on). In this paper, a novel, non-intrusive, simple and inexpensive methodology is presented to measure the fluid flow rate in a turbulent regime. Its physical principle is the acquisition of the transverse vibration signals that the fluid induces on the walls of the pipe it flows through. Such a principle of operation would permit the use of micro-accelerometers capable of acquiring and transmitting the signals, even by wireless technology, to a control room for monitoring of the process under control. A possible application of the introduced technology (whose feasibility will be investigated by the authors in a further study) is the deployment of a network of micro-accelerometers on the pipeline networks of aqueducts. This apparatus could lead to faster and easier detection and location of fluid leaks in a pipeline network, at more affordable cost. The authors, who have previously proven the linear dependency of the acceleration harmonics amplitude on the flow rate, here discuss an experimental analysis of this functional relation as the physical properties of the pipe (its diameter and constituent material) vary, to find any limits to the practical application of the measurement methodology.
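The core measurement, the amplitude of a given spectral harmonic of the wall-vibration signal, can be sketched with a windowed FFT. The sampling rate, harmonic frequency, and the synthetic "wall acceleration" records below are illustrative assumptions; with the linear amplitude-flow relation the authors report, a single calibration point then converts amplitude to flow rate by proportionality.

```python
import numpy as np

def harmonic_amplitude(signal, fs, f0):
    """Amplitude of the spectral component at f0 (Hz) from a record sampled
    at fs, using a Hann window with amplitude correction 2|F|/sum(w)."""
    N = len(signal)
    w = np.hanning(N)
    F = np.fft.rfft(signal * w)
    freqs = np.fft.rfftfreq(N, 1 / fs)
    k = np.argmin(np.abs(freqs - f0))       # bin nearest the harmonic
    return 2 * np.abs(F[k]) / w.sum()

fs, f0 = 1000.0, 50.0
t = np.arange(0, 2, 1 / fs)
low_flow = 0.5 * np.sin(2 * np.pi * f0 * t)     # synthetic wall acceleration
high_flow = 1.0 * np.sin(2 * np.pi * f0 * t)    # doubled "flow" doubles amplitude
a_low = harmonic_amplitude(low_flow, fs, f0)
a_high = harmonic_amplitude(high_flow, fs, f0)
```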
Methodology for the AutoRegressive Planet Search (ARPS) Project
NASA Astrophysics Data System (ADS)
Feigelson, Eric; Caceres, Gabriel; ARPS Collaboration
2018-01-01
The detection of periodic signals of transiting exoplanets is often impeded by the presence of aperiodic photometric variations. This variability is intrinsic to the host star in space-based observations (typically arising from magnetic activity) and from observational conditions in ground-based observations. The most common statistical procedures to remove stellar variations are nonparametric, such as wavelet decomposition or Gaussian process regression. However, many stars display variability with autoregressive properties, wherein later flux values are correlated with previous ones. Provided the time series is evenly spaced, parametric autoregressive models can prove very effective. Here we present the methodology of the Autoregressive Planet Search (ARPS) project, which uses Autoregressive Integrated Moving Average (ARIMA) models to treat a wide variety of stochastic short-memory processes, as well as nonstationarity. Additionally, we introduce a planet-search algorithm to detect periodic transits in the time-series residuals after application of ARIMA models. Our matched-filter algorithm, the Transit Comb Filter (TCF), replaces the traditional box-fitting step. We construct a periodogram based on the TCF to concentrate the signal of these periodic spikes. Various features of the original light curves, the ARIMA fits, the TCF periodograms, and folded light curves at peaks of the TCF periodogram can then be collected to provide constraints for planet detection. These features provide input into a multivariate classifier when a training set is available. The ARPS procedure has been applied to NASA's Kepler mission observations of ~200,000 stars (Caceres, Dissertation Talk, this meeting) and will be applied in the future to other datasets.
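As a toy illustration of the whitening idea (not the ARPS pipeline itself, which uses full ARIMA models and the TCF periodogram), one can fit a simple AR(1) model by least squares and observe that a box-shaped transit turns into sharp spikes in the residuals. All values below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic light curve: autoregressive "stellar" variability plus a box transit
n = 500
flux = np.zeros(n)
for i in range(1, n):
    flux[i] = 0.9 * flux[i - 1] + rng.normal(0, 0.1)   # AR(1) noise process
flux[300:305] -= 1.5                                    # transit depth

# Least-squares AR(1) fit: flux[t] ~ phi * flux[t-1]
x, y = flux[:-1], flux[1:]
phi = np.dot(x, y) / np.dot(x, x)
resid = y - phi * x                                     # whitened residuals

# After whitening, the transit ingress appears as a sharp negative spike
deepest = np.argmin(resid)                              # resid index 299 = time 300
```

The correlated stellar variation is largely removed by the AR fit, so the ingress spike dominates the residual series; ARPS applies a matched comb filter to a train of such spikes.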
Response Time Analysis and Test of Protection System Instrument Channels for APR1400 and OPR1000
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Chang Jae; Han, Seung; Yun, Jae Hee
2015-07-01
Safety limits are required to maintain the integrity of physical barriers designed to prevent the uncontrolled release of radioactive materials in nuclear power plants. The safety analysis establishes two critical constraints: an analytical limit in terms of a measured or calculated variable, and a specific time after the analytical limit is reached to begin protective action. In keeping with nuclear regulations and industry standards, satisfying these two requirements ensures that the safety limit will not be exceeded during a design basis event, either an anticipated operational occurrence or a postulated accident. Various studies on the setpoint determination methodology for safety-related instrumentation have been actively performed to ensure that the requirement of the analytical limit is satisfied. In particular, the protection setpoint methodology for the advanced power reactor 1400 (APR1400) and the optimized power reactor 1000 (OPR1000) has recently been developed to cover both the design basis event and the beyond design basis event. The developed setpoint methodology has also been quantitatively validated using specific computer programs and setpoint calculations. However, the safety of nuclear power plants cannot be fully guaranteed by satisfying the requirement of the analytical limit alone. In spite of the response time verification requirements of nuclear regulations and industry standards, studies on a systematically integrated methodology for response time evaluation are hard to find. In the cases of APR1400 and OPR1000, the response time analysis for the plant protection system is partially included in the setpoint calculation, and the response time test is performed separately via a specific plant procedure. The test technique has the drawback that it is difficult to demonstrate the completeness of the timing test. The analysis technique, in turn, has the demerit of yielding extreme response times that are not actually possible.
Thus, the establishment of the systematic response time evaluation methodology is needed to justify the conformance to the response time requirement used in the safety analysis. This paper proposes the response time evaluation methodology for APR1400 and OPR1000 using the combined analysis and test technique to confirm that the plant protection system can meet the analytical response time assumed in the safety analysis. In addition, the results of the quantitative evaluation performed for APR1400 and OPR1000 are presented in this paper. The proposed response time analysis technique consists of defining the response time requirement, determining the critical signal path for the trip parameter, allocating individual response time to each component on the signal path, and analyzing the total response time for the trip parameter, and demonstrates that the total analyzed response time does not exceed the response time requirement. The proposed response time test technique is composed of defining the response time requirement, determining the critical signal path for the trip parameter, determining the test method for each component on the signal path, performing the response time test, and demonstrates that the total test result does not exceed the response time requirement. The total response time should be tested in a single test that covers from the sensor to the final actuation device on the instrument channel. When the total channel is not tested in a single test, separate tests on groups of components or single components including the total instrument channel shall be combined to verify the total channel response. For APR1400 and OPR1000, the ramp test technique is used for the pressure and differential pressure transmitters and the step function testing technique is applied to the signal processing equipment and final actuation device. As a result, it can be demonstrated that the response time requirement is satisfied by the combined analysis and test technique. 
Therefore, the proposed methodology in this paper plays a crucial role in guaranteeing the safety of nuclear power plants by systematically satisfying one of the two critical requirements from the safety analysis. (authors)
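The analysis technique described above amounts to a budget check along the critical signal path: allocate a response time to each component, sum the allocations, and compare against the requirement. A minimal sketch, with a wholly invented requirement and component allocations (not APR1400/OPR1000 plant data):

```python
# Hypothetical instrument channel: allocate a response time budget to each
# component on the critical signal path and check the analyzed total against
# the analytical response time requirement from the safety analysis.
requirement_ms = 900.0                     # assumed requirement, for illustration
path = {
    "pressure transmitter": 350.0,         # illustrative allocations (ms)
    "signal processing equipment": 250.0,
    "trip logic": 150.0,
    "final actuation device": 100.0,
}
total_ms = sum(path.values())
meets_requirement = total_ms <= requirement_ms
```

The test technique then verifies the same path experimentally, either in one end-to-end measurement or as combined overlapping component tests.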
High-resolution behavioral mapping of electric fishes in Amazonian habitats.
Madhav, Manu S; Jayakumar, Ravikrishnan P; Demir, Alican; Stamper, Sarah A; Fortune, Eric S; Cowan, Noah J
2018-04-11
The study of animal behavior has been revolutionized by sophisticated methodologies that identify and track individuals in video recordings. Video recording of behavior, however, is challenging for many species and habitats including fishes that live in turbid water. Here we present a methodology for identifying and localizing weakly electric fishes on the centimeter scale with subsecond temporal resolution based solely on the electric signals generated by each individual. These signals are recorded with a grid of electrodes and analyzed using a two-part algorithm that identifies the signals from each individual fish and then estimates the position and orientation of each fish using Bayesian inference. Interestingly, because this system involves eavesdropping on electrocommunication signals, it permits monitoring of complex social and physical interactions in the wild. This approach has potential for large-scale non-invasive monitoring of aquatic habitats in the Amazon basin and other tropical freshwater systems.
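A greatly simplified stand-in for the localization step can be written as a maximum-likelihood grid search: given a toy amplitude fall-off model, find the candidate position whose predicted electrode amplitudes best fit the measured ones. The forward model and grid geometry below are invented, not the authors' Bayesian formulation:

```python
import numpy as np

# Electrode grid positions (meters) and a hidden source position to recover
grid = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
true_pos = np.array([2.3, 1.7])

def predicted(pos):
    """Toy forward model: received amplitude falls off as 1/(d^2 + eps)."""
    d2 = ((grid - pos) ** 2).sum(axis=1)
    return 1.0 / (d2 + 0.1)

measured = predicted(true_pos)

def misfit(pos):
    """Least-squares misfit after scaling the prediction to the measurements
    (equivalent to maximum likelihood under Gaussian noise, unknown gain)."""
    p = predicted(pos)
    scale = np.dot(p, measured) / np.dot(p, p)
    return ((measured - scale * p) ** 2).sum()

candidates = [np.array([x, y]) for x in np.arange(0, 4.01, 0.1)
                               for y in np.arange(0, 4.01, 0.1)]
est = min(candidates, key=misfit)
```

With noiseless synthetic amplitudes the search recovers the source to within the 0.1 m candidate spacing; the published method additionally estimates orientation and tracks multiple individuals over time.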
Hardware proofs using EHDM and the RSRE verification methodology
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Sjogren, Jon A.
1988-01-01
Examined is a methodology for hardware verification developed by the Royal Signals and Radar Establishment (RSRE) in the context of SRI International's Enhanced Hierarchical Design Methodology (EHDM) specification/verification system. The methodology utilizes a four-level specification hierarchy with the following levels: functional level, finite automata model, block model, and circuit level. The properties of a level are proved as theorems in the level below it. This methodology is applied to a 6-bit counter problem and is critically examined. The specifications are written in EHDM's specification language, Extended Special, and the proof exercise yields suggestions for improving both the RSRE methodology and the EHDM system.
Fiber-Optic Communication Links Suitable for On-Board Use in Modern Aircraft
NASA Technical Reports Server (NTRS)
Nguyen, Hung; Ngo, Duc; Alam, Mohammad F.; Atiquzzaman, Mohammed; Sluse, James; Slaveski, Filip
2004-01-01
The role of the Advanced Air Transportation Technologies program undertaken at the NASA Glenn Research Center has been focused mainly on the improvement of air transportation safety, with particular emphasis on on-board aircraft communication systems. The conventional solutions for digital optical communications systems specifically designed for local/metro area networks are, unfortunately, not capable of transporting the microwave and millimeter-wave RF signals used in avionics systems. Optical networks capable of transporting RF signals are substantially different from standard digital optical communications systems. The objective of this paper is to identify a number of different communication link architectures for RF/fiber-optic transmission using a single backbone fiber for carrying VHF and UHF RF signals in the aircraft. To support these architectures, two approaches derived from hybrid RF-optical and all-optical processing methodologies are discussed with single and multiple antennas for explicitly transporting VHF and UHF signals, and the relative merits and demerits of each architecture are also addressed. Furthermore, the experimental results of a wavelength division multiplexing (WDM) link architecture from our test-bed platform, configured for an aircraft environment to support simultaneous transmission of multiple RF signals over a single optical fiber, exhibit no appreciable signal degradation at wavelengths of either 1330 or 1550 nm. Our measurements of signal-to-noise ratio carried out for the transmission of FM and AM analog modulated signals at these wavelengths indicate that WDM is a fiber-optic technology potentially suitable for avionics applications.
Duarte-Galvan, Carlos; Romero-Troncoso, Rene de J; Torres-Pacheco, Irineo; Guevara-Gonzalez, Ramon G; Fernandez-Jaramillo, Arturo A; Contreras-Medina, Luis M; Carrillo-Serrano, Roberto V; Millan-Almaraz, Jesus R
2014-10-09
Soil drought represents one of the most dangerous stresses for plants. It impacts the yield and quality of crops, and if it remains undetected for a long time, the entire crop could be lost. However, for some plants a certain amount of drought stress improves specific characteristics. In such cases, a device capable of detecting and quantifying the impact of drought stress in plants is desirable. This article focuses on testing whether the monitoring of physiological processes through a gas exchange methodology provides enough information to detect drought stress conditions in plants. The experiment consists of using a set of smart sensors based on Field Programmable Gate Arrays (FPGAs) to monitor a group of plants under controlled drought conditions. The main objective was to use different digital signal processing techniques, such as the Discrete Wavelet Transform (DWT), to explore the response of plant physiological processes to drought. Also, an index-based methodology was utilized to compensate for the spatial variation inside the greenhouse. As a result, differences between treatments were determined to be independent of climate variations inside the greenhouse. Finally, after using the DWT as a digital filter, the results demonstrated that the proposed system is capable of rejecting high-frequency noise and detecting drought conditions.
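The DWT-based filtering step can be illustrated with a hand-rolled one-level Haar transform. This is a minimal sketch: the abstract does not specify the wavelet family or decomposition depth, and a real deployment would use a signal-processing library with more levels:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# Slow "physiological" trend plus high-frequency sensor noise
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 2 * t)
noisy = clean + rng.normal(0, 0.3, t.size)

# Low-pass filtering by zeroing the detail band: the simplest wavelet denoiser
a, d = haar_dwt(noisy)
denoised = haar_idwt(a, np.zeros_like(d))

err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
```

Zeroing the detail band discards roughly half the noise energy at each level while leaving the slow trend nearly intact, which is the mechanism behind the high-frequency noise rejection reported above.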
PREDICTION OF NONLINEAR SPATIAL FUNCTIONALS. (R827257)
Spatial statistical methodology can be useful in the arena of environmental regulation. Some regulatory questions may be addressed by predicting linear functionals of the underlying signal, but other questions may require the prediction of nonlinear functionals of the signal. ...
Modeling Single-Event Transient Propagation in a SiGe BiCMOS Direct-Conversion Receiver
NASA Astrophysics Data System (ADS)
Ildefonso, Adrian; Song, Ickhyun; Tzintzarov, George N.; Fleetwood, Zachary E.; Lourenco, Nelson E.; Wachter, Mason T.; Cressler, John D.
2017-08-01
The propagation of single-event transient (SET) signals in a silicon-germanium direct-conversion receiver carrying modulated data is explored. A theoretical analysis of transient propagation, verified by simulation, is presented. A new methodology to characterize and quantify the impact of SETs in communication systems carrying modulated data is proposed. The proposed methodology uses a pulsed radiation source to induce distortions in the signal constellation. The error vector magnitude due to SETs can then be calculated to quantify errors. Two different modulation schemes were simulated: QPSK and 16-QAM. The distortions in the constellation diagram agree with the presented circuit theory. Furthermore, the proposed methodology was applied to evaluate the improvements in the SET response due to a known radiation-hardening-by-design (RHBD) technique, where the common-base device of the low-noise amplifier was operated in inverse mode. The proposed methodology can be a valid technique to determine the most sensitive parts of a system carrying modulated data.
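The error-vector-magnitude computation at the heart of the proposed metric is straightforward. The sketch below applies an invented additive complex perturbation to ideal QPSK symbols as a crude stand-in for a simulated SET:

```python
import numpy as np

def evm_percent(received, reference):
    """RMS error vector magnitude as a percentage of the reference RMS power."""
    err = np.abs(received - reference) ** 2
    return 100.0 * np.sqrt(err.mean() / (np.abs(reference) ** 2).mean())

# Ideal unit-power QPSK constellation and a burst of symbols
const = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
rng = np.random.default_rng(2)
ref = const[rng.integers(0, 4, 1000)]

# A transient modeled as an additive complex offset on 10 consecutive symbols
rx = ref.copy()
rx[100:110] += 0.5 * np.exp(1j * 0.3)

evm = evm_percent(rx, ref)   # 10 symbols displaced by |0.5| in a 1000-symbol burst
```

Here the displaced symbols contribute an overall EVM of 5%; in the paper's methodology the displacement comes from circuit-level SET simulation rather than an assumed offset.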
Design of feedback control systems for stable plants with saturating actuators
NASA Technical Reports Server (NTRS)
Kapasouris, Petros; Athans, Michael; Stein, Gunter
1988-01-01
A systematic control design methodology is introduced for multi-input/multi-output stable open loop plants with multiple saturations. This new methodology is a substantial improvement over previous heuristic single-input/single-output approaches. The idea is to introduce a supervisor loop so that when the references and/or disturbances are sufficiently small, the control system operates linearly as designed. For signals large enough to cause saturations, the control law is modified in such a way as to ensure stability and to preserve, to the extent possible, the behavior of the linear control design. Key benefits of the methodology are: the modified compensator never produces saturating control signals, integrators and/or slow dynamics in the compensator never windup, the directional properties of the controls are maintained, and the closed loop system has certain guaranteed stability properties. The advantages of the new design methodology are illustrated in the simulation of an academic example and the simulation of the multivariable longitudinal control of a modified model of the F-8 aircraft.
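One ingredient of such schemes, preserving the direction of the control vector while keeping every channel inside its saturation limit, can be sketched in a few lines. This is a simplification of the supervisor-loop idea for illustration, not the paper's full design:

```python
import numpy as np

def governed_control(u_lin, u_max):
    """Scale the whole control vector so no channel saturates,
    preserving its direction (and hence the plant input directionality)."""
    peak = np.max(np.abs(u_lin))
    scale = min(1.0, u_max / peak) if peak > 0 else 1.0
    return scale * u_lin

# Linear design asks for more than the actuators allow on channel 2
u = np.array([0.4, -2.0, 1.0])
u_gov = governed_control(u, u_max=1.0)   # uniformly scaled, never clipped per-channel
```

Clipping each channel independently would rotate the control direction; uniform scaling keeps the channel ratios intact, which is the directional property the methodology is designed to maintain.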
Amezquita-Sanchez, Juan P; Adeli, Anahita; Adeli, Hojjat
2016-05-15
Mild cognitive impairment (MCI) is a cognitive disorder characterized by memory impairment greater than expected for age. A new methodology is presented to identify MCI patients during a working memory task using MEG signals. The methodology consists of four steps: In step 1, the complete ensemble empirical mode decomposition (CEEMD) is used to decompose the MEG signal into a set of adaptive sub-bands according to its contained frequency information. In step 2, a nonlinear dynamics measure based on permutation entropy (PE) analysis is employed to analyze the sub-bands and detect features to be used for MCI detection. In step 3, an analysis of variance (ANOVA) is used for feature selection. In step 4, the enhanced probabilistic neural network (EPNN) classifier is applied to the selected features to distinguish between MCI patients and healthy controls. The usefulness and effectiveness of the proposed methodology are validated using MEG data obtained experimentally from 18 MCI and 19 control subjects. Copyright © 2016 Elsevier B.V. All rights reserved.
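The permutation entropy measure used in step 2 can be sketched directly. This is a generic Bandt-Pompe implementation; the order, delay, and test signals are illustrative, not the paper's settings:

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy (Bandt-Pompe) of a 1-D signal.
    Counts ordinal patterns of `order` samples spaced `delay` apart."""
    counts = {}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values()), dtype=float) / n
    h = -(probs * np.log(probs)).sum()
    return h / log(factorial(order))   # 1.0 = maximally irregular

rng = np.random.default_rng(5)
pe_sine = permutation_entropy(np.sin(np.linspace(0, 8 * np.pi, 500)))
pe_noise = permutation_entropy(rng.normal(0, 1, 500))
```

A smooth oscillation is dominated by a few monotone ordinal patterns and scores low, while white noise visits all patterns nearly uniformly and scores close to 1; it is this contrast across sub-bands that serves as the discriminative feature.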
Lemonakis, Nikolaos; Skaltsounis, Alexios-Leandros; Tsarbopoulos, Anthony; Gikas, Evagelos
2016-01-15
A multistage optimization of all the parameters affecting detection/response in an LTQ-orbitrap analyzer was performed, using a design of experiments methodology. The signal intensity, a critical issue for mass analysis, was investigated and the optimization process was completed in three successive steps, taking into account the three main regions of an orbitrap: the ion generation, the ion transmission and the ion detection regions. Oleuropein and hydroxytyrosol were selected as the model compounds. Overall, applying this methodology the sensitivity was increased by more than 24%, the resolution by more than 6.5%, whereas the elapsed scan time was reduced to nearly half. A high-resolution LTQ Orbitrap Discovery mass spectrometer was used for the determination of the analytes of interest. Thus, oleuropein and hydroxytyrosol were infused via the instrument's syringe pump and analyzed employing electrospray ionization (ESI) in the negative high-resolution full-scan ion mode. The parameters of the three main regions of the LTQ-orbitrap were independently optimized in terms of maximum sensitivity. In this context, factorial design, response surface model and Plackett-Burman experiments were performed and analysis of variance was carried out to evaluate the validity of the statistical model and to determine the most significant parameters for signal intensity. The optimum MS conditions for each analyte were summarized and the method optimum condition was achieved by maximizing the desirability function. Our observations showed good agreement between the predicted optimum response and the responses collected at the predicted optimum conditions. Copyright © 2015 Elsevier B.V. All rights reserved.
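The desirability-function step combines several responses into one score to maximize. A minimal Derringer-Suich-style sketch; the response values and acceptable ranges below are invented for illustration, not the paper's measured figures:

```python
import numpy as np

def desirability_max(y, lo, hi):
    """Larger-is-better desirability: 0 below lo, 1 above hi, linear between."""
    return float(np.clip((y - lo) / (hi - lo), 0.0, 1.0))

def desirability_min(y, lo, hi):
    """Smaller-is-better desirability: 1 below lo, 0 above hi."""
    return float(np.clip((hi - y) / (hi - lo), 0.0, 1.0))

# Illustrative responses for one candidate instrument setting (made-up numbers)
d_signal     = desirability_max(y=8.0e5, lo=2.0e5, hi=1.0e6)   # intensity
d_resolution = desirability_max(y=31000, lo=25000, hi=35000)   # resolving power
d_scan       = desirability_min(y=0.4,   lo=0.2,   hi=1.0)     # scan time (s)

# Overall desirability: geometric mean, so any zero response kills the candidate
D = (d_signal * d_resolution * d_scan) ** (1.0 / 3.0)
```

Optimization then searches the design space for the setting with the largest D, trading sensitivity and resolution against scan time in a single objective.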
Fujita, Hiroto; Kataoka, Yuka; Tobita, Seiji; Kuwahara, Masayasu; Sugimoto, Naoki
2016-07-19
We have developed a novel RNA detection method, termed signal amplification by ternary initiation complexes (SATIC), in which an analyte sample is simply mixed with the relevant reagents and allowed to stand for a short time under isothermal conditions (37 °C). The advantage of the technique is that there is no requirement for (i) heat annealing, (ii) thermal cycling during the reaction, (iii) a reverse transcription step, or (iv) enzymatic or mechanical fragmentation of the target RNA. SATIC involves the formation of a ternary initiation complex between the target RNA, a circular DNA template, and a DNA primer, followed by rolling circle amplification (RCA) to generate multiple copies of G-quadruplex (G4) on a long DNA strand like beads on a string. The G4s can be specifically fluorescence-stained with N(3)-hydroxyethyl thioflavin T (ThT-HE), which emits weakly with single- and double-stranded RNA/DNA but strongly with parallel G4s. An improved dual SATIC system, which involves the formation of two different ternary initiation complexes in the RCA process, exhibited a wide quantitative detection range of 1-5000 pM. Furthermore, this enabled visual observation-based RNA detection, which is more rapid and convenient than conventional isothermal methods, such as reverse transcription-loop-mediated isothermal amplification, signal mediated amplification of RNA technology, and RNA-primed rolling circle amplification. Thus, SATIC methodology may serve as an on-site and real-time measurement technique for transcriptomic biomarkers for various diseases.
Hauben, Manfred; Hung, Eric Y.
2016-01-01
Introduction: There is an interest in methodologies to expeditiously detect credible signals of drug-induced pancreatitis. An example is the reported signal of pancreatitis with rasburicase emerging from a study [the ‘index publication’ (IP)] combining quantitative signal detection findings from a spontaneous reporting system (SRS) and electronic health records (EHRs). The signal was reportedly supported by a clinical review with a case series manuscript in progress. The reported signal is noteworthy, being initially classified as a false-positive finding for the chosen reference standard, but reclassified as a ‘clinically supported’ signal. Objective: This paper has dual objectives: to revisit the signal of rasburicase and acute pancreatitis and extend the original analysis via reexamination of its findings, in light of more contemporary data; and to motivate discussions on key issues in signal detection and evaluation, including recent findings from a major international pharmacovigilance research initiative. Methodology: We used the same methodology as the IP, including the same disproportionality analysis software/dataset for calculating observed to expected reporting frequencies (O/Es), Medical Dictionary for Regulatory Activities Preferred Term, and O/E metric/threshold combination defining a signal of disproportionate reporting. Baseline analysis results prompted supplementary analyses using alternative analytical choices. We performed a comprehensive literature search to identify additional published case reports of rasburicase and pancreatitis. Results: We could not replicate positive findings (e.g. a signal or statistic of disproportionate reporting) from the SRS data using the same algorithm, software, dataset and vendor specified in the IP. The reporting association was statistically highlighted in default and supplemental analysis when more sensitive forms of disproportionality analysis were used. 
Two of three reports in the FAERS database were assessed as likely duplicate reports. We did not identify any additional reports in the FAERS corresponding to the three cases identified in the IP using EHRs. We did not identify additional published reports of pancreatitis associated with rasburicase. Discussion: Our exercise stimulated interesting discussions of key points in signal detection and evaluation, including causality assessment, signal detection algorithm performance, pharmacovigilance terminology, duplicate reporting, mechanisms for communicating signals, the structure of the FAERS database, and recent results from a major international pharmacovigilance research initiative. PMID:27298720
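A disproportionality statistic of the kind discussed above (an observed-to-expected style metric) can be computed from a 2x2 table of report counts. The counts and the flagging rule below are illustrative, not FAERS data or the IP's exact algorithm:

```python
# Proportional reporting ratio (PRR) from a 2x2 contingency table of
# spontaneous reports. All counts are invented for illustration.
a = 12      # reports: drug of interest AND event of interest
b = 988     # drug of interest, other events
c = 450     # other drugs, event of interest
d = 98550   # other drugs, other events

# Event reporting rate for the drug, relative to the background rate
prr = (a / (a + b)) / (c / (c + d))

# A commonly cited rule of thumb flags PRR > 2 with at least 3 cases
# (full criteria typically also include a chi-squared component)
signal_flagged = prr > 2 and a >= 3
```

With these counts the event is reported 2.64 times more often for the drug than expected, so the rule flags a signal of disproportionate reporting; whether it is a credible signal is a matter for clinical review, as the discussion above emphasizes.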
Method for laser spot welding monitoring
NASA Astrophysics Data System (ADS)
Manassero, Giorgio
1994-09-01
As more powerful solid-state laser sources appear on the market, new applications become technically possible and economically important. For every process, a preliminary optimization phase is necessary. The main parameters used for a welding application with a high-power Nd:YAG laser are: pulse energy, pulse width, repetition rate and process duration or speed. In this paper an experimental methodology for the development of an electro-optical laser spot welding monitoring system is presented. The electromagnetic emission from the molten pool was observed and measured with appropriate sensors. The statistical method 'Parameter Design' was used to obtain an accurate analysis of the process parameters that influence process results. A laser station with a solid-state laser coupled to an optical fiber (1 mm in diameter) was utilized for the welding tests. The main material used for the experimental plan was zinc-coated steel sheet 0.8 mm thick. This material and the related spot welding technique are extensively used in the automotive industry; therefore, the introduction of laser technology into the production line will improve the quality of the final product. A correlation between sensor signals and 'through or not through' welds was assessed. The investigation has furthermore shown the necessity, for modern laser production systems, of using multisensor heads for process monitoring or control with more advanced signal processing procedures.
Are Happy Faces Attractive? The Roles of Early vs. Late Processing
Sun, Delin; Chan, Chetwyn C. H.; Fan, Jintu; Wu, Yi; Lee, Tatia M. C.
2015-01-01
Facial attractiveness is closely related to romantic love. To understand if the neural underpinnings of perceived facial attractiveness and facial expression are similar constructs, we recorded neural signals using an event-related potential (ERP) methodology for 20 participants who were viewing faces with varied attractiveness and expressions. We found that attractiveness and expression were reflected by two early components, P2-lateral (P2l) and P2-medial (P2m), respectively; their interaction effect was reflected by LPP, a late component. The findings suggested that facial attractiveness and expression are first processed in parallel for discrimination between stimuli. After the initial processing, more attentional resources are allocated to the faces with the most positive or most negative valence in both the attractiveness and expression dimensions. The findings contribute to the theoretical model of face perception. PMID:26648885
Margaritelis, Nikos V; Cobley, James N; Paschalis, Vassilis; Veskoukis, Aristidis S; Theodorou, Anastasios A; Kyparos, Antonios; Nikolaidis, Michalis G
2016-04-01
The equivocal role of reactive species and redox signaling in exercise responses and adaptations is an example clearly showing the inadequacy of current redox biology research to shed light on fundamental biological processes in vivo. Part of the answer probably lies in the extreme complexity of in vivo redox biology and the limitations of the currently applied methodological and experimental tools. We propose six fundamental principles that should be considered in future studies to mechanistically link reactive species production to exercise responses or adaptations: 1) identify and quantify the reactive species, 2) determine the potential signaling properties of the reactive species, 3) detect the sources of reactive species, 4) locate the domain modified and verify the (ir)reversibility of post-translational modifications, 5) establish causality between redox and physiological measurements, 6) use selective and targeted antioxidants. Fulfilling these principles requires an idealized human experimental setting, which is certainly a utopia. Thus, researchers should choose to satisfy those principles which, based on scientific evidence, are most critical for their specific research question. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Di Anibal, Carolina V.; Marsal, Lluís F.; Callao, M. Pilar; Ruisánchez, Itziar
2012-02-01
Raman spectroscopy combined with multivariate analysis was evaluated as a tool for detecting Sudan I dye in culinary spices. Three Raman modalities were studied: normal Raman, FT-Raman and SERS. The results show that SERS is the most appropriate modality, capable of providing a proper Raman signal when a complex matrix is analyzed. To remove the spectral noise and background, Savitzky-Golay smoothing with polynomial baseline correction and wavelet transform were applied. Finally, to check whether unadulterated samples can be differentiated from samples adulterated with Sudan I dye, an exploratory analysis such as principal component analysis (PCA) was applied to raw data and to data processed with the two mentioned strategies. The results obtained by PCA show that Raman spectra need to be properly treated if useful information is to be obtained, and both spectral treatments are appropriate for processing the Raman signal. The proposed methodology shows that SERS combined with appropriate spectral treatment can be used as a practical screening tool to distinguish samples suspected of being adulterated with Sudan I dye.
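The preprocessing-then-PCA pipeline can be sketched with synthetic spectra. A least-squares polynomial fit stands in for the baseline-correction step here, and the band position, background shape, and class sizes are invented:

```python
import numpy as np

def remove_baseline(spectrum, x, degree=3):
    """Subtract a least-squares polynomial baseline (a simple stand-in
    for the polynomial baseline-correction step)."""
    coeffs = np.polyfit(x, spectrum, degree)
    return spectrum - np.polyval(coeffs, x)

# Synthetic "spectra": both classes share a curved background; the adulterated
# class carries an extra narrow band at channel 300
rng = np.random.default_rng(3)
x = np.arange(600.0)
band = np.exp(-0.5 * ((x - 300) / 5.0) ** 2)
background = 1e-4 * (x - 250) ** 2

spectra, labels = [], []
for i in range(20):
    s = background + rng.normal(0, 0.05, x.size)
    if i >= 10:                       # last 10 samples are "adulterated"
        s = s + 2.0 * band
    labels.append(int(i >= 10))
    spectra.append(remove_baseline(s, x))
X = np.array(spectra)

# PCA via SVD on mean-centered data; PC1 should separate the two classes
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]
sep = np.sign(pc1[:10].mean()) != np.sign(pc1[10:].mean())
```

Without baseline removal the curved background would dominate the leading components; after correction the adulteration band drives PC1 and the two groups fall on opposite sides of zero.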
Wavelet-based characterization of gait signal for neurological abnormalities.
Baratin, E; Sugavaneswaran, L; Umapathy, K; Ioana, C; Krishnan, S
2015-02-01
Studies conducted by the World Health Organization (WHO) indicate that over one billion people suffer from neurological disorders worldwide, and the lack of efficient diagnosis procedures affects their therapeutic interventions. Characterizing certain pathologies of motor control to facilitate their diagnosis can be useful in quantitatively monitoring disease progression and efficient treatment planning. To this end, we introduce a wavelet-based scheme for effective characterization of gait associated with certain neurological disorders. In addition, since the data were recorded from a dynamic process, this work also investigates the need for gait signal re-sampling prior to identification of signal markers in the presence of pathologies. To facilitate automated discrimination of gait data, certain characteristic features are extracted from the wavelet-transformed signals. The performance of the proposed approach was evaluated using a database consisting of 15 Parkinson's disease (PD), 20 Huntington's disease (HD), 13 Amyotrophic lateral sclerosis (ALS) and 16 healthy control subjects, and an average classification accuracy of 85% is achieved using an unbiased cross-validation strategy. The obtained results demonstrate the potential of the proposed methodology for computer-aided diagnosis and automatic characterization of certain neurological disorders. Copyright © 2015 Elsevier B.V. All rights reserved.
Global Infrasound Association Based on Probabilistic Clutter Categorization
NASA Astrophysics Data System (ADS)
Arora, Nimar; Mialle, Pierrick
2016-04-01
The IDC advances its methods and continuously improves its automatic system for infrasound technology. The IDC focuses on enhancing the automatic system for the identification of valid signals and the optimization of the network detection threshold by identifying ways to refine signal characterization methodology and association criteria. An objective of this study is to reduce the number of associated infrasound arrivals that are rejected from the automatic bulletins when generating the reviewed event bulletins. Indeed, a considerable number of signal detections are due to local clutter sources such as microbaroms, waterfalls, dams, gas flares, surf (ocean breaking waves) etc. These sources are either too diffuse or too local to form events. Worse still, the repetitive nature of this clutter leads to a large number of false event hypotheses due to the random matching of clutter at multiple stations. Previous studies, for example [1], have worked on categorization of clutter using long-term trends in detection azimuth, frequency, and amplitude at each station. In this work we continue the same line of reasoning to build a probabilistic model of clutter that is used as part of NETVISA [2], a Bayesian approach to network processing. The resulting model is a fusion of seismic, hydroacoustic and infrasound processing built on a unified probabilistic framework. References: [1] Infrasound categorization: towards a statistics-based approach. J. Vergoz, P. Gaillard, A. Le Pichon, N. Brachet, and L. Ceranna. ITW 2011. [2] NETVISA: Network Processing Vertically Integrated Seismic Analysis. N. S. Arora, S. Russell, and E. Sudderth. BSSA 2013.
PSGMiner: A modular software for polysomnographic analysis.
Umut, İlhan
2016-06-01
Sleep disorders affect a great percentage of the population. The diagnosis of these disorders is usually made by polysomnography. This paper details the development of new software to carry out feature extraction in order to perform robust analysis and classification of sleep events using polysomnographic data. The software, called PSGMiner, is a tool that visualizes, processes and classifies bioelectrical data. The purpose of this program is to provide researchers with a platform with which to test new hypotheses by creating tests to check for correlations that are not available in commercially available software. The software is freely available under the GPL3 License. PSGMiner is composed of a number of diverse modules, such as feature extraction, annotation, and machine learning modules, all of which are accessible from the main module. Using the software, it is possible to extract features of polysomnography using digital signal processing and statistical methods and to perform different analyses. The features can be classified through the use of five classification algorithms. PSGMiner offers an architecture designed for integrating new methods. Automatic scoring, which is available in almost all commercial PSG software, is not inherently available in this program, though it can be implemented by two different methodologies (machine learning and algorithms). While similar software focuses on a certain signal or event and is composed of a small number of modules with no possibility of expansion, the software introduced here can handle all polysomnographic signals and events. The software simplifies the processing of polysomnographic signals for researchers and physicians who are not experts in computer programming. It can find correlations between different events which could help predict an oncoming event such as sleep apnea. The software could also be used for educational purposes. Copyright © 2016 Elsevier Ltd. All rights reserved.
The time course of individual face recognition: A pattern analysis of ERP signals.
Nemrodov, Dan; Niemeier, Matthias; Mok, Jenkin Ngo Yin; Nestor, Adrian
2016-05-15
An extensive body of work documents the time course of neural face processing in the human visual cortex. However, the majority of this work has focused on specific temporal landmarks, such as the N170 and N250 components, derived through univariate analyses of EEG data. Here, we take on a broader evaluation of ERP signals related to individual face recognition as we attempt to move beyond the leading theoretical and methodological framework through the application of pattern analysis to ERP data. Specifically, we investigate the spatiotemporal profile of identity recognition across variation in emotional expression. To this end, we apply pattern classification to ERP signals both in time, for any single electrode, and in space, across multiple electrodes. Our results confirm the significance of traditional ERP components in face processing. At the same time, though, they support the idea that the temporal profile of face recognition is incompletely described by such components. First, we show that signals associated with different facial identities can be discriminated from each other outside the scope of these components, as early as 70 ms following stimulus presentation. Next, electrodes associated with traditional ERP components as well as, critically, those not associated with such components are shown to contribute information to stimulus discriminability. And last, the levels of ERP-based pattern discrimination are found to correlate with recognition accuracy across subjects, confirming the relevance of these methods for bridging brain and behavior data. Altogether, the current results shed new light on the fine-grained time course of neural face processing and showcase the value of novel methods of pattern analysis for investigating fundamental aspects of visual recognition. Copyright © 2016 Elsevier Inc. All rights reserved.
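The study's specific classifiers and features are not given here; as a toy illustration of time-resolved pattern classification, the sketch below computes leave-one-out accuracy independently at each time point of synthetic two-class "trials", showing chance-level discrimination before the class means diverge and above-chance discrimination after:

```python
import random

def timepoint_accuracy(trials_a, trials_b, t):
    """Leave-one-out nearest-class-mean accuracy at a single time point."""
    xs = [(tr[t], 0) for tr in trials_a] + [(tr[t], 1) for tr in trials_b]
    correct = 0
    for i, (x, y) in enumerate(xs):
        rest = [v for j, v in enumerate(xs) if j != i]
        mu0 = sum(v for v, l in rest if l == 0) / sum(1 for _, l in rest if l == 0)
        mu1 = sum(v for v, l in rest if l == 1) / sum(1 for _, l in rest if l == 1)
        pred = 0 if abs(x - mu0) < abs(x - mu1) else 1
        correct += pred == y
    return correct / len(xs)

random.seed(1)
T = 100
# Two synthetic "identities": their mean signals diverge only after sample 40.
make = lambda shift: [[random.gauss(shift if t > 40 else 0.0, 1.0)
                       for t in range(T)] for _ in range(40)]
a, b = make(0.0), make(1.5)
early = timepoint_accuracy(a, b, 20)  # before divergence: near chance
late = timepoint_accuracy(a, b, 80)   # after divergence: above chance
print(round(early, 2), round(late, 2))
```

Plotting such an accuracy curve over all time points is what reveals discrimination outside the classical component windows.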
NASA Astrophysics Data System (ADS)
Sakellariou, J. S.; Fassois, S. D.
2006-11-01
A stochastic output error (OE) vibration-based methodology for damage detection and assessment (localization and quantification) in structures under earthquake excitation is introduced. The methodology is intended for assessing the state of a structure following potential damage occurrence by exploiting vibration signal measurements produced by low-level earthquake excitations. It is based upon (a) stochastic OE model identification, (b) statistical hypothesis testing procedures for damage detection, and (c) a geometric method (GM) for damage assessment. The methodology's advantages include the effective use of the non-stationary and limited duration earthquake excitation, the handling of stochastic uncertainties, the tackling of the damage localization and quantification subproblems, the use of "small" size, simple and partial (in both the spatial and frequency bandwidth senses) identified OE-type models, and the use of a minimal number of measured vibration signals. Its feasibility and effectiveness are assessed via Monte Carlo experiments employing a simple simulation model of a 6 storey building. It is demonstrated that damage levels of 5% and 20% reduction in a storey's stiffness characteristics may be properly detected and assessed using noise-corrupted vibration signals.
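The paper's OE models and hypothesis tests are structural and multivariate; a much-reduced univariate stand-in can still illustrate the core detection idea of step (b): identify a model on healthy-state data and flag damage when the residual variance of new data under that model grows. Everything below (AR(1) dynamics, parameter values) is hypothetical:

```python
import random

def fit_ar1(x):
    """Least-squares AR(1) coefficient in x[t] ~ a * x[t-1] + e[t]."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

def residual_var(x, a):
    """Variance of one-step prediction residuals under coefficient a."""
    r = [x[t] - a * x[t - 1] for t in range(1, len(x))]
    return sum(v * v for v in r) / len(r)

def simulate_ar1(a, n, seed):
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(a * x[-1] + rng.gauss(0, 1))
    return x

healthy = simulate_ar1(0.8, 4000, seed=2)
a_ref = fit_ar1(healthy)                   # identified "healthy" model
var_ref = residual_var(healthy, a_ref)

same = simulate_ar1(0.8, 4000, seed=3)     # same dynamics: statistic near 1
changed = simulate_ar1(0.4, 4000, seed=4)  # altered dynamics: statistic > 1
F_same = residual_var(same, a_ref) / var_ref
F_changed = residual_var(changed, a_ref) / var_ref
print(round(F_same, 2), round(F_changed, 2))
```

In the actual methodology this variance-ratio comparison is formalized as a statistical hypothesis test, and the localization/quantification steps go further via the geometric method.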
Rundo, Francesco; Ortis, Alessandro
2018-01-01
Physiological signals are widely used to perform medical assessment for monitoring an extensive range of pathologies, usually related to cardio-vascular diseases. Among these, PhotoPlethysmoGraphy (PPG) and Electrocardiography (ECG) signals are the most widely employed. PPG is an emerging non-invasive measurement technique used to study blood volume pulsations through the detection and analysis of the back-scattered optical radiation coming from the skin. ECG is the process of recording the electrical activity of the heart over a period of time using electrodes placed on the skin. In the present paper we propose a physiological ECG/PPG “combo” pipeline using an innovative bio-inspired nonlinear system based on a reaction-diffusion mathematical model, implemented by means of the Cellular Neural Network (CNN) methodology, to filter the PPG signal by assigning a recognition score to the waveforms in the time series. The resulting “clean” PPG signal, free from distortion and artifacts, is used to validate for diagnostic purposes an ECG signal detected simultaneously for the same patient. The multisite combo PPG-ECG system proposed in this work overcomes the limitations of the state of the art in this field, providing a reliable system for assessing the above-mentioned physiological parameters and their monitoring over time for robust medical assessment. The proposed system has been validated and the results confirmed the robustness of the proposed approach. PMID:29385774
Rundo, Francesco; Conoci, Sabrina; Ortis, Alessandro; Battiato, Sebastiano
2018-01-30
Physiological signals are widely used to perform medical assessment for monitoring an extensive range of pathologies, usually related to cardio-vascular diseases. Among these, PhotoPlethysmoGraphy (PPG) and Electrocardiography (ECG) signals are the most widely employed. PPG is an emerging non-invasive measurement technique used to study blood volume pulsations through the detection and analysis of the back-scattered optical radiation coming from the skin. ECG is the process of recording the electrical activity of the heart over a period of time using electrodes placed on the skin. In the present paper we propose a physiological ECG/PPG "combo" pipeline using an innovative bio-inspired nonlinear system based on a reaction-diffusion mathematical model, implemented by means of the Cellular Neural Network (CNN) methodology, to filter the PPG signal by assigning a recognition score to the waveforms in the time series. The resulting "clean" PPG signal, free from distortion and artifacts, is used to validate for diagnostic purposes an ECG signal detected simultaneously for the same patient. The multisite combo PPG-ECG system proposed in this work overcomes the limitations of the state of the art in this field, providing a reliable system for assessing the above-mentioned physiological parameters and their monitoring over time for robust medical assessment. The proposed system has been validated and the results confirmed the robustness of the proposed approach.
1992-09-21
describe systematic methodologies for selecting nonlinear transformations for blind equalization algorithms, and thus new types of cumulants...nonlinearity is inside the adaptive filter, i.e., the nonlinear filter or neural network. We describe methodologies for selecting nonlinear...which do not require any known training sequence during the startup period. The paper describes systematic methodologies for selecting the
Seeded Fault Bearing Experiments: Methodology and Data Acquisition
2011-06-01
electronics piezoelectric (IEPE) transducer. Constant-current biased transducers require AC coupling for the output signal. The ICP-type signal...the outer race. I/O: input/output; IEPE: integral electronics piezoelectric; LCD: liquid crystal display; P&D: Prognostics and Diagnostics; RMS: root
Local Variation of Hashtag Spike Trains and Popularity in Twitter
Sanlı, Ceyda; Lambiotte, Renaud
2015-01-01
We draw a parallel between hashtag time series and neuron spike trains. In each case, the process presents complex dynamic patterns including temporal correlations, burstiness, and other types of nonstationarity. We propose the adoption of the so-called local variation in order to uncover salient dynamical properties, while properly detrending for the time-dependent features of a signal. The methodology is tested on both real and randomized hashtag spike trains, and shows that popular hashtags present more regular, less bursty behavior, suggesting its potential use for predicting online popularity in social media. PMID:26161650
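The local variation statistic referenced here is, in the spike-train literature, computed from consecutive inter-event intervals; it is near 0 for regular trains, near 1 for a Poisson process, and above 1 for bursty trains. A minimal sketch:

```python
import random

def local_variation(intervals):
    """Local variation LV of a sequence of inter-event intervals:
    LV = 3/(n-1) * sum(((T_i - T_{i+1}) / (T_i + T_{i+1}))^2)."""
    pairs = list(zip(intervals, intervals[1:]))
    return (3 / len(pairs)) * sum(((a - b) / (a + b)) ** 2 for a, b in pairs)

rng = random.Random(5)
regular = [1.0] * 500                                 # perfectly regular train
poisson = [rng.expovariate(1.0) for _ in range(500)]  # Poisson-like train
print(round(local_variation(regular), 2), round(local_variation(poisson), 2))
```

Because each term compares only adjacent intervals, LV is insensitive to slow rate changes, which is the "detrending" property exploited for hashtag popularity.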
A survey of decision tree classifier methodology
NASA Technical Reports Server (NTRS)
Safavian, S. R.; Landgrebe, David
1991-01-01
Decision tree classifiers (DTCs) are used successfully in many diverse areas such as radar signal classification, character recognition, remote sensing, medical diagnosis, expert systems, and speech recognition. Perhaps the most important feature of DTCs is their capability to break down a complex decision-making process into a collection of simpler decisions, thus providing a solution which is often easier to interpret. A survey of current methods is presented for DTC designs and the various existing issues. After considering potential advantages of DTCs over single-stage classifiers, the subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed.
Research on Multi-Temporal PolInSAR Modeling and Applications
NASA Astrophysics Data System (ADS)
Hong, Wen; Pottier, Eric; Chen, Erxue
2014-11-01
In the study of theory and processing methodology, we apply accurate topographic phase to the Freeman-Durden decomposition for PolInSAR data. On the other hand, we present a TomoSAR imaging method based on convex optimization regularization theory. The target decomposition and reconstruction performance will be evaluated by multi-temporal L- and P-band fully polarimetric images acquired in BioSAR campaigns. In the study of hybrid Quad-Pol system performance, we analyse the expression of range ambiguity to signal ratio (RASR) in this architecture. Simulations are used to verify its advantage in the improvement of range ambiguities.
A survey of decision tree classifier methodology
NASA Technical Reports Server (NTRS)
Safavian, S. Rasoul; Landgrebe, David
1990-01-01
Decision Tree Classifiers (DTC's) are used successfully in many diverse areas such as radar signal classification, character recognition, remote sensing, medical diagnosis, expert systems, and speech recognition. Perhaps the most important feature of DTC's is their capability to break down a complex decision-making process into a collection of simpler decisions, thus providing a solution which is often easier to interpret. A survey of current methods is presented for DTC designs and the various existing issues. After considering potential advantages of DTC's over single-stage classifiers, the subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed.
NASA Technical Reports Server (NTRS)
Kapasouris, Petros
1988-01-01
A systematic control design methodology is introduced for multi-input/multi-output systems with multiple saturations. The methodology can be applied to stable and unstable open loop plants with magnitude and/or rate control saturations and to systems in which state limitations are desired. This new methodology is a substantial improvement over previous heuristic single-input/single-output approaches. The idea is to introduce a supervisor loop so that when the references and/or disturbances are sufficiently small, the control system operates linearly as designed. For signals large enough to cause saturations, the control law is modified in such a way as to ensure stability and to preserve, to the extent possible, the behavior of the linear control design. Key benefits of this methodology are: the modified compensator never produces saturating control signals, integrators and/or slow dynamics in the compensator never wind up, the directional properties of the controls are maintained, and the closed loop system has certain guaranteed stability properties. The advantages of the new design methodology are illustrated by numerous simulations, including the multivariable longitudinal control of modified models of the F-8 (stable) and F-16 (unstable) aircraft.
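The actual methodology modifies the control law with guaranteed stability properties; a deliberately simplified caricature of one of its key ideas, preserving the direction of the control vector while preventing saturation, is to scale the commanded controls uniformly (the function and limits below are hypothetical illustrations, not the paper's algorithm):

```python
def govern(u, u_max):
    """Uniformly scale a multivariable control vector so that no channel
    exceeds its magnitude limit, preserving the vector's direction."""
    worst = max(abs(ui) / u_max for ui in u)
    scale = 1.0 if worst <= 1.0 else 1.0 / worst
    return [scale * ui for ui in u]

print(govern([0.5, -0.2], u_max=1.0))  # within limits: passed through
print(govern([3.0, -1.5], u_max=1.0))  # scaled by 1/3: direction preserved
```

Clipping each channel independently would instead distort the control direction, which is exactly the pathology the supervisor-loop design avoids.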
NASA Astrophysics Data System (ADS)
Mozaffarilegha, Marjan; Esteki, Ali; Ahadi, Mohsen; Nazeri, Ahmadreza
The speech-evoked auditory brainstem response (sABR) shows how complex sounds such as speech and music are processed in the auditory system. Speech-ABR could be used to evaluate particular impairments and improvements in auditory processing. Many researchers have used linear approaches for characterizing different components of the sABR signal, whereas nonlinear techniques have been applied less commonly. The primary aim of the present study is to examine the underlying dynamics of normal sABR signals. The secondary goal is to evaluate whether some chaotic features exist in this signal. We have presented a methodology for determining various components of sABR signals by performing Ensemble Empirical Mode Decomposition (EEMD) to get the intrinsic mode functions (IMFs). Then, composite multiscale entropy (CMSE), the largest Lyapunov exponent (LLE) and deterministic nonlinear prediction are computed for each extracted IMF. EEMD decomposes the sABR signal into five modes and a residue. The CMSE results of sABR signals obtained from 40 healthy people showed that the 1st and 2nd IMFs were similar to white noise, the 3rd IMF to a synthetic chaotic time series, and the 4th and 5th IMFs to sine waveforms. LLE analysis showed positive values for the 3rd IMF. Moreover, the 1st and 2nd IMFs showed overlaps with surrogate data, and the 3rd, 4th and 5th IMFs showed no overlap with corresponding surrogate data. Results showed the presence of noisy, chaotic and deterministic components in the signal, which respectively corresponded to the 1st and 2nd IMFs, the 3rd IMF, and the 4th and 5th IMFs. While these findings provide supportive evidence of the chaos conjecture for the 3rd IMF, they do not confirm any such claims. However, they provide a first step towards an understanding of the nonlinear behavior of auditory system dynamics at the brainstem level.
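Composite multiscale entropy combines coarse-graining with sample entropy; a compact sketch of both ingredients is below (m=2 and r=0.2 are conventional parameter choices, not necessarily the study's, and the test signals are synthetic):

```python
import math
import random

def coarse_grain(x, scale, offset=0):
    """Non-overlapping window averages; CMSE averages sample entropy
    over all offsets at each scale."""
    return [sum(x[i:i + scale]) / scale
            for i in range(offset, len(x) - scale + 1, scale)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(#length-(m+1) matches / #length-m matches)."""
    mean = sum(x) / len(x)
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / len(x))
    tol = r * sd
    def matches(mm):
        tpl = [x[i:i + mm] for i in range(len(x) - mm)]
        return sum(1 for i in range(len(tpl)) for j in range(i + 1, len(tpl))
                   if max(abs(a - b) for a, b in zip(tpl[i], tpl[j])) <= tol)
    return -math.log(matches(m + 1) / matches(m))

rng = random.Random(7)
noise = [rng.gauss(0, 1) for _ in range(400)]
tone = [math.sin(0.3 * i) for i in range(400)]
print(round(sample_entropy(noise), 2))                   # high: irregular
print(round(sample_entropy(tone), 2))                    # low: deterministic
print(round(sample_entropy(coarse_grain(noise, 2)), 2))  # scale-2 entropy
```

White-noise-like IMFs stay high-entropy across scales, whereas deterministic sine-like IMFs score low, which is the contrast the CMSE comparison in the study exploits.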
SETI - The search for extraterrestrial intelligence - Plans and rationale
NASA Technical Reports Server (NTRS)
Wolfe, J. H.; Billingham, J.; Edelson, R. E.; Crow, R. B.; Gulkis, S.; Olsen, E. T.; Oliver, B. M.; Peterson, A. M.
1981-01-01
The methodology and instrumentation of a 10 yr search for extraterrestrial intelligence (SETI) program by NASA, comprising 5 yr for instrumentation development and 5 yr for observations, is described. A full sky survey in two polarizations between 1.2 and 10 GHz with resolution binwidths down to 32 Hz, and a two-polarization scan between 1.2 and 3 GHz with resolution binwidths down to 1 Hz of 700 nearby solar-type stars within 20 light years of earth, will extend the sensitivity of previous surveys by 300 times and cover 20,000 times more frequency space. EM signals are perceived as the only means for detecting life outside the solar system, and the SETI effort is driven by the empirical experience that once a physical process has been observed to occur, its occurrence elsewhere is assured. Further discussion is given of the history of searches for life in the Universe, the SETI search strategy, instrumentation, and signal identification.
MNE software for processing MEG and EEG data
Gramfort, A.; Luessi, M.; Larson, E.; Engemann, D.; Strohmeier, D.; Brodbeck, C.; Parkkonen, L.; Hämäläinen, M.
2013-01-01
Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals originating from neural currents in the brain. Using these signals to characterize and locate brain activity is a challenging task, as evidenced by several decades of methodological contributions. MNE, whose name stems from its capability to compute cortically-constrained minimum-norm current estimates from M/EEG data, is a software package that provides comprehensive analysis tools and workflows including preprocessing, source estimation, time–frequency analysis, statistical analysis, and several methods to estimate functional connectivity between distributed brain regions. The present paper gives detailed information about the MNE package and describes typical use cases while also warning about potential caveats in analysis. The MNE package is a collaborative effort of multiple institutes striving to implement and share best methods and to facilitate distribution of analysis pipelines to advance reproducibility of research. Full documentation is available at http://martinos.org/mne. PMID:24161808
Ishii, Jun; Fukuda, Nobuo; Tanaka, Tsutomu; Ogino, Chiaki; Kondo, Akihiko
2010-05-01
For elucidating protein–protein interactions, many methodologies have been developed during the past two decades. For investigation of interactions inside cells under physiological conditions, yeast is an attractive organism with which to quickly screen for promising candidates using versatile genetic technologies, and various types of approaches are now available. Among them, a variety of unique systems using the guanine nucleotide-binding protein (G-protein) signaling pathway in yeast have been established to investigate the interactions of proteins for biological study and pharmaceutical research. G-proteins involved in various cellular processes are mainly divided into two groups: small monomeric G-proteins, and heterotrimeric G-proteins. In this minireview, we summarize the basic principles and applications of yeast-based screening systems using these two types of G-protein, which are typically used for elucidating biological protein interactions but are differentiated from traditional yeast two-hybrid systems.
Comparison of two drug safety signals in a pharmacovigilance data mining framework.
Tubert-Bitter, Pascale; Bégaud, Bernard; Ahmed, Ismaïl
2016-04-01
Since adverse drug reactions are a major public health concern, early detection of drug safety signals has become a top priority for regulatory agencies and the pharmaceutical industry. Quantitative methods for analyzing spontaneous reporting material recorded in pharmacovigilance databases through data mining have been proposed in recent decades and are increasingly used to flag potential safety problems. While automated data mining is motivated by the usually huge size of pharmacovigilance databases, it does not systematically produce relevant alerts. Moreover, each detected signal requires appropriate assessment that may involve investigation of the whole therapeutic class. The goal of this article is to provide a methodology for comparing two detected signals. It is nested within the automated surveillance framework in that (1) no extra information is required and (2) no simple inference on the actual risks can be extrapolated from spontaneous reporting data. We designed our methodology on the basis of two classical methods used for automated signal detection: the Bayesian Gamma Poisson Shrinker and the frequentist Proportional Reporting Ratio. A simulation study was conducted to assess the performances of both proposed methods. The latter were used to compare cardiovascular signals for two HIV treatments from the French pharmacovigilance database. © The Author(s) 2012.
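Of the two methods named, the frequentist Proportional Reporting Ratio has a simple closed form on a 2x2 drug/event contingency table; a minimal sketch with the standard delta-method confidence interval follows (the counts are hypothetical, not from the French database):

```python
import math

def prr(a, b, c, d):
    """Proportional Reporting Ratio for a 2x2 table:
    a = target event, drug of interest;  b = other events, drug of interest;
    c = target event, other drugs;       d = other events, other drugs."""
    return (a / (a + b)) / (c / (c + d))

def prr_ci(a, b, c, d, z=1.96):
    """Approximate 95% CI via the usual variance estimate of ln(PRR)."""
    lp = math.log(prr(a, b, c, d))
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    return math.exp(lp - z * se), math.exp(lp + z * se)

# Hypothetical counts: the event is reported 4x as often with the drug.
print(round(prr(40, 960, 100, 9900), 2))
print(tuple(round(v, 2) for v in prr_ci(40, 960, 100, 9900)))
```

Comparing two detected signals, as the article proposes, then amounts to contrasting two such statistics rather than judging each against a fixed threshold.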
Vargas, E; Ruiz, M A; Campuzano, S; Reviejo, A J; Pingarrón, J M
2016-03-31
A non-destructive, rapid and simple-to-use sensing method for the direct determination of glucose in non-processed fruits is described. The strategy involved on-line microdialysis sampling coupled with a continuous flow system with amperometric detection at an enzymatic biosensor. Apart from the direct determination of glucose in fruit juices and blended fruits, this work describes for the first time the successful application of an enzymatic biosensor-based electrochemical approach to the non-invasive determination of glucose in raw fruits. The methodology correlates, through a previous calibration set-up, the amperometric signal generated from glucose in non-processed fruits with its content in % (w/w). The comparison of the results obtained using the proposed approach in different fruits with those provided by another method, involving the same commercial biosensor as amperometric detector in stirred solutions, pointed out that there were no significant differences. Moreover, in comparison with other available methodologies, this microdialysis-coupled continuous flow system amperometric biosensor-based procedure features straightforward sample preparation, low cost, reduced assay time (sampling rate of 7 h(-1)) and ease of automation. Copyright © 2016 Elsevier B.V. All rights reserved.
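The calibration step described, mapping the amperometric signal to glucose content in % (w/w), amounts to fitting and then inverting a calibration line; a minimal sketch follows (the standards and current values below are invented for illustration, not the paper's data):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for a calibration curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical calibration: current (nA) measured for glucose standards (% w/w).
standards = [0.5, 1.0, 2.0, 4.0, 8.0]
currents = [12.1, 24.3, 47.8, 96.2, 191.5]
slope, intercept = fit_line(standards, currents)

def glucose_from_current(i_na):
    """Invert the calibration to report glucose content in % (w/w)."""
    return (i_na - intercept) / slope

print(round(glucose_from_current(60.0), 2))
```

An unknown fruit's steady-state current is then converted to % (w/w) directly, which is what makes the on-line measurement non-destructive.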
Empirical Mode Decomposition and Neural Networks on FPGA for Fault Diagnosis in Induction Motors
Garcia-Perez, Arturo; Osornio-Rios, Roque Alfredo; Romero-Troncoso, Rene de Jesus
2014-01-01
Nowadays, many industrial applications require online systems that combine several processing techniques in order to offer solutions to complex problems, such as the detection and classification of multiple faults in induction motors. In this work, a novel digital structure to implement the empirical mode decomposition (EMD) for processing nonstationary and nonlinear signals using the full spline-cubic function is presented; besides, it is combined with an adaptive linear network (ADALINE)-based frequency estimator and a feedforward neural network (FFNN)-based classifier to provide an intelligent methodology for the automatic diagnosis, during the startup transient, of motor faults such as one and two broken rotor bars, bearing defects, and unbalance. Moreover, the overall methodology implementation into a field-programmable gate array (FPGA) allows an online and real-time operation, thanks to its parallelism and high-performance capabilities as a system-on-a-chip (SoC) solution. The detection and classification results show the effectiveness of the proposed fused techniques; besides, the high precision and minimum resource usage of the developed digital structures make them a suitable and low-cost solution for this and many other industrial applications. PMID:24678281
Empirical mode decomposition and neural networks on FPGA for fault diagnosis in induction motors.
Camarena-Martinez, David; Valtierra-Rodriguez, Martin; Garcia-Perez, Arturo; Osornio-Rios, Roque Alfredo; Romero-Troncoso, Rene de Jesus
2014-01-01
Nowadays, many industrial applications require online systems that combine several processing techniques in order to offer solutions to complex problems, such as the detection and classification of multiple faults in induction motors. In this work, a novel digital structure to implement the empirical mode decomposition (EMD) for processing nonstationary and nonlinear signals using the full spline-cubic function is presented; besides, it is combined with an adaptive linear network (ADALINE)-based frequency estimator and a feedforward neural network (FFNN)-based classifier to provide an intelligent methodology for the automatic diagnosis, during the startup transient, of motor faults such as one and two broken rotor bars, bearing defects, and unbalance. Moreover, the overall methodology implementation into a field-programmable gate array (FPGA) allows an online and real-time operation, thanks to its parallelism and high-performance capabilities as a system-on-a-chip (SoC) solution. The detection and classification results show the effectiveness of the proposed fused techniques; besides, the high precision and minimum resource usage of the developed digital structures make them a suitable and low-cost solution for this and many other industrial applications.
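The ADALINE-based frequency estimator is not specified in detail in the abstract; a closely related minimal sketch is an LMS-trained ADALINE with sine/cosine regressors at a candidate frequency, whose converged weights give that component's amplitude (the signal, step size, and frequency below are synthetic assumptions):

```python
import math

def adaline_amplitude(signal, freq, fs, mu=0.01):
    """LMS-trained ADALINE with sine/cosine inputs at a known frequency;
    the converged weight vector's norm is that component's amplitude."""
    w = [0.0, 0.0]
    for n, y in enumerate(signal):
        x = [math.sin(2 * math.pi * freq * n / fs),
             math.cos(2 * math.pi * freq * n / fs)]
        e = y - (w[0] * x[0] + w[1] * x[1])       # prediction error
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]  # LMS update
    return math.hypot(w[0], w[1])

fs = 1000
# Synthetic startup-transient-like component: 60 Hz, amplitude 1.5.
sig = [1.5 * math.sin(2 * math.pi * 60 * n / fs + 0.4) for n in range(5000)]
print(round(adaline_amplitude(sig, 60, fs), 2))
```

Sweeping the candidate frequency over fault-related bands yields amplitude features that a classifier (the FFNN in the paper) can consume.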
Inferring Molecular Processes Heterogeneity from Transcriptional Data.
Gogolewski, Krzysztof; Wronowska, Weronika; Lech, Agnieszka; Lesyng, Bogdan; Gambin, Anna
2017-01-01
RNA microarrays and RNA-seq are nowadays standard technologies for studying the transcriptional activity of cells. Most studies focus on tracking transcriptional changes caused by specific experimental conditions. Information on gene up- and downregulation is evaluated by analyzing the behaviour of a relatively large population of cells and averaging its properties. However, even assuming perfect sample homogeneity, different subpopulations of cells can exhibit diverse transcriptomic profiles, as they may follow different regulatory/signaling pathways. The purpose of this study is to provide a novel methodological scheme to account for possible internal, functional heterogeneity in homogeneous cell lines, including cancer ones. We propose a novel computational method to infer the proportion between subpopulations of cells that manifest various functional behaviours in a given sample. Our method was validated using two datasets from RNA microarray experiments. Both experiments aimed to examine cell viability in specific experimental conditions. The presented methodology can be easily extended to RNA-seq data as well as other molecular processes. Moreover, it complements standard tools to indicate the most important networks from transcriptomic data, and in particular could be useful in the analysis of cancer cell lines affected by biologically active compounds or drugs.
Inferring Molecular Processes Heterogeneity from Transcriptional Data
Wronowska, Weronika; Lesyng, Bogdan; Gambin, Anna
2017-01-01
RNA microarrays and RNA-seq are nowadays standard technologies for studying the transcriptional activity of cells. Most studies focus on tracking transcriptional changes caused by specific experimental conditions. Information on gene up- and downregulation is evaluated by analyzing the behaviour of a relatively large population of cells and averaging its properties. However, even assuming perfect sample homogeneity, different subpopulations of cells can exhibit diverse transcriptomic profiles, as they may follow different regulatory/signaling pathways. The purpose of this study is to provide a novel methodological scheme to account for possible internal, functional heterogeneity in homogeneous cell lines, including cancer ones. We propose a novel computational method to infer the proportion between subpopulations of cells that manifest various functional behaviours in a given sample. Our method was validated using two datasets from RNA microarray experiments. Both experiments aimed to examine cell viability in specific experimental conditions. The presented methodology can be easily extended to RNA-seq data as well as other molecular processes. Moreover, it complements standard tools to indicate the most important networks from transcriptomic data, and in particular could be useful in the analysis of cancer cell lines affected by biologically active compounds or drugs. PMID:29362714
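The inference method itself is not given in the abstract; for two subpopulations with known expression profiles, the proportion-estimation idea can be sketched as a least-squares fit of the mixing weight (the 5-gene profiles below are hypothetical):

```python
def mixture_proportion(mixed, prof_a, prof_b):
    """Least-squares estimate of p in mixed ~ p*A + (1-p)*B, clipped to [0, 1]."""
    num = sum((y - b) * (a - b) for y, a, b in zip(mixed, prof_a, prof_b))
    den = sum((a - b) ** 2 for a, b in zip(prof_a, prof_b))
    return max(0.0, min(1.0, num / den))

# Hypothetical 5-gene expression profiles for two functional subpopulations.
A = [5.0, 1.0, 3.0, 8.0, 2.0]
B = [1.0, 6.0, 3.5, 2.0, 7.0]
mixed = [0.3 * a + 0.7 * b for a, b in zip(A, B)]  # 30/70 mixture
print(round(mixture_proportion(mixed, A, B), 2))
```

Real deconvolution must additionally estimate or constrain the subpopulation profiles themselves, which is where the paper's methodology goes beyond this sketch.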
Computational Electrocardiography: Revisiting Holter ECG Monitoring.
Deserno, Thomas M; Marx, Nikolaus
2016-08-05
Since 1942, when Goldberger introduced the 12-lead electrocardiography (ECG), this diagnostic method has not been changed. After 70 years of technologic developments, we revisit Holter ECG from recording to understanding. A fundamental change is foreseen towards "computational ECG" (CECG), where continuous monitoring produces big data volumes that are impossible to inspect conventionally but require efficient computational methods. We draw parallels between CECG and computational biology, in particular with respect to computed tomography, computed radiology, and computed photography. From that, we identify the technology and methodology needed for CECG. Real-time transfer of raw data into meaningful parameters that are tracked over time will allow prediction of serious events, such as sudden cardiac death. Evolved from Holter's technology, portable smartphones with Bluetooth-connected textile-embedded sensors will capture noisy raw data (recording), process meaningful parameters over time (analysis), and transfer them to cloud services for sharing (handling), predicting serious events, and alarming (understanding). To make this happen, the following fields need more research: i) signal processing, ii) cycle decomposition, iii) cycle normalization, iv) cycle modeling, v) clinical parameter computation, vi) physiological modeling, and vii) event prediction. We shall start immediately developing methodology for CECG analysis and understanding.
Intrasulcal Electrocorticography in Macaque Monkeys with Minimally Invasive Neurosurgical Protocols
Matsuo, Takeshi; Kawasaki, Keisuke; Osada, Takahiro; Sawahata, Hirohito; Suzuki, Takafumi; Shibata, Masahiro; Miyakawa, Naohisa; Nakahara, Kiyoshi; Iijima, Atsuhiko; Sato, Noboru; Kawai, Kensuke; Saito, Nobuhito; Hasegawa, Isao
2011-01-01
Electrocorticography (ECoG), multichannel brain-surface recording and stimulation with probe electrode arrays, has become a potent methodology not only for clinical neurosurgery but also for basic neuroscience using animal models. The highly evolved primate's brain has deep cerebral sulci, and both gyral and intrasulcal cortical regions have been implicated in important functional processes. However, direct experimental access is typically limited to gyral regions, since placing probes into sulci is difficult without damaging the surrounding tissues. Here we describe a novel methodology for intrasulcal ECoG in macaque monkeys. We designed and fabricated ultra-thin flexible probes for macaques with micro-electro-mechanical systems technology. We developed minimally invasive operative protocols to implant the probes by introducing cutting-edge devices for human neurosurgery. To evaluate the feasibility of intrasulcal ECoG, we conducted electrophysiological recording and stimulation experiments. First, we inserted parts of the Parylene-C-based probe into the superior temporal sulcus to compare visually evoked ECoG responses from the ventral bank of the sulcus with those from the surface of the inferior temporal cortex. Analyses of power spectral density and signal-to-noise ratio revealed that the quality of the ECoG signal was comparable inside and outside of the sulcus. Histological examination revealed no obvious physical damage in the implanted areas. Second, we placed a modified silicone ECoG probe into the central sulcus and also on the surface of the precentral gyrus for stimulation. Thresholds for muscle twitching were significantly lower during intrasulcal stimulation compared to gyral stimulation. These results demonstrate the feasibility of intrasulcal ECoG in macaques. The novel methodology proposed here opens up a new frontier in neuroscience research, enabling the direct measurement and manipulation of electrical activity in the whole brain. PMID:21647392
Smart concrete slabs with embedded tubular PZT transducers for damage detection
NASA Astrophysics Data System (ADS)
Gao, Weihang; Huo, Linsheng; Li, Hongnan; Song, Gangbing
2018-02-01
The objective of this study is to develop a new concept and methodology of smart concrete slab (SCS) with embedded tubular lead zirconate titanate transducer array for image based damage detection. Stress waves, as the detecting signals, are generated by the embedded tubular piezoceramic transducers in the SCS. Tubular piezoceramic transducers are used due to their capacity of generating radially uniform stress waves in a two-dimensional concrete slab (such as bridge decks and walls), increasing the monitoring range. A circular type delay-and-sum (DAS) imaging algorithm is developed to image the active acoustic sources based on the direct response received by each sensor. After the scattering signals from the damage are obtained by subtracting the baseline response of the concrete structures from those of the defective ones, the elliptical type DAS imaging algorithm is employed to process the scattering signals and reconstruct the image of the damage. Finally, two experiments, including active acoustic source monitoring and damage imaging for concrete structures, are carried out to illustrate and demonstrate the effectiveness of the proposed method.
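The circular-type delay-and-sum idea, back-projecting each sensor trace along travel-time delays so that energy accumulates at the acoustic source, can be sketched on a toy 2-D slab. The geometry, sampling rate, and wave speed below are hypothetical, and the traces are idealized one-sample impulses rather than measured stress waves:

```python
import math

def das_image(sensors, source, fs, c, grid, nsamp=400):
    """Circular-type delay-and-sum: synthesize impulsive arrivals from an
    acoustic source, then back-project; the image peaks at the source."""
    traces = []
    for s in sensors:
        t = math.dist(s, source) / c          # true time of flight
        tr = [0.0] * nsamp
        tr[round(t * fs)] = 1.0               # one-sample arrival
        traces.append(tr)
    img = {}
    for p in grid:                            # sum each trace at the delay
        val = 0.0                             # implied by candidate pixel p
        for s, tr in zip(sensors, traces):
            k = round(math.dist(s, p) / c * fs)
            if k < nsamp:
                val += tr[k]
        img[p] = val
    return img

sensors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # 1 m slab corners
source = (0.3, 0.6)
grid = [(x / 10, y / 10) for x in range(11) for y in range(11)]
img = das_image(sensors, source, fs=1e6, c=4000.0, grid=grid)
print(max(img, key=img.get))
```

The elliptical variant for damage imaging works the same way, except the delay for pixel p is the transmitter-to-p plus p-to-receiver travel time applied to the baseline-subtracted scattering signals.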
Rate change detection of frequency modulated signals: developmental trends.
Cohen-Mimran, Ravit; Sapir, Shimon
2011-08-26
The aim of this study was to examine developmental trends in rate change detection of auditory rhythmic signals (repetitive sinusoidally frequency modulated tones). Two groups of children (9-10 years old and 11-12 years old) and one group of young adults performed a rate change detection (RCD) task using three types of stimuli. The rate of stimulus modulation was either constant (CR), raised by 1 Hz in the middle of the stimulus (RR1), or raised by 2 Hz in the middle of the stimulus (RR2). Performance on the RCD task improved significantly with age. Also, the different stimuli showed different developmental trajectories. When the RR2 stimulus was used, results showed adult-like performance by the age of 10 years, but when the RR1 stimulus was used, performance continued to improve beyond 12 years of age. Rate change detection of repetitive sinusoidally frequency modulated tones thus shows protracted development beyond the age of 12 years. Given evidence for abnormal processing of auditory rhythmic signals in neurodevelopmental conditions, such as dyslexia, the present methodology might help delineate the nature of these conditions.
Coutinho, Eduardo; Gentsch, Kornelia; van Peer, Jacobien; Scherer, Klaus R.; Schuller, Björn W.
2018-01-01
In the present study, we applied Machine Learning (ML) methods to identify psychobiological markers of cognitive processes involved in the process of emotion elicitation as postulated by the Component Process Model (CPM). In particular, we focused on the automatic detection of five appraisal checks—novelty, intrinsic pleasantness, goal conduciveness, control, and power—in electroencephalography (EEG) and facial electromyography (EMG) signals. We also evaluated the effects on classification accuracy of averaging the raw physiological signals over different numbers of trials, and whether the use of minimal sets of EEG channels localized over specific scalp regions of interest are sufficient to discriminate between appraisal checks. We demonstrated the effectiveness of our approach on two data sets obtained from previous studies. Our results show that novelty and power appraisal checks can be consistently detected in EEG signals above chance level (binary tasks). For novelty, the best classification performance in terms of accuracy was achieved using features extracted from the whole scalp, and by averaging across 20 individual trials in the same experimental condition (UAR = 83.5 ± 4.2; N = 25). For power, the best performance was obtained by using the signals from four pre-selected EEG channels averaged across all trials available for each participant (UAR = 70.6 ± 5.3; N = 24). Together, our results indicate that accurate classification can be achieved with a relatively small number of trials and channels, but that averaging across a larger number of individual trials is beneficial for the classification for both appraisal checks. We were not able to detect any evidence of the appraisal checks under study in the EMG data. 
The proposed methodology is a promising tool for the study of the psychophysiological mechanisms underlying emotional episodes, and their application to the development of computerized tools (e.g., Brain-Computer Interface) for the study of cognitive processes involved in emotions. PMID:29293572
Intensity-based masking: A tool to improve functional connectivity results of resting-state fMRI.
Peer, Michael; Abboud, Sami; Hertz, Uri; Amedi, Amir; Arzy, Shahar
2016-07-01
Seed-based functional connectivity (FC) of resting-state functional MRI data is a widely used methodology, enabling the identification of functional brain networks in health and disease. Based on signal correlations across the brain, FC measures are highly sensitive to noise. A somewhat neglected source of noise is the fMRI signal attenuation found in cortical regions in close vicinity to sinuses and air cavities, mainly in the orbitofrontal, anterior frontal and inferior temporal cortices. BOLD signal recorded at these regions suffers from dropout due to susceptibility artifacts, resulting in an attenuated signal with reduced signal-to-noise ratio in as many as 10% of cortical voxels. Nevertheless, signal attenuation is largely overlooked during FC analysis. Here we first demonstrate that signal attenuation can significantly influence FC measures by introducing false functional correlations and diminishing existing correlations between brain regions. We then propose a method for the detection and removal of the attenuated signal ("intensity-based masking") by fitting a Gaussian-based model to the signal intensity distribution and calculating an intensity threshold tailored per subject. Finally, we apply our method on real-world data, showing that it diminishes false correlations caused by signal dropout, and significantly improves the ability to detect functional networks in single subjects. Furthermore, we show that our method increases inter-subject similarity in FC, enabling reliable distinction of different functional networks. We propose to include the intensity-based masking method as a common practice in the pre-processing of seed-based functional connectivity analysis, and provide software tools for the computation of intensity-based masks on fMRI data. Hum Brain Mapp 37:2407-2418, 2016. © 2016 Wiley Periodicals, Inc.
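The per-subject thresholding idea can be illustrated with a toy intensity distribution: fit the dominant (non-attenuated) mode and mask voxels falling well below it. The synthetic intensities, the median/upper-half fit, and the 2-sigma cutoff are assumptions for illustration; the paper's actual Gaussian-model fit may differ.

```python
# Sketch of intensity-based masking on synthetic voxel intensities:
# 90% "normal" voxels plus 10% dropout voxels with attenuated signal.
import random
import statistics

random.seed(0)
normal = [random.gauss(1000.0, 50.0) for _ in range(900)]
dropout = [random.gauss(300.0, 40.0) for _ in range(100)]
intensities = normal + dropout

# Crude fit of the dominant mode: the median and the spread of the
# upper half are barely affected by the low-intensity dropout tail.
med = statistics.median(intensities)
upper = [v for v in intensities if v >= med]
sigma = statistics.stdev(upper)
threshold = med - 2.0 * sigma   # assumed per-subject cutoff

mask = [v >= threshold for v in intensities]
kept = sum(mask)                # voxels retained for FC analysis
```

With these assumed parameters the cutoff lands between the two modes, so all dropout voxels are excluded while most normal voxels survive.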
NASA Astrophysics Data System (ADS)
Provost, F.; Malet, J. P.; Hibert, C.; Doubre, C.
2017-12-01
The Super-Sauze landslide is a clay-rich landslide located in the Southern French Alps. The landslide exhibits a complex pattern of deformation: a large number of rockfalls are observed in the 100 m high main scarp, while the upper part of the accumulated material deforms mainly by shearing along stable in-situ crests. Several fissures are locally observed. The shallowest layer of the accumulated material tends to behave in a brittle manner but may undergo fluidization and/or rapid acceleration. Previous studies have demonstrated the presence of rich endogenous micro-seismicity associated with the deformation of the landslide. However, the lack of long-term seismic records and suitable processing chains prevented a full interpretation of the links between the external forcings, the deformation and the recorded seismic signals. Since 2013, two permanent seismic arrays have been installed in the upper part of the landslide. Here we present the methodology adopted to process this dataset. The processing chain consists of a set of automated methods for robust detection, classification and location of the recorded seismicity. Thousands of events are detected and further automatically classified. The classification method is based on the description of the signal through attributes (e.g. waveform, spectral content properties). These attributes are used as inputs to classify the signal, using a Random Forest machine-learning algorithm, into four classes: endogenous micro-quakes, rockfalls, regional earthquakes and natural/anthropogenic noises. The endogenous landslide sources (i.e. micro-quakes and rockfalls) are further located. The location method is adapted to the type of event. The micro-quakes are located with a 3D velocity model derived from a seismic tomography campaign and an optimization of the first arrival picking with the inter-trace correlation of the P-wave arrivals. 
The rockfalls are located by optimizing the inter-trace correlation of the whole signal. We analyze the temporal relationships of the endogenous seismic events with rainfall and landslide displacements. Sub-families of landslide micro-quakes are also identified and an interpretation of their source mechanism is proposed from their signal properties and spatial location.
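The attribute-based classification stage can be sketched as follows: each waveform is reduced to a handful of descriptors (peak amplitude, duration above a fraction of the peak, zero-crossing rate as a crude frequency proxy) that a classifier such as a Random Forest could consume as features. The two synthetic signals and the attribute set are illustrative assumptions, not the study's actual attribute list.

```python
# Attribute extraction for event classification: reduce a waveform to
# simple scalar descriptors. Signals are synthetic stand-ins only.
import math

def attributes(sig, fs):
    peak = max(abs(v) for v in sig)
    # samples above 10% of peak as a crude duration measure
    above = sum(1 for v in sig if abs(v) > 0.1 * peak)
    zc = sum(1 for a, b in zip(sig, sig[1:]) if a * b < 0)
    return {
        "peak": peak,
        "duration_s": above / fs,
        "zcr_hz": zc * fs / (2 * len(sig)),  # ~dominant frequency
    }

FS = 500.0
N = 500
# High-frequency short burst (micro-quake-like) vs low-frequency,
# long-lasting signal (rockfall-like) -- purely synthetic examples.
quake = [math.exp(-10 * t) * math.sin(2 * math.pi * 50 * t)
         for t in (i / FS for i in range(N))]
rockfall = [math.sin(2 * math.pi * 5 * t)
            for t in (i / FS for i in range(N))]

a_q = attributes(quake, FS)
a_r = attributes(rockfall, FS)
```

Vectors of such attributes, computed per detection, are the kind of input a Random Forest separates into micro-quake, rockfall, earthquake, and noise classes.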
Instantaneous Wavenumber Estimation for Damage Quantification in Layered Plate Structures
NASA Technical Reports Server (NTRS)
Mesnil, Olivier; Leckey, Cara A. C.; Ruzzene, Massimo
2014-01-01
This paper illustrates the application of instantaneous and local wavenumber damage quantification techniques for high frequency guided wave interrogation. The proposed methodologies can be considered as first steps towards a hybrid structural health monitoring/nondestructive evaluation (SHM/NDE) approach for damage assessment in composites. The challenges and opportunities related to the considered type of interrogation and signal processing are explored through the analysis of numerical data obtained via elastodynamic finite integration technique (EFIT) simulations of damage in CFRP plates. Realistic damage configurations are modeled from x-ray CT scan data of plates subjected to actual impacts, in order to accurately predict wave-damage interactions in terms of scattering and mode conversions. Simulation data is utilized to enhance the information provided by instantaneous and local wavenumbers and mitigate the complexity related to the multi-modal content of the plate response. Signal processing strategies considered for this purpose include modal decoupling through filtering in the frequency/wavenumber domain, the combination of displacement components, and the exploitation of polarization information for the various modes as evaluated through the dispersion analysis of the considered laminate lay-up sequence. The results presented assess the effectiveness of the proposed wavefield processing techniques as a hybrid SHM/NDE technique for damage detection and quantification in composite, plate-like structures.
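A one-dimensional toy version of local-wavenumber estimation illustrates the principle: where the plate is damaged (e.g. locally thinned), the guided-wave wavenumber shifts, and a sliding-window estimate of spatial frequency maps the anomaly. The geometry, wavenumber values, and zero-crossing estimator below are assumptions, far simpler than the wavefield processing used in the paper.

```python
# Toy 1-D local-wavenumber damage map: the spatial frequency of a
# guided-wave field doubles over an assumed "damaged" interval.
import math

DX = 0.001                             # spatial step, m
N = 1000                               # 1 m scan line
K_HEALTHY, K_DAMAGED = 200.0, 400.0    # rad/m, assumed values

def wavenumber_at(i):
    return K_DAMAGED if 400 <= i < 600 else K_HEALTHY

# Build the field with a phase accumulating the local wavenumber.
phase, field = 0.0, []
for i in range(N):
    field.append(math.sin(phase))
    phase += wavenumber_at(i) * DX

def local_k(i, w=100):
    """Local wavenumber from zero-crossing density in a sliding window."""
    seg = field[max(0, i - w // 2): i + w // 2]
    zc = sum(1 for a, b in zip(seg, seg[1:]) if a * b < 0)
    # each zero crossing is half a spatial cycle: k = pi * zc / length
    return math.pi * zc / (len(seg) * DX)

k_mid = local_k(500)   # inside the damaged zone
k_edge = local_k(100)  # healthy region
```

The estimated wavenumber roughly doubles inside the damaged interval, which is the contrast a local-wavenumber damage map exploits.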
A Comparative Survey of Methods for Remote Heart Rate Detection From Frontal Face Videos
Wang, Chen; Pun, Thierry; Chanel, Guillaume
2018-01-01
Remotely measuring physiological activity can provide substantial benefits for both medical and affective computing applications. Recent research has proposed different methodologies for the unobtrusive detection of heart rate (HR) using human face recordings. These methods are based on subtle color changes or motions of the face due to cardiovascular activity, which are invisible to the human eye but can be captured by digital cameras. Several approaches based on signal processing and machine learning have been proposed. However, these methods were evaluated on different datasets, so there is no consensus on their relative performance. In this article, we describe and evaluate several methods defined in the literature, from 2008 to the present day, for the remote detection of HR using human face recordings. The general HR processing pipeline is divided into three stages: face video processing, face blood volume pulse (BVP) signal extraction, and HR computation. Approaches presented in the paper are classified and grouped according to each stage. At each stage, algorithms are analyzed and compared based on their performance using the public database MAHNOB-HCI. The results reported in this article are limited to the MAHNOB-HCI dataset. Results show that the extracted facial skin area contains more BVP information. Blind source separation and peak detection methods are more robust to head motion when estimating HR. PMID:29765940
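The last pipeline stage (HR computation from an extracted BVP trace) can be sketched with simple peak picking; the 30 fps rate and the clean 72 bpm sinusoid below are assumed stand-ins for a camera-derived signal.

```python
# HR computation from a BVP trace via peak detection (toy example).
import math

FS = 30.0            # typical camera frame rate, Hz (assumed)
HR_TRUE = 72.0       # beats per minute -> 1.2 Hz pulse
bvp = [math.sin(2 * math.pi * (HR_TRUE / 60.0) * i / FS) for i in range(300)]

# Local-maximum peak picking over the 10 s trace
peaks = [i for i in range(1, len(bvp) - 1)
         if bvp[i - 1] < bvp[i] >= bvp[i + 1]]

# HR from the mean inter-beat interval (IBI)
ibis = [(b - a) / FS for a, b in zip(peaks, peaks[1:])]
hr_bpm = 60.0 / (sum(ibis) / len(ibis))
```

Real traces are noisy and motion-corrupted, which is why the surveyed methods add skin-region selection, blind source separation, and filtering before this step.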
Das, Saptarshi; Pan, Indranil; Das, Shantanu
2015-09-01
An optimal trade-off design for a fractional order (FO)-PID controller is proposed with a Linear Quadratic Regulator (LQR) based technique using two conflicting time domain objectives. A class of delayed FO systems with a single non-integer order element, exhibiting both sluggish and oscillatory open loop responses, have been controlled here. The FO time delay processes are handled within a multi-objective optimization (MOO) formalism of LQR based FOPID design. A comparison is made between two contemporary approaches of stabilizing time-delay systems within LQR. The MOO control design methodology yields the Pareto optimal trade-off solutions between the tracking performance and the total variation (TV) of the control signal. Tuning rules are formed for the optimal LQR-FOPID controller parameters, using the median of the non-dominated Pareto solutions, to handle delayed FO processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
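The two conflicting objectives named above (tracking performance versus total variation of the control signal) can be illustrated on a toy loop: a proportional controller on a first-order lag, where raising the gain improves tracking but increases control effort. The plant, gains, and discretization are assumptions; this is not the paper's LQR-FOPID design.

```python
# Compute the two competing objectives for a discrete P-controlled
# first-order plant dy/dt = -y + u (assumed toy system).
def simulate(kp, steps=200, dt=0.05):
    y, errs, us = 0.0, [], []
    for _ in range(steps):
        e = 1.0 - y                 # unit step reference
        u = kp * e                  # proportional control law
        y += dt * (-y + u)          # forward-Euler plant update
        errs.append(abs(e))
        us.append(u)
    iae = sum(errs) * dt                              # tracking objective
    tv = sum(abs(b - a) for a, b in zip(us, us[1:]))  # control-effort (TV)
    return iae, tv

lo_gain = simulate(1.0)
hi_gain = simulate(10.0)
```

Sweeping the gain traces out exactly the kind of trade-off curve that the MOO formalism resolves into a Pareto front.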
ESARR: enhanced situational awareness via road sign recognition
NASA Astrophysics Data System (ADS)
Perlin, V. E.; Johnson, D. B.; Rohde, M. M.; Lupa, R. M.; Fiorani, G.; Mohammad, S.
2010-04-01
The enhanced situational awareness via road sign recognition (ESARR) system provides vehicle position estimates in the absence of a GPS signal via automated processing of roadway fiducials (primarily directional road signs). Sign images are detected and extracted from a vehicle-mounted camera system, and preprocessed and read via a custom optical character recognition (OCR) system specifically designed to cope with low quality input imagery. Vehicle motion and 3D scene geometry estimation enable efficient and robust sign detection with low false alarm rates. Multi-level text processing coupled with GIS database validation enables effective interpretation even of extremely low resolution, low contrast sign images. This paper reports on ESARR development progress, including the design and architecture, image processing framework, localization methodologies, and results to date. Highlights of the real-time vehicle-based directional road-sign detection and interpretation system are described, along with the challenges encountered and the progress in overcoming them.
Numerical simulation of electron beam welding with beam oscillations
NASA Astrophysics Data System (ADS)
Trushnikov, D. N.; Permyakov, G. L.
2017-02-01
This research examines the process of electron-beam welding in a keyhole mode with the use of beam oscillations. We study the impact of various beam oscillations and their parameters on the shape of the keyhole, the heat and mass transfer processes, and weld parameters, in order to develop methodological recommendations. A numerical three-dimensional mathematical model of electron beam welding is presented. The model was developed on the basis of a heat conduction equation and a Navier-Stokes equation, taking into account phase transitions at the solid-liquid interface and thermocapillary convection (the Marangoni effect). The shape of the keyhole is determined from experimental data on the parameters of the secondary signal by using the method of synchronous accumulation. Calculations of the thermal and hydrodynamic processes were carried out on a computer cluster using the simulation package COMSOL Multiphysics.
NASA Astrophysics Data System (ADS)
Attention is given to aspects of quality assurance methodologies in development life cycles, optical intercity transmission systems, multiaccess protocols, system and technology aspects of regional/domestic satellites, advances in SSB-AM radio transmission over terrestrial and satellite networks, and development environments for telecommunications systems. Other subjects studied concern business communication networks for voice and data, VLSI in local networks and communication protocols, product evaluation and support, an update regarding Videotex, topics in communication theory, topics in radio propagation, a status report regarding societal effects of technology in the workplace, digital image processing, and adaptive signal processing for communications. The management of the reliability function in the development process is considered, along with Giga-bit technologies for long distance, large capacity optical transmission equipment, the application of gallium arsenide analog and digital integrated circuits for high-speed fiber optical communications, and a simple algorithm for image data coding.
The emotion seen in a face can be a methodological artifact: The process of elimination hypothesis.
DiGirolamo, Marissa A; Russell, James A
2017-04-01
The claim that certain facial expressions signal certain specific emotions has been supported by high observer agreement in labeling the emotion predicted for that expression. Our hypothesis was that, with a method common to the field, high observer agreement can be achieved through a process of elimination: as participants move from trial to trial and encounter a type of expression not previously encountered in the experiment, they tend to eliminate labels they have already associated with expressions seen on previous trials; they then select among labels not previously used. Seven experiments (total N = 1,068) showed that the amount of agreement can be altered through a process of elimination. One facial expression not previously theorized to signal any emotion was consensually labeled as disgusted (76%), annoyed (85%), playful (89%), and mischievous (96%). Three quite different facial expressions were labeled nonplussed (82%, 93%, and 82%). A prototypical sad expression was labeled disgusted (55%), and a prototypical fear expression was labeled surprised (55%). A facial expression was labeled with a made-up word (tolen; 53%). Similar results were obtained both in a context focused on demonstrating a process of elimination and in one similar to a commonly used method, with 4 target expressions embedded with other expressions in 24 randomly ordered trials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Tucker, Brian J.; Diaz, Aaron A.; Eckenrode, Brian A.
2006-03-01
Government agencies and homeland security related organizations have identified the need to develop and establish a wide range of unprecedented capabilities for providing scientific and technical forensic services to investigations involving hazardous chemical, biological, and radiological materials, including extremely dangerous chemical and biological warfare agents. Pacific Northwest National Laboratory (PNNL) has developed a portable, hand-held acoustic inspection prototype for hazardous materials that provides noninvasive container interrogation and material identification capabilities using nondestructive ultrasonic velocity and attenuation measurements. Due to the wide variety of fluids as well as container sizes and materials encountered in various law enforcement inspection activities, the need for high measurement sensitivity and advanced ultrasonic measurement techniques was identified. The prototype was developed using a versatile electronics platform, advanced ultrasonic wave propagation methods, and advanced signal processing techniques. This paper primarily focuses on the ultrasonic measurement methods and signal processing techniques incorporated into the prototype. High bandwidth ultrasonic transducers combined with an advanced pulse compression technique allowed researchers to 1) obtain high signal-to-noise ratios and 2) obtain accurate and consistent time-of-flight (TOF) measurements through a variety of highly attenuative containers and fluid media. Results of work conducted in the laboratory have demonstrated that the prototype measurement technique also provided information regarding container properties, which will be utilized in future container-independent measurements of hidden liquids.
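The pulse-compression idea for TOF measurement can be sketched as a matched-filter cross-correlation: correlate the received trace with the transmitted chirp and read the TOF off the correlation peak, which concentrates the chirp's energy and lifts it above the noise. The chirp parameters, noise level, and the 80-sample true delay are illustrative assumptions, not the instrument's settings.

```python
# Pulse compression via cross-correlation with a transmitted chirp
# (matched filtering); all parameters are assumed for illustration.
import math
import random

random.seed(3)

FS = 1_000_000.0                   # 1 MHz sampling, assumed
L = 200                            # chirp length in samples
chirp = [math.sin(2 * math.pi * (50_000 + 500 * i) * i / FS)
         for i in range(L)]        # linear frequency sweep

DELAY = 80                         # true time of flight, in samples
rx = [0.0] * 600
for i, c in enumerate(chirp):
    rx[DELAY + i] += 0.3 * c       # attenuated echo buried in noise
rx = [v + random.gauss(0.0, 0.2) for v in rx]

def xcorr_at(lag):
    """Correlation of the received trace with the chirp at one lag."""
    return sum(c * rx[lag + i] for i, c in enumerate(chirp))

best_lag = max(range(len(rx) - L), key=xcorr_at)
tof_us = best_lag / FS * 1e6       # time of flight in microseconds
```

The correlation peak stands well above the per-lag noise even though the raw echo amplitude is comparable to the noise, which is the point of pulse compression in attenuative media.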
Caging and Photoactivation in Single-Molecule Förster Resonance Energy Transfer Experiments
2017-01-01
Caged organic fluorophores are established tools for localization-based super-resolution imaging. Their use relies on reversible deactivation of standard organic fluorophores by chemical reduction or commercially available caged dyes with ON switching of the fluorescent signal by ultraviolet (UV) light. Here, we establish caging of cyanine fluorophores and caged rhodamine dyes, i.e., chemical deactivation of fluorescence, for single-molecule Förster resonance energy transfer (smFRET) experiments with freely diffusing molecules. They allow temporal separation and sorting of multiple intramolecular donor–acceptor pairs during solution-based smFRET. We use this “caged FRET” methodology for the study of complex biochemical species such as multisubunit proteins or nucleic acids containing more than two fluorescent labels. Proof-of-principle experiments and a characterization of the uncaging process in the confocal volume are presented. These reveal that chemical caging and UV reactivation allow temporal uncoupling of convoluted fluorescence signals from, e.g., multiple spectrally similar donor or acceptor molecules on nucleic acids. We also use caging without UV reactivation to remove unwanted overlabeled species in experiments with the homotrimeric membrane transporter BetP. We finally outline further possible applications of the caged FRET methodology, such as the study of weak biochemical interactions, which are otherwise impossible with diffusion-based smFRET techniques because of the required low concentrations of fluorescently labeled biomolecules. PMID:28362086
EEG-Informed fMRI: A Review of Data Analysis Methods
Abreu, Rodolfo; Leal, Alberto; Figueiredo, Patrícia
2018-01-01
The simultaneous acquisition of electroencephalography (EEG) with functional magnetic resonance imaging (fMRI) is a very promising non-invasive technique for the study of human brain function. Despite continuous improvements, it remains a challenging technique, and a standard methodology for data analysis is yet to be established. Here we review the methodologies that are currently available to address the challenges at each step of the data analysis pipeline. We start by surveying methods for pre-processing both EEG and fMRI data. On the EEG side, we focus on the correction for several MR-induced artifacts, particularly the gradient and pulse artifacts, as well as other sources of EEG artifacts. On the fMRI side, we consider image artifacts induced by the presence of EEG hardware inside the MR scanner, and the contamination of the fMRI signal by physiological noise of non-neuronal origin, including a review of several approaches to model and remove it. We then provide an overview of the approaches specifically employed for the integration of EEG and fMRI when using EEG to predict the blood oxygenation level dependent (BOLD) fMRI signal, the so-called EEG-informed fMRI integration strategy, the most commonly used strategy in EEG-fMRI research. Finally, we systematically review methods used for the extraction of EEG features reflecting neuronal phenomena of interest. PMID:29467634
NASA Astrophysics Data System (ADS)
Paziewski, Jacek; Sieradzki, Rafal; Baryla, Radoslaw
2018-03-01
This paper provides the methodology and performance assessment of multi-GNSS signal processing for the detection of small-scale high-rate dynamic displacements. For this purpose, we used methods of relative (RTK) and absolute (PPP) positioning, and a novel direct signal processing approach. The first two methods are recognized as providing accurate information on position in many navigation and surveying applications. The latter is an innovative method for dynamic displacement determination with the use of GNSS phase signal processing. This method is based on the developed functional model with parametrized epoch-wise topocentric relative coordinates derived from filtered GNSS observations. Current regular kinematic PPP positioning, as well as medium/long range RTK, may not offer coordinate estimates with subcentimeter precision. Thus, extended processing strategies of absolute and relative GNSS positioning have been developed and applied for displacement detection. The study also aimed to comparatively analyze the developed methods, as well as to analyze the impact of combined GPS and BDS processing and the dependence of the results of the relative methods on the baseline length. All the methods were implemented with in-house developed software allowing for high-rate precise GNSS positioning and signal processing. The phase and pseudorange observations collected at a rate of 50 Hz during the field test served as the experiment’s data set. The displacements at the rover station were triggered in the horizontal plane using a device that was designed and constructed to ensure a periodic motion of the GNSS antenna with an amplitude of ~3 cm and a frequency of ~4.5 Hz. Finally, medium-range RTK, PPP, and the direct phase observation processing method demonstrated the capability of providing reliable and consistent results, with the precision of the determined dynamic displacements at the millimeter level. 
Specifically, the research shows that the standard deviation of the displacement residuals obtained as the difference between a benchmark-ultra-short baseline RTK solution and selected scenarios ranged between 1.1 and 3.4 mm. At the same time, the differences in the mean amplitude of the oscillations derived from the established scenarios did not exceed 1.3 mm, whereas the frequency of the motion detected with the use of Fourier transformation was the same.
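The frequency check described above can be sketched with a plain discrete Fourier transform of a displacement series: the strongest positive-frequency bin recovers the motion frequency. The 3 cm amplitude, 4.5 Hz motion, and 50 Hz rate mirror the experiment, but the clean synthetic series and the brute-force DFT are simplifying assumptions.

```python
# Recover the dominant oscillation frequency of a displacement series
# with a direct DFT; the noise-free series is an assumed idealization.
import cmath
import math

FS = 50.0          # GNSS observation rate, Hz
N = 500            # 10 s of data
F_MOTION = 4.5     # Hz, as generated by the device in the experiment
disp = [0.03 * math.sin(2 * math.pi * F_MOTION * i / FS) for i in range(N)]

def dft_mag(x, k):
    """Magnitude of DFT bin k of sequence x."""
    return abs(sum(v * cmath.exp(-2j * cmath.pi * k * i / len(x))
                   for i, v in enumerate(x)))

# Search the positive-frequency bins for the strongest component.
k_best = max(range(1, N // 2), key=lambda k: dft_mag(disp, k))
f_best = k_best * FS / N
```

On real residual series, the same spectral peak comparison is what lets the RTK, PPP, and direct-processing solutions be checked for agreement in detected motion frequency.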
An automated approach towards detecting complex behaviours in deep brain oscillations.
Mace, Michael; Yousif, Nada; Naushahi, Mohammad; Abdullah-Al-Mamun, Khondaker; Wang, Shouyan; Nandi, Dipankar; Vaidyanathan, Ravi
2014-03-15
Extracting event-related potentials (ERPs) from neurological rhythms is of fundamental importance in neuroscience research. Standard ERP techniques typically require the associated ERP waveform to have low variance, be shape and latency invariant and require many repeated trials. Additionally, the non-ERP part of the signal needs to be sampled from an uncorrelated Gaussian process. This limits methods of analysis to quantifying simple behaviours and movements only when multi-trial data-sets are available. We introduce a method for automatically detecting events associated with complex or large-scale behaviours, where the ERP need not conform to the aforementioned requirements. The algorithm is based on the calculation of a detection contour and adaptive threshold. These are combined using logical operations to produce a binary signal indicating the presence (or absence) of an event with the associated detection parameters tuned using a multi-objective genetic algorithm. To validate the proposed methodology, deep brain signals were recorded from implanted electrodes in patients with Parkinson's disease as they participated in a large movement-based behavioural paradigm. The experiment involved bilateral recordings of local field potentials from the sub-thalamic nucleus (STN) and pedunculopontine nucleus (PPN) during an orientation task. After tuning, the algorithm is able to extract events achieving training set sensitivities and specificities of [87.5 ± 6.5, 76.7 ± 12.8, 90.0 ± 4.1] and [92.6 ± 6.3, 86.0 ± 9.0, 29.8 ± 12.3] (mean ± 1 std) for the three subjects, averaged across the four neural sites. Furthermore, the methodology has the potential for utility in real-time applications as only a single-trial ERP is required. Copyright © 2013 Elsevier B.V. All rights reserved.
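The core detection logic (a detection contour compared against an adaptive threshold, combined into a binary event indicator) can be sketched as follows; the window lengths, the 2.5x threshold rule, and the synthetic trace are illustrative assumptions, not the tuned parameters found by the genetic algorithm.

```python
# Detection contour vs. adaptive threshold, producing a binary
# event signal from a single synthetic trial.
import random

random.seed(1)

# Background rhythm plus one large event around sample 300.
sig = [random.gauss(0.0, 1.0) for _ in range(600)]
for i in range(300, 320):
    sig[i] += 8.0

def rolling_mean(x, w):
    """Causal rolling mean with window w (shorter at the start)."""
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - w + 1):i + 1]
        out.append(sum(seg) / len(seg))
    return out

rect = [abs(v) for v in sig]                 # rectified signal
contour = rolling_mean(rect, 10)             # fast detection contour
baseline = rolling_mean(rect, 200)           # slow adaptive level
binary = [1 if c > 2.5 * b else 0            # logical comparison
          for c, b in zip(contour, baseline)]

event_samples = [i for i, b in enumerate(binary) if b]
```

In the paper this contour/threshold pair is tuned per subject by a multi-objective genetic algorithm; here the windows and the 2.5 factor are fixed by hand.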
A novel coupling of noise reduction algorithms for particle flow simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimoń, M.J. (James Weir Fluids Lab, Mechanical and Aerospace Engineering Department, The University of Strathclyde, Glasgow G1 1XJ; E-mail: malgorzata.zimon@stfc.ac.uk); Reese, J.M.
2016-09-15
Proper orthogonal decomposition (POD) and its extension based on time-windows have been shown to greatly improve the effectiveness of recovering smooth ensemble solutions from noisy particle data. However, to successfully de-noise any molecular system, a large number of measurements still need to be provided. In order to achieve a better efficiency in processing time-dependent fields, we have combined POD with a well-established signal processing technique, wavelet-based thresholding. In this novel hybrid procedure, the wavelet filtering is applied within the POD domain and referred to as WAVinPOD. The algorithm exhibits promising results when applied to both synthetically generated signals and particle data. In this work, the simulations compare the performance of our new approach with standard POD or wavelet analysis in extracting smooth profiles from noisy velocity and density fields. Numerical examples include molecular dynamics and dissipative particle dynamics simulations of unsteady force- and shear-driven liquid flows, as well as a phase separation phenomenon. Simulation results confirm that WAVinPOD preserves the dimensionality reduction obtained using POD, while improving its filtering properties through the sparse representation of data in a wavelet basis. This paper shows that WAVinPOD outperforms the other estimators for both synthetically generated signals and particle-based measurements, achieving a higher signal-to-noise ratio from a smaller number of samples. The new filtering methodology offers significant computational savings, particularly for multi-scale applications seeking to couple continuum information with atomistic models. It is the first time that a rigorous analysis has compared de-noising techniques for particle-based fluid simulations.
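The wavelet-thresholding ingredient of the hybrid scheme can be shown in isolation: a single-level Haar transform with soft thresholding of the detail coefficients de-noises a step signal. The POD stage is omitted and the threshold is chosen ad hoc, so this is a sketch of one building block, not the authors' full WAVinPOD procedure.

```python
# Single-level Haar decomposition with soft thresholding of the
# detail coefficients; signal and threshold are assumed for illustration.
import random

random.seed(2)

N = 256
clean = [1.0 if N // 4 <= i < 3 * N // 4 else 0.0 for i in range(N)]
noisy = [c + random.gauss(0.0, 0.3) for c in clean]

# Forward Haar transform (one level): pairwise averages and differences
S2 = 2 ** 0.5
approx = [(noisy[2 * i] + noisy[2 * i + 1]) / S2 for i in range(N // 2)]
detail = [(noisy[2 * i] - noisy[2 * i + 1]) / S2 for i in range(N // 2)]

# Soft-threshold the detail coefficients (threshold chosen ad hoc)
T = 0.5
detail = [max(abs(d) - T, 0.0) * (1 if d > 0 else -1) for d in detail]

# Inverse Haar transform
denoised = []
for a, d in zip(approx, detail):
    denoised += [(a + d) / S2, (a - d) / S2]

mse_before = sum((n - c) ** 2 for n, c in zip(noisy, clean)) / N
mse_after = sum((dn - c) ** 2 for dn, c in zip(denoised, clean)) / N
```

In WAVinPOD this kind of thresholding is applied to the POD temporal coefficients rather than to the raw field, which is what preserves the dimensionality reduction while sparsifying the noise.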
Probing heterotrimeric G protein activation: applications to biased ligands
Denis, Colette; Saulière, Aude; Galandrin, Ségolène; Sénard, Jean-Michel; Galés, Céline
2012-01-01
Cell surface G protein-coupled receptors (GPCRs) drive numerous signaling pathways involved in the regulation of a broad range of physiologic processes. Today, they represent the largest target for modern drug development, with potential application in all clinical fields. Recently, the concept of “ligand-directed trafficking” has led to a conceptual revolution in pharmacological theory, thus opening new avenues for drug discovery. Accordingly, GPCRs do not function as simple on-off switches but rather as filters capable of selecting the activation of specific signals and thus generating textured responses to ligands, a phenomenon often referred to as ligand-biased signaling. One challenging task today remains the optimization of pharmacological assays with increased sensitivity, so as to better appreciate the inherent texture of ligand responses. However, considering that a single receptor has pleiotropic signaling properties and that each signal can crosstalk at different levels, biased activity remains difficult to evaluate. One strategy to overcome these limitations is to examine the initial steps following receptor activation. Even though some G protein-independent functions have recently been described, heterotrimeric G protein activation remains a general hallmark of all GPCR families and the first cellular event subsequent to agonist binding to the receptor. Herein, we review the different methodologies classically used or recently developed to monitor G protein activation, and discuss them in the context of G protein-biased ligands. PMID:22229559
Stability and performance analysis of a jump linear control system subject to digital upsets
NASA Astrophysics Data System (ADS)
Wang, Rui; Sun, Hui; Ma, Zhen-Yang
2015-04-01
This paper focuses on the methodology for analyzing the stability and the corresponding tracking performance of a closed-loop digital jump linear control system with a stochastic switching signal. The method is applied to a flight control system. A distributed recoverable platform is implemented on the flight control system and subjected to independent digital upsets. The upset processes are used to simulate the effects of electromagnetic environments. Specifically, the paper presents scenarios in which the upset process is directly injected into the distributed flight control system, modeled by independent Markov upset processes and by independent and identically distributed (IID) processes. Both a theoretical performance analysis and simulation modelling are presented in detail for a more complete treatment of independent digital upset injection. Specific examples are proposed to verify the tracking performance analysis methodology. General analyses for different configurations are also proposed, and comparisons among configurations are conducted to demonstrate the availability and the characteristics of the design. Project supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61403395), the Natural Science Foundation of Tianjin, China (Grant No. 13JCYBJC39000), the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry, China, the Tianjin Key Laboratory of Civil Aircraft Airworthiness and Maintenance in Civil Aviation of China (Grant No. 104003020106), and the Fund for Scholars of Civil Aviation University of China (Grant No. 2012QD21x).
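The closed-loop behaviour under a stochastic switching signal can be illustrated with a toy Markov jump linear system. The two mode matrices and the transition probabilities below are invented for illustration and are not from the paper; they model rare upsets with fast recovery to the nominal dynamics.

```python
import numpy as np

rng = np.random.default_rng(7)
# two closed-loop dynamics: nominal and "upset" (degraded, slower recovery)
A = [np.array([[0.9, 0.1], [0.0, 0.8]]),    # nominal mode
     np.array([[1.0, 0.1], [0.0, 0.95]])]   # upset mode
P = np.array([[0.99, 0.01],   # Markov switching: upsets are rare,
              [0.30, 0.70]])  # and the system returns to nominal quickly

x = np.array([1.0, 1.0])      # initial tracking error
mode, norms = 0, []
for _ in range(500):
    mode = rng.choice(2, p=P[mode])   # stochastic switching signal
    x = A[mode] @ x                   # jump linear error dynamics
    norms.append(np.linalg.norm(x))
```

Because the chain spends almost all its time in the contracting nominal mode, the error norm decays despite the marginally stable upset mode, which is the qualitative picture behind mean-square stability of such systems.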
NASA Astrophysics Data System (ADS)
Benkrid, K.; Belkacemi, S.; Sukhsawas, S.
2005-06-01
This paper proposes an integrated framework for the high level design of high performance signal processing algorithms' implementations on FPGAs. The framework emerged from a constant need to rapidly implement increasingly complicated algorithms on FPGAs while maintaining the high performance needed in many real time digital signal processing applications. This is particularly important for application developers who often rely on iterative and interactive development methodologies. The central idea behind the proposed framework is to dynamically integrate high performance structural hardware description languages with higher level hardware languages in order to help satisfy the dual requirement of high level design and high performance implementation. The paper illustrates this by integrating two environments: Celoxica's Handel-C language, and HIDE, a structural hardware environment developed at the Queen's University of Belfast. On the one hand, Handel-C has been proven to be very useful in the rapid design and prototyping of FPGA circuits, especially control intensive ones. On the other hand, HIDE has been used extensively, and successfully, in the generation of highly optimised parameterisable FPGA cores. In this paper, this is illustrated in the construction of a scalable and fully parameterisable core for image algebra's five core neighbourhood operations, where fully floorplanned efficient FPGA configurations, in the form of EDIF netlists, are generated automatically for instances of the core. In the proposed combined framework, highly optimised data paths are invoked dynamically from within Handel-C, and are synthesized using HIDE. Although the idea might seem simple prima facie, it could have serious implications for the design of future generations of hardware description languages.
Sitaraman, Shivakumar; Ham, Young S.; Gharibyan, Narek; ...
2017-03-27
Here, fuel assemblies in the spent fuel pool are stored by suspending them in two vertically stacked layers at the Atucha Unit 1 nuclear power plant (Atucha-I). This introduces the unique problem of verifying the presence of fuel in either layer without physically moving the fuel assemblies. Given that the facility uses both natural uranium and slightly enriched uranium at 0.85 wt% 235U and has been in operation since 1974, a wide range of burnups and cooling times can exist in any given pool. A gross defect detection tool, the spent fuel neutron counter (SFNC), has been used at the site to verify the presence of fuel up to burnups of 8000 MWd/t. At higher discharge burnups, the existing signal processing software of the tool was found to fail due to nonlinearity of the source term with burnup.
Cloud-scale genomic signals processing classification analysis for gene expression microarray data.
Harvey, Benjamin; Soo-Yeon Ji
2014-01-01
As microarray data available to scientists continues to increase in size and complexity, it has become overwhelmingly important to find ways of drawing inference, useful to scientists, through analysis of DNA/mRNA sequence data. Though there have been many attempts to elucidate the issue of bringing forth biological inference by means of wavelet preprocessing and classification, there has not been a research effort that focuses on cloud-scale classification analysis of microarray data using wavelet thresholding in a cloud environment to identify significantly expressed features. This paper proposes a novel methodology that uses wavelet-based denoising to initialize a threshold for determination of significantly expressed genes for classification. Additionally, this research was implemented within a cloud-based distributed processing environment. Cloud computing and wavelet thresholding were used for the classification of 14 tumor classes from the Global Cancer Map (GCM). The results proved to be more accurate than using a predefined p-value for differential expression classification. This novel methodology analyzed wavelet-based threshold features of gene expression in a cloud environment, furthermore classifying the expression of samples by analyzing gene patterns, which inform us of biological processes. Moreover, it enables researchers to face the present and forthcoming challenges that may arise in the analysis of large microarray datasets in functional genomics.
DOT National Transportation Integrated Search
2016-06-01
Highway-rail grade crossings (HRGCs) and the intersections in their proximity are areas where potential problems in terms of safety and efficiency often arise if only simple or outdated treatments, such as normal signal timing or passive railroad war...
Methodology Investigation Automatic Magnetic Recording Borescope.
1986-01-01
or other brushless signal coupling devices to the extent possible and feasible to reduce or eliminate the need for slip ring and brush type signal...the inspection head, is used to magnetically couple the necessary energy across the rotary interface. Because there is (1) an appreciable air gap in...were written. (2) As required by the contract, the signal conditioners in the MB employ automatic gain control to compensate for the changes in
Information Theoretic Extraction of EEG Features for Monitoring Subject Attention
NASA Technical Reports Server (NTRS)
Principe, Jose C.
2000-01-01
The goal of this project was to test the applicability of information theoretic learning (feasibility study) to develop new brain computer interfaces (BCI). The difficulty of BCI comes from several aspects: (1) the effective collection of signals related to cognition; (2) the preprocessing of these signals to extract the relevant information; (3) the pattern recognition methodology to reliably detect the signals related to cognitive states. We only addressed the last two aspects in this research. We started by evaluating an information theoretic measure of distance (Bhattacharyya distance) for BCI performance, with good predictive results. We also compared several features to detect the presence of event related desynchronization (ERD) and synchronization (ERS), and concluded that, at least for now, bandpass filtering is the best compromise between simplicity and performance. Finally, we implemented several classifiers for temporal pattern recognition. We found that the performance of temporal classifiers is superior to static classifiers, but not by much. We conclude by stating that the future of BCI should be found in alternate approaches to sense, collect and process the signals created by populations of neurons. Towards this goal, cross-disciplinary teams of neuroscientists and engineers should be funded to approach BCIs from a much more principled viewpoint.
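The Bhattacharyya distance used here to gauge feature separability has a closed form for Gaussian class-conditional densities; a small sketch (the Gaussian assumption is ours, though it is the standard setting for this measure):

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two Gaussian class-conditional
    densities; larger values suggest more separable feature classes."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov = (cov1 + cov2) / 2.0
    diff = mu2 - mu1
    maha = diff @ np.linalg.solve(cov, diff) / 8.0       # mean-separation term
    logdet = lambda m: np.linalg.slogdet(m)[1]
    return maha + 0.5 * (logdet(cov) - 0.5 * (logdet(cov1) + logdet(cov2)))

# identical classes give zero; separated means give a positive distance
same = bhattacharyya_gaussian([0, 0], np.eye(2), [0, 0], np.eye(2))
apart = bhattacharyya_gaussian([0, 0], np.eye(2), [3, 0], np.eye(2))
```

For equal covariances the covariance term vanishes and the distance reduces to one eighth of the squared Mahalanobis distance between the class means.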
Coherency of seismic noise, Green functions and site effects
NASA Astrophysics Data System (ADS)
Prieto, G. A.; Beroza, G. C.
2007-12-01
The newly rediscovered methodology of cross correlating seismic noise (or seismic coda) to retrieve the Green function takes advantage of the coherency of the signals across a set of stations. Only coherent signals are expected to emerge after stacking over a long enough time. Cross-correlation has a significant disadvantage for this purpose, in that the Green function recovered is convolved with the source-time function of the noise source. For seismic waves, this can mean that the microseism peak dominates the signal. We show how the use of the transfer function between sensors provides a better resolved Green function (after inverse Fourier transform), because the deconvolution process removes the effect of the noise source-time function. In addition, we compute the coherence of the seismic noise as a function of frequency and distance, providing information about the effective frequency band over which Green function retrieval is possible. The coherence may also be used in resolution analysis for time reversal as a constraint on the de-coherence length (the distance between sensors over which the signals become uncorrelated). We use the information from the transfer function and the coherence to examine wave propagation effects (attenuation and site effects) for closely spaced stations compared to a reference station.
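The deconvolution idea — estimating the inter-station transfer function as the ratio of cross- to auto-spectra, so the noise source spectrum cancels — can be sketched on synthetic data. The delay, attenuation, noise level, and windowing choices below are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.signal import csd, welch, coherence

fs = 100.0                     # Hz, assumed sampling rate
rng = np.random.default_rng(1)
n = 1 << 14
delay = 25                     # samples of propagation between "stations"
x = rng.standard_normal(n)                     # noise record at station A
y = 0.5 * np.roll(x, delay) + 0.1 * rng.standard_normal(n)  # station B

f, Pxy = csd(x, y, fs=fs, nperseg=1024)   # cross-spectrum (Welch averaging)
_, Pxx = welch(x, fs=fs, nperseg=1024)    # auto-spectrum of the reference
H = Pxy / Pxx                             # transfer function between sensors
_, Cxy = coherence(x, y, fs=fs, nperseg=1024)

# the inverse FFT of H is the band-limited impulse response; its peak
# recovers the inter-sensor travel time, free of the source spectrum
h = np.fft.irfft(H)
travel_time_samples = np.argmax(np.abs(h))
```

The coherence `Cxy` plays the role described in the abstract: where it drops toward zero, the records no longer share a deterministic propagation path and Green function retrieval is not expected to work.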
Pedroza, Mesias; Schneider, Daniel J.; Karmouty-Quintana, Harry; Coote, Julie; Shaw, Stevan; Corrigan, Rebecca; Molina, Jose G.; Alcorn, Joseph L.; Galas, David; Gelinas, Richard; Blackburn, Michael R.
2011-01-01
Background Chronic lung diseases are the third leading cause of death in the United States due in part to an incomplete understanding of pathways that govern the progressive tissue remodeling that occurs in these disorders. Adenosine is elevated in the lungs of animal models and humans with chronic lung disease where it promotes air-space destruction and fibrosis. Adenosine signaling increases the production of the pro-fibrotic cytokine interleukin-6 (IL-6). Based on these observations, we hypothesized that IL-6 signaling contributes to tissue destruction and remodeling in a model of chronic lung disease where adenosine levels are elevated. Methodology/Principal Findings We tested this hypothesis by neutralizing or genetically removing IL-6 in adenosine deaminase (ADA)-deficient mice that develop adenosine dependent pulmonary inflammation and remodeling. Results demonstrated that both pharmacologic blockade and genetic removal of IL-6 attenuated pulmonary inflammation, remodeling and fibrosis in this model. The pursuit of mechanisms involved revealed adenosine and IL-6 dependent activation of STAT-3 in airway epithelial cells. Conclusions/Significance These findings demonstrate that adenosine enhances IL-6 signaling pathways to promote aspects of chronic lung disease. This suggests that blocking IL-6 signaling during chronic stages of disease may provide benefit in halting remodeling processes such as fibrosis and air-space destruction. PMID:21799929
Spacewire on Earth orbiting scatterometers
NASA Technical Reports Server (NTRS)
Bachmann, Alex; Lang, Minh; Lux, James; Steffke, Richard
2002-01-01
The need for a high speed, reliable and easy to implement communication link has led to the development of a space flight oriented version of IEEE 1355 called SpaceWire. SpaceWire is based on high-speed (200 Mbps) serial point-to-point links using Low Voltage Differential Signaling (LVDS). SpaceWIre has provisions for routing messages between a large network of processors, using wormhole routing for low overhead and latency. {additionally, there are available space qualified hybrids, which provide the Link layer to the user's bus}. A test bed of multiple digital signal processor breadboards, demonstrating the ability to meet signal processing requirements for an orbiting scatterometer has been implemented using three Astrium MCM-DSPs, each breadboard consists of a Multi Chip Module (MCM) that combines a space qualified Digital Signal Processor and peripherals, including IEEE-1355 links. With the addition of appropriate physical layer interfaces and software on the DSP, the SpaceWire link is used to communicate between processors on the test bed, e.g. sending timing references, commands, status, and science data among the processors. Results are presented on development issues surrounding the use of SpaceWire in this environment, from physical layer implementation (cables, connectors, LVDS drivers) to diagnostic tools, driver firmware, and development methodology. The tools, methods, and hardware, software challenges and preliminary performance are investigated and discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Espinosa-Paredes, Gilberto; Prieto-Guerrero, Alfonso; Nunez-Carrera, Alejandro
This paper introduces a wavelet-based method to analyze instability events in a boiling water reactor (BWR) during transient phenomena. The methodology to analyze BWR signals includes the following: (a) the short-time Fourier transform (STFT) analysis, (b) decomposition using the continuous wavelet transform (CWT), and (c) application of multiresolution analysis (MRA) using discrete wavelet transform (DWT). STFT analysis permits the study, in time, of the spectral content of analyzed signals. The CWT provides information about ruptures, discontinuities, and fractal behavior. To detect these important features in the signal, a mother wavelet has to be chosen and applied at several scales to obtain optimum results. MRA allows fast implementation of the DWT. Features like important frequencies, discontinuities, and transients can be detected with analysis at different levels of detail coefficients. The STFT was used to provide a comparison between a classic method and the wavelet-based method. The damping ratio, which is an important stability parameter, was calculated as a function of time. The transient behavior can be detected by analyzing the maximum contained in detail coefficients at different levels in the signal decomposition. This method allows analysis of both stationary signals and highly nonstationary signals in the timescale plane. This methodology has been tested with the benchmark power instability event of Laguna Verde nuclear power plant (NPP) Unit 1, which is a BWR-5 NPP.
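Step (a), tracking the spectral content of a reactor signal in time with the STFT, might look like the following toy sketch; the sampling rate, the 0.5 Hz resonance, and the amplitude growth are invented stand-ins for real BWR data, not values from the benchmark event.

```python
import numpy as np
from scipy.signal import stft

fs = 200.0                       # Hz, invented sampling rate
t = np.arange(0, 20, 1 / fs)
# toy stand-in for a BWR signal: a 0.5 Hz oscillation whose amplitude
# triples in the second half of the record, mimicking a growing instability
sig = np.where(t < 10, 1.0, 3.0) * np.sin(2 * np.pi * 0.5 * t)

f, tt, Z = stft(sig, fs=fs, nperseg=512)
power = np.abs(Z) ** 2
band = f <= 1.0                               # bins around the resonance
early = power[band][:, (tt > 2) & (tt < 8)].mean()
late = power[band][:, (tt > 12) & (tt < 18)].mean()
growth = late / early   # roughly ninefold: amplitude tripled, power ~ amp^2
```

Watching this band power grow frame by frame is the STFT analogue of the detail-coefficient maxima the wavelet steps use to flag transient behavior.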
Next Generation Tissue Engineering of Orthopedic Soft Tissue-to-Bone Interfaces.
Boys, Alexander J; McCorry, Mary Clare; Rodeo, Scott; Bonassar, Lawrence J; Estroff, Lara A
2017-09-01
Soft tissue-to-bone interfaces are complex structures that consist of gradients of extracellular matrix materials, cell phenotypes, and biochemical signals. These interfaces, called entheses for ligaments, tendons, and the meniscus, are crucial to joint function, transferring mechanical loads and stabilizing orthopedic joints. When injuries occur to connected soft tissue, the enthesis must be re-established to restore function, but due to structural complexity, repair has proven challenging. Tissue engineering offers a promising solution for regenerating these tissues. This prospective review discusses methodologies for tissue engineering the enthesis, outlined in three key design inputs: materials processing methods, cellular contributions, and biochemical factors.
System perspectives for mobile platform design in m-Health
NASA Astrophysics Data System (ADS)
Roveda, Janet M.; Fink, Wolfgang
2016-05-01
Advances in integrated circuit technologies have led to the integration of medical sensor front ends with data processing circuits, i.e., mobile platform design for wearable sensors. We discuss design methodologies for wearable sensor nodes and their applications in m-Health. From the user perspective, flexibility, comfort, appearance, fashion, ease-of-use, and visibility are key form factors. From the technology development point of view, high accuracy, low power consumption, and high signal to noise ratio are desirable features. From the embedded software design standpoint, real time data analysis algorithms, application and database interfaces are the critical components to create successful wearable sensor-based products.
NASA Technical Reports Server (NTRS)
Buffalano, C.; Fogleman, S.; Gielecki, M.
1976-01-01
A methodology is outlined which can be used to estimate the costs of research and development projects. The approach uses the Delphi technique, a method developed by the Rand Corporation for systematically eliciting and evaluating group judgments in an objective manner. The use of the Delphi technique allows for the integration of expert opinion into the cost-estimating process in a consistent and rigorous fashion. This approach can also signal potential cost-problem areas, a result that can be a useful tool in planning additional cost analysis or in estimating contingency funds. A Monte Carlo approach is also examined.
Arc Fault Detection & Localization by Electromagnetic-Acoustic Remote Sensing
NASA Astrophysics Data System (ADS)
Vasile, C.; Ioana, C.
2017-05-01
Electrical arc faults that occur in photovoltaic systems represent a danger due to their economic impact on production and distribution. In this paper we propose a complete system, with focus on the methodology, that enables detection and localization of the arc fault through an electromagnetic-acoustic sensing system. By exploiting the multiple emissions of the arc fault, in conjunction with a real-time signal processing detection method, we ensure accurate detection and localization. In its final form, this work will present the complete system in greater detail, along with the methods employed, results and performance, and further work to be carried out.
Efficient Surface Enhanced Raman Scattering substrates from femtosecond laser based fabrication
NASA Astrophysics Data System (ADS)
Parmar, Vinod; Kanaujia, Pawan K.; Bommali, Ravi Kumar; Vijaya Prakash, G.
2017-10-01
A fast and simple femtosecond laser based methodology for efficient Surface Enhanced Raman Scattering (SERS) substrate fabrication has been proposed. Both nano scaffold silicon (black silicon) and gold nanoparticles (Au-NP) are fabricated by a femtosecond laser based technique suitable for mass production. The nano-rough silicon scaffold enables large electromagnetic fields from the localized surface plasmons of the decorating metallic nanoparticles. A giant enhancement of the Raman signal (approximately four orders of magnitude, 10^4) thus arises from the mixed effects of electron-photon-phonon coupling, even at nanomolar concentrations of the test organic species (Rhodamine 6G). The proposed process demonstrates low-cost, label-free application of these large-area SERS substrates.
Spectral fractionation detection of gold nanorod contrast agents using optical coherence tomography
Jia, Yali; Liu, Gangjun; Gordon, Andrew Y.; Gao, Simon S.; Pechauer, Alex D.; Stoddard, Jonathan; McGill, Trevor J.; Jayagopal, Ashwath; Huang, David
2015-01-01
We demonstrate the proof of concept of a novel Fourier-domain optical coherence tomography contrast mechanism using gold nanorod contrast agents and a spectral fractionation processing technique. The methodology detects the spectral shift of the backscattered light from the nanorods by comparing the ratio between the short and long wavelength halves of the optical coherence tomography signal intensity. Spectral fractionation further divides the halves into sub-bands to improve spectral contrast and suppress speckle noise. Herein, we show that this technique can detect gold nanorods in intralipid tissue phantoms. Furthermore, cellular labeling by gold nanorods was demonstrated using retinal pigment epithelial cells in vitro. PMID:25836459
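The core ratio test — comparing energy in the short- and long-wavelength halves of the detected spectrum — can be sketched as below. The wavelength band, Gaussian line shapes, and 10 nm plasmon shift are hypothetical, and the actual method further divides the halves into sub-bands to suppress speckle; this sketch shows only the two-band version.

```python
import numpy as np

def half_band_ratio(spectrum):
    """Ratio of long- to short-wavelength half-band energy of a detected
    spectrum; a shift of this ratio flags nanorod backscatter."""
    n = spectrum.size // 2
    return spectrum[n:].sum() / spectrum[:n].sum()

wl = np.linspace(800, 900, 256)                  # nm, hypothetical OCT band
tissue = np.exp(-((wl - 850) / 20) ** 2)         # spectrally neutral scatter
nanorod = np.exp(-((wl - 860) / 20) ** 2)        # red-shifted by the plasmon

r_tissue = half_band_ratio(tissue)     # ~1: symmetric about band center
r_nanorod = half_band_ratio(nanorod)   # >1: long-wavelength half gains energy
```

A per-pixel map of this ratio, thresholded against the tissue baseline, is the kind of contrast channel the abstract describes for locating the labeled cells.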
NASA Astrophysics Data System (ADS)
Vautier, Camille; Chatton, Eliot; Abbott, Benjamin; Harjung, Astrid; Labasque, Thierry; Guillou, Aurélie; Pannard, Alexandrine; Piscart, Christophe; Laverman, Anniet; Kolbe, Tamara; Massé, Stéphanie; de Dreuzy, Jean-Raynald; Thomas, Zahra; Aquilina, Luc; Pinay, Gilles
2017-04-01
Water quality in rivers results from biogeochemical processes in contributing hydrological compartments (soils, aquifers, hyporheic and riparian zones) and biochemical activity in the river network itself. Consequently, chemical fluxes fluctuate on multiple spatial and temporal scales, leading eventually to complex concentration signals in rivers. We characterized these fluctuations with innovative continuous monitoring of dissolved gases, to quantify transport and reaction processes occurring in different hydrological compartments. We performed stream-scale experiments in two headwater streams in Brittany, France. Factorial injections of inorganic nitrogen (NH4NO3), inorganic phosphate (P2O5) and multiple sources of labile carbon (acetate, tryptophan) were implemented in the two streams. We used a new field application of membrane inlet mass spectrometry to continuously monitor dissolved gases for multiple day-night periods (Chatton et al., 2016). Quantified gases included He, O2, N2, CO2, CH4, N2O, and 15N of dissolved N2 and N2O. We calibrated and assessed the methodology with well-established complementary techniques including gas chromatography and high-frequency water quality sensors. Wet chemistry and radon analysis complemented the study. The analyses provided several methodological and ecological insights and demonstrated that high frequency variations linked to background noise can be efficiently determined and filtered to derive effective fluxes. From a more fundamental point of view, the tested stream segments were fully characterized with extensive sampling of riverbeds and laboratory experiments, allowing point-level microbial and invertebrate diversity and activity to be scaled up to in-stream processing. This innovative technology allows fully-controlled in-situ experiments providing rich information with a high signal to noise ratio.
We present the integrated nutrient demand and uptake and discuss limiting processes and elements at the reach and catchment scales. Eliot Chatton, Thierry Labasque, Jérôme de La Bernardie, Nicolas Guihéneuf, Olivier Bour, Luc Aquilina. 2016. Field Continuous Measurement of Dissolved Gases with a CF-MIMS: Applications to the Physics and Biogeochemistry of Groundwater Flow. Environ. Sci. Technol.
Methodology for processing pressure traces used as inputs for combustion analyses in diesel engines
NASA Astrophysics Data System (ADS)
Rašić, Davor; Vihar, Rok; Žvar Baškovič, Urban; Katrašnik, Tomaž
2017-05-01
This study proposes a novel methodology for designing an optimum equiripple finite impulse response (FIR) filter for processing in-cylinder pressure traces of a diesel internal combustion engine, which serve as inputs for high-precision combustion analyses. The proposed automated workflow is based on an innovative approach of determining the transition band frequencies and optimum filter order. The methodology is based on discrete Fourier transform analysis, which is the first step to estimate the location of the pass-band and stop-band frequencies. The second step uses short-time Fourier transform analysis to refine the estimated aforementioned frequencies. These pass-band and stop-band frequencies are further used to determine the most appropriate FIR filter order. The most widely used existing methods for estimating the FIR filter order are not effective in suppressing the oscillations in the rate-of-heat-release (ROHR) trace, thus hindering the accuracy of combustion analyses. To address this problem, an innovative method for determining the order of an FIR filter is proposed in this study. This method is based on the minimization of the integral of normalized signal-to-noise differences between the stop-band frequency and the Nyquist frequency. Developed filters were validated using spectral analysis and calculation of the ROHR. The validation results showed that the filters designed using the proposed innovative method were superior compared with those using the existing methods for all analyzed cases.
Highlights
• Pressure traces of a diesel engine were processed by finite impulse response (FIR) filters with different orders
• Transition band frequencies were determined with an innovative method based on discrete Fourier transform and short-time Fourier transform
• Spectral analyses showed deficiencies of existing methods in determining the FIR filter order
• A new method of determining the FIR filter order for processing pressure traces was proposed
• The efficiency of the new method was demonstrated by spectral analyses and calculations of rate-of-heat-release traces
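The equiripple FIR design step can be sketched with SciPy's Parks-McClellan routine; the sampling rate, band edges, and filter order below are placeholders for the values the paper derives from its DFT/STFT analysis and its order-selection criterion.

```python
import numpy as np
from scipy.signal import remez, freqz, filtfilt

fs = 90_000.0                       # Hz, assumed pressure-trace sampling rate
f_pass, f_stop = 3_000.0, 5_000.0   # transition band (placeholder values)
numtaps = 101                       # filter order (placeholder)
taps = remez(numtaps, [0, f_pass, f_stop, fs / 2], [1, 0], fs=fs)

# inspect the equiripple response: ~unity pass band, attenuated stop band
w, h = freqz(taps, worN=4096, fs=fs)
pass_ripple = np.max(np.abs(np.abs(h[w < f_pass]) - 1))
stop_atten = np.max(np.abs(h[w > f_stop]))

# zero-phase filtering avoids shifting combustion events in crank angle
pressure = np.sin(2 * np.pi * 500 * np.arange(0, 0.01, 1 / fs))  # toy trace
smooth = filtfilt(taps, [1.0], pressure)
```

`filtfilt` is used rather than a single forward pass because a phase lag in the filtered pressure would distort the timing of the ROHR trace derived from it.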
[Nurses' perceptions of the vulnerabilities to STD/AIDS in light of the process of adolescence].
Silva, Ítalo Rodolfo; Gomes, Antonio Marcos Tosoli; Valadares, Glaucia Valente; dos Santos, Nereida Lúcia Palko; da Silva, Thiago Privado; Leite, Joséte Luzia
2015-09-01
To understand the perception of nurses on the vulnerabilities to STD/AIDS in light of the process of adolescence. Qualitative research conducted with 15 nurses in a centre for the studies of adolescent healthcare of a university hospital in Rio de Janeiro/Brazil. The adopted theoretical and methodological frameworks were the Complexity Theory and the Grounded Theory, respectively. The semi-structured interview was used to collect data from January to August 2012. This research presents the category: Nurses' perceptions of the vulnerabilities to STD/AIDS in light of the process of adolescence, and the subcategories: Risks and uncertainties of the process of adolescence: paths to STD/AIDS; Age-adolescence complex: expanding knowledge from the perception of nurses. Once the nurses understand the complexity of adolescence, they create strategies to reduce the vulnerability of adolescents to STD/AIDS. This signals the need to invest in education, assistance and the management of nursing care for adolescents.
NASA Astrophysics Data System (ADS)
Purwins, Hendrik; Herrera, Perfecto; Grachten, Maarten; Hazan, Amaury; Marxer, Ricard; Serra, Xavier
2008-09-01
We present a review on perception and cognition models designed for or applicable to music. An emphasis is put on computational implementations. We include findings from different disciplines: neuroscience, psychology, cognitive science, artificial intelligence, and musicology. The article summarizes the methodology that these disciplines use to approach the phenomena of music understanding, the localization of musical processes in the brain, and the flow of cognitive operations involved in turning physical signals into musical symbols, going from the transducers to the memory systems of the brain. We discuss formal models developed to emulate, explain and predict phenomena involved in early auditory processing, pitch processing, grouping, source separation, and music structure computation. We cover generic computational architectures of attention, memory, and expectation that can be instantiated and tuned to deal with specific musical phenomena. Criteria for the evaluation of such models are presented and discussed. Thereby, we lay out the general framework that provides the basis for the discussion of domain-specific music models in Part II.
A method for identifying EMI critical circuits during development of a large C3
NASA Astrophysics Data System (ADS)
Barr, Douglas H.
The circuit analysis methods and process Boeing Aerospace used on a large, ground-based military command, control, and communications (C3) system are described. This analysis was designed to help identify electromagnetic interference (EMI) critical circuits. The methodology used the MIL-E-6051 equipment criticality categories as the basis for defining critical circuits, relational database technology to help sort through and account for all of the approximately 5000 system signal cables, and Macintosh Plus personal computers to predict critical circuits based on safety margin analysis. The EMI circuit analysis process systematically examined all system circuits to identify which ones were likely to be EMI critical. The process used two separate, sequential safety margin analyses to identify critical circuits (conservative safety margin analysis, and detailed safety margin analysis). These analyses used field-to-wire and wire-to-wire coupling models using both worst-case and detailed circuit parameters (physical and electrical) to predict circuit safety margins. This process identified the predicted critical circuits that could then be verified by test.
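The safety-margin screening reduces to comparing predicted coupled levels against circuit susceptibility thresholds; a minimal sketch with invented numbers (the 6 dB margin is the commonly cited MIL-E-6051 requirement for most circuit categories, with larger margins for ordnance circuits):

```python
import numpy as np

# hypothetical values per circuit, in dBuV: predicted coupled interference
# (from field-to-wire / wire-to-wire models) and susceptibility thresholds
induced = np.array([40.0, 55.0, 62.0])
threshold = np.array([60.0, 58.0, 60.0])

margin = threshold - induced      # safety margin per circuit, dB
critical = margin < 6.0           # flag circuits failing the 6 dB requirement
# margins of 20, 3, and -2 dB: the second and third circuits are EMI critical
```

The two-pass approach in the abstract amounts to running this comparison first with worst-case coupling estimates to prune the ~5000 cables, then repeating it with detailed circuit parameters on the survivors.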
Pathological speech signal analysis and classification using empirical mode decomposition.
Kaleem, Muhammad; Ghoraani, Behnaz; Guergachi, Aziz; Krishnan, Sridhar
2013-07-01
Automated classification of normal and pathological speech signals can provide an objective and accurate mechanism for pathological speech diagnosis, and is an active area of research. A large part of this research is based on analysis of acoustic measures extracted from sustained vowels. However, sustained vowels do not reflect real-world attributes of voice as effectively as continuous speech, which can take into account important attributes of speech such as rapid voice onset and termination, changes in voice frequency and amplitude, and sudden discontinuities in speech. This paper presents a methodology based on empirical mode decomposition (EMD) for classification of continuous normal and pathological speech signals obtained from a well-known database. EMD is used to decompose randomly chosen portions of speech signals into intrinsic mode functions, which are then analyzed to extract meaningful temporal and spectral features, including true instantaneous features which can capture discriminative information in signals hidden at local time-scales. A total of six features are extracted, and a linear classifier is used with the feature vector to classify continuous speech portions obtained from a database consisting of 51 normal and 161 pathological speakers. A classification accuracy of 95.7 % is obtained, thus demonstrating the effectiveness of the methodology.
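The EMD sifting loop that produces intrinsic mode functions can be sketched in a few lines. This single-IMF toy (cubic-spline envelopes, fixed iteration count, a synthetic two-tone "speech" signal) omits the stopping criteria and boundary handling a production EMD implementation uses.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def sift_imf(x, t, n_iter=8):
    """Extract the first intrinsic mode function by sifting: repeatedly
    subtract the mean of the upper and lower cubic-spline envelopes."""
    h = x.copy()
    for _ in range(n_iter):
        pk, _ = find_peaks(h)       # local maxima
        tr, _ = find_peaks(-h)      # local minima
        if pk.size < 4 or tr.size < 4:
            break                   # too few extrema to build envelopes
        upper = CubicSpline(t[pk], h[pk])(t)
        lower = CubicSpline(t[tr], h[tr])(t)
        h = h - (upper + lower) / 2.0
    return h

t = np.linspace(0, 1, 2000)
fast = np.sin(2 * np.pi * 60 * t)        # voice-like oscillation
slow = 0.8 * np.sin(2 * np.pi * 3 * t)   # slow drift
imf1 = sift_imf(fast + slow, t)          # first IMF ≈ the fast component
```

Features such as instantaneous amplitude and frequency are then computed per IMF, which is how the method captures the local time-scale information the abstract refers to.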
Kepler AutoRegressive Planet Search
NASA Astrophysics Data System (ADS)
Caceres, Gabriel Antonio; Feigelson, Eric
2016-01-01
The Kepler AutoRegressive Planet Search (KARPS) project uses statistical methodology associated with autoregressive (AR) processes to model Kepler lightcurves in order to improve exoplanet transit detection in systems with high stellar variability. We also introduce a planet-search algorithm to detect transits in time-series residuals after application of the AR models. One of the main obstacles in detecting faint planetary transits is the intrinsic stellar variability of the host star. The variability displayed by many stars may have autoregressive properties, wherein later flux values are correlated with previous ones in some manner. Our analysis procedure consists of three steps: pre-processing of the data to remove discontinuities, gaps and outliers; AR-type model selection and fitting; and transit signal search of the residuals using a new Transit Comb Filter (TCF) that replaces traditional box-finding algorithms. The analysis procedures of the project are applied to a portion of the publicly available Kepler light curve data for the full 4-year mission duration. Tests of the methods have been made on a subset of Kepler Objects of Interest (KOI) systems, classified both as planetary `candidates' and `false positives' by the Kepler Team, as well as a random sample of unclassified systems. We find that the ARMA-type modeling successfully reduces the stellar variability, by a factor of 10 or more in active stars and by smaller factors in more quiescent stars. A typical quiescent Kepler star has an interquartile range (IQR) of ~10 e-/sec, which may improve slightly after modeling, while those with IQR ranging from 20 to 50 e-/sec have improvements of 20% up to 70%. High-activity stars (IQR exceeding 100 e-/sec) improve markedly. A periodogram based on the TCF is constructed to concentrate the signal of the periodic transit spikes in the residuals. When a periodic transit is found, the model is displayed on a standard period-folded averaged light curve.
Our findings to date on real-data tests of the KARPS methodology will be discussed including confirmation of some Kepler Team `candidate' planets. We also present cases of new possible planetary signals.
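The AR-modeling step of the procedure above can be sketched in a few lines. The following is an illustrative least-squares AR fit on a synthetic light curve, not KARPS code: the AR order, coefficients, and simulated flux series are assumptions chosen so that the fit reproduces the kind of variance reduction the abstract reports for active stars.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "stellar variability": an AR(2) process observed for 5000 cadences
n = 5000
flux = np.zeros(n)
for t in range(2, n):
    flux[t] = 1.4 * flux[t - 1] - 0.5 * flux[t - 2] + rng.normal()

def fit_ar(x, p):
    """Fit an AR(p) model by ordinary least squares; return (coefficients, residuals)."""
    n = len(x)
    # Column k holds the lag-(k+1) values x[t-1-k] for t = p .. n-1
    X = np.column_stack([x[p - 1 - k:n - 1 - k] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, y - X @ coef

coef, resid = fit_ar(flux, p=2)
variance_reduction = np.var(flux[2:]) / np.var(resid)
```

In a KARPS-like pipeline the residuals, rather than the raw flux, would then be searched for periodic transit spikes with the TCF.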
ERP correlates of error processing during performance on the Halstead Category Test.
Santos, I M; Teixeira, A R; Tomé, A M; Pereira, A T; Rodrigues, P; Vagos, P; Costa, J; Carrito, M L; Oliveira, B; DeFilippis, N A; Silva, C F
2016-08-01
The Halstead Category Test (HCT) is a neuropsychological test that measures a person's ability to formulate and apply abstract principles. Performance must be adjusted based on feedback after each trial, and errors are common until the underlying rules are discovered. Event-related potential (ERP) studies associated with the HCT are lacking. This paper demonstrates the use of a methodology inspired by Singular Spectrum Analysis (SSA), applied to EEG signals, to remove high-amplitude ocular and movement artifacts during performance on the test. This filtering technique introduces no phase or latency distortions, with minimum loss of relevant EEG information. Importantly, the test was applied in its original clinical format, without introducing adaptations to ERP recordings. After signal treatment, the feedback-related negativity (FRN) wave, which is related to error processing, was identified. This component peaked around 250 ms after feedback, in fronto-central electrodes. As expected, errors elicited more negative amplitudes than correct responses. Results are discussed in terms of the increased clinical potential that coupling ERP information with behavioral performance data can bring to the specificity of the HCT in diagnosing different types of impairment in frontal brain function. Copyright © 2016. Published by Elsevier B.V.
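The SSA-style filtering idea (embed the signal in a trajectory matrix, truncate its SVD, and average anti-diagonals back into a time series) can be sketched as follows. The window length, component count, and test signal are illustrative assumptions, not the study's settings or exact algorithm.

```python
import numpy as np

def ssa_filter(x, window, n_components):
    """SSA-style filter: truncated SVD of the trajectory matrix + anti-diagonal averaging."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: column j holds x[j : j + window]
    X = np.column_stack([x[j:j + window] for j in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_low = U[:, :n_components] * s[:n_components] @ Vt[:n_components]
    # Hankelization: average each anti-diagonal back into a 1-D signal
    recon = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        recon[j:j + window] += X_low[:, j]
        counts[j:j + window] += 1
    return recon / counts

# Demo: a 5 Hz "EEG rhythm" buried in noise
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 400)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.5 * rng.normal(size=t.size)
filtered = ssa_filter(noisy, window=40, n_components=2)

rmse_before = np.sqrt(np.mean((noisy - clean) ** 2))
rmse_after = np.sqrt(np.mean((filtered - clean) ** 2))
```

Because the reconstruction is a linear projection of the original samples, it introduces no phase or latency distortion, which is the property the paper exploits for ERP work.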
NASA Astrophysics Data System (ADS)
Ye, Liming; Yang, Guixia; Van Ranst, Eric; Tang, Huajun
2013-03-01
A generalized, structural, time series modeling framework was developed to analyze the monthly records of absolute surface temperature, one of the most important environmental parameters, using a deterministic-stochastic combined (DSC) approach. Although the development of the framework was based on the characterization of the variation patterns of a global dataset, the methodology could be applied to any monthly absolute temperature record. Deterministic processes were used to characterize the variation patterns of the global trend and the cyclic oscillations of the temperature signal, involving polynomial functions and the Fourier method, respectively, while stochastic processes were employed to account for any remaining patterns in the temperature signal, involving seasonal autoregressive integrated moving average (SARIMA) models. A prediction of the monthly global surface temperature during the second decade of the 21st century using the DSC model shows that the global temperature will likely continue to rise at twice the average rate of the past 150 years. The evaluation of prediction accuracy shows that DSC models perform systematically well against selected models of other authors, suggesting that DSC models, when coupled with other eco-environmental models, can be used as a supplemental tool for short-term (~10-year) environmental planning and decision making.
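The deterministic half of the DSC framework (polynomial trend plus Fourier cycle, fit by least squares) can be sketched as follows. The synthetic record, polynomial degree, and number of harmonics are assumptions for illustration, and the SARIMA stage for the stochastic remainder is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly temperature record: linear trend + annual cycle + noise
months = np.arange(240)                        # 20 years of monthly data
trend = 0.002 * months                         # slow warming trend
cycle = 8.0 * np.sin(2 * np.pi * months / 12 + 0.3)
temp = 15.0 + trend + cycle + rng.normal(scale=0.5, size=months.size)

def dsc_deterministic(t, y, degree=1, harmonics=2, period=12):
    """Least-squares fit of a polynomial trend plus a Fourier cycle; returns fitted values."""
    cols = [t ** d for d in range(degree + 1)]
    for h in range(1, harmonics + 1):
        cols.append(np.sin(2 * np.pi * h * t / period))
        cols.append(np.cos(2 * np.pi * h * t / period))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

fitted = dsc_deterministic(months, temp)
residual = temp - fitted
```

In the DSC approach the `residual` series, stripped of trend and cycle, is what the SARIMA model would then be fitted to.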
Granados-Lieberman, David; Valtierra-Rodriguez, Martin; Morales-Hernandez, Luis A; Romero-Troncoso, Rene J; Osornio-Rios, Roque A
2013-04-25
Power quality disturbance (PQD) monitoring has become an important issue due to the growing number of disturbing loads connected to the power line and to the susceptibility of certain loads to their presence. In any real power system, there are multiple sources of several disturbances which can have different magnitudes and appear at different times. In order to avoid equipment damage and estimate the damage severity, they have to be detected, classified, and quantified. In this work, a smart sensor for detection, classification, and quantification of PQD is proposed. First, the Hilbert transform (HT) is used as the detection technique; then, the classification of the envelope of a PQD obtained through HT is carried out by a feedforward neural network (FFNN). Finally, the root mean square voltage (Vrms), peak voltage (Vpeak), crest factor (CF), and total harmonic distortion (THD) indices calculated through HT and Parseval's theorem, as well as an instantaneous exponential time constant, quantify the PQD according to the disturbance presented. The aforementioned methodology is processed online using digital hardware signal processing based on a field programmable gate array (FPGA). In addition, the performance of the proposed smart sensor is validated with synthetic signals and tested under real operating conditions.
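The envelope detection and the quantification indices can be illustrated in software. The sketch below uses an FFT-based analytic signal in place of the authors' FPGA implementation (an assumption of this illustration) to compute the HT envelope, Vrms, Vpeak, and crest factor of a clean 60 Hz waveform.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (discrete Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0        # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0            # keep the Nyquist bin
    return np.fft.ifft(X * h)

fs = 10_000                        # sampling rate, Hz
t = np.arange(fs) / fs             # one second of a 60 Hz line voltage
v = 120 * np.sqrt(2) * np.sin(2 * np.pi * 60 * t)

envelope = np.abs(analytic_signal(v))   # ~constant for a clean sinusoid
v_rms = np.sqrt(np.mean(v ** 2))
v_peak = np.max(np.abs(v))
crest_factor = v_peak / v_rms            # sqrt(2) for an undistorted sine
```

A disturbance such as a sag or swell would show up directly as a dip or bump in `envelope`, which is what the FFNN in the paper classifies.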
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harben, P E; Harris, D; Myers, S
Seismic imaging and tracking methods have intelligence and monitoring applications. Current systems, however, do not adequately calibrate or model the unknown geological heterogeneity. Current systems are also not designed for rapid data acquisition and analysis in the field. This project seeks to build the core technological capabilities coupled with innovative deployment, processing, and analysis methodologies to allow seismic methods to be effectively utilized in the applications of seismic imaging and vehicle tracking where rapid (minutes to hours) and real-time analysis is required. The goal of this project is to build capabilities in acquisition system design, utilization and in full 3D finite difference modeling as well as statistical characterization of geological heterogeneity. Such capabilities coupled with a rapid field analysis methodology based on matched field processing are applied to problems associated with surveillance, battlefield management, finding hard and deeply buried targets, and portal monitoring. This project benefits the U.S. military and intelligence community in support of LLNL's national-security mission. FY03 was the final year of this project. In the 2.5 years this project has been active, numerous and varied developments and milestones have been accomplished. A wireless communication module for seismic data was developed to facilitate rapid seismic data acquisition and analysis. The E3D code was enhanced to include topographic effects. Codes were developed to implement the Karhunen-Loeve (K-L) statistical methodology for generating geological heterogeneity that can be utilized in E3D modeling. The matched field processing methodology applied to vehicle tracking and based on a field calibration to characterize geological heterogeneity was tested and successfully demonstrated in a tank tracking experiment at the Nevada Test Site.
A 3-seismic-array vehicle tracking testbed was installed on-site at LLNL for testing real-time seismic tracking methods. A field experiment was conducted over a tunnel at the Nevada Site that quantified the tunnel reflection signal and, coupled with modeling, identified key needs and requirements in experimental layout of sensors. A large field experiment was conducted at the Lake Lynn Laboratory, a mine safety research facility in Pennsylvania, over a tunnel complex in realistic, difficult conditions. This experiment gathered the necessary data for a full 3D attempt to apply the methodology. The experiment also collected data to analyze the capabilities to detect and locate in-tunnel explosions for mine safety and other applications.
Cai, Weidong; Leung, Hoi-Chung
2011-01-01
Background The human inferior frontal cortex (IFC) is a large heterogeneous structure with distinct cytoarchitectonic subdivisions and fiber connections. It has been found to be involved in a wide range of executive control processes, from target detection and rule retrieval to response control. Since these processes are often studied separately, the functional organization of executive control processes within the IFC remains unclear. Methodology/Principal Findings We conducted an fMRI study to examine the activities of the subdivisions of IFC during the presentation of a task cue (rule retrieval) and during the performance of a stop-signal task (requiring response generation and inhibition) in comparison to a not-stop task (requiring response generation but not inhibition). We utilized a mixed event-related and block design to separate brain activity in correspondence to transient control processes from rule-related and sustained control processes. We found differentiation in control processes within the IFC. Our findings reveal that the bilateral ventral-posterior IFC/anterior insula are more active on both successful and unsuccessful stop trials relative to not-stop trials, suggesting their potential role in the early stage of stopping, such as triggering the stop process. Direct countermanding seems to be outside of the IFC. In contrast, the dorsal-posterior IFC/inferior frontal junction (IFJ) showed transient activity in correspondence to the infrequent presentation of the stop signal in both tasks and the left anterior IFC showed differential activity in response to the task cues. The IFC subdivisions also exhibited similar but distinct patterns of functional connectivity during response control. Conclusions/Significance Our findings suggest that executive control processes are distributed across the IFC and that the different subdivisions of IFC may support different control operations through parallel cortico-cortical and cortico-striatal circuits. PMID:21673969
NASA Astrophysics Data System (ADS)
Gutierrez, Ronald R.; Abad, Jorge D.; Parsons, Daniel R.; Best, James L.
2013-09-01
There is no standard nomenclature and procedure to systematically identify the scale and magnitude of bed forms such as bars, dunes, and ripples that are commonly present in many sedimentary environments. This paper proposes a standardization of the nomenclature and symbolic representation of bed forms and details the combined application of robust spline filters and continuous wavelet transforms to discriminate these morphodynamic features, allowing the quantitative recognition of bed form hierarchies. Herein, the proposed methodology for bed form discrimination is first applied to synthetic bed form profiles, which are sampled at a Nyquist ratio interval of 2.5-50 and a signal-to-noise ratio interval of 1-20, and subsequently applied to a detailed 3-D bed topography from the Río Paraná, Argentina, which exhibits large-scale dunes with superimposed, smaller bed forms. After discriminating the synthetic bed form signals into three bed form hierarchies that represent bars, dunes, and ripples, the accuracy of the methodology is quantified by estimating the reproducibility, the cross correlation, and the standard deviation ratio of the actual and retrieved signals. For the case of the field measurements, the proposed method is used to discriminate small and large dunes and subsequently obtain and statistically analyze the common morphological descriptors such as wavelength, slope, and amplitude of both stoss and lee sides of these different size bed forms. Analysis of the synthetic signals demonstrates that the Morlet wavelet function is the most efficient in retrieving smaller periodicities such as ripples and smaller dunes and that the proposed methodology effectively discriminates waves of different periods for Nyquist ratios higher than 25 and signal-to-noise ratios higher than 5.
The analysis of bed forms in the Río Paraná reveals that, in most cases, a Gamma probability distribution, with a positive skewness, best describes the dimensionless wavelength and amplitude for both the lee and stoss sides of large dunes. For the case of smaller superimposed dunes, the dimensionless wavelength shows a discrete behavior that is governed by the sampling frequency of the data, and the dimensionless amplitude better fits the Gamma probability distribution, again with a positive skewness. This paper thus provides a robust methodology for systematically identifying the scales and magnitudes of bed forms in a range of environments.
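The wavelet-based scale discrimination described above can be sketched with a plain Morlet continuous wavelet transform. The synthetic dune-plus-ripple profile, scale grid, and wavelet parameters below are illustrative assumptions, not the paper's data or code; the sketch simply shows how the dominant bed-form wavelength emerges from the scale with maximum wavelet power.

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Continuous wavelet transform of x with an L2-normalised Morlet wavelet."""
    n = len(x)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        m = int(min(8 * s, (n - 1) // 2))          # truncated wavelet support
        tau = np.arange(-m, m + 1)
        psi = (np.pi ** -0.25 / np.sqrt(s)
               * np.exp(1j * w0 * tau / s - 0.5 * (tau / s) ** 2))
        out[i] = np.convolve(x, psi, mode="same")  # psi(-t) = conj(psi(t))
    return out

# Synthetic bed profile: large dunes (wavelength 100) with superimposed ripples
rng = np.random.default_rng(2)
pos = np.arange(1200)
profile = np.sin(2 * np.pi * pos / 100) + 0.3 * np.sin(2 * np.pi * pos / 12)
profile += 0.1 * rng.normal(size=pos.size)

w0 = 6.0
scales = np.arange(5, 140, 2)
# Average wavelet power over a central window, away from edge effects
power = (np.abs(morlet_cwt(profile, scales, w0)) ** 2)[:, 300:900].mean(axis=1)
# Convert the best scale to a Fourier period for the Morlet wavelet
dominant_period = scales[np.argmax(power)] * 4 * np.pi / (w0 + np.sqrt(2 + w0 ** 2))
```

The large-dune wavelength dominates the power spectrum; thresholding the scale axis would then separate the ripple band from the dune band, which is the discrimination step of the paper.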
Conceptualization of the Complex Outcomes of Sexual Abuse: A Signal Detection Analysis
ERIC Educational Resources Information Center
Pechtel, Pia; Evans, Ian M.; Podd, John V.
2011-01-01
Eighty-five New Zealand based practitioners experienced in treating adults with a history of child sexual abuse participated in an online judgment study of child sexual abuse outcomes using signal detection theory methodology. Participants' level of sensitivity was assessed independent of their degree of response bias when discriminating (a) known…
UMTS signal measurements with digital spectrum analysers.
Licitra, G; Palazzuoli, D; Ricci, A S; Silvi, A M
2004-01-01
The launch of the Universal Mobile Telecommunications System (UMTS), the most recent mobile telecommunications standard, has imposed the requirement of updating measurement instrumentation and methodologies. In order to define the most reliable measurement procedure aimed at assessing exposure to electromagnetic fields, the features of modern spectrum analysers for correct signal characterisation have been reviewed.
75 FR 56868 - Implementation of the Satellite Television Extension and Localism Act of 2010
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-17
.... (``Subsection (c) resolves the phantom signal ambiguity that required cable systems to pay royalty fees for... distant signals to some but not all communities to calculate royalty fees on the basis of the actual...'s computation of its royalty fee consistent with the methodology described in subparagraph (C)(iii...
NASA Astrophysics Data System (ADS)
Ghoraani, Behnaz; Krishnan, Sridhar
2009-12-01
The number of people affected by speech problems is increasing as the modern world places increasing demands on the human voice via mobile telephones, voice recognition software, and interpersonal verbal communications. In this paper, we propose a novel methodology for automatic pattern classification of pathological voices. The main contribution of this paper is the extraction of meaningful and unique features using an adaptive time-frequency distribution (TFD) and nonnegative matrix factorization (NMF). We construct the adaptive TFD as an effective signal analysis domain to dynamically track the nonstationarity in the speech and utilize NMF as a matrix decomposition (MD) technique to quantify the constructed TFD. The proposed method extracts meaningful and unique features from the joint TFD of the speech, and automatically identifies and measures the abnormality of the signal. Depending on the abnormality measure of each signal, we classify the signal as normal or pathological. The proposed method is applied to the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database, which consists of 161 pathological and 51 normal speakers, and an overall classification accuracy of 98.6% was achieved.
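The NMF step that quantifies the constructed TFD can be illustrated with the classic Lee-Seung multiplicative updates. The toy matrix, rank, and iteration count below are assumptions for demonstration, not the paper's configuration; the point is only that a nonnegative matrix (such as a TFD) factors into nonnegative basis and activation matrices.

```python
import numpy as np

rng = np.random.default_rng(2)

def nmf(V, rank, n_iter=500, eps=1e-9):
    """Factor V ~ W @ H with Lee-Seung multiplicative updates (Frobenius norm)."""
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis vectors
    return W, H

# Toy nonnegative "time-frequency" matrix built from two latent components
V = rng.random((40, 2)) @ rng.random((2, 60))
W, H = nmf(V, rank=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the paper's setting, the columns of `W` would play the role of spectral bases and the rows of `H` their temporal activations, from which the abnormality features are derived.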
Advances in Lipidomics for Cancer Biomarkers Discovery
Perrotti, Francesca; Rosa, Consuelo; Cicalini, Ilaria; Sacchetta, Paolo; Del Boccio, Piero; Genovesi, Domenico; Pieragostino, Damiana
2016-01-01
Lipids play critical functions in cellular survival, proliferation, interaction and death, since they are involved in chemical-energy storage, cellular signaling, cell membranes, and cell–cell interactions. These cellular processes are strongly related to carcinogenesis pathways, particularly to transformation, progression, and metastasis, suggesting that bioactive lipids are mediators of a number of oncogenic processes. The current review gives a synopsis of a lipidomic approach in tumor characterization; we provide an overview of potential lipid biomarkers in the oncology field and of the principal lipidomic methodologies applied. The novel lipidomic biomarkers are reviewed in an effort to underline their role in diagnosis, in prognostic characterization and in prediction of therapeutic outcomes. A lipidomic investigation through mass spectrometry highlights new insights on molecular mechanisms underlying cancer disease. This new understanding will promote clinical applications in drug discovery and personalized therapy. PMID:27916803
Discrete mathematics for spatial data classification and understanding
NASA Astrophysics Data System (ADS)
Mussio, Luigi; Nocera, Rossella; Poli, Daniela
1998-12-01
Data processing, in the field of information technology, requires new tools involving discrete mathematics, such as data compression, signal enhancement, data classification and understanding, and hypertexts and multimedia (considering educational aspects too), because the sheer mass of data demands automatic data management and precludes any a priori knowledge. The methodologies and procedures used in this class of problems concern different kinds of segmentation techniques and relational strategies, like clustering, parsing, vectorization, formalization, fitting and matching. On the other hand, the complexity of this approach makes it necessary to perform optimal sampling and outlier detection at the very beginning, in order to define the set of data to be processed: raw data supply very poor information. For these reasons, no hypotheses about the distribution behavior of the data can generally be made, and a judgment should be acquired by distribution-free inference only.
Sanroman-Junquera, Margarita; Mora-Jimenez, Inmaculada; Garcia-Alberola, Arcadio; Caamano, Antonio J; Trenor, Beatriz; Rojo-Alvarez, Jose L
2018-04-01
Spatial and temporal processing of intracardiac electrograms provides relevant information to support the arrhythmia ablation during electrophysiological studies. Current cardiac navigation systems (CNS) and electrocardiographic imaging (ECGI) build detailed 3-D electroanatomical maps (EAM), which represent the spatial anatomical distribution of bioelectrical features, such as activation time or voltage. We present a principled methodology for spectral analysis of both EAM geometry and bioelectrical feature in CNS or ECGI, including their spectral representation, cutoff frequency, or spatial sampling rate (SSR). Existing manifold harmonic techniques for spectral mesh analysis are adapted to account for a fourth dimension, corresponding to the EAM bioelectrical feature. Appropriate scaling is required to address different magnitudes and units. With our approach, simulated and real EAM showed strong SSR dependence on both the arrhythmia mechanism and the cardiac anatomical shape. For instance, high frequencies increased significantly the SSR because of the "early-meets-late" in flutter EAM, compared with the sinus rhythm. In addition, higher frequency components were obtained for the left atrium (more complex anatomy) than for the right atrium in sinus rhythm. The proposed manifold harmonics methodology opens the field toward new signal processing tools for principled EAM spatiofeature analysis in CNS and ECGI, and to an improved knowledge on arrhythmia mechanisms.
Teng, Santani
2017-01-01
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019
Ertl, Peter; Kruse, Annika; Tilp, Markus
2016-10-01
The aim of the current paper was to systematically review the relevant existing electromyographic threshold concepts within the literature. The electronic databases MEDLINE and SCOPUS were screened for papers published between January 1980 and April 2015 including the keywords: neuromuscular fatigue threshold, anaerobic threshold, electromyographic threshold, muscular fatigue, aerobic-anaerobic transition, ventilatory threshold, exercise testing, and cycle-ergometer. 32 articles were assessed with regard to their electromyographic methodologies, description of results, statistical analysis and test protocols. Only one article was of very good quality, 21 were of good quality, and two articles were of very low quality. The review process revealed that: (i) there is consistent evidence of one or two non-linear increases of EMG that might reflect the additional recruitment of motor units (MU) or different fiber types during fatiguing cycle ergometer exercise, (ii) most studies reported no statistically significant difference between electromyographic and metabolic thresholds, (iii) one-minute protocols with increments between 10 and 25 W appear most appropriate to detect muscular thresholds, (iv) threshold detection from the vastus medialis, vastus lateralis, and rectus femoris is recommended, and (v) there is a great variety in study protocols, measurement techniques, and data processing. Therefore, we recommend further research and standardization in the detection of EMGTs. Copyright © 2016 Elsevier Ltd. All rights reserved.
French, Deborah; Smith, Andrew; Powers, Martin P; Wu, Alan H B
2011-08-17
Binding of a ligand to the epidermal growth factor receptor (EGFR) stimulates various intracellular signaling pathways resulting in cell cycle progression, proliferation, angiogenesis and apoptosis inhibition. KRAS is involved in signaling pathways including RAF/MAPK and PI3K and mutations in this gene result in constitutive activation of these pathways, independent of EGFR activation. Seven mutations in codons 12 and 13 of KRAS comprise around 95% of the observed human mutations, rendering monoclonal antibodies against EGFR (e.g. cetuximab and panitumumab) useless in treatment of colorectal cancer. KRAS mutation testing by two different methodologies was compared; Sanger sequencing and AutoGenomics INFINITI® assay, on DNA extracted from colorectal cancers. Out of 29 colorectal tumor samples tested, 28 were concordant between the two methodologies for the KRAS mutations that were detected in both assays with the INFINITI® assay detecting a mutation in one sample that was indeterminate by Sanger sequencing and a third methodology; single nucleotide primer extension. This study indicates the utility of the AutoGenomics INFINITI® methodology in a clinical laboratory setting where technical expertise or access to equipment for DNA sequencing does not exist. Copyright © 2011 Elsevier B.V. All rights reserved.
Intelligent systems/software engineering methodology - A process to manage cost and risk
NASA Technical Reports Server (NTRS)
Friedlander, Carl; Lehrer, Nancy
1991-01-01
A systems development methodology is discussed that has been successfully applied to the construction of a number of intelligent systems. This methodology is a refinement of both evolutionary and spiral development methodologies. It is appropriate for development of intelligent systems. The application of advanced engineering methodology to the development of software products and intelligent systems is an important step toward supporting the transition of AI technology into aerospace applications. A description of the methodology and the process model from which it derives is given. Associated documents and tools are described which are used to manage the development process and record and report the emerging design.
Temporal Control over Transient Chemical Systems using Structurally Diverse Chemical Fuels.
Chen, Jack L-Y; Maiti, Subhabrata; Fortunati, Ilaria; Ferrante, Camilla; Prins, Leonard J
2017-08-25
The next generation of adaptive, intelligent chemical systems will rely on a continuous supply of energy to maintain the functional state. Such systems will require chemical methodology that provides precise control over the energy dissipation process, and thus, the lifetime of the transiently activated function. This manuscript reports on the use of structurally diverse chemical fuels to control the lifetime of two different systems under dissipative conditions: transient signal generation and the transient formation of self-assembled aggregates. The energy stored in the fuels is dissipated at different rates by an enzyme, which installs a dependence of the lifetime of the active system on the chemical structure of the fuel. In the case of transient signal generation, it is shown that different chemical fuels can be used to generate a vast range of signal profiles, allowing temporal control over two orders of magnitude. Regarding self-assembly under dissipative conditions, the ability to control the lifetime using different fuels turns out to be particularly important as stable aggregates are formed only at well-defined surfactant/fuel ratios, meaning that temporal control cannot be achieved by simply changing the fuel concentration. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
A signal processing based analysis and prediction of seizure onset in patients with epilepsy
Namazi, Hamidreza; Kulish, Vladimir V.
2016-01-01
One of the main areas of behavioural neuroscience is forecasting human behaviour. Epilepsy is a central nervous system disorder in which nerve cell activity in the brain becomes disrupted, causing seizures or periods of unusual behaviour, sensations and sometimes loss of consciousness. An estimated 5% of the world population suffers from epileptic seizures, but there is no method to cure them. More than 30% of people with epilepsy cannot control their seizures. Epileptic seizure prediction, which refers to forecasting the occurrence of epileptic seizures, is one of the most important but challenging problems in the biomedical sciences across the world. In this research we propose a new methodology based on studying the EEG signals using two measures, the Hurst exponent and the fractal dimension. In order to validate the proposed method, it is applied to the epileptic EEG signals of patients by computing the Hurst exponent and fractal dimension, and then the results are validated against the reference data. The results of these analyses show that we are able to forecast the onset of a seizure on average 25.76 seconds before the time of occurrence. PMID:26586477
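A minimal rescaled-range (R/S) estimator illustrates how a Hurst exponent can be computed from a signal. This is a generic textbook sketch rather than the authors' exact pipeline, and the chunk sizes and the two synthetic test series (white noise versus a random walk) are assumptions for demonstration.

```python
import numpy as np

def hurst_rs(x, min_chunk=16):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_vals = [], []
    size = n
    while size >= min_chunk:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # mean-adjusted cumulative sums
            r = dev.max() - dev.min()               # range of the cumulative walk
            s = chunk.std()
            if s > 0:
                rs.append(r / s)
        sizes.append(size)
        rs_vals.append(np.mean(rs))
        size //= 2
    # Hurst exponent = slope of log(R/S) vs log(chunk size)
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

rng = np.random.default_rng(3)
h_noise = hurst_rs(rng.normal(size=4096))              # uncorrelated: H near 0.5
h_trend = hurst_rs(np.cumsum(rng.normal(size=4096)))   # persistent: H near 1
```

The contrast between the two estimates is what makes the Hurst exponent useful as a pre-seizure feature: a shift in persistence of the EEG shows up as a shift in H.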
Cyran, Krzysztof A.
2018-01-01
This work considers the problem of utilizing electroencephalographic signals in systems designed for monitoring and enhancing the performance of aircraft pilots. Systems with such capabilities are generally referred to as cognitive cockpits. This article provides a description of the potential carried by such systems, especially in terms of increasing flight safety. Additionally, a neuropsychological background of the problem is presented. The conducted research was focused mainly on the problem of discrimination between states of brain activity related to idle but focused anticipation of a visual cue and the reaction to it. In particular, the problem of selecting a proper classification algorithm for such tasks is examined. For that purpose an experiment involving 10 subjects was planned and conducted. Experimental electroencephalographic data were acquired using an Emotiv EPOC+ headset. The proposed methodology involved the use of a popular method in biomedical signal processing, the Common Spatial Pattern (CSP), extraction of bandpower features, and an extensive test of different classification algorithms, such as Linear Discriminant Analysis, k-nearest neighbors, Support Vector Machines with linear and radial basis function kernels, Random Forests, and Artificial Neural Networks. PMID:29849544
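The CSP stage of such a pipeline can be sketched with a compact whitening-plus-eigendecomposition implementation on synthetic two-class trials. The channel count, trial dimensions, and variance structure below are illustrative assumptions, not the experiment's EEG data.

```python
import numpy as np

rng = np.random.default_rng(4)

def csp_filters(trials_a, trials_b):
    """Common Spatial Patterns: spatial filters maximizing the class variance ratio.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    """
    ca = np.mean([np.cov(t) for t in trials_a], axis=0)
    cb = np.mean([np.cov(t) for t in trials_b], axis=0)
    # Whiten the composite covariance ...
    evals, evecs = np.linalg.eigh(ca + cb)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T
    # ... then diagonalize class A in the whitened space
    lam, B = np.linalg.eigh(P @ ca @ P.T)
    return (P @ B).T[::-1]          # rows sorted: first maximizes class-A variance

# Synthetic trials: class A strong on channel 0, class B strong on channel 1
trials_a = rng.normal(size=(30, 3, 200)) * np.array([3.0, 1.0, 1.0])[:, None]
trials_b = rng.normal(size=(30, 3, 200)) * np.array([1.0, 3.0, 1.0])[:, None]

W = csp_filters(trials_a, trials_b)
# Log-variance (bandpower-style) feature from the first CSP filter
feat_a = np.array([np.log(np.var(W[0] @ t)) for t in trials_a])
feat_b = np.array([np.log(np.var(W[0] @ t)) for t in trials_b])
gap = feat_a.mean() - feat_b.mean()
```

The log-variance features extracted this way are what would be fed to LDA, k-NN, SVM, or the other classifiers compared in the study.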
NASA Astrophysics Data System (ADS)
Nazrul Islam, Mohammed; Karim, Mohammad A.; Vijayan Asari, K.
2013-09-01
Protecting and processing confidential information, such as personal identification and biometrics, remains a challenging task for research and development. A new methodology to ensure enhanced security of information in images through the use of encryption and multiplexing is proposed in this paper. We use an orthogonal encoding scheme to encode multiple pieces of information independently and then combine them to save storage space and transmission bandwidth. The encoded and multiplexed image is encrypted employing multiple reference-based joint transform correlation. The encryption key is fed into four channels which are phase-shifted relative to one another by different amounts. The input image is introduced to all the channels and then Fourier transformed to obtain joint power spectra (JPS) signals. The resultant JPS signals are again phase-shifted and then combined to form a modified JPS signal, which yields the encrypted image after an inverse Fourier transformation. The proposed cryptographic system makes the confidential information inaccessible to any unauthorized intruder, while allowing retrieval of the information by the respective authorized recipient without any distortion. The proposed technique is investigated through computer simulations under different practical conditions in order to verify its overall robustness.
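The four-channel, multiple-reference JTC architecture is specific to this paper, but the underlying asymmetry it relies on — a Fourier-domain phase key that renders the image unreadable yet is exactly invertible by the authorized holder of the key — can be sketched generically. Image size and key below are hypothetical, and this single-channel sketch is not the authors' scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((32, 32))                               # stand-in for the secret image
key_phase = np.exp(2j * np.pi * rng.random((32, 32)))    # unit-modulus random-phase key

# Encrypt: apply the phase key in the Fourier domain
enc = np.fft.ifft2(np.fft.fft2(img) * key_phase)

# Decrypt: the authorized recipient cancels the key with its complex conjugate;
# since |key_phase| == 1, the spectrum is restored exactly
dec = np.fft.ifft2(np.fft.fft2(enc) * np.conj(key_phase)).real
```

Without `key_phase`, `enc` is statistically noise-like; with it, `dec` reproduces `img` to numerical precision, which mirrors the "distortion-free retrieval for the authorized recipient" property claimed above.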
Condensing Raman spectrum for single-cell phenotype analysis.
Sun, Shiwei; Wang, Xuetao; Gao, Xin; Ren, Lihui; Su, Xiaoquan; Bu, Dongbo; Ning, Kang
2015-01-01
In recent years, the high-throughput and non-invasive Raman spectrometry technique has matured as an effective approach to identification of individual cells by species, even in complex, mixed populations. Raman profiling is an appealing optical microscopic method to achieve this. To fully utilize Raman profiling for single-cell analysis, an extensive understanding of Raman spectra is necessary to answer questions such as which filtering methodologies are effective for pre-processing of Raman spectra, which strains can be distinguished by Raman spectra, and which features serve best as Raman-based biomarkers for single cells. In this work, we propose an approach called rDisc to discretize the original Raman spectrum into only a few (usually fewer than 20) representative peaks (Raman shifts). The approach has advantages in removing noise and condensing the original spectrum. In particular, effective signal processing procedures were designed to eliminate noise, utilising wavelet transform denoising, baseline correction, and signal normalization. In the discretizing process, representative peaks were selected to significantly decrease the Raman data size. More importantly, the selected peaks are suitable to serve as key biological markers to differentiate species and other cellular features. Additionally, the classification performance of discretized spectra was found to be comparable to that of full spectra having more than 1000 Raman shifts. Overall, the discretized spectrum needs about 5% of the storage space of a full spectrum and the processing speed is considerably faster. This makes rDisc clearly superior to other methods for single-cell classification.
Senesi, Pamela; Luzi, Livio; Montesano, Anna; Mazzocchi, Nausicaa; Terruzzi, Ileana
2013-07-19
Betaine (BET) is a component of many foods, including spinach and wheat. It is an essential osmolyte and a source of methyl groups. Recent studies have hypothesized that BET might play a role in athletic performance. However, BET effects on skeletal muscle differentiation and hypertrophy are still poorly understood. We examined BET action on neo myotubes maturation and on differentiation process, using C2C12 murine myoblastic cells. We used RT2-PCR array, Western blot and immunofluorescence analysis to study the BET effects on morphological features of C2C12 and on signaling pathways involved in muscle differentiation and hypertrophy. We performed a dose-response study, establishing that 10 mM BET was the dose able to stimulate morphological changes and hypertrophic process in neo myotubes. RT2-PCR array methodology was used to identify the expression profile of genes encoding proteins involved in IGF-1 pathway. A dose of 10 mM BET was found to promote IGF-1 receptor (IGF-1 R) expression. Western blot and immunofluorescence analysis, performed in neo myotubes, pointed out that 10 mM BET improved IGF-1 signaling, synthesis of Myosin Heavy Chain (MyHC) and neo myotubes length. Our findings provide the first evidence that BET could promote muscle fibers differentiation and increase myotubes size by IGF-1 pathway activation, suggesting that BET might represent a possible new drug/integrator strategy, not only in sport performance but also in clinical conditions characterized by muscle function impairment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tucker, Brian J.; Diaz, Aaron A.; Eckenrode, Brian A.
2006-03-16
The Hazardous Materials Response Unit (HMRU) and the Counterterrorism and Forensic Science Research Unit (CTFSRU), Laboratory Division, Federal Bureau of Investigation (FBI) have been mandated to develop and establish a wide range of unprecedented capabilities for providing scientific and technical forensic services to investigations involving hazardous chemical, biological, and radiological materials, including extremely dangerous chemical and biological warfare agents. Pacific Northwest National Laboratory (PNNL) has developed a portable, hand-held, hazardous materials acoustic inspection device (HAZAID) that provides noninvasive container interrogation and material identification capabilities using nondestructive ultrasonic velocity and attenuation measurements. Due to the wide variety of fluids as well as container sizes and materials, the need for high measurement sensitivity and advanced ultrasonic measurement techniques was identified. The HAZAID prototype was developed using a versatile electronics platform, advanced ultrasonic wave propagation methods, and advanced signal processing techniques. This paper primarily focuses on the ultrasonic measurement methods and signal processing techniques incorporated into the HAZAID prototype. High-bandwidth ultrasonic transducers combined with the advanced pulse compression technique allowed researchers to 1) impart large amounts of energy, 2) obtain high signal-to-noise ratios, and 3) obtain accurate and consistent time-of-flight (TOF) measurements through a variety of highly attenuative containers and fluid media. Results of this feasibility study demonstrated that the HAZAID experimental measurement technique also provided information regarding container properties, which will be utilized in future container-independent measurements of hidden liquids.
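Pulse compression for TOF estimation amounts to matched filtering of the received echo against the transmitted chirp; the compressed correlation peak localizes the arrival even at poor SNR. A minimal sketch follows, in which the sample rate, chirp band, delay, and noise level are all illustrative, not the HAZAID's actual parameters:

```python
import numpy as np

fs = 1.0e6                               # illustrative sample rate (Hz)
t = np.arange(0, 1e-3, 1 / fs)           # 1 ms excitation window
f0, f1 = 50e3, 150e3
# Linear chirp: a pulse-compression-friendly transmitted waveform
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / t[-1] * t ** 2))

true_delay = 217                         # echo arrival, in samples
echo = np.zeros(4096)
echo[true_delay:true_delay + chirp.size] = 0.2 * chirp          # weak echo
echo += 0.5 * np.random.default_rng(0).standard_normal(echo.size)  # strong noise

# Matched filter: correlating against the reference compresses the long chirp
# into a sharp peak at the time-of-flight lag
mf = np.correlate(echo, chirp, mode="valid")
tof_samples = int(np.argmax(np.abs(mf)))
tof_seconds = tof_samples / fs
```

The same correlation peak amplitude, tracked across container walls and fluid paths, is what velocity/attenuation measurements of this kind are built on.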
Web-based health services and clinical decision support.
Jegelevicius, Darius; Marozas, Vaidotas; Lukosevicius, Arunas; Patasius, Martynas
2004-01-01
The purpose of this study was the development of a Web-based e-health service for comprehensive assistance and clinical decision support. The service structure consists of a Web server, a PHP-based Web interface linked to a clinical SQL database, Java applets for interactive manipulation and visualization of signals, and a Matlab server linked with signal and data processing algorithms implemented by Matlab programs. The service ensures diagnostic signal- and image-analysis-based clinical decision support. By using the discussed methodology, a pilot service for pathology specialists for automatic calculation of the proliferation index has been developed. Physicians use a simple Web interface for uploading the pictures under investigation to the server; subsequently a Java applet interface is used for outlining the region of interest and, after processing on the server, the requested proliferation index value is calculated. There is also an "expert corner", where experts can submit their index estimates and comments on particular images, which is especially important for system developers. These expert evaluations are used for optimization and verification of the automatic analysis algorithms. Decision support trials have been conducted for ECG and for ophthalmology ultrasonic investigations of intraocular tumor differentiation. Data mining algorithms have been applied and decision support trees constructed. These services are also being implemented as a Web-based system. The study has shown that the Web-based structure ensures more effective, flexible and accessible services compared with standalone programs and is very convenient for biomedical engineers and physicians, especially in the development phase.
Wu, Changgong; Parrott, Andrew M.; Fu, Cexiong; Liu, Tong; Marino, Stefano M.; Gladyshev, Vadim N.; Jain, Mohit R.; Baykal, Ahmet T.; Li, Qing; Oka, Shinichi; Sadoshima, Junichi; Beuve, Annie; Simmons, William J.
2011-01-01
Abstract Despite the significance of redox post-translational modifications (PTMs) in regulating diverse signal transduction pathways, the enzymatic systems that catalyze reversible and specific oxidative or reductive modifications have yet to be firmly established. Thioredoxin 1 (Trx1) is a conserved antioxidant protein that is well known for its disulfide reductase activity. Interestingly, Trx1 is also able to transnitrosylate or denitrosylate (defined as processes to transfer or remove a nitric oxide entity to/from substrates) specific proteins. An intricate redox regulatory mechanism has recently been uncovered that accounts for the ability of Trx1 to catalyze these different redox PTMs. In this review, we will summarize the available evidence in support of Trx1 as a specific disulfide reductase, and denitrosylation and transnitrosylation agent, as well as the biological significance of the diverse array of Trx1-regulated pathways and processes under different physiological contexts. The dramatic progress in redox proteomics techniques has enabled the identification of an increasing number of proteins, including peroxiredoxin 1, whose disulfide bond formation and nitrosylation status are regulated by Trx1. This review will also summarize the advancements of redox proteomics techniques for the identification of the protein targets of Trx1-mediated PTMs. Collectively, these studies have shed light on the mechanisms that regulate Trx1-mediated reduction, transnitrosylation, and denitrosylation of specific target proteins, solidifying the role of Trx1 as a master regulator of redox signal transduction. Antioxid. Redox Signal. 15, 2565–2604. PMID:21453190
GPS Technologies as a Tool to Detect the Pre-Earthquake Signals Associated with Strong Earthquakes
NASA Astrophysics Data System (ADS)
Pulinets, S. A.; Krankowski, A.; Hernandez-Pajares, M.; Liu, J. Y. G.; Hattori, K.; Davidenko, D.; Ouzounov, D.
2015-12-01
The existence of ionospheric anomalies before earthquakes is now widely accepted. These phenomena have started to be considered by the GPS community as a way to mitigate GPS signal degradation over the territories of earthquake preparation. The question is still open whether they could be useful for seismology and for short-term earthquake forecasting. More than a decade of intensive studies has proved that ionospheric anomalies registered before earthquakes are initiated by processes in the boundary layer of the atmosphere over the earthquake preparation zone and are induced in the ionosphere by electromagnetic coupling through the Global Electric Circuit. A multiparameter approach based on the Lithosphere-Atmosphere-Ionosphere Coupling model demonstrated that earthquake forecasting is possible only if we consider the final stage of earthquake preparation in a multidimensional space where every dimension is one of many precursors in an ensemble, and they are synergistically connected. We demonstrate approaches developed in different countries (Russia, Taiwan, Japan, Spain, and Poland, within the framework of the ISSI and ESA projects) to identify the ionospheric precursors. They are also useful to determine all three parameters necessary for an earthquake forecast: impending earthquake epicenter position, expectation time and magnitude. These parameters are calculated using different technologies of GPS signal processing: time series, correlation, spectral analysis, ionospheric tomography, wave propagation, etc. Results obtained from different teams demonstrate a high level of statistical significance and physical justification, which gives us reason to suggest these methodologies for practical validation.
Exercise redox biochemistry: Conceptual, methodological and technical recommendations.
Cobley, James N; Close, Graeme L; Bailey, Damian M; Davison, Gareth W
2017-08-01
Exercise redox biochemistry is of considerable interest owing to its translational value in health and disease. However, unaddressed conceptual, methodological and technical issues complicate attempts to unravel how exercise alters redox homeostasis in health and disease. Conceptual issues relate to misunderstandings that arise when the chemical heterogeneity of redox biology is disregarded, which often complicates attempts to use redox-active compounds and assess redox signalling. Further, it is seldom considered that oxidised macromolecule adduct levels reflect both formation and repair. Methodological and technical issues relate to the use of outdated assays and/or inappropriate sample preparation techniques that confound biochemical redox analysis. After considering each of the aforementioned issues, we outline how each issue can be resolved and provide a unifying set of recommendations. We specifically recommend that investigators: consider chemical heterogeneity, use redox-active compounds judiciously, abandon flawed assays, carefully prepare samples and assay buffers, consider repair/metabolism, and use multiple biomarkers to assess oxidative damage and redox signalling. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Quantum Dots Applied to Methodology on Detection of Pesticide and Veterinary Drug Residues.
Zhou, Jia-Wei; Zou, Xue-Mei; Song, Shang-Hong; Chen, Guan-Hua
2018-02-14
The pesticide and veterinary drug residues brought by large-scale agricultural production have become one of the major issues in the fields of food safety and environmental ecological security. It is necessary to develop rapid, sensitive, qualitative and quantitative methodology for the detection of pesticide and veterinary drug residues. As one of the achievements of nanoscience, quantum dots (QDs) have been widely used in the detection of pesticide and veterinary drug residues. In these methodological studies, the QD signal modes used include fluorescence, chemiluminescence, electrochemical luminescence, photoelectrochemistry, etc. QDs can also be assembled into sensors with different materials, such as QD-enzyme, QD-antibody, QD-aptamer, and QD-molecularly imprinted polymer sensors. Many achievements in the detection of pesticide and veterinary drug residues have been obtained from the different combinations of these signals and sensors. They are summarized in this paper to provide a reference for QD application in the detection of pesticide and veterinary drug residues.
NASA Astrophysics Data System (ADS)
Maksimenko, Vladimir A.; Lüttjohann, Annika; Makarov, Vladimir V.; Goremyko, Mikhail V.; Koronovskii, Alexey A.; Nedaivozov, Vladimir; Runnova, Anastasia E.; van Luijtelaar, Gilles; Hramov, Alexander E.; Boccaletti, Stefano
2017-07-01
We introduce a practical and computationally not demanding technique for inferring interactions at various microscopic levels between the units of a network from the measurements and the processing of macroscopic signals. Starting from a network model of Kuramoto phase oscillators, which evolve adaptively according to homophilic and homeostatic adaptive principles, we give evidence that the increase of synchronization within groups of nodes (and the corresponding formation of synchronous clusters) causes also the defragmentation of the wavelet energy spectrum of the macroscopic signal. Our methodology is then applied to getting a glance into the microscopic interactions occurring in a neurophysiological system, namely, in the thalamocortical neural network of an epileptic brain of a rat, where the group electrical activity is registered by means of multichannel EEG. We demonstrate that it is possible to infer the degree of interaction between the interconnected regions of the brain during different types of brain activities and to estimate the regions' participation in the generation of the different levels of consciousness.
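The paper's starting point — globally coupled Kuramoto phase oscillators whose macroscopic coherence grows with coupling strength — can be reproduced in miniature. The oscillator count, frequency spread, coupling values, and integration scheme below are illustrative choices, not the authors' model parameters:

```python
import numpy as np

def kuramoto(coupling, n=50, dt=0.01, steps=2000, seed=0):
    """Euler integration of globally coupled Kuramoto phase oscillators.
    Returns the order parameter r (1 = full sync, near 0 = incoherence)."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, n)            # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, n)     # initial phases
    for _ in range(steps):
        # Mean-field coupling term for each oscillator i: <sin(theta_j - theta_i)>_j
        mean_field = np.mean(np.sin(theta[None, :] - theta[:, None]), axis=1)
        theta += dt * (omega + coupling * mean_field)
    return abs(np.mean(np.exp(1j * theta)))    # Kuramoto order parameter

r_weak = kuramoto(coupling=0.1)    # below the synchronization threshold
r_strong = kuramoto(coupling=2.0)  # well above it: clusters lock together
```

In the paper's setting, the macroscopic observable would be a summed signal of such units (e.g. an EEG channel), and the growth of synchrony is read off its wavelet energy spectrum rather than from the phases directly.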
Dipalo, Michele; Amin, Hayder; Lovato, Laura; Moia, Fabio; Caprettini, Valeria; Messina, Gabriele C; Tantussi, Francesco; Berdondini, Luca; De Angelis, Francesco
2017-06-14
Three-dimensional vertical micro- and nanostructures can enhance the signal quality of multielectrode arrays and promise to become the prime methodology for the investigation of large networks of electrogenic cells. So far, access to the intracellular environment has been obtained via spontaneous poration, electroporation, or by surface functionalization of the micro/nanostructures; however, these methods still suffer from some limitations due to their intrinsic characteristics that limit their widespread use. Here, we demonstrate the ability to continuously record both extracellular and intracellular-like action potentials at each electrode site in spontaneously active mammalian neurons and HL-1 cardiac-derived cells via the combination of vertical nanoelectrodes with plasmonic optoporation. We demonstrate long-term and stable recordings with a very good signal-to-noise ratio. Additionally, plasmonic optoporation does not perturb the spontaneous electrical activity; it permits continuous recording even during the poration process and can regulate extracellular and intracellular contributions by means of partial cellular poration.
Paul, Sabyasachi; Sarkar, P K
2013-04-01
Use of wavelet transformation in stationary signal processing has been demonstrated for denoising the measured spectra and characterisation of radionuclides in the in vivo monitoring analysis, where difficulties arise due to very low activity level to be estimated in biological systems. The large statistical fluctuations often make the identification of characteristic gammas from radionuclides highly uncertain, particularly when interferences from progenies are also present. A new wavelet-based noise filtering methodology has been developed for better detection of gamma peaks in noisy data. This sequential, iterative filtering method uses the wavelet multi-resolution approach for noise rejection and an inverse transform after soft 'thresholding' over the generated coefficients. Analyses of in vivo monitoring data of (235)U and (238)U were carried out using this method without disturbing the peak position and amplitude while achieving a 3-fold improvement in the signal-to-noise ratio, compared with the original measured spectrum. When compared with other data-filtering techniques, the wavelet-based method shows the best results.
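The paper's sequential multi-resolution pipeline is more elaborate, but its core operation — transform, soft-threshold the detail coefficients, invert — can be sketched with a single-level Haar transform. The spectrum, noise level, and threshold rule below are illustrative, not the authors' settings:

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet shrinkage: soft-threshold the detail band."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)               # approximation (low-pass) band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)               # detail (high-pass) band
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft thresholding
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)                     # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(0)
chan = np.arange(512, dtype=float)
clean = np.exp(-0.5 * ((chan - 200) / 4.0) ** 2)       # a single "gamma peak"
noisy = clean + 0.1 * rng.standard_normal(chan.size)   # statistical fluctuations
sigma = 0.1
# Universal threshold: sigma * sqrt(2 ln N)
denoised = haar_denoise(noisy, thresh=sigma * np.sqrt(2 * np.log(chan.size)))
```

As in the abstract, the figure of merit is that noise is suppressed while the peak position and amplitude are essentially preserved; a practical implementation would iterate this over several decomposition levels.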
Chemically-inducible diffusion trap at cilia (C-IDTc) reveals molecular sieve-like barrier
Lin, Yu-Chun; Phua, Siew Cheng; Jiao, John; Levchenko, Andre; Inoue, Takafumi; Rohatgi, Rajat; Inoue, Takanari
2013-01-01
Primary cilia function as specialized compartments for signal transduction. The stereotyped structure and signaling function of cilia inextricably depend on the selective segregation of molecules in cilia. However, the fundamental principles governing the access of soluble proteins to primary cilia remain unresolved. We developed a methodology termed Chemically-Inducible Diffusion Trap at Cilia (C-IDTc) to visualize the diffusion process of a series of fluorescent proteins ranging in size from 3.2 to 7.9 nm into primary cilia. We found that the interior of the cilium was accessible to proteins as large as 7.9 nm. The kinetics of ciliary accumulation of this panel of proteins was exponentially limited by their Stokes radii. Quantitative modeling suggests that the diffusion barrier operates as a molecular sieve at the base of cilia. Our study presents a set of powerful, generally applicable tools for the quantitative monitoring of ciliary protein diffusion under both physiological and pathological conditions. PMID:23666116
NASA Technical Reports Server (NTRS)
Nuckolls, C.; Frank, Mark
1990-01-01
The overall goal of this study was to develop new concepts and technology for the Comet Rendezvous Asteroid Flyby (CRAF), Cassini, and other future deep space missions which maximally conform to the Functional Specification for the NASA X-Band Transponder (NXT), FM513778 (preliminary, revised July 26, 1988). The study is composed of two tasks. The first task was to investigate a new digital signal processing technique which involves the processing of 1-bit samples and has the potential for significant size, mass, power, and electrical performance improvements over conventional analog approaches. The entire X-band receiver tracking loop was simulated on a digital computer using a high-level programming language. Simulations on this 'software breadboard' showed the technique to be well-behaved and a good approximation to its analog predecessor from threshold to strong signal levels in terms of tracking-loop performance, command signal-to-noise ratio and ranging signal-to-noise ratio. The successful completion of this task paves the way for building a hardware breadboard, the recommended next step in confirming that this approach is ready for incorporation into flight hardware. The second task in this study was to investigate another technique which provides considerable simplification in the synthesis of the receiver first LO over conventional phase-locked multiplier schemes and which, in this approach, provides down-conversion for an S-band emergency receive mode without the need for an additional LO. The objective of this task was to develop methodology and models to predict the conversion loss, input RF bandwidth, and output RF bandwidth of a series GaAs FET sampling mixer, and to breadboard and test a circuit design suitable for the X- and S-band down-conversion applications.
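The full tracking-loop simulation is proprietary to the study, but the property that makes 1-bit processing viable — hard limiting distorts amplitude yet preserves carrier phase — is easy to illustrate. The carrier frequency, true phase, and noise level below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
f = 0.05                                   # carrier frequency, cycles per sample
phase_true = 0.7                           # radians
t = np.arange(n)
carrier = np.sin(2 * np.pi * f * t + phase_true) + 0.5 * rng.standard_normal(n)

bits = np.sign(carrier)                    # 1-bit quantization (hard limiter)

# Correlate the bit stream against quadrature references: limiting scales the
# I and Q correlations equally, so their ratio -- the phase -- survives
i_corr = np.mean(bits * np.sin(2 * np.pi * f * t))
q_corr = np.mean(bits * np.cos(2 * np.pi * f * t))
phase_est = np.arctan2(q_corr, i_corr)
```

A digital tracking loop built on such samples can therefore lock to the carrier phase much as its analog predecessor does, at a fraction of the hardware cost.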
Hole-in-One Mutant Phenotypes Link EGFR/ERK Signaling to Epithelial Tissue Repair in Drosophila
Campos, Isabel; Santos, Ana Catarina; Jacinto, Antonio
2011-01-01
Background Epithelia act as physical barriers protecting living organisms and their organs from the surrounding environment. Simple epithelial tissues have the capacity to efficiently repair wounds through a resealing mechanism. The known molecular mechanisms underlying this process appear to be conserved in both vertebrates and invertebrates, namely the involvement of the transcription factors Grainy head (Grh) and Fos. In Drosophila, Grh and Fos lead to the activation of wound response genes required for epithelial repair. ERK is upstream of this pathway and known to be one of the first kinases to be activated upon wounding. However, it is still unclear how ERK activation contributes to a proper wound response and which molecular mechanisms regulate its activation. Methodology/Principal Findings In a previous screen, we isolated mutants with defects in wound healing. Here, we describe the role of one of these genes, hole-in-one (holn1), in the wound healing process. Holn1 is a GYF domain containing protein that we found to be required for the activation of several Grh and Fos regulated wound response genes at the wound site. We also provide evidence suggesting that Holn1 may be involved in the Ras/ERK signaling pathway, by acting downstream of ERK. Finally, we show that wound healing requires the function of EGFR and ERK signaling. Conclusions/Significance Based on these data, we conclude that holn1 is a novel gene required for a proper wound healing response. We further propose and discuss a model whereby Holn1 acts downstream of EGFR and ERK signaling in the Grh/Fos mediated wound closure pathway. PMID:22140578
On The Evidence For Large-Scale Galactic Conformity In The Local Universe
NASA Astrophysics Data System (ADS)
Sin, Larry P. T.; Lilly, Simon J.; Henriques, Bruno M. B.
2017-10-01
We re-examine the observational evidence for large-scale (4 Mpc) galactic conformity in the local Universe, as presented in Kauffmann et al. We show that a number of methodological features of their analysis act to produce a misleadingly high amplitude of the conformity signal. These include a weighting in favour of central galaxies in very high density regions, the likely misclassification of satellite galaxies as centrals in the same high-density regions and the use of medians to characterize bimodal distributions. We show that the large-scale conformity signal in Kauffmann et al. clearly originates from a very small number of central galaxies in the vicinity of just a few very massive clusters, whose effect is strongly amplified by the methodological issues that we have identified. Some of these 'centrals' are likely misclassified satellites, but some may be genuine centrals showing a real conformity effect. Regardless, this analysis suggests that conformity on 4 Mpc scales is best viewed as a relatively short-range effect (at the virial radius) associated with these very large neighbouring haloes, rather than a very long-range effect (at tens of virial radii) associated with the relatively low-mass haloes that host the nominal central galaxies in the analysis. A mock catalogue constructed from a recent semi-analytic model shows very similar conformity effects to the data when analysed in the same way, suggesting that there is no need to introduce new physical processes to explain galactic conformity on 4 Mpc scales.
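One of the methodological points above — that a median is a misleading summary of a bimodal distribution — is easy to demonstrate numerically. The "colour" populations below are entirely hypothetical and serve only to illustrate the statistical effect:

```python
import numpy as np

rng = np.random.default_rng(0)
# Bimodal "colour" distribution: two well-separated modes (e.g. blue vs red)
pop = np.r_[rng.normal(0.3, 0.05, 400), rng.normal(0.9, 0.05, 600)]
med = np.median(pop)

# Shift only the mixture weights -- neither mode moves --
# and the median jumps from one mode to the other
pop2 = np.r_[rng.normal(0.3, 0.05, 600), rng.normal(0.9, 0.05, 400)]
med2 = np.median(pop2)

# A fraction-above-threshold statistic tracks the mixture smoothly instead
red_frac = np.mean(pop > 0.6)
```

A small change in the red/blue mix thus produces a large, discontinuous change in the median, which can masquerade as a strong conformity signal where a fraction-based statistic would show a modest one.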
NASA Astrophysics Data System (ADS)
Holford, Karen M.; Eaton, Mark J.; Hensman, James J.; Pullin, Rhys; Evans, Sam L.; Dervilis, Nikolaos; Worden, Keith
2017-04-01
The acoustic emission (AE) phenomenon has many attributes that make it desirable as a structural health monitoring or non-destructive testing technique, including the capability to continuously and globally monitor large structures using a sparse sensor array and with no dependency on defect size. However, AE monitoring is yet to fulfil its true potential, due mainly to limitations in location accuracy and signal characterisation that often arise in complex structures with high levels of background noise. Furthermore, the technique has been criticised for a lack of quantitative results and the large amount of operator interpretation required during data analysis. This paper begins by introducing the challenges faced in developing an AE-based structural health monitoring system and then reviews previous progress made in addressing these challenges. Subsequently, an overview of a novel methodology for automatic detection of fatigue fractures in complex geometries and noisy environments is presented, which combines a number of signal processing techniques to address the current limitations of AE monitoring. The technique was developed for monitoring metallic landing gear components during pre-flight certification testing, and results are presented from a full-scale steel landing gear component undergoing fatigue loading. Fracture onset was successfully identified automatically at 49,000 fatigue cycles prior to final failure (validated by the use of dye penetrant inspection) and the fracture position was located to within 10 mm of the actual location.
Wavelet Packet Analysis for Angular Data Extraction from Muscle Afferent Cuff Electrode Signals
2001-10-25
from rabbits. In order to estimate ankle flexion/extension angles, we recorded ENG signals from the left Tibial and Peroneal nerves, both during FES... afferent ENG. II. METHODOLOGY A. Experimental Setup Acute experiments were conducted with 2 female New Zealand rabbits. The rabbits were pre-anesthetized... fixating the knee and ankle joints in place (see [3] for more details). For extracting the ENG signals, tripolar cuff electrodes were implanted onto the
Improved analytical methods for microarray-based genome-composition analysis
Kim, Charles C; Joyce, Elizabeth A; Chan, Kaman; Falkow, Stanley
2002-01-01
Background Whereas genome sequencing has given us high-resolution pictures of many different species of bacteria, microarrays provide a means of obtaining information on genome composition for many strains of a given species. Genome-composition analysis using microarrays, or 'genomotyping', can be used to categorize genes into 'present' and 'divergent' categories based on the level of hybridization signal. This typically involves selecting a signal value that is used as a cutoff to discriminate present (high signal) and divergent (low signal) genes. Current methodology uses empirical determination of cutoffs for classification into these categories, but this methodology is subject to several problems that can result in the misclassification of many genes. Results We describe a method that depends on the shape of the signal-ratio distribution and does not require empirical determination of a cutoff. Moreover, the cutoff is determined on an array-to-array basis, accounting for variation in strain composition and hybridization quality. The algorithm also provides an estimate of the probability that any given gene is present, which provides a measure of confidence in the categorical assignments. Conclusions Many genes previously classified as present using static methods are in fact divergent on the basis of microarray signal; this is corrected by our algorithm. We have reassigned hundreds of genes from previous genomotyping studies of Helicobacter pylori and Campylobacter jejuni strains, and expect that the algorithm should be widely applicable to genomotyping data. PMID:12429064
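A distribution-shaped, per-array cutoff with per-gene probabilities can be obtained, for example, by fitting a two-component mixture to the log signal ratios; the EM sketch below on synthetic data illustrates the idea and is not the paper's exact algorithm:

```python
import numpy as np

def em_two_gaussians(x, iters=200):
    """1-D two-component Gaussian-mixture EM: 'divergent' vs 'present' genes.
    Returns component means, variances, weights, and per-point responsibilities."""
    x = np.asarray(x, dtype=float)
    mu = np.percentile(x, [25.0, 75.0])        # crude initialization
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior probability of each component for every gene
        pdf = w / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture parameters from the responsibilities
        nk = resp.sum(axis=0)
        w = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return mu, var, w, resp

rng = np.random.default_rng(0)
# Synthetic log hybridization ratios: a 'divergent' mode and a 'present' mode
ratios = np.r_[rng.normal(-2.0, 0.4, 300), rng.normal(0.0, 0.3, 700)]
mu, var, w, resp = em_two_gaussians(ratios)
p_present = resp[:, np.argmax(mu)]   # per-gene probability of being 'present'
```

Because the fit is recomputed per array, the effective cutoff (where `p_present` crosses 0.5) adapts to each strain's composition and hybridization quality, matching the motivation in the abstract.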
A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays
NASA Technical Reports Server (NTRS)
Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.; Brooks, Thomas F.
2011-01-01
Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative Signal to Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time domain Adaptive Noise Cancellation (ANC) to microphone array signals with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional techniques for improving SNR of spectral and Cross-Spectral Matrix subtraction. The method was seen to recover the primary signal level at SNRs as low as -29 dB and to outperform the conventional methods. A second processing approach using the center array microphone as the noise reference was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
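Time-domain ANC of this kind is classically implemented with an LMS adaptive filter: the reference microphone's noise is filtered to match the noise arriving at the array microphone, and the residual is the recovered signal. A minimal sketch follows; the tap count, step size, tone, and noise path are illustrative, not the paper's configuration:

```python
import numpy as np

def lms_anc(primary, reference, n_taps=16, mu=0.005):
    """Time-domain LMS adaptive noise canceller.
    primary:   desired signal + noise at the array microphone
    reference: noise-only measurement, correlated with the noise in primary
    Returns the error signal, i.e. the running estimate of the desired signal."""
    w = np.zeros(n_taps)
    out = np.zeros(primary.size)
    for i in range(n_taps - 1, primary.size):
        x = reference[i - n_taps + 1:i + 1][::-1]   # most recent reference samples
        e = primary[i] - w @ x                      # primary minus estimated noise
        w += 2 * mu * e * x                         # LMS weight update
        out[i] = e
    return out

rng = np.random.default_rng(0)
n = 20_000
signal = np.sin(2 * np.pi * 0.01 * np.arange(n))    # desired tone at the array
noise = rng.standard_normal(n)                       # reference (noise-only) channel
path = np.array([0.6, -0.3, 0.1])                    # illustrative FIR propagation path
primary = signal + np.convolve(noise, path)[:n]      # noise reaches the mic filtered
recovered = lms_anc(primary, noise)
```

After convergence the filter has learned the propagation path, so the residual tracks the tone; this is the time-domain counterpart of the spectral-subtraction baselines the paper compares against.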
Quantifying facial expression signal and intensity use during development.
Rodger, Helen; Lao, Junpeng; Caldara, Roberto
2018-06-12
Behavioral studies investigating facial expression recognition during development have applied various methods to establish by which age emotional expressions can be recognized. Most commonly, these methods employ static images of expressions at their highest intensity (apex) or morphed expressions of different intensities, but they have not previously been compared. Our aim was to (a) quantify the intensity and signal use for recognition of six emotional expressions from early childhood to adulthood and (b) compare both measures and assess their functional relationship to better understand the use of different measures across development. Using a psychophysical approach, we isolated the quantity of signal necessary to recognize an emotional expression at full intensity and the quantity of expression intensity (using neutral expression image morphs of varying intensities) necessary for each observer to recognize the six basic emotions while maintaining performance at 75%. Both measures revealed that fear and happiness were the most difficult and easiest expressions to recognize across age groups, respectively, a pattern already stable during early childhood. The quantity of signal and intensity needed to recognize sad, angry, disgust, and surprise expressions decreased with age. Using a Bayesian update procedure, we then reconstructed the response profiles for both measures. This analysis revealed that intensity and signal processing are similar only during adulthood and, therefore, cannot be straightforwardly compared during development. Altogether, our findings offer novel methodological and theoretical insights and tools for the investigation of the developing affective system. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan
2018-03-01
False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track (IRST) system, despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of the false alarm in an IRST system, in this paper a false alarm aware methodology is presented to reduce the false alarm rate while the detection rate remains undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of the false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen in such a way that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results in terms of signal to clutter ratio and background suppression factor on real and simulated images demonstrate the effectiveness and performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, our proposed methodology is expandable to any pair of detection algorithms which have different false alarm sources.
ERIC Educational Resources Information Center
Lauckner, Heidi; Paterson, Margo; Krupa, Terry
2012-01-01
Often, research projects are presented as final products with the methodologies cleanly outlined and little attention paid to the decision-making processes that led to the chosen approach. Limited attention paid to these decision-making processes perpetuates a sense of mystery about qualitative approaches, particularly for new researchers who will…
Optimal Methods for Classification of Digitally Modulated Signals
2013-03-01
Instead of using a ratio of likelihood functions, the proposed approach uses the Kullback-Leibler (KL) divergence. ... blind demodulation to develop classification algorithms for a wider set of signal types. Two methodologies were used: Likelihood Ratio Test ...
Phasic dopamine signals: from subjective reward value to formal economic utility
Schultz, Wolfram; Carelli, Regina M; Wightman, R Mark
2015-01-01
Although rewards are physical stimuli and objects, their value for survival and reproduction is subjective. The phasic, neurophysiological and voltammetric dopamine reward prediction error response signals subjective reward value. The signal incorporates crucial reward aspects such as amount, probability, type, risk, delay and effort. Differences in dopamine release dynamics with temporal delay and effort in rodents may derive from methodological issues and require further study. Recent designs using concepts and behavioral tools from experimental economics allow the subjective value signal to be formally characterized as economic utility, and thus a neuronal value function to be established. With these properties, the dopamine response constitutes a utility prediction error signal. PMID:26719853
Tsanas, Athanasios; Clifford, Gari D
2015-01-01
Sleep spindles are critical in characterizing sleep and have been associated with cognitive function and pathophysiological assessment. Typically, their detection relies on the subjective and time-consuming visual examination of electroencephalogram (EEG) signal(s) by experts, and has led to large inter-rater variability as a result of poor definition of sleep spindle characteristics. Hitherto, many algorithmic spindle detectors inherently make signal stationarity assumptions (e.g., Fourier transform-based approaches) which are inappropriate for EEG signals, and frequently rely on additional information which may not be readily available in many practical settings (e.g., more than one EEG channel, or prior hypnogram assessment). This study proposes a novel signal processing methodology relying solely on a single EEG channel, and provides objective, accurate means toward probabilistically assessing the presence of sleep spindles in EEG signals. We use the intuitively appealing continuous wavelet transform (CWT) with a Morlet basis function, identifying regions of interest where the power of the CWT coefficients corresponding to the frequencies of spindles (11-16 Hz) is large. The potential for assessing the signal segment as a spindle is refined using local weighted smoothing techniques. We evaluate our findings on two databases: the MASS database comprising 19 healthy controls and the DREAMS sleep spindle database comprising eight participants diagnosed with various sleep pathologies. We demonstrate that we can replicate the experts' sleep spindles assessment accurately in both databases (MASS database: sensitivity: 84%, specificity: 90%, false discovery rate: 83%; DREAMS database: sensitivity: 76%, specificity: 92%, false discovery rate: 67%), outperforming six competing automatic sleep spindle detection algorithms in terms of correctly replicating the experts' assessment of detected spindles.
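The first stage of the described approach, Morlet-based CWT power in the 11-16 Hz band, can be sketched as follows. The probabilistic refinement via local weighted smoothing is omitted, and the number of cycles and the frequency grid are this sketch's assumptions.

```python
import numpy as np

def spindle_band_power(eeg, fs, freqs=(11, 12, 13, 14, 15, 16), n_cycles=7):
    """Power of continuous Morlet-wavelet coefficients in the spindle band
    (11-16 Hz), summed over frequencies (sketch of the first stage only)."""
    eeg = np.asarray(eeg, dtype=float)
    total = np.zeros_like(eeg)
    for f in freqs:
        sigma_t = n_cycles / (2 * np.pi * f)            # Gaussian width in seconds
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit energy
        coef = np.convolve(eeg, wavelet, mode='same')     # CWT at one scale
        total += np.abs(coef) ** 2
    return total
```

Regions where this band power is large relative to the local baseline are the candidate spindle segments that the smoothing stage would then refine.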
Sengottuvel, S; Khan, Pathan Fayaz; Mariyappa, N; Patel, Rajesh; Saipriya, S; Gireesan, K
2018-06-01
Cutaneous measurements of electrogastrogram (EGG) signals are heavily contaminated by artifacts due to cardiac activity, breathing, motion artifacts, and electrode drifts whose effective elimination remains an open problem. A common methodology is proposed by combining independent component analysis (ICA) and ensemble empirical mode decomposition (EEMD) to denoise gastric slow-wave signals in multichannel EGG data. Sixteen electrodes are fixed over the upper abdomen to measure the EGG signals under three gastric conditions, namely, preprandial, postprandial immediately, and postprandial 2 h after food for three healthy subjects and a subject with a gastric disorder. Instantaneous frequencies of intrinsic mode functions that are obtained by applying the EEMD technique are analyzed to individually identify and remove each of the artifacts. A critical investigation on the proposed ICA-EEMD method reveals its ability to provide a higher attenuation of artifacts and lower distortion than those obtained by the ICA-EMD method and conventional techniques, like bandpass and adaptive filtering. Characteristic changes in the slow-wave frequencies across the three gastric conditions could be determined from the denoised signals for all the cases. The results therefore encourage the use of the EEMD-based technique for denoising gastric signals to be used in clinical practice.
Weiskopf, Nikolaus; Veit, Ralf; Erb, Michael; Mathiak, Klaus; Grodd, Wolfgang; Goebel, Rainer; Birbaumer, Niels
2003-07-01
A brain-computer interface (BCI) based on real-time functional magnetic resonance imaging (fMRI) is presented which allows human subjects to observe and control changes of their own blood oxygen level-dependent (BOLD) response. This BCI performs data preprocessing (including linear trend removal, 3D motion correction) and statistical analysis on-line. Local BOLD signals are continuously fed back to the subject in the magnetic resonance scanner with a delay of less than 2 s from image acquisition. The mean signal of a region of interest is plotted as a time-series superimposed on color-coded stripes which indicate the task, i.e., to increase or decrease the BOLD signal. We exemplify the presented BCI with one volunteer intending to control the signal of the rostral-ventral and dorsal part of the anterior cingulate cortex (ACC). The subject achieved significant changes of local BOLD responses as revealed by region of interest analysis and statistical parametric maps. The percent signal change increased across fMRI-feedback sessions suggesting a learning effect with training. This methodology of fMRI-feedback can assess voluntary control of circumscribed brain areas. As a further extension, behavioral effects of local self-regulation become accessible as a new field of research.
NASA Astrophysics Data System (ADS)
Brookman, Tom; Whittaker, Thomas
2012-09-01
Stable isotope dendroclimatology using α-cellulose has unique potential to deliver multimillennial-scale, sub-annually resolved, terrestrial climate records. However, lengthy processing and analytical methods often preclude such reconstructions. Variants of the Brendel extraction method have reduced these limitations, providing fast, easy methods of isolating α-cellulose in some species. Here, we investigate application of Standard Brendel (SBrendel) variants to resinous softwoods by treating samples of kauri (Agathis australis), ponderosa pine (Pinus ponderosa) and huon pine (Lagarostrobos franklinii), varying reaction vessel, temperature, boiling time and reagent volume. Numerous samples were visibly 'under-processed' and Fourier Transform infrared spectroscopic (FTIR) investigation showed absorption peaks at 1520 cm-1 and ~1600 cm-1 in those fibers, suggesting residual lignin and retained resin respectively. Replicate analyses of all samples processed at high temperature yielded consistent δ13C and δ18O despite color and spectral variations. Spectra and isotopic data revealed that α-cellulose δ13C can be altered during processing, most likely due to chemical contamination from insufficient acetone removal, but is not systematically affected by methodological variation. Reagent amount, temperature and extraction time all influence δ18O, however, and our results demonstrate that different species may require different processing methods. FTIR prior to isotopic analysis is a fast and cost effective way to determine α-cellulose extract purity. Furthermore, a systematic isotopic test such as we present here can also determine the sensitivity of isotopic values to methodological variables. Without these tests, isotopic variability introduced by the method could obscure or 'create' climatic signals within a data set.
Yield impact for wafer shape misregistration-based binning for overlay APC diagnostic enhancement
NASA Astrophysics Data System (ADS)
Jayez, David; Jock, Kevin; Zhou, Yue; Govindarajulu, Venugopal; Zhang, Zhen; Anis, Fatima; Tijiwa-Birk, Felipe; Agarwal, Shivam
2018-03-01
The importance of traditionally acceptable sources of variation has started to become more critical as semiconductor technologies continue to push into smaller technology nodes. New metrology techniques are needed to pursue the process uniformity requirements needed for controllable lithography. Process control for lithography has the advantage of being able to adjust for cross-wafer variability, but this requires that all process tools/chambers are closely matched for each process. When this is not the case, the cumulative line variability creates identifiable groups of wafers. This cumulative shape based effect is described as impacting overlay measurements and alignment by creating misregistration of the overlay marks. It is necessary to understand what requirements might go into developing a high volume manufacturing approach which leverages this grouping methodology, the key inputs and outputs, and what can be extracted from such an approach. It will be shown that this line variability can be quantified into a loss of electrical yield primarily at the edge of the wafer, and a methodology for root cause identification and improvement is proposed. This paper will cover the concept of wafer shape based grouping as a diagnostic tool for overlay control and containment, the challenges in implementing this in a manufacturing setting, and the limitations of this approach. This will be accomplished by showing that there are identifiable wafer shape based signatures. These shape based wafer signatures will be shown to be correlated to overlay misregistration, primarily at the edge. It will also be shown that by adjusting for this wafer shape signal, improvements can be made to both overlay as well as electrical yield. These improvements show an increase in edge yield, and a reduction in yield variability.
NASA Astrophysics Data System (ADS)
Jamal, Wasifa; Das, Saptarshi; Maharatna, Koushik; Pan, Indranil; Kuyucu, Doga
2015-09-01
Degree of phase synchronization between different Electroencephalogram (EEG) channels is known to be the manifestation of the underlying mechanism of information coupling between different brain regions. In this paper, we apply a continuous wavelet transform (CWT) based analysis technique on EEG data, captured during face perception tasks, to explore the temporal evolution of phase synchronization from the onset of a stimulus. Our explorations show that there exists a small set (typically 3-5) of unique synchronized patterns or synchrostates, each of which is stable on the order of milliseconds. Particularly, in the beta (β) band, which has been reported to be associated with visual processing tasks, the number of such stable states has been found to be three consistently. During processing of the stimulus, the switching between these states occurs abruptly but the switching characteristic follows a well-behaved and repeatable sequence. This is observed in a single-subject analysis as well as a multiple-subject group analysis in adults during face perception. We also show that although these patterns remain topographically similar for the general category of face perception tasks, the sequence of their occurrence and their temporal stability varies markedly between different face perception scenarios (stimuli), indicating different dynamical characteristics for information processing, which is stimulus-specific in nature. Subsequently, we translated these stable states into brain complex networks and derived informative network measures for characterizing the degree of segregated processing and information integration in those synchrostates, leading to a new methodology for characterizing information processing in the human brain. The proposed methodology of modeling the functional brain connectivity through the synchrostates may be viewed as a new way of quantitative characterization of the cognitive ability of the subject, stimuli and information integration/segregation capability.
Probabilistic resident space object detection using archival THEMIS fluxgate magnetometer data
NASA Astrophysics Data System (ADS)
Brew, Julian; Holzinger, Marcus J.
2018-05-01
Ground-based optical and radar measurements have recently been demonstrated as viable methods for the detection of small space objects at geosynchronous altitudes. However, in general, these methods are limited to detection of objects greater than 10 cm. This paper examines the use of magnetometers to detect plausible flyby encounters with charged space objects using a matched filter signal existence binary hypothesis test approach. Relevant data-set processing and reduction of archival fluxgate magnetometer data from the NASA THEMIS mission is discussed in detail. Using the proposed methodology and a false alarm rate of 10%, 285 plausible detections with probability of detection greater than 80% are claimed and several are reviewed in detail.
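A matched filter signal-existence binary hypothesis test of the kind described can be sketched as follows, assuming white Gaussian noise of known standard deviation; the scalar test statistic and threshold convention are a textbook formulation, not the paper's exact pipeline.

```python
import numpy as np
from scipy.stats import norm

def matched_filter_detect(data, template, noise_std, p_fa=0.10):
    """Matched-filter binary hypothesis test for a known signature
    (e.g., a flyby template) in white Gaussian noise.

    Returns (test statistic, detected?) at false-alarm probability p_fa.
    Under H0 (noise only) the normalized statistic is standard normal.
    """
    s = np.asarray(template, dtype=float)
    x = np.asarray(data, dtype=float)
    stat = float(x @ s) / (noise_std * np.linalg.norm(s))
    eta = norm.isf(p_fa)          # threshold with P(stat > eta | H0) = p_fa
    return stat, stat > eta
```

Sliding this correlation over an archival time series and thresholding each window yields the candidate detections, each with an associated probability of detection.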
Diffusion Weighted Image Denoising Using Overcomplete Local PCA
Manjón, José V.; Coupé, Pierrick; Concha, Luis; Buades, Antonio; Collins, D. Louis; Robles, Montserrat
2013-01-01
Diffusion Weighted Images (DWI) normally show a low Signal to Noise Ratio (SNR) due to the presence of noise from the measurement process that complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters. PMID:24019889
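The local PCA shrinkage idea can be sketched on a single neighbourhood: decompose the (voxels × directions) block, discard components at the noise level, and rebuild. The hard threshold and its 2.3 factor are assumptions of this sketch rather than the paper's exact shrinkage rule.

```python
import numpy as np

def pca_shrink(block, sigma, tau=2.3):
    """Shrink the principal components of one local multicomponent block.

    block : (n_voxels, n_directions) matrix from a local DWI neighbourhood
    sigma : noise standard deviation
    Components whose eigenvalue falls below (tau * sigma)^2 are discarded,
    a hard-threshold variant of PCA shrinkage.
    """
    mean = block.mean(axis=0)
    X = block - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    n = block.shape[0]
    eigvals = s ** 2 / n                      # per-voxel component variance
    s[eigvals < (tau * sigma) ** 2] = 0.0     # kill noise-level components
    return U @ np.diag(s) @ Vt + mean
```

In the overcomplete scheme, overlapping blocks are each denoised this way and the results averaged, so every voxel is reconstructed from several shrunken neighbourhoods.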
Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene
2010-01-01
Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the interest parameters, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA). PMID:22319345
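Two of the described building blocks, averaging decimation of oversampled readings and finite-difference estimation of motion dynamics, can be sketched in software (the paper maps these to FPGA hardware primitives; the FIR length and filter choice here are illustrative):

```python
import numpy as np

def decimate_avg(x, r):
    """Averaging decimation filter: mean of r consecutive oversampled
    readings, reducing measurement noise by roughly sqrt(r)."""
    n = (len(x) // r) * r
    return x[:n].reshape(-1, r).mean(axis=1)

def motion_dynamics(position, fs):
    """Velocity and acceleration via finite differences, after a small
    moving-average FIR smoothing stage (an assumed 5-tap filter)."""
    h = np.ones(5) / 5.0                       # simple low-pass FIR
    p = np.convolve(position, h, mode='same')
    v = np.gradient(p, 1.0 / fs)               # first derivative
    a = np.gradient(v, 1.0 / fs)               # second derivative
    return v, a
```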
Characterizing L1-norm best-fit subspaces
NASA Astrophysics Data System (ADS)
Brooks, J. Paul; Dulá, José H.
2017-05-01
Fitting affine objects to data is the basis of many tools and methodologies in statistics, machine learning, and signal processing. The L1 norm is often employed to produce subspaces exhibiting a robustness to outliers and faulty observations. The L1-norm best-fit subspace problem is directly formulated as a nonlinear, nonconvex, and nondifferentiable optimization problem. The case when the subspace is a hyperplane can be solved to global optimality efficiently by solving a series of linear programs. The problem of finding the best-fit line has recently been shown to be NP-hard. We present necessary conditions for optimality for the best-fit subspace problem, and use them to characterize properties of optimal solutions.
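The series-of-linear-programs idea for the hyperplane case can be illustrated with one member of the series: fixing one coordinate of the normal vector to 1 turns the L1-norm fit into a least-absolute-deviations problem, which is a linear program. The LP encoding below is a standard one; the variable layout and the affine term are this sketch's choices.

```python
import numpy as np
from scipy.optimize import linprog

def l1_hyperplane_fixed_coord(X, j):
    """Best L1-fit affine hyperplane with the j-th normal coefficient
    fixed to 1 (one LP in the series; the global fit is the best over j).

    Solves min_b sum_i |x_ij - A_i b| with A_i the other coordinates
    plus a constant term, via the standard LP reformulation.
    """
    n, d = X.shape
    y = X[:, j]
    A = np.delete(X, j, axis=1)
    A = np.hstack([A, np.ones((n, 1))])            # affine (offset) term
    m = A.shape[1]
    # variables: [b (m, free), e (n, >= 0)]; minimize sum(e)
    c = np.concatenate([np.zeros(m), np.ones(n)])
    # e_i >= y_i - A_i b   and   e_i >= -(y_i - A_i b)
    A_ub = np.block([[-A, -np.eye(n)], [A, -np.eye(n)]])
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * m + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:m], res.fun                      # coefficients, total L1 error
```

The robustness to outliers shows up directly: a single grossly corrupted point contributes only its own residual to the objective and does not drag the fitted hyperplane.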
Cuthill, Innes C; Allen, William L; Arbuckle, Kevin; Caspers, Barbara; Chaplin, George; Hauber, Mark E; Hill, Geoffrey E; Jablonski, Nina G; Jiggins, Chris D; Kelber, Almut; Mappes, Johanna; Marshall, Justin; Merrill, Richard; Osorio, Daniel; Prum, Richard; Roberts, Nicholas W; Roulin, Alexandre; Rowland, Hannah M; Sherratt, Thomas N; Skelhorn, John; Speed, Michael P; Stevens, Martin; Stoddard, Mary Caswell; Stuart-Fox, Devi; Talas, Laszlo; Tibbetts, Elizabeth; Caro, Tim
2017-08-04
Coloration mediates the relationship between an organism and its environment in important ways, including social signaling, antipredator defenses, parasitic exploitation, thermoregulation, and protection from ultraviolet light, microbes, and abrasion. Methodological breakthroughs are accelerating knowledge of the processes underlying both the production of animal coloration and its perception, experiments are advancing understanding of mechanism and function, and measurements of color collected noninvasively and at a global scale are opening windows to evolutionary dynamics more generally. Here we provide a roadmap of these advances and identify hitherto unrecognized challenges for this multi- and interdisciplinary field. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
NASA Astrophysics Data System (ADS)
Beattie, T.; Lolos, G. J.; Papandreou, Z.; Semenov, A. Yu.; Teigrob, L. A.
2015-08-01
Large-area, multi-pixel photon counters will be used for the electromagnetic Barrel Calorimeter of the GlueX experiment at Jefferson Lab. These photo sensors are based on a 3 × 3 mm² cell populated by 50 μm pixels, with 16 such cells tiled in a 4 × 4 arrangement in the array. The 16 cells are summed electronically and the signals are amplified. The photon detection efficiency of a group of first-article units at room temperature under conditions similar to those of the experiment was extracted to be (28 ± 2(stat) ± 2(syst))%, by employing an analysis methodology based on Poisson statistics carried out on the summed energy signals from the units.
Eye-Tracking as a Tool in Process-Oriented Reading Test Validation
ERIC Educational Resources Information Center
Solheim, Oddny Judith; Uppstad, Per Henning
2011-01-01
The present paper addresses the continuous need for methodological reflection on how to validate inferences made on the basis of test scores. Validation is a process that requires many lines of evidence. In this article we discuss the potential of eye tracking methodology in process-oriented reading test validation. Methodological considerations…
Graphics Processing Unit Assisted Thermographic Compositing
NASA Technical Reports Server (NTRS)
Ragasa, Scott; McDougal, Matthew; Russell, Sam
2012-01-01
Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer level results at individual workstations. As data sets grow, the methods to work them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques. Technical Methodology/Approach: Apply massively parallel algorithms and data structures to the specific analysis requirements presented when working with thermographic data sets.
Decoding the Heart through Next Generation Sequencing Approaches.
Pawlak, Michal; Niescierowicz, Katarzyna; Winata, Cecilia Lanny
2018-06-07
Vertebrate organs develop through a complex process which involves interaction between multiple signaling pathways at the molecular, cell, and tissue levels. Heart development is an example of such a complex process which, when disrupted, results in congenital heart disease (CHD). This complexity necessitates a holistic approach which allows the visualization of genome-wide interaction networks, as opposed to assessment of limited subsets of factors. Genomics offers a powerful solution to address the problem of biological complexity by enabling the observation of molecular processes at a genome-wide scale. The emergence of next generation sequencing (NGS) technology has facilitated the expansion of genomics, increasing its output capacity and applicability in various biological disciplines. The application of NGS in various aspects of heart biology has resulted in new discoveries, generating novel insights into this field of study. Here we review the contributions of NGS technology to the understanding of heart development and its disruption reflected in CHD, and discuss how emerging NGS based methodologies can contribute to the further understanding of heart repair.
NASA Astrophysics Data System (ADS)
Rambaldi, Marcello; Filimonov, Vladimir; Lillo, Fabrizio
2018-03-01
Given a stationary point process, an intensity burst is defined as a short time period during which the number of counts is larger than the typical count rate. It might signal a local nonstationarity or the presence of an external perturbation to the system. In this paper we propose a procedure for the detection of intensity bursts within the Hawkes process framework. By using a model selection scheme we show that our procedure can be used to detect intensity bursts when both their occurrence time and their total number is unknown. Moreover, the initial time of the burst can be determined with a precision given by the typical interevent time. We apply our methodology to the midprice change in foreign exchange (FX) markets showing that these bursts are frequent and that only a relatively small fraction is associated with news arrival. We show lead-lag relations in intensity burst occurrence across different FX rates and we discuss their relation with price jumps.
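A deliberately simplified stand-in for the described burst-detection procedure: instead of the Hawkes-based model-selection scheme, the sketch below just flags windows whose event count is improbable under a homogeneous Poisson baseline. The window length and tail probability are illustrative.

```python
import numpy as np
from scipy.stats import poisson

def detect_bursts(event_times, window, p_thresh=1e-4):
    """Flag window start times whose event count is improbably high under
    a homogeneous Poisson baseline (a simplified, non-Hawkes surrogate for
    intensity-burst detection)."""
    t = np.sort(np.asarray(event_times, dtype=float))
    T = t[-1] - t[0]
    rate = len(t) / T                                  # global count rate
    edges = np.arange(t[0], t[-1], window)
    counts, _ = np.histogram(t, bins=edges)
    pvals = poisson.sf(counts - 1, rate * window)      # P(N >= count) per window
    return edges[:-1][pvals < p_thresh]
```

The Hawkes-based procedure additionally accounts for the self-exciting clustering that is normal in such data, so it avoids flagging ordinary event clusters as bursts; this Poisson surrogate would over-detect on strongly clustered processes.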
Saucedo-Espinosa, Mario A.; Lapizco-Encinas, Blanca H.
2016-01-01
Current monitoring is a well-established technique for the characterization of electroosmotic (EO) flow in microfluidic devices. This method relies on monitoring the time response of the electric current when a test buffer solution is displaced by an auxiliary solution using EO flow. In this scheme, each solution has a different ionic concentration (and electric conductivity). The difference in the ionic concentration of the two solutions defines the dynamic time response of the electric current and, hence, the current signal to be measured: larger concentration differences result in larger measurable signals. A small concentration difference is needed, however, to avoid dispersion at the interface between the two solutions, which can result in undesired pressure-driven flow that conflicts with the EO flow. Additional challenges arise as the conductivity of the test solution decreases, leading to a reduced electric current signal that may be masked by noise during the measuring process, making for a difficult estimation of an accurate EO mobility. This contribution presents a new scheme for current monitoring that employs multiple channels arranged in parallel, producing an increase in the signal-to-noise ratio of the electric current to be measured and increasing the estimation accuracy. The use of this parallel approach is particularly useful in the estimation of the EO mobility in systems where low conductivity mediums are required, such as insulator based dielectrophoresis devices. PMID:27375813
Multi-Sensor Methods for Mobile Radar Motion Capture and Compensation
NASA Astrophysics Data System (ADS)
Nakata, Robert
Remote sensing has many applications, including surveying and mapping, geophysics exploration, military surveillance, search and rescue and counter-terrorism operations. Remote sensor systems typically use visible image, infrared or radar sensors. Camera based image sensors can provide high spatial resolution but are limited to line-of-sight capture during daylight. Infrared sensors have lower resolution but can operate during darkness. Radar sensors can provide high resolution motion measurements, even when obscured by weather, clouds and smoke and can penetrate walls and collapsed structures constructed with non-metallic materials up to 1 m to 2 m in depth depending on the wavelength and transmitter power level. However, any platform motion will degrade the target signal of interest. In this dissertation, we investigate alternative methodologies to capture platform motion, including a Body Area Network (BAN) that doesn't require external fixed location sensors, allowing full mobility of the user. We also investigated platform stabilization and motion compensation techniques to reduce and remove the signal distortion introduced by the platform motion. We evaluated secondary ultrasonic and radar sensors to stabilize the platform resulting in an average 5 dB of Signal to Interference Ratio (SIR) improvement. We also implemented a Digital Signal Processing (DSP) motion compensation algorithm that improved the SIR by 18 dB on average. These techniques could be deployed on a quadcopter platform and enable the detection of respiratory motion using an onboard radar sensor.
2007-06-01
of SNR, she incorporated the effects that an InGaAs photovoltaic detector has in producing the signal, along with the photon, Johnson, and shot noises ... How is the photovoltaic FPA detector modeled? What detector noise sources limit the computed signal? ... Another shot noise source in photovoltaic detectors is dark current, which represents the current flowing in the detector when no optical radiation ...
Development of economic consequence methodology for process risk analysis.
Zadakbar, Omid; Khan, Faisal; Imtiaz, Syed
2015-04-01
A comprehensive methodology for economic consequence analysis with appropriate models for risk analysis of process systems is proposed. This methodology uses loss functions to relate process deviations in a given scenario to economic losses. It consists of four steps: definition of a scenario, identification of losses, quantification of losses, and integration of losses. In this methodology, the process deviations that contribute to a given accident scenario are identified and mapped to assess potential consequences. Losses are assessed with an appropriate loss function (revised Taguchi, modified inverted normal) for each type of loss. The total loss is quantified by integrating different loss functions. The proposed methodology has been examined on two industrial case studies. Implementation of this new economic consequence methodology in quantitative risk assessment will provide better understanding and quantification of risk. This will improve design, decision making, and risk management strategies. © 2014 Society for Risk Analysis.
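One of the loss functions named above, the (modified) inverted normal, can be sketched as a function that is zero at the process target and saturates at a maximum loss for large deviations; the parameterization below is this sketch's own, not the paper's exact model.

```python
import numpy as np

def inverted_normal_loss(x, target, max_loss, gamma):
    """Inverted-normal loss (sketch): zero at the process target,
    saturating at max_loss for large deviations; gamma sets how quickly
    deviations become costly."""
    return max_loss * (1.0 - np.exp(-((x - target) ** 2) / (2.0 * gamma ** 2)))

def total_loss(losses_by_type):
    """Integrate losses over deviation types by summing each loss series,
    mirroring the methodology's final integration step."""
    return sum(np.sum(v) for v in losses_by_type.values())
```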
Reliability Centered Maintenance - Methodologies
NASA Technical Reports Server (NTRS)
Kammerer, Catherine C.
2009-01-01
Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
Garcia-Allende, P Beatriz; Mirapeix, Jesus; Conde, Olga M; Cobo, Adolfo; Lopez-Higuera, Jose M
2009-01-01
Plasma optical spectroscopy is widely employed in on-line welding diagnostics. The determination of the plasma electron temperature, which is typically selected as the output monitoring parameter, implies the identification of the atomic emission lines. As a consequence, additional processing stages are required, with a direct impact on the real time performance of the technique. The line-to-continuum method is a feasible alternative spectroscopic approach and is particularly interesting in terms of its computational efficiency. However, the monitoring signal highly depends on the chosen emission line. In this paper, a feature selection methodology is proposed to solve the uncertainty regarding the selection of the optimum spectral band, which allows the employment of the line-to-continuum method for on-line welding diagnostics. Field tests have been conducted to demonstrate the feasibility of the solution.
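The computational appeal of the line-to-continuum method is that it reduces to a simple intensity ratio once a spectral band has been chosen. A minimal sketch, assuming the spectrum is an array of channel intensities and that a nearby line-free band is available for the continuum estimate (the channel indices below are made up):

```python
def line_to_continuum(spectrum, line_idx, cont_band):
    """Ratio of an emission-line intensity to the mean continuum level,
    estimated over a nearby line-free band of channel indices."""
    lo, hi = cont_band
    continuum = sum(spectrum[lo:hi]) / (hi - lo)
    return spectrum[line_idx] / continuum

# Toy spectrum: flat continuum of 10 counts with one emission line of 50 counts
spectrum = [10.0] * 100
spectrum[40] = 50.0
ratio = line_to_continuum(spectrum, line_idx=40, cont_band=(60, 80))
# ratio == 5.0
```

The feature selection problem the paper addresses is precisely the choice of `line_idx` and `cont_band`: different bands yield monitoring signals of very different sensitivity.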
Erythrocyte swelling and membrane hole formation in hypotonic media as studied by conductometry.
Pribush, A; Meyerstein, D; Hatskelzon, L; Kozlov, V; Levi, I; Meyerstein, N
2013-02-01
Hypoosmotic swelling of erythrocytes and the formation of membrane holes were studied by measuring the dc conductance (G). In accordance with the theoretical predictions, these processes are manifested by a decrease in G followed by its increase. Thus, unlike the conventional osmotic fragility test, the proposed methodological approach allows investigations of both the kinetics of swelling and the erythrocyte fragility. It is shown that the initial rate of swelling and the equilibrium size of the cells are affected by the tonicity of a hypotonic solution and the membrane rheological properties. Because the rupture of biological membranes is a stochastic process, a time-dependent increase in the conductance follows an integral distribution function of the membrane lifetime. The main conclusion that stems from the reported results is that information about rheological properties of red blood cell (RBC) membranes and the resistivity of RBCs to a certain osmotic shock may be extracted from conductance signals.
Predictability of Landslide Timing From Quasi-Periodic Precursory Earthquakes
NASA Astrophysics Data System (ADS)
Bell, Andrew F.
2018-02-01
Accelerating rates of geophysical signals are observed before a range of material failure phenomena. They provide insights into the physical processes controlling failure and the basis for failure forecasts. However, examples of accelerating seismicity before landslides are rare, and their behavior and forecasting potential are largely unknown. Here I use a Bayesian methodology to apply a novel gamma point process model to investigate a sequence of quasiperiodic repeating earthquakes preceding a large landslide at Nuugaatsiaq in Greenland in June 2017. The evolution in earthquake rate is best explained by an inverse power law increase with time toward failure, as predicted by material failure theory. However, the commonly accepted power law exponent value of 1.0 is inconsistent with the data. Instead, the mean posterior value of 0.71 indicates a particularly rapid acceleration toward failure and suggests that only relatively short warning times may be possible for similar landslides in the future.
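The inverse power law rate model the abstract refers to can be written down in a few lines. This is a generic sketch of the material-failure rate law, not the paper's full gamma point process; the amplitude constant k and the times are illustrative, while p = 0.71 is the mean posterior exponent reported for the Nuugaatsiaq sequence.

```python
def failure_rate(t, t_f, k=10.0, p=0.71):
    """Event rate accelerating toward failure at time t_f as an inverse
    power law: rate = k / (t_f - t)**p, valid for t < t_f."""
    return k / (t_f - t) ** p

# The rate grows without bound as t approaches the failure time t_f
early = failure_rate(1.0, t_f=10.0)
late = failure_rate(9.9, t_f=10.0)
```

With p = 1.0 (the commonly assumed value), the cumulative event count diverges logarithmically near failure; the smaller exponent found here concentrates the acceleration into a shorter window before failure, which is why warning times would be short.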
Surface roughness model based on force sensors for the prediction of the tool wear.
de Agustina, Beatriz; Rubio, Eva María; Sebastián, Miguel Ángel
2014-04-04
In this study, a methodology has been developed with the objective of evaluating the surface roughness obtained during turning processes by measuring the signals detected by a force sensor under the same cutting conditions. In this way, the surface quality achieved along the process is correlated to several parameters of the cutting forces (thrust forces, feed forces and cutting forces), so the effect that the tool wear causes on the surface roughness is evaluated. In the first step, the best cutting conditions (cutting parameters and radius of tool) for a certain surface quality requirement were found for pieces of UNS A97075. Next, with this selection, a model of surface roughness based on the cutting forces was developed for different states of wear that simulate the behaviour of the tool throughout its life. The validation of this model reveals that it was effective for approximately 70% of the surface roughness values obtained.
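At its core, a surface-roughness model based on force signals is a regression from force parameters to roughness. A minimal single-variable sketch using ordinary least squares follows; the force and roughness values are invented for illustration and do not come from the paper, whose actual model uses several force components and wear states.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a*x + b, closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Illustrative data: roughness Ra (um) vs measured feed-force amplitude (N)
forces = [50.0, 60.0, 70.0, 80.0]
ra = [0.8, 1.0, 1.2, 1.4]
a, b = fit_line(forces, ra)  # slope and intercept of the roughness model
```

Once fitted per wear state, such a model lets the controller estimate roughness online from the force sensor instead of measuring the part.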
Information technology security system engineering methodology
NASA Technical Reports Server (NTRS)
Childs, D.
2003-01-01
A methodology is described for system engineering security into large information technology systems under development. The methodology is an integration of a risk management process and a generic system development life cycle process. The methodology is to be used by Security System Engineers to effectively engineer and integrate information technology security into a target system as it progresses through the development life cycle. The methodology can also be used to re-engineer security into a legacy system.
A Methodology for the Parametric Reconstruction of Non-Steady and Noisy Meteorological Time Series
NASA Astrophysics Data System (ADS)
Rovira, F.; Palau, J. L.; Millán, M.
2009-09-01
Climatic and meteorological time series often show some persistence (in time) in the variability of certain features. One could regard annual, seasonal and diurnal time variability as trivial persistence in the variability of some meteorological magnitudes (as, e.g., global radiation, air temperature above surface, etc.). In these cases, the traditional Fourier transform into frequency space will show the principal harmonics as the components with the largest amplitude. Nevertheless, meteorological measurements often show other non-steady (in time) variability. Some fluctuations in measurements (at different time scales) are driven by processes that prevail on some days (or months) of the year but disappear on others. By decomposing a time series into time-frequency space through the continuous wavelet transformation, one is able to determine both the dominant modes of variability and how those modes vary in time. This study is based on a numerical methodology to analyse non-steady principal harmonics in noisy meteorological time series. This methodology combines both the continuous wavelet transform and the development of a parametric model that includes the time evolution of the principal and the most statistically significant harmonics of the original time series. The parameterisation scheme proposed in this study consists of reproducing the original time series by means of a statistically significant finite sum of sinusoidal signals (waves), each defined by using the three usual parameters: amplitude, frequency and phase. To ensure the statistical significance of the parametric reconstruction of the original signal, we propose a standard Student's t analysis of the confidence level of the amplitude in the parametric spectrum for the different wave components.
Once we have assured the level of significance of the different waves composing the parametric model, we can obtain the statistically significant principal harmonics (in time) of the original time series by using the Fourier transform of the modelled signal. Acknowledgements: The CEAM Foundation is supported by the Generalitat Valenciana and BANCAIXA (València, Spain). This study has been partially funded by the European Commission (FP VI, Integrated Project CIRCE - No. 036961) and by the Ministerio de Ciencia e Innovación, research projects "TRANSREG" (CGL2007-65359/CLI) and "GRACCIE" (CSD2007-00067, Program CONSOLIDER-INGENIO 2010).
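The parametric reconstruction described above, a finite sum of sinusoids each defined by amplitude, frequency and phase, can be sketched directly. The wave parameters below are illustrative placeholders, not fitted values from the study.

```python
import math

def parametric_signal(t, waves):
    """Value at time t of a finite sum of sinusoids; each wave is an
    (amplitude, frequency, phase) triple, the three usual parameters."""
    return sum(a * math.sin(2.0 * math.pi * f * t + ph) for a, f, ph in waves)

# Illustrative two-wave model: a slow annual-like mode plus a faster mode
waves = [(1.0, 1.0, 0.0), (0.4, 6.0, math.pi / 3)]
reconstruction = [parametric_signal(i / 200.0, waves) for i in range(200)]
```

In the methodology, each wave's amplitude would first pass the Student's t significance test before being admitted to this sum, and the amplitudes, frequencies and phases may themselves evolve in time.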
Bering, Luis; Paulussen, Felix M; Antonchick, Andrey P
2018-04-06
The nitrosonium ion-catalyzed dehydrogenative coupling of heteroarenes under mild reaction conditions is reported. The developed method utilizes ambient molecular oxygen as a terminal oxidant, and only water is produced as byproduct. Dehydrogenative coupling of heteroarenes translated into the rapid discovery of novel hedgehog signaling pathway inhibitors, emphasizing the importance of the developed methodology.
Global Profiling of Reactive Oxygen and Nitrogen Species in Biological Systems
Zielonka, Jacek; Zielonka, Monika; Sikora, Adam; Adamus, Jan; Joseph, Joy; Hardy, Micael; Ouari, Olivier; Dranka, Brian P.; Kalyanaraman, Balaraman
2012-01-01
Herein we describe a high-throughput fluorescence and HPLC-based methodology for global profiling of reactive oxygen and nitrogen species (ROS/RNS) in biological systems. The combined use of HPLC and fluorescence detection is key to successful implementation and validation of this methodology. Included here are methods to specifically detect and quantitate the products formed from interaction between the ROS/RNS species and the fluorogenic probes, as follows: superoxide using hydroethidine, peroxynitrite using boronate-based probes, nitric oxide-derived nitrosating species with 4,5-diaminofluorescein, and hydrogen peroxide and other oxidants using 10-acetyl-3,7-dihydroxyphenoxazine (Amplex® Red) with and without horseradish peroxidase, respectively. In this study, we demonstrate real-time monitoring of ROS/RNS in activated macrophages using high-throughput fluorescence and HPLC methods. This global profiling approach, simultaneous detection of multiple ROS/RNS products of fluorescent probes, developed in this study will be useful in unraveling the complex role of ROS/RNS in redox regulation, cell signaling, and cellular oxidative processes and in high-throughput screening of anti-inflammatory antioxidants. PMID:22139901
NASA Technical Reports Server (NTRS)
Farhat, Nabil H.
1987-01-01
Self-organization and learning is a distinctive feature of neural nets and processors that sets them apart from conventional approaches to signal processing. It leads to self-programmability which alleviates the problem of programming complexity in artificial neural nets. In this paper architectures for partitioning an optoelectronic analog of a neural net into distinct layers with prescribed interconnectivity pattern to enable stochastic learning by simulated annealing in the context of a Boltzmann machine are presented. Stochastic learning is of interest because of its relevance to the role of noise in biological neural nets. Practical considerations and methodologies for appreciably accelerating stochastic learning in such a multilayered net are described. These include the use of parallel optical computing of the global energy of the net, the use of fast nonvolatile programmable spatial light modulators to realize fast plasticity, optical generation of random number arrays, and an adaptive noisy thresholding scheme that also makes stochastic learning more biologically plausible. The findings reported predict optoelectronic chips that can be used in the realization of optical learning machines.
NASA Astrophysics Data System (ADS)
Gromov, M. B.; Casentini, C.
2017-09-01
The detection of gravitational waves opens a new era in physics: it is now possible to observe the Universe in a fundamentally new way. Gravitational waves potentially permit getting insight into the physics of Core-Collapse Supernovae (CCSNe). However, due to significant uncertainties on the theoretical models of gravitational wave emission associated with CCSNe, benefits may come from multi-messenger observations of CCSNe. Such benefits include increased confidence in detection, extending the astrophysical reach of the detectors and allowing deeper understanding of the nature of the phenomenon. Fortunately, CCSNe have a neutrino signature, confirmed by the observation of SN1987A. The gravitational and neutrino signals propagate at the speed of light and without significant interaction with interstellar matter, so they must reach an observer on Earth almost simultaneously. These facts open a way to search for a correlation between the signals. However, this method is limited by the sensitivity of modern neutrino detectors, which allow CCSNe to be observed only in the Local Group of galaxies. The methodology and status of a proposed joint search for correlated signals are presented here.
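The near-simultaneous arrival of the two messengers is what makes a joint search possible: in its simplest form it is a time-coincidence scan between trigger lists. The sketch below is a generic illustration, not the collaboration's actual pipeline; the trigger times and the 10 s window are assumed values.

```python
def coincident_pairs(gw_times, nu_times, window=10.0):
    """Gravitational-wave / neutrino trigger pairs whose arrival times
    differ by at most `window` seconds. Both messengers travel at (or
    near) light speed, so a true CCSN should trigger both detectors
    almost simultaneously."""
    return [(g, n) for g in gw_times for n in nu_times
            if abs(g - n) <= window]

pairs = coincident_pairs([100.0, 5000.0], [103.5, 7200.0])
# only the 100.0/103.5 pair falls inside the 10 s window
```

A real search would additionally estimate the accidental-coincidence background, e.g. by time-shifting one trigger list many times and counting surviving pairs.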
SpaceCube v2.0 Space Flight Hybrid Reconfigurable Data Processing System
NASA Technical Reports Server (NTRS)
Petrick, Dave
2014-01-01
This paper details the design architecture, design methodology, and the advantages of the SpaceCube v2.0 high performance data processing system for space applications. The purpose in building the SpaceCube v2.0 system is to create a superior high performance, reconfigurable, hybrid data processing system that can be used in a multitude of applications, including those that require a radiation hardened and reliable solution. The SpaceCube v2.0 system leverages seven years of board design, avionics systems design, and space flight application experiences. This paper shows how SpaceCube v2.0 solves the increasing computing demands of space data processing applications that cannot be attained with a standalone processor approach. The main objective during the design stage is to find a good system balance between power, size, reliability, cost, and data processing capability. These design variables directly impact each other, and it is important to understand how to achieve a suitable balance. This paper will detail how these critical design factors were managed, including the construction of an Engineering Model for an experiment on the International Space Station to test out design concepts. We will describe the designs for the processor card, power card, backplane, and a mission unique interface card. The mechanical design for the box will also be detailed, since it is critical in meeting the stringent thermal and structural requirements imposed by the processing system. In addition, the mechanical design uses advanced thermal conduction techniques to solve the internal thermal challenges. The SpaceCube v2.0 processing system is based on an extended version of the 3U cPCI standard form factor, where each card is 190 mm x 100 mm in size. The typical power draw of the processor card is 8 to 10 W and scales with application complexity.
The SpaceCube v2.0 data processing card features two Xilinx Virtex-5 QV Field Programmable Gate Arrays (FPGA), eight memory modules, a monitor FPGA with analog monitoring, Ethernet, configurable interconnect to the Xilinx FPGAs including gigabit transceivers, and the necessary voltage regulation. The processor board uses a back-to-back design methodology for common parts that maximizes the board real estate available. This paper will show how to meet the IPC 6012B Class 3A standard with a 22-layer board that has two column grid array devices with 1.0 mm pitch. All layout trades such as stack-up options, via selection, and FPGA signal breakout will be discussed with feature size results. The overall board design process will be discussed, including parts selection, circuit design, proper signal termination, layout placement and route planning, signal integrity design and verification, and power integrity results. The radiation mitigation techniques will also be detailed, including configuration scrubbing options, Xilinx circuit mitigation and FPGA functional monitoring, and memory protection. Finally, this paper will describe how this system is being used to solve the extreme challenges of a robotic satellite servicing mission where typical space-rated processors are not sufficient to meet the intensive data processing requirements. The SpaceCube v2.0 is the main payload control computer and is required to control critical subsystems such as autonomous rendezvous and docking using a suite of vision sensors and object avoidance when controlling two robotic arms.
Hwang, Seung Hwan; Kwon, Shin Hwa; Wang, Zhiqiang; Kim, Tae Hyun; Kang, Young-Hee; Lee, Jae-Yong; Lim, Soon Sung
2016-08-26
Protein tyrosine phosphatase, expressed in insulin-sensitive tissues (such as liver, muscle, and adipose tissue), has a key role in the regulation of insulin signaling and pathway activation, making it a promising target for the treatment of type 2 diabetes mellitus and obesity. Response surface methodology (RSM) is an effective statistical technique for optimizing complex processes using a multi-variant approach. In this study, Zea mays L. (purple corn kernel, PCK) and its constituents were investigated for protein tyrosine phosphatase 1β (PTP1β) inhibitory activity, including an enzyme kinetic study, and four extraction parameters (temperature, time, solid-liquid ratio, and solvent volume) were optimized by RSM to improve the total yields of anthocyanins and polyphenols. Isolation of seven polyphenols and five anthocyanins was achieved using the PTP1β assay. Among them, cyanidin-3-(6″-malonylglucoside) and 3'-methoxyhirsutrin showed the highest PTP1β inhibition, with IC50 values of 54.06 and 64.04 μM, respectively. A total polyphenol content (TPC) of 4.52 mg gallic acid equivalent/g (GAE/g) and a total anthocyanin content (TAC) of 43.02 mg cyanidin-3-glucoside equivalent/100 g (C3GE/100 g) were extracted at 40 °C for 8 h with a 33% solid-liquid ratio and a 1:15 solvent volume. Yields were similar to the predictions of 4.58 mg GAE/g of TPC and 42.28 mg C3GE/100 g of TAC. These results indicate that PCK, 3'-methoxyhirsutrin, and cyanidin-3-(6″-malonylglucoside) might be active natural compounds and could be exploited by optimizing the extraction process using response surface methodology.
Complexity Measures in Magnetoencephalography: Measuring "Disorder" in Schizophrenia
Brookes, Matthew J.; Hall, Emma L.; Robson, Siân E.; Price, Darren; Palaniyappan, Lena; Liddle, Elizabeth B.; Liddle, Peter F.; Robinson, Stephen E.; Morris, Peter G.
2015-01-01
This paper details a methodology which, when applied to magnetoencephalography (MEG) data, is capable of measuring the spatio-temporal dynamics of ‘disorder’ in the human brain. Our method, which is based upon signal entropy, shows that spatially separate brain regions (or networks) generate temporally independent entropy time-courses. These time-courses are modulated by cognitive tasks, with an increase in local neural processing characterised by localised and transient increases in entropy in the neural signal. We explore the relationship between entropy and the more established time-frequency decomposition methods, which elucidate the temporal evolution of neural oscillations. We observe a direct but complex relationship between entropy and oscillatory amplitude, which suggests that these metrics are complementary. Finally, we provide a demonstration of the clinical utility of our method, using it to shed light on aberrant neurophysiological processing in schizophrenia. We demonstrate significantly increased task induced entropy change in patients (compared to controls) in multiple brain regions, including a cingulo-insula network, bilateral insula cortices and a right fronto-parietal network. These findings demonstrate potential clinical utility for our method and support a recent hypothesis that schizophrenia can be characterised by abnormalities in the salience network (a well characterised distributed network comprising bilateral insula and cingulate cortices). PMID:25886553
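A signal-entropy time-course of the kind described can be sketched with a histogram-based Shannon entropy computed in sliding windows. This is a generic illustration, not the paper's exact estimator (the bin count, window length, and step size below are assumed values).

```python
import math

def shannon_entropy(samples, bins=8):
    """Histogram-based Shannon entropy (bits) of one signal window."""
    lo, hi = min(samples), max(samples)
    if hi == lo:
        return 0.0  # a constant window carries no 'disorder'
    counts = [0] * bins
    for x in samples:
        idx = min(int((x - lo) / (hi - lo) * bins), bins - 1)
        counts[idx] += 1
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

def entropy_timecourse(signal, window=64, step=32):
    """Entropy in overlapping sliding windows: a coarse time-course of
    'disorder' in the signal, analogous to the MEG entropy time-courses."""
    return [shannon_entropy(signal[i:i + window])
            for i in range(0, len(signal) - window + 1, step)]
```

Applied per brain region, such time-courses can then be contrasted between task and rest, or between patient and control groups, as the paper does.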
Granados-Lieberman, David; Valtierra-Rodriguez, Martin; Morales-Hernandez, Luis A.; Romero-Troncoso, Rene J.; Osornio-Rios, Roque A.
2013-01-01
Power quality disturbance (PQD) monitoring has become an important issue due to the growing number of disturbing loads connected to the power line and to the susceptibility of certain loads to their presence. In any real power system, there are multiple sources of several disturbances which can have different magnitudes and appear at different times. In order to avoid equipment damage and estimate the damage severity, they have to be detected, classified, and quantified. In this work, a smart sensor for detection, classification, and quantification of PQD is proposed. First, the Hilbert transform (HT) is used as detection technique; then, the classification of the envelope of a PQD obtained through HT is carried out by a feed forward neural network (FFNN). Finally, the root mean square voltage (Vrms), peak voltage (Vpeak), crest factor (CF), and total harmonic distortion (THD) indices calculated through HT and Parseval's theorem as well as an instantaneous exponential time constant quantify the PQD according to the disturbance presented. The aforementioned methodology is processed online using digital hardware signal processing based on field programmable gate array (FPGA). Besides, the proposed smart sensor performance is validated and tested through synthetic signals and under real operating conditions, respectively. PMID:23698264
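Two of the quantification indices named above, Vrms and the crest factor, reduce to short formulas over the sampled waveform. The sketch below uses a synthetic 60 Hz sine with an assumed 170 V peak and 6 kHz sampling; it illustrates only the index calculations, not the smart sensor's HT/FFNN/FPGA pipeline.

```python
import math

def v_rms(samples):
    """Root-mean-square voltage of a sampled waveform."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def crest_factor(samples):
    """Peak voltage over RMS voltage; sqrt(2) for an undistorted sine."""
    return max(abs(x) for x in samples) / v_rms(samples)

# Six full cycles of a clean 60 Hz sine sampled at 6 kHz
fs, f, vpeak = 6000.0, 60.0, 170.0
wave = [vpeak * math.sin(2 * math.pi * f * i / fs) for i in range(600)]
```

For a clean sine, crest_factor(wave) returns sqrt(2); disturbances such as impulsive transients raise the crest factor, which is one way the indices discriminate PQD classes.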
Airborne gamma-ray spectra processing: Extracting photopeaks.
Druker, Eugene
2018-07-01
The acquisition of information from airborne gamma-ray spectra is based on the ability to evaluate photopeak areas in regular spectra from natural and other sources. In airborne gamma-ray spectrometry, extraction of photopeaks of radionuclides from regular one-second spectra is a complex problem. In the region of higher energies, difficulties are associated with low signal level, i.e. low count rates, whereas at lower energies difficulties are associated with high noise due to a high signal level. In this article, a new procedure is proposed for processing the measured spectra up to and including the extraction of evident photopeaks. The procedure consists of reducing the noise in the energy channels along the flight lines, transforming the spectra into spectra of equal resolution, removing the background from each spectrum, sharpening the details, and transforming the spectra back to the original energy scale. The resulting spectra are better suited for examining and using the photopeaks. No assumptions are required regarding the number, locations, and magnitudes of photopeaks. The procedure does not generate negative photopeaks. The resolution of the spectrometer is used for the purpose. The proposed methodology may also contribute to the study of environmental problems, soil characterization, and other near-surface geophysical methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
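The first step of the procedure, reducing channel noise along the flight line, amounts to averaging each energy channel over neighbouring one-second spectra. A minimal sketch under that reading (the window half-width and the toy spectra are assumptions, and the paper's actual filter may differ):

```python
def smooth_along_line(spectra, half_width=2):
    """Average each energy channel over neighbouring one-second spectra
    along the flight line, reducing per-channel counting noise. `spectra`
    is a list of spectra; each spectrum is a list of channel counts."""
    n = len(spectra)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        window = spectra[lo:hi]
        out.append([sum(s[ch] for s in window) / len(window)
                    for ch in range(len(spectra[0]))])
    return out
```

Smoothing along the line (rather than across energy) preserves photopeak shape within each spectrum while exploiting the spatial correlation of successive measurements.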
Data collection and analysis strategies for phMRI.
Mandeville, Joseph B; Liu, Christina H; Vanduffel, Wim; Marota, John J A; Jenkins, Bruce G
2014-09-01
Although functional MRI traditionally has been applied mainly to study changes in task-induced brain function, evolving acquisition methodologies and improved knowledge of signal mechanisms have increased the utility of this method for studying responses to pharmacological stimuli, a technique often dubbed "phMRI". The proliferation of higher magnetic field strengths and the use of exogenous contrast agent have boosted detection power, a critical factor for successful phMRI due to the restricted ability to average multiple stimuli within subjects. Receptor-based models of neurovascular coupling, including explicit pharmacological models incorporating receptor densities and affinities and data-driven models that incorporate weak biophysical constraints, have demonstrated compelling descriptions of phMRI signal induced by dopaminergic stimuli. This report describes phMRI acquisition and analysis methodologies, with an emphasis on data-driven analyses. As an example application, statistically efficient data-driven regressors were used to describe the biphasic response to the mu-opioid agonist remifentanil, and antagonism using dopaminergic and GABAergic ligands revealed modulation of the mesolimbic pathway. Results illustrate the power of phMRI as well as our incomplete understanding of mechanisms underlying the signal. Future directions are discussed for phMRI acquisitions in human studies, for evolving analysis methodologies, and for interpretative studies using the new generation of simultaneous PET/MRI scanners. This article is part of the Special Issue Section entitled 'Neuroimaging in Neuropharmacology'. Copyright © 2014 Elsevier Ltd. All rights reserved.
Diagnostic methodology for incipient system disturbance based on a neural wavelet approach
NASA Astrophysics Data System (ADS)
Won, In-Ho
Since incipient system disturbances are easily mixed up with other events or noise sources, the signal from the system disturbance can be neglected or misidentified as noise. Because the available knowledge and information obtained from the measurements are incomplete or inexact, the use of artificial intelligence (AI) tools to overcome these uncertainties and limitations was explored. A methodology integrating the feature extraction efficiency of the wavelet transform with the classification capabilities of neural networks is developed for signal classification in the context of detecting incipient system disturbances. The synergistic combination of wavelets and neural networks presents more strengths and fewer weaknesses than either technique taken alone. A wavelet feature extractor is developed to form concise feature vectors for neural network inputs. The feature vectors are calculated from wavelet coefficients to reduce redundancy and computational expense. In this procedure, statistical features that apply the fractal concept to the wavelet coefficients play a crucial role in the wavelet feature extractor. To verify the proposed methodology, two applications are investigated and successfully tested. The first involves pump cavitation detection using a dynamic pressure sensor. The second pertains to incipient pump cavitation detection using signals obtained from a current sensor. Comparisons between the three proposed feature vectors, and with statistical techniques, show that the variance feature extractor provides the better approach in the performed applications.
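A variance-based wavelet feature extractor of the kind described can be sketched with the Haar wavelet: decompose the signal level by level and keep the variance of the detail coefficients at each level as the feature vector. This is a generic illustration (Haar basis, three levels, made-up sample values), not the dissertation's exact extractor.

```python
def haar_step(signal):
    """One level of the Haar transform: approximation and detail halves."""
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return approx, detail

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def variance_features(signal, levels=3):
    """Variance of the detail coefficients at each level: a concise
    feature vector for a neural network classifier."""
    feats, approx = [], list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        feats.append(variance(detail))
    return feats

# Feature vector for a short vibration-like record (illustrative values)
features = variance_features([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0,
                              5.0, 3.0, 5.0, 8.0, 9.0, 7.0, 9.0, 3.0])
```

The point of using per-level variances rather than the raw coefficients is dimensionality: a long signal is reduced to one number per scale, cutting redundancy and the network's input size.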
NGS' GRAV-D Project Brings Advances in Aerogravimetry
NASA Astrophysics Data System (ADS)
Childers, V. A.; Preaux, S. A.; Diehl, T. M.; Li, X.; Weil, C.
2011-12-01
NOAA's National Geodetic Survey has undertaken an extensive airborne gravity campaign to help replace the nation's vertical datum by 2022. After receiving Congressional funding in FY10 and FY11, the GRAV-D project has now surveyed 13.45% of the total area (as of abstract submittal time). The survey has now been flown on a number of aircraft, both jets and turboprops. Early work was performed at 35,000 ft and 280 kts. Since summer of 2009, the survey altitude has been lowered to 20,000 ft to enhance signal recovery and to reduce the amplification of noise in the downward continuation needed for gravity field blending. The high altitude and speed of the survey have forced a re-evaluation of all aspects of the airborne gravity processing methodology. This presentation will update the community on the progress of the project, summarize the various processing improvements implemented, and discuss the magnitude of their effects. Improvements and research include: a new in-house gravity processing software package called "Newton", kinematic GPS processing variables and their impacts on final gravity products, and evaluation of gravimeter off-level corrections, among other topics.
Computational Burden Resulting from Image Recognition of High Resolution Radar Sensors
López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L.; Rufo, Elena
2013-01-01
This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. From actual data collected by radar, the stages and algorithms needed to obtain ISAR images are revised, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so to determine the most suitable implementation platform the analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of the two processes, image generation and comparison with the database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation. PMID:23609804
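The first stage mentioned, high resolution range profile (HRRP) generation, is at heart an inverse Fourier transform of the frequency-domain echo: each point scatterer contributes a linear phase ramp whose transform peaks at the scatterer's range bin. A minimal sketch with a naive DFT and a single synthetic scatterer (the echo model and bin index are illustrative, and a real system would use an FFT and windowing):

```python
import cmath

def range_profile(echo):
    """Magnitude of the inverse DFT of frequency-domain echo samples:
    a minimal high resolution range profile."""
    n = len(echo)
    return [abs(sum(echo[k] * cmath.exp(2j * cmath.pi * k * m / n)
                    for k in range(n)) / n)
            for m in range(n)]

# One synthetic scatterer at range bin 5 produces a linear phase ramp
n, true_bin = 16, 5
echo = [cmath.exp(-2j * cmath.pi * k * true_bin / n) for k in range(n)]
profile = range_profile(echo)
```

The naive DFT here costs O(n^2), which is exactly the kind of computational burden the paper quantifies; production implementations use O(n log n) FFTs, often on FPGA or GPU platforms.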
Prabhakar, Ramachandran
2012-01-01
Source to surface distance (SSD) plays a very important role in external beam radiotherapy treatment verification. In this study, a simple technique has been developed to verify the SSD automatically with lasers. The study also suggests a methodology for determining the respiratory signal with lasers. Two lasers, red and green, are mounted on the collimator head of a Clinac 2300 C/D linac, along with a camera, to determine the SSD. Software (SSDLas) was developed to estimate the SSD automatically from the images captured by a 12-megapixel camera. To determine the SSD to a patient surface, the external body contour of the central-axis transverse computed tomography (CT) cut is imported into the software. Another important aspect in radiotherapy is the generation of the respiratory signal. The changes in the lasers' separation as the patient breathes are converted to produce a respiratory signal. Multiple frames of laser images were acquired from the camera mounted on the collimator head, and each frame was analyzed with SSDLas to generate the respiratory signal. The SSD as observed with the ODI on the machine and the SSD measured by the SSDLas software were found to be within the tolerance limit. The methodology described for generating the respiratory signals will be useful for the treatment of mobile tumors such as those of the lung, liver, breast and pancreas. The technique described for determining the SSD and generating respiratory signals using lasers is cost effective and simple to implement. Copyright © 2011 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
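The underlying geometry, laser spot separation varying linearly with distance to the surface, can be sketched with similar triangles. The mounting constants below (separation, tilt angle) are hypothetical; the paper's own calibration is not reproduced here.

```python
import math

# Hypothetical mounting geometry (for illustration only).
S0 = 12.0                      # cm: laser separation at the collimator head
THETA = math.radians(1.5)      # inward tilt of each laser beam

def ssd_from_spot_separation(s_cm):
    """Two inward-tilted beams converge with distance, so the imaged spot
    separation s(z) = S0 - 2*z*tan(THETA) encodes the distance z (the SSD)."""
    return (S0 - s_cm) / (2.0 * math.tan(THETA))

# Forward model at a typical 100 cm SSD, then invert:
z = 100.0
s = S0 - 2.0 * z * math.tan(THETA)
print(round(ssd_from_spot_separation(s), 6))   # → 100.0
```

A breathing surface modulates z, so applying the same inversion to the spot separation measured frame-by-frame yields the respiratory trace described in the abstract.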
NASA Astrophysics Data System (ADS)
Chi, Xu; Dongming, Guo; Zhuji, Jin; Renke, Kang
2010-12-01
A signal processing method for the friction-based endpoint detection system of a chemical mechanical polishing (CMP) process is presented. The method uses wavelet threshold denoising to reduce the noise contained in the measured original signal, extracts the Kalman filter innovation from the denoised signal as the feature signal, and judges the CMP endpoint based on the behavior of the Kalman filter innovation sequence during the CMP process. Applying this signal processing method, endpoint detection experiments on the Cu CMP process were carried out. The results show that the method can identify the endpoint of the Cu CMP process.
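The two-stage pipeline, wavelet threshold denoising followed by a Kalman innovation detector, can be sketched as below. A one-level Haar transform with soft thresholding stands in for the paper's (unspecified) wavelet choice, and a scalar constant-level Kalman filter supplies the innovation sequence; the friction signal is synthetic.

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet soft-threshold denoising (a minimal stand-in
    for the paper's wavelet threshold method)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)      # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)      # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def kalman_innovations(z, q=1e-4, r=1e-2):
    """Scalar constant-level Kalman filter; a jump in the innovation
    sequence flags the endpoint (friction change)."""
    xhat, p, innov = z[0], 1.0, []
    for zk in z:
        p = p + q                      # predict
        k = p / (p + r)                # Kalman gain
        innov.append(zk - xhat)        # innovation: measurement - prediction
        xhat = xhat + k * innov[-1]    # update
        p = (1 - k) * p
    return np.array(innov)

# Synthetic friction signal: a level shift at sample 500 marks the endpoint.
rng = np.random.default_rng(0)
sig = np.where(np.arange(1000) < 500, 1.0, 0.6) + 0.05 * rng.standard_normal(1000)
innov = kalman_innovations(haar_denoise(sig, thresh=0.05))
print(int(np.argmax(np.abs(innov))))   # ≈ 500, the endpoint sample
```

The denoising step matters: without it, noise-driven innovations narrow the margin between the endpoint spike and the background.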
Methodology and Method and Apparatus for Signaling with Capacity Optimized Constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)
2016-01-01
Communication systems are described that use geometrically shaped PSK constellations with increased capacity compared to conventional PSK constellations operating within a similar SNR band. The geometrically shaped PSK constellation is optimized based upon parallel decoding capacity. In many embodiments, a capacity-optimized geometrically shaped constellation can be used to replace a conventional constellation as part of a firmware upgrade to transmitters and receivers within a communication system. In a number of embodiments, the geometrically shaped constellation is optimized for an Additive White Gaussian Noise channel or a fading channel. In numerous embodiments, the communication uses adaptive rate encoding, and the location of points within the geometrically shaped constellation changes as the code rate changes.
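The quantity such an optimization maximizes, mutual information of an equiprobable constellation over AWGN, can be estimated by Monte Carlo. The sketch below evaluates a plain 8-PSK constellation at two SNRs; the patent's actual optimizer and cost (parallel decoding capacity) are not reproduced here.

```python
import numpy as np

def psk_points(m):
    """Unit-energy m-PSK; geometric shaping would perturb these phases."""
    return np.exp(2j * np.pi * np.arange(m) / m)

def awgn_mutual_information(points, snr_db, n=20000, seed=1):
    """Monte-Carlo estimate of I(X;Y) in bits/symbol for an equiprobable
    constellation over a complex AWGN channel (Es = 1)."""
    rng = np.random.default_rng(seed)
    sigma2 = 10.0 ** (-snr_db / 10.0)
    x = rng.choice(points, size=n)
    noise = np.sqrt(sigma2 / 2.0) * (rng.standard_normal(n)
                                     + 1j * rng.standard_normal(n))
    y = x + noise
    d2 = np.abs(y[:, None] - points[None, :]) ** 2 / sigma2   # to every symbol
    dx2 = np.abs(y - x) ** 2 / sigma2                         # to the sent symbol
    # I = log2(M) - E[ log2( sum_j exp(dx2 - d2_j) ) ]
    log_ratio = np.log2(np.exp(dx2[:, None] - d2).sum(axis=1))
    return np.log2(len(points)) - log_ratio.mean()

lo = awgn_mutual_information(psk_points(8), snr_db=5.0)
hi = awgn_mutual_information(psk_points(8), snr_db=15.0)
print(0.0 < lo < hi < 3.0)   # True: capacity grows with SNR, capped at 3 bits
```

Treating the point locations as free parameters and maximizing this estimate over them is the essence of geometric shaping.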
Stacked Autoencoders for Outlier Detection in Over-the-Horizon Radar Signals
Protopapadakis, Eftychios; Doulamis, Anastasios; Doulamis, Nikolaos; Dres, Dimitrios; Bimpas, Matthaios
2017-01-01
Detection of outliers in radar signals is a considerable challenge in maritime surveillance applications. High-Frequency Surface-Wave (HFSW) radars have attracted significant interest as potential tools for long-range target identification and outlier detection at over-the-horizon (OTH) distances. However, a number of disadvantages, such as their low spatial resolution and the presence of clutter, have a negative impact on their accuracy. In this paper, we explore the applicability of deep learning techniques for detecting deviations from the norm in behavioral patterns of vessels (outliers) as they are tracked from an OTH radar. The proposed methodology exploits the nonlinear mapping capabilities of deep stacked autoencoders in combination with density-based clustering. A comparative experimental evaluation of the approach shows promising results. PMID:29312449
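The core mechanism, flagging points that the autoencoder reconstructs poorly, can be illustrated with a linear autoencoder, whose optimal weights are known to span the top principal components; PCA therefore serves as a minimal stand-in for the stacked autoencoder (the deep, nonlinear version and the density-based clustering stage are not reproduced). The data are synthetic.

```python
import numpy as np

def fit_linear_ae(X, k):
    """A linear autoencoder's optimal weights span the top-k principal
    components, so PCA is used here as a minimal stand-in."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                          # mean and encoder/decoder weights

def outlier_scores(X, mu, W):
    Z = (X - mu) @ W.T                         # encode
    R = Z @ W + mu                             # decode
    return np.linalg.norm(X - R, axis=1)       # reconstruction error

rng = np.random.default_rng(0)
# "Normal" vessel-track features live near a 2-D plane in 5-D space;
# injected outliers do not.
normal = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 5))
outlier = 6.0 * rng.standard_normal((5, 5))
X = np.vstack([normal, outlier])

mu, W = fit_linear_ae(normal, k=2)
scores = outlier_scores(X, mu, W)
flagged = np.argsort(scores)[-5:]              # top-5 reconstruction errors
print(sorted(int(i) for i in flagged))         # the five injected outliers
```

A deep autoencoder generalizes this to curved "normal" manifolds; the reconstruction-error scoring logic is unchanged.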
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aihara, Taketo; Fukuyama, Atsuhiko; Ikari, Tetsuo
2015-02-28
Three non-destructive methodologies, namely, surface photovoltage (SPV), photoluminescence, and piezoelectric photothermal (PPT) spectroscopies, were adopted to detect the thermal carrier escape from the quantum well (QW) and the radiative and non-radiative carrier recombinations, respectively, in strain-balanced InGaAs/GaAsP multiple-quantum-well (MQW)-inserted GaAs p-i-n solar cell structure samples. Although the optical absorbance signal intensity was proportional to the number of QW stacks, the signal intensities of the SPV and PPT methods decreased at high stack numbers. To explain the temperature dependence of these signal intensities, we proposed a model that considers three carrier dynamics: thermal escape from the QW, and non-radiative and radiative carrier recombinations within the QW. From the fitting procedures, it was estimated that the activation energies of the thermal escape ΔE_barr and non-radiative recombination ΔE_NR were 68 and 29 meV, respectively, for a 30-stacked MQW sample. The estimated ΔE_barr value agreed well with the difference between the first electron subband and the top of the potential barrier in the conduction band. We found that ΔE_barr remained constant at approximately 70 meV even with increasing QW stack number. However, the ΔE_NR value monotonically increased with the number of stacks. Since this implies that non-radiative recombination becomes improbable as the number of stacks increases, we found that the radiative recombination probability for electrons photoexcited within the QW increased at a large number of QW stacks. Additional processes of escaping and recapturing of carriers at neighboring QWs were discussed. As a result, the combination of the three non-destructive methodologies provided new insights for optimizing the MQW components to further improve the cell performance.
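The competition between radiative recombination and the two thermally activated loss channels can be sketched with a toy rate model: each loss channel grows as exp(-ΔE/kT), quenching the signal as temperature rises. The functional form and prefactors below are illustrative, not the paper's exact rate equations; only the two activation energies are taken from the abstract.

```python
import numpy as np

K_B = 8.617e-5      # Boltzmann constant, eV/K

def pl_intensity(T, dE_barr, dE_nr, a=50.0, b=5.0):
    """Toy three-channel competition model: radiative recombination
    competes with thermally activated escape (dE_barr) and non-radiative
    loss (dE_nr). Prefactors a, b are arbitrary illustrative values."""
    escape = a * np.exp(-dE_barr / (K_B * T))
    nonrad = b * np.exp(-dE_nr / (K_B * T))
    return 1.0 / (1.0 + escape + nonrad)

T = np.linspace(80, 300, 5)
I = pl_intensity(T, dE_barr=0.068, dE_nr=0.029)   # 68 and 29 meV from the paper
print(I)   # intensity quenches as T rises and the loss channels open
```

Fitting measured intensity-versus-temperature curves with such a model is how the activation energies ΔE_barr and ΔE_NR are extracted.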
Identification of cytokine-specific sensory neural signals by decoding murine vagus nerve activity.
Zanos, Theodoros P; Silverman, Harold A; Levy, Todd; Tsaava, Tea; Battinelli, Emily; Lorraine, Peter W; Ashe, Jeffrey M; Chavan, Sangeeta S; Tracey, Kevin J; Bouton, Chad E
2018-05-22
The nervous system maintains physiological homeostasis through reflex pathways that modulate organ function. This process begins when changes in the internal milieu (e.g., blood pressure, temperature, or pH) activate visceral sensory neurons that transmit action potentials along the vagus nerve to the brainstem. IL-1β and TNF, inflammatory cytokines produced by immune cells during infection and injury, and other inflammatory mediators have been implicated in activating sensory action potentials in the vagus nerve. However, it remains unclear whether neural responses encode cytokine-specific information. Here we develop methods to isolate and decode specific neural signals to discriminate between two different cytokines. Nerve impulses recorded from the vagus nerve of mice exposed to IL-1β and TNF were sorted into groups based on their shape and amplitude, and their respective firing rates were computed. This revealed sensory neural groups responding specifically to TNF and IL-1β in a dose-dependent manner. These cytokine-mediated responses were subsequently decoded using a Naive Bayes algorithm that discriminated between no exposure and exposures to IL-1β and TNF (mean successful identification rate 82.9 ± 17.8%, chance level 33%). Recordings obtained in IL-1 receptor-KO mice were devoid of IL-1β-related signals but retained their responses to TNF. Genetic ablation of TRPV1 neurons attenuated the vagus neural signals mediated by IL-1β, and distal lidocaine nerve block attenuated all vagus neural signals recorded. The results obtained in this study using the methodological framework suggest that cytokine-specific information is present in sensory neural signals within the vagus nerve. Copyright © 2018 the Author(s). Published by PNAS.
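The decoding step, a Naive Bayes classifier on per-unit firing-rate features, can be sketched as follows. The firing rates and class structure below are invented for illustration; only the classifier family and the three-class setup (no exposure, IL-1β, TNF) follow the abstract.

```python
import numpy as np

class TinyGaussianNB:
    """Minimal Gaussian Naive Bayes, the decoder family the study applied
    to sorted-unit firing rates (illustrative re-implementation)."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.logprior = np.log([np.mean(y == c) for c in self.classes])
        return self
    def predict(self, X):
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[np.argmax(ll + self.logprior, axis=1)]

rng = np.random.default_rng(42)
# Hypothetical firing rates (Hz) of two sorted units under three conditions
# (0 = saline, 1 = IL-1beta, 2 = TNF); each cytokine drives "its" unit harder.
means = {0: [2.0, 2.0], 1: [8.0, 2.5], 2: [2.5, 8.0]}
X = np.vstack([rng.normal(means[c], 1.0, size=(60, 2)) for c in (0, 1, 2)])
y = np.repeat([0, 1, 2], 60)

clf = TinyGaussianNB().fit(X[::2], y[::2])          # train on half the trials
acc = float(np.mean(clf.predict(X[1::2]) == y[1::2]))
print(acc)   # well above the 1/3 chance level reported for the real decoder
```

In the study the feature vectors came from spike-sorted vagus recordings; here the geometry of the problem (class-specific rate profiles) is what the toy data mimic.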
Implications for bidirectional signaling between afferent nerves and urothelial cells-ICI-RS 2014.
Kanai, Anthony; Fry, Christopher; Ikeda, Youko; Kullmann, Florenta Aura; Parsons, Brian; Birder, Lori
2016-02-01
To present a synopsis of the presentations and discussions from Think Tank I, "Implications for afferent-urothelial bidirectional communication" of the 2014 International Consultation on Incontinence-Research Society (ICI-RS) meeting in Bristol, UK. The participants presented what is new, currently understood or still unknown on afferent-urothelial signaling mechanisms. New avenues of research and experimental methodologies that are or could be employed were presented and discussed. It is clear that afferent-urothelial interactions are integral to the regulation of normal bladder function and that its disruption can have detrimental consequences. The urothelium is capable of releasing numerous signaling factors that can affect sensory neurons innervating the suburothelium. However, the understanding of how factors released from urothelial cells and afferent nerve terminals regulate one another is incomplete. Utilization of techniques such as viruses that genetically encode Ca(2+) sensors, based on calmodulin and green fluorescent protein, has helped to address the cellular mechanisms involved. Additionally, the epithelial-neuronal interactions in the urethra may also play a significant role in lower urinary tract regulation and merit further investigation. The signaling capabilities of the urothelium and afferent nerves are well documented, yet how these signals are integrated to regulate bladder function is unclear. There is unquestionably a need for expanded methodologies to further our understanding of lower urinary tract sensory mechanisms and their contribution to various pathologies. © 2016 Wiley Periodicals, Inc.
Sparse representation of whole-brain fMRI signals for identification of functional networks.
Lv, Jinglei; Jiang, Xi; Li, Xiang; Zhu, Dajiang; Chen, Hanbo; Zhang, Tuo; Zhang, Shu; Hu, Xintao; Han, Junwei; Huang, Heng; Zhang, Jing; Guo, Lei; Liu, Tianming
2015-02-01
There have been several recent studies that used sparse representation for fMRI signal analysis and activation detection based on the assumption that each voxel's fMRI signal is linearly composed of sparse components. Previous studies have employed sparse coding to model functional networks in various modalities and scales. These prior contributions inspired the exploration of whether/how sparse representation can be used to identify functional networks in a voxel-wise way and on the whole brain scale. This paper presents a novel, alternative methodology of identifying multiple functional networks via sparse representation of whole-brain task-based fMRI signals. Our basic idea is that all fMRI signals within the whole brain of one subject are aggregated into a big data matrix, which is then factorized into an over-complete dictionary basis matrix and a reference weight matrix via an effective online dictionary learning algorithm. Our extensive experimental results have shown that this novel methodology can uncover multiple functional networks that can be well characterized and interpreted in spatial, temporal and frequency domains based on current brain science knowledge. Importantly, these well-characterized functional network components are quite reproducible in different brains. In general, our methods offer a novel, effective and unified solution to multiple fMRI data analysis tasks including activation detection, de-activation detection, and functional network identification. Copyright © 2014 Elsevier B.V. All rights reserved.
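The factorization at the heart of the method, signal matrix ≈ dictionary × sparse coefficient matrix, can be sketched with a tiny batch analogue of online dictionary learning: alternate sparse coding (here by iterative soft-thresholding) with a least-squares atom update. The data dimensions and the synthetic "networks" are illustrative, not fMRI-scale.

```python
import numpy as np

def ista(D, x, lam, n_iter=50):
    """Sparse code a with x ≈ D @ a via iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2             # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a + D.T @ (x - D @ a) / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)
    return a

def dictionary_learning(X, n_atoms, lam=0.05, n_epochs=5, seed=0):
    """Tiny batch analogue of online dictionary learning: alternate sparse
    coding of the columns of X and a least-squares atom update."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_epochs):
        A = np.column_stack([ista(D, x, lam) for x in X.T])
        D = X @ np.linalg.pinv(A)             # minimize ||X - D A||_F over D
        norms = np.maximum(np.linalg.norm(D, axis=0), 1e-12)
        D /= norms
        A *= norms[:, None]                   # keep the product D @ A unchanged
    return D, A

# Synthetic "fMRI": 40 time points x 200 voxels; each voxel sparsely mixes
# a few of 3 latent network time courses, plus noise.
rng = np.random.default_rng(1)
nets = rng.standard_normal((40, 3))
mix = rng.standard_normal((3, 200)) * (rng.random((3, 200)) < 0.3)
X = nets @ mix + 0.01 * rng.standard_normal((40, 200))

D, A = dictionary_learning(X, n_atoms=5)
rel_err = np.linalg.norm(X - D @ A) / np.linalg.norm(X)
print(rel_err < 0.5)   # True: a 5-atom dictionary captures the 3 networks
```

In the paper's setting the columns of the dictionary are candidate network time courses and each row of the coefficient matrix, mapped back to voxel positions, is a spatial network map.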
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lazareva, Maria V.
2010-05-01
In this paper we show that the biologically motivated concept of time-pulse encoding offers a set of advantages (a single methodological basis, universality, simplicity of tuning, learning and programming, among others) in the creation and design of sensor systems with parallel input-output and processing for 2D-structure hybrid and next-generation neuro-fuzzy neurocomputers. We show design principles for programmable relational optoelectronic time-pulse encoded processors based on continuous logic, order logic and temporal wave processes. We consider a structure that executes analog signal extraction and sorting of analog and time-pulse coded variables. We propose an optoelectronic realization of the basic relational order-logic element, consisting of time-pulse coded photoconverters (pulse-width and pulse-phase modulators) with direct and complementary outputs, a sorting network of logical elements, and programmable commutation blocks. We estimate the technical parameters of devices and processors built on such base elements through simulation and experimental research: optical input signal power 0.2-20 uW, processing time 1-10 us, supply voltage 1-3 V, power consumption 10-100 uW, extended functional possibilities, and learning capability. We discuss possible rules and principles for learning and programmable tuning to a required function or relational operation, and the realization of hardware blocks for modifications of such processors. We show that sorting machines, neural networks and hybrid data-processing systems with untraditional numerical systems and picture operands can be created on the basis of such quasi-universal, simple hardware blocks with flexible programmable tuning.
Delakis, Ioannis; Wise, Robert; Morris, Lauren; Kulama, Eugenia
2015-11-01
The purpose of this work was to evaluate the contrast-detail performance of full field digital mammography (FFDM) systems using ideal (Hotelling) observer Signal-to-Noise Ratio (SNR) methodology and to ascertain whether it can be considered an alternative to the conventional, automated analysis of CDMAM phantom images. Five FFDM units currently used in the national breast screening programme were evaluated, which differed with respect to age, detector, Automatic Exposure Control (AEC) and target/filter combination. Contrast-detail performance was analysed using CDMAM and the ideal observer SNR methodology. The ideal observer SNR was calculated for input signal originating from gold discs of varying thicknesses and diameters, and then used to estimate the threshold gold thickness for each diameter, as per CDMAM analysis. The variability of both methods and the dependence of CDMAM analysis on phantom manufacturing discrepancies were also investigated. Results from both the CDMAM and ideal observer methodologies were informative differentiators of FFDM systems' contrast-detail performance, displaying comparable patterns with respect to the FFDM systems' type and age. CDMAM results suggested higher threshold gold thickness values than the ideal observer methodology, especially for small-diameter details, which can be attributed to the behaviour of the CDMAM phantom used in this study. In addition, ideal observer methodology results showed lower variability than CDMAM results. The ideal observer SNR methodology can provide a useful metric of FFDM systems' contrast-detail characteristics and could be considered a surrogate for conventional, automated analysis of CDMAM images. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
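The ideal (Hotelling) observer SNR for a known signal s in Gaussian noise with covariance K is SNR² = sᵀK⁻¹s, which in white noise reduces to ‖s‖/σ. A minimal sketch of how a threshold contrast per disc diameter follows from this is below; the detectability target and noise level are illustrative, not the study's measured values.

```python
import numpy as np

def ideal_observer_snr(signal, sigma=1.0):
    """Ideal (Hotelling) observer SNR for a known signal in white Gaussian
    noise: SNR^2 = s^T K^-1 s reduces to ||s|| / sigma."""
    return np.linalg.norm(signal.ravel()) / sigma

def disc_signal(diameter_px, contrast):
    """Uniform-contrast disc template (a stand-in for a CDMAM gold disc)."""
    r = diameter_px / 2.0
    yy, xx = np.mgrid[:diameter_px + 2, :diameter_px + 2] - (diameter_px + 1) / 2.0
    return contrast * ((xx ** 2 + yy ** 2) <= r ** 2)

# Threshold contrast at which a disc reaches a detectability target
# (illustrative target and unit noise).
target_snr, sigma = 5.0, 1.0
def threshold_contrast(diameter_px):
    return target_snr * sigma / ideal_observer_snr(disc_signal(diameter_px, 1.0))

print(threshold_contrast(4) > threshold_contrast(8))   # True: small discs need more contrast
```

Since the SNR scales with contrast times the square root of the disc area, thresholds rise for small-diameter details, matching the qualitative trend in the abstract. With measured, non-white noise, K⁻¹ replaces the 1/σ² factor.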
On the Design of Attitude-Heading Reference Systems Using the Allan Variance.
Hidalgo-Carrió, Javier; Arnold, Sascha; Poulakis, Pantelis
2016-04-01
The Allan variance is a method to characterize stochastic random processes. The technique was originally developed to characterize the stability of atomic clocks and has also been successfully applied to the characterization of inertial sensors. Inertial navigation systems (INS) can provide accurate results in a short time, which tend to rapidly degrade in longer time intervals. During the last decade, the performance of inertial sensors has significantly improved, particularly in terms of signal stability, mechanical robustness, and power consumption. The mass and volume of inertial sensors have also been significantly reduced, offering system-level design and accommodation advantages. This paper presents a complete methodology for the characterization and modeling of inertial sensors using the Allan variance, with direct application to navigation systems. Although the concept of sensor fusion is relatively straightforward, accurate characterization and sensor-information filtering is not a trivial task, yet they are essential for good performance. A complete and reproducible methodology utilizing the Allan variance, including all the intermediate steps, is described. An end-to-end (E2E) process for sensor-error characterization and modeling up to the final integration in the sensor-fusion scheme is explained in detail. The strength of this approach is demonstrated with representative tests on novel, high-grade inertial sensors. Experimental navigation results are presented from two distinct robotic applications: a planetary exploration rover prototype and an autonomous underwater vehicle (AUV).
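The Allan variance itself is simple to compute: average the signal in clusters of length τ and take half the mean squared difference of adjacent cluster averages. The sketch below verifies the textbook result that white rate noise gives a -1/2 slope on a log-log Allan deviation plot, the signature used to read noise coefficients off the curve.

```python
import numpy as np

def allan_variance(rate, fs, taus):
    """Non-overlapping Allan variance of a sampled rate signal at the
    requested averaging times `taus` (s)."""
    out = []
    for tau in taus:
        m = int(round(tau * fs))                  # samples per cluster
        n = len(rate) // m
        means = rate[:n * m].reshape(n, m).mean(axis=1)
        out.append(0.5 * np.mean(np.diff(means) ** 2))
    return np.array(out)

# For pure white noise the Allan deviation falls as tau^(-1/2).
rng = np.random.default_rng(0)
fs = 100.0
taus = [0.1, 1.0, 10.0]
adev = np.sqrt(allan_variance(rng.standard_normal(200000), fs, taus))
slopes = np.diff(np.log10(adev)) / np.diff(np.log10(taus))
print(np.round(slopes, 1))   # both slopes near -0.5
```

Real inertial sensors depart from this line at long τ (bias instability, rate random walk), and those departures are what the characterization step models for the sensor-fusion filter.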
Cao, Ruofan; Naivar, Mark A; Wilder, Mark; Houston, Jessica P
2014-01-01
Fluorescence lifetime measurements provide information about the fluorescence relaxation, or intensity decay, of organic fluorophores, fluorescent proteins, and other inorganic molecules that fluoresce. The fluorescence lifetime is emerging in flow cytometry and is helpful in a variety of multiparametric, single-cell measurements because it is not impacted by the nonlinearity that can occur with fluorescence intensity measurements. Yet time-resolved cytometry systems rely on major hardware modifications, making the methodology difficult to reproduce. The motivation of this work is to measure fluorescence lifetimes using an unmodified flow cytometer, by taking advantage of the dynamic nature of flow cytometry sample detection and applying digital signal processing methods. We collect a new lifetime-dependent parameter, referred to herein as the fluorescence-pulse-delay (FPD), and show that it is a valid representation of the average fluorescence lifetime. For verification, we generated cytometric pulses in simulation, with light emitting diode (LED) pulsation, and with true fluorescence measurements of cells and microspheres. Each pulse is digitized and used in algorithms to extract an average fluorescence lifetime inherent in the signal. A range of fluorescence lifetimes is measurable with this approach, including standard organic fluorophore lifetimes (∼1 to 22 ns) as well as small, simulated shifts (0.1 ns) under standard conditions (reported herein). This contribution demonstrates how digital data acquisition and signal processing can reveal time-dependent information, foreshadowing the exploitation of full waveform analysis for quantification of similar photo-physical events within single cells. © 2014 The Authors. Published by Wiley Periodicals, Inc. PMID:25274073
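Why a pulse delay encodes the lifetime: the fluorescence pulse is the excitation (transit) pulse convolved with an exponential decay, so its centroid lags the excitation centroid by the mean lifetime. The simulation below uses invented pulse shapes and timings (not the paper's instrument parameters) to illustrate this.

```python
import numpy as np

dt = 0.1                                    # ns per sample (hypothetical)
t = np.arange(2000) * dt                    # 200 ns record
excitation = np.exp(-0.5 * ((t - 10.0) / 2.0) ** 2)   # ~2 ns transit pulse

def fluorescence_pulse(tau_ns):
    """Excitation convolved with an exponential decay of lifetime tau."""
    irf = np.exp(-t / tau_ns) / tau_ns
    return np.convolve(excitation, irf)[:len(t)] * dt

def centroid(x):
    return np.sum(t * x) / np.sum(x)

# The fluorescence-pulse-delay (centroid lag) recovers the lifetime:
for tau in (1.0, 4.0, 22.0):
    fpd = centroid(fluorescence_pulse(tau)) - centroid(excitation)
    print(round(fpd, 2))    # tracks tau across the ~1-22 ns range
```

Centroid differencing is only one way to extract the delay; phase-sensitive or fitting approaches on the digitized waveform serve the same purpose.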
A process proof test for model concepts: Modelling the meso-scale
NASA Astrophysics Data System (ADS)
Hellebrand, Hugo; Müller, Christoph; Matgen, Patrick; Fenicia, Fabrizio; Savenije, Huub
In hydrological modelling the use of detailed soil data is sometimes troublesome, since these data are often hard to obtain and, if available at all, difficult to interpret and process in a way that makes them meaningful for the model at hand. Intuitively, the understanding and mapping of dominant runoff processes in the soil show high potential for improving hydrological models. In this study a labour-intensive methodology to assess dominant runoff processes is simplified in such a way that detailed soil maps are no longer needed. Nonetheless, there is an ongoing debate on how to integrate this type of information into hydrological models. In this study, dominant runoff processes (DRP) are mapped for meso-scale basins using the permeability of the substratum, land use information and the slope in a GIS. During a field campaign the processes are validated, and for each DRP assumptions are made concerning its water storage capacity. The latter is done by combining soil data obtained during the field campaign with soil data obtained from the literature. Second, several parsimoniously parameterized conceptual hydrological models are used that incorporate certain aspects of the DRP. The results of these models are compared with a benchmark model, in which the soil is represented by only one lumped parameter, to test the contribution of the DRP to hydrological models. The proposed methodology is tested for 15 meso-scale river basins located in Luxembourg. The main goal of this study is to investigate whether integrating dominant runoff processes, which have high information content concerning soil characteristics, into hydrological models improves simulation results, with a view to regionalization and predictions in ungauged basins. The regionalization procedure gave no clear results.
The calibration procedure and the well-mixed discharge signal of the calibration basins are considered major causes for this, and they made the deconvolution of discharge signals of meso-scale basins problematic. The results also suggest that DRP could very well display some sort of uniqueness of place, which was not foreseen in the methods from which they were derived. Furthermore, a strong seasonal influence on model performance was observed, implying a seasonal dependence of the DRP. When comparing the performance of the DRP models and the benchmark model, no real distinction was found. To improve the performance of the DRP models used in this study, and for the use of conceptual models in general, there is a need for improved identification of the mechanisms that cause the different dominant runoff processes at the meso-scale. To achieve this, more orthogonal data could be of use for a better conceptualization of the DRPs. Model concepts should then be adapted accordingly.
NASA Astrophysics Data System (ADS)
Szuflitowska, B.; Orlowski, P.
2017-08-01
The automated detection system consists of two key steps: extraction of features from EEG signals and classification for the detection of pathological activity. The EEG sequences were analyzed using the Short-Time Fourier Transform, and classification was performed using Linear Discriminant Analysis. The accuracy of the technique was tested on three sets of EEG signals: epilepsy, healthy, and Alzheimer's disease. A classification error below 10% was considered a success. Higher accuracy was obtained for new data of unknown classes than for the testing data. The methodology can be helpful in differentiating epileptic seizures from disturbances in the EEG signal in Alzheimer's disease.
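The STFT-feature plus LDA pipeline can be sketched end to end on synthetic data. The spectral feature (mean STFT magnitude per bin) and the two toy classes below are simplifications invented for the illustration; they are not the paper's clinical features or data.

```python
import numpy as np

def stft_features(x, win=64):
    """Mean STFT magnitude per frequency bin: a compact spectral feature
    vector (a simplified stand-in for the paper's STFT features)."""
    frames = x[:len(x) // win * win].reshape(-1, win) * np.hanning(win)
    return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)

class TinyLDA:
    """Two-class Fisher linear discriminant (ridge-regularized)."""
    def fit(self, X, y):
        m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
        Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
        self.w = np.linalg.solve(Sw + 0.1 * np.eye(X.shape[1]), m1 - m0)
        self.b = -0.5 * self.w @ (m0 + m1)
        return self
    def predict(self, X):
        return (X @ self.w + self.b > 0).astype(int)

# Synthetic epochs: class 1 adds a strong 3 Hz rhythm to broadband noise
# (illustrative classes only, not clinical EEG).
rng = np.random.default_rng(0)
fs, n = 128, 512
def epoch(seizure_like):
    x = rng.standard_normal(n)
    if seizure_like:
        x = x + 2.0 * np.sin(2 * np.pi * 3.0 * np.arange(n) / fs)
    return stft_features(x)
X = np.array([epoch(c) for c in [0] * 40 + [1] * 40])
y = np.array([0] * 40 + [1] * 40)

clf = TinyLDA().fit(X[::2], y[::2])                 # train on half the epochs
acc = float((clf.predict(X[1::2]) == y[1::2]).mean())
print(acc > 0.9)   # True on this easily separable toy problem
```

The same structure extends to the three-class case (epilepsy, healthy, Alzheimer's disease) by fitting one discriminant per class pair or a multi-class LDA.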
Circadian Integration of Glutamatergic Signals by Little SAAS in Novel Suprachiasmatic Circuits
Atkins, Norman; Mitchell, Jennifer W.; Romanova, Elena V.; Morgan, Daniel J.; Cominski, Tara P.; Ecker, Jennifer L.; Pintar, John E.; Sweedler, Jonathan V.; Gillette, Martha U.
2010-01-01
Background Neuropeptides are critical integrative elements within the central circadian clock in the suprachiasmatic nucleus (SCN), where they mediate both cell-to-cell synchronization and phase adjustments that cause light entrainment. Forward peptidomics identified little SAAS, derived from the proSAAS prohormone, among novel SCN peptides, but its role in the SCN is poorly understood. Methodology/Principal Findings Little SAAS localization and co-expression with established SCN neuropeptides were evaluated by immunohistochemistry using highly specific antisera and stereological analysis. Functional context was assessed relative to c-FOS induction in light-stimulated animals and on neuronal circadian rhythms in glutamate-stimulated brain slices. We found that little SAAS-expressing neurons comprise the third most abundant neuropeptidergic class (16.4%) with unusual functional circuit contexts. Little SAAS is localized within the densely retinorecipient central SCN of both rat and mouse, but not the retinohypothalamic tract (RHT). Some little SAAS colocalizes with vasoactive intestinal polypeptide (VIP) or gastrin-releasing peptide (GRP), known mediators of light signals, but not arginine vasopressin (AVP). Nearly 50% of little SAAS neurons express c-FOS in response to light exposure in early night. Blockade of signals that relay light information, via NMDA receptors or VIP- and GRP-cognate receptors, has no effect on phase delays of circadian rhythms induced by little SAAS. Conclusions/Significance Little SAAS relays signals downstream of light/glutamatergic signaling from eye to SCN, and independent of VIP and GRP action. These findings suggest that little SAAS forms a third SCN neuropeptidergic system, processing light information and activating phase-shifts within novel circuits of the central circadian clock. PMID:20830308
Hamrin, Tova Hannegård; Radell, Peter J; Fläring, Urban; Berner, Jonas; Eksborg, Staffan
2017-12-28
The aim of the present study was to evaluate the performance of regional oxygen saturation (rSO2) monitoring with near infrared spectroscopy (NIRS) during pediatric inter-hospital transports and to optimize processing of the electronically stored data. Cerebral (rSO2-C) and abdominal (rSO2-A) NIRS sensors were used during transport in air ambulance and connecting ground ambulance. Data were electronically stored by the monitor during transport, extracted and analyzed off-line after the transport. After removal of all zero and floor effect values, the Savitzky-Golay algorithm of data smoothing was applied to the NIRS signal. The second order of smoothing polynomial was used and the optimal number of neighboring points for the smoothing procedure was evaluated. NIRS data from 38 pediatric patients were examined. Reliability, defined as measurements without values of 0 or 15%, was acceptable during transport (> 90% of all measurements). There were, however, individual patients with < 90% reliable measurements during transport, while no patient was found to have < 90% reliable measurements in hospital. Satisfactory noise reduction of the signal, without distortion of the underlying information, was achieved when 20-50 neighbors ("window size") were used. The use of NIRS for measuring rSO2 in clinical studies during pediatric transport in ground and air ambulance is feasible but hampered by unreliable values and signal interference. By applying the Savitzky-Golay algorithm, the signal-to-noise ratio was improved, enabling better post-hoc signal evaluation.
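The post-processing chain described above, drop zero/floor values, then apply a second-order Savitzky-Golay filter with a 20-50-point window, can be sketched directly with `scipy.signal.savgol_filter`. The trace below is synthetic, loosely mimicking a desaturation dip with transport noise and dropouts.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic rSO2-like trace: a slow desaturation dip plus sensor noise and
# occasional zero dropouts.
rng = np.random.default_rng(0)
t = np.arange(600)
clean = 70.0 - 8.0 * np.exp(-0.5 * ((t - 300) / 60.0) ** 2)
raw = clean + 2.0 * rng.standard_normal(t.size)
raw[rng.choice(t.size, 10, replace=False)] = 0.0      # dropout samples

valid = raw > 15.0                 # discard zero/floor values, as in the study
sig = raw[valid]

# Second-order Savitzky-Golay; a 41-point window sits inside the 20-50
# "neighbors" range the study found satisfactory.
smooth = savgol_filter(sig, window_length=41, polyorder=2)

rmse_raw = np.sqrt(np.mean((sig - clean[valid]) ** 2))
rmse_smooth = np.sqrt(np.mean((smooth - clean[valid]) ** 2))
print(rmse_smooth < rmse_raw)   # True: noise reduced without flattening the dip
```

The polynomial fit inside each window is what preserves slow physiological trends (the dip) while suppressing high-frequency noise, which is the "without distortion of the underlying information" property noted in the abstract.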
Multimodal Neuroelectric Interface Development
NASA Technical Reports Server (NTRS)
Trejo, Leonard J.; Wheeler, Kevin R.; Jorgensen, Charles C.; Totah, Joseph (Technical Monitor)
2001-01-01
This project aims to improve performance of NASA missions by developing multimodal neuroelectric technologies for augmented human-system interaction. Neuroelectric technologies will add completely new modes of interaction that operate in parallel with keyboards, speech, or other manual controls, thereby increasing the bandwidth of human-system interaction. We recently demonstrated the feasibility of real-time electromyographic (EMG) pattern recognition for a direct neuroelectric human-computer interface. We recorded EMG signals from an elastic sleeve with dry electrodes, while a human subject performed a range of discrete gestures. A machine-learning algorithm was trained to recognize the EMG patterns associated with the gestures and map them to control signals. Successful applications now include piloting two Class 4 aircraft simulations (F-15 and 757) and entering data with a "virtual" numeric keyboard. Current research focuses on on-line adaptation of EMG sensing and processing and recognition of continuous gestures. We are also extending this on-line pattern recognition methodology to electroencephalographic (EEG) signals. This will allow us to bypass muscle activity and draw control signals directly from the human brain. Our system can reliably detect the µ-rhythm (a periodic EEG signal from motor cortex in the 10 Hz range) with a lightweight headset containing saline-soaked sponge electrodes. The data show that the EEG µ-rhythm can be modulated by real and imagined motions. Current research focuses on using biofeedback to train human subjects to modulate EEG rhythms on demand, and on examining interactions of EEG-based control with EMG-based and manual control. Viewgraphs on these neuroelectric technologies are also included.
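Detecting a 10 Hz motor-cortex rhythm reduces to monitoring band power around 8-12 Hz. The sketch below uses synthetic signals (not the project's recordings) to show the basic detector: strong alpha-band power in an "idle" epoch, attenuated during movement.

```python
import numpy as np

def bandpower(x, fs, f_lo, f_hi):
    """Average power of x in [f_lo, f_hi] Hz via the periodogram."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[band].mean()

# Toy EEG: an "idle motor cortex" epoch carries a 10 Hz rhythm that
# attenuates during (real or imagined) movement -- the modulation such
# an interface detects. Signals here are entirely synthetic.
rng = np.random.default_rng(3)
fs, n = 256, 1024
t = np.arange(n) / fs
idle = 4.0 * np.sin(2 * np.pi * 10.0 * t) + rng.standard_normal(n)
moving = rng.standard_normal(n)

p_idle = bandpower(idle, fs, 8, 12)
p_move = bandpower(moving, fs, 8, 12)
print(p_idle > 4 * p_move)   # True: strong 10 Hz power only in the idle epoch
```

Thresholding (or classifying) this band-power feature over sliding windows is the simplest way to turn rhythm modulation into a binary control signal.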
Methodology and method and apparatus for signaling with capacity optimized constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)
2011-01-01
Communication systems are described in which the transmitter includes a coder configured to receive user bits and output encoded bits at an expanded output encoded bit rate, a mapper configured to map encoded bits to symbols in a symbol constellation, and a modulator configured to generate a signal for transmission via the communication channel using symbols generated by the mapper. In addition, the receiver includes a demodulator configured to demodulate the signal received via the communication channel, a demapper configured to estimate likelihoods from the demodulated signal, and a decoder configured to estimate decoded bits from the likelihoods generated by the demapper. Furthermore, the symbol constellation is a capacity-optimized, geometrically spaced symbol constellation that provides a given capacity at a reduced signal-to-noise ratio compared to a signal constellation that maximizes d.sub.min.
In-situ roundness measurement and correction for pin journals in oscillating grinding machines
NASA Astrophysics Data System (ADS)
Yu, Hongxiang; Xu, Mengchen; Zhao, Jie
2015-01-01
In the mass production of vehicle-engine crankshafts, pin chasing grinding using oscillating grinding machines is a widely accepted method to achieve flexible and efficient performance. However, the eccentric movement of pin journals makes it difficult to develop an in-process roundness measurement scheme for the improvement of contour precision. Here, a new in-situ roundness measurement strategy is proposed with high scanning speed. The measuring mechanism is composed of a V-block and an adaptive telescopic support. The swing pattern of the telescopic support and the V-block is analysed for an equal angle-interval signal sampling. Hence roundness error signal is extracted in frequency domain using a small-signal model of the V-block roundness measurement method and the Fast Fourier Transformation. To implement the roundness data in the CNC coordinate system of an oscillating grinding machine, a transformation function is derived according to the motion model of pin chasing grinding methodology. Computer simulation reveals the relationship between the rotational position of the crankshaft component and the scanning angle of the displacement probe on the V-block, as well as the influence introduced by the rotation centre drift. Prototype investigation indicates the validity of the theoretical analysis and the feasibility of the new strategy.
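The frequency-domain extraction of the roundness error can be sketched with a plain FFT over equal-angle samples. This toy profile and sampling density are assumptions; the paper's small-signal V-block model, which weights each harmonic by a sensor-geometry transfer function before inversion, is omitted here.

```python
import numpy as np

N = 360                                   # equal-angle samples per revolution
theta = 2 * np.pi * np.arange(N) / N
# Hypothetical pin-journal profile: 25 mm nominal radius with a 2 um
# 3-lobe and a 0.5 um 5-lobe roundness error (values invented).
r = 25.0 + 2e-3 * np.cos(3 * theta) + 0.5e-3 * np.sin(5 * theta)

# Harmonic k of the roundness error has amplitude 2|R_k|/N (one-sided FFT).
R = np.fft.rfft(r - r.mean())
amps = 2 * np.abs(R) / N                  # mm per undulation-per-revolution
lobe3, lobe5 = amps[3], amps[5]
```

Because the errors are pure harmonics on the sampling grid, the recovered lobe amplitudes match the injected 2 um and 0.5 um values essentially exactly.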
Supervised normalization of microarrays
Mecham, Brigham H.; Nelson, Peter S.; Storey, John D.
2010-01-01
Motivation: A major challenge in utilizing microarray technologies to measure nucleic acid abundances is ‘normalization’, the goal of which is to separate biologically meaningful signal from other confounding sources of signal, often due to unavoidable technical factors. It is intuitively clear that true biological signal and confounding factors need to be simultaneously considered when performing normalization. However, the most popular normalization approaches do not utilize what is known about the study, both in terms of the biological variables of interest and the known technical factors in the study, such as batch or array processing date. Results: We show here that failing to include all study-specific biological and technical variables when performing normalization leads to biased downstream analyses. We propose a general normalization framework that fits a study-specific model employing every known variable that is relevant to the expression study. The proposed method is generally applicable to the full range of existing probe designs, as well as to both single-channel and dual-channel arrays. We show through real and simulated examples that the method has favorable operating characteristics in comparison to some of the most highly used normalization methods. Availability: An R package called snm implementing the methodology will be made available from Bioconductor (http://bioconductor.org). Contact: jstorey@princeton.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20363728
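The core idea, fitting one model that contains both the biological variables of interest and the known technical variables and then removing only the technical contribution, can be sketched for a single probe. The data are synthetic; the snm package handles full arrays and intensity-dependent effects.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
bio = rng.integers(0, 2, n).astype(float)    # biological variable of interest
batch = rng.integers(0, 2, n).astype(float)  # known technical factor (batch)
# Synthetic single-probe expression: biology effect 1.0, batch effect 2.0.
y = 1.0 * bio + 2.0 * batch + 0.1 * rng.standard_normal(n)

# Fit one study-specific model containing BOTH variable types jointly...
X = np.column_stack([np.ones(n), bio, batch])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# ...then remove only the technical contribution, preserving the biology.
y_norm = y - beta[2] * batch
```

Normalizing against batch alone, without the biological variable in the model, would bias the biological effect whenever batch and biology are correlated; the joint fit avoids that.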
The Wigner-Ville Transform, An Approach to Interpret GPR Data: Outlining a Risk Zone
NASA Astrophysics Data System (ADS)
Chavez, R. E.; Samano, M. A.; Camara, M. E.; Tejero, A.; Flores-Marquez, L. E.; Arango, C.; Velazco, V.
2006-12-01
In this investigation, a time-frequency analysis is performed, based on the decomposition of the GPR signal into high and low frequencies. This process is combined with a statistical approach to detect signal changes in time and position simultaneously. The spectral analysis is carried out through the Wigner-Ville distribution (WVD). A cross-correlation can be computed between the original signal and the time-frequency components to obtain structural anomalies in the GPR observations and to perform a correlation with the available geology. An example of this methodology is presented, in which a series of traces were analyzed from a GPR profile surveyed in an eastern area of Mexico City. This is a heavily urbanized region built on the bottom of an ancient lake. The sediments are poorly consolidated, and the water extraction rate has increased the areas of subsidence. Nowadays, most family homes and public buildings, mainly schools, have started to suffer heavy damage. The geophysical study carried out in the area permitted the detection of areas of high risk. The data analysis, combined with previous geological studies including stratigraphic columns, allowed identification of the geophysical characteristics of the area, which will allow the authorities to plan its future development.
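A minimal discrete Wigner-Ville distribution can be written directly from its definition as the Fourier transform of the instantaneous autocorrelation. This sketch omits the smoothing windows of practical pseudo-WVDs, and note the standard quirk that a tone at normalized frequency f0 concentrates at FFT bin 2·f0·N.

```python
import numpy as np

def wvd(x):
    """Discrete Wigner-Ville distribution of a (complex) analytic signal.
    Rows index time; column m corresponds to frequency m/(2N) cycles/sample."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        L = min(n, N - 1 - n)                   # lag limited by signal edges
        tau = np.arange(-L, L + 1)
        k = x[n + tau] * np.conj(x[n - tau])    # instantaneous autocorrelation
        r = np.zeros(N, dtype=complex)
        r[tau % N] = k                          # negative lags wrap around
        W[n] = np.fft.fft(r).real
    return W

N = 128
t = np.arange(N)
x = np.exp(2j * np.pi * 0.125 * t)   # analytic tone at 0.125 cycles/sample
W = wvd(x)
# The tone's energy concentrates at bin 2 * 0.125 * N = 32.
peak_bin = int(np.argmax(W[N // 2]))
```

For multicomponent GPR traces the raw WVD also produces cross-term artifacts between reflections, which is one reason smoothed variants are used in practice.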
Le Bras, Ronan J; Kuzma, Heidi; Sucic, Victor; Bokelmann, Götz
2016-05-01
A notable sequence of calls, spanning several days in January 2003, was encountered in the central part of the Indian Ocean on a hydrophone triplet recording acoustic data at a 250 Hz sampling rate. This paper presents signal processing methods applied to the waveform data to detect and group the recorded signals and to extract amplitude and bearing estimates. An approximate location for the source of the sequence of calls is inferred from the features extracted from the waveforms. As the source approaches the hydrophone triplet, the source level (SL) of the calls is estimated at 187 ± 6 dB re: 1 μPa at 1 m in the 15-60 Hz frequency range. The calls are attributed to a subgroup of blue whales, Balaenoptera musculus, with a characteristic acoustic signature. A Bayesian location method using probabilistic models for bearing and amplitude is demonstrated on the call sequence. The method is applied to the case of detection at a single triad of hydrophones and results in a probability distribution map for the origin of the calls. It can be extended to detections at multiple triads and, because of the Bayesian formulation, additional modeling complexity can be built in as needed.
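The single-triad Bayesian idea, combining per-call bearing and amplitude likelihoods over a spatial grid, can be sketched as follows. The source position, noise levels, and spherical-spreading loss model are all assumptions, and the known source level stands in for what would properly be a prior on SL.

```python
import numpy as np

rng = np.random.default_rng(1)
src = np.array([40.0, 30.0])                 # hypothetical source position, km
SL = 187.0                                   # assumed known source level, dB
r_true = np.linalg.norm(src)
bear_true = np.degrees(np.arctan2(src[1], src[0]))

# Simulated per-call measurements at a triad at the origin: noisy bearings
# (2 deg std) and received levels under spherical spreading loss (3 dB std).
n_calls = 20
bear_obs = bear_true + rng.normal(0, 2.0, n_calls)
rl_obs = SL - 20 * np.log10(r_true * 1000) + rng.normal(0, 3.0, n_calls)

# Log-posterior over a 2-D grid, flat prior, Gaussian likelihoods.
gx = np.linspace(1, 80, 160)
gy = np.linspace(1, 80, 160)
X, Y = np.meshgrid(gx, gy)
bearing = np.degrees(np.arctan2(Y, X))
rng_m = 1000 * np.hypot(X, Y)
logp = np.zeros_like(X)
for b, a in zip(bear_obs, rl_obs):
    logp += -0.5 * ((bearing - b) / 2.0) ** 2
    logp += -0.5 * ((SL - 20 * np.log10(rng_m) - a) / 3.0) ** 2
iy, ix = np.unravel_index(np.argmax(logp), logp.shape)
est = np.array([gx[ix], gy[iy]])             # posterior-maximum location, km
```

Exponentiating `logp` gives exactly the kind of probability distribution map the paper describes; bearings constrain the cross-range direction while amplitudes constrain the range.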
Bartolini, Desirée; Galli, Francesco
2016-04-15
Glutathione S-transferase P (GSTP), and possibly other members of the subfamily of cytosolic GSTs, are increasingly proposed to have roles far beyond the classical GSH-dependent enzymatic detoxification of electrophilic metabolites and xenobiotics. Emerging evidence suggests that these are essential components of the redox sensing and signaling platform of cells. GSTP monomers physically interact with cellular proteins, such as other cytosolic GSTs, signaling kinases and the membrane peroxidase peroxiredoxin 6. Other interactions reported in literature include that with regulatory proteins such as Fanconi anemia complementation group C protein, transglutaminase 2 and several members of the keratin family of genes. Transcription factors downstream of inflammatory and oxidative stress pathways, namely STAT3 and Nrf2, were recently identified to be further components of this interactome. Together these pieces of evidence suggest the existence of a regulatory biomolecular network in which GSTP represents a node of functional convergence and coordination of signaling and transcription proteins, namely the "GSTP interactome", associated with key cellular processes such as cell cycle regulation and the stress response. These aspects and the methodological approach to explore the cellular interactome(s) are discussed in this review paper. Copyright © 2016 Elsevier B.V. All rights reserved.
Cardiac phase detection in intravascular ultrasound images
NASA Astrophysics Data System (ADS)
Matsumoto, Monica M. S.; Lemos, Pedro Alves; Yoneyama, Takashi; Furuie, Sergio Shiguemi
2008-03-01
Image gating is relevant to imaging modalities that involve quasi-periodic moving organs. During intravascular ultrasound (IVUS) examination, there is cardiac movement interference. In this paper, we aim to obtain gated IVUS images based on the images themselves. This would allow the reconstruction of 3D coronary arteries with temporal accuracy for any cardiac phase, which is an advantage over ECG-gated acquisition, which yields a single phase. It is also important for retrospective studies, as existing IVUS databases contain no additional reference signals (ECG). From the images, we calculated signals based on average intensity (AI) and, from consecutive frames, average intensity difference (AID), cross-correlation coefficient (CC) and mutual information (MI). The process includes a wavelet-based filtering step and ascending zero-cross detection in order to obtain the phase information. First, we tested 90 simulated sequences with 1025 frames each. Our method achieved more than 95.0% true positives and less than 2.3% false positives for all signals. We then tested a real examination, with 897 frames and ECG as the gold standard, and achieved 97.4% true positives (CC and MI) and 2.5% false positives. In future work, the methodology should be tested on a wider range of IVUS examinations.
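The simplest of the paper's gating signals, average intensity (AI) with ascending zero-cross detection, can be sketched on a synthetic frame stack. The sinusoidal intensity modulation stands in for real cardiac motion, and the wavelet filtering step is omitted since the synthetic signal is already clean.

```python
import numpy as np

rng = np.random.default_rng(2)
fps, heart_hz, n_frames = 30, 1.2, 300
t = np.arange(n_frames) / fps
# Hypothetical frame stack: mean intensity modulated by cardiac pulsation
# plus speckle-like noise (a stand-in for real IVUS frames).
frames = (100 + 5 * np.sin(2 * np.pi * heart_hz * t)[:, None, None]
          + rng.standard_normal((n_frames, 64, 64)))

ai = frames.mean(axis=(1, 2))            # average-intensity (AI) gating signal
sig = ai - ai.mean()                     # (wavelet filtering step omitted)
# Ascending zero crossings give one gating point per cardiac cycle.
gates = np.where((sig[:-1] < 0) & (sig[1:] >= 0))[0]
rate = fps / np.mean(np.diff(gates))     # recovered cardiac rate, Hz
```

Ten seconds at 1.2 Hz yields about twelve gating points, one per simulated cycle, and the spacing of the crossings recovers the cardiac rate.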
Embedded Cohesive Elements (ECE) Approach to the Simulation of Spall Fracture Experiment
NASA Astrophysics Data System (ADS)
Bonora, Nicola; Esposito, Luca; Ruggiero, Andrew
2007-06-01
Numerical simulations of flyer-plate impact tests usually show discrepancies between the calculated and observed velocity-time plots in the spall-signal portion, in terms of both signal amplitude and frequency. These are often ascribed either to the material model or to the numerical scheme used. Bonora et al. (2003) [Bonora N., Ruggiero A. and Milella P.P., 2003, Fracture energy effect on spall signal, Proc. of 13th APS SCCM03, Portland, USA] showed that, for ductile metals, these differences can be imputed to the dissipation process during fracturing due to the viscous separation of the spall fracture plane surfaces. In this work that concept has been further developed by implementing an embedded cohesive elements (ECE) technology into FEM. The ECE method consists in embedding cohesive elements (normal and shear forces only) into standard isoparametric 2D or 3D FEM continuum elements. The cohesive elements remain silent and inactive until the continuum element fails. At failure, the continuum element is removed while the ECE becomes active until the separation energy is dissipated. Here, the methodology is presented and applied to simulate soft spall in ductile metals such as OFHC copper. Results of a parametric study on mesh size and cohesive law shape effects are presented.
NASA Astrophysics Data System (ADS)
Puzi, A. Ahmad; Sidek, S. N.; Mat Rosly, H.; Daud, N.; Yusof, H. Md
2017-11-01
Spasticity is a common symptom presented among people with sensorimotor disabilities. Imbalanced signals travelling from the central nervous system (CNS), which is composed of the brain and spinal cord, to the muscles ultimately lead to the injury and death of motor neurons. In clinical practice, the therapist assesses muscle spasticity using a standard assessment tool such as the Modified Ashworth Scale (MAS), Modified Tardieu Scale (MTS) or Fugl-Meyer Assessment (FMA). This is done subjectively, based on the experience and perception of the therapist, and is subject to the patient's fatigue level and body posture. However, inconsistency in the assessment is prevalent and could affect the efficacy of the rehabilitation process. Thus, the aim of this paper is to describe the methodology of data collection and the quantitative model of MAS developed to satisfy its description. Two subjects with MAS spasticity levels of 2 and 3 were involved in the clinical data measurement. Their level of spasticity was verified by an expert therapist using current practice. Data collection was established using a mechanical system equipped with a data acquisition system and LabVIEW software. The procedure engaged a repeated series of flexions of the affected arm, which was moved against the platform using a lever mechanism operated by the therapist. The data were then analyzed to investigate the characteristics of the spasticity signal in correspondence to the MAS description. Experimental results revealed that the methodology used to quantify spasticity satisfied the MAS tool requirement according to the description. Therefore, the result is crucial and useful towards the development of a formal spasticity quantification model.
Becerra-Luna, Brayans; Martínez-Memije, Raúl; Cartas-Rosado, Raúl; Infante-Vázquez, Oscar
To improve the identification of peaks and feet in photoplethysmographic (PPG) pulses deformed by myokinetic noise, through the implementation of a modified fingertip and the application of adaptive filtering. PPG signals were recorded from 10 healthy volunteers using two photoplethysmography systems placed on the index finger of each hand. Recordings lasted three minutes and were done as follows: during the first minute, both hands were at rest, and for the remaining two minutes only the left hand was allowed to make quasi-periodic movements in order to add myokinetic noise. Two methodologies were employed to process the signals off-line. One consisted of using an adaptive filter based on the Least Mean Square (LMS) algorithm, and the other included a preprocessing stage in addition to the same LMS filter. Both filtering methods were compared and the one with the lowest error was chosen to assess the improvement in the identification of peaks and feet of PPG pulses. Average percentage errors obtained were 22.94% with the first filtering methodology and 3.72% with the second one. On identifying peaks and feet of PPG pulses before filtering, the error percentages obtained were 24.26% and 48.39%, respectively; once filtered, the error percentages dropped to 2.02% for peaks and 3.77% for feet. The attenuation of myokinetic noise in PPG pulses through LMS filtering, plus a preprocessing stage, increases the effectiveness of the identification of peaks and feet of PPG pulses, which are of great importance for medical assessment. Copyright © 2016 Instituto Nacional de Cardiología Ignacio Chávez. Publicado por Masson Doyma México S.A. All rights reserved.
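An LMS adaptive noise canceller of the kind used here can be sketched in a few lines. This is a toy version: the noise path is a simple gain for brevity (real myokinetic noise passes through an unknown physical path), the signals are synthetic sinusoids, and the paper's preprocessing stage is omitted.

```python
import numpy as np

def lms_filter(d, x, mu=0.01, order=8):
    """LMS adaptive noise canceller: d is the corrupted (primary) signal,
    x the noise reference; the error e = d - y is the cleaned output."""
    w = np.zeros(order)
    e = np.zeros(len(d))
    for n in range(order, len(d)):
        u = x[n - order + 1:n + 1][::-1]   # current + past reference samples
        y = w @ u
        e[n] = d[n] - y
        w = w + 2 * mu * e[n] * u          # stochastic-gradient weight update
    return e

fs = 250.0
t = np.arange(4000) / fs
ppg = np.sin(2 * np.pi * 1.1 * t)               # clean PPG-like pulse signal
noise_ref = np.sin(2 * np.pi * 4.0 * t + 0.5)   # reference movement signal
primary = ppg + 1.5 * noise_ref                 # recording with myokinetic noise
cleaned = lms_filter(primary, noise_ref)
```

Because the filter can only synthesize signals correlated with the reference, it cancels the movement component while leaving the PPG waveform largely untouched, which is the property that makes peak and foot detection recoverable.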
Zerouali, Younes; Lina, Jean-Marc; Sekerovic, Zoran; Godbout, Jonathan; Dube, Jonathan; Jolicoeur, Pierre; Carrier, Julie
2014-01-01
Sleep spindles are a hallmark of NREM sleep. They result from a widespread thalamo-cortical loop and involve synchronous cortical networks that are still poorly understood. We investigated whether brain activity during spindles can be characterized by specific patterns of functional connectivity among cortical generators. For that purpose, we developed a wavelet-based approach aimed at imaging the synchronous oscillatory cortical networks from simultaneous MEG-EEG recordings. First, we detected spindles on the EEG and extracted the corresponding frequency-locked MEG activity in the form of an analytic ridge signal in the time-frequency plane (Zerouali et al., 2013). Second, we performed source reconstruction of the ridge signal within the Maximum Entropy on the Mean framework (Amblard et al., 2004), yielding a robust estimate of the cortical sources producing the observed oscillations. Last, we quantified functional connectivity among cortical sources using phase-locking values. The main innovations of this methodology are (1) to reveal the dynamic behavior of functional networks resolved in the time-frequency plane and (2) to characterize functional connectivity among MEG sources through phase interactions. We showed, for the first time, that the switch from fast to slow oscillatory mode during sleep spindles is required for the emergence of specific patterns of connectivity. Moreover, we show that earlier synchrony during spindles was associated with mainly intra-hemispheric connectivity whereas later synchrony was associated with global long-range connectivity. We propose that our methodology can be a valuable tool for studying the connectivity underlying neural processes involving sleep spindles, such as memory, plasticity or aging. PMID:25389381
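The phase-locking value (PLV) used in the last step can be computed from analytic-signal phases. A minimal numpy sketch on raw sinusoids follows; the paper instead estimates phases from ridge signals of reconstructed MEG sources.

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT-based Hilbert transform (real, even-length input)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = h[N // 2] = 1
    h[1:N // 2] = 2          # double positive frequencies, zero negatives
    return np.fft.ifft(X * h)

def plv(x, y):
    """Phase-locking value: magnitude of the mean phase-difference phasor."""
    dphi = np.angle(analytic(x)) - np.angle(analytic(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

fs = 200
t = np.arange(1024) / fs
rng = np.random.default_rng(4)
f = 13.0                                   # spindle-band frequency (~13 Hz)
a = np.sin(2 * np.pi * f * t)
b = np.sin(2 * np.pi * f * t + 0.7)        # constant phase lag: locked
walk = np.cumsum(rng.normal(0, 0.3, t.size))
c = np.sin(2 * np.pi * f * t + walk)       # drifting phase: unlocked
plv_locked, plv_unlocked = plv(a, b), plv(a, c)
```

A constant phase lag gives a PLV near 1 regardless of its size, while a drifting phase spreads the phase-difference phasor around the circle and drives the PLV toward 0.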
Acoustics based assessment of respiratory diseases using GMM classification.
Mayorga, P; Druzgalski, C; Morelos, R L; Gonzalez, O H; Vidales, J
2010-01-01
The focus of this paper is to present a method utilizing lung sounds for a quantitative assessment of patient health as it relates to respiratory disorders. To accomplish this, applicable traditional techniques from the speech processing domain were utilized to evaluate lung sounds obtained with a digital stethoscope. Traditional methods for the evaluation of asthma involve auscultation and spirometry, but the utilization of more sensitive electronic stethoscopes, which are currently available, and the application of quantitative signal analysis methods offer opportunities for improved diagnosis. In particular, we propose an acoustic evaluation methodology based on Gaussian Mixture Models (GMM), which should assist in broader analysis, identification, and diagnosis of asthma based on the frequency-domain analysis of wheezing and crackles.
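The classification idea can be sketched with spectral band-energy features and a likelihood rule. For brevity this uses one Gaussian per class, a single-component special case of a GMM fitted without EM; the sounds, bands, and wheeze model are all invented.

```python
import numpy as np

rng = np.random.default_rng(7)
fs, n = 4000, 4096
t = np.arange(n) / fs

def band_energies(x):
    """Log energy in three spectral bands -- a crude acoustic feature vector."""
    X = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), 1 / fs)
    bands = ((100, 400), (400, 800), (800, 1600))
    return np.log([X[(f >= lo) & (f < hi)].sum() for lo, hi in bands])

def lung_sound(wheeze):
    """Synthetic lung sound: broadband noise, plus a tonal ~600 Hz wheeze."""
    x = rng.standard_normal(n)
    if wheeze:
        x += 3 * np.sin(2 * np.pi * 600 * t + rng.uniform(0, 2 * np.pi))
    return x

# One diagonal Gaussian per class (single-component stand-in for a GMM).
train = {c: np.array([band_energies(lung_sound(c)) for _ in range(40)])
         for c in (False, True)}
models = {c: (v.mean(0), v.std(0) + 1e-6) for c, v in train.items()}

def classify(x):
    feat = band_energies(x)
    ll = {c: -0.5 * np.sum(((feat - m) / s) ** 2 + 2 * np.log(s))
          for c, (m, s) in models.items()}
    return max(ll, key=ll.get)     # maximum-likelihood class
```

A full GMM would fit several mixture components per class with EM, which matters when a class (e.g. "abnormal") covers several distinct acoustic phenomena such as wheezes and crackles.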
A negotiation methodology and its application to cogeneration planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, S.M.; Liu, C.C.; Luu, S.
Power system planning has become a complex process in utilities today. This paper presents a methodology for integrated planning with multiple objectives. The methodology uses a graphical representation (Goal-Decision Network) to capture the planning knowledge. The planning process is viewed as a negotiation process that applies three negotiation operators to search for beneficial decisions in a GDN. Also, the negotiation framework is applied to the problem of planning for cogeneration interconnection. The simulation results are presented to illustrate the cogeneration planning process.
29 CFR 1926.64 - Process safety management of highly hazardous chemicals.
Code of Federal Regulations, 2011 CFR
2011-07-01
... analysis methodology being used. (5) The employer shall establish a system to promptly address the team's... the decision as to the appropriate PHA methodology to use. All PHA methodologies are subject to... be developed in conjunction with the process hazard analysis in sufficient detail to support the...
29 CFR 1926.64 - Process safety management of highly hazardous chemicals.
Code of Federal Regulations, 2010 CFR
2010-07-01
... analysis methodology being used. (5) The employer shall establish a system to promptly address the team's... the decision as to the appropriate PHA methodology to use. All PHA methodologies are subject to... be developed in conjunction with the process hazard analysis in sufficient detail to support the...
NASA Technical Reports Server (NTRS)
Blakelee, Richard
1999-01-01
A four station Advanced Lightning Direction Finder (ALDF) network was recently established in the state of Rondonia in western Brazil through a collaboration of U.S. and Brazilian participants from NASA, INPE, INMET, and various universities. The network utilizes ALDF IMPACT (Improved Accuracy from Combined Technology) sensors to provide cloud-to-ground lightning observations (i.e., stroke/flash locations, signal amplitude, and polarity) using both time-of-arrival and magnetic direction finding techniques. The observations are collected, processed and archived at a central site in Brasilia and at the NASA/Marshall Space Flight Center (MSFC) in Huntsville, Alabama. Initial, non-quality assured quick-look results are made available in near real-time over the internet. The network will remain deployed for several years to provide ground truth data for the Lightning Imaging Sensor (LIS) on the Tropical Rainfall Measuring Mission (TRMM) satellite which was launched in November 1997. The measurements will also be used to investigate the relationship between the electrical, microphysical and kinematic properties of tropical convection. In addition, the long-term observations from this network will contribute to establishing a regional lightning climatological database, supplementing other databases in Brazil that already exist or may soon be implemented. Analytic inversion algorithms developed at NASA/MSFC are now being applied to the Rondonian ALDF lightning observations to obtain site error corrections and improved location retrievals. The processing methodology and the initial results from an analysis of the first 6 months of network operations will be presented.
NASA Technical Reports Server (NTRS)
Blakeslee, R. J.; Bailey, J. C.; Pinto, O.; Athayde, A.; Renno, N.; Weidman, C. D.
2003-01-01
A four station Advanced Lightning Direction Finder (ALDF) network was established in the state of Rondonia in western Brazil in 1999 through a collaboration of U.S. and Brazilian participants from NASA, INPE, INMET, and various universities. The network utilizes ALDF IMPACT (Improved Accuracy from Combined Technology) sensors to provide cloud-to-ground lightning observations (i.e., stroke/flash locations, signal amplitude, and polarity) using both time-of- arrival and magnetic direction finding techniques. The observations are collected, processed and archived at a central site in Brasilia and at the NASA/Marshall Space Flight Center in Huntsville, Alabama. Initial, non-quality assured quick-look results are made available in near real-time over the Internet. The network, which is still operational, was deployed to provide ground truth data for the Lightning Imaging Sensor (LIS) on the Tropical Rainfall Measuring Mission (TRMM) satellite that was launched in November 1997. The measurements are also being used to investigate the relationship between the electrical, microphysical and kinematic properties of tropical convection. In addition, the long-time series observations produced by this network will help establish a regional lightning climatological database, supplementing other databases in Brazil that already exist or may soon be implemented. Analytic inversion algorithms developed at the NASA/Marshall Space Flight Center have been applied to the Rondonian ALDF lightning observations to obtain site error corrections and improved location retrievals. The data will also be corrected for the network detection efficiency. The processing methodology and the results from the analysis of four years of network operations will be presented.
NASA Technical Reports Server (NTRS)
Blakeslee, Rich; Bailey, Jeff; Koshak, Bill
1999-01-01
A four station Advanced Lightning Direction Finder (ALDF) network was recently established in the state of Rondonia in western Brazil through a collaboration of U.S. and Brazilian participants from NASA, INPE, INMET, and various universities. The network utilizes ALDF IMPACT (Improved Accuracy from Combined Technology) sensors to provide cloud-to-ground lightning observations (i.e., stroke/flash locations, signal amplitude, and polarity) using both time-of-arrival and magnetic direction finding techniques. The observations are collected, processed and archived at a central site in Brasilia and at the NASA/Marshall Space Flight Center (MSFC) in Huntsville, Alabama. Initial, non-quality assured quick-look results are made available in near real-time over the internet. The network will remain deployed for several years to provide ground truth data for the Lightning Imaging Sensor (LIS) on the Tropical Rainfall Measuring Mission (TRMM) satellite which was launched in November 1997. The measurements will also be used to investigate the relationship between the electrical, microphysical and kinematic properties of tropical convection. In addition, the long-term observations from this network will contribute to establishing a regional lightning climatological database, supplementing other databases in Brazil that already exist or may soon be implemented. Analytic inversion algorithms developed at NASA/Marshall Space Flight Center (MSFC) are now being applied to the Rondonian ALDF lightning observations to obtain site error corrections and improved location retrievals. The processing methodology and the initial results from an analysis of the first 6 months of network operations will be presented.
Electromagnetic spectrum management system
Seastrand, Douglas R.
2017-01-31
A system for transmitting a wireless countermeasure signal to disrupt third-party communications is disclosed that includes an antenna configured to receive wireless signals and transmit wireless countermeasure signals such that the wireless countermeasure signals are responsive to the received wireless signals. A receiver processes the received wireless signals to create processed received signal data, while a spectrum control module subtracts known source signal data from the processed received signal data to generate unknown source signal data. The unknown source signal data is based on unknown wireless signals, such as enemy signals. A transmitter is configured to process the unknown source signal data to create countermeasure signals and transmit a wireless countermeasure signal over the first antenna or a second antenna to thereby interfere with the unknown wireless signals.
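The known-source subtraction step can be sketched in the frequency domain. This is a magnitude-spectrum subtraction on synthetic tones, purely illustrative: the patent's spectrum control module operates on processed signal data in hardware, and a real subtraction would need phase and timing alignment.

```python
import numpy as np

rng = np.random.default_rng(5)
fs, n = 1000, 2048
t = np.arange(n) / fs
# Received signal = known (friendly) carriers + one unknown emitter + noise.
known = np.sin(2 * np.pi * 100 * t) + 0.8 * np.sin(2 * np.pi * 250 * t)
unknown = 0.5 * np.sin(2 * np.pi * 333 * t)
received = known + unknown + 0.05 * rng.standard_normal(n)

# Subtract the known-source data from the processed received data; the
# residual is an estimate of the unknown-source spectrum.
resid = np.abs(np.fft.rfft(received)) - np.abs(np.fft.rfft(known))
freqs = np.fft.rfftfreq(n, 1 / fs)
f_unknown = freqs[np.argmax(resid)]      # dominant unknown-emitter frequency
```

The residual spectrum isolates the 333 Hz emitter, which is the information a countermeasure transmitter would then target.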
Elad, Tal; Lee, Jin Hyung; Belkin, Shimshon; Gu, Man Bock
2008-01-01
Summary The coming of age of whole‐cell biosensors, combined with the continuing advances in array technologies, has prepared the ground for the next step in the evolution of both disciplines – the whole‐cell array. In the present review, we highlight the state‐of‐the‐art in the different disciplines essential for a functional bacterial array. These include the genetic engineering of the biological components, their immobilization in different polymers, technologies for live cell deposition and patterning on different types of solid surfaces, and cellular viability maintenance. Also reviewed are the types of signals emitted by the reporter cell arrays, some of the transduction methodologies for reading these signals and the mathematical approaches proposed for their analysis. Finally, we review some of the potential applications for bacterial cell arrays, and list the future needs for their maturation: a richer arsenal of high‐performance reporter strains, better methodologies for their incorporation into hardware platforms, design of appropriate detection circuits, the continuing development of dedicated algorithms for multiplex signal analysis and – most importantly – enhanced long‐term maintenance of viability and activity on the fabricated biochips. PMID:21261831
A low power biomedical signal processor ASIC based on hardware software codesign.
Nie, Z D; Wang, L; Chen, W G; Zhang, T; Zhang, Y T
2009-01-01
A low power biomedical digital signal processor ASIC based on a hardware/software codesign methodology is presented in this paper. The codesign methodology was used to achieve higher system performance and design flexibility. The hardware implementation included a low power 32-bit RISC CPU (ARM7TDMI), a low power AHB-compatible bus, and a scalable digital co-processor optimized for low power Fast Fourier Transform (FFT) calculations. The co-processor can be scaled for 8-point, 16-point and 32-point FFTs, taking approximately 50, 100 and 150 clock cycles, respectively. The complete design was intensively simulated using the ARM DSM model and emulated on the ARM Versatile platform before being committed to silicon. The multi-million-gate ASIC was fabricated using SMIC 0.18 μm mixed-signal CMOS 1P6M technology. The die measures 5,000 μm x 2,350 μm. The power consumption was approximately 3.6 mW at a 1.8 V power supply and 1 MHz clock rate. The power consumption for FFT calculations was less than 1.5% of that of the conventional embedded software-based solution.
Normothermic Mouse Functional MRI of Acute Focal Thermostimulation for Probing Nociception
Reimann, Henning Matthias; Hentschel, Jan; Marek, Jaroslav; Huelnhagen, Till; Todiras, Mihail; Kox, Stefanie; Waiczies, Sonia; Hodge, Russ; Bader, Michael; Pohlmann, Andreas; Niendorf, Thoralf
2016-01-01
Combining mouse genomics and functional magnetic resonance imaging (fMRI) provides a promising tool to unravel the molecular mechanisms of chronic pain. Probing murine nociception via the blood oxygenation level-dependent (BOLD) effect is still challenging due to methodological constraints. Here we report on the reproducible application of acute noxious heat stimuli to examine the feasibility and limitations of functional brain mapping for central pain processing in mice. Recent technical and procedural advances were applied for enhanced BOLD signal detection and a tight control of physiological parameters. The latter includes the development of a novel mouse cradle designed to maintain whole-body normothermia in anesthetized mice during fMRI in a way that reflects the thermal status of awake, resting mice. Applying mild noxious heat stimuli to wildtype mice resulted in highly significant BOLD patterns in anatomical brain structures forming the pain matrix, which comprise temporal signal intensity changes of up to 6% magnitude. We also observed sub-threshold correlation patterns in large areas of the brain, as well as alterations in mean arterial blood pressure (MABP) in response to the applied stimulus. PMID:26821826
Normothermic Mouse Functional MRI of Acute Focal Thermostimulation for Probing Nociception
NASA Astrophysics Data System (ADS)
Reimann, Henning Matthias; Hentschel, Jan; Marek, Jaroslav; Huelnhagen, Till; Todiras, Mihail; Kox, Stefanie; Waiczies, Sonia; Hodge, Russ; Bader, Michael; Pohlmann, Andreas; Niendorf, Thoralf
2016-01-01
Combining mouse genomics and functional magnetic resonance imaging (fMRI) provides a promising tool to unravel the molecular mechanisms of chronic pain. Probing murine nociception via the blood oxygenation level-dependent (BOLD) effect is still challenging due to methodological constraints. Here we report on the reproducible application of acute noxious heat stimuli to examine the feasibility and limitations of functional brain mapping for central pain processing in mice. Recent technical and procedural advances were applied for enhanced BOLD signal detection and a tight control of physiological parameters. The latter includes the development of a novel mouse cradle designed to maintain whole-body normothermia in anesthetized mice during fMRI in a way that reflects the thermal status of awake, resting mice. Applying mild noxious heat stimuli to wildtype mice resulted in highly significant BOLD patterns in anatomical brain structures forming the pain matrix, which comprise temporal signal intensity changes of up to 6% magnitude. We also observed sub-threshold correlation patterns in large areas of the brain, as well as alterations in mean arterial blood pressure (MABP) in response to the applied stimulus.
Magnetic field feature extraction and selection for indoor location estimation.
Galván-Tejada, Carlos E; García-Vázquez, Juan Pablo; Brena, Ramon F
2014-06-20
User indoor positioning has been under constant improvement, especially with the availability of new sensors integrated into modern mobile devices, which allow us to exploit not only infrastructure made for everyday use, such as WiFi, but also natural infrastructure, as is the case of the natural magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, performed through a Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: a home and an office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features in the model from 46 to 5 regardless of the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of our estimator correctly detecting the user's location (sensitivity) and its capacity to detect false positives (specificity) in both scenarios.
NASA Astrophysics Data System (ADS)
Hu, Chongqing; Li, Aihua; Zhao, Xingyang
2011-02-01
This paper proposes a multivariate statistical analysis approach to processing the instantaneous engine speed signal for the purpose of locating multiple misfire events in internal combustion engines. The state of each cylinder is described by a characteristic vector extracted from the instantaneous engine speed signal following a three-step procedure. These characteristic vectors are treated as the values of various process parameters over an engine cycle. Determination of the occurrence of misfire events and identification of the misfiring cylinders can therefore be accomplished by a principal component analysis (PCA) based pattern recognition methodology. The proposed algorithm can be implemented easily in practice because the threshold can be defined adaptively, without information about the operating conditions. In addition, the effect of torsional vibration on the engine speed waveform is interpreted as the presence of a super-powerful cylinder, which is also isolated by the algorithm. Since the misfiring cylinder and the super-powerful cylinder are often adjacent in the firing sequence, missed detections and false alarms can be avoided effectively by checking the relationship between the cylinders.
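The idea of flagging an anomalous cylinder from its characteristic vector can be sketched as follows. This is an illustrative toy, not the paper's three-step procedure: the characteristic vectors, the power-iteration PCA, and the median-based adaptive threshold are all assumptions.

```python
import random

random.seed(0)

def power_iteration(cov, iters=200):
    # First principal direction of a small covariance matrix.
    n = len(cov)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def pca_scores(vectors):
    # Project centred characteristic vectors onto the first principal component.
    n, d = len(vectors), len(vectors[0])
    mean = [sum(v[j] for v in vectors) / n for j in range(d)]
    centered = [[v[j] - mean[j] for j in range(d)] for v in vectors]
    cov = [[sum(c[i] * c[j] for c in centered) / n for j in range(d)] for i in range(d)]
    pc1 = power_iteration(cov)
    return [sum(c[j] * pc1[j] for j in range(d)) for c in centered]

# Hypothetical 3-component characteristic vectors for a 4-cylinder engine;
# cylinder index 2 "misfires": its vector deviates strongly from the others.
cyl = [[1.0, 0.9, 1.1], [1.1, 1.0, 0.9], [3.0, 2.8, 3.1], [0.9, 1.1, 1.0]]
scores = pca_scores(cyl)

# Adaptive threshold: flag cylinders whose score is far from the median.
med = sorted(scores)[len(scores) // 2]
spread = max(abs(s - med) for s in scores)
flagged = [i for i, s in enumerate(scores) if abs(s - med) > 0.5 * spread]
print(flagged)  # [2]
```

The threshold is derived from the scores themselves, echoing the paper's point that no prior knowledge of operating conditions is needed.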
High-throughput Bayesian Network Learning using Heterogeneous Multicore Computers
Linderman, Michael D.; Athalye, Vivek; Meng, Teresa H.; Asadi, Narges Bani; Bruggner, Robert; Nolan, Garry P.
2017-01-01
Aberrant intracellular signaling plays an important role in many diseases. The causal structure of signal transduction networks can be modeled as Bayesian networks (BNs) and learned computationally from experimental data. However, learning the structure of BNs is an NP-hard problem that, even with fast heuristics, is too time consuming for large, clinically important networks (20–50 nodes). In this paper, we present a novel graphics processing unit (GPU)-accelerated implementation of a Markov chain Monte Carlo-based algorithm for learning BNs that is up to 7.5-fold faster than current general-purpose processor (GPP)-based implementations. The GPU-based implementation is just one of several implementations within the larger application, each optimized for a different input or machine configuration. We describe the methodology we use to build an extensible application, assembled from these variants, that can target a broad range of heterogeneous systems, e.g., GPUs and multicore GPPs. Specifically, we show how we use the Merge programming model to efficiently integrate, test, and intelligently select among the different potential implementations. PMID:28819655
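A pure-Python toy of Markov chain Monte Carlo structure search over tiny BNs, in the spirit of the class of algorithm accelerated here (this is not the GPU implementation). The generating chain A → B → C, the BIC-style score, and the edge-toggle proposal are all illustrative assumptions.

```python
import itertools
import math
import random

random.seed(1)

# Toy data: 3 binary variables generated from the chain A -> B -> C.
def sample():
    a = random.random() < 0.5
    b = random.random() < (0.9 if a else 0.1)
    c = random.random() < (0.9 if b else 0.1)
    return (int(a), int(b), int(c))

data = [sample() for _ in range(500)]
N_NODES = 3

def has_cycle(edges):
    # Directed-cycle check on the edge set {(parent, child), ...}.
    adj = {i: [c for p, c in edges if p == i] for i in range(N_NODES)}
    def visit(n, stack):
        return n in stack or any(visit(m, stack | {n}) for m in adj[n])
    return any(visit(n, frozenset()) for n in range(N_NODES))

def score(edges):
    # Penalized log-likelihood (BIC-like), decomposable over nodes.
    total = 0.0
    for node in range(N_NODES):
        parents = sorted(p for p, c in edges if c == node)
        counts = {}
        for row in data:
            cnt = counts.setdefault(tuple(row[p] for p in parents), [0, 0])
            cnt[row[node]] += 1
        for n0, n1 in counts.values():
            n = n0 + n1
            for k in (n0, n1):
                if k:
                    total += k * math.log(k / n)
        total -= 0.5 * math.log(len(data)) * (2 ** len(parents))
    return total

def mcmc(steps=3000):
    current, cur_score = frozenset(), score(frozenset())
    best, best_score = current, cur_score
    pairs = list(itertools.permutations(range(N_NODES), 2))
    for _ in range(steps):
        proposal = current ^ {random.choice(pairs)}  # toggle one edge
        if has_cycle(proposal):
            continue
        prop_score = score(proposal)
        # Metropolis acceptance on the log posterior score.
        if math.log(random.random()) < prop_score - cur_score:
            current, cur_score = proposal, prop_score
        if cur_score > best_score:
            best, best_score = current, cur_score
    return best

print(sorted(mcmc()))
```

On this 3-node problem the chain recovers a member of the Markov equivalence class of A → B → C; the paper's contribution is making this kind of search tractable for 20–50 node networks.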
Magnetic resonance spectroscopic imaging at superresolution: Overview and perspectives
NASA Astrophysics Data System (ADS)
Kasten, Jeffrey; Klauser, Antoine; Lazeyras, François; Van De Ville, Dimitri
2016-02-01
The notion of non-invasive, high-resolution spatial mapping of metabolite concentrations has long enticed the medical community. While magnetic resonance spectroscopic imaging (MRSI) is capable of achieving the requisite spatio-spectral localization, it has traditionally been encumbered by significant resolution constraints that have thus far undermined its clinical utility. To surpass these obstacles, research efforts have primarily focused on hardware enhancements or the development of accelerated acquisition strategies to improve the experimental sensitivity per unit time. Concomitantly, a number of innovative reconstruction techniques have emerged as alternatives to the standard inverse discrete Fourier transform (DFT). While perhaps lesser known, these latter methods strive to effect commensurate resolution gains by exploiting known properties of the underlying MRSI signal in concert with advanced image and signal processing techniques. This review article aims to aggregate and provide an overview of the past few decades of so-called "superresolution" MRSI reconstruction methodologies, and to introduce readers to current state-of-the-art approaches. A number of perspectives are then offered as to the future of high-resolution MRSI, with a particular focus on translation into clinical settings.
An Overview of GNSS Remote Sensing
2014-08-27
dedicated space missions and developments of new algorithms and innovative methodologies. Atmospheric sounding is a new area of GNSS applications based on...the analysis of radio signals from GNSS satellites, which are refracted as they pass through the atmosphere and can give information on its...under a full COSMIC constellation. The L-band radio signals broadcast by the GNSS satellites are affected by both the ionospheric and tropospheric
NASA Astrophysics Data System (ADS)
Eppelbaum, Lev
2015-04-01
Geophysical methods are prompt, non-invasive and low-cost tools for quantitative delineation of buried archaeological targets. However, taking into account the complexity of geological-archaeological media, some unfavourable environments and the well-known ambiguity of geophysical data analysis, examination by a single geophysical method may be insufficient (Khesin and Eppelbaum, 1997). Besides this, it is well known that the majority of inverse-problem solutions in geophysics are ill-posed (e.g., Zhdanov, 2002), which means, according to Hadamard (1902), that the solution does not exist, is not unique, or is not a continuous function of the observed geophysical data (so that small perturbations in the observations cause arbitrary errors in the solution). This fact has wide implications for informational, probabilistic and wavelet methodologies in archaeological geophysics (Eppelbaum, 2014a). The goal of modern geophysical data examination is to detect the geophysical signatures of buried targets in noisy areas via the analysis of several physical parameters, with a minimal number of false alarms and missed detections (Eppelbaum et al., 2011; Eppelbaum, 2014b). The proposed wavelet approach to recognition of archaeological targets (AT) through integration of geophysical methods consists of advanced processing of each geophysical method and nonconventional integration of the different geophysical methods with one another. The recently developed technique of diffusion clustering, combined with the abovementioned wavelet methods, was utilized to integrate the geophysical data and detect existing irregularities. The approach is based on wavelet packet techniques applied to the geophysical images (or graphs) versus coordinates. Practically all geophysical methods (magnetic, gravity, seismic, GPR, ERT, self-potential, etc.) may be utilized for such an analysis.
In the first stage of the proposed investigation, a few tens of typical physical-archaeological models (PAMs) (e.g., Eppelbaum et al., 2010; Eppelbaum, 2011) of the targets under study for the concrete area (region) are developed. These PAMs are composed on the basis of the known archaeological and geological data, the results of previous archaeogeophysical investigations and 3D modeling of geophysical data. It should be underlined that the PAMs must differ (by depth, size, shape and physical properties of the AT, as well as by peculiarities of the host archaeological-geological media). The PAMs must also include noise components of different orders (corresponding to the archaeogeophysical conditions of the area under study). The same models are also computed without the AT. Introducing complex PAMs (for example, situated in the vicinity of electric power lines, objects of infrastructure, etc. (Eppelbaum et al., 2001)) will reflect a real class of AT occurring in conditions unfavorable for geophysical searching. Anomalous effects from such complex PAMs will significantly disturb the geophysical anomalies from the AT and impede employment of the wavelet methodology. At the same time, the 'self-learning' procedure embedded in this methodology will further help to recognize the AT even in cases of an unfavorable S/N ratio. Modern developments in wavelet theory and data mining are utilized for the analysis of the integrated data. The wavelet approach is applied for derivation of enhanced (e.g., coherence portraits) and combined images of geophysical fields. Modern methodologies based on matching pursuit with wavelet packet dictionaries enable the extraction of desired signals even from strongly noised data (Averbuch et al., 2014). Researchers usually face the problem of extracting essential features from available data contaminated by random noise and a non-relevant background (Averbuch et al., 2014).
If the essential structure of a signal consists of several sine waves, then we may represent it in a trigonometric basis (Fourier analysis). In this case one can compare the signal with a set of sinusoids and extract the consistent ones. An indicator of the presence of a wave in a signal f(t) is the Fourier coefficient ∫ f(t) sin(ωt) dt. Wavelet analysis provides a rich library of available waveforms and fast, computationally efficient procedures for representing signals and selecting relevant waveforms. The basic assumption justifying an application of wavelet analysis is that the essential structure of the analyzed signal consists of a small number of distinct waveforms. The best way to reveal this structure is to represent the signal by a set of basic elements containing waveforms coherent to the signal. For structures of the signal coherent to the basis, large coefficients are attributed to a few basic waveforms, whereas we expect small coefficients for the noise and for structures incoherent to all basic waveforms. Wavelets are a family of functions ranging from functions of arbitrary smoothness to fractal ones. A wavelet procedure involves two aspects. The first is decomposition, i.e., breaking up a signal to obtain the wavelet coefficients; the second is reconstruction, which consists of reassembling the signal from the coefficients. There are many modifications of wavelet analysis. Note, first of all, the so-called continuous wavelet analysis, in which a signal f(t) is tested for the presence of waveforms ψ((t − b)/a). Here, a is the scaling parameter (dilation) and b determines the location of the wavelet ψ((t − b)/a) in the signal f(t). The integral (W_ψ f)(b, a) = ∫ f(t) ψ((t − b)/a) dt is the Continuous Wavelet Transform. When the parameters a, b in ψ((t − b)/a) take discrete values, we have the Discrete Wavelet Transform. A general scheme of the wavelet decomposition tree is shown, for instance, in (Averbuch et al., 2014; Eppelbaum et al., 2014). The signal is compared with the testing waveform on each scale.
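The continuous wavelet transform just described can be approximated numerically by a discrete sum. A minimal sketch, with the Ricker (Mexican-hat) wavelet as an assumed choice of ψ: the coefficient magnitude peaks where the tested waveform matches the signal's location and scale.

```python
import math

def ricker(t):
    # Mexican-hat wavelet: second derivative of a Gaussian.
    return (1 - t * t) * math.exp(-t * t / 2)

def cwt(signal, b, a, dt=1.0):
    # Discrete approximation of (W_psi f)(b, a) = integral of f(t) * psi((t - b)/a) dt,
    # with a conventional 1/sqrt(a) normalization across scales.
    return sum(f * ricker((i * dt - b) / a) for i, f in enumerate(signal)) * dt / math.sqrt(a)

# Signal: a Ricker pulse of scale 5 centered at t = 50.
sig = [ricker((t - 50) / 5) for t in range(100)]

# The coefficient magnitude is large at the matching location (b = 50, a = 5)
# and small away from the pulse (b = 20).
peak = abs(cwt(sig, b=50, a=5))
off = abs(cwt(sig, b=20, a=5))
print(peak > off)  # True
```

This is the sense in which "large coefficients are attributed to a few basic waveforms" while incoherent structures and noise yield small ones.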
The wavelet coefficients are estimated, enabling reconstruction of the first approximation of the signal and the details. On the next level, the wavelet transform is applied to the approximation; then we can present A1 as A2 + D2, etc. So, if S is the signal, A an approximation and D the details, then S = A1 + D1 = A2 + D2 + D1 = A3 + D3 + D2 + D1. The wavelet packet transform is applied to both the low-pass results (approximations) and the high-pass results (details). For analyzing the geophysical data, we used a technique based on an algorithm that characterizes a geophysical image by a limited number of parameters (Eppelbaum et al., 2012). This set of parameters serves as a signature of the image and is utilized to discriminate images (a) containing AT from images (b) not containing AT (we designate the latter as N). The constructed algorithm consists of the following main phases: (a) collection of the database, (b) characterization of the geophysical images, and (c) dimensionality reduction. Each image is then characterized by the histogram of its coherency directions (Alperovich et al., 2013). As a result of the previous steps we obtain two sets of signature vectors for the geophysical images: those containing AT and N. The obtained data sets are reduced by embedding the feature vectors into 3D Euclidean space using the so-called diffusion map. This map makes it possible to reveal the internal structure of the datasets AT and N and to separate them distinctly. For this, a matrix of the diffusion distances for the combined feature matrix F = FN ∪ FC of size 60 x C is constructed (Coifman and Lafon, 2006; Averbuch et al., 2010). Then, each row of the matrices FN and FC is projected onto the first three eigenvectors of the matrix D(F). As a result, each data curve is represented by a 3D point in the Euclidean space formed by the eigenvectors of D(F).
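The decomposition S = A1 + D1 = A2 + D2 + D1 can be made concrete with the simplest wavelet, the Haar basis (an illustrative choice, not necessarily the basis used in the cited work): each analysis level splits the signal into pairwise averages (approximation) and pairwise differences (detail), and the reconstructions sum back to the signal exactly.

```python
def haar_split(x):
    # One analysis level: pairwise averages (approximation) and differences (detail).
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def upsample(c, signs=False):
    # Map coefficients back to signal length: A -> [a, a], D -> [d, -d].
    out = []
    for v in c:
        out += [v, -v] if signs else [v, v]
    return out

S = [4.0, 2.0, 5.0, 7.0, 1.0, 1.0, 3.0, 5.0]
a1, d1 = haar_split(S)   # level 1 coefficients
a2, d2 = haar_split(a1)  # level 2: transform applied to the approximation

A1, D1 = upsample(a1), upsample(d1, signs=True)
A2 = upsample(upsample(a2))
D2 = upsample(upsample(d2, signs=True))

# S = A1 + D1 = A2 + D2 + D1, exactly as in the decomposition tree.
recon1 = [a + d for a, d in zip(A1, D1)]
recon2 = [a + d + e for a, d, e in zip(A2, D2, D1)]
print(recon1 == S and recon2 == S)  # True
```

A wavelet packet transform would additionally split the detail branch d1 at each level, not only the approximation.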
The Euclidean distances between these 3D points reflect the similarity of the data curves. The scattered projections of the data curves onto the diffusion eigenvectors are then composed. Finally, we observe that as a result of the above operations we have embedded the original data into a 3-dimensional space in which data related to the AT subsurface are well separated from the N data. This 3D set of data representatives can be used as a reference set for the classification of newly arriving data. Geophysically, this means a reliable division of the studied areas into those containing AT and those not containing them (N). Testing this methodology for delineation of archaeological cavities by magnetic and gravity data analysis demonstrated the effectiveness of this approach.
References
Alperovich, L., Eppelbaum, L., Zheludev, V., Dumoulin, J., Soldovieri, F., Proto, M., Bavusi, M. and Loperte, A., 2013. A new combined wavelet methodology applied to GPR and ERT data in the Montagnole experiment (French Alps). Journal of Geophysics and Engineering, 10, No. 2, 025017, 1-17.
Averbuch, A., Hochman, K., Rabin, N., Schclar, A. and Zheludev, V., 2010. A diffusion framework for detection of moving vehicles. Digital Signal Processing, 20, No. 1, 111-122.
Averbuch, A.Z., Neittaanmäki, P. and Zheludev, V.A., 2014. Spline and Spline Wavelet Methods with Applications to Signal and Image Processing. Volume I: Periodic Splines. Springer.
Coifman, R.R. and Lafon, S., 2006. Diffusion maps. Applied and Computational Harmonic Analysis, Special Issue on Diffusion Maps and Wavelets, 21, No. 7, 5-30.
Eppelbaum, L.V., 2011. Study of magnetic anomalies over archaeological targets in urban conditions. Physics and Chemistry of the Earth, 36, No. 16, 1318-1330.
Eppelbaum, L.V., 2014a. Geophysical observations at archaeological sites: Estimating informational content. Archaeological Prospection, 21, No. 2, 25-38.
Eppelbaum, L.V., 2014b. Four Color Theorem and Applied Geophysics.
Applied Mathematics, 5, 358-366.
Eppelbaum, L.V., Alperovich, L., Zheludev, V. and Pechersky, A., 2011. Application of informational and wavelet approaches for integrated processing of geophysical data in complex environments. Proceedings of the 2011 SAGEEP Conference, Charleston, South Carolina, USA, 24, 24-60.
Eppelbaum, L.V., Khesin, B.E. and Itkis, S.E., 2001. Prompt magnetic investigations of archaeological remains in areas of infrastructure development: Israeli experience. Archaeological Prospection, 8, No. 3, 163-185.
Eppelbaum, L.V., Khesin, B.E. and Itkis, S.E., 2010. Archaeological geophysics in arid environments: Examples from Israel. Journal of Arid Environments, 74, No. 7, 849-860.
Eppelbaum, L.V., Zheludev, V. and Averbuch, A., 2014. Diffusion maps as a powerful tool for integrated geophysical field analysis to detecting hidden karst terranes. Izv. Acad. Sci. Azerb. Rep., Ser.: Earth Sciences, No. 1-2, 36-46.
Hadamard, J., 1902. Sur les problèmes aux dérivées partielles et leur signification physique. Princeton University Bulletin, 13, 49-52.
Khesin, B.E. and Eppelbaum, L.V., 1997. The number of geophysical methods required for target classification: quantitative estimation. Geoinformatics, 8, No. 1, 31-39.
Zhdanov, M.S., 2002. Geophysical Inverse Theory and Regularization Problems. Methods in Geochemistry and Geophysics, Vol. 36. Elsevier, Amsterdam.
Caroline Müllenbroich, M; McGhee, Ewan J; Wright, Amanda J; Anderson, Kurt I; Mathieson, Keith
2014-01-01
We have developed a nonlinear adaptive optics microscope utilizing a deformable membrane mirror (DMM) and demonstrated its use in compensating for system- and sample-induced aberrations. The optimum shape of the DMM was determined with a random search algorithm optimizing on either two-photon fluorescence or second harmonic signals as merit factors. We present here several strategies to overcome the photobleaching issues associated with lengthy optimization routines by adapting the search algorithm and the experimental methodology. Optimizations were performed on extrinsic fluorescent dyes, fluorescent beads loaded into organotypic tissue cultures and the intrinsic second harmonic signal of these cultures. We validate the approach of using these preoptimized mirror shapes to compile a robust look-up table that can be applied for imaging over several days and through a variety of tissues. In this way, the photon exposure of the fluorescent cells under investigation is limited to imaging. Using our look-up table approach, we show signal intensity improvement factors ranging from 1.7 to 4.1 in organotypic tissue cultures and freshly excised mouse tissue. Imaging zebrafish in vivo, we demonstrate signal improvement by a factor of 2. This methodology is easily reproducible and could be applied to many photon-starved experiments, for example fluorescence lifetime imaging, or when photobleaching is a concern.
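A schematic of random search over mirror shapes, as a hedged toy rather than the authors' algorithm: the merit function below is a hypothetical quadratic stand-in for the measured two-photon fluorescence, and the actuator count, aberration, and step size are invented for illustration.

```python
import random

random.seed(7)

N_ACTUATORS = 37  # hypothetical actuator count for a deformable membrane mirror

# Hypothetical fixed aberration; the merit signal peaks when the mirror cancels it.
ABERRATION = [random.uniform(-1, 1) for _ in range(N_ACTUATORS)]

def merit(shape):
    # Stand-in for the measured two-photon fluorescence: larger when the
    # residual wavefront error is smaller (simple quadratic model).
    err = sum((s + a) ** 2 for s, a in zip(shape, ABERRATION))
    return 1.0 / (1.0 + err)

def random_search(iters=2000, step=0.3):
    # Perturb the best-known shape; keep only candidates that improve the merit.
    best = [0.0] * N_ACTUATORS          # start from a flat mirror
    best_merit = merit(best)
    for _ in range(iters):
        cand = [v + random.gauss(0, step) for v in best]
        m = merit(cand)
        if m > best_merit:
            best, best_merit = cand, m
    return best, best_merit

shape, m = random_search()
print(m > merit([0.0] * N_ACTUATORS))  # True: optimized shape beats the flat mirror
```

Because every candidate evaluation costs real photon exposure in the microscope, the paper's look-up-table strategy amortizes this search so that live samples only see imaging light.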
Vilar, Santiago; Harpaz, Rave; Chase, Herbert S; Costanzi, Stefano; Rabadan, Raul
2011-01-01
Background Adverse drug events (ADE) cause considerable harm to patients, and consequently their detection is critical for patient safety. The US Food and Drug Administration maintains an adverse event reporting system (AERS) to facilitate the detection of ADE in drugs. Various data mining approaches have been developed that use AERS to detect signals identifying associations between drugs and ADE. The signals must then be monitored further by domain experts, which is a time-consuming task. Objective To develop a new methodology that combines existing data mining algorithms with chemical information by analysis of molecular fingerprints to enhance initial ADE signals generated from AERS, and to provide a decision support mechanism to facilitate the identification of novel adverse events. Results The method achieved a significant improvement in precision in identifying known ADE, and a more than twofold signal enhancement when applied to the ADE rhabdomyolysis. The simplicity of the method assists in highlighting the etiology of the ADE by identifying structurally similar drugs. A set of drugs with strong evidence from both AERS and molecular fingerprint-based modeling is constructed for further analysis. Conclusion The results demonstrate that the proposed methodology could be used as a pharmacovigilance decision support tool to facilitate ADE detection. PMID:21946238
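Structural similarity between drugs can be quantified, for example, with the Tanimoto (Jaccard) coefficient on molecular fingerprints, which is a common choice for fingerprint comparison; the bit sets below are invented for illustration and are not drawn from the study's data.

```python
def tanimoto(fp_a, fp_b):
    # Tanimoto (Jaccard) coefficient on sets of "on" fingerprint bits.
    inter = len(fp_a & fp_b)
    union = len(fp_a | fp_b)
    return inter / union if union else 0.0

# Hypothetical bit-set fingerprints: a known rhabdomyolysis-linked drug and
# two candidates flagged by the AERS disproportionality analysis.
known = {1, 4, 9, 17, 23, 31}
candidate_similar = {1, 4, 9, 17, 23, 42}
candidate_unrelated = {2, 8, 11, 40}

# Candidates whose structure resembles drugs with an established ADE link
# receive an enhanced signal; dissimilar candidates do not.
print(tanimoto(known, candidate_similar))    # 5/7 ≈ 0.714
print(tanimoto(known, candidate_unrelated))  # 0.0
```

Combining such a similarity score with the statistical ADE signal is the kind of fingerprint-based enhancement the abstract describes.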