Process dissociation and mixture signal detection theory.
DeCarlo, Lawrence T
2008-11-01
The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely analyzed study. The results suggest that a process other than recollection may be involved in the process dissociation procedure.
Intelligent processing of acoustic emission signals
NASA Astrophysics Data System (ADS)
Sachse, Wolfgang; Grabec, Igor
1992-07-01
Recent developments in applying neural-like signal-processing procedures for analyzing acoustic emission signals are summarized. These procedures employ a set of learning signals to develop a memory that can subsequently be utilized to process other signals to recover information about an unknown source. A majority of the current applications to process ultrasonic waveforms are based on multilayered, feed-forward neural networks, trained with some type of back-error propagation rule.
[Cognitive aging mechanism of signaling effects on the memory for procedural sentences].
Yamamoto, Hiroki; Shimada, Hideaki
2006-08-01
The aim of this study was to clarify the cognitive aging mechanism of signaling effects on the memory for procedural sentences. Participants were 60 younger adults (college students) and 60 older adults. Both age groups were assigned into two groups; half of each group was presented with procedural sentences with signals that highlighted their top-level structure and the other half with procedural sentences without them. Both groups were requested to perform the sentence arrangement task and the reconstruction task. Each task was composed of procedural sentences with or without signals. Results indicated that signaling supported changes in strategy utilization during the successive organizational processes and that changes in strategy utilization resulting from signaling improved the memory for procedural sentences. Moreover, age-related factors interfered with these signaling effects. This study clarified the cognitive aging mechanism of signaling effects in which signaling supports changes in the strategy utilization during organizational processes at encoding and this mediation promotes memory for procedural sentences, though disuse of the strategy utilization due to aging restrains their memory for procedural sentences.
A continuous dual-process model of remember/know judgments.
Wixted, John T; Mickes, Laura
2010-10-01
The dual-process theory of recognition memory holds that recognition decisions can be based on recollection or familiarity, and the remember/know procedure is widely used to investigate those 2 processes. Dual-process theory in general and the remember/know procedure in particular have been challenged by an alternative strength-based interpretation based on signal-detection theory, which holds that remember judgments simply reflect stronger memories than do know judgments. Although supported by a considerable body of research, the signal-detection account is difficult to reconcile with G. Mandler's (1980) classic "butcher-on-the-bus" phenomenon (i.e., strong, familiarity-based recognition). In this article, a new signal-detection model is proposed that does not deny either the validity of dual-process theory or the possibility that remember/know judgments can-when used in the right way-help to distinguish between memories that are largely recollection based from those that are largely familiarity based. It does, however, agree with all prior signal-detection-based critiques of the remember/know procedure, which hold that, as it is ordinarily used, the procedure mainly distinguishes strong memories from weak memories (not recollection from familiarity).
NASA Astrophysics Data System (ADS)
Fishman, M. M.
1985-01-01
The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
The detection and analysis of point processes in biological signals
NASA Technical Reports Server (NTRS)
Anderson, D. J.; Correia, M. J.
1977-01-01
A pragmatic approach to the detection and analysis of discrete events in biomedical signals is taken. Examples from both clinical and basic research are provided. Introductory sections discuss not only discrete events which are easily extracted from recordings by conventional threshold detectors but also events embedded in other information carrying signals. The primary considerations are factors governing event-time resolution and the effects limits to this resolution have on the subsequent analysis of the underlying process. The analysis portion describes tests for qualifying the records as stationary point processes and procedures for providing meaningful information about the biological signals under investigation. All of these procedures are designed to be implemented on laboratory computers of modest computational capacity.
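The conventional threshold-detector step described above can be illustrated with a small sketch. This is a hypothetical reconstruction in Python, not the authors' implementation; the refractory-period parameter and the synthetic spike train are invented for the example.

```python
import numpy as np

def detect_events(signal, threshold, refractory=5):
    """Return sample indices where the signal crosses the threshold upward.

    A simple refractory period (in samples) suppresses re-triggering on the
    same event; its length is one of the factors limiting event-time
    resolution, as the abstract notes.
    """
    events = []
    last = -refractory
    above = signal >= threshold
    for i in range(1, len(signal)):
        if above[i] and not above[i - 1] and i - last >= refractory:
            events.append(i)
            last = i
    return np.array(events)

# Synthetic recording: low-level noise plus two clear discrete events.
rng = np.random.default_rng(0)
x = 0.1 * rng.standard_normal(200)
x[50] = 1.0
x[120] = 1.0
events = detect_events(x, threshold=0.5)   # the two injected spike positions
```

Events embedded in other information-carrying signals, as discussed in the abstract, would need matched filtering or similar preprocessing before this thresholding step.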
Test Operations Procedure (TOP) 5-2-521 Pyrotechnic Shock Test Procedures
2007-11-20
Clipping will produce a signal that resembles a square wave. (2) Filters are used to limit the frequency bandwidth of the signal. Low pass filters...video systems permit observation of explosive items under test. c. Facilities to perform non-destructive inspections such as x-ray, ultrasonic, magna...test. (1) Accelerometers (2) Signal Conditioners (3) Digital Recording System (4) Data Processing System with hardcopy output
Transient high frequency signal estimation: A model-based processing approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, F.L.
1985-03-22
By utilizing the superposition property of linear systems, a method of estimating the incident signal from reflective nondispersive data is developed. One of the basic merits of this approach is that the reflections are removed by direct application of a Wiener-type estimation algorithm after the appropriate input is synthesized. The structure of the nondispersive signal model is well documented, and thus its credence is established. The model is stated, and more effort is devoted to practical methods of estimating the model parameters. Though a general approach was developed for obtaining the reflection weights, a simpler approach was employed here, since a fairly good reflection model is available. The technique essentially consists of calculating ratios of the autocorrelation function at lag zero and at the lag where the incident signal and first reflection coincide. We initially performed our processing procedure on a measurement of a single signal. Multiple applications of the processing procedure were required when we applied the reflection removal technique to a measurement containing information from the interaction of two physical phenomena. All processing was performed using SIG, an interactive signal processing package. One of the many consequences of using SIG was that repetitive operations were, for the most part, automated. A custom menu was designed to perform the deconvolution process.
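The autocorrelation-ratio step can be sketched as follows. Python is used for illustration; the pulse shape, reflection weight `a`, and delay `d` are invented for the example and are not taken from the report.

```python
import numpy as np

def reflection_ratio(x, lag):
    """Ratio R(lag)/R(0) of the signal's autocorrelation.

    For x(t) = s(t) + a*s(t - lag), with a short incident pulse s and no
    other overlapping reflections, this ratio equals a / (1 + a**2).
    """
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

rng = np.random.default_rng(1)
s = np.zeros(500)
s[:20] = rng.standard_normal(20)      # short incident pulse (synthetic)
a, d = 0.5, 100                        # hypothetical reflection weight and delay
x = s + a * np.roll(s, d)

r = reflection_ratio(x, d)                       # a / (1 + a**2) = 0.4 here
a_hat = (1 - np.sqrt(1 - 4 * r**2)) / (2 * r)    # invert for the weight
```

Once `a_hat` is known, the reflection can be subtracted (or deconvolved) from the record to recover the incident signal.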
Signal-Processing Algorithm Development for the ACLAIM Sensor
NASA Technical Reports Server (NTRS)
vonLaven, Scott
1995-01-01
Methods for further minimizing the risk by making use of previous lidar observations were investigated. EOFs are likely to play an important role in these methods, and a procedure for extracting EOFs from data has been implemented. The new processing methods involving EOFs could range from extrapolation, as discussed, to more complicated statistical procedures for maintaining low unstart risk.
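EOF extraction is commonly done via the singular value decomposition of the mean-removed data matrix. The sketch below (Python; the synthetic profiles are invented and this is not the ACLAIM code) shows the idea:

```python
import numpy as np

def extract_eofs(obs, n_modes):
    """Empirical orthogonal functions (EOFs) of a set of observations.

    Rows of `obs` are individual profiles; the EOFs are the leading right
    singular vectors of the mean-removed data matrix, ordered by the
    fraction of variance they explain.
    """
    anom = obs - obs.mean(axis=0)
    _, s, vt = np.linalg.svd(anom, full_matrices=False)
    var_frac = s**2 / np.sum(s**2)
    return vt[:n_modes], var_frac[:n_modes]

# Synthetic data: one dominant spatial pattern with random amplitudes.
rng = np.random.default_rng(6)
z = np.linspace(0, 1, 80)
mode = np.sin(np.pi * z)
amps = rng.standard_normal((200, 1))
obs = amps * mode + 0.01 * rng.standard_normal((200, 80))

eofs, var_frac = extract_eofs(obs, 1)   # leading EOF recovers the pattern
```

Projecting new observations onto the leading EOFs gives the low-dimensional coefficients that extrapolation or statistical procedures would then operate on.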
NASA Astrophysics Data System (ADS)
Rumsewicz, Michael
1994-04-01
In this paper, we examine call completion performance, rather than message throughput, in a Common Channel Signaling network in which the processing resources, and not the transmission resources, of a Signaling Transfer Point (STP) are overloaded. Specifically, we perform a transient analysis, via simulation, of a network consisting of a single Central Processor-based STP connecting many local exchanges. We consider the efficacy of using the Transfer Controlled (TFC) procedure when the network call attempt rate exceeds the processing capability of the STP. We find the following: (1) the success of the control depends critically on the rate at which TFCs are sent; (2) use of the TFC procedure in the event of processor overload can provide reasonable call completion rates.
Inverse analysis of water profile in starch by non-contact photopyroelectric method
NASA Astrophysics Data System (ADS)
Frandas, A.; Duvaut, T.; Paris, D.
2000-07-01
The photopyroelectric (PPE) method in a non-contact configuration was proposed to study water migration in starch sheets used for biodegradable packaging. A 1-D theoretical model was developed, allowing the study of samples having a water profile characterized by an arbitrary continuous function. An experimental setup was designed for this purpose, which included the choice of excitation source, detection of signals, signal and data processing, and cells for conditioning the samples. We report here the development of an inversion procedure allowing for the determination of the parameters that influence the PPE signal. This procedure led to the optimization of experimental conditions in order to identify the parameters related to the water profile in the sample, and to monitor the dynamics of the process.
Two-dimensional signal processing with application to image restoration
NASA Technical Reports Server (NTRS)
Assefi, T.
1974-01-01
A recursive technique for modeling and estimating a two-dimensional signal contaminated by noise is presented. A two-dimensional signal is assumed to be an undistorted picture, where the noise introduces the distortion. Both the signal and the noise are assumed to be wide-sense stationary processes with known statistics. Thus, to estimate the two-dimensional signal is to enhance the picture. The picture representing the two-dimensional signal is converted to one dimension by scanning the image horizontally one line at a time. The scanner output becomes a nonstationary random process due to the periodic nature of the scanner operation. Procedures to obtain a dynamical model corresponding to the autocorrelation function of the scanner output are derived. Utilizing the model, a discrete Kalman estimator is designed to enhance the image.
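For a single scanned line, the estimator reduces to a scalar Kalman filter driven by an autoregressive signal model. The Python sketch below assumes a simple AR(1) model standing in for the autocorrelation-matched dynamical model of the paper; all parameters are illustrative.

```python
import numpy as np

def kalman_scanline(y, phi, q, r):
    """Scalar Kalman filter for an AR(1) signal observed in white noise.

    State model:  x[k] = phi * x[k-1] + w,  Var(w) = q
    Observation:  y[k] = x[k] + v,          Var(v) = r
    """
    xhat, p = 0.0, q / (1.0 - phi**2)     # stationary prior
    est = np.empty(len(y))
    for k, yk in enumerate(y):
        # predict
        xhat = phi * xhat
        p = phi**2 * p + q
        # update with the Kalman gain
        g = p / (p + r)
        xhat = xhat + g * (yk - xhat)
        p = (1.0 - g) * p
        est[k] = xhat
    return est

# Synthetic "scanline": AR(1) signal plus observation noise.
rng = np.random.default_rng(2)
n, phi, q, r = 2000, 0.95, 0.1, 1.0
x = np.empty(n); x[0] = 0.0
for k in range(1, n):
    x[k] = phi * x[k - 1] + rng.normal(scale=np.sqrt(q))
y = x + rng.normal(scale=np.sqrt(r), size=n)

est = kalman_scanline(y, phi, q, r)
mse_raw = np.mean((y - x)**2)    # error of the raw scan
mse_est = np.mean((est - x)**2)  # error after filtering (smaller)
```

In the paper's setting, the scanner's periodic operation makes the process nonstationary, so the model parameters would vary with position along the scan rather than being constant as here.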
Statistical analysis and digital processing of the Mössbauer spectra
NASA Astrophysics Data System (ADS)
Prochazka, Roman; Tucek, Pavel; Tucek, Jiri; Marek, Jaroslav; Mashlan, Miroslav; Pechousek, Jiri
2010-02-01
This work is focused on using statistical methods and developing filtration procedures for signal processing in Mössbauer spectroscopy. Statistical tools for noise filtering in measured spectra are used in many scientific areas. The use of a purely statistical approach to the filtration of accumulated Mössbauer spectra is described. In Mössbauer spectroscopy, the noise can be considered a Poisson statistical process with a Gaussian distribution for high numbers of observations. This noise is a superposition of the non-resonant photon counting with electronic noise (from γ-ray detection and discrimination units) and of the velocity system quality, which can be characterized by the velocity nonlinearities. The possibility of a noise-reducing process using a newly designed statistical filter procedure is described. This mathematical procedure improves the signal-to-noise ratio and thus makes it easier to determine the hyperfine parameters of the given Mössbauer spectra. The filter procedure is based on a periodogram method that makes it possible to identify the statistically significant components in the spectral domain. The significance level for these components is then feedback-controlled using the correlation coefficient test results. The theoretical correlation coefficient level corresponding to the spectrum resolution is estimated. The correlation coefficient test is based on a comparison of the theoretical and experimental correlation coefficients given by the Spearman method. The correctness of this solution was analyzed by a series of statistical tests and confirmed by many spectra measured with increasing statistical quality for a given sample (absorber). The effect of this filter procedure depends on the signal-to-noise ratio, and the applicability of the method is subject to binding conditions.
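The periodogram-based selection of significant components can be sketched as below. The median-based threshold here is a simple stand-in for the paper's feedback-controlled Spearman correlation test, and the noisy sinusoid stands in for an accumulated spectrum; all parameters are illustrative.

```python
import numpy as np

def periodogram_filter(y, factor=10.0):
    """Zero all spectral components below a data-driven threshold.

    The threshold is a multiple of the median periodogram ordinate, so
    only statistically dominant components survive, which improves the
    signal-to-noise ratio of the reconstructed signal.
    """
    Y = np.fft.rfft(y)
    power = np.abs(Y)**2
    Y[power < factor * np.median(power)] = 0.0
    return np.fft.irfft(Y, n=len(y))

# Synthetic record: one spectral line buried in white noise.
rng = np.random.default_rng(3)
t = np.arange(1024)
clean = np.sin(2 * np.pi * 5 * t / 1024)
y = clean + 0.5 * rng.standard_normal(1024)

filtered = periodogram_filter(y)   # closer to `clean` than `y` is
```

The real procedure additionally adapts the significance level until the retained components are consistent with the spectrum's resolution.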
A masked least-squares smoothing procedure for artifact reduction in scanning-EMG recordings.
Corera, Íñigo; Eciolaza, Adrián; Rubio, Oliver; Malanda, Armando; Rodríguez-Falces, Javier; Navallas, Javier
2018-01-11
Scanning-EMG is an electrophysiological technique in which the electrical activity of the motor unit is recorded at multiple points along a corridor crossing the motor unit territory. Correct analysis of the scanning-EMG signal requires prior elimination of interference from nearby motor units. Although the traditional processing based on the median filtering is effective in removing such interference, it distorts the physiological waveform of the scanning-EMG signal. In this study, we describe a new scanning-EMG signal processing algorithm that preserves the physiological signal waveform while effectively removing interference from other motor units. To obtain a cleaned-up version of the scanning signal, the masked least-squares smoothing (MLSS) algorithm recalculates and replaces each sample value of the signal using a least-squares smoothing in the spatial dimension, taking into account the information of only those samples that are not contaminated with activity of other motor units. The performance of the new algorithm with simulated scanning-EMG signals is studied and compared with the performance of the median algorithm and tested with real scanning signals. Results show that the MLSS algorithm distorts the waveform of the scanning-EMG signal much less than the median algorithm (approximately 3.5 dB gain), being at the same time very effective at removing interference components. Graphical Abstract The raw scanning-EMG signal (left figure) is processed by the MLSS algorithm in order to remove the artifact interference. Firstly, artifacts are detected from the raw signal, obtaining a validity mask (central figure) that determines the samples that have been contaminated by artifacts. Secondly, a least-squares smoothing procedure in the spatial dimension is applied to the raw signal using the not contaminated samples according to the validity mask. The resulting MLSS-processed scanning-EMG signal (right figure) is clean of artifact interference.
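The idea of least-squares smoothing restricted to uncontaminated samples can be sketched as follows. This is an illustrative Python reconstruction, not the published MLSS code: the window length, polynomial degree, and toy signal are invented, whereas the real algorithm operates on scanning-EMG corridors in the spatial dimension.

```python
import numpy as np

def mlss(signal, valid, window=5, degree=2):
    """Masked least-squares smoothing (sketch of the MLSS idea).

    Each sample is replaced by the value at its position of a low-degree
    polynomial fitted, in the least-squares sense, to the uncontaminated
    samples (valid mask True) inside a centred window.
    """
    n = len(signal)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        idx = np.arange(lo, hi)
        m = valid[lo:hi]
        if m.sum() <= degree:          # too few clean samples: keep as is
            out[i] = signal[i]
            continue
        coef = np.polyfit(idx[m] - i, signal[lo:hi][m], degree)
        out[i] = np.polyval(coef, 0.0)
    return out

# Smooth waveform with two artifact samples flagged in the validity mask.
x = np.linspace(-1, 1, 101)
clean = x**2
sig = clean.copy()
sig[30] += 5.0                         # interference from another motor unit
sig[60] -= 5.0
valid = np.ones(101, dtype=bool)
valid[[30, 60]] = False

rec = mlss(sig, valid)                 # artifacts removed, waveform preserved
```

Because the masked samples are excluded from each local fit, the artifact spikes do not bias the reconstruction, which is the advantage the paper reports over median filtering.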
Dual-process theory and signal-detection theory of recognition memory.
Wixted, John T
2007-01-01
Two influential models of recognition memory, the unequal-variance signal-detection model and a dual-process threshold/detection model, accurately describe the receiver operating characteristic, but only the latter model can provide estimates of recollection and familiarity. Such estimates often accord with those provided by the remember-know procedure, and both methods are now widely used in the neuroscience literature to identify the brain correlates of recollection and familiarity. However, in recent years, a substantial literature has accumulated directly contrasting the signal-detection model against the threshold/detection model, and that literature is almost unanimous in its endorsement of signal-detection theory. A dual-process version of signal-detection theory implies that individual recognition decisions are not process pure, and it suggests new ways to investigate the brain correlates of recognition memory. ((c) 2007 APA, all rights reserved).
Pulse-dose radiofrequency treatment in pain management-initial experience.
Ojango, Christine; Raguso, Mario; Fiori, Roberto; Masala, Salvatore
2018-05-01
Radiofrequency procedures have been used for treating various chronic pain conditions for decades. These minimally invasive percutaneous treatments employ an alternating electrical current with oscillating radiofrequency wavelengths to eliminate or alter pain signals from the targeted site. The aim of the continuous radiofrequency procedure is to increase the temperature sufficiently to create an irreversible thermal lesion on nerve fibres and thus permanently interrupt pain signals. The pulsed radiofrequency procedure utilises short pulses of radiofrequency current with intervals of longer pauses to avert a temperature increase to the level of permanent tissue damage. The goal of these pulses is to alter the processing of pain signals, but to avoid relevant structural damage to nerve fibres, as seen in the continuous radiofrequency procedure. The pulse-dose radiofrequency procedure is a technical improvement of the pulsed radiofrequency technique in which the delivery mode of the current is adapted. During the pulse-dose radiofrequency procedure thermal damage is avoided. In addition, the amplitude and width of the consecutive pulses are kept the same. The method ensures that each delivered pulse keeps the same characteristics and therefore the dose is similar between patients. The current review outlines the pulse-dose radiofrequency procedure and presents our institution's chronic pain management studies.
Algebraic signal processing theory: 2-D spatial hexagonal lattice.
Püschel, Markus; Rötteler, Martin
2007-06-01
We develop the framework for signal processing on a spatial, or undirected, 2-D hexagonal lattice for both an infinite and a finite array of signal samples. This framework includes the proper notions of z-transform, boundary conditions, filtering or convolution, spectrum, frequency response, and Fourier transform. In the finite case, the Fourier transform is called discrete triangle transform. Like the hexagonal lattice, this transform is nonseparable. The derivation of the framework makes it a natural extension of the algebraic signal processing theory that we recently introduced. Namely, we construct the proper signal models, given by polynomial algebras, bottom-up from a suitable definition of hexagonal space shifts using a procedure provided by the algebraic theory. These signal models, in turn, then provide all the basic signal processing concepts. The framework developed in this paper is related to Mersereau's early work on hexagonal lattices in the same way as the discrete cosine and sine transforms are related to the discrete Fourier transform-a fact that will be made rigorous in this paper.
Wang, Xuezhi; Huang, Xiaotao; Suvorova, Sofia; Moran, Bill
2018-01-01
Golay complementary waveforms can, in theory, yield radar returns of high range resolution with essentially zero sidelobes. In practice, when deployed conventionally, while high signal-to-noise ratios can be achieved for static target detection, significant range sidelobes are generated by target returns of nonzero Doppler causing unreliable detection. We consider signal processing techniques using Golay complementary waveforms to improve radar detection performance in scenarios involving multiple nonzero Doppler targets. A signal processing procedure based on an existing, so called, Binomial Design algorithm that alters the transmission order of Golay complementary waveforms and weights the returns is proposed in an attempt to achieve an enhanced illumination performance. The procedure applies one of three proposed waveform transmission ordering algorithms, followed by a pointwise nonlinear processor combining the outputs of the Binomial Design algorithm and one of the ordering algorithms. The computational complexity of the Binomial Design algorithm and the three ordering algorithms are compared, and a statistical analysis of the performance of the pointwise nonlinear processing is given. Estimation of the areas in the Delay–Doppler map occupied by significant range sidelobes for given targets are also discussed. Numerical simulations for the comparison of the performances of the Binomial Design algorithm and the three ordering algorithms are presented for both fixed and randomized target locations. The simulation results demonstrate that the proposed signal processing procedure has a better detection performance in terms of lower sidelobes and higher Doppler resolution in the presence of multiple nonzero Doppler targets compared to existing methods. PMID:29324708
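The defining property of Golay complementary waveforms, autocorrelations that sum to an ideal impulse with zero sidelobes, can be checked directly. The length-64 pair below comes from the standard doubling construction, not from the paper's transmission-ordering designs.

```python
import numpy as np

def golay_pair(m):
    """Golay complementary pair of length 2**m via the doubling
    construction (a, b) -> (a|b, a|-b), starting from a = b = [1]."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(6)                     # length-64 complementary pair
ra = np.correlate(a, a, mode="full")     # individual autocorrelations
rb = np.correlate(b, b, mode="full")     # (each has nonzero sidelobes)
rsum = ra + rb                           # ideal spike: 2N at lag 0, else 0
```

Nonzero-Doppler targets break this cancellation between the two pulses, which is why the paper reorders and weights the transmissions to push the residual sidelobes away from the Doppler region of interest.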
Automated smoother for the numerical decoupling of dynamics models.
Vilela, Marco; Borges, Carlos C H; Vinga, Susana; Vasconcelos, Ana Tereza R; Santos, Helena; Voit, Eberhard O; Almeida, Jonas S
2007-08-21
Structure identification of dynamic models for complex biological systems is the cornerstone of their reverse engineering. Biochemical Systems Theory (BST) offers a particularly convenient solution because its parameters are kinetic-order coefficients which directly identify the topology of the underlying network of processes. We have previously proposed a numerical decoupling procedure that allows the identification of multivariate dynamic models of complex biological processes. While described here within the context of BST, this procedure has a general applicability to signal extraction. Our original implementation relied on artificial neural networks (ANN), which caused slight, undesirable bias during the smoothing of the time courses. As an alternative, we propose here an adaptation of the Whittaker's smoother and demonstrate its role within a robust, fully automated structure identification procedure. In this report we propose a robust, fully automated solution for signal extraction from time series, which is the prerequisite for the efficient reverse engineering of biological systems models. The Whittaker's smoother is reformulated within the context of information theory and extended by the development of adaptive signal segmentation to account for heterogeneous noise structures. The resulting procedure can be used on arbitrary time series with a nonstationary noise process; it is illustrated here with metabolic profiles obtained from in-vivo NMR experiments. The smoothed solution that is free of parametric bias permits differentiation, which is crucial for the numerical decoupling of systems of differential equations. 
The method is applicable in signal extraction from time series with nonstationary noise structure and can be applied in the numerical decoupling of system of differential equations into algebraic equations, and thus constitutes a rather general tool for the reverse engineering of mechanistic model descriptions from multivariate experimental time series.
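The core Whittaker smoother, without the adaptive segmentation and information-theoretic tuning the paper adds, can be written in a few lines with sparse matrices. The penalty weight λ and the synthetic decay profile below are illustrative.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def whittaker(y, lam=100.0, d=2):
    """Whittaker smoother: minimise ||y - z||^2 + lam * ||D^d z||^2.

    The penalty on d-th finite differences of z trades fidelity against
    smoothness; the smooth result can be differentiated numerically,
    which is what the decoupling procedure requires.
    """
    n = len(y)
    D = sparse.eye(n, format="csr")
    for _ in range(d):
        D = D[1:] - D[:-1]               # finite-difference operator
    A = sparse.eye(n, format="csr") + lam * (D.T @ D)
    return spsolve(A.tocsc(), y)

# Synthetic metabolite-like time course with measurement noise.
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 200)
clean = np.exp(-3 * t)
y = clean + 0.05 * rng.standard_normal(200)

z = whittaker(y, lam=50.0)   # smoothed, parametric-bias-free estimate
```

With the time courses smoothed, each differential equation in the system can be treated as an algebraic relation between the smoothed values and their numerical derivatives, which is the decoupling step the abstract describes.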
Zenner, Hans P; Pfister, Markus; Birbaumer, Niels
2006-12-01
Acquired centralized tinnitus (ACT) is the most frequent form of chronic tinnitus. The proposed ACT sensitization (ACTS) model assumes a peripheral initiation of tinnitus whereby sensitizing signals from the auditory system establish new neuronal connections in the brain. Consequently, permanent neurophysiological malfunction within the information-processing modules results. Successful treatment has to target this malfunctioning information processing. We present in this study the neurophysiological and psychophysiological aspects of a recently suggested neurophysiological model, which may explain the symptoms caused by central cognitive tinnitus sensitization. Although conditioned reflexes, as a causal agent of chronic tinnitus, respond to extinction procedures, sensitization may initiate a vicious circle of overexcitation of the auditory system, resisting extinction and habituation. We used the literature databases indicated under "References", covering English and German works. For the ACTS model we extracted neurophysiological hypotheses of auditory stimulus processing and the neuronal connections of the central auditory system with other brain regions to explain the malfunctions of auditory information processing. The model does not assume information-processing changes specific to tinnitus but treats the processing of tinnitus signals comparably with the processing of other external stimuli. The model uses the extensive knowledge available on the sensitization of perception and memory processes and highlights the similarities of tinnitus with central neuropathic pain. Quality, validity, and comparability of the extracted data were evaluated by peer review. Statistical techniques were not used. According to the tinnitus sensitization model, a tinnitus signal originates (as a type I-IV tinnitus) in the cochlea.
In the brain, concerned with perception and cognition, the 1) conditioned associations, as postulated by the tinnitus model of Jastreboff, and the 2) unconditioned sensitized stimulus responses, as postulated in the present ACTS model, are actively connected with and attributed to the tinnitus signal. Attention to the tinnitus constitutes a typical undesired sensitized response. Some of the tinnitus-associated attributes may be called essential, unconditioned sensitization attributes. By a process called facilitation, the tinnitus' essential attributes are suggested to activate the tinnitus response. The result is an undesired increase in responsivity, such as an increase in attentional focus to the eliciting tinnitus stimulus. The mechanisms underlying sensitization are known as a specific nonassociative learning process producing a structural fixation of long-term facilitation at the synaptic level. This sensitization model may be important for the development of a sensitization-specific treatment if extinction procedures alone do not lead to satisfactory outcome. Inasmuch as this model considers sensitization as a nonassociative learning process based on cortical plasticity, it is reasonable to assume that this learning process can be altered by counteracting learning procedures. These counteracting learning procedures may consist of tinnitus-specific cognitive and behavioral procedures.
Automated Monitoring with a BSP Fault-Detection Test
NASA Technical Reports Server (NTRS)
Bickford, Randall L.; Herzog, James P.
2003-01-01
The figure schematically illustrates a method and procedure for automated monitoring of an asset, as well as a hardware- and-software system that implements the method and procedure. As used here, asset could signify an industrial process, power plant, medical instrument, aircraft, or any of a variety of other systems that generate electronic signals (e.g., sensor outputs). In automated monitoring, the signals are digitized and then processed in order to detect faults and otherwise monitor operational status and integrity of the monitored asset. The major distinguishing feature of the present method is that the fault-detection function is implemented by use of a Bayesian sequential probability (BSP) technique. This technique is superior to other techniques for automated monitoring because it affords sensitivity, not only to disturbances in the mean values, but also to very subtle changes in the statistical characteristics (variance, skewness, and bias) of the monitored signals.
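A Wald-style sequential test of this kind can be sketched as follows. Python is used for illustration; the mean-shift fault model, boundaries, and data are invented, and this is not the BSP implementation described in the article.

```python
import numpy as np

def sprt(x, mu0=0.0, mu1=1.0, sigma=1.0, alpha=1e-5, beta=1e-5):
    """Wald sequential probability ratio test for a shift in the mean.

    Accumulates the log-likelihood ratio of H1 (mean mu1, 'fault') versus
    H0 (mean mu0, 'normal') sample by sample and stops at the first
    boundary crossing; the boundaries follow from the allowed false-alarm
    rate alpha and missed-alarm rate beta.
    """
    upper = np.log((1 - beta) / alpha)
    lower = np.log(beta / (1 - alpha))
    llr = 0.0
    for k, xi in enumerate(x):
        llr += (mu1 - mu0) * (xi - (mu0 + mu1) / 2.0) / sigma**2
        if llr >= upper:
            return "fault", k + 1       # samples used to decide
        if llr <= lower:
            return "normal", k + 1
    return "undecided", len(x)

rng = np.random.default_rng(5)
verdict_ok, _ = sprt(rng.normal(0.0, 1.0, 500))        # healthy sensor stream
verdict_bad, n_used = sprt(rng.normal(1.0, 1.0, 500))  # mean-shifted stream
```

Because the decision accumulates evidence over samples, such sequential tests can flag subtle statistical changes long before a fixed threshold on the raw signal would trip, which is the sensitivity advantage the article attributes to the BSP technique.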
Parallel Processing with Digital Signal Processing Hardware and Software
NASA Technical Reports Server (NTRS)
Swenson, Cory V.
1995-01-01
The assembling and testing of a parallel processing system is described which will allow a user to move a Digital Signal Processing (DSP) application from the design stage to the execution/analysis stage through the use of several software tools and hardware devices. The system will be used to demonstrate the feasibility of the Algorithm To Architecture Mapping Model (ATAMM) dataflow paradigm for static multiprocessor solutions of DSP applications. The individual components comprising the system are described followed by the installation procedure, research topics, and initial program development.
Techniques of EMG signal analysis: detection, processing, classification and applications
Hussain, M.S.; Mohd-Yasin, F.
2006-01-01
Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human-computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis to provide efficient and effective ways of understanding the signal and its nature. We further point out some of the hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human-computer interaction. A comparison study is also given to show the performance of various EMG signal analysis methods. This paper provides researchers a good understanding of EMG signals and their analysis procedures. This knowledge will help them develop more powerful, flexible, and efficient applications. PMID:16799694
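Typical time-domain features feeding such classifiers can be computed in a few lines. In this Python sketch the 50 Hz test tone is a stand-in for a real EMG epoch, and the feature set is a common subset, not a list taken from the paper.

```python
import numpy as np

def emg_features(x):
    """A few time-domain features commonly used in EMG analysis.

    Mean absolute value (MAV), root mean square (RMS), and the
    zero-crossing count are standard inputs to classifiers for prosthetic
    control and grasp recognition.
    """
    mav = np.mean(np.abs(x))
    rms = np.sqrt(np.mean(x**2))
    zc = int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:])))
    return {"mav": mav, "rms": rms, "zero_crossings": zc}

# Hypothetical epoch: 1 s of a unit-amplitude 50 Hz tone at 1 kHz sampling.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
epoch = np.sin(2 * np.pi * 50 * t)

feats = emg_features(epoch)   # rms near 1/sqrt(2), ~100 zero crossings
```

Real pipelines compute these features on short sliding windows and pass the resulting feature vectors to the classifier.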
Speed and Accuracy in the Processing of False Statements About Semantic Information.
ERIC Educational Resources Information Center
Ratcliff, Roger
1982-01-01
A standard reaction time procedure and a response signal procedure were used on data from eight experiments on semantic verifications. Results suggest that simple models of the semantic verification task that assume a single yes/no dimension on which discrimination is made are not correct. (Author/PN)
[Automated processing of electrophysiologic signals].
Korenevskiĭ, N A; Gubanov, V V
1995-01-01
The paper outlines a diagram of a multichannel analyzer of electrophysiological signals which are significantly non-stationary (such as those of electroencephalograms, myograms, etc.), using a method based on a ranging procedure over change-over points, which may be points of inflection, impaired locality, minima, maxima, discontinuity, etc.
A cloud masking algorithm for EARLINET lidar systems
NASA Astrophysics Data System (ADS)
Binietoglou, Ioannis; Baars, Holger; D'Amico, Giuseppe; Nicolae, Doina
2015-04-01
Cloud masking is an important first step in any aerosol lidar processing chain, as most data processing algorithms can only be applied to cloud-free observations. Up to now, the selection of a cloud-free time interval for data processing has typically been performed manually, and this is one of the outstanding problems for the automatic processing of lidar data in networks such as EARLINET. In this contribution we present initial developments of a cloud masking algorithm that permits the selection of appropriate time intervals for lidar data processing based on uncalibrated lidar signals. The algorithm is based on a signal normalization procedure using the range of observed values of lidar returns, designed to work with different lidar systems with minimal user input. This normalization procedure can be applied to measurement periods of only a few hours, even if no suitable cloud-free interval exists, and thus can be used even when only a short period of lidar measurements is available. Clouds are detected based on a combination of criteria, including the magnitude of the normalized lidar signal and time-space edge detection performed using the Sobel operator. In this way the algorithm avoids misclassification of strong aerosol layers as clouds. Cloud detection is performed using the highest available time and vertical resolution of the lidar signals, allowing the effective detection of low-level clouds (e.g. cumulus humilis). Special attention is given to suppressing false cloud detection due to signal noise, which can affect the algorithm's performance, especially during daytime. In this contribution we present the details of the algorithm, the effect of lidar characteristics (space-time resolution, available wavelengths, signal-to-noise ratio) on detection performance, and highlight the current strengths and limitations of the algorithm using lidar scenes from different lidar systems at different locations across Europe.
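The combination of a normalized-signal threshold with Sobel edge detection can be sketched as follows. Python with NumPy/SciPy is used for illustration; the thresholds and the synthetic scene are invented, and the actual algorithm operates on range-corrected EARLINET signals at full time and vertical resolution.

```python
import numpy as np
from scipy import ndimage

def cloud_mask(rcs, signal_thresh=0.8, edge_thresh=0.5):
    """Flag strong, sharp-edged regions of a (time x range) signal array.

    The signal is normalised by its observed range of values, so one
    threshold can serve differently calibrated systems. Strong regions
    are kept only if they also contain sharp Sobel edges, so smooth but
    strong aerosol layers are not misclassified as clouds.
    """
    norm = (rcs - rcs.min()) / (rcs.max() - rcs.min() + 1e-12)
    strong = norm > signal_thresh
    edges = np.hypot(ndimage.sobel(norm, axis=0), ndimage.sobel(norm, axis=1))
    labels, n = ndimage.label(strong)
    mask = np.zeros_like(strong)
    for lab in range(1, n + 1):
        region = labels == lab
        if edges[region].max() > edge_thresh:   # sharp-edged: a cloud
            mask |= region
    return mask

# Synthetic scene: a sharp-edged cloud and a smooth, strong aerosol blob.
scene = np.zeros((100, 100))
scene[20:25, 60:80] = 1.0                                      # cloud
r, c = np.mgrid[0:100, 0:100]
scene += np.exp(-((r - 70)**2 + (c - 30)**2) / (2 * 12.0**2))  # aerosol
mask = cloud_mask(scene)   # flags the cloud, not the aerosol layer
```

The edge criterion is what distinguishes the two strong regions here: the cloud's boundary produces large Sobel responses, while the aerosol blob's gradients stay below the edge threshold.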
New method of control of tooth whitening
NASA Astrophysics Data System (ADS)
Angelov, I.; Mantareva, V.; Gisbrecht, A.; Valkanov, S.; Uzunov, Tz.
2010-10-01
New methods of controlling tooth bleaching stages through simultaneous measurement of reflected light and fluorescence signals are proposed. It is shown that the bleaching process leads to significant changes in the intensity of the scattered signal and also in the shape and intensity of the fluorescence spectra. Experimental data illustrate that the bleaching process causes essential changes in tooth discoloration in as little as 8-10 min from the beginning of the application procedure. Continuing the treatment beyond this point is unnecessary; moreover, the probability of enamel destruction increases considerably. The proposed optical feedback monitoring of the tooth surface is a basis for the development of a practical setup to control the duration of the bleaching procedure.
Instrumentation & Data Acquisition System (DAS) Engineer
NASA Technical Reports Server (NTRS)
Jackson, Markus Deon
2015-01-01
The primary job of an Instrumentation and Data Acquisition System (DAS) Engineer is to properly measure the physical phenomena of hardware using appropriate instrumentation and DAS equipment designed to record data during a specified test of the hardware. A DAS includes a CPU or processor; a data storage device such as a hard drive; a data communication bus such as Universal Serial Bus; software to control DAS processes such as calibration, recording, and processing of data; signal conditioning amplifiers; and certain sensors for specified measurements. My internship responsibilities have included testing and adjusting Pacific Instruments Model 9355 signal conditioning amplifiers, and writing and performing checkout and calibration procedures while learning the basics of instrumentation.
Digital methods of recording color television images on film tape
NASA Astrophysics Data System (ADS)
Krivitskaya, R. Y.; Semenov, V. M.
1985-04-01
Three methods are now available for recording color television images on film tape, directly or after appropriate signal processing. Conventional recording of images from the screens of three kinescopes with synthetic-crystal face plates is still the most effective for high fidelity. This method was improved by digital preprocessing of the brightness and color-difference signals. Frame-by-frame storage of these signals in memory in digital form is followed by gamma and aperture correction and electronic correction of crossover distortions in the color layers of the film, with fixing in accordance with specific emulsion procedures. The newer method of recording color television images with line arrays of light-emitting diodes involves dichroic superposing mirrors and a movable scanning mirror. This method allows the use of standard movie cameras, simplifies interlacing-to-linewise conversion and the mechanical equipment, and lengthens exposure time while it shortens recording time. The latest image-transform method requires an audio-video recorder, a memory disk, a digital computer, and a decoder. The nine-step procedure includes preprocessing the total color television signal with reduction of noise level and time errors, followed by frame-frequency conversion and setting the number of lines. The total signal is then resolved into its brightness and color-difference components, and phase errors and image blurring are also reduced. After extraction of R, G, B signals and colorimetric matching of the TV camera and film tape, the simultaneous R, G, B signals are converted from interlacing to sequential triads of color-quotient frames with linewise scanning at triple frequency. Color-quotient signals are recorded with an electron beam on a smoothly moving black-and-white film tape under vacuum. While digital techniques improve signal quality and simplify process control, not requiring stabilization of circuits, image processing is still analog.
siGnum: graphical user interface for EMG signal analysis.
Kaur, Manvinder; Mathur, Shilpi; Bhatia, Dinesh; Verma, Suresh
2015-01-01
Electromyography (EMG) signals, which represent the electrical activity of muscles, can be used for various clinical and biomedical applications. These are complicated and highly varying signals that depend on the anatomical location and physiological properties of the muscles. EMG signals acquired from the muscles require advanced methods for detection, decomposition and processing. This paper proposes a novel Graphical User Interface (GUI), siGnum, developed in MATLAB, that applies efficient and effective techniques to the processing of raw EMG signals and decomposes them in a simpler manner. It can be used independently of MATLAB software by employing a deploy tool. This should enable researchers to gain a good understanding of EMG signals and their analysis procedures, which can be utilized for more powerful, flexible and efficient applications in the near future.
Shuttle payload S-band communications study
NASA Technical Reports Server (NTRS)
Springett, J. C.
1979-01-01
The work to identify, evaluate, and make recommendations concerning the functions and interfaces of those orbiter avionic subsystems which are dedicated to, or play some part in, handling communication signals (telemetry and command) to/from payloads (spacecraft) that will be carried into orbit by the shuttle is reported. Some principal directions of the research are: (1) analysis of the ability of the various avionic equipment to interface with and appropriately process payload signals; (2) development of criteria which will foster equipment compatibility with diverse types of payloads and signals; (3) study of operational procedures, especially those affecting signal acquisition; (4) trade-off analysis for end-to-end data link performance optimization; (5) identification of possible hardware design weakness which might degrade signal processing performance.
NASA Astrophysics Data System (ADS)
Clenet, A.; Ravera, L.; Bertrand, B.; den Hartog, R.; Jackson, B.; van Leeuwen, B.-J.; van Loon, D.; Parot, Y.; Pointecouteau, E.; Sournac, A.
2014-11-01
IRAP is developing the readout electronics of SPICA-SAFARI's TES bolometer arrays. Based on the frequency-domain multiplexing technique, the readout electronics provides the AC signals to voltage-bias the detectors, demodulates the data, and computes a feedback to linearize the detection chain. The feedback is computed with a specific technique, so-called baseband feedback (BBFB), which ensures that the loop is stable even with long propagation and processing delays (i.e. several μs) and with fast signals (i.e. frequency carriers of the order of 5 MHz). To optimize the power consumption we took advantage of the reduced science signal bandwidth to decouple the signal sampling frequency and the data processing rate. This technique allowed a reduction of the power consumption of the circuit by a factor of 10. Beyond the firmware architecture, the optimization of the instrument concerns the characterization routines and the definition of the optimal parameters. Indeed, to operate a TES array one has to properly define about 21,000 parameters. We defined a set of procedures to automatically characterize these parameters and find the optimal settings.
Software system for data management and distributed processing of multichannel biomedical signals.
Franaszczuk, P J; Jouny, C C
2004-01-01
The presented software is designed for efficient utilization of a cluster of PC computers for signal analysis of multichannel physiological data. The system consists of three main components: 1) a library of input and output procedures, 2) a database storing additional information about location in a storage system, and 3) a user interface for selecting data for analysis, choosing programs for analysis, and distributing computation and output data over cluster nodes. The system allows for processing multichannel time-series data in multiple binary formats. Descriptions of the data format, channels, and time of recording are included in separate text files. Definition and selection of multiple channel montages is possible. Epochs for analysis can be selected both manually and automatically. Implementation of new signal processing procedures is possible with minimal programming overhead for the input/output processing and user interface. The number of cluster nodes used for computations and the amount of storage can be changed with no major modification to the software. Current implementations include time-frequency analysis of multiday, multichannel recordings of intracranial EEG of epileptic patients as well as evoked-response analyses of repeated cognitive tasks.
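The distribution idea, independent per-channel analyses farmed out to workers and collected centrally, can be sketched with a worker pool. This is a schematic stand-in (threads instead of cluster nodes, and a toy band-energy analysis in place of a user-selected program), not the paper's software:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def analyze_channel(ch_index, data):
    """Illustrative per-channel analysis (here just total signal energy);
    in the described system each node runs a user-selected program."""
    return ch_index, float(np.sum(data ** 2))

def run_distributed(channels, n_workers=4):
    """Farm independent per-channel analyses out to a pool of workers
    (cluster nodes in the original; local threads here) and collect
    the results keyed by channel index."""
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        futures = [ex.submit(analyze_channel, i, ch)
                   for i, ch in enumerate(channels)]
        return dict(f.result() for f in futures)
```

Because the analyses are independent, the worker count can change with no modification to the analysis code, mirroring the paper's claim about resizing the cluster.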
Early diagnostic of concurrent gear degradation processes progressing under time-varying loads
NASA Astrophysics Data System (ADS)
Guilbault, Raynald; Lalonde, Sébastien
2016-08-01
This study develops a gear diagnostic procedure for the detection of multiple, concurrent degradation processes evolving under time-varying loads. Instead of a conventional comparison between a descriptor and an alarm level, this procedure bases its detection strategy on descriptor evolution tracking; a lasting descriptor increase denotes the presence of ongoing degradation mechanisms. The procedure works from time-domain residual signals prepared in the frequency domain, and accepts any gear condition as the reference signature. To extract the load fluctuation repercussions, the procedure integrates a scaling factor. The investigation first examines a simplification assuming a linear connection between the load and the dynamic response amplitudes. However, while generally valuable, the precision losses associated with large load variations may mask the contribution of tiny flaws. To better reflect the real non-linear relation, the paper reformulates the scaling factor; a power law with an exponent value of 0.85 produces noticeable improvements in the load-effect extraction. To reduce the consequences of remaining oscillations, the procedure also includes a filtering phase. During the validation program, a synthetic wear progression assuming a commensurate relation between wear depth and friction assured controlled evolution of the surface degradation influence, whereas the fillet crack growth remained entirely determined by the operating conditions. Globally, the tested conditions attest that the final strategy provides accurate monitoring of coexisting isolated damages and general surface deterioration, and that its tracking-detection capacities are unaffected by severe time variations of external loads. The procedure promptly detects the presence of evolving abnormal phenomena. The tests show that the descriptor curve shapes virtually describe the constant wear progression superimposed on the crack length evolution. At the tooth fracture, the mean values of the residual signal evince strong perturbations, while after this episode, the monitoring curves continue signaling the ongoing wear process.
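The power-law load scaling and the tracking-based detection can be sketched as follows. The exponent 0.85 comes from the abstract, while the scaling interface, the smoothing window, and the strictly increasing trend test are illustrative assumptions:

```python
import numpy as np

def scale_residual(residual, load, ref_load, alpha=0.85):
    """Scale a time-domain residual signal to a reference load level.

    Sketch of the paper's power-law scaling factor (exponent 0.85)
    linking load to dynamic response amplitude; alpha=1.0 recovers
    the simpler linear assumption examined first.
    """
    return residual * (ref_load / load) ** alpha

def descriptor_trend(descriptor, window=5):
    """Detection by descriptor-evolution tracking rather than a fixed
    alarm level: smooth the descriptor history and report True when it
    keeps increasing, i.e. a lasting increase suggests an ongoing
    degradation mechanism. Window and test are illustrative."""
    d = np.convolve(descriptor, np.ones(window) / window, mode="valid")
    return bool(np.all(np.diff(d) > 0))
```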
A high-efficiency real-time digital signal averager for time-of-flight mass spectrometry.
Wang, Yinan; Xu, Hui; Li, Qingjiang; Li, Nan; Huang, Zhengxu; Zhou, Zhen; Liu, Husheng; Sun, Zhaolin; Xu, Xin; Yu, Hongqi; Liu, Haijun; Li, David D-U; Wang, Xi; Dong, Xiuzhen; Gao, Wei
2013-05-30
Analog-to-digital converter (ADC)-based acquisition systems are widely applied in time-of-flight mass spectrometers (TOFMS) due to their ability to record the signal intensity of all ions within the same pulse. However, the acquisition system raises the requirement for data throughput, along with increasing the conversion rate and resolution of the ADC. It is therefore of considerable interest to develop a high-performance real-time acquisition system, which can relieve the limitation of data throughput. We present in this work a high-efficiency real-time digital signal averager, consisting of a signal conditioner, a data conversion module and a signal processing module. Two optimization strategies are implemented using field programmable gate arrays (FPGAs) to enhance the efficiency of the real-time processing. A pipeline procedure is used to reduce the time consumption of the accumulation strategy. To realize continuous data transfer, a high-efficiency transmission strategy is developed, based on a ping-pong procedure. The digital signal averager features good responsiveness, analog bandwidth and dynamic performance. The optimal effective number of bits reaches 6.7 bits. For a 32 µs record length, the averager can realize 100% efficiency with an extraction frequency below 31.23 kHz by modifying the number of accumulation steps. In unit time, the averager yields superior signal-to-noise ratio (SNR) compared with data accumulation in a computer. The digital signal averager is combined with a vacuum ultraviolet single-photon ionization time-of-flight mass spectrometer (VUV-SPI-TOFMS). The efficiency of the real-time processing is tested by analyzing the volatile organic compounds (VOCs) from ordinary printed materials. In these experiments, 22 kinds of compounds are detected, and the dynamic range exceeds 3 orders of magnitude. Copyright © 2013 John Wiley & Sons, Ltd.
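Functionally, the accumulation that the FPGA pipeline performs is a running sum of repeated transients; a software model makes the SNR benefit easy to see (averaging N shots with uncorrelated noise improves SNR by roughly sqrt(N)). This is only a behavioral sketch, not the averager's implementation:

```python
import numpy as np

def average_spectra(shots):
    """Accumulate repeated TOF transients and return their mean.

    In the hardware described, this accumulation is pipelined in an
    FPGA so it keeps up with the ADC stream; here it is a plain loop,
    purely as a functional model of the averaging step.
    """
    acc = np.zeros_like(shots[0], dtype=np.float64)
    for s in shots:          # one transient per extraction pulse
        acc += s             # running sum, as the accumulator does
    return acc / len(shots)
```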
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, C.; et al.
We describe the concept and procedure of drifted-charge extraction developed in the MicroBooNE experiment, a single-phase liquid argon time projection chamber (LArTPC). This technique converts the raw digitized TPC waveform to the number of ionization electrons passing through a wire plane at a given time. A robust recovery of the number of ionization electrons from both induction and collection anode wire planes will augment the 3D reconstruction, and is particularly important for tomographic reconstruction algorithms. A number of building blocks of the overall procedure are described. The performance of the signal processing is quantitatively evaluated by comparing the extracted charge with the true charge through a detailed TPC detector simulation taking into account position-dependent induced current inside a single wire region and across multiple wires. Some areas for further improvement of the performance of the charge extraction procedure are also discussed.
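At its core, converting a digitized waveform into drifting ionization charge is a deconvolution of the combined field and electronics response. The MicroBooNE procedure is considerably more involved (position-dependent responses, two-dimensional filtering across wires); the following one-dimensional, regularized frequency-domain sketch shows only the basic operation, with the toy response and noise floor as illustrative assumptions:

```python
import numpy as np

def deconvolve_waveform(wave, response, noise_floor=1e-3):
    """Recover an ionization-charge time series from a digitized waveform
    by frequency-domain deconvolution of a known response.

    The small `noise_floor` regularizes bins where the response has
    little power, a crude stand-in for the Wiener-style filtering used
    in practice.
    """
    n = len(wave)
    W = np.fft.rfft(wave)
    R = np.fft.rfft(response, n)
    H = np.conj(R) / (np.abs(R) ** 2 + noise_floor)
    return np.fft.irfft(W * H, n)
```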
Signal processing and calibration procedures for in situ diode-laser absorption spectroscopy.
Werle, P W; Mazzinghi, P; D'Amato, F; De Rosa, M; Maurer, K; Slemr, F
2004-07-01
Gas analyzers based on tunable diode-laser spectroscopy (TDLS) provide highly sensitive, fast-response and highly specific in situ measurements of several atmospheric trace gases simultaneously. Under optimum conditions, even shot-noise-limited performance can be obtained. For field applications outside the laboratory, practical limitations become important. At ambient mixing ratios below a few parts per billion, spectrometers become increasingly sensitive to noise, interference, drift effects and background changes associated with low-level signals. It is the purpose of this review to address some of the problems encountered at these low levels and to describe a signal processing strategy for trace gas monitoring and a concept for in situ system calibration applicable to tunable diode-laser spectroscopy. To meet the requirement of quality assurance for field measurements and monitoring applications, procedures to check linearity according to International Organization for Standardization (ISO) regulations are described, and some measurements of calibration functions are presented and discussed.
A Procedural Electroencephalogram Simulator for Evaluation of Anesthesia Monitors.
Petersen, Christian Leth; Görges, Matthias; Massey, Roslyn; Dumont, Guy Albert; Ansermino, J Mark
2016-11-01
Recent research and advances in the automation of anesthesia are driving the need to better understand electroencephalogram (EEG)-based anesthesia end points and to test the performance of anesthesia monitors. This effort is currently limited by the need to collect raw EEG data directly from patients. A procedural method to synthesize EEG signals was implemented in a mobile software application. The application is capable of sending the simulated signal to an anesthesia depth-of-hypnosis monitor. Systematic sweeps of the simulator generate functional monitor response profiles, reminiscent of how network analyzers are used to test electronic components. Three commercial anesthesia monitors (Entropy, NeuroSENSE, and BIS) were compared with this new technology, and significant response and feature variations between the monitor models were observed; these include reproducible, nonmonotonic apparent multistate behavior and significant hysteresis at light levels of anesthesia. Anesthesia monitor response to a procedural simulator can reveal significant differences in internal signal processing algorithms. The ability to synthesize EEG signals at different anesthetic depths potentially provides a new method for systematically testing EEG-based monitors and automated anesthesia systems with all sensor hardware fully operational before human trials.
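A procedural EEG generator can be approximated by mixing band-limited components whose relative power shifts with the commanded depth. This toy sketch is not the application's algorithm; the two component frequencies, the noise level, and the linear mixing rule are all assumptions made only to illustrate the idea of sweeping a synthetic input through a monitor:

```python
import numpy as np

def synth_eeg(depth, n=2560, fs=256, seed=0):
    """Synthesize a rough EEG-like trace for a given 'depth' in [0, 1].

    Power is shifted from fast (beta-band, 20 Hz) toward slow
    (delta-band, 2 Hz) activity as depth increases, mimicking the
    slowing of the EEG with deepening hypnosis.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(n) / fs
    slow = np.sin(2 * np.pi * 2.0 * t)    # delta-band component
    fast = np.sin(2 * np.pi * 20.0 * t)   # beta-band component
    noise = rng.standard_normal(n) * 0.2  # broadband background
    return depth * slow + (1.0 - depth) * fast + noise
```

Sweeping `depth` from 0 to 1 then yields a family of inputs against which a monitor's output can be profiled.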
NASA Astrophysics Data System (ADS)
Gregory, R. L.
1980-07-01
Perceptions may be compared with hypotheses in science. The methods of acquiring scientific knowledge provide a working paradigm for investigating processes of perception. Much as the information channels of instruments, such as radio telescopes, transmit signals which are processed according to various assumptions to give useful data, so neural signals are processed to give data for perception. To understand perception, the signal codes and the stored knowledge or assumptions used for deriving perceptual hypotheses must be discovered. Systematic perceptual errors are important clues for appreciating signal channel limitations, and for discovering hypothesis-generating procedures. Although this distinction between `physiological' and `cognitive' aspects of perception may be logically clear, it is in practice surprisingly difficult to establish which are responsible even for clearly established phenomena such as the classical distortion illusions. Experimental results are presented, aimed at distinguishing between and discovering what happens when there is mismatch with the neural signal channel, and when neural signals are processed inappropriately for the current situation. This leads us to make some distinctions between perceptual and scientific hypotheses, which raise in a new form the problem: What are `objects'?
eCTG: an automatic procedure to extract digital cardiotocographic signals from digital images.
Sbrollini, Agnese; Agostinelli, Angela; Marcantoni, Ilaria; Morettini, Micaela; Burattini, Luca; Di Nardo, Francesco; Fioretti, Sandro; Burattini, Laura
2018-03-01
Cardiotocography (CTG), consisting in the simultaneous recording of fetal heart rate (FHR) and maternal uterine contractions (UC), is a popular clinical test to assess fetal health status. Typically, CTG machines provide paper reports that are visually interpreted by clinicians. Consequently, visual CTG interpretation depends on the clinician's experience and has poor reproducibility. The lack of databases containing digital CTG signals has limited the number and importance of retrospective studies aimed at setting up procedures for automatic CTG analysis that could counter the subjectivity of visual CTG interpretation. In order to help overcome this problem, this study proposes an electronic procedure, termed eCTG, to extract digital CTG signals from digital CTG images, possibly obtainable by scanning paper CTG reports. It includes four main steps: pre-processing, Otsu's global thresholding, signal extraction and signal calibration. Its validation was performed by means of the "CTU-UHB Intrapartum Cardiotocography Database" by Physionet, which contains digital signals of 552 CTG recordings. Using MATLAB, each signal was plotted and saved as a digital image that was then submitted to eCTG. Digital CTG signals extracted by eCTG were eventually compared to the corresponding signals directly available in the database. Comparison occurred in terms of signal similarity (evaluated by the correlation coefficient ρ and the mean signal error MSE) and clinical features (including FHR baseline and variability; number, amplitude and duration of tachycardia, bradycardia, acceleration and deceleration episodes; number of early, variable, late and prolonged decelerations; and UC number, amplitude, duration and period). The value of ρ between eCTG and reference signals was 0.85 (P < 10^-560) for FHR and 0.97 (P < 10^-560) for UC. On average, the MSE value was 0.00 for both FHR and UC. No CTG feature was found significantly different when measured in eCTG vs. reference signals. The eCTG procedure is a promising tool to accurately extract digital FHR and UC signals from digital CTG images. Copyright © 2018 Elsevier B.V. All rights reserved.
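The second step of the pipeline, Otsu's global thresholding, picks the gray level that separates a dark signal trace from a light paper background by maximizing between-class variance. A self-contained histogram-based version (equivalent in spirit to what common image libraries provide, not eCTG's own code) looks like this:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's global threshold for a 2-D uint8 image.

    Returns the gray level t maximizing the between-class variance;
    pixels <= t form one class (e.g. the scanned CTG trace) and
    pixels > t the other (background).
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * np.arange(256))        # class-0 cumulative mean
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))
```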
A vertical-energy-thresholding procedure for data reduction with multiple complex curves.
Jung, Uk; Jeong, Myong K; Lu, Jye-Chyi
2006-10-01
Due to the development of sensing and computer technology, measurements of many process variables are available in current manufacturing processes. It is very challenging, however, to process a large amount of information in a limited time in order to make decisions about the health of the processes and products. This paper develops a "preprocessing" procedure for multiple sets of complicated functional data in order to reduce the data size for supporting timely decision analyses. The data type studied has been used for fault detection, root-cause analysis, and quality improvement in such engineering applications as automobile and semiconductor manufacturing and nanomachining processes. The proposed vertical-energy-thresholding (VET) procedure balances the reconstruction error against data-reduction efficiency so that it is effective in capturing key patterns in the multiple data signals. The selected wavelet coefficients are treated as the "reduced-size" data in subsequent analyses for decision making. This enhances the ability of the existing statistical and machine-learning procedures to handle high-dimensional functional data. A few real-life examples demonstrate the effectiveness of our proposed procedure compared to several ad hoc techniques extended from single-curve-based data modeling and denoising procedures.
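The vertical-energy idea, ranking coefficient positions by their energy summed "vertically" across all curves and keeping just enough positions to capture a target energy fraction, can be sketched as follows. The interface and the 95% default are illustrative, not the paper's exact formulation, and the input is assumed to be wavelet coefficients already computed for each curve:

```python
import numpy as np

def vet_select(coeff_matrix, energy_frac=0.95):
    """Vertical-energy thresholding sketch.

    coeff_matrix: wavelet coefficients of multiple curves, one curve
    per row. Returns a boolean keep-mask over coefficient positions
    whose summed (vertical) energy reaches `energy_frac` of the total.
    The fraction trades reconstruction error against data-reduction
    efficiency, as in the paper.
    """
    energy = (coeff_matrix ** 2).sum(axis=0)      # energy per position
    order = np.argsort(energy)[::-1]              # largest first
    cum = np.cumsum(energy[order]) / energy.sum()
    n_keep = int(np.searchsorted(cum, energy_frac) + 1)
    keep = np.zeros(coeff_matrix.shape[1], dtype=bool)
    keep[order[:n_keep]] = True
    return keep
```

The retained coefficients then serve as the "reduced-size" data for downstream fault-detection or classification procedures.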
Role of TIRAP in Myelodysplastic Syndromes
2015-04-01
process of examining other publicly available sequencing data on MDS or myeloproliferative neoplasm patients for innate immune signaling gene mutations...evidence showing the presence of somatic mutations in the innate immune signaling genes in myeloid neoplasms, there is evidence of dysregulation of the...this procedure. To characterize TIRAP-induced myeloproliferation in IFNγ-KO mice, to determine the transplantability of the myeloproliferation, and to
One process is not enough! A speed-accuracy tradeoff study of recognition memory.
Boldini, Angela; Russo, Riccardo; Avons, S E
2004-04-01
Speed-accuracy tradeoff (SAT) methods have been used to contrast single- and dual-process accounts of recognition memory. In these procedures, subjects are presented with individual test items and are required to make recognition decisions under various time constraints. In this experiment, we presented word lists under incidental learning conditions, varying the modality of presentation and level of processing. At test, we manipulated the interval between each visually presented test item and a response signal, thus controlling the amount of time available to retrieve target information. Study-test modality match had a beneficial effect on recognition accuracy at short response-signal delays (≤ 300 msec). Conversely, recognition accuracy benefited more from deep than from shallow processing at study only at relatively long response-signal delays (≥ 300 msec). The results are congruent with views suggesting that both fast familiarity and slower recollection processes contribute to recognition memory.
Development of an automated ultrasonic testing system
NASA Astrophysics Data System (ADS)
Shuxiang, Jiao; Wong, Brian Stephen
2005-04-01
Non-Destructive Testing is necessary in areas where defects in structures emerge over time due to wear and tear, and structural integrity must be maintained to preserve usability. However, manual testing has many limitations: high training cost, a long training procedure and, worse, inconsistent test results. A prime objective of this project is to develop an automatic Non-Destructive Testing system for the shaft of the wheel axle of a railway carriage. Various methods, such as neural networks, pattern recognition methods and knowledge-based systems, are used for the artificial intelligence problem. In this paper, a statistical pattern recognition approach, the classification tree, is applied. Before feature selection, a thorough study of the ultrasonic signals produced was carried out. Based on this analysis, three signal processing methods were developed to enhance the ultrasonic signals: cross-correlation, zero-phase filtering and averaging. The target of this step is to reduce the noise and make the signal character more distinguishable. Four features are selected: (1) the autoregressive model coefficients, (2) standard deviation, (3) Pearson correlation, and (4) dispersion uniformity degree. A classification tree is then created and applied to recognize the peak positions and amplitudes. A search for local maxima is carried out before feature computation; this reduces computation time considerably in real-time testing. Based on this algorithm, a software package called SOFRA was developed to recognize the peaks, calibrate automatically and test a simulated shaft automatically. Both the automatic calibration procedure and the automatic shaft-testing procedure are developed.
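Two of the three enhancement steps, averaging and zero-phase filtering, can be sketched with standard tools (scipy's `filtfilt` runs the filter forward and backward, so echo positions are not shifted, which matters when peak locations are used as features). The sampling rate and band edges below are illustrative, not the project's values, and cross-correlation is left to the caller:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def enhance_ascan(shots, fs=10e6, band=(0.5e6, 4e6)):
    """Enhance repeated ultrasonic A-scans before feature extraction.

    shots: list/array of repeated A-scans of equal length.
    Step 1: average the shots to suppress uncorrelated noise.
    Step 2: zero-phase band-pass filtering (filtfilt), which leaves
    echo peak positions unshifted.
    """
    avg = np.mean(shots, axis=0)                       # averaging
    b, a = butter(4, [band[0], band[1]], btype="band", fs=fs)
    return filtfilt(b, a, avg)                         # zero-phase filter
```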
NASA Technical Reports Server (NTRS)
Bahr, Christopher J.; Brooks, Thomas F.; Humphreys, William M.; Spalt, Taylor B.; Stead, Daniel J.
2014-01-01
An advanced vehicle concept, the HWB N2A-EXTE aircraft design, was tested in NASA Langley's 14- by 22-Foot Subsonic Wind Tunnel to study its acoustic characteristics for various propulsion system installation and airframe configurations. A significant upgrade to existing data processing systems was implemented, with a focus on portability and a reduction in turnaround time. These requirements were met by updating codes originally written for a cluster environment and transferring them to a local workstation while enabling GPU computing. Post-test, additional processing of the time series was required to remove transient hydrodynamic gusts from some of the microphone time series. A novel automated procedure was developed to analyze and reject contaminated blocks of data, under the assumption that the desired acoustic signal of interest was a band-limited stationary random process, and of lower variance than the hydrodynamic contamination. The procedure is shown to successfully identify and remove contaminated blocks of data and retain the desired acoustic signal. Additional corrections to the data, mainly background subtraction, shear layer refraction calculations, atmospheric attenuation and microphone directivity corrections, were all necessary for initial analysis and noise assessments. These were implemented for the post-processing of spectral data, and are shown to behave as expected.
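A minimal version of such automated rejection compares each block's variance against a robust baseline, following the stated assumption that the acoustic signal is stationary and of lower variance than the gust contamination. The block length and rejection factor below are illustrative, not the values used in the study:

```python
import numpy as np

def reject_contaminated(x, block=1024, z=3.0):
    """Split a time series into blocks and drop high-variance blocks.

    The baseline is the median of the block variances, which is robust
    to a minority of contaminated blocks; a block is rejected when its
    variance exceeds z times that baseline.
    """
    n = len(x) // block
    blocks = x[: n * block].reshape(n, block)
    v = blocks.var(axis=1)
    keep = v < z * np.median(v)
    return blocks[keep].ravel(), keep
```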
Meck, W H
1984-01-01
Both the presentation of unbalanced stimulus probabilities and the insertion of a predictive cue prior to the signal on each trial apparently induce a strong bias to use a particular stimulus modality in order to select a temporal criterion and response rule. This attentional bias toward one modality is apparently independent of the modality of the stimulus being timed and is strongly influenced by stimulus probabilities or prior warning cues. These techniques may be useful to control trial-by-trial sequential effects that influence a subject's perceptual and response biases when signals from more than one modality are used in duration discrimination tasks. Cross-procedural generality of the effects of attentional bias was observed. An asymmetrical modality effect on the latency to begin timing was observed with both the temporal bisection and the peak procedure. The latency to begin timing light signals, but not the latency to begin timing sound signals, was increased when the signal modality was unexpected. This asymmetrical effect was explained with the assumption that sound signals close the mode switch automatically, but that light signals close the mode switch only if attention is directed to the light. The time required to switch attention is reflected in a reduction of the number of pulses from the pacemaker that enter the accumulator. One positive aspect of this work is the demonstration that procedures similar to those used to study human cognition can be used with animal subjects with similar results. Perhaps these similarities will stimulate animal research on the physiological basis of various cognitive capacities. Animal subjects would be preferred for such physiological experimentation if it were established that they possessed some of the cognitive processes described by investigators of human information processing.
One of the negative aspects of this work is that only one combination of modalities was used and variables such as stimulus intensity, stimulus probability, and range of signal durations have not been adequately investigated at present. Future work might test additional combinations of modalities and vary stimulus intensity and stimulus probability within a signal detection theory (SDT) framework to determine the effects of these variables on attentional bias.
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2015-01-01
A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identify a minimum of a pattern-matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function fitted to the measured cross-power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data were obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude-squared coherence data. The procedure also provides an estimate of the cross-spectrum phase offset.
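The core of the technique, reading the delay off the slope of the cross-power spectrum phase, can be shown in a few lines. The full adaptive gradient search and the standard-error term of the cost function are omitted here, and the magnitude weighting of the fit is an assumption:

```python
import numpy as np

def delay_from_phase_slope(x, y, fs):
    """Estimate the delay between two coherent signals from the slope of
    the cross-power spectrum phase.

    For y lagging x by tau, the cross-spectrum X * conj(Y) has phase
    +2*pi*f*tau, so a linear fit of unwrapped phase versus frequency
    yields tau directly.
    """
    X = np.fft.rfft(x)
    Y = np.fft.rfft(y)
    cross = X * np.conj(Y)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    phase = np.unwrap(np.angle(cross))
    # weight by cross-spectrum magnitude so low-power bins matter less
    slope = np.polyfit(f[1:], phase[1:], 1, w=np.abs(cross)[1:])[0]
    return slope / (2 * np.pi)   # tau in seconds; positive if y lags x
```

Aligning the signals by this tau before computing coherence is what removes the misalignment bias mentioned in the abstract.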
Statistical Signal Models and Algorithms for Image Analysis
1984-10-25
In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction.
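A minimal instance of such a model is a quarter-plane two-dimensional autoregressive predictor fitted by least squares; the residual (prediction-error) field it leaves behind is the kind of output to which inference procedures such as texture classification or anomaly detection apply. The three-neighbor support below is an illustrative choice, not the report's specific model:

```python
import numpy as np

def fit_2d_ar(img):
    """Fit a quarter-plane 2-D autoregressive model by least squares:
    x[i,j] ~ a*x[i-1,j] + b*x[i,j-1] + c*x[i-1,j-1].

    Returns the coefficients (a, b, c) and the residual field, i.e.
    the linear prediction error at each interior pixel.
    """
    y = img[1:, 1:].ravel()
    A = np.column_stack([
        img[:-1, 1:].ravel(),   # x[i-1, j]
        img[1:, :-1].ravel(),   # x[i, j-1]
        img[:-1, :-1].ravel(),  # x[i-1, j-1]
    ])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return coef, resid.reshape(img.shape[0] - 1, img.shape[1] - 1)
```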
NASA Technical Reports Server (NTRS)
Malila, W. A.; Crane, R. B.; Richardson, W.
1973-01-01
Recent improvements in remote sensor technology carry implications for data processing. Multispectral line scanners now exist that can collect data simultaneously and in registration in multiple channels at both reflective and thermal (emissive) wavelengths. Progress in dealing with two resultant recognition-processing problems is discussed: (1) more channels mean higher processing costs; to combat these costs, a new and faster procedure for selecting subsets of channels has been developed; (2) differences between thermal and reflective characteristics influence recognition processing; to illustrate the magnitude of these differences, some explanatory calculations are presented. Also introduced is a different way to process multispectral scanner data, namely, radiation balance mapping and related procedures. Techniques and potentials are discussed and examples presented.
Liao, Lun-De; Wang, I-Jan; Chen, Sheng-Fu; Chang, Jyh-Yeong; Lin, Chin-Teng
2011-01-01
In the present study, novel dry-contact sensors for measuring electro-encephalography (EEG) signals without any skin preparation are designed, fabricated by an injection molding manufacturing process and experimentally validated. Conventional wet electrodes are commonly used to measure EEG signals; they provide excellent EEG signals subject to proper skin preparation and conductive gel application. However, a series of skin preparation procedures for applying the wet electrodes is always required and is often troublesome for users. To overcome these drawbacks, novel dry-contact EEG sensors were proposed for potential operation in the presence or absence of hair and without any skin preparation or conductive gel usage. The dry EEG sensors were designed to contact the scalp surface with 17 spring contact probes. Each probe was designed to include a probe head, plunger, spring, and barrel. The 17 probes were inserted into a flexible substrate using a one-time forming process via an established injection molding procedure. With these 17 spring contact probes, the flexible substrate allows for high geometric conformity between the sensor and the irregular scalp surface to maintain low skin-sensor interface impedance. Additionally, the flexible substrate initiates a sensor buffer effect, eliminating pain when force is applied. The proposed dry EEG sensor was reliable in measuring EEG signals without any skin preparation or conductive gel usage, as compared with the conventional wet electrodes. PMID:22163929
Time-Frequency Signal Analysis and Synthesis - The Choice of a Method and Its Application
NASA Astrophysics Data System (ADS)
Boashash, Boualem
1988-02-01
In this paper, the problem of choosing a method for time-frequency signal analysis is discussed. It is shown that a natural approach leads to the introduction of the concepts of the analytic signal and instantaneous frequency. The Wigner-Ville Distribution (WVD) is a method of analysis based upon these concepts and it is shown that an accurate Time-Frequency representation of a signal can be obtained by using the WVD for the analysis of a class of signals referred to as "asymptotic". For this class of signals, the instantaneous frequency describes an important physical parameter characteristic of the process under investigation. The WVD procedure for signal analysis and synthesis is outlined and its properties are reviewed for deterministic and random signals.
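A minimal discrete pseudo-WVD can show how the distribution's ridge tracks the instantaneous frequency of an asymptotic signal such as a linear chirp. The transform convention, window half-length, and chirp rate below are my own illustrative choices, not Boashash's formulation.

```python
import cmath
import math

def pwvd_slice(x, n, half):
    """One time-slice of a discrete pseudo-Wigner-Ville distribution:
    DFT over lag m of the kernel x[n+m] * conj(x[n-m])."""
    L = 2 * half + 1
    kernel = [x[n + m] * x[n - m].conjugate() for m in range(-half, half + 1)]
    return [abs(sum(kernel[m + half] * cmath.exp(-2j * math.pi * k * m / L)
                    for m in range(-half, half + 1)))
            for k in range(L)]

# analytic linear chirp: phase a*t^2/2, instantaneous angular frequency a*t
a = 0.05
x = [cmath.exp(1j * a * t * t / 2) for t in range(64)]

half = 16
n = 24                      # time index to inspect (needs half <= n < len(x) - half)
spectrum = pwvd_slice(x, n, half)
peak = max(range(len(spectrum)), key=spectrum.__getitem__)
L = 2 * half + 1
expected = round(2 * a * n * L / (2 * math.pi)) % L
print(peak, expected)       # the WVD ridge sits at the instantaneous frequency
```

For the chirp, the kernel reduces to a pure tone at twice the instantaneous frequency, which is why the ridge localizes so sharply; this is the property the abstract exploits.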
Signorini, Maria G; Fanelli, Andrea; Magenes, Giovanni
2014-01-01
Monitoring procedures are the basis to evaluate the clinical state of patients and to assess changes in their conditions, thus providing necessary interventions in time. Both of these objectives can be achieved by integrating technological development with methodological tools, thus allowing accurate classification and extraction of useful diagnostic information. The paper is focused on monitoring procedures applied to fetal heart rate variability (FHRV) signals, collected during pregnancy, in order to assess fetal well-being. The use of linear time and frequency techniques as well as the computation of nonlinear indices can contribute to enhancing the diagnostic power and reliability of fetal monitoring. The paper shows how advanced signal processing approaches can contribute to developing new diagnostic and classification indices. Their usefulness is evaluated by comparing two selected populations: normal fetuses and intrauterine growth restricted (IUGR) fetuses. Results show that the computation of different indices on FHRV signals, both linear and nonlinear, gives helpful indications to describe pathophysiological mechanisms involved in the cardiovascular and neural system controlling the fetal heart. As a further contribution, the paper briefly describes how the introduction of wearable systems for fetal ECG recording could provide new technological solutions improving the quality and usability of prenatal monitoring.
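Two of the standard linear time-domain indices used in this kind of heart rate variability analysis are easy to state exactly; the toy RR series below is invented for illustration and the paper's full index set (including its nonlinear measures) goes well beyond these.

```python
import math

def sdnn(rr):
    """Standard deviation of RR intervals (overall variability), ms."""
    m = sum(rr) / len(rr)
    return math.sqrt(sum((x - m) ** 2 for x in rr) / len(rr))

def rmssd(rr):
    """Root mean square of successive differences (short-term variability), ms."""
    d = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(x * x for x in d) / len(d))

rr = [800, 810, 790, 805, 795]   # toy RR interval series in ms
print(round(sdnn(rr), 2), round(rmssd(rr), 2))  # 7.07 14.36
```

SDNN reflects total variability while RMSSD emphasizes beat-to-beat (vagally mediated) changes, which is why monitoring pipelines report both.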
Modeling aging effects on two-choice tasks: response signal and response time data.
Ratcliff, Roger
2008-12-01
In the response signal paradigm, a test stimulus is presented, and then at one of a number of experimenter-determined times, a signal to respond is presented. Response signal, standard response time (RT), and accuracy data were collected from 19 college-age and 19 60- to 75-year-old participants in a numerosity discrimination task. The data were fit with 2 versions of the diffusion model. Response signal data were modeled by assuming a mixture of processes, those that have terminated before the signal and those that have not terminated; in the latter case, decisions are based on either partial information or guessing. The effects of aging on performance in the regular RT task were explained the same way in the models, with a 70- to 100-ms increase in the nondecision component of processing, more conservative decision criteria, and more variability across trials in drift and the nondecision component of processing, but little difference in drift rate (evidence). In the response signal task, the primary reason for a slower rise in the response signal functions for older participants was variability in the nondecision component of processing. Overall, the results were consistent with earlier fits of the diffusion model to the standard RT task for college-age participants and to the data from aging studies using this task in the standard RT procedure. Copyright (c) 2009 APA, all rights reserved.
Modeling Aging Effects on Two-Choice Tasks: Response Signal and Response Time Data
Ratcliff, Roger
2009-01-01
In the response signal paradigm, a test stimulus is presented, and then at one of a number of experimenter-determined times, a signal to respond is presented. Response signal, standard response time (RT), and accuracy data were collected from 19 college-age and 19 60- to 75-year-old participants in a numerosity discrimination task. The data were fit with 2 versions of the diffusion model. Response signal data were modeled by assuming a mixture of processes, those that have terminated before the signal and those that have not terminated; in the latter case, decisions are based on either partial information or guessing. The effects of aging on performance in the regular RT task were explained the same way in the models, with a 70- to 100-ms increase in the nondecision component of processing, more conservative decision criteria, and more variability across trials in drift and the nondecision component of processing, but little difference in drift rate (evidence). In the response signal task, the primary reason for a slower rise in the response signal functions for older participants was variability in the nondecision component of processing. Overall, the results were consistent with earlier fits of the diffusion model to the standard RT task for college-age participants and to the data from aging studies using this task in the standard RT procedure. PMID:19140659
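The response-signal mixture idea in these two records can be sketched with a crude discrete random walk. This is emphatically not Ratcliff's fitted diffusion model: the drift, boundary, step size, and trial counts are arbitrary assumptions, and the sketch only shows how terminated processes, partial information, and guessing mix at each signal lag.

```python
import random

def response_signal_accuracy(lag_steps, n_trials, drift=0.15, bound=3.0, seed=7):
    """Fraction correct when a respond-signal arrives after lag_steps:
    terminated walks answer from the boundary reached; unterminated walks
    answer from the sign of the partial evidence (guessing when it is zero)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        ev, hit = 0.0, None
        for _ in range(lag_steps):
            ev += drift + rng.gauss(0.0, 1.0)
            if ev >= bound:
                hit = True       # correct boundary reached
                break
            if ev <= -bound:
                hit = False      # error boundary reached
                break
        if hit is None:          # not terminated: partial information
            hit = ev > 0 if ev != 0 else rng.random() < 0.5
        correct += hit
    return correct / n_trials

short = response_signal_accuracy(lag_steps=5, n_trials=4000)
long_ = response_signal_accuracy(lag_steps=200, n_trials=4000)
print(short, long_)   # accuracy grows toward an asymptote as the lag lengthens
```

Slowing the nondecision stage or adding drift variability, as in the paper's aging fits, would flatten the rise of this function without changing its asymptote much.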
A silicon avalanche photodiode detector circuit for Nd:YAG laser scattering
NASA Astrophysics Data System (ADS)
Hsieh, C.-L.; Haskovec, J.; Carlstrom, T. N.; Deboo, J. C.; Greenfield, C. M.; Snider, R. T.; Trost, P.
1990-06-01
A silicon avalanche photodiode with an internal gain of about 50 to 100 is used in a temperature controlled environment to measure the Nd:YAG laser Thomson scattered spectrum in the wavelength range from 700 to 1150 nm. A charge sensitive preamplifier was developed for minimizing the noise contribution from the detector electronics. Signal levels as low as 20 photoelectrons (S/N = 1) can be detected. Measurements show that both the signal and the variance of the signal vary linearly with the input light level over the range of interest, indicating Poisson statistics. The signal is processed using a 100 ns delay line and a differential amplifier which subtracts the low frequency background light component. The background signal is amplified with a computer controlled variable gain amplifier and is used for an estimate of the measurement error, calibration, and Zeff measurements of the plasma. The signal processing was analyzed using a theoretical model to aid the system design and establish the procedure for data error analysis.
Silicon avalanche photodiode detector circuit for Nd:YAG laser scattering
NASA Astrophysics Data System (ADS)
Hsieh, C. L.; Haskovec, J.; Carlstrom, T. N.; DeBoo, J. C.; Greenfield, C. M.; Snider, R. T.; Trost, P.
1990-10-01
A silicon avalanche photodiode with an internal gain of about 50 to 100 is used in a temperature-controlled environment to measure the Nd:YAG laser Thomson scattered spectrum in the wavelength range from 700 to 1150 nm. A charge-sensitive preamplifier has been developed for minimizing the noise contribution from the detector electronics. Signal levels as low as 20 photoelectrons (S/N=1) can be detected. Measurements show that both the signal and the variance of the signal vary linearly with the input light level over the range of interest, indicating Poisson statistics. The signal is processed using a 100 ns delay line and a differential amplifier which subtracts the low-frequency background light component. The background signal is amplified with a computer-controlled variable gain amplifier and is used for an estimate of the measurement error, calibration, and Zeff measurements of the plasma. The signal processing has been analyzed using a theoretical model to aid the system design and establish the procedure for data error analysis.
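The delay-line-plus-differential-amplifier stage in these two records amounts to subtracting a delayed copy of the signal from itself: slow background light cancels while a fast scattering pulse survives. The sketch below is a digital caricature with invented numbers, not the analog circuit.

```python
def delay_line_subtract(x, delay):
    """Difference the signal with a delayed copy of itself, mimicking the
    delay line + differential amplifier: slow background cancels, fast
    pulses survive."""
    return [x[i] - x[i - delay] for i in range(delay, len(x))]

n, delay = 200, 10
background = [0.05 * i for i in range(n)]              # slowly drifting light level
pulse = [10.0 if 100 <= i < 103 else 0.0 for i in range(n)]
raw = [b + p for b, p in zip(background, pulse)]

out = delay_line_subtract(raw, delay)
# away from the pulse only the small constant ramp increment remains
print(max(abs(v) for v in out[:50]))   # about 0.5 (ramp slope * delay)
print(max(out))                        # about 10.5 (pulse rides on the ramp step)
```

Choosing the delay short relative to the background drift but long relative to the pulse is what makes the subtraction selective.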
NASA Astrophysics Data System (ADS)
Liu, W.; Du, M. H.; Chan, Francis H. Y.; Lam, F. K.; Luk, D. K.; Hu, Y.; Fung, Kan S. M.; Qiu, W.
1998-09-01
Recently there has been a considerable interest in the use of a somatosensory evoked potential (SEP) for monitoring the functional integrity of the spinal cord during surgery such as spinal scoliosis correction. This paper describes a monitoring system and signal processing algorithms, consisting of 50 Hz mains filtering and a wavelet signal analyzer. Our system allows fast detection of changes in SEP peak latency, amplitude and signal waveform, which are the main parameters of interest during intra-operative procedures.
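The abstract does not specify the 50 Hz filter design, so as one hedged possibility, a comb (period-difference) filter nulls the mains frequency and its harmonics by subtracting the sample one mains period earlier. The sampling rate and test tones below are invented.

```python
import math

def mains_comb_filter(x, fs, mains=50):
    """y[n] = x[n] - x[n - fs/mains]: nulls the mains frequency and its
    harmonics. Assumes fs is an integer multiple of the mains frequency."""
    d = fs // mains             # samples per mains period
    return [x[i] - x[i - d] for i in range(d, len(x))]

fs = 500
t = [i / fs for i in range(fs)]
sep = [math.sin(2 * math.pi * 3 * ti) for ti in t]           # slow "evoked" component
hum = [0.8 * math.sin(2 * math.pi * 50 * ti) for ti in t]    # mains interference
y = mains_comb_filter([s + h for s, h in zip(sep, hum)], fs)
print(max(abs(v) for v in y))  # hum is cancelled; the 3 Hz component remains
```

The comb also attenuates the slow component somewhat (by 2·sin(π·f·d/fs)), which is why clinical systems typically use sharper notch designs.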
A Method for Implementing Force-Limited Vibration Control
NASA Technical Reports Server (NTRS)
Worth, Daniel B.
1997-01-01
NASA/GSFC has implemented force-limited vibration control on a controller which can only accept one profile. The method uses a personal computer based digital signal processing board to convert force and/or moment signals into what appears to be an acceleration signal to the controller. This technique allows test centers with older controllers to use the latest force-limited control techniques for random vibration testing. The paper describes the method, hardware, and test procedures used. An example from a test performed at NASA/GSFC is used as a guide.
Energy-efficient hierarchical processing in the network of wireless intelligent sensors (WISE)
NASA Astrophysics Data System (ADS)
Raskovic, Dejan
Sensor network nodes have benefited from technological advances in the field of wireless communication, processing, and power sources. However, the processing power of microcontrollers is often not sufficient to perform sophisticated processing, while the power requirements of digital signal processing boards or handheld computers are usually too demanding for prolonged system use. We are matching the intrinsic hierarchical nature of many digital signal-processing applications with the natural hierarchy in distributed wireless networks, and building a hierarchical system of wireless intelligent sensors. Our goal is to build a system that will exploit the hierarchical organization to optimize the power consumption and extend battery life for the given time and memory constraints, while providing real-time processing of sensor signals. In addition, we are designing our system to be able to adapt to the current state of the environment, by dynamically changing the algorithm through procedure replacement. This dissertation presents an analysis of the hierarchical environment and methods for energy profiling used to evaluate different system design strategies, and to optimize time-effective and energy-efficient processing.
NASA Technical Reports Server (NTRS)
1975-01-01
Signal processing equipment specifications, operating and test procedures, and systems design and engineering are described. Five subdivisions of the overall circuitry are treated: (1) the spectrum analyzer; (2) the spectrum integrator; (3) the velocity discriminator; (4) the display interface; and (5) the formatter. They function in series: (1) first in analog form to provide frequency resolution, (2) then in digital form to achieve signal to noise improvement (video integration) and frequency discrimination, and (3) finally in analog form again for the purpose of real-time display of the significant velocity data. The formatter collects binary data from various points in the processor and provides a serial output for bi-phase recording. Block diagrams are used to illustrate the system.
Lim, Byoung-Gyun; Woo, Jea-Choon; Lee, Hee-Young; Kim, Young-Soo
2008-01-01
Synthetic wideband waveforms (SWW) combine a stepped frequency CW waveform and a chirp signal waveform to achieve high range resolution without requiring a large bandwidth or the consequent very high sampling rate. If an efficient algorithm like the range-Doppler algorithm (RDA) is used to acquire the SAR images for synthetic wideband signals, errors occur due to approximations, so the images may not show the best possible result. This paper proposes a modified subpulse SAR processing algorithm for synthetic wideband signals which is based on RDA. An experiment with an automobile-based SAR system showed that the proposed algorithm is quite accurate with a considerable improvement in resolution and quality of the obtained SAR image. PMID:27873984
Imaging of dynamic ion signaling during root gravitropism.
Monshausen, Gabriele B
2015-01-01
Gravitropic signaling is a complex process that requires the coordinated action of multiple cell types and tissues. Ca(2+) and pH signaling are key components of gravitropic signaling cascades and can serve as useful markers to dissect the molecular machinery mediating plant gravitropism. To monitor dynamic ion signaling, imaging approaches combining fluorescent ion sensors and confocal fluorescence microscopy are employed, which allow the visualization of pH and Ca(2+) changes at the level of entire tissues, while also providing high spatiotemporal resolution. Here, I describe procedures to prepare Arabidopsis seedlings for live cell imaging and to convert a microscope for vertical stage fluorescence microscopy. With this imaging system, ion signaling can be monitored during all phases of the root gravitropic response.
Recollection is a continuous process: implications for dual-process theories of recognition memory.
Mickes, Laura; Wais, Peter E; Wixted, John T
2009-04-01
Dual-process theory, which holds that recognition decisions can be based on recollection or familiarity, has long seemed incompatible with signal detection theory, which holds that recognition decisions are based on a singular, continuous memory-strength variable. Formal dual-process models typically regard familiarity as a continuous process (i.e., familiarity comes in degrees), but they construe recollection as a categorical process (i.e., recollection either occurs or does not occur). A continuous process is characterized by a graded relationship between confidence and accuracy, whereas a categorical process is characterized by a binary relationship such that high confidence is associated with high accuracy but all lower degrees of confidence are associated with chance accuracy. Using a source-memory procedure, we found that the relationship between confidence and source-recollection accuracy was graded. Because recollection, like familiarity, is a continuous process, dual-process theory is more compatible with signal detection theory than previously thought.
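The continuous-strength account in this abstract predicts a graded, not stepwise, confidence-accuracy relation, which a small signal detection simulation can reproduce. The d', confidence binning, and trial count below are arbitrary illustrative assumptions, not the paper's source-memory design.

```python
import random

def simulate_confidence_accuracy(n_trials=30000, d_prime=1.5, seed=3):
    """Continuous-strength (signal detection) account: confidence is distance
    from the criterion, so accuracy should rise smoothly with confidence."""
    rng = random.Random(seed)
    bins = [[0, 0], [0, 0], [0, 0]]          # low / medium / high confidence
    for _ in range(n_trials):
        source = rng.random() < 0.5          # True -> source A, False -> source B
        strength = rng.gauss(d_prime / 2 if source else -d_prime / 2, 1.0)
        decision = strength > 0
        conf = min(2, int(abs(strength)))    # 0, 1, 2 by distance from criterion
        bins[conf][0] += decision == source
        bins[conf][1] += 1
    return [c / t for c, t in bins]

low, med, high = simulate_confidence_accuracy()
print(low, med, high)   # accuracy climbs with confidence, with no sharp step
```

A strictly categorical recollection process would instead put chance accuracy in every bin below the highest, which is the pattern the paper's data rule out.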
Noise-assisted data processing with empirical mode decomposition in biomedical signals.
Karagiannis, Alexandros; Constantinou, Philip
2011-01-01
In this paper, a methodology is described in order to investigate the performance of empirical mode decomposition (EMD) in biomedical signals, and especially in the case of the electrocardiogram (ECG). Synthetic ECG signals corrupted with white Gaussian noise are employed and time series of various lengths are processed with EMD in order to extract the intrinsic mode functions (IMFs). A statistical significance test is implemented for the identification of IMFs with high-level noise components and their exclusion from denoising procedures. Simulation campaign results reveal that a decrease of processing time is accomplished with the introduction of a preprocessing stage, prior to the application of EMD in biomedical time series. Furthermore, the variation in the number of IMFs according to the type of the preprocessing stage is studied as a function of SNR and time-series length. The application of the methodology in MIT-BIH ECG records is also presented in order to verify the findings in real ECG signals.
Naseri, H; Homaeinezhad, M R; Pourkhajeh, H
2013-09-01
The major aim of this study is to describe a unified procedure for detecting noisy segments and spikes in transduced signals with a cyclic but non-stationary periodic nature. According to this procedure, the cycles of the signal (onset and offset locations) are detected. Then, the cycles are clustered into a finite number of groups based on appropriate geometrical- and frequency-based time series. Next, the median template of each time series of each cluster is calculated. Afterwards, a correlation-based technique is devised for making a comparison between a test cycle feature and the associated time series of each cluster. Finally, by applying a suitably chosen threshold to the calculated correlation values, a segment is prescribed to be either clean or noisy. As a key merit of this research, the procedure can provide decision support for accurately choosing between orthogonal-expansion-based filtering and removal of noisy segments. In this paper, the application of the proposed method is comprehensively described by applying it to phonocardiogram (PCG) signals to find noisy cycles. The database consists of 126 records from several patients of a domestic research station, acquired with a 3M Littmann® 3200 electronic stethoscope at a 4 kHz sampling frequency. By implementing the noisy segment detection algorithm on this database, a sensitivity of Se = 91.41% and a positive predictive value of PPV = 92.86% were obtained based on physicians' assessments. Copyright © 2013 Elsevier Ltd. All rights reserved.
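The template-correlation step of this procedure reduces to a few lines once cycles are segmented and length-normalized: build a point-wise median template per cluster, then flag any cycle whose Pearson correlation with its template falls below a threshold. The cycle values and the 0.8 threshold below are invented for illustration.

```python
def median_template(cycles):
    """Point-wise median across equal-length cycles."""
    return [sorted(col)[len(col) // 2] for col in zip(*cycles)]

def pearson(u, v):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    du = [x - mu for x in u]
    dv = [x - mv for x in v]
    num = sum(a * b for a, b in zip(du, dv))
    den = (sum(a * a for a in du) * sum(b * b for b in dv)) ** 0.5
    return num / den if den else 0.0

def is_noisy(cycle, template, threshold=0.8):
    """Flag a cycle whose correlation with the cluster template falls below
    the threshold."""
    return pearson(cycle, template) < threshold

clean_cycles = [[0, 2, 4, 2, 0, 0], [0, 2, 4, 2, 0, 0], [0, 2, 4, 2, 0, 1]]
template = median_template(clean_cycles)
print(is_noisy([0, 2, 4, 2, 0, 0], template))  # False: matches the template
print(is_noisy([4, 0, 0, 4, 0, 4], template))  # True: corrupted cycle
```

The median (rather than mean) template keeps a few already-noisy training cycles from distorting the reference shape.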
Vehicular headways on signalized intersections: theory, models, and reality
NASA Astrophysics Data System (ADS)
Krbálek, Milan; Šleis, Jiří
2015-01-01
We discuss statistical properties of vehicular headways measured on signalized crossroads. On the basis of mathematical approaches, we formulate theoretical and empirically inspired criteria for the acceptability of theoretical headway distributions. Subsequently, the multifarious families of statistical distributions (commonly used to fit real-road headway statistics) are confronted with these criteria, and with original empirical time clearances gauged among neighboring vehicles leaving signal-controlled crossroads after a green signal appears. Using three different numerical schemes, we demonstrate that an arrangement of vehicles on an intersection is a consequence of the general stochastic nature of queueing systems, rather than a consequence of traffic rules, driver estimation processes, or decision-making procedures.
Mathematical model with autoregressive process for electrocardiogram signals
NASA Astrophysics Data System (ADS)
Evaristo, Ronaldo M.; Batista, Antonio M.; Viana, Ricardo L.; Iarosz, Kelly C.; Szezech, José D., Jr.; Godoy, Moacir F. de
2018-04-01
The cardiovascular system is composed of the heart, blood and blood vessels. Regarding the heart, cardiac conditions are determined by the electrocardiogram, which is a noninvasive medical procedure. In this work, we propose an autoregressive process in a mathematical model based on coupled differential equations in order to obtain the tachograms and the electrocardiogram signals of young adults with normal heartbeats. Our results are compared with an experimental tachogram by means of a Poincaré plot and detrended fluctuation analysis. We verify that the results from the model with the autoregressive process show good agreement with experimental measures from the tachogram generated by the electrical activity of the heartbeat. With the tachogram we build the electrocardiogram by means of coupled differential equations.
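The autoregressive ingredient of such a model can be sketched in isolation: an AR(2) process fluctuating around a mean RR interval produces a plausible-looking tachogram. The coefficients, mean, and noise level below are illustrative assumptions and stand in for only the autoregressive term of the paper's coupled model.

```python
import random

def ar2_tachogram(n, mean_rr=800.0, a1=0.5, a2=-0.2, noise_sd=20.0, seed=11):
    """Generate RR intervals (ms) with a stationary AR(2) process around a
    mean, a simplified stand-in for the model's autoregressive component."""
    rng = random.Random(seed)
    rr = [mean_rr, mean_rr]
    for _ in range(n - 2):
        dev = a1 * (rr[-1] - mean_rr) + a2 * (rr[-2] - mean_rr)
        rr.append(mean_rr + dev + rng.gauss(0.0, noise_sd))
    return rr

tach = ar2_tachogram(1000)
print(sum(tach) / len(tach))  # fluctuates around the 800 ms mean
```

The full model would then drive the coupled differential equations with these RR intervals to synthesize the ECG waveform itself.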
NASA Astrophysics Data System (ADS)
Saccorotti, G.; Nisii, V.; Del Pezzo, E.
2008-07-01
Long-Period (LP) and Very-Long-Period (VLP) signals are the most characteristic seismic signature of volcano dynamics, and provide important information about the physical processes occurring in magmatic and hydrothermal systems. These events are usually characterized by sharp spectral peaks, which may span several frequency decades, by emergent onsets, and by a lack of clear S-wave arrivals. These latter two features make both signal detection and location a challenging task. In this paper, we propose a processing procedure based on the Continuous Wavelet Transform of multichannel, broad-band data to simultaneously solve the signal detection and location problems. Our method consists of two steps. First, we apply a frequency-dependent threshold to the estimates of the array-averaged wavelet coherence (WCO) in order to locate the time-frequency regions spanned by coherent arrivals. For these data, we then use the time-series of the complex wavelet coefficients for deriving the elements of the spatial Cross-Spectral Matrix. From the eigenstructure of this matrix, we eventually estimate the signals' kinematic parameters using the MUltiple SIgnal Characterization (MUSIC) algorithm. The whole procedure greatly facilitates the detection and location of weak, broad-band signals, in turn avoiding the time-frequency resolution trade-off and frequency leakage effects which affect conventional covariance estimates based upon the Windowed Fourier Transform. The method is applied to explosion signals recorded at Stromboli volcano by either a short-period, small-aperture antenna, or a large-aperture, broad-band network. The LP (0.2 < T < 2s) components of the explosive signals are analysed using data from the small-aperture array and under the plane-wave assumption. In this manner, we obtain a precise time- and frequency-localization of the directional properties for waves impinging at the array.
We then extend the wavefield decomposition method using a spherical wave front model, and analyse the VLP components (T > 2s) of the explosion recordings from the broad-band network. Source locations obtained this way are fully compatible with those retrieved from application of more traditional (and computationally expensive) time-domain techniques, such as the Radial Semblance method.
Wu, Shang-Lin; Liao, Lun-De; Lu, Shao-Wei; Jiang, Wei-Ling; Chen, Shi-An; Lin, Chin-Teng
2013-08-01
Electrooculography (EOG) signals can be used to control human-computer interface (HCI) systems, if properly classified. The ability to measure and process these signals may help HCI users to overcome many of the physical limitations and inconveniences in daily life. However, there are currently no effective multidirectional classification methods for monitoring eye movements. Here, we describe a classification method used in a wireless EOG-based HCI device for detecting eye movements in eight directions. This device includes wireless EOG signal acquisition components, wet electrodes and an EOG signal classification algorithm. The EOG classification algorithm is based on extracting features from the electrical signals corresponding to eight directions of eye movement (up, down, left, right, up-left, down-left, up-right, and down-right) and blinking. The recognition and processing of these eight different features were achieved in real-life conditions, demonstrating that this device can reliably measure the features of EOG signals. This system and its classification procedure provide an effective method for identifying eye movements. Additionally, it may be applied to study eye functions in real-life conditions in the near future.
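The eight-direction idea can be reduced to a minimal sketch: threshold the horizontal and vertical channel deflections into signs and map the sign pair to a direction. This is not the paper's classification algorithm (which extracts richer features and handles blinks); the threshold and amplitudes below are invented.

```python
def classify_eog(horizontal, vertical, threshold=50.0):
    """Map peak EOG amplitudes (uV, hypothetical scale) on the two channels
    to one of eight directions; sub-threshold deflections on both channels
    mean no movement."""
    h = (horizontal > threshold) - (horizontal < -threshold)   # -1, 0, +1
    v = (vertical > threshold) - (vertical < -threshold)
    names = {(0, 0): "none",
             (1, 0): "right", (-1, 0): "left",
             (0, 1): "up", (0, -1): "down",
             (1, 1): "up-right", (-1, 1): "up-left",
             (1, -1): "down-right", (-1, -1): "down-left"}
    return names[(h, v)]

print(classify_eog(120.0, 0.0))     # right
print(classify_eog(-90.0, 110.0))   # up-left
print(classify_eog(10.0, -20.0))    # none
```

In a real HCI pipeline these decisions would be made on filtered, baseline-corrected waveform features rather than single raw amplitudes.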
Nonword repetition in lexical decision: support for two opposing processes.
Wagenmakers, Eric-Jan; Zeelenberg, René; Steyvers, Mark; Shiffrin, Richard; Raaijmakers, Jeroen
2004-10-01
We tested and confirmed the hypothesis that the effect of prior presentation of nonwords in lexical decision is the net result of two opposing processes: (1) a relatively fast inhibitory process based on global familiarity; and (2) a relatively slow facilitatory process based on the retrieval of specific episodic information. In three studies, we manipulated speed-stress to influence the balance between the two processes. Experiment 1 showed item-specific improvement for repeated nonwords in a standard "respond-when-ready" lexical decision task. Experiment 2 used a 400-ms deadline procedure and showed performance for nonwords to be unaffected by up to four prior presentations. In Experiment 3 we used a signal-to-respond procedure with variable time intervals and found negative repetition priming for repeated nonwords. These results can be accounted for by dual-process models of lexical decision.
NASA Astrophysics Data System (ADS)
Maugeri, L.; Moraschi, M.; Summers, P.; Favilla, S.; Mascali, D.; Cedola, A.; Porro, C. A.; Giove, F.; Fratini, M.
2018-02-01
Functional Magnetic Resonance Imaging (fMRI) based on Blood Oxygenation Level Dependent (BOLD) contrast has become one of the most powerful tools in neuroscience research. On the other hand, fMRI approaches have seen limited use in the study of the spinal cord and subcortical brain regions (such as the brainstem and portions of the diencephalon). Indeed obtaining good BOLD signal in these areas still represents a technical and scientific challenge, due to poor control of physiological noise and to a limited overall quality of the functional series. A solution can be found in the combination of optimized experimental procedures at the acquisition stage, and well-adapted artifact mitigation procedures in the data processing. In this framework, we studied two different data processing strategies to reduce physiological noise in cortical and subcortical brain regions and in the spinal cord, based on the aCompCor and RETROICOR denoising tools, respectively. The study, performed in healthy subjects, was carried out using an ad hoc isometric motor task. We observed an increased signal-to-noise ratio in the denoised functional time series in the spinal cord and in the subcortical brain regions.
Chang, Hing-Chiu; Bilgin, Ali; Bernstein, Adam; Trouard, Theodore P.
2018-01-01
Over the past several years, significant efforts have been made to improve the spatial resolution of diffusion-weighted imaging (DWI), aiming at better detecting subtle lesions and more reliably resolving white-matter fiber tracts. A major concern with high-resolution DWI is the limited signal-to-noise ratio (SNR), which may significantly offset the advantages of high spatial resolution. Although the SNR of DWI data can be improved by denoising in post-processing, existing denoising procedures may potentially reduce the anatomic resolvability of high-resolution imaging data. Additionally, non-Gaussian noise induced signal bias in low-SNR DWI data may not always be corrected with existing denoising approaches. Here we report an improved denoising procedure, termed diffusion-matched principal component analysis (DM-PCA), which comprises 1) identifying a group of (not necessarily neighboring) voxels that demonstrate very similar magnitude signal variation patterns along the diffusion dimension, 2) correcting low-frequency phase variations in complex-valued DWI data, 3) performing PCA along the diffusion dimension for real- and imaginary-components (in two separate channels) of phase-corrected DWI voxels with matched diffusion properties, 4) suppressing the noisy PCA components in real- and imaginary-components, separately, of phase-corrected DWI data, and 5) combining real- and imaginary-components of denoised DWI data. Our data show that the new two-channel (i.e., for real- and imaginary-components) DM-PCA denoising procedure performs reliably without noticeably compromising anatomic resolvability. Non-Gaussian noise induced signal bias could also be reduced with the new denoising method. The DM-PCA based denoising procedure should prove highly valuable for high-resolution DWI studies in research and clinical uses. PMID:29694400
Optimal Signal Filtration in Optical Sensors with Natural Squeezing of Vacuum Noises
NASA Technical Reports Server (NTRS)
Gusev, A. V.; Kulagin, V. V.
1996-01-01
The structure of an optimal receiver is discussed for an optical sensor measuring a small displacement of a probe mass. Due to nonlinear interaction of the field and the mirror, the reflected wave is in a squeezed state (natural squeezing), the two quadratures of which are correlated; therefore one can increase the signal-to-noise ratio and overcome the SQL. A measurement procedure realizing such correlation processing of the two quadratures is clarified. The required combination of quadratures can be produced by mixing the pump field reflected from the mirror with a phase-modulated local oscillator field in a dual-detector homodyne scheme. Such a measurement procedure could be useful not only for resonant-bar gravitational detectors but also for long-baseline laser interferometric detectors.
Custom modular electromagnetic induction system for shallow electrical conductivity measurements
NASA Astrophysics Data System (ADS)
Mester, Achim; Zimmermann, Egon; Tan, Xihe; von Hebel, Christian; van der Kruk, Jan; van Waasen, Stefan
2017-04-01
Electromagnetic induction (EMI) is a contactless measurement method that offers fast and easy investigation of the shallow electrical conductivity, e.g. on the field scale. Available frequency-domain EMI systems offer multiple fixed transmitter-receiver (Tx-Rx) pairs with Tx-Rx separations between 0.3 and 4.0 m and investigation depths of up to six meters. Here, we present our custom EMI system, which consists of modular sensor units that can act as either transmitters or receivers, and a backpack containing the data acquisition system. The prototype system is optimized for frequencies between 5 and 30 kHz and Tx-Rx separations between 0.4 and 2.0 m. Each Tx and Rx signal is digitized separately and stored on a notebook computer. The soil conductivity information is determined after the measurements by advanced digital processing of the data using optimized correction and calibration procedures. The system stores the raw data throughout the entire procedure, which offers many advantages: (1) comprehensive accuracy and error analysis as well as reproducibility of the correction and calibration procedures; (2) easy customization of the number of Tx/Rx units and of their arrangement and frequencies; (3) signals from simultaneously working transmitters can be separated within the received data using orthogonal signals, resulting in additional Tx-Rx pairs and maximized soil information; and (4) later improvements in the post-processing algorithms can be applied to old data sets. As an example, we present an innovative setup with two transmitters and five receivers using orthogonal signals, yielding ten Tx-Rx pairs. Note that orthogonal signals also enable redundant Tx-Rx pairs, which are useful for verifying the transmitter signals and for data stacking. In contrast to commercial systems, only adjustments in the post-processing were necessary to realize such measurement configurations with flexibly combined Tx and Rx modules.
The presented system reaches an accuracy of up to 1 mS/m and was also evaluated by surface measurements with the sensor modules mounted to a sled and moved along a bare soil field transect. Measured data were calibrated for quantitative apparent electrical conductivity using reference data at certain calibration locations. Afterwards, data were inverted for electrical conductivity over depth using a multi-layer inversion showing similar conductivity distributions as the reference data.
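The orthogonal-signal separation behind advantage (3) can be illustrated with a lock-in style projection onto each transmitter's reference. The tone frequencies, complex amplitudes, and noise level below are made up; the two references are exactly orthogonal over the record length, which is what lets one receiver serve two simultaneous transmitters.

```python
import numpy as np

fs, n = 100_000, 10_000                  # sample rate and record length
t = np.arange(n) / fs
ref1 = np.exp(2j * np.pi * 5_000 * t)    # Tx1 reference tone, 5 kHz
ref2 = np.exp(2j * np.pi * 10_000 * t)   # Tx2 reference tone, 10 kHz (orthogonal)

# Received signal: each Tx scaled and phase-shifted by a made-up soil response.
a1 = 0.8 * np.exp(1j * 0.3)
a2 = 0.2 * np.exp(-1j * 1.1)
rx = (a1 * ref1 + a2 * ref2).real \
     + 0.01 * np.random.default_rng(1).standard_normal(n)

# Lock-in style separation: project the record onto each reference.
est1 = 2 * np.mean(rx * np.conj(ref1))
est2 = 2 * np.mean(rx * np.conj(ref2))
print(abs(est1 - a1) < 0.01, abs(est2 - a2) < 0.01)
```

Because both tones complete an integer number of cycles over the record, the cross terms average to exactly zero and each projection recovers its own transmitter's amplitude and phase.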
Attenuation of harmonic noise in vibroseis data using Simulated Annealing
NASA Astrophysics Data System (ADS)
Sharma, S. P.; Tildy, Peter; Iranpour, Kambiz; Scholtz, Peter
2009-04-01
Processing of high-productivity vibroseis seismic data (such as slip-sweep acquisition records) suffers from the well-known disadvantage of harmonic distortion. Harmonic distortions are observed after cross-correlation of the recorded seismic signal with the pilot sweep and affect the signals in negative time (before the actual strong reflection event). Weak reflection events of the earlier sweeps falling in the negative time window of the cross-correlation sequence are masked by harmonic distortions. Though the amplitude of the harmonic distortion is small (up to 10-20%) compared to the fundamental amplitude of the reflection events, it is significant enough to mask weak reflected signals. Eliminating harmonic noise due to source signal distortion from the cross-correlated seismic trace has been a challenging task since vibratory sources came into use, and it still needs improvement. An approach has been worked out that minimizes the level of harmonic distortion by designing a signal similar to the harmonic distortion. An arbitrary-length filter is optimized using the Simulated Annealing global optimization approach to design a harmonic signal. The approach deals with the convolution of a ratio trace (the ratio of the harmonics with respect to the fundamental sweep) with the correlated "positive time" recorded signal and an arbitrary filter. A synthetic data study revealed that this procedure of designing a signal similar to the desired harmonics, using the convolution of a suitable filter with the theoretical ratio of harmonics to fundamental sweep, helps in reducing the problem of harmonic distortion. Once we generate such a signal for a vibroseis source using an optimized filter, this filter can be used to generate the harmonics, which can then be subtracted from the main cross-correlated trace to obtain a better, undistorted image of the subsurface.
Designing the predicted harmonics to reduce the energy in the trace by considering the weak reflections and observed harmonics together yields the desired result (resolution of the weak reflected signal from the harmonic distortion). As the optimization proceeds, one can observe from difference plots of the desired and predicted harmonics how weak reflections gradually emerge from the harmonic distortion during later iterations of the global optimization. The procedure is applied to resolving weak reflections from a number of traces considered together. For a more precise design of the harmonics, the SA procedure needs longer computation times, which is impractical for voluminous seismic data. However, the objective of resolving a weak reflection signal in strong harmonic noise can be achieved with fast computation by using a faster cooling schedule and fewer iterations and moves in the simulated annealing procedure. This process could help in reducing the harmonic distortion and achieving the objective of recovering the lost weak reflection events in the cross-correlated seismic traces. Acknowledgements: The research was supported under the European Marie Curie Host Fellowships for Transfer of Knowledge (TOK) Development Host Scheme (contract no. MTKD-CT-2006-042537).
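The filter parameterization behind the harmonic prediction (a ratio trace times the positive-time trace, convolved with a short unknown filter) can be checked on synthetic data. Here the filter is fitted by ordinary least squares to a known harmonic, purely to show that the parameterization can represent and subtract the harmonic; the abstract's Simulated Annealing search, and all sizes below, are stand-ins and assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: harmonic noise = (ratio trace * positive-time trace) * filter.
n, taps = 400, 8
trace = rng.standard_normal(n)           # correlated positive-time record
ratio = 0.15 * np.ones(n)                # harmonic-to-fundamental ratio trace
f_true = rng.standard_normal(taps)       # unknown distortion filter
harmonic = np.convolve(ratio * trace, f_true)[:n]
contaminated = trace + harmonic          # trace masked by harmonic noise

# Convolution as a matrix: column k is the source shifted by k samples.
src = ratio * trace
A = np.column_stack(
    [np.concatenate([np.zeros(k), src[: n - k]]) for k in range(taps)]
)
f_est, *_ = np.linalg.lstsq(A, harmonic, rcond=None)  # fit the filter

cleaned = contaminated - A @ f_est        # subtract the predicted harmonics
print(np.allclose(cleaned, trace))
```

In the real problem the harmonic is not known separately, which is why the paper drives the filter design by minimizing total trace energy with a global optimizer instead of this direct fit.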
Buhusi, Catalin V.; Lamoureux, Jeffrey A.; Meck, Warren H.
2008-01-01
The effects of prenatal choline availability on contextual processing in a 30-s peak-interval (PI) procedure with gaps (1, 5, 10, and 15 s) were assessed in adult male rats. Neither supplementation nor deprivation of prenatal choline affected baseline timing performance in the PI procedure. However, prenatal choline availability significantly altered the contextual processing of gaps inserted into the to-be-timed signal (light on). Choline-supplemented rats displayed a high degree of context sensitivity as indicated by clock resetting when presented with a gap in the signal (light off). In contrast, choline-deficient rats showed no such effect and stopped their clocks during the gap. Control rats exhibited an intermediate level of contextual processing in between stop and full reset. When switched to a reversed gap condition in which rats timed the absence of the light and the presence of the light served as a gap, all groups reset their clocks following a gap. Furthermore, when filling the intertrial interval (ITI) with a distinctive stimulus (e.g., sound), both choline-supplemented and control rats rightward shifted their PI functions less on trials with gaps than choline-deficient rats, indicating greater contextual sensitivity and reduced clock resetting under these conditions. Overall, these data support the view that prenatal choline availability affects the sensitivity to the context in which gaps are inserted in the to-be-timed signal, thereby influencing whether rats run, stop, or reset their clocks. PMID:18778696
1987-01-01
the results of that problem to be applied to deblurring. Four procedures for finding the maximum entropy solution have been developed and have been... distortion operator h converges quadratically to an impulse and, as a result, the restoration converges quadratically to x. Therefore, when the standard... is concerned with the modeling of a signal as the sum of sinusoids in white noise, where the sinusoidal frequencies vary as a function of time
A CWT-based methodology for piston slap experimental characterization
NASA Astrophysics Data System (ADS)
Buzzoni, M.; Mucchi, E.; Dalpiaz, G.
2017-03-01
Noise and vibration control in mechanical systems has become ever more significant for the automotive industry, where the comfort of the passenger compartment represents a challenging issue for car manufacturers. The reduction of piston slap noise is pivotal for a good design of IC engines. In this scenario, a methodology has been developed for the vibro-acoustic assessment of IC diesel engines by means of design changes in the piston-to-cylinder-bore clearance. Vibration signals have been analysed by means of advanced signal processing techniques taking advantage of cyclostationarity theory. The procedure starts from the analysis of the Continuous Wavelet Transform (CWT) in order to identify a frequency band representative of the piston slap phenomenon. Such a frequency band is then exploited as the input to further signal processing that involves envelope analysis of the second-order cyclostationary component of the signal. The second-order harmonic component has been used as the benchmark parameter for piston slap noise. An experimental procedure of vibrational benchmarking is proposed and verified at different operational conditions in real IC engines actually fitted to cars. This study clearly underlines the crucial role of transducer positioning when differences among real piston-to-cylinder clearances are considered. In particular, the proposed methodology is effective for sensors placed on the outer cylinder wall in all the tested conditions.
Clustering for unsupervised fault diagnosis in nuclear turbine shut-down transients
NASA Astrophysics Data System (ADS)
Baraldi, Piero; Di Maio, Francesco; Rigamonti, Marco; Zio, Enrico; Seraoui, Redouane
2015-06-01
Empirical methods for fault diagnosis usually entail a process of supervised training based on a set of examples of signal evolutions "labeled" with the corresponding, known classes of fault. In practice, however, the signals collected during plant operation may very often be "unlabeled", i.e., information on the type of fault that occurred is not available. To cope with this practical situation, in this paper we develop a methodology for the identification of transient signals showing similar characteristics, under the conjecture that operational/faulty transient conditions of the same type lead to similar behavior in the evolution of the measured signals. The methodology is founded on a feature extraction procedure, which feeds a spectral clustering technique embedding the unsupervised fuzzy C-means (FCM) algorithm, which evaluates the functional similarity among the different operational/faulty transients. A procedure for validating the plausibility of the obtained clusters, based on physical considerations, is also proposed. The methodology is applied to a real industrial case, on the basis of 148 shut-down transients of a Nuclear Power Plant (NPP) steam turbine.
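The unsupervised core of the clustering step, fuzzy C-means, might be sketched as below. The two-dimensional feature vectors and the deterministic farthest-point seeding are illustrative only; they are not the paper's feature extraction or spectral-clustering pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two well-separated groups of transient feature vectors (synthetic stand-in).
X = np.vstack([rng.normal(0.0, 0.1, (30, 2)), rng.normal(3.0, 0.1, (30, 2))])

def fcm(X, c, m=2.0, iters=50):
    """Minimal unsupervised fuzzy C-means: memberships U (n x c) and centers."""
    # Deterministic farthest-point seeding of the c centers.
    idx = [0]
    for _ in range(c - 1):
        d = np.min(np.linalg.norm(X[:, None] - X[idx][None], axis=2), axis=1)
        idx.append(int(np.argmax(d)))
    centers = X[idx]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)             # fuzzy membership update
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted center update
    return U, centers

U, _ = fcm(X, c=2)
labels = U.argmax(axis=1)
same_a = len(set(labels[:30])) == 1
same_b = len(set(labels[30:])) == 1
print(same_a and same_b and labels[0] != labels[30])
```

The fuzziness exponent m softens the memberships, which is what lets the validation step inspect how strongly each transient belongs to its cluster.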
Aeroservoelastic Model Validation and Test Data Analysis of the F/A-18 Active Aeroelastic Wing
NASA Technical Reports Server (NTRS)
Brenner, Martin J.; Prazenica, Richard J.
2003-01-01
Model validation and flight test data analysis require careful consideration of the effects of uncertainty, noise, and nonlinearity. Uncertainty prevails in the data analysis techniques and results in a composite model uncertainty from unmodeled dynamics, assumptions and mechanics of the estimation procedures, noise, and nonlinearity. A fundamental requirement for reliable and robust model development is an attempt to account for each of these sources of error, in particular, for model validation, robust stability prediction, and flight control system development. This paper is concerned with data processing procedures for uncertainty reduction in model validation for stability estimation and nonlinear identification. F/A-18 Active Aeroelastic Wing (AAW) aircraft data is used to demonstrate signal representation effects on uncertain model development, stability estimation, and nonlinear identification. Data is decomposed using adaptive orthonormal best-basis and wavelet-basis signal decompositions for signal denoising into linear and nonlinear identification algorithms. Nonlinear identification from a wavelet-based Volterra kernel procedure is used to extract nonlinear dynamics from aeroelastic responses, and to assist model development and uncertainty reduction for model validation and stability prediction by removing a class of nonlinearity from the uncertainty.
Radioastronomic signal processing cores for the SKA radio telescope
NASA Astrophysics Data System (ADS)
Comorett, G.; Chiarucc, S.; Belli, C.
Modern radio telescopes require the processing of wideband signals, with sample rates from tens of MHz to tens of GHz, and are composed of hundreds up to a million individual antennas. Digital signal processing of these signals includes digital receivers (the digital equivalent of the heterodyne receiver), beamformers, channelizers, and spectrometers. FPGAs offer relatively low power consumption compared with GPUs or dedicated computers, a wide signal data path, and high interconnectivity. Efficient algorithms have been developed for these applications. Here we review some of the signal processing cores developed for the SKA telescope. The LFAA beamformer/channelizer architecture is based on an oversampling channelizer, where the channelizer output sampling rate and channel spacing can be set independently. This is useful where an overlap between adjacent channels is required to provide a uniform spectral coverage. The architecture allows for an efficient and distributed channelization scheme, with a final resolution corresponding to a million spectral channels, minimum leakage, and high out-of-band rejection. An optimized filter design procedure is used to provide an equiripple response with a very large number of spectral channels. A wideband digital receiver has been designed to select the processed bandwidth of the SKA Mid receiver. The receiver extracts a 2.5 MHz bandwidth from a 14 GHz input bandwidth. The design allows for non-integer ratios between the input and output sampling rates, with resource usage comparable to that of a conventional decimating digital receiver. Finally, some considerations on the quantization of radioastronomic signals are presented. Due to the stochastic nature of the signal, quantization using few data bits is possible. Good accuracy and dynamic range are possible even with 2-3 bits, but the nonlinearity in the correlation process must be corrected in post-processing. With at least 6 bits it is possible to have a very linear response of the instrument, with nonlinear terms below 80 dB, provided the signal amplitude is kept within bounds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubberke, Frithjof H.; Baumhögger, Elmar; Vrabec, Jadran, E-mail: jadran.vrabec@upb.de
2015-05-15
The pulse-echo technique determines the propagation time of acoustic wave bursts in a fluid over a known propagation distance. It is limited by the signal quality of the received echoes of the acoustic wave bursts, which degrades with decreasing density of the fluid due to acoustic impedance and attenuation effects. Signal sampling is significantly improved in this work by burst design and signal processing such that a wider range of thermodynamic states can be investigated. Applying a Fourier-transformation-based digital filter to the acoustic wave signals increases their signal-to-noise ratio and enhances their time and amplitude resolutions, improving the overall measurement accuracy. In addition, burst design leads to technical advantages for determining the propagation time due to the associated conditioning of the echo. It is shown that this operating procedure enlarges the measuring range of the pulse-echo technique for supercritical argon and nitrogen at 300 K down to 5 MPa, where it was previously limited to around 20 MPa.
Procedures for using signals from one sensor as substitutes for signals of another
NASA Technical Reports Server (NTRS)
Suits, G.; Malila, W.; Weller, T.
1988-01-01
Long-term monitoring of surface conditions may require a transfer from using data from one satellite sensor to data from a different sensor having different spectral characteristics. Two general procedures for spectral signal substitution are described in this paper, a principal-components procedure and a complete multivariate regression procedure. They are evaluated through a simulation study of five satellite sensors (MSS, TM, AVHRR, CZCS, and HRV). For illustration, they are compared to another recently described procedure for relating AVHRR and MSS signals. The multivariate regression procedure is shown to be best. TM can accurately emulate the other sensors, but they, on the other hand, have difficulty in accurately emulating its shortwave infrared bands (TM5 and TM7).
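The complete multivariate regression procedure can be sketched on simulated spectra: integrate each scene spectrum through two different spectral response curves, then regress one sensor's bands on the other's. The response curves and scene model below are made up; they are not the actual MSS, TM, AVHRR, CZCS, or HRV responses.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in: 500 scene spectra at 50 wavelengths; two hypothetical
# sensors integrate them with different made-up spectral response curves.
scenes = rng.random((500, 50)).cumsum(axis=1)        # smooth-ish spectra
wl = np.linspace(0.0, 1.0, 50)
resp_a = np.stack([np.exp(-((wl - c) / 0.08) ** 2) for c in (0.2, 0.45, 0.7, 0.9)])
resp_b = np.stack([np.exp(-((wl - c) / 0.10) ** 2) for c in (0.3, 0.6, 0.85)])
sig_a = scenes @ resp_a.T          # 4-band sensor signals (the substitute)
sig_b = scenes @ resp_b.T          # 3-band sensor signals to be emulated

# Complete multivariate regression: predict sensor B's bands from sensor A's.
A1 = np.column_stack([sig_a, np.ones(len(sig_a))])   # design matrix + intercept
coef, *_ = np.linalg.lstsq(A1, sig_b, rcond=None)
pred_b = A1 @ coef

rel_err = np.linalg.norm(pred_b - sig_b) / np.linalg.norm(sig_b)
print(rel_err < 0.1)
```

Emulation works well here because the two band sets sample overlapping, smoothly varying spectra; it degrades when the target sensor has bands (such as the shortwave infrared TM5 and TM7) that the substitute sensor does not cover.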
NASA Astrophysics Data System (ADS)
García Plaza, E.; Núñez López, P. J.
2018-01-01
On-line monitoring of surface finish in machining processes has proven to be a substantial advancement over traditional post-process quality control techniques by reducing inspection times and costs and by avoiding the manufacture of defective products. This study applied techniques for processing cutting force signals based on the wavelet packet transform (WPT) method for the monitoring of surface finish in computer numerical control (CNC) turning operations. The behaviour of 40 mother wavelets was analysed using three techniques: global packet analysis (G-WPT), and the application of two packet reduction criteria: maximum energy (E-WPT) and maximum entropy (SE-WPT). The optimum signal decomposition level (Lj) was determined to eliminate noise and to obtain information correlated to surface finish. The results obtained with the G-WPT method provided an in-depth analysis of cutting force signals, and frequency ranges and signal characteristics were correlated to surface finish with excellent results in the accuracy and reliability of the predictive models. The radial and tangential cutting force components at low frequency provided most of the information for the monitoring of surface finish. The E-WPT and SE-WPT packet reduction criteria substantially reduced signal processing time, but at the expense of discarding packets with relevant information, which impoverished the results. The G-WPT method was observed to be an ideal procedure for processing cutting force signals applied to the real-time monitoring of surface finish, and was estimated to be highly accurate and reliable at a low analytical-computational cost.
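A wavelet packet decomposition with the maximum-energy (E-WPT) packet criterion can be sketched with a Haar filter bank. Haar here is a stand-in for the 40 mother wavelets the study compares, the "cutting force" signal is synthetic, and the packets come out in natural (Paley) order rather than frequency order.

```python
import numpy as np

def haar_step(x):
    """One Haar analysis step: (approximation, detail), each half length."""
    pairs = x[: len(x) // 2 * 2].reshape(-1, 2)
    return ((pairs[:, 0] + pairs[:, 1]) / np.sqrt(2),
            (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))

def wpt(x, level):
    """Full Haar wavelet packet tree: 2**level coefficient arrays."""
    packets = [x]
    for _ in range(level):
        nxt = []
        for p in packets:
            a, d = haar_step(p)
            nxt.extend([a, d])
        packets = nxt
    return packets

# Synthetic "cutting force": slow drift plus a strong Nyquist-rate component.
t = np.arange(1024)
sig = 0.1 * np.sin(2 * np.pi * t / 512) + np.where(t % 2 == 0, 1.0, -1.0)

packets = wpt(sig, level=3)
energies = [float(np.sum(p ** 2)) for p in packets]
keep = int(np.argmax(energies))    # E-WPT: retain the maximum-energy packet
print(keep, energies[keep])
```

The energy criterion keeps only one of the 2³ packets, which is exactly the speed/information trade-off the abstract describes: far less data to process, at the risk of discarding packets that also carry surface-finish information.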
Wind Tunnel Simulations of the Mock Urban Setting Test - Experimental Procedures and Data Analysis
2004-07-01
depends on the subjective choice of points to include in the constant stress region. This is demonstrated by the marked difference in the slope for the two... designed explicitly for the analysis of time series and signal processing, particularly for atmospheric dispersion experiments. The scripts developed... below. Processing scripts are available for all these analyses in the /scripts directory. All files of figures and processed data resulting from these
1984-06-01
and shift-varying deblurring of images. ... Several of the techniques which have been investigated under this work unit are based upon... concern with the use of these iterative algorithms for deconvolution is the effect of noise on the restoration. In the absence of constraints on the... perform badly in the presence of broadband noise. An ad hoc procedure which improves performance is to prefilter the data to enhance the signal-to
Seismpol_ a visual-basic computer program for interactive and automatic earthquake waveform analysis
NASA Astrophysics Data System (ADS)
Patanè, Domenico; Ferrari, Ferruccio
1997-11-01
A Microsoft Visual Basic computer program for waveform analysis of seismic signals is presented. The program combines interactive and automatic processing of digital signals using data recorded by three-component seismic stations. The analysis procedure can be used for either interactive earthquake analysis or automatic on-line processing of seismic recordings. The algorithm works in the time domain using the Covariance Matrix Decomposition (CMD) method, so that polarization characteristics may be computed continuously in real time and seismic phases can be identified and discriminated. Visual inspection of the particle motion in orthogonal planes of projection (hodograms) reduces the danger of misinterpretation arising from the application of the polarization filter. The choice of time window and frequency intervals improves the quality of the extracted polarization information. In fact, the program uses a band-pass Butterworth filter to process the signals in the frequency domain by analyzing a selected signal window in a series of narrow frequency bands. Significant results, supported by well-defined polarizations and source azimuth estimates for P and S phases, are also obtained for short-period seismic events (local microearthquakes).
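The Covariance Matrix Decomposition step reduces to an eigenanalysis of the three-component covariance matrix over a sliding window. The synthetic rectilinear arrival below, its 40-degree azimuth, and the noise level are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic three-component (Z, N, E) window: a rectilinear arrival polarized
# along a 40 degree azimuth, plus weak noise (all values illustrative).
n = 500
wave = np.sin(2 * np.pi * np.arange(n) / 25)
az = np.deg2rad(40.0)
data = np.vstack([0.2 * wave,          # Z
                  np.cos(az) * wave,   # N
                  np.sin(az) * wave])  # E
data += 0.02 * rng.standard_normal((3, n))

# Covariance Matrix Decomposition: eigenanalysis of the 3x3 covariance matrix.
evals, evecs = np.linalg.eigh(np.cov(data))   # eigenvalues in ascending order
l1, l2 = evals[-1], evals[-2]
rectilinearity = 1.0 - l2 / l1                # ~1 for purely linear motion
v = evecs[:, -1]                              # principal polarization direction
azimuth = np.degrees(np.arctan2(abs(v[2]), abs(v[1])))  # from N toward E
print(round(rectilinearity, 3), round(azimuth, 1))
```

High rectilinearity flags P-type linear particle motion, and the principal eigenvector gives the source azimuth, which is how the program discriminates phases window by window.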
Ultrasensitive surveillance of sensors and processes
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
2001-01-01
A method and apparatus for monitoring a source of data to determine the operating state of a working system. The method includes determining a sensor (or data source) arrangement associated with monitoring the source of data for a system; activating a first method for performing a sequential probability ratio test if the arrangement includes a single data (sensor) source; activating a second method for performing a regression sequential probability ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals which are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors; and utilizing at least one of the first, second, and third methods to accumulate sensor signals and determine the operating state of the system.
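The first method, a sequential probability ratio test, can be sketched as Wald's SPRT on a single sensor's residuals. The Gaussian hypotheses, means, and error rates below are assumptions for the sketch, not the patent's parameters.

```python
import numpy as np

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=1e-3, beta=1e-3):
    """Wald's SPRT for N(mu0, sigma^2) vs N(mu1, sigma^2) on sensor residuals.
    Returns the accepted hypothesis and the number of samples consumed."""
    lower = np.log(beta / (1.0 - alpha))      # accept-H0 threshold
    upper = np.log((1.0 - beta) / alpha)      # accept-H1 threshold
    llr = 0.0
    for i, x in enumerate(samples, 1):
        # Log-likelihood ratio increment for one Gaussian observation.
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma**2
        if llr >= upper:
            return "H1", i
        if llr <= lower:
            return "H0", i
    return "undecided", len(samples)

rng = np.random.default_rng(6)
decision_ok, n_ok = sprt(rng.normal(0.0, 1.0, 500))    # healthy residuals
decision_bad, n_bad = sprt(rng.normal(1.0, 1.0, 500))  # degraded residuals
print(decision_ok, n_ok, decision_bad, n_bad)
```

The appeal of the sequential test for surveillance is that it usually decides after only a handful of samples while still guaranteeing the chosen false-alarm and missed-alarm probabilities.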
Ultrasensitive surveillance of sensors and processes
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
1999-01-01
A method and apparatus for monitoring a source of data to determine the operating state of a working system. The method includes determining a sensor (or data source) arrangement associated with monitoring the source of data for a system; activating a first method for performing a sequential probability ratio test if the arrangement includes a single data (sensor) source; activating a second method for performing a regression sequential probability ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals which are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors; and utilizing at least one of the first, second, and third methods to accumulate sensor signals and determine the operating state of the system.
Modeling Response Signal and Response Time Data
ERIC Educational Resources Information Center
Ratcliff, Roger
2006-01-01
The diffusion model (Ratcliff, 1978) and the leaky competing accumulator model (LCA, Usher & McClelland, 2001) were tested against two-choice data collected from the same subjects with the standard response time procedure and the response signal procedure. In the response signal procedure, a stimulus is presented and then, at one of a number of…
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony
1990-01-01
The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.
1990-01-01
Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Unsupervised pattern recognition methods in ciders profiling based on GCE voltammetric signals.
Jakubowska, Małgorzata; Sordoń, Wanda; Ciepiela, Filip
2016-07-15
This work presents a complete methodology for distinguishing between different brands of cider and degrees of ageing, based on voltammetric signals, utilizing dedicated data preprocessing procedures and unsupervised multivariate analysis. It was demonstrated that voltammograms recorded on a glassy carbon electrode in Britton-Robinson buffer at pH 2 are reproducible for each brand. By applying clustering algorithms and principal component analysis, clearly homogeneous clusters were obtained. An advanced signal processing strategy, which included automatic baseline correction, interval scaling, and a continuous wavelet transform with a dedicated mother wavelet, was a key step in the correct recognition of the objects. The results show that voltammetry combined with optimized univariate and multivariate data processing is a sufficient tool to distinguish between ciders from various brands and to evaluate their freshness. Copyright © 2016 Elsevier Ltd. All rights reserved.
Arun, Mike W J; Yoganandan, Narayan; Stemper, Brian D; Pintar, Frank A
2014-12-01
While studies have used acoustic sensors to determine fracture initiation time in biomechanical studies, no systematic procedure has been established for processing the acoustic signals. The objective of the study was to develop a methodology to condition distorted acoustic emission data using signal processing techniques to identify the fracture initiation time. The methodology was developed from testing a human cadaver lumbar spine column. Acoustic sensors were glued to all vertebrae, high-rate impact loading was applied, load-time histories were recorded (load cell), and fracture was documented using CT. A compression fracture occurred at L1 while the other vertebrae remained intact. FFTs of the raw voltage-time traces were used to determine an optimum frequency range associated with high decibel levels. Signals were bandpass filtered in this range. A bursting pattern was found in the fractured vertebra while signals from the other vertebrae were silent. The bursting time was associated with the time of fracture initiation. The force at fracture was determined using this time and the force-time data. The methodology is independent of parameters selected a priori, such as fixed voltage levels or bandpass frequencies, or of reliance on the force-time signal alone, and allows determination of the force based on the time identified during signal processing. The methodology can be used for different body regions in cadaver experiments. Copyright © 2014 Elsevier Ltd. All rights reserved.
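The burst-detection idea (use an FFT to find the energetic band, bandpass filter, then locate the bursting time) can be sketched on a synthetic trace. The sampling rate, band edges, burst timing, and amplitudes are invented, and a brick-wall FFT filter stands in for the bandpass filtering step.

```python
import numpy as np

rng = np.random.default_rng(7)
fs = 100_000                              # sampling rate, Hz (illustrative)
t = np.arange(0, 0.02, 1 / fs)

# Synthetic sensor trace: low-frequency structural motion plus noise, with a
# high-frequency "fracture" burst injected at 12 ms (all values made up).
trace = 0.5 * np.sin(2 * np.pi * 200 * t) + 0.05 * rng.standard_normal(len(t))
burst = (t >= 0.012) & (t < 0.013)
trace[burst] += np.sin(2 * np.pi * 20_000 * t[burst])

# Bandpass around the burst's dominant band (found from an FFT in practice).
spec = np.fft.rfft(trace)
freqs = np.fft.rfftfreq(len(trace), 1 / fs)
spec[(freqs < 15_000) | (freqs > 25_000)] = 0.0
filtered = np.fft.irfft(spec, len(trace))

# First strong excursion of the filtered signal marks fracture initiation;
# the force at fracture is then read off the load-time history at this time.
onset = t[np.argmax(np.abs(filtered) > 0.5)]
print(abs(onset - 0.012) < 0.001)
```

Filtering removes both the low-frequency motion and most of the broadband noise, so a simple amplitude threshold on the filtered trace isolates the bursting time without any a priori voltage level on the raw signal.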
A method for determining the weak statistical stationarity of a random process
NASA Technical Reports Server (NTRS)
Sadeh, W. Z.; Koper, C. A., Jr.
1978-01-01
A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
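The equivalent-ensemble construction and a simple variance check of time invariance might look like the following; the record length, segmentation, surrogate signal, and tolerance are all illustrative choices, not the paper's specific tests.

```python
import numpy as np

rng = np.random.default_rng(8)

# A long single time history of a (stationary) random process: lightly
# filtered white noise standing in for a turbulent velocity signal.
x = np.convolve(rng.standard_normal(60_000), np.ones(5) / 5, mode="same")

# Equivalent ensemble: segment the history into equal, finite sample records
# (assumed far enough apart, relative to the correlation time, to be
# statistically independent).
n_rec, n_len = 60, 1000
ens = x[: n_rec * n_len].reshape(n_rec, n_len)

# Equivalent-ensemble averages at each instant; for weak stationarity these
# should be time invariant up to sampling error.
ens_mean = ens.mean(axis=0)        # per-time-point mean over records
pred_var = ens.var() / n_rec       # sampling variance of a 60-record mean
ratio = ens_mean.var() / pred_var  # ~1 if the means are time invariant
print(round(ratio, 2))
```

If the process were nonstationary (say, with a drifting mean), the ensemble averages would vary with time by more than sampling error alone and the ratio would exceed one markedly.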
Hajihosseini, Payman; Anzehaee, Mohammad Mousavi; Behnam, Behzad
2018-05-22
Early fault detection and isolation in industrial systems is a critical factor in preventing equipment damage. In the proposed method, instead of using the time signals of the sensors directly, the 2D image obtained by placing these signals next to each other in a matrix is used, and a novel fault detection and isolation procedure is then carried out based on image processing techniques. Different features, including texture, the wavelet transform, and the mean and standard deviation of the image, accompanied by MLP and RBF neural network classifiers, have been used for this purpose. The results obtained indicate the notable efficacy and success of the proposed method in detecting and isolating faults of the Tennessee Eastman benchmark process and its superiority over previous techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Blind estimation of reverberation time
NASA Astrophysics Data System (ADS)
Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; O'Brien, William D.; Lansing, Charissa R.; Feng, Albert S.
2003-11-01
The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. Many state-of-the-art audio signal processing algorithms, for example in hearing-aids and telephony, are expected to have the ability to characterize the listening environment, and turn on an appropriate processing strategy accordingly. Thus, a method for characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, a method for estimating RT without prior knowledge of sound sources or room geometry is presented. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time-constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
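The decay-tail model (exponentially damped Gaussian white noise) and a simple estimator of its time constant can be sketched as follows. A regression on the log of the short-time energy stands in for the abstract's maximum-likelihood estimator and order-statistics filter; the sampling rate, frame size, and true RT are made up.

```python
import numpy as np

rng = np.random.default_rng(9)
fs = 8_000
rt_true = 0.5                                    # reverberation time, s

# Free-decay tail model: exponentially damped Gaussian white noise. The RT
# is the time for the sound energy to decay by 60 dB.
tau = rt_true / (3 * np.log(10))                 # amplitude time constant
t = np.arange(int(fs * rt_true)) / fs
tail = np.exp(-t / tau) * rng.standard_normal(len(t))

# Estimate the decay rate from the log of the short-time energy.
frame = 200
n_frames = len(tail) // frame
energy = (tail[: n_frames * frame].reshape(n_frames, frame) ** 2).mean(axis=1)
tc = (np.arange(n_frames) + 0.5) * frame / fs    # frame-center times
slope, intercept = np.polyfit(tc, 10 * np.log10(energy), 1)  # slope in dB/s
rt_est = -60.0 / slope                           # time for a 60 dB decay
print(round(rt_est, 3))
```

The blind estimator in the abstract runs this kind of decay fit continuously on running speech and keeps, via an order-statistics filter, the fastest plausible decays, which correspond to the free-decay gaps between syllables.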
Online estimation of room reverberation time
NASA Astrophysics Data System (ADS)
Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; Feng, Albert S.
2003-04-01
The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. State-of-the-art signal processing algorithms for hearing aids are expected to have the ability to evaluate the characteristics of the listening environment and turn on an appropriate processing strategy accordingly. Thus, a method for the characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method or regression, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, we describe a method for estimating RT without prior knowledge of sound sources or room geometry. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
Airborne gamma-ray spectra processing: Extracting photopeaks.
Druker, Eugene
2018-07-01
The acquisition of information from airborne gamma-ray spectra depends on the ability to evaluate photopeak areas in regular spectra from natural and other sources. In airborne gamma-ray spectrometry, extraction of radionuclide photopeaks from regular one-second spectra is a complex problem. In the higher-energy region, difficulties are associated with low signal levels, i.e. low count rates, whereas at lower energies difficulties are associated with high noise due to the high signal level. In this article, a new procedure is proposed for processing the measured spectra up to and including the extraction of evident photopeaks. The procedure consists of reducing the noise in the energy channels along the flight lines, transforming the spectra into spectra of equal resolution, removing the background from each spectrum, sharpening the details, and transforming the spectra back to the original energy scale. The resulting spectra are better suited for examining and using the photopeaks. No assumptions are required regarding the number, locations, or magnitudes of photopeaks, and the procedure does not generate negative photopeaks. The spectrometer's resolution is used for this purpose. The proposed methodology should also contribute to the study of environmental problems, soil characterization, and other near-surface geophysical methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
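One step of the pipeline, removing the background from a spectrum without generating negative photopeaks, can be illustrated on a toy spectrum. The rolling-minimum background model, window sizes, and the single Gaussian photopeak below are hypothetical stand-ins; the paper's actual background-estimation method is not reproduced here.

```python
import numpy as np

def subtract_background(spectrum, half_width=15):
    """Estimate a smooth lower envelope (rolling minimum, then rolling mean)
    as the background, subtract it, and clip at zero so the result cannot
    contain negative photopeaks."""
    n = spectrum.size
    low = np.array([spectrum[max(0, i - half_width):i + half_width + 1].min()
                    for i in range(n)])
    kernel = np.ones(2 * half_width + 1) / (2 * half_width + 1)
    padded = np.pad(low, half_width, mode="edge")
    background = np.convolve(padded, kernel, mode="valid")
    return np.clip(spectrum - background, 0.0, None)

ch = np.arange(256)
continuum = 50.0 * np.exp(-ch / 80.0)                      # smooth background
photopeak = 30.0 * np.exp(-0.5 * ((ch - 120) / 4.0) ** 2)  # peak at channel 120
net = subtract_background(continuum + photopeak)
```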
New Perspectives on Assessing Amplification Effects
Souza, Pamela E.; Tremblay, Kelly L.
2006-01-01
Clinicians have long been aware of the range of performance variability with hearing aids. Despite improvements in technology, there remain many instances of well-selected and appropriately fitted hearing aids whereby the user reports minimal improvement in speech understanding. This review presents a multistage framework for understanding how a hearing aid affects performance. Six stages are considered: (1) acoustic content of the signal, (2) modification of the signal by the hearing aid, (3) interaction between sound at the output of the hearing aid and the listener's ear, (4) integrity of the auditory system, (5) coding of available acoustic cues by the listener's auditory system, and (6) correct identification of the speech sound. Within this framework, this review describes methodology and research on 2 new assessment techniques: acoustic analysis of speech measured at the output of the hearing aid and auditory evoked potentials recorded while the listener wears hearing aids. Acoustic analysis topics include the relationship between conventional probe microphone tests and probe microphone measurements using speech, appropriate procedures for such tests, and assessment of signal-processing effects on speech acoustics and recognition. Auditory evoked potential topics include an overview of physiologic measures of speech processing and the effect of hearing loss and hearing aids on cortical auditory evoked potential measurements in response to speech. Finally, the clinical utility of these procedures is discussed. PMID:16959734
Power, Jonathan D; Plitt, Mark; Kundu, Prantik; Bandettini, Peter A; Martin, Alex
2017-01-01
Head motion can be estimated at any point of fMRI image processing. Processing steps involving temporal interpolation (e.g., slice time correction or outlier replacement) often precede motion estimation in the literature. From first principles it can be anticipated that temporal interpolation will alter head motion in a scan. Here we demonstrate this effect and its consequences in five large fMRI datasets. Estimated head motion was reduced by 10-50% or more following temporal interpolation, and reductions were often visible to the naked eye. Such reductions make the data seem to be of improved quality. Such reductions also degrade the sensitivity of analyses aimed at detecting motion-related artifact and can cause a dataset with artifact to falsely appear artifact-free. These reduced motion estimates will be particularly problematic for studies needing estimates of motion in time, such as studies of dynamics. Based on these findings, it is sensible to obtain motion estimates prior to any image processing (regardless of subsequent processing steps and the actual timing of motion correction procedures, which need not be changed). We also find that outlier replacement procedures change signals almost entirely during times of motion and therefore have notable similarities to motion-targeting censoring strategies (which withhold or replace signals entirely during times of motion).
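The central effect, temporal interpolation shrinking motion estimates, can be reproduced on a synthetic one-axis motion trace. Everything below is an assumed toy setup: a short symmetric kernel stands in for interpolation-based processing (e.g., slice-time correction), and mean absolute frame-to-frame displacement stands in for the usual framewise-displacement measure.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical one-axis head-position trace (mm): slow drift plus brief spikes.
n = 300
drift = 0.02 * np.cumsum(rng.standard_normal(n))
spikes = np.zeros(n)
spikes[[50, 51, 140, 141, 230]] = [0.8, -0.6, 1.0, -0.9, 0.7]
position = drift + spikes

# Stand-in for temporal interpolation applied along the time axis.
kernel = np.array([0.25, 0.5, 0.25])
interpolated = np.convolve(np.pad(position, 1, mode="edge"), kernel, mode="valid")

fd_raw = np.abs(np.diff(position)).mean()        # framewise-displacement proxy
fd_interp = np.abs(np.diff(interpolated)).mean()
reduction = 1.0 - fd_interp / fd_raw             # fraction of apparent motion lost
```

The motion spikes survive in attenuated form, so motion estimated after interpolation understates the true displacement, which is the effect the abstract warns about.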
Plitt, Mark; Kundu, Prantik; Bandettini, Peter A.; Martin, Alex
2017-01-01
Head motion can be estimated at any point of fMRI image processing. Processing steps involving temporal interpolation (e.g., slice time correction or outlier replacement) often precede motion estimation in the literature. From first principles it can be anticipated that temporal interpolation will alter head motion in a scan. Here we demonstrate this effect and its consequences in five large fMRI datasets. Estimated head motion was reduced by 10–50% or more following temporal interpolation, and reductions were often visible to the naked eye. Such reductions make the data seem to be of improved quality. Such reductions also degrade the sensitivity of analyses aimed at detecting motion-related artifact and can cause a dataset with artifact to falsely appear artifact-free. These reduced motion estimates will be particularly problematic for studies needing estimates of motion in time, such as studies of dynamics. Based on these findings, it is sensible to obtain motion estimates prior to any image processing (regardless of subsequent processing steps and the actual timing of motion correction procedures, which need not be changed). We also find that outlier replacement procedures change signals almost entirely during times of motion and therefore have notable similarities to motion-targeting censoring strategies (which withhold or replace signals entirely during times of motion). PMID:28880888
Thanaraj, Palani; Roshini, Mable; Balasubramanian, Parvathavarthini
2016-11-14
The fetal electrocardiogram (FECG) signals are essential for monitoring the health condition of the baby. Fetal heart rate (FHR) is commonly used for diagnosing certain abnormalities in the formation of the heart. Usually, non-invasive abdominal electrocardiogram (AbECG) signals are obtained by placing surface electrodes on the abdomen of the pregnant woman. AbECG signals are often not suitable for direct analysis of fetal heart activity. Moreover, the strength and magnitude of the FECG signals are low compared to the maternal electrocardiogram (MECG) signals, and the MECG signals are often superimposed on the FECG signals, making the monitoring of FECG signals a difficult task. The primary goal of the paper is to separate the FECG signals from the unwanted MECG signals. A multivariate signal processing procedure is proposed that combines Multivariate Empirical Mode Decomposition (MEMD) and Independent Component Analysis (ICA). The proposed method is evaluated on clinical abdominal signals taken from three pregnant women (N = 3) recorded during weeks 38-41 of gestation. The number of fetal R-waves detected (NEFQRS), the number of unwanted maternal peaks (NMQRS), the number of undetected fetal R-waves (NUFQRS), and the FHR detection accuracy quantify the performance of the method. Clinical investigation with three test subjects shows an overall detection accuracy of 92.8%. Comparative analysis with a benchmark signal processing method (ICA alone) suggests the noteworthy performance of the proposed method.
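The source-separation stage can be sketched on a synthetic two-channel mixture. A minimal symmetric FastICA with a tanh nonlinearity stands in for the ICA step; the pulse-train "ECG" sources, mixing matrix, and iteration count are illustrative assumptions, and the MEMD stage of the proposed pipeline is omitted.

```python
import numpy as np

def fastica_2ch(x, iters=200, seed=0):
    """Symmetric FastICA (tanh nonlinearity) for a 2-channel mixture."""
    x = x - x.mean(axis=1, keepdims=True)
    # Whiten via eigendecomposition of the channel covariance.
    d, e = np.linalg.eigh(np.cov(x))
    z = (e @ np.diag(d ** -0.5) @ e.T) @ x
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((2, 2))
    for _ in range(iters):
        g = np.tanh(w @ z)
        gp = 1.0 - g ** 2
        w = (g @ z.T) / z.shape[1] - np.diag(gp.mean(axis=1)) @ w
        u, _, vt = np.linalg.svd(w)    # symmetric orthogonalization
        w = u @ vt
    return w @ z

def pulse_train(n, period, width, amp):
    """Periodic Gaussian-shaped spikes, a crude stand-in for QRS complexes."""
    phase = np.arange(n) % period
    return amp * np.exp(-0.5 * ((phase - width) / width) ** 2)

n = 4000
maternal = pulse_train(n, 700, 15, 1.0)   # slower, stronger "maternal" beats
fetal = pulse_train(n, 400, 8, 0.3)       # faster, weaker "fetal" beats
mix = np.array([[1.0, 0.6], [0.8, 1.0]]) @ np.vstack([maternal, fetal])
est = fastica_2ch(mix)
```

Up to permutation and sign, the rows of the output should match the maternal and fetal sources.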
Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging
Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao
2016-01-01
Forward scatter radar (FSR), a specially configured bistatic radar, gains target recognition and classification capabilities through Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of the radio holographic signal (RHS), an important procedure in the signal processing of FSR SISAR imaging. Based on an analysis of the signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of the mainlobe RHS. In addition, a quantitative analysis of the method's applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method's advantages in improving the accuracy of RHS reconstruction and imaging. PMID:27164114
Separation of Intercepted Multi-Radar Signals Based on Parameterized Time-Frequency Analysis
NASA Astrophysics Data System (ADS)
Lu, W. L.; Xie, J. W.; Wang, H. M.; Sheng, C.
2016-09-01
Modern radars use complex waveforms to obtain high detection performance and low probabilities of interception and identification. Signals intercepted from multiple radars overlap considerably in both the time and frequency domains and are difficult to separate with primary time parameters. Time-frequency analysis (TFA), as a key signal-processing tool, can provide better insight into the signal than conventional methods. In particular, among the various types of TFA, parameterized time-frequency analysis (PTFA) has shown great potential to investigate the time-frequency features of such non-stationary signals. In this paper, we propose a procedure for PTFA to separate overlapped radar signals; it includes five steps: initiation, parameterized time-frequency analysis, demodulating the signal of interest, adaptive filtering and recovering the signal. The effectiveness of the method was verified with simulated data and an intercepted radar signal received in a microwave laboratory. The results show that the proposed method has good performance and has potential in electronic reconnaissance applications, such as electronic intelligence, electronic warfare support measures, and radar warning.
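The five-step procedure can be sketched for two overlapped linear-FM pulses. In this toy version the phase model of the signal of interest is taken as known (in the proposed procedure it would come from the PTFA step), and the adaptive-filtering step is reduced to a fixed FFT-domain low-pass mask; all waveform parameters are illustrative.

```python
import numpy as np

fs, n = 1e6, 4096
t = np.arange(n) / fs
# Two overlapped linear-FM pulses with different chirp rates (Hz/s).
k1, k2 = 6e7, 2e7
s1 = np.exp(1j * np.pi * k1 * t ** 2)
s2 = 0.8 * np.exp(1j * (2 * np.pi * -5e4 * t + np.pi * k2 * t ** 2))
x = s1 + s2

# Demodulate with the phase model of s1: s1 collapses to DC while s2
# remains a chirp spread across frequency.
ref = np.exp(-1j * np.pi * k1 * t ** 2)
base = x * ref

# "Adaptive filtering" reduced to a low-pass mask in the FFT domain,
# then recover the signal of interest by re-applying the phase model.
spec = np.fft.fft(base)
freqs = np.fft.fftfreq(n, 1.0 / fs)
spec[np.abs(freqs) > 2e4] = 0.0
s1_hat = np.fft.ifft(spec) * np.conj(ref)

err = np.linalg.norm(s1_hat - s1) / np.linalg.norm(s1)
```

The residual error of the recovered pulse is far below the level of the interfering pulse left in the raw mixture, which is the separation the procedure aims at.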
Hosten, Bernard; Moreau, Ludovic; Castaings, Michel
2007-06-01
The paper presents a Fourier transform-based signal processing procedure for quantifying the reflection and transmission coefficients and mode conversion of guided waves diffracted by defects in plates made of viscoelastic materials. The case of the S(0) Lamb wave mode incident on a notch in a Perspex plate is considered. The procedure is applied to numerical data produced by a finite element code that simulates the propagation of attenuated guided modes and their diffraction by the notch, including mode conversion. Its validity and precision are checked by the way of the energy balance computation and by comparison with results obtained using an orthogonality relation-based processing method.
Adler, D; Mahler, Y
1980-04-01
A procedure for automatic detection and digital processing of the maximum first derivative of the intraventricular pressure (dp/dtmax), the time to dp/dtmax (t-dp/dt), and beat-to-beat intervals has been developed. The procedure integrates simple electronic circuits with a short program using a simple algorithm for the detection of the points of interest. The tasks of differentiating the pressure signal and detecting the onset of contraction were done by electronics, while the tasks of finding the values of dp/dtmax, t-dp/dt, and beat-to-beat intervals, and all computations needed, were done by software. Software/hardware trade-off considerations and the accuracy and reliability of the system are discussed.
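The software side of such a procedure, locating dp/dtmax, t-dp/dt, and beat-to-beat intervals once a differentiated pressure signal is available, can be sketched as below. The synthetic pressure waveform, threshold, and refractory period are assumed values, and the differentiation is done in software here rather than by the electronics described in the abstract.

```python
import numpy as np

fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0, 5.0, 1.0 / fs)
# Synthetic ventricular pressure (mmHg): 75 bpm, half-sine systolic pulses.
period = 0.8
phase = t % period
pressure = 10.0 + 110.0 * np.clip(np.sin(np.pi * phase / 0.35), 0.0, None) * (phase < 0.35)

dpdt = np.gradient(pressure, 1.0 / fs)

# Detect beat onsets where dp/dt crosses a threshold, with a refractory
# period, then locate dp/dtmax and t-dp/dt within each beat.
threshold, refractory = 200.0, int(0.3 * fs)
onsets = []
i = 1
while i < len(dpdt):
    if dpdt[i] >= threshold and dpdt[i - 1] < threshold:
        onsets.append(i)
        i += refractory
    else:
        i += 1

beats = []
for a, b in zip(onsets, onsets[1:] + [len(dpdt)]):
    seg = dpdt[a:b]
    k = int(np.argmax(seg))
    beats.append({"dpdt_max": seg[k], "t_to_max_s": k / fs,
                  "interval_s": (b - a) / fs})
```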
Automated eddy current analysis of materials
NASA Technical Reports Server (NTRS)
Workman, Gary L.
1990-01-01
This research effort focused on the use of eddy current techniques for characterizing flaws in graphite-based filament-wound cylindrical structures. A major emphasis was on incorporating artificial intelligence techniques into the signal analysis portion of the inspection process. Developing an eddy current scanning system using a commercial robot for inspecting graphite structures (and others) has been a goal in the overall concept and is essential for the final implementation for expert system interpretation. Manual scans, as performed in the preliminary work here, do not provide sufficiently reproducible eddy current signatures to be easily built into a real time expert system. The expert systems approach to eddy current signal analysis requires that a suitable knowledge base exist in which correct decisions as to the nature of the flaw can be performed. In eddy current or any other expert systems used to analyze signals in real time in a production environment, it is important to simplify computational procedures as much as possible. For that reason, we have chosen to use the measured resistance and reactance values for the preliminary aspects of this work. A simple computation, such as phase angle of the signal, is certainly within the real time processing capability of the computer system. In the work described here, there is a balance between physical measurements and finite element calculations of those measurements. The goal is to evolve into the most cost effective procedures for maintaining the correctness of the knowledge base.
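The phase-angle computation mentioned above as a real-time-friendly feature can be written directly from the measured resistance and reactance; the numeric values below are arbitrary.

```python
import math

def impedance_features(resistance, reactance):
    """Phase angle and magnitude of an eddy current signal from its measured
    resistance (R) and reactance (X) components - the kind of cheap feature
    suitable for real-time expert-system classification."""
    magnitude = math.hypot(resistance, reactance)
    phase_deg = math.degrees(math.atan2(reactance, resistance))
    return magnitude, phase_deg

mag, phase = impedance_features(3.0, 4.0)   # -> (5.0, 53.13...)
```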
Urban underground infrastructure mapping and assessment
NASA Astrophysics Data System (ADS)
Huston, Dryver; Xia, Tian; Zhang, Yu; Fan, Taian; Orfeo, Dan; Razinger, Jonathan
2017-04-01
This paper outlines and discusses a few associated details of a smart cities approach to the mapping and condition assessment of urban underground infrastructure. Underground utilities are critical infrastructure for all modern cities. They carry drinking water, storm water, sewage, natural gas, electric power, telecommunications, steam, etc. In most cities, the underground infrastructure reflects the growth and history of the city. Many components are aging, in unknown locations with congested configurations, and in unknown condition. The technique uses sensing and information technology to determine the state of infrastructure and provide it in an appropriate, timely and secure format for managers, planners and users. The sensors include ground penetrating radar and buried sensors for persistent sensing of localized conditions. Signal processing and pattern recognition techniques convert the data into information-laden databases for use in analytics, graphical presentations, metering and planning. The presented data are from construction of the St. Paul St. CCTA Bus Station Project in Burlington, VT; utility replacement sites in Winooski, VT; and laboratory tests of smart phone position registration and magnetic signaling. The soil conditions encountered are favorable for GPR sensing and make it possible to locate buried pipes and soil layers. The present state of the art is that the data collection and processing procedures are manual and somewhat tedious, but solutions for automating these procedures appear to be viable. Magnetic signaling with moving permanent magnets has the potential for sending low-frequency telemetry signals through soils that are largely impenetrable by other electromagnetic waves.
Signaling Architectures that Transmit Unidirectional Information Despite Retroactivity.
Shah, Rushina; Del Vecchio, Domitilla
2017-08-08
A signaling pathway transmits information from an upstream system to downstream systems, ideally in a unidirectional fashion. A key obstacle to unidirectional transmission is retroactivity, the additional reaction flux that affects a system once its species interact with those of downstream systems. This raises the fundamental question of whether signaling pathways have developed specialized architectures that overcome retroactivity and transmit unidirectional signals. Here, we propose a general procedure based on mathematical analysis that provides an answer to this question. Using this procedure, we analyze the ability of a variety of signaling architectures to transmit one-way (from upstream to downstream) signals, as key biological parameters are tuned. We find that single stage phosphorylation and phosphotransfer systems that transmit signals from a kinase show a stringent design tradeoff that hampers their ability to overcome retroactivity. Interestingly, cascades of these architectures, which are highly represented in nature, can overcome this tradeoff and thus enable unidirectional transmission. By contrast, phosphotransfer systems, and single and double phosphorylation cycles that transmit signals from a substrate, are unable to mitigate retroactivity effects, even when cascaded, and hence are not well suited for unidirectional information transmission. These results are largely independent of the specific reaction-rate constant values, and depend on the topology of the architectures. Our results therefore identify signaling architectures that, allowing unidirectional transmission of signals, embody modular processes that conserve their input/output behavior across multiple contexts. These findings can be used to decompose natural signal transduction networks into modules, and at the same time, they establish a library of devices that can be used in synthetic biology to facilitate modular circuit design. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Characterization of a signal recording system for accurate velocity estimation using a VISAR
NASA Astrophysics Data System (ADS)
Rav, Amit; Joshi, K. D.; Singh, Kulbhushan; Kaushik, T. C.
2018-02-01
The linearity of a signal recording system (SRS) in time as well as in amplitude is important for the accurate estimation of the free-surface velocity history of a moving target during shock loading and unloading when measured using optical interferometers such as a velocity interferometer system for any reflector (VISAR). Signal recording being the first step in a long sequence of signal processes, the incorporation of errors due to nonlinearity and low signal-to-noise ratio (SNR) affects the overall accuracy and precision of the estimation of velocity history. In shock experiments, the short duration (a few µs) of loading/unloading, the reflectivity of the moving target surface, and the properties of the optical components control the amount of light input to the SRS of a VISAR, and this in turn affects the linearity and SNR of the overall measurement. These factors make it essential to develop in situ procedures for (i) minimizing the effect of signal-induced noise and (ii) determining the linear region of operation of the SRS. Here we report a procedure for the optimization of SRS parameters such as photodetector gain, optical power, and aperture, so as to achieve a linear region of operation with a high SNR. The linear region of operation so determined has been utilized successfully to estimate the temporal history of the free-surface velocity of the moving target in shock experiments.
Evaluation of signal timing and coordination procedures.
DOT National Transportation Integrated Search
1985-01-01
Based on a review of available literature, recommended procedures for timing the various types of signals are provided. Specifically, procedures are included for both pretimed and vehicle-actuated controllers located at isolated intersections and at ...
NASA Astrophysics Data System (ADS)
Bakker, O. J.; Gibson, C.; Wilson, P.; Lohse, N.; Popov, A. A.
2015-10-01
Due to its inherent advantages, linear friction welding is a solid-state joining process of increasing importance to the aerospace, automotive, medical and power generation equipment industries. Tangential oscillations and forge stroke during the burn-off phase of the joining process introduce essential dynamic forces, which can also be detrimental to the welding process. Since burn-off is a critical phase in the manufacturing stage, process monitoring is fundamental for quality and stability control purposes. This study aims to improve workholding stability through the analysis of fixture cassette deformations. Methods and procedures for process monitoring are developed and implemented in a fail-or-pass assessment system for fixture cassette deformations during the burn-off phase. Additionally, the de-noised signals are compared to results from previous production runs. The observed deformations as a consequence of the forces acting on the fixture cassette are measured directly during the welding process. Data on the linear friction-welding machine are acquired and de-noised using empirical mode decomposition, before the burn-off phase is extracted. This approach enables a direct, objective comparison of the signal features with trends from previous successful welds. The capacity of the whole process monitoring system is validated and demonstrated through the analysis of a large number of signals obtained from welding experiments.
Wang, Shau-Chun; Lin, Chiao-Juan; Chiang, Shu-Min; Yu, Sung-Nien
2008-03-15
This paper reports a simple chemometric technique that alters the noise spectrum of a liquid chromatography-mass spectrometry (LC-MS) chromatogram between two consecutive second-derivative filter procedures to improve peak signal-to-noise (S/N) ratio enhancement. The technique multiplies one second-derivative-filtered LC-MS chromatogram with an artificial chromatogram containing added thermal noise prior to the second second-derivative filter. Because the second-derivative filter cannot eliminate frequency components within its own filter bandwidth, consecutive second-derivative filter procedures alone cannot accomplish more efficient peak S/N ratio improvement on LC-MS chromatograms. In contrast, when the second-derivative-filtered LC-MS chromatogram is conditioned with this multiplication prior to the second filter, much better ratio improvement is achieved. The noise frequency spectrum of the filtered chromatogram, which originally contains frequency components within the filter bandwidth, is altered by the multiplication to span a broader range. When the frequency range of this modified noise spectrum shifts toward other regimes, the second second-derivative filter, working as a band-pass filter, provides better filtering efficiency and hence higher peak S/N ratios. For real LC-MS chromatograms, where two consecutive second-derivative filters yield no better than the roughly 5-fold peak S/N improvement of a single second-derivative filter, the proposed modification achieves approximately 25-fold or greater enhancement when the noise frequency spectrum is modified between the two matched filters. The linear standard curve using the filtered LC-MS signals is validated, and the filtered signals are also more reproducible. More accurate determinations of very low-concentration samples (S/N ratio about 5-7) are obtained via standard addition procedures using the filtered signals rather than the original signals.
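The role of a second-derivative filter as a peak-enhancing band-pass stage can be sketched as below. A smoothed second-derivative (derivative-of-Gaussian) kernel stands in for the paper's filter, and the multiplication-based noise-spectrum alteration between two filter passes is not reproduced; the peak shape, noise level, and S/N definition are illustrative.

```python
import numpy as np

def second_derivative_filter(y, width):
    """Smoothed second-derivative filtering: convolve with the sign-flipped
    second derivative of a Gaussian so a Gaussian-like peak maps to a
    positive peak while broadband noise is attenuated."""
    m = np.arange(-4 * width, 4 * width + 1, dtype=float)
    g = np.exp(-0.5 * (m / width) ** 2)
    kernel = -np.gradient(np.gradient(g))   # -d2/dm2 of the Gaussian
    kernel -= kernel.mean()                 # zero net response to a flat baseline
    return np.convolve(y, kernel, mode="same")

rng = np.random.default_rng(2)
x = np.arange(1000, dtype=float)
peak = np.exp(-0.5 * ((x - 500.0) / 20.0) ** 2)   # chromatographic peak
noisy = peak + 0.2 * rng.standard_normal(x.size)

filtered = second_derivative_filter(noisy, width=20)

def peak_snr(y):
    # Peak height over baseline noise, with the baseline taken far from the peak.
    return y[450:550].max() / y[:300].std()

snr_gain = peak_snr(filtered) / peak_snr(noisy)
```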
2014-10-01
The method addresses nonlinear and non-stationary signals. It aims at decomposing a signal, via an iterative sifting procedure, into several intrinsic mode functions (IMFs). Keywords: intrinsic mode function, optimization. It is well known that nonlinear and non-stationary signal analysis is important and difficult.
Chen, Szi-Wen; Chen, Yuan-Ho
2015-01-01
In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction for medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT module, a thresholding module, and an inverse DWT (IDWT) module. We also propose a novel adaptive thresholding scheme and incorporate it into the wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its noise-reduction ability could be further validated in practice. Simulation results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only meet the requirement of real-time processing, but also achieve satisfactory noise reduction while well preserving the sharp features of the ECG signals. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW when operated at that frequency. PMID:26501290
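The DWT / threshold / IDWT structure can be sketched in software with Haar filters and soft thresholding. The median-based noise estimate and universal threshold below are common textbook choices standing in for the paper's adaptive thresholding scheme, which is not specified here; the test signal is likewise illustrative.

```python
import numpy as np

def haar_dwt(x):
    s = 1 / np.sqrt(2.0)
    return s * (x[0::2] + x[1::2]), s * (x[0::2] - x[1::2])

def haar_idwt(approx, detail):
    s = 1 / np.sqrt(2.0)
    out = np.empty(2 * approx.size)
    out[0::2] = s * (approx + detail)
    out[1::2] = s * (approx - detail)
    return out

def wavelet_denoise(x, levels=4):
    """Haar-DWT de-noising with soft thresholding; the threshold adapts to the
    noise level estimated from the finest detail coefficients."""
    coeffs, a = [], x
    for _ in range(levels):
        a, d = haar_dwt(a)
        coeffs.append(d)
    sigma = np.median(np.abs(coeffs[0])) / 0.6745   # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(x.size))     # universal threshold
    for d in coeffs:
        np.copyto(d, np.sign(d) * np.maximum(np.abs(d) - thr, 0.0))
    for d in reversed(coeffs):
        a = haar_idwt(a, d)
    return a

rng = np.random.default_rng(3)
n = 1024
t = np.arange(n) / n
clean = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
noisy = clean + 0.3 * rng.standard_normal(n)
denoised = wavelet_denoise(noisy)
mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```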
Chen, Szi-Wen; Chen, Yuan-Ho
2015-10-16
In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction for medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT module, a thresholding module, and an inverse DWT (IDWT) module. We also propose a novel adaptive thresholding scheme and incorporate it into the wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its noise-reduction ability could be further validated in practice. Simulation results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only meet the requirement of real-time processing, but also achieve satisfactory noise reduction while well preserving the sharp features of the ECG signals. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW when operated at that frequency.
Digital holographic 3D imaging spectrometry (a review)
NASA Astrophysics Data System (ADS)
Yoshimori, Kyu
2017-09-01
This paper reviews recent progress in digital holographic 3D imaging spectrometry. The principle of the method is a marriage of incoherent holography and Fourier transform spectroscopy. The review includes the principle, the signal-processing procedure, and experimental results for obtaining a multispectral set of 3D images of spatially incoherent, polychromatic objects.
Ultrasonic inspection of carbon fiber reinforced plastic by means of sample-recognition methods
NASA Technical Reports Server (NTRS)
Bilgram, R.
1985-01-01
In the case of carbon fiber reinforced plastic (CFRP), it has not yet been possible to detect nonlocal defects and aging-related material degradation with nondestructive inspection methods. An approach for overcoming these difficulties involves an extension of the ultrasonic inspection procedure based on signal processing and sample-recognition methods. The basic concept is the realization that the ultrasonic signal contains information about the medium that is not utilized in conventional ultrasonic inspection. However, the analytical study of the physical processes involved is very complex. For this reason, an empirical approach is employed to make use of this previously unutilized information. The approach uses reference signals obtained from material specimens of different quality. The implementation of these concepts for the ultrasonic inspection of CFRP laminates is discussed.
A new methodology for vibration error compensation of optical encoders.
Lopez, Jesus; Artes, Mariano
2012-01-01
Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in position accuracy as the measurement signals depart from ideal conditions. If the encoder is working under vibration, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that adds to graduation, system, and installation errors. Behavior can be improved with different techniques that compensate the error through processing of the measurement signals. In this work, a new "ad hoc" methodology is presented to compensate the error of the encoder when it is working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, resulting in a compensation procedure that yields higher sensor accuracy.
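The compensation idea, estimating the distortion of the Lissajous figure and mapping the signals back toward the ideal circle, can be sketched for the simplest distortion model (per-channel gain and offset errors only). The ellipse-fitting and look-up-table details of the actual methodology are not reproduced.

```python
import numpy as np

def correct_quadrature(sa, sb):
    """Estimate per-channel offset and gain from the distorted Lissajous
    figure and rescale it toward the ideal unit circle (a simplified stand-in
    for ellipse fitting plus look-up-table compensation)."""
    oa, ob = (sa.max() + sa.min()) / 2, (sb.max() + sb.min()) / 2
    ga, gb = (sa.max() - sa.min()) / 2, (sb.max() - sb.min()) / 2
    return (sa - oa) / ga, (sb - ob) / gb

theta = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
# Distorted encoder signals: unequal amplitudes and DC offsets.
sa = 1.10 * np.sin(theta) + 0.05
sb = 0.90 * np.cos(theta) - 0.08

raw_angle = np.unwrap(np.arctan2(sa, sb))
corr_a, corr_b = correct_quadrature(sa, sb)
corr_angle = np.unwrap(np.arctan2(corr_a, corr_b))

err_raw = np.abs(raw_angle - theta).max()    # position error before correction
err_corr = np.abs(corr_angle - theta).max()  # position error after correction
```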
Vibration Sensor Monitoring of Nickel-Titanium Alloy Turning for Machinability Evaluation.
Segreto, Tiziana; Caggiano, Alessandra; Karam, Sara; Teti, Roberto
2017-12-12
Nickel-Titanium (Ni-Ti) alloys are very difficult-to-machine materials causing notable manufacturing problems due to their unique mechanical properties, including superelasticity, high ductility, and severe strain-hardening. In this framework, the aim of this paper is to assess the machinability of Ni-Ti alloys with reference to turning processes in order to realize a reliable and robust in-process identification of machinability conditions. An on-line sensor monitoring procedure based on the acquisition of vibration signals was implemented during the experimental turning tests. The detected vibration sensorial data were processed through an advanced signal processing method in time-frequency domain based on wavelet packet transform (WPT). The extracted sensorial features were used to construct WPT pattern feature vectors to send as input to suitably configured neural networks (NNs) for cognitive pattern recognition in order to evaluate the correlation between input sensorial information and output machinability conditions.
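The wavelet-packet feature extraction can be sketched with Haar filters. The sketch below computes normalized node energies at a fixed depth for a "stable" and a "chattery" synthetic vibration signal; the wavelet family, depth, and signals are assumptions, and the neural-network stage is omitted.

```python
import numpy as np

def wpt_energy_features(x, depth=3):
    """Energy of each wavelet-packet node at a given depth, using Haar
    filters - a minimal sketch of WPT pattern feature vectors."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(depth):
        nxt = []
        for node in nodes:
            s = 1 / np.sqrt(2.0)
            nxt.append(s * (node[0::2] + node[1::2]))   # low-pass branch
            nxt.append(s * (node[0::2] - node[1::2]))   # high-pass branch
        nodes = nxt
    energies = np.array([np.sum(n ** 2) for n in nodes])
    return energies / energies.sum()            # normalized feature vector

rng = np.random.default_rng(4)
t = np.arange(1024) / 1024
smooth_cut = np.sin(2 * np.pi * 5 * t)                          # stable cutting
chattery_cut = np.sin(2 * np.pi * 5 * t) + 0.8 * np.sin(2 * np.pi * 400 * t)

f_smooth = wpt_energy_features(smooth_cut)
f_chatter = wpt_energy_features(chattery_cut)
```

The high-frequency content of the chattery signal shifts energy out of the lowest-band node, so the two feature vectors are separable, which is what a classifier downstream would exploit.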
A Universal De-Noising Algorithm for Ground-Based LIDAR Signal
NASA Astrophysics Data System (ADS)
Ma, Xin; Xiang, Chengzhi; Gong, Wei
2016-06-01
Ground-based lidar, an effective remote sensing tool, plays an irreplaceable role in the study of the atmosphere, since it can provide vertical atmospheric profiles. However, noise in a lidar signal is unavoidable, which complicates the search for further information. Every de-noising method has its own characteristics but also certain limitations, since the lidar signal varies as the atmosphere changes. In this paper, a universal de-noising algorithm based on signal segmentation and reconstruction is proposed to enhance the SNR of a ground-based lidar signal. The signal segmentation, serving as the keystone of the algorithm, divides the lidar signal into three parts, which are processed by different de-noising methods according to their own characteristics. The signal reconstruction is a relatively simple procedure that splices the signal sections end to end. Finally, a series of tests on simulated signals and real dual field-of-view lidar signals shows the feasibility of the universal de-noising algorithm.
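The segment-then-splice structure of such an algorithm can be sketched as follows. The segment boundaries, smoother choices, and window widths below are invented for illustration (the paper's per-segment methods are not specified here); the point is that each SNR regime gets a smoother matched to it before the sections are spliced back together.

```python
import numpy as np

def moving_avg(x, w):
    """Simple moving-average smoother ('same' mode keeps the length)."""
    return np.convolve(x, np.ones(w) / w, mode='same')

def denoise_segmented(profile, b1, b2):
    """Split a range-resolved lidar profile at indices b1 < b2 into
    near/middle/far segments, smooth each with increasing strength
    (the far, low-SNR range gets the heaviest smoothing), and splice
    the sections end to end."""
    near = profile[:b1].copy()            # high SNR: left untouched
    mid = moving_avg(profile[b1:b2], 5)   # moderate smoothing
    far = moving_avg(profile[b2:], 21)    # heavy smoothing
    return np.concatenate([near, mid, far])

rng = np.random.default_rng(2)
r = np.arange(1, 1001, dtype=float)
clean = 1e6 / r ** 2                      # idealized range-squared falloff
noisy = clean + rng.normal(0, 0.5, r.size)
denoised = denoise_segmented(noisy, 200, 600)
```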
Kepler AutoRegressive Planet Search: Motivation & Methodology
NASA Astrophysics Data System (ADS)
Caceres, Gabriel; Feigelson, Eric; Jogesh Babu, G.; Bahamonde, Natalia; Bertin, Karine; Christen, Alejandra; Curé, Michel; Meza, Cristian
2015-08-01
The Kepler AutoRegressive Planet Search (KARPS) project uses statistical methodology associated with autoregressive (AR) processes to model Kepler lightcurves in order to improve exoplanet transit detection in systems with high stellar variability. We also introduce a planet-search algorithm to detect transits in time-series residuals after application of the AR models. One of the main obstacles in detecting faint planetary transits is the intrinsic stellar variability of the host star. The variability displayed by many stars may have autoregressive properties, wherein later flux values are correlated with previous ones in some manner. Auto-Regressive Moving-Average (ARMA) models, Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH) models, and related models are flexible, phenomenological methods used with great success to model stochastic temporal behaviors in many fields of study, particularly econometrics. Powerful statistical methods are implemented in the public statistical software environment R and its many packages. Modeling involves maximum likelihood fitting, model selection, and residual analysis. These techniques provide a useful framework to model stellar variability and are used in KARPS with the objective of reducing stellar noise to enhance opportunities to find as-yet-undiscovered planets. Our analysis procedure consists of three steps: pre-processing of the data to remove discontinuities, gaps, and outliers; ARMA-type model selection and fitting; and a transit signal search of the residuals using a new Transit Comb Filter (TCF) that replaces traditional box-finding algorithms. We apply the procedures to simulated Kepler-like time series with known stellar and planetary signals to evaluate the effectiveness of the KARPS procedures. The ARMA-type modeling is effective at reducing stellar noise, but it also reduces and transforms the transit signal into ingress/egress spikes.
A periodogram based on the TCF is constructed to concentrate the signal of these periodic spikes. When a periodic transit is found, the model is displayed on a standard period-folded averaged light curve. We also illustrate the efficient coding in R.
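The detrend-then-fold procedure described above can be caricatured with an AR fit by least squares and a naive phase-folding score. This is an illustrative stand-in for the actual ARMA/TCF machinery; all names, orders, and parameters below are chosen here for the sketch.

```python
import numpy as np

def ar_residuals(y, p=2):
    """Fit an AR(p) model by least squares and return the residuals,
    a simplified stand-in for the ARMA-type modeling step."""
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return y[p:] - X @ coef

def comb_score(resid, period):
    """Score a trial period by folding the residuals and taking the depth
    of the deepest phase bin (a crude analogue of a comb-filter search)."""
    sums = np.zeros(period)
    counts = np.zeros(period)
    idx = np.arange(len(resid)) % period
    np.add.at(sums, idx, resid)
    np.add.at(counts, idx, 1)
    return -(sums / counts).min()

# Simulated lightcurve: AR(1) stellar variability plus transit-like dips
rng = np.random.default_rng(3)
n = 1000
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + rng.normal()
y[::50] -= 5.0                       # dips with period 50 samples
resid = ar_residuals(y, p=2)
```

Folding the residuals at the true period concentrates all dips into one phase bin, so the score at the true period stands well above scores at wrong trial periods.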
Mathematical representation of joint time-chroma distributions
NASA Astrophysics Data System (ADS)
Wakefield, Gregory H.
1999-11-01
Originally coined by the sensory psychologist Roger Shepard in the 1960s, the term chroma denotes a transformation of frequency into octave equivalence classes. By extending the concept of chroma to chroma strength and how it varies over time, we have demonstrated the utility of chroma in simplifying the processing and representation of signals dominated by harmonically related narrowband components. These investigations have utilized an ad hoc procedure for calculating the chromagram from a given time-frequency distribution. The present paper is intended to put this ad hoc procedure on a sounder mathematical footing.
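An ad hoc chromagram computation of the kind the paper formalizes can be sketched directly: fold the magnitude spectrum of each frame into 12 pitch classes. The reference frequency and binning rule below are illustrative assumptions.

```python
import numpy as np

def chromagram_frame(frame, fs, fmin=27.5, n_chroma=12):
    """Fold the magnitude spectrum of one signal frame into 12 chroma
    (octave-equivalence) classes: each FFT bin is assigned to the pitch
    class of its frequency, and magnitudes are accumulated per class."""
    spec = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    chroma = np.zeros(n_chroma)
    valid = freqs > 0                      # skip the DC bin
    classes = np.round(n_chroma * np.log2(freqs[valid] / fmin)).astype(int)
    np.add.at(chroma, classes % n_chroma, spec[valid])
    return chroma

fs = 8000
t = np.arange(4000) / fs
tone_a4 = np.sin(2 * np.pi * 440.0 * t)   # A4
tone_a5 = np.sin(2 * np.pi * 880.0 * t)   # A5, one octave up
chroma_a4 = chromagram_frame(tone_a4, fs)
chroma_a5 = chromagram_frame(tone_a5, fs)
```

Octave equivalence is the defining property: tones an octave apart land in the same chroma class.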
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton; Hultgren, Lennart S.
2015-01-01
The study of noise from a two-shaft contra-rotating open rotor (CROR) is challenging since the shafts are not phase locked in most cases. Consequently, phase averaging of the acoustic data keyed to a single shaft rotation speed is not meaningful. An unaligned spectrum procedure that was developed to estimate a signal coherence threshold and reveal concealed spectral lines in turbofan engine combustion noise is applied to fan and CROR acoustic data in this paper.
Atluri, Sravya; Frehlich, Matthew; Mei, Ye; Garcia Dominguez, Luis; Rogasch, Nigel C; Wong, Willy; Daskalakis, Zafiris J; Farzan, Faranak
2016-01-01
Concurrent recording of electroencephalography (EEG) during transcranial magnetic stimulation (TMS) is an emerging and powerful tool for studying brain health and function. Despite a growing interest in adaptation of TMS-EEG across neuroscience disciplines, its widespread utility is limited by signal processing challenges. These challenges arise due to the nature of TMS and the sensitivity of EEG to artifacts that often mask TMS-evoked potentials (TEPs). With an increase in the complexity of data processing methods and a growing interest in multi-site data integration, analysis of TMS-EEG data requires the development of a standardized method to recover TEPs from various sources of artifacts. This article introduces TMSEEG, an open-source MATLAB application composed of multiple algorithms organized to facilitate a step-by-step procedure for TMS-EEG signal processing. Using a modular design and an interactive graphical user interface (GUI), this toolbox aims to streamline TMS-EEG signal processing for both novice and experienced users. Specifically, TMSEEG provides: (i) targeted removal of TMS-induced and general EEG artifacts; (ii) a step-by-step modular workflow with flexibility to modify existing algorithms and add customized algorithms; (iii) a comprehensive display and quantification of artifacts; (iv) quality control check points with visual feedback of TEPs throughout the data processing workflow; and (v) capability to label and store a database of artifacts. In addition to these features, the software architecture of TMSEEG ensures minimal user effort in initial setup and configuration of parameters for each processing step. This is partly accomplished through a close integration with EEGLAB, a widely used open-source toolbox for EEG signal processing. In this article, we introduce TMSEEG, validate its features and demonstrate its application in extracting TEPs across several single- and multi-pulse TMS protocols.
As the first open-source GUI-based pipeline for TMS-EEG signal processing, this toolbox intends to promote the widespread utility and standardization of an emerging technology in brain research.
Asynchronous signal-dependent non-uniform sampler
NASA Astrophysics Data System (ADS)
Can-Cimino, Azime; Chaparro, Luis F.; Sejdić, Ervin
2014-05-01
Analog sparse signals resulting from biomedical and sensor network applications are typically non-stationary with frequency-varying spectra. By ignoring that the maximum frequency of the spectrum changes, uniform sampling of sparse signals collects unnecessary samples in quiescent segments of the signal. A more appropriate sampling approach would be signal-dependent. Moreover, in many of these applications power consumption and analog processing are issues of great importance that need to be considered. In this paper we present a signal-dependent non-uniform sampler that uses a Modified Asynchronous Sigma Delta Modulator, which consumes little power and can be processed using analog procedures. Interpolation of the original signal is performed using Prolate Spheroidal Wave Functions (PSWFs), thus giving an asynchronous analog-to-digital and digital-to-analog conversion. Stable solutions are obtained by using modulated PSWFs. The advantage of the adapted asynchronous sampler is that the frequency range of the sparse signal is taken into account, avoiding aliasing. Moreover, it requires saving only the zero-crossing times of the non-uniform samples, or their differences, and the reconstruction can be done using their quantized values and a PSWF-based interpolation. The range of frequencies analyzed can be changed, and the sampler can be implemented as a bank of filters for an unknown frequency range. The performance of the proposed algorithm is illustrated with an electroencephalogram (EEG) signal.
Object Classification Based on Analysis of Spectral Characteristics of Seismic Signal Envelopes
NASA Astrophysics Data System (ADS)
Morozov, Yu. V.; Spektor, A. A.
2017-11-01
A method for classifying moving objects having a seismic effect on the ground surface is proposed which is based on statistical analysis of the envelopes of received signals. The values of the components of the amplitude spectrum of the envelopes obtained applying Hilbert and Fourier transforms are used as classification criteria. Examples illustrating the statistical properties of spectra and the operation of the seismic classifier are given for an ensemble of objects of four classes (person, group of people, large animal, vehicle). It is shown that the computational procedures for processing seismic signals are quite simple and can therefore be used in real-time systems with modest requirements for computational resources.
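The envelope-spectrum features described above can be computed with a frequency-domain Hilbert transform followed by a Fourier transform of the envelope. The sketch below is a minimal version of that pipeline on a synthetic amplitude-modulated "seismic" signal; all signal parameters are invented for illustration.

```python
import numpy as np

def envelope_spectrum(x):
    """Envelope via the analytic signal (Hilbert transform computed in the
    frequency domain), then the amplitude spectrum of the mean-removed
    envelope, which serves as the classification feature vector."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    env = np.abs(np.fft.ifft(np.fft.fft(x) * h))
    return np.abs(np.fft.rfft(env - env.mean()))

fs = 1000
t = np.arange(1000) / fs
# 100 Hz carrier with a 5 Hz envelope modulation
x = (1.0 + 0.5 * np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 100 * t)
es = envelope_spectrum(x)
```

The envelope spectrum peaks at the modulation rate rather than the carrier, which is precisely why envelope features separate object classes whose gait or rotation rates differ.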
NASA Astrophysics Data System (ADS)
Schelkanova, Irina; Toronov, Vladislav
2011-07-01
Although near infrared spectroscopy (NIRS) is now widely used both in emerging clinical techniques and in cognitive neuroscience, the development of the apparatuses and signal processing methods for these applications is still a hot research topic. The main unresolved problem in functional NIRS is the separation of functional signals from the contaminations by systemic and local physiological fluctuations. This problem was approached by using various signal processing methods, including blind signal separation techniques. In particular, principal component analysis (PCA) and independent component analysis (ICA) were applied to the data acquired at the same wavelength and at multiple sites on the human or animal heads during functional activation. These signal processing procedures resulted in a number of principal or independent components that could be attributed to functional activity but their physiological meaning remained unknown. On the other hand, the best physiological specificity is provided by broadband NIRS. Also, a comparison with functional magnetic resonance imaging (fMRI) allows determining the spatial origin of fNIRS signals. In this study we applied PCA and ICA to broadband NIRS data to distill the components correlating with the breath hold activation paradigm and compared them with the simultaneously acquired fMRI signals. Breath holding was used because it generates blood carbon dioxide (CO2) which increases the blood-oxygen-level-dependent (BOLD) signal as CO2 acts as a cerebral vasodilator. Vasodilation causes increased cerebral blood flow which washes deoxyhaemoglobin out of the cerebral capillary bed thus increasing both the cerebral blood volume and oxygenation. Although the original signals were quite diverse, we found very few different components which corresponded to fMRI signals at different locations in the brain and to different physiological chromophores.
Detection and Processing Techniques of FECG Signal for Fetal Monitoring
2009-01-01
Fetal electrocardiogram (FECG) signals contain potentially precise information that could assist clinicians in making more appropriate and timely decisions during labor. The ultimate reason for the interest in FECG signal analysis lies in clinical diagnosis and biomedical applications. The extraction and detection of the FECG signal from composite abdominal signals with powerful and advanced methodologies are becoming very important requirements in fetal monitoring. The purpose of this review paper is to illustrate the various methodologies and developed algorithms for FECG signal detection and analysis, to provide efficient and effective ways of understanding the FECG signal and its nature for fetal monitoring. A comparative study has been carried out to show the performance and accuracy of various methods of FECG signal analysis for fetal monitoring. Finally, this paper focuses on some of the hardware implementations using electrical signals for monitoring the fetal heart rate. This paper opens a path for researchers, physicians, and end users to advocate an excellent understanding of the FECG signal and its analysis procedures for fetal heart rate monitoring systems. PMID:19495912
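Among the extraction methods such a review covers, adaptive noise cancellation is a classic: an LMS filter predicts the maternal component of the abdominal lead from a maternal (thoracic) reference, and the prediction error is the fetal estimate. All signals and parameters below are invented for illustration.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """Adaptive noise cancellation with an LMS filter: predict the
    maternal component of the abdominal (primary) lead from the maternal
    reference; the prediction error is the fetal ECG estimate."""
    w = np.zeros(n_taps)
    error = np.zeros(len(primary))
    for t in range(n_taps, len(primary)):
        u = reference[t - n_taps:t][::-1]   # most recent samples first
        e = primary[t] - w @ u
        w += 2 * mu * e * u                 # LMS weight update
        error[t] = e
    return error

fs = 500
t = np.arange(2000) / fs
maternal = np.sin(2 * np.pi * 1.2 * t)          # maternal ECG stand-in
fetal = 0.2 * np.sin(2 * np.pi * 3.5 * t)       # weak fetal component
abdominal = 2.0 * maternal + fetal              # composite abdominal lead
fetal_est = lms_cancel(abdominal, maternal)
```

Because the FIR filter can only shape the reference, it cancels the maternal component while leaving the uncorrelated fetal component in the error signal.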
Functional neuroanatomy of visual masking deficits in schizophrenia.
Green, Michael F; Lee, Junghee; Cohen, Mark S; Engel, Steven A; Korb, Alexander S; Nuechterlein, Keith H; Wynn, Jonathan K; Glahn, David C
2009-12-01
Visual masking procedures assess the earliest stages of visual processing. Patients with schizophrenia reliably show deficits on visual masking, and these procedures have been used to explore vulnerability to schizophrenia, probe underlying neural circuits, and help explain functional outcome. The objective was to identify and compare regional brain activity associated with one form of visual masking (ie, backward masking) in schizophrenic patients and healthy controls. Subjects received functional magnetic resonance imaging scans. While in the scanner, subjects performed a backward masking task and were given 3 functional localizer activation scans to identify early visual processing regions of interest (ROIs). The settings were the University of California, Los Angeles, and the Department of Veterans Affairs Greater Los Angeles Healthcare System; participants were 19 patients with schizophrenia and 19 healthy control subjects. The main outcome measure was the magnitude of the functional magnetic resonance imaging signal during backward masking. Two ROIs (the lateral occipital complex [LO] and the human motion-selective cortex [hMT+]) showed sensitivity to the effects of masking, meaning that signal in these areas increased as the target became more visible. Patients had lower activation than controls in LO across all levels of visibility but did not differ in other visual processing ROIs. Using whole-brain analyses, we also identified areas outside the ROIs that were sensitive to masking effects (including the bilateral inferior parietal lobe and thalamus), but the groups did not differ in signal magnitude in these areas. The study results support a key role for LO in visual masking, consistent with previous studies in healthy controls. The current results indicate that patients fail to activate LO to the same extent as controls during visual processing regardless of stimulus visibility, suggesting a neural basis for the visual masking deficit, and possibly other visual integration deficits, in schizophrenia.
Alwanni, Hisham; Baslan, Yara; Alnuman, Nasim; Daoud, Mohammad I.
2017-01-01
This paper presents an EEG-based brain-computer interface system for classifying eleven motor imagery (MI) tasks within the same hand. The proposed system utilizes the Choi-Williams time-frequency distribution (CWD) to construct a time-frequency representation (TFR) of the EEG signals. The constructed TFR is used to extract five categories of time-frequency features (TFFs). The TFFs are processed using a hierarchical classification model to identify the MI task encapsulated within the EEG signals. To evaluate the performance of the proposed approach, EEG data were recorded for eighteen intact subjects and four amputated subjects while imagining to perform each of the eleven hand MI tasks. Two performance evaluation analyses, namely channel- and TFF-based analyses, are conducted to identify the best subset of EEG channels and the TFFs category, respectively, that enable the highest classification accuracy between the MI tasks. In each evaluation analysis, the hierarchical classification model is trained using two training procedures, namely subject-dependent and subject-independent procedures. These two training procedures quantify the capability of the proposed approach to capture both intra- and inter-personal variations in the EEG signals for different MI tasks within the same hand. The results demonstrate the efficacy of the approach for classifying the MI tasks within the same hand. In particular, the classification accuracies obtained for the intact and amputated subjects are as high as 88.8% and 90.2%, respectively, for the subject-dependent training procedure, and 80.8% and 87.8%, respectively, for the subject-independent training procedure. These results suggest the feasibility of applying the proposed approach to control dexterous prosthetic hands, which can be of great benefit for individuals suffering from hand amputations. PMID:28832513
NASA Astrophysics Data System (ADS)
Niamsuwan, N.; Johnson, J. T.; Jezek, K. C.; Gogineni, P.
2008-12-01
The Global Ice Sheet Mapping Orbiter (GISMO) mission was developed to address scientific needs to understand the polar ice subsurface structure. This NASA Instrument Incubator Program project is a collaboration between Ohio State University, the University of Kansas, Vexcel Corporation and NASA. The GISMO design utilizes an interferometric SAR (InSAR) strategy in which ice sheet reflected signals received by a dual-antenna system are used to produce an interference pattern. The resulting interferogram can be used to filter out surface clutter so as to reveal the signals scattered from the base of the ice sheet. These signals are further processed to produce 3D images representing the basal topography of the ice sheet. The GISMO airborne field campaigns conducted over the past three years have provided a set of useful data for studying geophysical properties of the Greenland ice sheet. While topography information can be obtained using interferometric SAR processing techniques, ice sheet roughness statistics can also be derived by a relatively simple procedure that involves analyzing the power levels and the shape of the radar impulse response waveforms. An electromagnetic scattering model describing GISMO impulse responses has previously been proposed and validated. This model suggested that the rms heights and correlation lengths of the upper surface profile can be determined from the peak power and the decay rate of the pulse return waveform, respectively. This presentation will demonstrate a procedure for estimating the roughness of ice surfaces by fitting the GISMO impulse response model to retrieved waveforms from selected GISMO flights. Furthermore, an extension of this procedure to estimate the scattering coefficient of the glacier bed will be addressed as well. Planned future applications involving the classification of glacier bed conditions based on the derived scattering coefficients will also be described.
NASA Technical Reports Server (NTRS)
Seasholtz, R. G.
1977-01-01
A laser Doppler velocimeter (LDV) built for use in the Lewis Research Center's turbine stator cascade facilities is described. The signal processing and self contained data processing are based on a computing counter. A procedure is given for mode matching the laser to the probe volume. An analysis is presented of biasing errors that were observed in turbulent flow when the mean flow was not normal to the fringes.
Protein arginine methylation: Cellular functions and methods of analysis.
Pahlich, Steffen; Zakaryan, Rouzanna P; Gehring, Heinz
2006-12-01
During the last few years, new members of the growing family of protein arginine methyltransferases (PRMTs) have been identified and the role of arginine methylation in manifold cellular processes like signaling, RNA processing, transcription, and subcellular transport has been extensively investigated. In this review, we describe recent methods and findings that have yielded new insights into the cellular functions of arginine-methylated proteins, and we evaluate the currently used procedures for the detection and analysis of arginine methylation.
Random Sequence for Optimal Low-Power Laser Generated Ultrasound
NASA Astrophysics Data System (ADS)
Vangi, D.; Virga, A.; Gulino, M. S.
2017-08-01
Low-power laser-generated ultrasound is lately gaining importance in the research world, thanks to the possibility of investigating the structural integrity of a mechanical component through a non-contact, Non-Destructive Testing (NDT) procedure. The ultrasounds are, however, very low in amplitude, making it necessary to use pre-processing and post-processing operations on the signals to detect them. The cross-correlation technique is used in this work, meaning that a random signal must be used as laser input. For this purpose, a highly random and simple-to-create code called the T sequence, capable of enhancing ultrasound detectability, is introduced (not previously available in the state of the art). Several important parameters that characterize the T sequence can influence the process: the number of pulses N_pulses, the pulse duration δ, and the distance between pulses d_pulses. A finite element (FE) model of a 3 mm steel disk has been developed to analytically study the longitudinal ultrasound generation mechanism and the obtainable outputs. Subsequent experimental tests have shown that the T sequence is highly flexible for ultrasound detection purposes, making it optimal to use high N_pulses and δ but low d_pulses. In the end, apart from describing the phenomena that arise in the low-power laser generation process, the results of this study are also important for setting up an effective NDT procedure using this technology.
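The cross-correlation detection principle underlying this approach is easy to demonstrate: correlate the measured response against the known random excitation and read the arrival time off the correlation peak. The sketch below uses a generic bipolar pseudo-random sequence, not the paper's T sequence; all amplitudes and the delay are invented.

```python
import numpy as np

def detect_delay(excitation, response):
    """Cross-correlate the measured response with the excitation sequence;
    the lag of the correlation peak estimates the ultrasound arrival time."""
    corr = np.correlate(response, excitation, mode='full')
    lags = np.arange(-len(excitation) + 1, len(response))
    return lags[np.argmax(corr)]

rng = np.random.default_rng(42)
# Bipolar pseudo-random excitation (zero mean, for a clean correlation peak)
seq = rng.integers(0, 2, 512) * 2.0 - 1.0
true_delay = 37
response = np.zeros(512 + true_delay)
response[true_delay:] += 0.1 * seq            # weak, delayed echo
response += rng.normal(0, 0.1, response.size)  # measurement noise
```

Even though the echo is at the noise level sample by sample, the correlation gain of the long random sequence pulls the peak well above the noise floor.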
A Novel Approach for Adaptive Signal Processing
NASA Technical Reports Server (NTRS)
Chen, Ya-Chin; Juang, Jer-Nan
1998-01-01
Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second-order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher order) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction was proposed and substantially implemented in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and departs significantly from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
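The constant-modulus property the report builds on is the basis of the classic CMA blind equalizer, sketched below. This is the textbook CMA(2,2) update, not the report's Lagrange-based derivation; the channel, step size, and tap count are illustrative assumptions.

```python
import numpy as np

def cma_equalize(x, n_taps=5, mu=1e-3, R=1.0):
    """Blind equalization by the constant modulus algorithm (CMA): adapt
    FIR weights so the output modulus approaches R, using no training
    signal; the update follows the gradient of E[(|y|^2 - R)^2] / 4."""
    w = np.zeros(n_taps)
    w[0] = 1.0                            # centre-spike initialization
    y = np.zeros(len(x))
    for t in range(n_taps, len(x)):
        u = x[t - n_taps:t][::-1]
        y[t] = w @ u
        w -= mu * y[t] * (y[t] ** 2 - R) * u
    return y

rng = np.random.default_rng(7)
symbols = rng.choice([-1.0, 1.0], 8000)           # BPSK source
received = symbols + 0.4 * np.roll(symbols, 1)    # mild ISI channel [1, 0.4]
equalized = cma_equalize(received)
```

No cross moments with a reference signal are needed: the constant-modulus constraint alone drives the adaptation, which is exactly what makes the procedure blind.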
NASA Astrophysics Data System (ADS)
Yang, D.-M.; Stronach, A. F.; MacConnell, P.; Penman, J.
2002-03-01
This paper addresses the development of a novel condition monitoring procedure for rolling element bearings which involves a combination of signal processing, signal analysis and artificial intelligence methods. Seven approaches based on power spectrum, bispectral and bicoherence vibration analyses are investigated as signal pre-processing techniques for application in the diagnosis of a number of induction motor rolling element bearing conditions. The bearing conditions considered are a normal bearing and bearings with cage and inner and outer race faults. The vibration analysis methods investigated are based on the power spectrum, the bispectrum, the bicoherence, the bispectrum diagonal slice, the bicoherence diagonal slice, the summed bispectrum and the summed bicoherence. Selected features are extracted from the vibration signatures so obtained and these are used as inputs to an artificial neural network trained to identify the bearing conditions. Quadratic phase coupling (QPC), examined using the magnitude of bispectrum and bicoherence and biphase, is shown to be absent from the bearing system and it is therefore concluded that the structure of the bearing vibration signatures results from inter-modulation effects. In order to test the proposed procedure, experimental data from a bearing test rig are used to develop an example diagnostic system. Results show that the bearing conditions examined can be diagnosed with a high success rate, particularly when using the summed bispectrum signatures.
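One of the signatures named above, the bispectrum diagonal slice, can be estimated by segment averaging. The sketch below uses a generic estimator on a synthetic phase-coupled signal; segment length, window, and test frequencies are all chosen here, and a real diagnostic system would select features from this slice for the neural network input.

```python
import numpy as np

def bispectrum_diag(x, seg_len=128):
    """Segment-averaged diagonal slice of the bispectrum,
    B(f, f) = E[X(f) X(f) X*(2f)], which highlights quadratic
    phase coupling between a component and its double frequency."""
    n_seg = len(x) // seg_len
    win = np.hanning(seg_len)
    acc = np.zeros(seg_len // 2, dtype=complex)
    f = np.arange(seg_len // 2)
    for i in range(n_seg):
        seg = x[i * seg_len:(i + 1) * seg_len]
        X = np.fft.fft((seg - seg.mean()) * win)
        acc += X[f] * X[f] * np.conj(X[2 * f])
    return np.abs(acc) / n_seg

rng = np.random.default_rng(11)
n = 128 * 64
t = np.arange(n)
# Phase-coupled pair at 0.125 and 0.25 cycles/sample (FFT bins 16 and 32)
x = np.cos(2 * np.pi * 0.125 * t) + 0.5 * np.cos(2 * np.pi * 0.25 * t) \
    + rng.normal(0, 0.1, n)
slice_b = bispectrum_diag(x)
```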
Multitasking-Pascal extensions solve concurrency problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackie, P.H.
1982-09-29
To avoid deadlock (one process waiting for a resource that another process can't release) and indefinite postponement (one process being continually denied a resource request) in a multitasking-system application, it is possible to use a high-level development language with built-in concurrency handlers. Parallel Pascal is one such language; it extends standard Pascal via special task synchronizers: a new data type called signal, new system procedures called wait and send, and a Boolean function termed awaited. To illustrate the language's use, the author examines the problems it helps solve.
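The synchronizers named in this abstract map naturally onto modern concurrency primitives. Below is a hedged Python sketch (not Parallel Pascal) of a signal type with wait, send, and awaited, built on a condition variable; counting pending sends is a design choice made here so that a send arriving before the wait is not lost.

```python
import threading

class Signal:
    """Analogue of Parallel Pascal's 'signal' type: wait blocks until a
    send arrives (sends are counted, so send-before-wait is not lost),
    and awaited reports whether any task is currently waiting."""
    def __init__(self):
        self._cond = threading.Condition()
        self._pending = 0
        self._waiting = 0

    def wait(self):
        with self._cond:
            self._waiting += 1
            while self._pending == 0:
                self._cond.wait()
            self._pending -= 1
            self._waiting -= 1

    def send(self):
        with self._cond:
            self._pending += 1
            self._cond.notify()

    def awaited(self):
        with self._cond:
            return self._waiting > 0

# One task waits for a go-ahead that another task sends
sig = Signal()
log = []
worker = threading.Thread(target=lambda: (sig.wait(), log.append('resumed')))
worker.start()
sig.send()
worker.join(timeout=5)
```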
Incidental learning of sound categories is impaired in developmental dyslexia.
Gabay, Yafit; Holt, Lori L
2015-12-01
Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed.
LANDSAT-4 to ground station interface description
NASA Technical Reports Server (NTRS)
1983-01-01
The LANDSAT 4 to ground station interface is described in detail. The radiometric specifications, internal calibration, sensor output format, and data processing constants for the multispectral scanner and the thematic mapper are discussed. The mission payload telemetry, onboard computer telemetry, and engineering telemetry formats are described. In addition, the telemetry time signals and the onboard clock resetting procedure are addressed.
1986-06-30
features of computer aided design systems and statistical quality control procedures that are generic to chip sets and processes. RADIATION HARDNESS - The... System; PSP - Programmable Signal Processor; SSI - Small Scale Integration; TOW - Tube Launched, Optically Tracked, Wire Guided; TTL - Transistor-Transistor Logic
Optimization and automation of quantitative NMR data extraction.
Bernstein, Michael A; Sýkora, Stan; Peng, Chen; Barba, Agustín; Cobas, Carlos
2013-06-18
NMR is routinely used to quantitate chemical species. The necessary experimental procedures to acquire quantitative data are well known, but relatively little attention has been paid to data processing and analysis. We describe here a robust expert system that can be used to automatically choose the best signals in a sample for overall concentration determination and to determine analyte concentration using all accepted methods. The algorithm is based on the complete deconvolution of the spectrum, which makes it tolerant of cases where signals are very close to one another, and includes robust methods for the automatic classification of NMR resonances and molecule-to-spectrum multiplet assignments. With the functionality in place and optimized, it is then a relatively simple matter to apply the same workflow to data in a fully automatic way. The procedure is desirable for both its inherent performance and applicability to NMR data acquired for very large sample sets.
Leopold, David A.; Hikosaka, Okihide
2015-01-01
It has been suggested that the basal forebrain (BF) exerts strong influences on the formation of memory and behavior. However, it is unclear what information is used for this memory-behavior formation. We found that a population of neurons in the medial BF (medial septum and diagonal band of Broca) of macaque monkeys encodes a unique combination of information: reward uncertainty, expected reward value, anticipation of punishment, and unexpected reward and punishment. The results were obtained while the monkeys were expecting (often with uncertainty) a rewarding or punishing outcome during a Pavlovian procedure, or unexpectedly received an outcome outside the procedure. In vivo anterograde tracing using manganese-enhanced MRI suggested that the major recipient of these signals is the intermediate hippocampal formation. Based on these findings, we hypothesize that the medial BF identifies various contexts and outcomes that are critical for memory processing in the hippocampal formation. PMID:25972172
Infrared Sensor-Based Temperature Control for Domestic Induction Cooktops
Lasobras, Javier; Alonso, Rafael; Carretero, Claudio; Carretero, Enrique; Imaz, Eduardo
2014-01-01
In this paper, a precise real-time temperature control system based on infrared (IR) thermometry for domestic induction cooking is presented. The temperature in the vessel constitutes the control variable of the closed-loop power control system implemented in a commercial induction cooker. A proportional-integral controller is applied to establish the output power level in order to reach the target temperature. An optical system and a signal conditioning circuit have been implemented. For the signal processing a microprocessor with 12-bit ADC and a sampling rate of 1 Ksps has been used. The analysis of the contributions to the infrared radiation permits the definition of a procedure to estimate the temperature of the vessel with a maximum temperature error of 5 °C in the range between 60 and 250 °C for a known cookware emissivity. A simple and necessary calibration procedure with a black-body sample is presented. PMID:24638125
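As a rough illustration of the closed-loop power control described above, the following sketch simulates a proportional-integral controller driving a toy first-order thermal plant toward a target temperature. The plant model, gains, and time step are invented assumptions for illustration, not the authors' actual system parameters.

```python
# Hedged sketch of a PI temperature control loop: the error between target
# and measured temperature sets the output power level. The thermal plant
# below (heating proportional to power, linear cooling loss) is a toy model.

def pi_control(target, temp0, kp=2.0, ki=0.5, dt=1.0, steps=200):
    """Simulate a PI loop on a crude first-order thermal plant."""
    temp, integral = temp0, 0.0
    for _ in range(steps):
        error = target - temp
        integral += error * dt
        power = max(0.0, kp * error + ki * integral)  # power cannot be negative
        # toy plant: heating proportional to power, cooling toward 20 degC ambient
        temp += (0.05 * power - 0.1 * (temp - 20.0)) * dt
    return temp

final = pi_control(target=180.0, temp0=20.0)
print(round(final, 1))
```

With these illustrative gains the loop settles at the target after a damped overshoot; the integral term removes the steady-state error that a purely proportional controller would leave.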
A real-time classification algorithm for EEG-based BCI driven by self-induced emotions.
Iacoviello, Daniela; Petracca, Andrea; Spezialetti, Matteo; Placidi, Giuseppe
2015-12-01
The aim of this paper is to provide an efficient, parametric, general, and completely automatic real-time classification method for electroencephalography (EEG) signals obtained from self-induced emotions. The particular characteristics of the considered low-amplitude signals (a self-induced emotion produces a signal whose amplitude is about 15% of that of a really experienced emotion) require exploring and adapting strategies such as the Wavelet Transform, Principal Component Analysis (PCA) and the Support Vector Machine (SVM) for signal processing, analysis and classification. Moreover, the method is intended for use in a multi-emotion Brain Computer Interface (BCI), and for this reason some ad hoc design choices are made. The peculiarity of the brain activation requires ad hoc signal processing by wavelet decomposition, and the definition of a set of features for signal characterization in order to discriminate different self-induced emotions. The proposed method is a two-stage, completely parameterized algorithm aimed at multi-class classification and may be considered in the framework of machine learning. The first stage, calibration, is off-line and is devoted to signal processing, feature determination, and classifier training. The second, real-time stage is the test on new data. PCA is applied to avoid redundancy in the set of features, whereas the classification of the selected features, and therefore of the signals, is obtained by the SVM. Experimental tests have been conducted on EEG signals in a binary BCI based on the self-induced disgust produced by remembering an unpleasant odor. Since the literature shows that this emotion mainly involves the right hemisphere, and in particular the T8 channel, the classification procedure is tested using just T8, though the average accuracy is also calculated and reported for the whole set of measured channels.
The obtained classification results are encouraging, with an average success rate above 90% across the examined subjects. Ongoing work applies the proposed procedure to map a larger set of emotions with EEG and to establish the EEG headset with the minimal number of channels that allows recognition of a significant range of emotions, both in the field of affective computing and in the development of auxiliary communication tools for subjects affected by severe disabilities. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Exponential Modelling for Mutual-Cohering of Subband Radar Data
NASA Astrophysics Data System (ADS)
Siart, U.; Tejero, S.; Detlefsen, J.
2005-05-01
Increasing resolution and accuracy is an important issue in almost any type of radar sensor application. However, both resolution and accuracy are strongly related to the available signal bandwidth and energy that can be used. Nowadays, several sensors operating in different frequency bands are often available on a sensor platform. It is an attractive goal to exploit the potential of advanced signal modelling and optimization procedures by making proper use of information stemming from different frequency bands at the RF signal level. An important prerequisite for optimal use of signal energy is coherence between all contributing sensors. Coherent multi-sensor platforms are very expensive and are thus not generally available. This paper presents an approach for accurately estimating object radar responses using subband measurements at different RF frequencies. An exponential model approach makes it possible to compensate for the lack of mutual coherence between independently operating sensors. Mutual coherence is recovered from the a-priori information that both sensors have common scattering centers in view. Minimizing the total squared deviation between measured data and a full-range exponential signal model leads to more accurate pole angles and pole magnitudes compared to single-band optimization. The model parameters (range and magnitude of point scatterers) after this full-range optimization process are also more accurate than the parameters obtained from a commonly used super-resolution procedure (root-MUSIC) applied to the non-coherent subband data.
ATS simultaneous and turnaround ranging experiments
NASA Technical Reports Server (NTRS)
Watson, J. S.; Putney, B. H.
1971-01-01
This report explains the data reduction and spacecraft position determination used in conjunction with two ATS experiments - Trilateration and Turnaround Ranging - and describes in detail a multilateration program that is used for part of the data reduction process. The process described is for the determination of the inertial position of the satellite, and for formatting input for related programs. In the trilateration procedure, a geometric determination of satellite position is made from near-simultaneous range measurements made by three different tracking stations. Turnaround ranging involves two stations; one, the master station, transmits the signal to the satellite and the satellite retransmits the signal to the slave station, which turns the signal around to the satellite, which in turn retransmits the signal to the master station. The results of the satellite position computations using the multilateration program are compared to results of other position determination programs used at Goddard. All programs give nearly the same results, which indicates that, because of its simplicity and computational speed, the trilateration technique is useful in obtaining spacecraft positions for near-synchronous satellites.
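The geometric idea behind the trilateration procedure can be sketched in a planar toy version: a position is recovered from simultaneous range measurements to three stations at known coordinates by linearizing the circle equations. The station positions and ranges below are invented for illustration; the actual ATS processing is three-dimensional and uses real tracking data.

```python
# Hedged sketch: 2-D trilateration from three ranges. Subtracting the circle
# equations pairwise yields two linear equations in (x, y), solved by Cramer's
# rule. Coordinates and ranges are illustrative, not ATS data.
import math

def trilaterate_2d(p1, p2, p3, r1, r2, r3):
    """Solve for (x, y) given three station positions and measured ranges."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Stations at known positions, true target at (3, 4):
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [math.dist(s, (3.0, 4.0)) for s in stations]
est = trilaterate_2d(*stations, *ranges)
print(est)
```

With noise-free ranges the linear solve recovers the position exactly; with measurement noise one would instead solve the overdetermined system in a least-squares sense, which is closer in spirit to the multilateration program described.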
Parametric adaptive filtering and data validation in the bar GW detector AURIGA
NASA Astrophysics Data System (ADS)
Ortolan, A.; Baggio, L.; Cerdonio, M.; Prodi, G. A.; Vedovato, G.; Vitale, S.
2002-04-01
We report on our experience gained in the signal processing of the resonant GW detector AURIGA. Signal amplitude and arrival time are estimated by means of a matched-adaptive Wiener filter. The detector noise, entering in the filter set-up, is modelled as a parametric ARMA process; to account for slow non-stationarity of the noise, the ARMA parameters are estimated on an hourly basis. A requirement for the set-up of an unbiased Wiener filter is the separation of time spans with 'almost Gaussian' noise from non-Gaussian and/or strongly non-stationary time spans. The separation algorithm consists basically of a variance estimate with the Chauvenet convergence method and a threshold on the kurtosis index. The subsequent validation of data is strictly connected with the separation procedure: in fact, by injecting a large number of artificial GW signals into the 'almost Gaussian' part of the AURIGA data stream, we have demonstrated that the effective probability distributions of the signal-to-noise ratio, χ², and the time of arrival are those expected.
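The Gaussianity screening step above can be sketched as follows: compute a kurtosis index per data span and flag spans whose index exceeds a threshold as non-Gaussian. The threshold value and the synthetic data are illustrative assumptions, not the AURIGA pipeline's actual settings.

```python
# Hedged sketch of kurtosis-based separation of 'almost Gaussian' spans from
# disturbed spans. Excess kurtosis is near 0 for Gaussian data and large for
# heavy-tailed spans containing glitches.
import random

def kurtosis_index(x):
    """Excess kurtosis of a data span."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / (m2 ** 2) - 3.0

def is_almost_gaussian(span, threshold=1.0):
    return abs(kurtosis_index(span)) < threshold

random.seed(0)
gaussian_span = [random.gauss(0, 1) for _ in range(5000)]
glitchy_span = gaussian_span[:]
glitchy_span[100] = 50.0  # one large non-Gaussian outburst
print(is_almost_gaussian(gaussian_span), is_almost_gaussian(glitchy_span))
```

A single large outlier dominates the fourth moment much more than the second, so the index separates the two spans cleanly even though their variances differ only modestly.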
NASA Astrophysics Data System (ADS)
Samanta, B.; Al-Balushi, K. R.
2003-03-01
A procedure is presented for fault diagnosis of rolling element bearings through an artificial neural network (ANN). The characteristic features of time-domain vibration signals of the rotating machinery with normal and defective bearings have been used as inputs to the ANN consisting of input, hidden and output layers. The features are obtained from direct processing of the signal segments using very simple preprocessing. The input layer consists of five nodes, one each for root mean square, variance, skewness, kurtosis and the normalised sixth central moment of the time-domain vibration signals. The inputs are normalised in the range of 0.0 to 1.0, except for the skewness, which is normalised between -1.0 and 1.0. The output layer consists of two binary nodes indicating the status of the machine: normal or defective bearings. Two hidden layers with different numbers of neurons have been used. The ANN is trained using the backpropagation algorithm with a subset of the experimental data for known machine conditions. The ANN is tested using the remaining set of data. The effects of some preprocessing techniques, such as high-pass and band-pass filtration, envelope detection (demodulation) and wavelet transform of the vibration signals prior to feature extraction, are also studied. The results show the effectiveness of the ANN in diagnosis of the machine condition. The proposed procedure requires only a few features extracted from the measured vibration data, either directly or with simple preprocessing. The reduced number of inputs leads to faster training requiring far fewer iterations, making the procedure suitable for on-line condition monitoring and diagnostics of machines.
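The feature extraction stage can be sketched directly from the description above: the five time-domain statistics (RMS, variance, skewness, kurtosis, normalised sixth central moment) computed from a vibration-signal segment. The toy sinusoidal segment is an assumption for illustration; the paper additionally scales each feature into its stated range before feeding the ANN.

```python
# Hedged sketch of the five time-domain features used as ANN inputs.
import math

def time_domain_features(x):
    n = len(x)
    mean = sum(x) / n
    def central(k):
        return sum((v - mean) ** k for v in x) / n
    m2 = central(2)
    sd = math.sqrt(m2)
    return {
        "rms": math.sqrt(sum(v * v for v in x) / n),
        "variance": m2,
        "skewness": central(3) / sd**3,
        "kurtosis": central(4) / sd**4,
        "sixth_moment": central(6) / sd**6,  # normalised sixth central moment
    }

# A toy "vibration" segment: 7 full sine cycles over 256 samples.
segment = [math.sin(2 * math.pi * 7 * t / 256) for t in range(256)]
feats = time_domain_features(segment)
print(round(feats["rms"], 3), round(feats["variance"], 3))
```

For a pure sine the RMS is 1/√2 and the skewness vanishes by symmetry; a defective bearing's impulsive signal would instead show elevated kurtosis and sixth moment, which is what makes these features discriminative.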
DOT National Transportation Integrated Search
2007-01-01
This research was undertaken to develop an evaluation procedure to identify high-risk four-legged signalized intersections in VDOT's Northern Virginia district by traffic movements and times of day. By using the developed procedure, traffic engineers...
Adaptive clustering procedure for continuous gravitational wave searches
NASA Astrophysics Data System (ADS)
Singh, Avneet; Papa, Maria Alessandra; Eggenstein, Heinz-Bernd; Walsh, Sinéad
2017-10-01
In hierarchical searches for continuous gravitational waves, clustering of candidates is an important post-processing step because it reduces the number of noise candidates that are followed up at successive stages [J. Aasi et al., Phys. Rev. D 88, 102002 (2013), 10.1103/PhysRevD.88.102002; B. Behnke, M. A. Papa, and R. Prix, Phys. Rev. D 91, 064007 (2015), 10.1103/PhysRevD.91.064007; M. A. Papa et al., Phys. Rev. D 94, 122006 (2016), 10.1103/PhysRevD.94.122006]. Previous clustering procedures bundled together nearby candidates, ascribing them to the same root cause (be it a signal or a disturbance) based on a predefined cluster volume. In this paper, we present a procedure that adapts the cluster volume to the data itself and checks for consistency of such volume with what is expected from a signal. This significantly improves the noise rejection capabilities at fixed detection threshold and, at fixed computing resources for the follow-up stages, results in an overall more sensitive search. This new procedure was employed in the first Einstein@Home search on data from the first science run of the advanced LIGO detectors (O1) [LIGO Scientific Collaboration and Virgo Collaboration, arXiv:1707.02669 [Phys. Rev. D (to be published)]].
Goense, J B M; Ratnam, R
2003-10-01
An important problem in sensory processing is deciding whether fluctuating neural activity encodes a stimulus or is due to variability in baseline activity. Neurons that subserve detection must examine incoming spike trains continuously, and quickly and reliably differentiate signals from baseline activity. Here we demonstrate that a neural integrator can perform continuous signal detection, with performance exceeding that of trial-based procedures, in which spike counts in signal and baseline windows are compared. The procedure was applied to data from electrosensory afferents of weakly electric fish (Apteronotus leptorhynchus), where weak perturbations generated by small prey add approximately 1 spike to a baseline of approximately 300 spikes s⁻¹. The hypothetical postsynaptic neuron, modeling an electrosensory lateral line lobe cell, could detect an added spike within 10-15 ms, achieving near-ideal detection performance (80-95%) at false alarm rates of 1-2 Hz, while trial-based testing resulted in only 30-35% correct detections at that false alarm rate. The performance improvement was due to anti-correlations in the afferent spike train, which reduced both the amplitude and duration of fluctuations in postsynaptic membrane activity, and so decreased the number of false alarms. Anti-correlations can be exploited to improve detection performance only if there is memory of prior decisions.
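The continuous-detection idea can be sketched with a leaky integrator: each incoming spike increments a membrane-like state that decays between events, and a detection is declared when the state crosses a threshold. The spike rates, decay constant, and threshold below are invented for illustration, not fitted to the electrosensory data.

```python
# Hedged sketch of continuous signal detection with a leaky integrator.
import random

def leaky_integrator(spikes, decay=0.9, threshold=5.0):
    """Return the first time step at which integrated activity crosses
    threshold, or None if it never does."""
    state = 0.0
    for t, s in enumerate(spikes):
        state = decay * state + s
        if state > threshold:
            return t
    return None

random.seed(1)
# Sparse baseline spiking; a deterministic burst of added spikes
# (a stand-in for the prey perturbation) occupies t = 100..119.
baseline = [1 if random.random() < 0.1 else 0 for _ in range(200)]
signal = [1 if 100 <= t < 120 else b for t, b in enumerate(baseline)]
t_base = leaky_integrator(baseline)
t_sig = leaky_integrator(signal)
print(t_base, t_sig)
```

The baseline alone almost never sustains enough consecutive spikes to cross the threshold, while the burst drives the state over it within a few steps of onset, mirroring the rapid detection described above.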
2012-08-01
It suggests that a smart use of some a-priori information about the operating environment, when processing the received signal and designing the... random variable with the same variance as the backscattering target amplitude αT, and D(αT, αGT) is the Kullback-Leibler divergence, see [65... MI. Proof: see Appendix 3.6.6. Thus, we can use the optimization procedure of Algorithm 4 to optimize the Mutual Information between the target
How Chemical Synthesis of Ubiquitin Conjugates Helps To Understand Ubiquitin Signal Transduction.
Hameed, Dharjath S; Sapmaz, Aysegul; Ovaa, Huib
2017-03-15
Ubiquitin (Ub) is a small post-translational modifier protein involved in a myriad of biochemical processes including DNA damage repair, proteasomal proteolysis, and cell cycle control. Ubiquitin signaling pathways have not been completely deciphered due to the complex nature of the enzymes involved in ubiquitin conjugation and deconjugation. Hence, probes and assay reagents are important to get a better understanding of this pathway. Recently, improvements have been made in synthesis procedures of Ub derivatives. In this perspective, we explain various research reagents available and how chemical synthesis has made an important contribution to Ub research.
Opponent appetitive-aversive neural processes underlie predictive learning of pain relief.
Seymour, Ben; O'Doherty, John P; Koltzenburg, Martin; Wiech, Katja; Frackowiak, Richard; Friston, Karl; Dolan, Raymond
2005-09-01
Termination of a painful or unpleasant event can be rewarding. However, whether the brain treats relief in a similar way as it treats natural reward is unclear, and the neural processes that underlie its representation as a motivational goal remain poorly understood. We used fMRI (functional magnetic resonance imaging) to investigate how humans learn to generate expectations of pain relief. Using a pavlovian conditioning procedure, we show that subjects experiencing prolonged experimentally induced pain can be conditioned to predict pain relief. This proceeds in a manner consistent with contemporary reward-learning theory (average reward/loss reinforcement learning), reflected by neural activity in the amygdala and midbrain. Furthermore, these reward-like learning signals are mirrored by opposite aversion-like signals in lateral orbitofrontal cortex and anterior cingulate cortex. This dual coding has parallels to 'opponent process' theories in psychology and promotes a formal account of prediction and expectation during pain.
Optical sensor for real-time weld defect detection
NASA Astrophysics Data System (ADS)
Ancona, Antonio; Maggipinto, Tommaso; Spagnolo, Vincenzo; Ferrara, Michele; Lugara, Pietro M.
2002-04-01
In this work we present an innovative optical sensor for on-line and non-intrusive welding process monitoring. It is based on the spectroscopic analysis of the optical VIS emission of the welding plasma plume generated in the laser-metal interaction zone. Plasma electron temperature has been measured for different chemical species composing the plume. Temperature signal evolution has been recorded and analyzed during several CO2-laser welding processes under variable operating conditions. We have developed suitable software able to detect in real time a wide range of weld defects such as crater formation, lack of fusion, excessive penetration, and seam oxidation. The same spectroscopic approach has been applied to electric arc welding process monitoring. We assembled our optical sensor in a torch for manual Gas Tungsten Arc Welding procedures and tested the prototype in a manufacturing industry production line. Even in this case we found a clear correlation between the signal behavior and the welded joint quality.
Minimal Network Topologies for Signal Processing during Collective Cell Chemotaxis.
Yue, Haicen; Camley, Brian A; Rappel, Wouter-Jan
2018-06-19
Cell-cell communication plays an important role in collective cell migration. However, it remains unclear how cells in a group cooperatively process external signals to determine the group's direction of motion. Although the topology of signaling pathways is vitally important in single-cell chemotaxis, the signaling topology for collective chemotaxis has not been systematically studied. Here, we combine mathematical analysis and simulations to find minimal network topologies for multicellular signal processing in collective chemotaxis. We focus on border cell cluster chemotaxis in the Drosophila egg chamber, in which responses to several experimental perturbations of the signaling network are known. Our minimal signaling network includes only four elements: a chemoattractant, the protein Rac (indicating cell activation), cell protrusion, and a hypothesized global factor responsible for cell-cell interaction. Experimental data on cell protrusion statistics allows us to systematically narrow the number of possible topologies from more than 40,000,000 to only six minimal topologies with six interactions between the four elements. This analysis does not require a specific functional form of the interactions, and only qualitative features are needed; it is thus robust to many modeling choices. Simulations of a stochastic biochemical model of border cell chemotaxis show that the qualitative selection procedure accurately determines which topologies are consistent with the experiment. We fit our model for all six proposed topologies; each produces results that are consistent with all experimentally available data. Finally, we suggest experiments to further discriminate possible pathway topologies. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.
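One plausible reading of the "more than 40,000,000" figure (our assumption, not a statement from the paper) is combinatorial: if each of the 16 directed links among the four elements, self-links included, can be absent, activating, or inhibiting, the number of candidate topologies is 3^16. The sketch below checks that arithmetic and shows that a restricted subspace remains easy to enumerate.

```python
# Hedged sketch of the topology-space combinatorics. The enumeration scheme
# (3 states per directed link, self-links included) is an illustrative
# assumption about how the >40,000,000 figure could arise.
from itertools import product

n_elements = 4
n_links = n_elements * n_elements   # 16 directed links, self-links included
n_topologies = 3 ** n_links         # each link: absent / activating / inhibiting
print(n_topologies)                 # 43,046,721, indeed more than 40,000,000

# A restricted subspace, e.g. only the 6 one-directional links between
# distinct elements, is small enough to enumerate exhaustively:
small = sum(1 for combo in product((-1, 0, 1), repeat=6) if any(combo))
print(small)                        # 3**6 - 1 non-empty sign assignments
```

This is why the qualitative selection procedure matters: exhaustive simulation of tens of millions of topologies is impractical, but sign-level constraints from protrusion statistics can prune the space before any quantitative fitting.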
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Boian S.; Lliev, Filip L.; Stanev, Valentin G.
This code is a toy (short) version of CODE-2016-83. From a general perspective, the code represents an unsupervised adaptive machine learning algorithm that allows efficient and high-performance de-mixing and feature extraction of a multitude of non-negative signals mixed and recorded by a network of uncorrelated sensor arrays. The code identifies the number of the mixed original signals and their locations. Further, the code also allows deciphering of signals that have been delayed with respect to the mixing process in each sensor. This code is highly customizable and can be efficiently used for fast macro-analysis of data. The code is applicable to a plethora of distinct problems: chemical decomposition, pressure transient decomposition, unknown source/signal allocation, and EM signal decomposition. An additional procedure for allocation of the unknown sources is incorporated in the code.
Chen, Hsiao-Ping; Liao, Hui-Ju; Huang, Chih-Min; Wang, Shau-Chun; Yu, Sung-Nien
2010-04-23
This paper employs one chemometric technique to modify the noise spectrum of a liquid chromatography-tandem mass spectrometry (LC-MS/MS) chromatogram between two consecutive wavelet-based low-pass filter procedures to improve peak signal-to-noise (S/N) ratio enhancement. Although similar techniques using other sets of low-pass procedures, such as matched filters, have been published, the procedures developed in this work avoid the peak-broadening disadvantages inherent in matched filters. In addition, unlike Fourier transform-based low-pass filters, wavelet-based filters efficiently reject noise in the chromatograms directly in the time domain without distorting the original signals. In this work, the low-pass filtering procedures sequentially convolve the original chromatograms against each set of low-pass filters to obtain approximation coefficients, representing the low-frequency wavelets, of the first five resolution levels. The tedious trials of setting threshold values to properly shrink each wavelet are therefore no longer required. The noise modification technique multiplies one wavelet-based low-pass filtered LC-MS/MS chromatogram with an artificial chromatogram to which thermal noise has been added, before the second wavelet-based low-pass filter is applied. Because a low-pass filter cannot eliminate frequency components below its cut-off frequency, more efficient peak S/N ratio improvement cannot be accomplished simply by applying consecutive low-pass filter procedures to LC-MS/MS chromatograms. In contrast, when the low-pass filtered LC-MS/MS chromatogram is conditioned with this multiplication step before the second low-pass filter, much better ratio improvement is achieved. The noise frequency spectrum of the low-pass filtered chromatogram, which originally contains frequency components below the filter cut-off frequency, is altered by the multiplication operation to span a broader range.
When the frequency range of this modified noise spectrum shifts toward the high-frequency regime, the second low-pass filter provides better filtering efficiency and higher peak S/N ratios. For real LC-MS/MS chromatograms, where two consecutive wavelet-based low-pass filters typically achieve less than a 6-fold peak S/N ratio improvement (no better than a single wavelet-based low-pass filter), modifying the noise spectrum between the two filters typically improves the ratio 25-fold to 40-fold. The linear standard curves using the filtered LC-MS/MS signals are validated, and the filtered signals are reproducible. More accurate determinations of very low concentration samples (S/N ratio about 7-9) are obtained using the filtered signals than using the original signals. Copyright 2010 Elsevier B.V. All rights reserved.
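The core low-pass idea above can be sketched with a Haar-style wavelet approximation: repeated pairwise averaging keeps the low-frequency chromatogram shape while suppressing high-frequency noise. Using plain averaging in place of the paper's actual filter bank, and a synthetic Gaussian peak with white noise, are simplifications for illustration.

```python
# Hedged sketch: Haar approximation coefficients by repeated pairwise
# averaging, applied to a toy noisy chromatogram.
import math, random

def haar_approximation(signal, levels):
    """Return approximation coefficients after `levels` Haar averaging steps."""
    approx = list(signal)
    for _ in range(levels):
        approx = [(approx[i] + approx[i + 1]) / 2
                  for i in range(0, len(approx) - 1, 2)]
    return approx

random.seed(4)
n = 1024
clean = [math.exp(-((t - 512) / 40.0) ** 2) for t in range(n)]   # one peak
noisy = [c + random.gauss(0, 0.2) for c in clean]
approx = haar_approximation(noisy, levels=3)   # length 128, smoothed
peak_height = max(approx)
print(len(approx), round(peak_height, 2))
```

Averaging 2^3 = 8 samples reduces the white-noise standard deviation by roughly √8 while the broad peak survives, which is the same trade-off the paper's approximation coefficients exploit at each resolution level.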
46 CFR 160.066-11 - Approval procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Distress Signal for Boats, Red Aerial Pyrotechnic Flare § 160.066-11 Approval procedures. (a) Red aerial pyrotechnic flare distress signals are approved under the...
46 CFR 160.066-11 - Approval procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Distress Signal for Boats, Red Aerial Pyrotechnic Flare § 160.066-11 Approval procedures. (a) Red aerial pyrotechnic flare distress signals are approved under the...
Some Memories are Odder than Others: Judgments of Episodic Oddity Violate Known Decision Rules
O’Connor, Akira R.; Guhl, Emily N.; Cox, Justin C.; Dobbins, Ian G.
2011-01-01
Current decision models of recognition memory are based almost entirely on one paradigm, single item old/new judgments accompanied by confidence ratings. This task results in receiver operating characteristics (ROCs) that are well fit by both signal-detection and dual-process models. Here we examine an entirely new recognition task, the judgment of episodic oddity, whereby participants select the mnemonically odd members of triplets (e.g., a new item hidden among two studied items). Using the only two known signal-detection rules of oddity judgment derived from the sensory perception literature, the unequal variance signal-detection model predicted that an old item among two new items would be easier to discover than a new item among two old items. In contrast, four separate empirical studies demonstrated the reverse pattern: triplets with two old items were the easiest to resolve. This finding was anticipated by the dual-process approach as the presence of two old items affords the greatest opportunity for recollection. Furthermore, a bootstrap-fed Monte Carlo procedure using two independent datasets demonstrated that the dual-process parameters typically observed during single item recognition correctly predict the current oddity findings, whereas unequal variance signal-detection parameters do not. Episodic oddity judgments represent a case where dual- and single-process predictions qualitatively diverge and the findings demonstrate that novelty is “odder” than familiarity. PMID:22833695
1993-12-01
D. Mines and the Military-Technological Revolution... E. Customizing the TDD Proliferation Market M... Data Storage & Peripherals - Systems Management Technologies; 4. Passive Sensors - Sensors and Signal Processing; 5. Photonics - Electronic and... a reproducible procedure to allow customization of the model provides the "guts" of the method. Third, because they are not optimized for
Optical scanning tests of complex CMOS microcircuits
NASA Technical Reports Server (NTRS)
Levy, M. E.; Erickson, J. J.
1977-01-01
The new test method was based on the use of a raster-scanned optical stimulus in combination with special electrical test procedures. The raster-scanned optical stimulus was provided by an optical spot scanner, an instrument that combines a scanning optical microscope with electronic instrumentation to process and display the electric photoresponse signal induced in a device that is being tested.
Medical diagnosis imaging systems: image and signal processing applications aided by fuzzy logic
NASA Astrophysics Data System (ADS)
Hata, Yutaka
2010-04-01
First, we describe an automated procedure for segmenting an MR image of a human brain based on fuzzy logic for diagnosing Alzheimer's disease. The intensity thresholds for segmenting the whole brain of a subject are automatically determined by finding the peaks of the intensity histogram. After these thresholds are evaluated in a region growing step, the whole brain can be identified. Next, we describe a procedure for decomposing the obtained whole brain into the left and right cerebral hemispheres, the cerebellum and the brain stem. Our method thus identifies the whole brain, the left cerebral hemisphere, the right cerebral hemisphere, the cerebellum and the brain stem. Second, we describe a transskull sonography system that can visualize the shape of the skull and brain surface from any point to examine skull fractures and some brain diseases. We employ fuzzy signal processing to determine the skull and brain surface. A phantom model, an animal model with soft tissue, an animal model with brain tissue, and human subjects' foreheads are examined with our system. All shapes of the skin surface, skull surface, skull bottom, and brain tissue surface are successfully determined.
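The threshold-selection step can be sketched as follows: build an intensity histogram, locate its two dominant peaks, and place the segmentation threshold at the minimum between them. The synthetic bimodal "image" intensities and the simple peak-separation rule are illustrative assumptions, not the paper's fuzzy-logic procedure.

```python
# Hedged sketch of histogram-peak thresholding for brain segmentation.
import random

def histogram(values, n_bins, lo, hi):
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for v in values:
        b = min(n_bins - 1, max(0, int((v - lo) / width)))
        counts[b] += 1
    return counts

def threshold_between_peaks(counts, min_sep=10):
    """Index of the minimum-count bin between the two dominant peaks."""
    p1 = max(range(len(counts)), key=lambda i: counts[i])
    p2 = max((i for i in range(len(counts)) if abs(i - p1) >= min_sep),
             key=lambda i: counts[i])
    lo_p, hi_p = sorted((p1, p2))
    return min(range(lo_p, hi_p + 1), key=lambda i: counts[i])

random.seed(7)
# Synthetic bimodal intensities: dark background near 40, tissue near 160.
pixels = [random.gauss(40, 10) for _ in range(5000)] + \
         [random.gauss(160, 15) for _ in range(5000)]
counts = histogram(pixels, n_bins=64, lo=0.0, hi=255.0)
threshold = threshold_between_peaks(counts) * (255.0 / 64)
print(round(threshold, 1))   # lands in the valley between the two modes
```

The recovered threshold falls between the two intensity modes, which is the property the automated segmentation relies on before region growing refines the result.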
Digital Detection and Processing of Multiple Quadrature Harmonics for EPR Spectroscopy
Ahmad, R.; Som, S.; Kesselring, E.; Kuppusamy, P.; Zweier, J.L.; Potter, L.C.
2010-01-01
A quadrature digital receiver and associated signal estimation procedure are reported for L-band electron paramagnetic resonance (EPR) spectroscopy. The approach provides simultaneous acquisition and joint processing of multiple harmonics in both in-phase and out-of-phase channels. The digital receiver, based on a high-speed dual-channel analog-to-digital converter, allows direct digital down-conversion with heterodyne processing using digital capture of the microwave reference signal. Thus, the receiver avoids noise and nonlinearity associated with analog mixers. Also, the architecture allows for low-Q anti-alias filtering and does not require the sampling frequency to be time-locked to the microwave reference. A noise model applicable for arbitrary contributions of oscillator phase noise is presented, and a corresponding maximum-likelihood estimator of unknown parameters is also reported. The signal processing is applicable for Lorentzian lineshape under nonsaturating conditions. The estimation is carried out using a convergent iterative algorithm capable of jointly processing the in-phase and out-of-phase data in the presence of phase noise and unknown microwave phase. Cramér-Rao bound analysis and simulation results demonstrate a significant reduction in linewidth estimation error using quadrature detection, for both low and high values of phase noise. EPR spectroscopic data are also reported for illustration. PMID:20971667
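The down-conversion at the heart of such a receiver can be sketched in its simplest digital form: multiply the digitized samples by cosine and sine references and average to recover the in-phase and quadrature components. The frequencies and the boxcar averaging below are illustrative simplifications of the heterodyne processing described above.

```python
# Hedged sketch of digital quadrature demodulation of a tone.
import math

def quadrature_demodulate(samples, f_ref, fs):
    """Return (I, Q) baseband components of a tone at the reference frequency."""
    n = len(samples)
    i_sum = sum(s * math.cos(2 * math.pi * f_ref * k / fs)
                for k, s in enumerate(samples))
    q_sum = sum(-s * math.sin(2 * math.pi * f_ref * k / fs)
                for k, s in enumerate(samples))
    return 2 * i_sum / n, 2 * q_sum / n

fs, f_ref, n = 1000.0, 100.0, 1000          # an integer number of cycles
phase = math.pi / 3
x = [math.cos(2 * math.pi * f_ref * k / fs + phase) for k in range(n)]
i_comp, q_comp = quadrature_demodulate(x, f_ref, fs)
amplitude = math.hypot(i_comp, q_comp)
phase_est = math.atan2(q_comp, i_comp)
print(round(amplitude, 3), round(phase_est, 3))   # prints: 1.0 1.047
```

Averaging over an integer number of reference cycles makes the cross terms vanish, so the unit amplitude and the π/3 phase are recovered exactly; this insensitivity to the unknown microwave phase is what the quadrature architecture buys.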
Performance assessment of a data processing chain for THz imaging
NASA Astrophysics Data System (ADS)
Catapano, Ilaria; Ludeno, Giovanni; Soldovieri, Francesco
2017-04-01
Nowadays, TeraHertz (THz) imaging is receiving considerable attention as a very-high-resolution diagnostic tool in many application fields, among which security, cultural heritage, material characterization and civil engineering diagnostics. This widespread use of THz waves is due to their non-ionizing nature, their capability of penetrating non-metallic opaque materials, as well as to the technological advances that have allowed the commercialization of compact, flexible and portable systems. However, the effectiveness of THz imaging depends strongly on the adopted data processing, which aims at improving the imaging performance of the hardware device. In particular, data processing is required to mitigate detrimental and unavoidable effects like noise and signal attenuation, as well as to correct for the sample's surface topography. In this respect, we have recently proposed a strategy involving three different steps aimed at reducing noise, filtering out undesired signal introduced by the adopted THz system, and performing surface topography correction [1]. The first step performs noise filtering and exploits a procedure based on the Singular Value Decomposition (SVD) [2] of the data matrix, which does not require knowledge of the noise level and does not involve the use of a reference signal. The second step aims at removing the undesired signal that we have found to be introduced by the adopted Z-Omega Fiber-Coupled Terahertz Time Domain (FICO) system. Indeed, when the system works in high-speed mode, an undesired low-amplitude peak always occurs at the same time instant from the beginning of the observation time window and needs to be removed from the useful data matrix in order to avoid a wrong interpretation of the imaging results. The third step of the considered data processing chain is a topographic correction, which is needed in order to properly image the sample's surface and its inner structure.
Such a procedure performs an automatic alignment of the first peak of the measured waveforms by exploiting a priori information on the focus distance at which the specimen under test must be located during the measurement phase. The usefulness of the proposed data processing chain has been widely assessed in the last few months by surveying several specimens made of different materials and representative of objects of interest for civil engineering and cultural heritage diagnostics. At the conference, we will present the signal processing chain in detail together with several achieved results. REFERENCES [1] I. Catapano, F. Soldovieri, "A Data Processing Chain for Terahertz Imaging and Its Use in Artwork Diagnostics," J. Infrared Milli. Terahz. Waves, pp. 13, Nov. 2016. [2] M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging, Bristol: Institute of Physics Publishing, 1998.
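The SVD-based noise-filtering step can be illustrated generically: keep the leading singular components of the data matrix and discard the rest. A minimal sketch (the rank choice and the synthetic rank-1 data are our own assumptions):

```python
import numpy as np

def svd_denoise(data, rank):
    """Truncated-SVD filtering of a data matrix (traces x samples):
    keep the leading singular components, discard the rest as noise."""
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    s[rank:] = 0.0
    return u @ np.diag(s) @ vt

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)
clean = np.outer(np.linspace(1, 2, 50), np.sin(2 * np.pi * 5 * t))  # rank-1 signal
noisy = clean + rng.normal(0.0, 0.3, clean.shape)
denoised = svd_denoise(noisy, rank=1)
err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(denoised - clean)
print(err_after < err_before)
```

In practice the truncation rank is chosen from the singular-value spectrum itself, which is what makes the method free of an explicit noise-level estimate.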
Progress on automated data analysis algorithms for ultrasonic inspection of composites
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Forsyth, David S.; Welter, John T.
2015-03-01
Progress is presented on the development and demonstration of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. New algorithms have been implemented to reliably identify indications in time-of-flight images near the front and back walls of composite panels. Adaptive call criteria have also been applied to address sensitivity to variation in backwall signal level, panel thickness variation, and internal signal noise. ADA processing results are presented for a variety of test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions. Software tools have been developed to support both ADA algorithm design and certification, producing a statistical evaluation of indication results and false calls using a matching process with predefined truth tables. Parametric studies were performed to evaluate detection and false call results with respect to varying algorithm settings.
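A backwall-amplitude-dropout call of this kind can be sketched generically. The median-based reference level and the 6 dB drop criterion below are illustrative stand-ins for the paper's adaptive call criteria:

```python
import numpy as np

def backwall_dropout(cscan, drop_db=6.0):
    """Flag C-scan pixels whose backwall amplitude falls more than
    drop_db below the panel's median backwall level; the median-based
    reference adapts the call threshold to the overall signal level."""
    ref = np.median(cscan)
    threshold = ref * 10 ** (-drop_db / 20.0)
    return cscan < threshold

amp = np.full((20, 20), 1.0)
amp[8:12, 8:12] = 0.3          # simulated insert/delamination indication
flags = backwall_dropout(amp)
print(int(flags.sum()))
```

A production ADA chain would additionally segment the flagged pixels into indications and compare them against truth tables for false-call statistics.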
A New Methodology for Vibration Error Compensation of Optical Encoders
Lopez, Jesus; Artes, Mariano
2012-01-01
Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in position accuracy as the measurement signals depart from ideal conditions. If the encoder is working under vibration, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that adds to graduation, system and installation errors. Performance can be improved by different techniques that compensate the error through processing of the measurement signals. In this work a new "ad hoc" methodology is presented to compensate the error of the encoder when it is working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, yielding a compensation procedure with which a higher accuracy of the sensor is obtained. PMID:22666067
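The idea of correcting a deteriorated Lissajous figure can be illustrated with a simplified normalization (offset removal and amplitude equalization turn the ellipse back into a circle before the phase is read out). The paper's actual method fits the ellipse and uses a look-up table; everything below is an illustrative simplification:

```python
import numpy as np

def correct_quadrature(a, b):
    """Normalize distorted encoder quadrature signals: remove offsets
    and equalize amplitudes so the Lissajous figure becomes a circle,
    then recover the unwrapped interpolation phase."""
    a0, b0 = a - a.mean(), b - b.mean()
    a0 /= a0.std()
    b0 /= b0.std()
    return np.unwrap(np.arctan2(b0, a0))

theta = np.linspace(0, 4 * np.pi, 1000)
# Distorted quadrature pair: offsets and unequal amplitudes.
sig_a = 1.3 * np.cos(theta) + 0.2
sig_b = 0.8 * np.sin(theta) - 0.1
phase = correct_quadrature(sig_a, sig_b)
print(np.max(np.abs(phase - theta)) < 0.02)
```

Without the normalization, the offset and gain mismatch would appear directly as a periodic interpolation error in the recovered position.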
NASA Astrophysics Data System (ADS)
Zhou, Ping; Zev Rymer, William
2004-12-01
The number of motor unit action potentials (MUAPs) appearing in the surface electromyogram (EMG) signal is directly related to motor unit recruitment and firing rates, and therefore offers potentially valuable information about the level of activation of the motoneuron pool. In this paper, based on morphological features of the surface MUAPs, we estimate the number of MUAPs present in the surface EMG by counting the negative peaks in the signal. Several signal processing procedures are applied to the surface EMG to facilitate this peak counting process. The performance of this MUAP number estimation approach is first illustrated using surface EMG simulations. Then, by evaluating the peak counting results from EMG records detected by a very selective surface electrode at different contraction levels of the first dorsal interosseous (FDI) muscle, the utility and limitations of such direct peak counts for MUAP number estimation in the surface EMG are further explored.
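Negative-peak counting of this kind can be sketched as follows. The threshold, spike shape and noise level are illustrative, not the paper's processing chain:

```python
import numpy as np

def count_negative_peaks(signal, threshold):
    """Count local minima below a (negative) threshold, as a crude
    proxy for the number of MUAP negative peaks in the record."""
    s = signal
    minima = (s[1:-1] < s[:-2]) & (s[1:-1] < s[2:]) & (s[1:-1] < threshold)
    return int(minima.sum())

rng = np.random.default_rng(3)
sig = rng.normal(0.0, 0.01, 2000)           # baseline noise
for pos in (200, 700, 1300, 1800):          # four simulated MUAP spikes
    sig[pos - 3:pos + 4] -= np.hanning(7)   # negative-going peak
n = count_negative_peaks(sig, threshold=-0.5)
print(n)
```

Real surface EMG requires the preprocessing steps the paper describes (filtering, selective electrodes) precisely because overlapping MUAPs and noise make raw peak counts unreliable.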
Model-based damage evaluation of layered CFRP structures
NASA Astrophysics Data System (ADS)
Munoz, Rafael; Bochud, Nicolas; Rus, Guillermo; Peralta, Laura; Melchor, Juan; Chiachío, Juan; Chiachío, Manuel; Bond, Leonard J.
2015-03-01
An ultrasonic evaluation technique for damage identification of layered CFRP structures is presented. This approach relies on a model-based estimation procedure that combines experimental data and simulation of ultrasonic damage-propagation interactions. The CFRP structure, a [0/90]4s lay-up, has been tested in an immersion through-transmission experiment, where a scan has been performed on a damaged specimen. Most ultrasonic techniques in industrial practice consider only a few features of the received signals, namely time of flight, amplitude, attenuation, frequency content, and so forth. In this case, once signals are captured, an algorithm is used to reconstruct the complete signal waveform and extract the unknown damage parameters by means of modeling procedures. A linear version of the data processing has been performed, in which only the Young's modulus was monitored, and in a second, nonlinear version the first-order nonlinear coefficient β was incorporated to test the possibility of detecting early damage. The aforementioned physical simulation models are solved by the transfer-matrix formalism, which has been extended from the linear case to the nonlinear harmonic-generation technique. The damage parameter search strategy is based on minimizing the mismatch between the captured and simulated signals in the time domain in an automated way using genetic algorithms. By processing all scanned locations, a C-scan of the parameter of each layer can be reconstructed, providing information describing the state of each layer and each interface. Damage can be located and quantified in terms of changes in the selected parameter, with a measurable extent. In the case of the first-order nonlinear coefficient, evidence is provided of higher sensitivity to damage than imaging the linearly estimated Young's modulus.
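The automated model-fitting search can be illustrated with a toy forward model and a minimal GA-style loop. The Gaussian-echo model and every parameter below are our own stand-ins for the paper's transfer-matrix simulation:

```python
import numpy as np

def simulate(modulus, t):
    """Toy forward model: a Gaussian echo whose arrival time scales
    with 1/sqrt(modulus); it stands in for the transfer-matrix model."""
    arrival = 1.0 / np.sqrt(modulus)
    return np.exp(-((t - arrival) ** 2) / (2 * 0.05 ** 2))

rng = np.random.default_rng(4)
t = np.linspace(0.0, 2.0, 400)
measured = simulate(9.0, t)          # "captured" signal, true modulus = 9

def cost(e):
    # Time-domain mismatch between simulated and captured waveforms.
    return np.sum((simulate(e, t) - measured) ** 2)

# Minimal genetic-algorithm-style search: keep an elite, mutate it,
# and inject random immigrants each generation.
pop = rng.uniform(1.0, 20.0, 30)
for _ in range(40):
    elite = pop[np.argsort([cost(e) for e in pop])[:10]]
    mutants = np.clip(elite + rng.normal(0.0, 0.5, 10), 1.0, 20.0)
    pop = np.concatenate([elite, mutants, rng.uniform(1.0, 20.0, 10)])
best = min(pop, key=cost)
print(abs(best - 9.0) < 0.5)
```

Repeating such a search at every scanned location is what yields the per-layer parameter C-scan described in the abstract.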
Pietrogrande, Maria Chiara; Basaglia, Giulia; Dondi, Francesco
2009-05-01
This paper discusses the development of a comprehensive method for the simultaneous analysis of personal care products (PCPs) based on SPE and GC-MS. The method was developed on 29 target compounds chosen to represent PCPs belonging to different chemical classes: surfactants in detergents (alkyl benzenes), fragrances in cosmetics (nitro and polycyclic musks), antioxidants and preservatives (phenols), and plasticizers (phthalates), displaying a wide range of volatility, polarity and water solubility. In addition to the conventional C(18) stationary phase, a surface-modified styrene divinylbenzene polymeric phase (Strata-X SPE cartridge) has been investigated as suitable for the simultaneous extraction of several PCPs with polar and non-polar characteristics. For both sorbents, different solvent compositions and eluting conditions were tested and compared in order to achieve high extraction efficiency for as many sample components as possible. Comparison of the behavior of the two cartridges reveals that, overall, Strata-X provides better efficiency, with extraction recovery higher than 70% for most of the PCPs investigated. The best results were obtained under the following operating conditions: an evaporation temperature of 40 degrees C, elution on the Strata-X cartridge using a volume of 15 mL of ethyl acetate (EA) as solvent, and operating with a slow flow rate (-10 kPa). In addition to the conventional method based on peak integration, a chemometric approach based on computation of the experimental autocovariance function (EACVF(tot)) was applied to the complex GC-MS signal: the percentage recovery and information on peak abundance distribution can be evaluated for each procedure step. The PC-based signal processing proved very helpful in assisting the development of the analytical procedure, since it saves labor and time and increases result reliability in handling complex GC signals.
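The experimental autocovariance function at the heart of the chemometric approach can be sketched generically. The synthetic chromatogram, peak spacing and lag window below are illustrative:

```python
import numpy as np

def eacvf(signal, max_lag):
    """Experimental autocovariance function of a digitized chromatogram:
    mean-removed covariance at increasing lags (a simplified version of
    the EACVF used to characterize complex GC-MS signals)."""
    y = signal - signal.mean()
    n = y.size
    return np.array([np.sum(y[:n - k] * y[k:]) / n for k in range(max_lag)])

# Synthetic chromatogram: Gaussian peaks at a regular 50-sample spacing.
x = np.zeros(1000)
for center in range(100, 1000, 50):
    x += np.exp(-((np.arange(1000) - center) ** 2) / (2 * 3.0 ** 2))
acvf = eacvf(x, 200)
# The regular spacing shows up as a secondary ACVF maximum at lag 50.
lag = int(np.argmax(acvf[25:75])) + 25
print(lag)
```

The ACVF's value here is that peak-abundance statistics can be read from it even when individual peaks in the real chromatogram overlap too badly to integrate one by one.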
EARLINET Single Calculus Chain - technical - Part 1: Pre-processing of raw lidar data
NASA Astrophysics Data System (ADS)
D'Amico, Giuseppe; Amodeo, Aldo; Mattis, Ina; Freudenthaler, Volker; Pappalardo, Gelsomina
2016-02-01
In this paper we describe an automatic tool for the pre-processing of aerosol lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of ELPP, particular attention has been paid to make the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of ELPP is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of ELPP. The whole SCC has been tested with the same synthetic data sets, which were used for the EARLINET algorithm inter-comparison exercise. ELPP has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.
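Two of the corrections ELPP performs, background subtraction and range correction, can be sketched generically. The far-range background estimate and the synthetic profile below are illustrative; ELPP itself additionally handles dead time, gluing and trigger delay:

```python
import numpy as np

def preprocess(raw, bg_bins=100):
    """Minimal lidar pre-processing sketch: subtract the atmospheric/
    electronic background estimated from the far-range tail, then apply
    range correction (signal times r squared)."""
    background = raw[-bg_bins:].mean()
    r = np.arange(1, raw.size + 1, dtype=float)
    return (raw - background) * r ** 2

r = np.arange(1, 2001, dtype=float)
raw = 1e6 * np.exp(-r / 500.0) / r ** 2 + 5.0   # signal + constant background
rcs = preprocess(raw)
# After correction the range-corrected signal decays with range
# instead of being swamped by the 1/r^2 geometry.
print(rcs[10] > rcs[1000])
```

The range-corrected signal produced this way is the standard input quantity for the optical retrieval algorithms downstream of the pre-processor.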
Errors in the estimation method for the rejection of vibrations in adaptive optics systems
NASA Astrophysics Data System (ADS)
Kania, Dariusz
2017-06-01
In recent years the problem of the impact of mechanical vibrations in adaptive optics (AO) systems has drawn renewed attention. These vibrations are damped sinusoidal signals and have a deleterious effect on the system. One software solution to reject the vibrations is an adaptive method called AVC (Adaptive Vibration Cancellation), whose procedure has three steps: estimation of the perturbation parameters, estimation of the frequency response of the plant, and update of the reference signal to reject/minimize the vibration. In the first step the estimation method is a very important issue. A very accurate and fast (below 10 ms) method for estimating these parameters has been presented in several publications in recent years. The method is based on spectrum interpolation and MSD time windows, and it can be used to estimate multifrequency signals. In this paper the estimation method is used in the AVC method to increase the system performance. Several parameters affect the accuracy of the obtained results, e.g. CiR - the number of signal periods in a measurement window, N - the number of samples in the FFT procedure, H - the time window order, SNR, b - the number of ADC bits, and γ - the damping ratio of the tested signal. Systematic errors increase when N, CiR and H decrease and when γ increases. The systematic error is approximately 10^-10 Hz/Hz for N = 2048 and CiR = 0.1. This paper presents equations that can be used to estimate the maximum systematic errors for given values of H, CiR and N before the start of the estimation process.
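Spectrum-interpolation frequency estimation can be illustrated with the common parabolic-interpolation variant: locate the FFT peak bin, then refine it from the neighbouring bins. This is a generic stand-in, not the MSD-window method of the cited publications:

```python
import numpy as np

def interpolated_frequency(x, fs):
    """Estimate a sinusoid's frequency beyond the FFT bin resolution by
    parabolic interpolation of the log-magnitude spectrum around the
    peak bin."""
    w = np.hanning(x.size)
    spec = np.abs(np.fft.rfft(x * w))
    k = int(np.argmax(spec[1:-1])) + 1
    a, b, c = np.log(spec[k - 1:k + 2])
    delta = 0.5 * (a - c) / (a - 2 * b + c)   # peak offset in bins
    return (k + delta) * fs / x.size

fs = 1000.0
t = np.arange(2048) / fs
x = np.exp(-1.0 * t) * np.sin(2 * np.pi * 123.4 * t)   # damped vibration
f_hat = interpolated_frequency(x, fs)
print(abs(f_hat - 123.4) < 0.2)
```

Damping (γ) broadens the spectral peak, which is one of the systematic-error sources the paper quantifies.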
NASA Astrophysics Data System (ADS)
Humphreys, Kenneth; Ward, Tomas; Markham, Charles
2007-04-01
We present a camera-based device capable of capturing two photoplethysmographic (PPG) signals at two different wavelengths simultaneously, in a remote, noncontact manner. The system comprises a complementary metal-oxide semiconductor camera and a dual-wavelength array of light-emitting diodes (760 and 880 nm). By alternately illuminating a region of tissue with each wavelength of light, and detecting the backscattered photons with the camera at a rate of 16 frames/wavelength, two multiplexed PPG waveforms are captured simultaneously. This process is the basis of pulse oximetry, and we describe how, with the inclusion of a calibration procedure, this system could be used as a noncontact pulse oximeter to measure arterial oxygen saturation (SpO2) remotely. Results are presented from an experiment on ten subjects, exhibiting normal SpO2 readings, that demonstrate the instrument's ability to capture signals from a range of subjects under realistic lighting and environmental conditions. We compare the signals captured by the noncontact system to a conventional PPG signal captured concurrently from a finger, and show by means of a Bland-Altman test [J. Bland and D. Altman, Lancet 327, 307 (1986); Statistician 32, 307 (1983)] that the noncontact device is comparable to a contact device as a monitor of heart rate. We highlight some considerations that should be made when using camera-based "integrative" sampling methods and demonstrate through simulation the suitability of the captured PPG signals for the application of existing pulse oximetry calibration procedures.
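The calibration idea rests on the standard "ratio of ratios" of pulse oximetry: the pulsatile (AC) over baseline (DC) component at each wavelength, divided. A generic sketch; the linear SpO2 mapping used below is a textbook form, not the paper's calibration:

```python
import numpy as np

def ratio_of_ratios(ppg_760, ppg_880):
    """Compute R = (AC/DC)_760 / (AC/DC)_880 from the two PPG waveforms;
    empirical calibration curves map R to arterial oxygen saturation."""
    def ac_dc(sig):
        return (sig.max() - sig.min()) / sig.mean()
    return ac_dc(ppg_760) / ac_dc(ppg_880)

t = np.linspace(0, 10, 1000)
pulse = np.sin(2 * np.pi * 1.2 * t)      # 72 beats-per-minute pulse
ppg_760 = 1.0 + 0.02 * pulse             # weaker pulsatile component
ppg_880 = 1.0 + 0.04 * pulse
R = ratio_of_ratios(ppg_760, ppg_880)
spo2 = 110.0 - 25.0 * R                  # generic empirical mapping
print(round(R, 2), round(spo2, 1))
```

The paper's point is that the same R computation applies whether the PPG comes from a finger probe or from camera pixels, once a suitable calibration is established.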
Method to improve the blade tip-timing accuracy of fiber bundle sensor under varying tip clearance
NASA Astrophysics Data System (ADS)
Duan, Fajie; Zhang, Jilong; Jiang, Jiajia; Guo, Haotian; Ye, Dechao
2016-01-01
Blade vibration measurement based on the blade tip-timing method has become an industry-standard procedure. Fiber bundle sensors are widely used for tip-timing measurement. However, the variation of clearance between the sensor and the blade will bring a tip-timing error to fiber bundle sensors due to the change in signal amplitude. This article presents methods based on software and hardware to reduce the error caused by the tip clearance change. The software method utilizes both the rising and falling edges of the tip-timing signal to determine the blade arrival time, and a calibration process suitable for asymmetric tip-timing signals is presented. The hardware method uses an automatic gain control circuit to stabilize the signal amplitude. Experiments are conducted and the results prove that both methods can effectively reduce the impact of tip clearance variation on the blade tip-timing and improve the accuracy of measurements.
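The software method's use of both edges can be illustrated on a symmetric pulse, where the rise/fall midpoint is insensitive to amplitude (tip-clearance) changes that would shift a single-edge timing. All waveform parameters below are illustrative:

```python
import numpy as np

def arrival_time(signal, t, level):
    """Blade arrival time as the midpoint of the rising- and
    falling-edge threshold crossings: on a symmetric pulse, amplitude
    changes move both edges symmetrically, so the midpoint is stable."""
    above = signal >= level
    idx = np.flatnonzero(np.diff(above.astype(int)))
    rise, fall = idx[0], idx[-1]
    return 0.5 * (t[rise] + t[fall])

t = np.linspace(0.0, 1.0, 2001)
pulse = np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
# Same pulse at two different amplitudes (two tip clearances):
t1 = arrival_time(1.0 * pulse, t, level=0.2)
t2 = arrival_time(0.5 * pulse, t, level=0.2)
print(abs(t1 - t2) < 1e-3)
```

The paper's calibration step extends this idea to asymmetric pulses, and the hardware AGC attacks the same problem by holding the amplitude constant in the first place.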
SNIa detection in the SNLS photometric analysis using Morphological Component Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Möller, A.; Ruhlmann-Kleider, V.; Neveu, J.
2015-04-01
Detection of supernovae (SNe) and, more generally, of transient events in large surveys can produce numerous false detections. In the case of deferred processing of survey images, this implies reconstructing complete light curves for all detections, requiring sizable processing time and resources. Optimizing the detection of transient events is thus an important issue for both present and future surveys. We present here the optimization done in the SuperNova Legacy Survey (SNLS) for the 5-year deferred photometric analysis. In this analysis, detections are derived from stacks of subtracted images, with one stack per lunation. The 3-year analysis provided 300,000 detections dominated by signals of bright objects that were not perfectly subtracted. Allowing these artifacts to be detected leads not only to a waste of resources but also to possible signal coordinate contamination. We developed a subtracted-image stack treatment to reduce the number of non-SN-like events using morphological component analysis. This technique exploits the morphological diversity of the objects to be detected to extract the signal of interest. At the level of our subtraction stacks, SN-like events are rather circular objects, while most spurious detections exhibit different shapes. A two-step procedure was necessary to obtain a proper evaluation of the noise in the subtracted image stacks and thus a reliable signal extraction. We also set up a new detection strategy to obtain coordinates with good resolution for the extracted signal. SNIa Monte Carlo (MC) generated images were used to study detection efficiency and coordinate resolution. When tested on SNLS 3-year data, this procedure decreases the number of detections by a factor of two, while losing only 10% of SN-like events, almost all of them faint. MC results show that SNIa detection efficiency is equivalent to that of the original method for bright events, while the coordinate resolution is improved.
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)
2005-01-01
A system and method providing surveillance of an asset, such as a process and/or apparatus, using training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution correlative to normal asset operation, and then utilize the fitted probability density function in a dynamic statistical hypothesis test to provide improved asset surveillance.
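The fitted-PDF hypothesis test is in the spirit of Wald's sequential probability ratio test (SPRT). A minimal Gaussian-residual sketch; the patent's numerically fitted density is replaced here by a known Gaussian model, so the parameters and boundaries are illustrative:

```python
import numpy as np

def sprt(residuals, sigma, shift, alpha=0.01, beta=0.01):
    """Sequential probability ratio test on a residual signal: decide
    between H0 (zero-mean Gaussian, std sigma) and H1 (mean shifted by
    `shift`), accumulating the log-likelihood ratio sample by sample."""
    a = np.log(beta / (1 - alpha))       # accept-H0 boundary
    b = np.log((1 - beta) / alpha)       # accept-H1 (fault) boundary
    llr = 0.0
    for i, r in enumerate(residuals):
        # Gaussian mean-shift log-likelihood ratio increment.
        llr += (shift / sigma**2) * (r - shift / 2.0)
        if llr >= b:
            return "fault", i
        if llr <= a:
            return "normal", i
    return "undecided", len(residuals) - 1

rng = np.random.default_rng(6)
normal = rng.normal(0.0, 1.0, 200)       # healthy residuals
faulty = rng.normal(1.0, 1.0, 200)       # residuals with a mean shift
print(sprt(normal, 1.0, 1.0)[0], sprt(faulty, 1.0, 1.0)[0])
```

Fitting the density to training residuals, as the patent describes, replaces the Gaussian assumption with whatever distribution the healthy asset actually exhibits.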
Surgical Force-Measuring Probe
NASA Technical Reports Server (NTRS)
Tcheng, Ping; Roberts, Paul W.; Scott, Charles E.
1993-01-01
Aerodynamic balance adapted to medical use. Electromechanical probe measures forces and moments applied to human tissue during surgery. Measurements used to document optimum forces and moments for surgical research and training. In neurosurgical research, measurements correlated with monitored responses of nerves. In training, students learn procedures by emulating forces used by experienced surgeons. Lightweight, pen-shaped probe easily held by surgeon. Cable feeds output signals to processing circuitry.
Statistical Properties of a Two-Stage Procedure for Creating Sky Flats
NASA Astrophysics Data System (ADS)
Crawford, R. W.; Trueblood, M.
2004-05-01
Accurate flat fielding is an essential factor in image calibration and good photometry, yet no single method for creating flat fields is both practical and effective in all cases. At Winer Observatory, robotic telescope operation and the research program of Near Earth Object follow-up astrometry favor the use of sky flats formed from the many images that are acquired during a night. This paper reviews the statistical properties of the median-combine process used to create sky flats and discusses a computationally efficient procedure for two-stage combining of many images to form sky flats with relatively high signal-to-noise ratio (SNR). This procedure is in use at Winer for the flat field calibration of unfiltered images taken for NEO follow-up astrometry.
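The single-stage median combine underlying the procedure can be sketched as follows. Frame counts, star injection and noise levels are illustrative; the two-stage variant of the paper median-combines subgroups first and then combines the subgroup medians, trading a little SNR for memory:

```python
import numpy as np

def sky_flat(images):
    """Median-combine normalized science frames into a sky flat: the
    median rejects stars, so only the pixel-to-pixel response remains."""
    stack = np.array([img / np.median(img) for img in images])
    flat = np.median(stack, axis=0)
    return flat / np.median(flat)

rng = np.random.default_rng(7)
true_flat = 1.0 + 0.1 * rng.random((32, 32))     # pixel-to-pixel response
frames = []
for _ in range(30):
    sky = rng.uniform(800, 1200)                 # varying sky level
    frame = true_flat * sky + rng.normal(0, 10, (32, 32))
    # A few "stars" that the median should reject:
    ys, xs = rng.integers(0, 32, 3), rng.integers(0, 32, 3)
    frame[ys, xs] += 5000
    frames.append(frame)
flat = sky_flat(frames)
norm_true = true_flat / np.median(true_flat)
print(np.max(np.abs(flat - norm_true)) < 0.05)
```

The per-frame normalization is what lets frames with very different sky brightness be combined without biasing the flat.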
EFQPSK Versus CERN: A Comparative Study
NASA Technical Reports Server (NTRS)
Borah, Deva K.; Horan, Stephen
2001-01-01
This report presents a comparative study of Enhanced Feher's Quadrature Phase Shift Keying (EFQPSK) and Constrained Envelope Root Nyquist (CERN) techniques. These two techniques have been developed in recent times to provide high spectral and power efficiencies in a nonlinear amplifier environment. The purpose of this study is to gain insight into these techniques and to help system planners and designers with an appropriate set of guidelines for using them. The comparative study presented in this report relies on effective simulation models and procedures. Therefore, a significant part of this report is devoted to understanding the mathematical and simulation models of the techniques and their set-up procedures. In particular, mathematical models of EFQPSK and CERN, effects of the sampling rate in discrete-time signal representation, and modeling of nonlinear amplifiers and predistorters have been considered in detail. The results of this study show that both EFQPSK and CERN signals provide spectrally efficient communications compared to filtered conventional linear modulation techniques when a nonlinear power amplifier is used. However, there are important differences. The spectral efficiency of CERN signals, with a small amount of input backoff, is significantly better than that of EFQPSK signals if the nonlinear amplifier is an ideal clipper. However, to achieve such spectral efficiencies with a practical nonlinear amplifier, CERN processing requires a predistorter that effectively translates the amplifier's characteristics close to those of an ideal clipper. Thus, the spectral performance of CERN signals depends strongly on the predistorter. EFQPSK signals, on the other hand, do not need such predistorters, since their spectra are almost unaffected by the nonlinear amplifier. This report discusses several receiver structures for EFQPSK signals.
It is observed that optimal receiver structures can be realized for both coded and uncoded EFQPSK signals with only a modest increase in computational complexity. When a nonlinear amplifier is used, the bit error rate (BER) performance of CERN signals with a matched-filter receiver is found to be more than one decibel (dB) worse than the bit error performance of EFQPSK signals. Although channel coding is found to provide BER performance improvement for both EFQPSK and CERN signals, the performance of EFQPSK signals remains better than that of CERN. Optimal receiver structures for CERN signals with nonlinear equalization are left as possible future work. Based on the numerical results, it is concluded that, in nonlinear channels, CERN processing leads toward better bandwidth efficiency with a compromise in power efficiency. Hence, for bandwidth-efficient communications, CERN is a good solution provided that effective adaptive predistorters can be realized. EFQPSK signals, on the other hand, provide a good power-efficient solution with a compromise in bandwidth efficiency.
Signal detection via residence-time asymmetry in noisy bistable devices.
Bulsara, A R; Seberino, C; Gammaitoni, L; Karlsson, M F; Lundqvist, B; Robinson, J W C
2003-01-01
We introduce a dynamical readout description for a wide class of nonlinear dynamic sensors operating in a noisy environment. The presence of weak unknown signals is assessed via the monitoring of the residence time in the metastable attractors of the system, in the presence of a known, usually time-periodic, bias signal. This operational scenario can mitigate the effects of sensor noise, providing a greatly simplified readout scheme, as well as significantly reduced processing procedures. Such devices can also show a wide variety of interesting dynamical features. This scheme for quantifying the response of a nonlinear dynamic device has been implemented in experiments involving a simple laboratory version of a fluxgate magnetometer. We present the results of the experiments and demonstrate that they match the theoretical predictions reasonably well.
New coherent laser communication detection scheme based on channel-switching method.
Liu, Fuchuan; Sun, Jianfeng; Ma, Xiaoping; Hou, Peipei; Cai, Guangyu; Sun, Zhiwei; Lu, Zhiyong; Liu, Liren
2015-04-01
A new coherent laser communication detection scheme based on the channel-switching method is proposed. The detection front end of this scheme comprises a 90° optical hybrid and two balanced photodetectors, which output the in-phase (I) and quadrature-phase (Q) channel signal currents, respectively. With this method, ultrahigh-speed analog-to-digital conversion of the I- or Q-channel signal is not required. The phase error between the signal and local lasers is obtained by a simple analog circuit. Using the phase error signal, the signals of the I/Q channels are switched alternately. The principle of this detection scheme is presented. Moreover, the sensitivity of this scheme is compared with that of homodyne detection with an optical phase-locked loop. An experimental setup was constructed to verify the proposed detection scheme. The offline processing procedure and results are presented. This scheme can be realized with a simple structure and has potential applications in cost-effective high-speed laser communication.
Signalling maps in cancer research: construction and data analysis
Kondratova, Maria; Sompairac, Nicolas; Barillot, Emmanuel; Zinovyev, Andrei
2018-01-01
Abstract Generation and usage of high-quality molecular signalling network maps can be augmented by standardizing notations, establishing curation workflows and application of computational biology methods to exploit the knowledge contained in the maps. In this manuscript, we summarize the major aims and challenges of assembling information in the form of comprehensive maps of molecular interactions. Mainly, we share our experience gained while creating the Atlas of Cancer Signalling Network. In the step-by-step procedure, we describe the map construction process and suggest solutions for map complexity management by introducing a hierarchical modular map structure. In addition, we describe the NaviCell platform, a computational technology using Google Maps API to explore comprehensive molecular maps similar to geographical maps and explain the advantages of semantic zooming principles for map navigation. We also provide the outline to prepare signalling network maps for navigation using the NaviCell platform. Finally, several examples of cancer high-throughput data analysis and visualization in the context of comprehensive signalling maps are presented. PMID:29688383
NASA Technical Reports Server (NTRS)
Houseman, J.; Cerini, D. J. (Inventor)
1976-01-01
A process and apparatus are described for producing hydrogen-rich product gases. A spray of liquid hydrocarbon is mixed with a stream of air in a startup procedure and the mixture is ignited for partial oxidation. The stream of air is then heated by the resulting combustion until it reaches a temperature at which a signal is produced. The signal triggers a two-way valve that directs liquid hydrocarbon from a spraying mechanism to a vaporizing mechanism, with which a vaporized hydrocarbon is formed. The vaporized hydrocarbon is subsequently mixed with the heated air in the combustion chamber, where partial oxidation takes place and hydrogen-rich product gases are produced.
Tracking signal test to monitor an intelligent time series forecasting model
NASA Astrophysics Data System (ADS)
Deng, Yan; Jaraiedi, Majid; Iskander, Wafik H.
2004-03-01
Extensive research has been conducted on the subject of intelligent time series forecasting, including many variations on the use of neural networks. However, investigation of model adequacy over time, after the training process is completed, remains to be fully explored. In this paper we demonstrate how a smoothed-error tracking signal test can be incorporated into a neuro-fuzzy model to monitor the forecasting process and serve as a statistical measure for keeping the forecasting model up-to-date. The proposed monitoring procedure is effective in the detection of nonrandom changes due to model inadequacy, lack of unbiasedness in the estimation of model parameters, or deviations from the existing patterns. This powerful detection device will result in improved forecast accuracy in the long run. An example data set has been used to demonstrate the application of the proposed method.
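The abstract does not spell out the tracking-signal computation; a minimal sketch of the classic smoothed-error tracking signal in the style of Trigg (1964), on which such monitoring tests are based, might look like the following. The smoothing constant and control limit are illustrative choices, not values from the paper:

```python
def tracking_signal(errors, alpha=0.1, limit=0.5):
    """Smoothed-error tracking signal for forecast monitoring.

    errors : sequence of one-step-ahead forecast errors
    alpha  : exponential smoothing constant
    limit  : |TS| above this flags a biased (out-of-control) forecast

    Returns the tracking-signal values and out-of-control flags.
    """
    e_s, mad = 0.0, 1e-9          # smoothed error, smoothed absolute error
    ts, flags = [], []
    for e in errors:
        e_s = alpha * e + (1 - alpha) * e_s
        mad = alpha * abs(e) + (1 - alpha) * mad
        t = e_s / mad             # ratio near 0: unbiased; near +/-1: biased
        ts.append(t)
        flags.append(abs(t) > limit)
    return ts, flags

# Unbiased alternating errors keep |TS| small; a sustained positive
# bias drives the ratio toward 1 and trips the flag.
ts_u, _ = tracking_signal([1, -1] * 20)
ts_b, flag_b = tracking_signal([2.0] * 20)
```

The ratio of smoothed error to smoothed absolute error is what makes the statistic scale-free, so a single control limit can monitor series of very different magnitudes.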
Non-Invasive UWB Sensing of Astronauts' Breathing Activity
Baldi, Marco; Cerri, Graziano; Chiaraluce, Franco; Eusebi, Lorenzo; Russo, Paola
2015-01-01
The use of a UWB system for sensing breathing activity of astronauts must account for many critical issues specific to the space environment. The aim of this paper is twofold. The first concerns the definition of design constraints about the pulse amplitude and waveform to transmit, as well as the immunity requirements of the receiver. The second issue concerns the assessment of the procedures and the characteristics of the algorithms to use for signal processing to retrieve the breathing frequency and respiration waveform. The algorithm has to work correctly in the presence of surrounding electromagnetic noise due to other sources in the environment. The highly reflecting walls increase the difficulty of the problem and the hostile scenario has to be accurately characterized. Examples of signal processing techniques able to recover breathing frequency in significant and realistic situations are shown and discussed. PMID:25558995
Forward and correctional OFDM-based visible light positioning
NASA Astrophysics Data System (ADS)
Li, Wei; Huang, Zhitong; Zhao, Runmei; He, Peixuan; Ji, Yuefeng
2017-09-01
Visible light positioning (VLP) has attracted much attention in both academic and industrial areas due to the extensive deployment of light-emitting diodes (LEDs) as next-generation green lighting. Generally, the coverage of a single LED lamp is limited, so LED arrays are always utilized to achieve uniform illumination within a large-scale indoor environment. However, in such a dense LED deployment scenario, the superposition of light signals becomes an important challenge for accurate VLP. To solve this problem, we propose a forward and correctional orthogonal frequency division multiplexing (OFDM)-based VLP (FCO-VLP) scheme with low complexity in signal generation and processing. In the first, forward procedure of FCO-VLP, an initial position is obtained by the trilateration method based on OFDM subcarriers. The positioning accuracy is further improved in the second, correctional procedure based on a database of reference points. As demonstrated in our experiments, our approach yields an improved average positioning error of 4.65 cm, a 24.2% enhancement in positioning accuracy compared with the trilateration method alone.
Inverse Modelling to Obtain Head Movement Controller Signal
NASA Technical Reports Server (NTRS)
Kim, W. S.; Lee, S. H.; Hannaford, B.; Stark, L.
1984-01-01
Experimentally obtained dynamics of time-optimal, horizontal head rotations have previously been simulated by a sixth order, nonlinear model driven by rectangular control signals. Electromyography (EMG) recordings have aspects which differ in detail from the theoretical rectangular pulsed control signal. Control signals for time-optimal as well as sub-optimal horizontal head rotations were obtained by means of an inverse modelling procedure. With experimentally measured dynamical data serving as the input, this procedure inverts the model to produce the neurological control signals driving muscles and plant. The relationships between these controller signals and EMG records should contribute to the understanding of the neurological control of movements.
A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.
Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio
2017-11-01
Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
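The generative model described in the abstract (zero-mean Gaussian noise whose variance is itself an inverse-gamma random variable) can be sketched with the standard library alone. The shape and scale values below are arbitrary illustration, not the paper's estimates; an inverse-gamma draw is obtained as scale over a unit-scale gamma draw:

```python
import random
import statistics

def simulate_emg(n, shape=3.0, scale=2.0, seed=0):
    """Draw n EMG-like samples: x ~ N(0, v) with v ~ InvGamma(shape, scale).

    Uses the identity V ~ InvGamma(a, b)  <=>  V = b / G with G ~ Gamma(a, 1),
    so no special sampler is needed beyond random.gammavariate.
    """
    rng = random.Random(seed)
    xs = []
    for _ in range(n):
        v = scale / rng.gammavariate(shape, 1.0)   # inverse-gamma variance
        xs.append(rng.gauss(0.0, v ** 0.5))        # conditionally Gaussian
    return xs

# Marginally, x is heavy-tailed (a scaled Student-t) with overall variance
# E[v] = scale / (shape - 1) = 1.0 for these parameters.
marginal_var = statistics.pvariance(simulate_emg(20000))
```

The heavy tails of the marginal distribution are exactly the "noise superimposed onto the variance" that a fixed-variance Gaussian model cannot represent.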
NASA Astrophysics Data System (ADS)
Rambaldi, Marcello; Filimonov, Vladimir; Lillo, Fabrizio
2018-03-01
Given a stationary point process, an intensity burst is defined as a short time period during which the number of counts is larger than the typical count rate. It might signal a local nonstationarity or the presence of an external perturbation to the system. In this paper we propose a procedure for the detection of intensity bursts within the Hawkes process framework. By using a model selection scheme we show that our procedure can be used to detect intensity bursts when both their occurrence time and their total number is unknown. Moreover, the initial time of the burst can be determined with a precision given by the typical interevent time. We apply our methodology to the midprice change in foreign exchange (FX) markets showing that these bursts are frequent and that only a relatively small fraction is associated with news arrival. We show lead-lag relations in intensity burst occurrence across different FX rates and we discuss their relation with price jumps.
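The paper's Hawkes-based model-selection scheme is far more principled than what fits in a short sketch, but the underlying notion of an intensity burst, a short period whose count rate exceeds the typical rate, can be illustrated with a crude sliding-window count detector (window length and rate factor are arbitrary illustrative choices):

```python
def find_bursts(event_times, window, rate_factor=3.0):
    """Return indices of time windows whose event count exceeds
    rate_factor times the average count per window.

    This is only a toy stand-in for 'more counts than the typical
    count rate'; it ignores the self-exciting (Hawkes) structure and
    the model-selection test used in the paper.
    """
    if not event_times:
        return []
    n_win = int(max(event_times) // window) + 1
    counts = [0] * n_win
    for t in event_times:
        counts[int(t // window)] += 1
    mean = len(event_times) / n_win
    return [i for i, c in enumerate(counts) if c > rate_factor * mean]

# A uniform stream (one event every 0.5 s over 100 s) with a dense
# cluster of 100 extra events packed into the second after t = 50.
events = [i * 0.5 for i in range(200)] + [50.0 + i * 0.01 for i in range(100)]
```

A genuine Hawkes treatment would instead compare the likelihood of a baseline self-exciting model against one augmented with a localized intensity term, which is what lets the burst onset be located to within a typical interevent time.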
Jin, Yulong; Huang, Yanyan; Xie, Yunfeng; Hu, Wenbing; Wang, Fuyi; Liu, Guoquan; Zhao, Rui
2012-01-30
The cyclic oxidation and reduction of methionine (Met)-containing peptides and proteins play important roles in biological systems. This work analyzed the cyclic oxidation and reduction processes of a Met-containing peptide that is likely to be involved in cell signal transduction pathways. To mimic the biological oxidation condition, hydrogen peroxide was used as the reactive oxygen species to oxidize the peptide. Reversed-phase high-performance liquid chromatography and mass spectrometry were employed to monitor the reactions and characterize the structural changes of the products. A rapid reduction procedure was developed by simply using KI as the reductant, which is green and highly efficient. By investigating the cyclic oxidation and reduction process, our work provides a new perspective for studying the function and mechanism of Met-containing peptides and proteins during cell signaling processes as well as diseases. Copyright © 2011 Elsevier B.V. All rights reserved.
Auger, E.; D'Auria, L.; Martini, M.; Chouet, B.; Dawson, P.
2006-01-01
We present a comprehensive processing tool for the real-time analysis of the source mechanism of very long period (VLP) seismic data based on waveform inversions performed in the frequency domain for a point source. A search for the source providing the best-fitting solution is conducted over a three-dimensional grid of assumed source locations, in which the Green's functions associated with each point source are calculated by finite differences using the reciprocal relation between source and receiver. Tests performed on 62 nodes of a Linux cluster indicate that the waveform inversion and search for the best-fitting signal over 100,000 point sources require roughly 30 s of processing time for a 2-min-long record. The procedure is applied to post-processing of a data archive and to continuous automatic inversion of real-time data at Stromboli, providing insights into different modes of degassing at this volcano. Copyright 2006 by the American Geophysical Union.
Inflight IFR procedures simulator
NASA Technical Reports Server (NTRS)
Parker, L. C. (Inventor)
1984-01-01
An inflight IFR procedures simulator for generating signals and commands to conventional instruments provided in an airplane is described. The simulator includes a signal synthesizer which, upon being activated, generates predetermined simulated signals corresponding to signals normally received from remote sources. A computer is connected to the signal synthesizer and causes the signal synthesizer to produce simulated signals responsive to programs fed into the computer. A switching network is connected to the signal synthesizer, the antenna of the aircraft, and navigational instruments and communication devices for selectively connecting instruments and devices to the synthesizer and disconnecting the antenna from the navigational instruments and communication devices. Pressure transducers are connected to the altimeter and speed indicator for supplying electrical signals to the computer indicating the altitude and speed of the aircraft. A compass is connected to supply electrical signals to the computer indicating the heading of the airplane. The computer, upon receiving signals from the pressure transducers and compass, computes the signals that are fed to the signal synthesizer, which, in turn, generates simulated navigational signals.
Chen, Shaoxia; McMullan, Greg; Faruqi, Abdul R; Murshudov, Garib N; Short, Judith M; Scheres, Sjors H W; Henderson, Richard
2013-12-01
Three-dimensional (3D) structure determination by single particle electron cryomicroscopy (cryoEM) involves the calculation of an initial 3D model, followed by extensive iterative improvement of the orientation determination of the individual particle images and the resulting 3D map. Because there is much more noise than signal at high resolution in the images, this creates the possibility of noise reinforcement in the 3D map, which can give a false impression of the resolution attained. The balance between signal and noise in the final map at its limiting resolution depends on the image processing procedure and is not easily predicted. There is a growing awareness in the cryoEM community of how to avoid such over-fitting and over-estimation of resolution. Equally, there has been a reluctance to use the two principal methods of avoidance because they give lower resolution estimates, which some people believe are too pessimistic. Here we describe a simple test that is compatible with any image processing protocol. The test allows measurement of the amount of signal and the amount of noise from overfitting that is present in the final 3D map. We have applied the method to two different sets of cryoEM images of the enzyme beta-galactosidase using several image processing packages. Our procedure involves substituting the Fourier components of the initial particle image stack beyond a chosen resolution by either the Fourier components from an adjacent area of background, or by simple randomisation of the phases of the particle structure factors. This substituted noise thus has the same spectral power distribution as the original data. Comparison of the Fourier Shell Correlation (FSC) plots from the 3D map obtained using the experimental data with that from the same data with high-resolution noise (HR-noise) substituted allows an unambiguous measurement of the amount of overfitting and an accompanying resolution assessment. 
A simple formula can be used to calculate an unbiased FSC from the two curves, even when a substantial amount of overfitting is present. The approach is software independent. The user is therefore completely free to use any established method or novel combination of methods, provided the HR-noise test is carried out in parallel. Applying this procedure to cryoEM images of beta-galactosidase shows how overfitting varies greatly depending on the procedure, but in the best case shows no overfitting and a resolution of ~6 Å. © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
Auditory phase and frequency discrimination: a comparison of nine procedures.
Creelman, C D; Macmillan, N A
1979-02-01
Two auditory discrimination tasks were thoroughly investigated: discrimination of frequency differences from a sinusoidal signal of 200 Hz and discrimination of differences in relative phase of mixed sinusoids of 200 Hz and 400 Hz. For each task psychometric functions were constructed for three observers, using nine different psychophysical measurement procedures. These procedures included yes-no, two-interval forced-choice, and various fixed- and variable-standard designs that investigators have used in recent years. The data showed wide ranges of apparent sensitivity. For frequency discrimination, models derived from signal detection theory for each psychophysical procedure seem to account for the performance differences. For phase discrimination the models do not account for the data. We conclude that for some discriminative continua the assumptions of signal detection theory are appropriate, and underlying sensitivity may be derived from raw data by appropriate transformations. For other continua the models of signal detection theory are probably inappropriate; we speculate that phase might be discriminable only on the basis of comparison or change and suggest some tests of our hypothesis.
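The cross-procedure sensitivity comparisons in studies like this rest on signal detection theory's d′. A minimal computation under the standard equal-variance Gaussian model (d′ = z(hit rate) − z(false-alarm rate) for yes-no; d′ = √2 · z(proportion correct) for an unbiased two-interval forced choice) might look like:

```python
from statistics import NormalDist

def d_prime_yes_no(hit_rate, fa_rate):
    """Equal-variance Gaussian sensitivity for a yes-no task."""
    z = NormalDist().inv_cdf          # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

def d_prime_2ifc(p_correct):
    """Sensitivity for an unbiased two-interval forced choice:
    d' = sqrt(2) * z(Pc), reflecting the two independent observations."""
    return 2 ** 0.5 * NormalDist().inv_cdf(p_correct)

# An unbiased yes-no observer with H = 0.84, F = 0.16 comes out near
# d' = 2; a 2IFC observer at 76% correct comes out near d' = 1.
```

Transformations like these are what the authors mean by deriving "underlying sensitivity from raw data by appropriate transformations": each procedure's raw accuracy maps to a common d′ scale if, and only if, the detection-theory model fits that continuum.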
Karayiannis, Nicolaos B; Sami, Abdul; Frost, James D; Wise, Merrill S; Mizrahi, Eli M
2005-04-01
This paper presents an automated procedure developed to extract quantitative information from video recordings of neonatal seizures in the form of motor activity signals. This procedure relies on optical flow computation to select anatomical sites located on the infants' body parts. Motor activity signals are extracted by tracking selected anatomical sites during the seizure using adaptive block matching. A block of pixels is tracked throughout a sequence of frames by searching for the most similar block of pixels in subsequent frames; this search is facilitated by employing various update strategies to account for the changing appearance of the block. The proposed procedure is used to extract temporal motor activity signals from video recordings of neonatal seizures and other events not associated with seizures.
Experimental evaluation of tool run-out in micro milling
NASA Astrophysics Data System (ADS)
Attanasio, Aldo; Ceretti, Elisabetta
2018-05-01
This paper deals with the micro milling cutting process, focusing attention on tool run-out measurement. In fact, among the effects of the scale reduction from macro to micro (i.e., size effects), tool run-out plays an important role. This research is aimed at developing an easy and reliable method to measure tool run-out in micro milling based on experimental tests and an analytical model. From an Industry 4.0 perspective, this measuring strategy can be integrated into an adaptive system for controlling cutting forces, with the objective of improving production quality and process stability while reducing tool wear and machining costs. The proposed procedure estimates the tool run-out parameters from the tool diameter, the channel width, and the phase angle between the cutting edges. The cutting-edge phase measurement is based on force signal analysis. The developed procedure has been tested on data from micro milling experiments performed on a Ti6Al4V sample. The results showed that the developed procedure can be successfully used for tool run-out estimation.
Meissner, Christian A; Tredoux, Colin G; Parker, Janat F; MacLin, Otto H
2005-07-01
Many eyewitness researchers have argued for the application of a sequential alternative to the traditional simultaneous lineup, given its role in decreasing false identifications of innocent suspects (sequential superiority effect). However, Ebbesen and Flowe (2002) have recently noted that sequential lineups may merely bring about a shift in response criterion, having no effect on discrimination accuracy. We explored this claim, using a method that allows signal detection theory measures to be collected from eyewitnesses. In three experiments, lineup type was factorially combined with conditions expected to influence response criterion and/or discrimination accuracy. Results were consistent with signal detection theory predictions, including that of a conservative criterion shift with the sequential presentation of lineups. In a fourth experiment, we explored the phenomenological basis for the criterion shift, using the remember-know-guess procedure. In accord with previous research, the criterion shift in sequential lineups was associated with a reduction in familiarity-based responding. It is proposed that the relative similarity between lineup members may create a context in which fluency-based processing is facilitated to a greater extent when lineup members are presented simultaneously.
Spatial sound field synthesis and upmixing based on the equivalent source method.
Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang
2014-01-01
Given a scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model. The array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more channels of loudspeakers than microphones. Multi-channel inverse filtering with regularization is exploited to deal with the ill-posedness in the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments are performed. The results demonstrated that all upmixing methods improved the quality of the reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yielded the best spatial sound field synthesis in terms of the reproduction error, timbral quality, and spatial quality.
Quantum neural network-based EEG filtering for a brain-computer interface.
Gandhi, Vaibhav; Prasad, Girijesh; Coyle, Damien; Behera, Laxmidhar; McGinnity, Thomas Martin
2014-02-01
A novel neural information processing architecture inspired by quantum mechanics and incorporating the well-known Schrodinger wave equation is proposed in this paper. The proposed architecture referred to as recurrent quantum neural network (RQNN) can characterize a nonstationary stochastic signal as time-varying wave packets. A robust unsupervised learning algorithm enables the RQNN to effectively capture the statistical behavior of the input signal and facilitates the estimation of signal embedded in noise with unknown characteristics. The results from a number of benchmark tests show that simple signals such as dc, staircase dc, and sinusoidal signals embedded within high noise can be accurately filtered and particle swarm optimization can be employed to select model parameters. The RQNN filtering procedure is applied in a two-class motor imagery-based brain-computer interface where the objective was to filter electroencephalogram (EEG) signals before feature extraction and classification to increase signal separability. A two-step inner-outer fivefold cross-validation approach is utilized to select the algorithm parameters subject-specifically for nine subjects. It is shown that the subject-specific RQNN EEG filtering significantly improves brain-computer interface performance compared to using only the raw EEG or Savitzky-Golay filtered EEG across multiple sessions.
Optimized suppression of coherent noise from seismic data using the Karhunen-Loève transform
NASA Astrophysics Data System (ADS)
Montagne, Raúl; Vasconcelos, Giovani L.
2006-07-01
Signals obtained in land seismic surveys are usually contaminated with coherent noise, among which the ground roll (Rayleigh surface waves) is of major concern for it can severely degrade the quality of the information obtained from the seismic record. This paper presents an optimized filter based on the Karhunen-Loève transform for processing seismic images contaminated with ground roll. In this method, the contaminated region of the seismic record, to be processed by the filter, is selected in such way as to correspond to the maximum of a properly defined coherence index. The main advantages of the method are that the ground roll is suppressed with negligible distortion of the remnant reflection signals and that the filtering procedure can be automated. The image processing technique described in this study should also be relevant for other applications where coherent structures embedded in a complex spatiotemporal pattern need to be identified in a more refined way. In particular, it is argued that the method is appropriate for processing optical coherence tomography images whose quality is often degraded by coherent noise (speckle).
Amplitude-aware permutation entropy: Illustration in spike detection and signal segmentation.
Azami, Hamed; Escudero, Javier
2016-05-01
Signal segmentation and spike detection are two important biomedical signal processing applications. Often, non-stationary signals must be segmented into piece-wise stationary epochs or spikes need to be found among a background of noise before being further analyzed. Permutation entropy (PE) has been proposed to evaluate the irregularity of a time series. PE is conceptually simple, structurally robust to artifacts, and computationally fast. It has been extensively used in many applications, but it has two key shortcomings. First, when a signal is symbolized using the Bandt-Pompe procedure, only the order of the amplitude values is considered and information regarding the amplitudes is discarded. Second, in the PE, the effect of equal amplitude values in each embedded vector is not addressed. To address these issues, we propose a new entropy measure based on PE: the amplitude-aware permutation entropy (AAPE). AAPE is sensitive to the changes in the amplitude, in addition to the frequency, of the signals thanks to it being more flexible than the classical PE in the quantification of the signal motifs. To demonstrate how the AAPE method can enhance the quality of the signal segmentation and spike detection, a set of synthetic and realistic synthetic neuronal signals, electroencephalograms and neuronal data are processed. We compare the performance of AAPE in these problems against state-of-the-art approaches and evaluate the significance of the differences with a repeated ANOVA with post hoc Tukey's test. In signal segmentation, the accuracy of AAPE-based method is higher than conventional segmentation methods. AAPE also leads to more robust results in the presence of noise. The spike detection results show that AAPE can detect spikes well, even when presented with single-sample spikes, unlike PE. For multi-sample spikes, the changes in AAPE are larger than in PE. 
We introduce a new entropy metric, AAPE, that enables us to consider amplitude information in the formulation of PE. The AAPE algorithm can be used in almost every irregularity-based application in various signal and image processing fields. We also made freely available the Matlab code of the AAPE. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
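The Bandt-Pompe symbolization underlying PE, and the amplitude weighting that AAPE adds to it, are compact enough to sketch. The weighted variant below follows the spirit of the published AAPE formula (each motif occurrence contributes a mix of mean absolute amplitude and mean absolute difference, with mixing constant A); treat the exact weighting as an illustrative reading of the paper rather than a reference implementation:

```python
import math

def _pattern(window):
    # Bandt-Pompe ordinal pattern: ranks of the samples in the window
    # (stable sort, so equal values keep their temporal order).
    return tuple(sorted(range(len(window)), key=window.__getitem__))

def permutation_entropy(x, m=3, weighted=False, A=0.5):
    """Permutation entropy of order m, normalised to [0, 1].

    weighted=False : classical PE (each motif occurrence counts 1).
    weighted=True  : amplitude-aware variant; each occurrence contributes
        (A/m) * sum(|x_k|) + ((1-A)/(m-1)) * sum(|x_k - x_{k-1}|)
    so large-amplitude and large-change windows weigh more.
    """
    counts, total = {}, 0.0
    for i in range(len(x) - m + 1):
        w = x[i:i + m]
        if weighted:
            wt = (A / m) * sum(abs(v) for v in w) \
                 + ((1 - A) / (m - 1)) * sum(abs(w[k] - w[k - 1])
                                             for k in range(1, m))
        else:
            wt = 1.0
        counts[_pattern(w)] = counts.get(_pattern(w), 0.0) + wt
        total += wt
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(m))

# A monotone ramp uses one motif (entropy 0); an alternating signal
# spreads over more motifs and scores higher.
```

The classical branch discards amplitudes entirely once the ranks are taken, which is exactly the shortcoming the abstract describes; the weighted branch is what lets single large-amplitude spikes move the entropy.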
Yasui, Yutaka; McLerran, Dale; Adam, Bao-Ling; Winget, Marcy; Thornquist, Mark; Feng, Ziding
2003-01-01
Discovery of "signature" protein profiles that distinguish disease states (e.g., malignant, benign, and normal) is a key step towards translating recent advancements in proteomic technologies into clinical utilities. Protein data generated from mass spectrometers are, however, large in size and have complex features due to complexities in both biological specimens and interfering biochemical/physical processes of the measurement procedure. Making sense out of such high-dimensional complex data is challenging and necessitates the use of a systematic data analytic strategy. We propose here a data processing strategy for two major issues in the analysis of such mass-spectrometry-generated proteomic data: (1) separation of protein "signals" from background "noise" in protein intensity measurements and (2) calibration of protein mass/charge measurements across samples. We illustrate the two issues and the utility of the proposed strategy using data from a prostate cancer biomarker discovery project as an example.
Vectorized Rebinning Algorithm for Fast Data Down-Sampling
NASA Technical Reports Server (NTRS)
Dean, Bruce; Aronstein, David; Smith, Jeffrey
2013-01-01
A vectorized rebinning (down-sampling) algorithm, applicable to N-dimensional data sets, has been developed that offers a significant reduction in computer run time when compared to conventional rebinning algorithms. For clarity, a two-dimensional version of the algorithm is discussed to illustrate some specific details of the algorithm content, and using the language of image processing, 2D data will be referred to as "images," and each value in an image as a "pixel." The new approach is fully vectorized, i.e., the down-sampling procedure is done as a single step over all image rows, and then as a single step over all image columns. Data rebinning (or down-sampling) is a procedure that uses a discretely sampled N-dimensional data set to create a representation of the same data, but with fewer discrete samples. Such data down-sampling is fundamental to digital signal processing, e.g., for data compression applications.
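The row-pass/column-pass idea translates directly to code. A pure-Python sketch of 2D block-mean rebinning is below (the NASA implementation is vectorized over whole arrays; in NumPy the same operation would be a reshape to shape (H//f, f, W//f, f) followed by a mean over the two block axes):

```python
def rebin(image, fr, fc):
    """Down-sample a 2D list of numbers by averaging fr x fc pixel blocks.

    Pass 1 collapses every group of fr consecutive rows into column sums;
    pass 2 collapses every group of fc consecutive columns and divides by
    the block area, mirroring the single-step-over-rows then
    single-step-over-columns structure described above.
    Image dimensions must be multiples of fr and fc.
    """
    # pass 1: sum groups of fr consecutive rows, column by column
    row_sums = [[sum(col) for col in zip(*image[i:i + fr])]
                for i in range(0, len(image), fr)]
    # pass 2: sum groups of fc consecutive columns, then normalise
    return [[sum(row[j:j + fc]) / (fr * fc)
             for j in range(0, len(row), fc)]
            for row in row_sums]

# 2x2 rebinning halves each dimension:
# [[1, 2, 3, 4],
#  [5, 6, 7, 8]]  ->  [[3.5, 5.5]]
```

Doing the rows in one sweep and the columns in another is what removes the per-output-pixel inner loop of a naive rebinner, which is where the reported run-time reduction comes from.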
Analog computation of auto and cross-correlation functions
NASA Technical Reports Server (NTRS)
1974-01-01
For analysis of the data obtained from the cross beam systems it was deemed desirable to compute the auto- and cross-correlation functions by both digital and analog methods to provide a cross-check of the analysis methods and an indication as to which of the two methods would be most suitable for routine use in the analysis of such data. It is the purpose of this appendix to provide a concise description of the equipment and procedures used for the electronic analog analysis of the cross beam data. A block diagram showing the signal processing and computation set-up used for most of the analog data analysis is provided. The data obtained at the field test sites were recorded on magnetic tape using wide-band FM recording techniques. The data as recorded were band-pass filtered by electronic signal processing in the data acquisition systems.
Reversing the picture superiority effect: a speed-accuracy trade-off study of recognition memory.
Boldini, Angela; Russo, Riccardo; Punia, Sahiba; Avons, S E
2007-01-01
Speed-accuracy trade-off methods have been used to contrast single- and dual-process accounts of recognition memory. With these procedures, subjects are presented with individual test items and required to make recognition decisions under various time constraints. In three experiments, we presented words and pictures to be intentionally learned; test stimuli were always visually presented words. At test, we manipulated the interval between the presentation of each test stimulus and that of a response signal, thus controlling the amount of time available to retrieve target information. The standard picture superiority effect was significant in long response deadline conditions (i.e., > or = 2,000 msec). Conversely, a significant reverse picture superiority effect emerged at short response-signal deadlines (< 200 msec). The results are congruent with views suggesting that both fast familiarity and slower recollection processes contribute to recognition memory. Alternative accounts are also discussed.
High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp Zeta-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
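The value of arbitrary frequency resolution can be illustrated by evaluating the finite Fourier transform directly at any frequency, not just at DFT grid points. The direct sum below is O(N) per frequency and uses a plain rectangle rule; the paper's chirp Z-transform achieves the same arbitrary-resolution evaluation efficiently, and its cubic-interpolation accuracy correction is omitted here:

```python
import cmath
import math

def finite_fourier(x, dt, f):
    """Finite Fourier transform of sampled data x (sample spacing dt)
    evaluated at an arbitrary frequency f in Hz, via the rectangle-rule
    approximation  X(f) ~= dt * sum_n x[n] * exp(-2j*pi*f*n*dt)."""
    return dt * sum(v * cmath.exp(-2j * math.pi * f * n * dt)
                    for n, v in enumerate(x))

# A 7.3 Hz cosine sampled at 100 Hz for 2 s: the transform peaks at
# 7.3 Hz even though 7.3 Hz does not lie on the record's 0.5 Hz DFT
# grid, so a grid-locked FFT would smear this peak across bins.
dt = 0.01
x = [math.cos(2 * math.pi * 7.3 * n * dt) for n in range(200)]
```

For a cosine of unit amplitude, |X(f)| near the tone frequency approaches (record length)/2 times dt, here about 1.0, while off-tone frequencies are an order of magnitude smaller; this kind of off-grid evaluation is what "capturing details of the data in the frequency domain" refers to.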
Bonino, Angela Yarnell; Leibold, Lori J
2017-01-23
Collecting reliable behavioral data from toddlers and preschoolers is challenging. As a result, there are significant gaps in our understanding of human auditory development for these age groups. This paper describes an observer-based procedure for measuring hearing sensitivity with a two-interval, two-alternative forced-choice paradigm. Young children are trained to perform a play-based, motor response (e.g., putting a block in a bucket) whenever they hear a target signal. An experimenter observes the child's behavior and makes a judgment about whether the signal was presented during the first or second observation interval; the experimenter is blinded to the true signal interval, so this judgment is based solely on the child's behavior. These procedures were used to test 2- to 4-year-olds (n = 33) with no known hearing problems. The signal was a 1,000 Hz warble tone presented in quiet, and the signal level was adjusted to estimate a threshold corresponding to 71%-correct detection. A valid threshold was obtained for 82% of children. These results indicate that the two-interval procedure is both feasible and reliable for use with toddlers and preschoolers. The two-interval, observer-based procedure described in this paper is a powerful tool for evaluating hearing in young children because it guards against response bias on the part of the experimenter.
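An adaptive track targeting roughly 71%-correct detection, as described above, is commonly implemented with a two-down/one-up rule (which converges near 70.7% correct). The following sketch simulates such a track with a hypothetical logistic listener; the real procedure scores an observer's judgments of the child's behavior instead, and the rule details here are assumptions, not taken from the paper.

```python
import math
import random

def staircase(true_threshold, start_level=60.0, step=4.0, n_reversals=8):
    """Two-down/one-up adaptive track: the level drops after two
    consecutive detections and rises after any miss, converging on the
    ~70.7%-correct point. The threshold estimate is the mean of the
    later reversal levels."""
    def detects(level):
        # Hypothetical logistic psychometric function for the simulation.
        p = 1.0 / (1.0 + math.exp(-(level - true_threshold) / 2.0))
        return random.random() < p

    level, run, last_dir, reversals = start_level, 0, 0, []
    while len(reversals) < n_reversals:
        if detects(level):
            run += 1
            if run == 2:                      # two in a row -> make it harder
                run = 0
                if last_dir == +1:
                    reversals.append(level)   # direction change = reversal
                last_dir = -1
                level -= step
        else:                                 # any miss -> make it easier
            run = 0
            if last_dir == -1:
                reversals.append(level)
            last_dir = +1
            level += step
    return sum(reversals[2:]) / (n_reversals - 2)

random.seed(1)
threshold = staircase(true_threshold=40.0)
```

The estimate lands slightly above the 50% point of the simulated listener, as expected for a 70.7%-correct rule.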
The relationship of acquisition systems to automated stereo correlation.
Colvocoresses, A.P.
1983-01-01
Today a concerted effort is being made to expedite the mapping process through automated correlation of stereo data. Stereo correlation involves the comparison of radiance (brightness) signals or patterns recorded by sensors. Conventionally, two-dimensional area correlation is utilized but this is a rather slow and cumbersome procedure. Digital correlation can be performed in only one dimension where suitable signal patterns exist, and the one-dimensional mode is much faster. Electro-optical (EO) systems, suitable for space use, also have much greater flexibility than film systems. Thus, an EO space system can be designed which will optimize one-dimensional stereo correlation and lead toward the automation of topographic mapping.
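The speed advantage of the one-dimensional mode comes from matching single scan lines of brightness values rather than two-dimensional patches. A minimal numpy sketch of one-dimensional stereo correlation (synthetic scan lines; the function and shift convention are illustrative, not tied to any particular EO sensor):

```python
import numpy as np

def disparity_1d(left, right, max_shift=20):
    """Return the integer shift s that maximizes the normalized
    correlation of left[i] with right[i + s], i.e. the disparity
    between two corresponding brightness scan lines."""
    n = min(len(left), len(right))
    best_s, best_c = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = left[:n - s], right[s:n]
        else:
            a, b = left[-s:n], right[:n + s]
        a = a - a.mean()
        b = b - b.mean()
        c = np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))
        if c > best_c:
            best_s, best_c = s, c
    return best_s

# Synthetic scan line shifted by 7 samples between the two "images".
rng = np.random.default_rng(0)
line = rng.standard_normal(200)
shift = 7
left = line[shift:150 + shift]
right = line[:150]
```

Each candidate shift costs one dot product over a line, which is why the one-dimensional mode is so much cheaper than sliding a 2-D window over an image area.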
Robust Foot Clearance Estimation Based on the Integration of Foot-Mounted IMU Acceleration Data
Benoussaad, Mourad; Sijobert, Benoît; Mombaur, Katja; Azevedo Coste, Christine
2015-01-01
This paper introduces a method for the robust estimation of foot clearance during walking, using a single inertial measurement unit (IMU) placed on the subject’s foot. The proposed solution is based on double integration and drift cancellation of foot acceleration signals. The method is insensitive to misalignment of IMU axes with respect to foot axes. Details are provided regarding calibration and signal processing procedures. Experimental validation was performed on 10 healthy subjects under three walking conditions: normal, fast and with obstacles. Foot clearance estimation results were compared to measurements from an optical motion capture system. The mean error between them is significantly less than 15% under the various walking conditions. PMID:26703622
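A common way to realize "double integration and drift cancellation" of foot acceleration is to force the integrated velocity to zero at known foot-flat instants and remove the accumulated linear drift between them. The sketch below illustrates that generic idea with synthetic data; it is not the paper's exact algorithm, and the stance-detection step (here supplied by hand) is assumed.

```python
import numpy as np

def foot_height(acc, dt, stance_idx):
    """Vertical foot displacement by double integration of acceleration,
    with linear drift cancellation between successive zero-velocity
    (foot-flat) samples.
    acc: vertical acceleration (gravity already removed);
    stance_idx: indices where the foot is known to be stationary."""
    vel = np.cumsum(acc) * dt
    # Force velocity to zero at each stance instant by subtracting the
    # linear drift accumulated since the previous one.
    for i0, i1 in zip(stance_idx[:-1], stance_idx[1:]):
        drift = np.linspace(vel[i0], vel[i1], i1 - i0 + 1)
        vel[i0:i1 + 1] -= drift
    pos = np.cumsum(vel) * dt
    return pos - pos.min()

# Synthetic swing: a sine burst of acceleration between two foot-flats.
dt = 0.01
t = np.arange(0, 1.0, dt)
acc = np.where((t > 0.3) & (t < 0.7),
               np.sin(2 * np.pi * (t - 0.3) / 0.4) * 5.0, 0.0)
h = foot_height(acc, dt, [0, len(t) - 1])
```

Without the drift-removal step, small acceleration biases integrate into quadratically growing position errors, which is the failure mode the paper's procedure is designed to suppress.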
Removal of EMG and ECG artifacts from EEG based on wavelet transform and ICA.
Zhou, Weidong; Gotman, Jean
2004-01-01
In this study, the methods of wavelet threshold de-noising and independent component analysis (ICA) are introduced. ICA is a novel signal processing technique based on high order statistics, and is used to separate independent components from measurements. The extended ICA algorithm does not need to calculate the higher order statistics, converges fast, and can be used to separate subGaussian and superGaussian sources. A pre-whitening procedure is performed to de-correlate the mixed signals before extracting sources. The experimental results indicate that the electromyogram (EMG) and electrocardiogram (ECG) artifacts in the electroencephalogram (EEG) can be removed by a combination of wavelet threshold de-noising and ICA.
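The pre-whitening step mentioned in the abstract is standard before ICA: the mixed channels are linearly transformed so that their covariance becomes the identity. A minimal numpy sketch (the mixing matrix is hypothetical test data, not from the study):

```python
import numpy as np

def prewhiten(X):
    """De-correlate zero-mean mixed signals (rows = channels) so that
    their covariance becomes the identity: the standard whitening step
    applied before ICA source extraction."""
    X = X - X.mean(axis=1, keepdims=True)
    cov = np.cov(X)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T   # symmetric whitening matrix
    return W @ X

rng = np.random.default_rng(42)
S = rng.standard_normal((2, 5000))           # two independent sources
A = np.array([[1.0, 0.8], [0.3, 1.0]])       # hypothetical mixing matrix
Z = prewhiten(A @ S)
```

After whitening, ICA only has to find a rotation, which is part of why the extended algorithm referenced in the abstract converges quickly.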
Ultramicrowave communications system, phase 2
NASA Technical Reports Server (NTRS)
1980-01-01
Communications system design was completed and reviewed. Minor changes were made in order to make it more cost effective and to increase design flexibility. System design activities identified the techniques and procedures to generate and monitor high data rate test signals. Differential bi-phase demodulation is the proposed method for this system. The mockup and packaging designs were performed, and component layout and interconnection constraints were determined, as well as design drawings for dummy parts of the system. The possibility of adding a low cost option to the transceiver system was studied. The communications program has the advantage that new technology signal processing devices can be readily interfaced with the existing radio frequency subsystem to produce a short range radar.
Kwon, Bomjun J
2012-06-01
This article introduces AUX (AUditory syntaX), a scripting syntax specifically designed to describe auditory signals and processing, to the members of the behavioral research community. The syntax is based on descriptive function names and intuitive operators suitable for researchers and students without substantial training in programming, who wish to generate and examine sound signals using a written script. In this article, the essence of AUX is discussed and practical examples of AUX scripts specifying various signals are illustrated. Additionally, two accompanying Windows-based programs and development libraries are described. AUX Viewer is a program that generates, visualizes, and plays sounds specified in AUX. AUX Viewer can also be used for class demonstrations or presentations. Another program, Psycon, allows a wide range of sound signals to be used as stimuli in common psychophysical testing paradigms, such as the adaptive procedure, the method of constant stimuli, and the method of adjustment. AUX Library is also provided, so that researchers can develop their own programs utilizing AUX. The philosophical basis of AUX is to separate signal generation from the user interface needed for experiments. AUX scripts are portable and reusable; they can be shared by other researchers, regardless of differences in actual AUX-based programs, and reused for future experiments. In short, the use of AUX can be potentially beneficial to all members of the research community, both those with programming backgrounds and those without.
Farace, P; Pontalti, R; Cristoforetti, L; Antolini, R; Scarpa, M
1997-11-01
This paper presents an automatic method to obtain tissue complex permittivity values to be used as input data in the computer modelling for hyperthermia treatment planning. Magnetic resonance (MR) images were acquired and the tissue water content was calculated from the signal intensity of the image pixels. The tissue water content was converted into complex permittivity values by monotonic functions based on mixture theory. To obtain a water content map by MR imaging a gradient-echo pulse sequence was used and an experimental procedure was set up to correct for relaxation and radiofrequency field inhomogeneity effects on signal intensity. Two approaches were followed to assign the permittivity values to fat-rich tissues: (i) fat-rich tissue localization by a segmentation procedure followed by assignment of tabulated permittivity values; (ii) water content evaluation by chemical shift imaging followed by permittivity calculation. Tests were performed on phantoms of known water content to establish the reliability of the proposed method. MRI data were acquired and processed pixel-by-pixel according to the outlined procedure. The signal intensity in the phantom images correlated well with water content. Experiments were performed on volunteers' healthy tissue. In particular two anatomical structures were chosen to calculate permittivity maps: the head and the thigh. The water content and electric permittivity values were obtained from the MRI data and compared to others in the literature. A good agreement was found for muscle, cerebrospinal fluid (CSF) and white and grey matter. The advantages of the reported method are discussed in the light of possible application in hyperthermia treatment planning.
Chen, Ying-Xu; Huang, Ke-Jing; Lin, Feng; Fang, Lin-Xia
2017-12-01
In this work, a sensitive, universal and reusable electrochemical biosensor based on stannic oxide nanocoral-graphene hybrids (SnO2 NCs-Gr) is developed for target DNA detection, using two kinds of DNA enzymes for signal amplification through an autonomous cascade DNA duplication strategy. A hairpin probe is designed comprising a protruding part at the 3'-end as the identification sequence for the target, a recognition site for a nicking endonuclease, and an 18-carbon spacer to stop the polymerization process. The designed DNA duplication-incision-replacement process is handled by KF polymerase and the endonuclease, combined with gold nanoparticles as a signal carrier for further signal amplification. In the detection system, an electrochemical-chemical-chemical procedure, which uses ferrocene methanol, tris(2-carboxyethyl)phosphine and l-ascorbic acid 2-phosphate as the redox mediator, reductant and enzyme substrate, respectively, is applied to amplify the detection signal. Benefiting from the multiple signal amplification mechanism, the proposed sensor shows a good linear relationship between the peak current and the logarithm of analyte concentration in the range of 0.0001-1 × 10^-11 mol L^-1, with a detection limit of 1.25 × 10^-17 mol L^-1 (S/N = 3). This assay also opens a promising strategy for the ultrasensitive determination of other biological molecules in bioanalysis and biomedical diagnostics. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Villiger, Arturo; Schaer, Stefan; Dach, Rolf; Prange, Lars; Jäggi, Adrian
2017-04-01
It is common to handle code biases in the Global Navigation Satellite System (GNSS) data analysis as conventional differential code biases (DCBs): P1-C1, P1-P2, and P2-C2. Due to the increasing number of signals and systems in conjunction with various tracking modes for the different signals (as defined in RINEX3 format), the number of DCBs would increase drastically and the bookkeeping becomes almost unbearable. The Center for Orbit Determination in Europe (CODE) has thus changed its processing scheme to observable-specific signal biases (OSB). This means that for each observation involved all related satellite and receiver biases are considered. The OSB contributions from various ionosphere analyses (geometry-free linear combination) using different observables and frequencies and from clock analyses (ionosphere-free linear combination) are then combined on normal equation level. By this, one consistent set of OSB values per satellite and receiver can be obtained that contains all information needed for GNSS-related processing. This advanced procedure of code bias handling is now also applied to the IGS (International GNSS Service) MGEX (Multi-GNSS Experiment) procedure at CODE. Results for the biases from the legacy IGS solution as well as the CODE MGEX processing (considering GPS, GLONASS, Galileo, BeiDou, and QZSS) are presented. The consistency with the traditional method is confirmed and the new results are discussed regarding the long-term stability. When processing code data, it is essential to know the true observable types in order to correct for the associated biases. CODE has been verifying the receiver tracking technologies for GPS based on estimated DCB multipliers (for the RINEX 2 case). With the change to OSB, the original verification approach was extended to search for the best fitting observable types based on known OSB values. In essence, a multiplier parameter is estimated for each involved GNSS observable type. 
This implies that we could recover, for receivers tracking a combination of signals, even the factors of these combinations. The verification of the observable types is crucial to identify the correct observable types of RINEX 2 data (which does not contain the signal modulation in comparison to RINEX 3). The correct information of the used observable types is essential for precise point positioning (PPP) applications and GNSS ambiguity resolution. Multi-GNSS OSBs and verified receiver tracking modes are essential to get best possible multi-GNSS solutions for geodynamic purposes and other applications.
2008-07-01
operators in Hilbert spaces. The homogenization procedure through successive multi-resolution projections is presented, followed by a numerical example of... is intended to be essentially self-contained. The mathematical (Greenberg 1978; Gilbert 2006) and signal processing (Strang and Nguyen 1995)... literature listed in the references. The ideas behind multi-resolution analysis unfold from the theory of linear operators in Hilbert spaces (Davis 1975)
Sensory Information Processing
1977-04-01
deblurred image is shown in Figure lUb. This result, with no sensor noise, shows a good representation of the original double star. The orientation of the... which we performed to test the theory and to provide an indication of the effects of sensor noise on the performances of these procedures... Labeyrie has shown experimentally that ⟨|S(u)|²⟩ has useful signal-to-noise ratio out to the diffraction limit of the telescope. Korff
Numerical modelling of distributed vibration sensor based on phase-sensitive OTDR
NASA Astrophysics Data System (ADS)
Masoudi, A.; Newson, T. P.
2017-04-01
A distributed vibration sensor based on phase-sensitive OTDR is numerically modeled. The advantage of modeling the building blocks of the sensor individually and combining the blocks to analyze the behavior of the sensing system is discussed. It is shown that the numerical model can accurately imitate the response of the experimental setup to dynamic perturbations, using a signal processing procedure similar to that used to extract the phase information from the sensing setup.
NASA Technical Reports Server (NTRS)
Ma, Y.
1995-01-01
The AMSU-A receiver subsystem comprises two separate receiver assemblies: AMSU-A1 and AMSU-A2 (P/N 1356441-1). The AMSU-A1 receiver contains 13 channels and the AMSU-A2 receiver 2 channels. The AMSU-A1 receiver assembly is further divided into two parts, AMSU-A1-1 (P/N 1356429-1) and AMSU-A1-2 (P/N 1356409-1), which contain 9 and 4 channels, respectively. The receiver assemblies are highlighted in the functional block diagrams of the AMSU-A1 and AMSU-A2 systems. The AMSU-A receiver subsystem stands between the antenna and signal processing subsystems of the AMSU-A instrument and comprises the RF and IF components from isolators to attenuators. It receives the RF signals from the antenna subsystem, down-converts the RF signals to IF signals, amplifies and conditions the IF signals to the proper power level and frequency bandwidth specified for each channel, and inputs the IF signals to the signal processing subsystem. This test report presents the test data of the EOS AMSU-A Flight Model No. 1 (FM-1) receiver subsystem. The tests are performed per the Acceptance Test Procedure for the AMSU-A Receiver Subsystem, AE-26002/6A. The functional performance tests are conducted either at the component or subsystem level. While the component-level tests are performed over the entire operating temperature range predicted by thermal analysis, the subsystem-level tests are conducted at ambient temperature only.
Design and evaluation of a parametric model for cardiac sounds.
Ibarra-Hernández, Roilhi F; Alonso-Arévalo, Miguel A; Cruz-Gutiérrez, Alejandro; Licona-Chávez, Ana L; Villarreal-Reyes, Salvador
2017-10-01
Heart sound analysis plays an important role in the auscultative diagnosis process to detect the presence of cardiovascular diseases. In this paper we propose a novel parametric heart sound model that accurately represents normal and pathological cardiac audio signals, also known as phonocardiograms (PCG). The proposed model considers that the PCG signal is formed by the sum of two parts: one of them is deterministic and the other one is stochastic. The first part contains most of the acoustic energy. This part is modeled by the Matching Pursuit (MP) algorithm, which performs an analysis-synthesis procedure to represent the PCG signal as a linear combination of elementary waveforms. The second part, also called the residual, is obtained after subtracting the deterministic signal from the original heart sound recording and can be accurately represented as an autoregressive process using the Linear Predictive Coding (LPC) technique. We evaluate the proposed heart sound model by performing subjective and objective tests using signals corresponding to different pathological cardiac sounds. The results of the objective evaluation show an average Percentage of Root-Mean-Square Difference of approximately 5% between the original heart sound and the reconstructed signal. For the subjective test we followed a formal methodology for perceptual evaluation of audio quality with the assistance of medical experts. Statistical results of the subjective evaluation show that our model provides a highly accurate approximation of real heart sound signals. We are not aware of any previous heart sound model evaluated as rigorously as our proposal. Copyright © 2017 Elsevier Ltd. All rights reserved.
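The stochastic part of the model above is fit with LPC, i.e. an autoregressive model of the residual. A minimal numpy sketch of the LPC step alone, using the autocorrelation (Yule-Walker) method on a synthetic AR(2) "residual" (the Matching Pursuit stage and the paper's model orders are omitted):

```python
import numpy as np

def lpc(x, order):
    """Linear Predictive Coding by the autocorrelation (Yule-Walker)
    method: solve R a = r for the prediction coefficients, so that
    x[n] ~= sum_k a[k] * x[n-1-k]."""
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

# Synthetic AR(2) residual: the estimate should recover ~[0.75, -0.5].
rng = np.random.default_rng(0)
e = rng.standard_normal(5000)
x = np.zeros(5000)
for n in range(2, 5000):
    x[n] = 0.75 * x[n - 1] - 0.5 * x[n - 2] + e[n]
a = lpc(x, 2)
```

The recovered coefficients plus the residual variance are all that must be stored to resynthesize the stochastic part, which is what makes the parametric representation compact.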
Analytical Ultrasonics in Materials Research and Testing
NASA Technical Reports Server (NTRS)
Vary, A.
1986-01-01
Research results in analytical ultrasonics for characterizing structural materials from metals and ceramics to composites are presented. General topics covered by the conference included: status and advances in analytical ultrasonics for characterizing material microstructures and mechanical properties; status and prospects for ultrasonic measurements of microdamage, degradation, and underlying morphological factors; status and problems in precision measurements of frequency-dependent velocity and attenuation for materials analysis; procedures and requirements for automated, digital signal acquisition, processing, analysis, and interpretation; incentives for analytical ultrasonics in materials research and materials processing, testing, and inspection; and examples of progress in ultrasonics for interrelating microstructure, mechanical properties, and dynamic response.
Myznikov, I L; Nabokov, N L; Rogovanov, D Yu; Khankevich, Yu R
2016-01-01
The paper proposes to apply the informational modeling of correlation matrices, developed by I.L. Myznikov in the early 1990s, to neurophysiological investigations such as electroencephalogram recording and analysis and the coherence description of signals from electrodes on the head surface. The authors demonstrate information models built using the data from studies of inert gas inhalation by healthy human subjects. In the opinion of the authors, information models provide an opportunity to describe physiological processes with a high level of generalization. The procedure for presenting the EEG results holds great promise for broad application.
Purdon, Patrick L.; Millan, Hernan; Fuller, Peter L.; Bonmassar, Giorgio
2008-01-01
Simultaneous recording of electrophysiology and functional magnetic resonance imaging (fMRI) is a technique of growing importance in neuroscience. Rapidly evolving clinical and scientific requirements have created a need for hardware and software that can be customized for specific applications. Hardware may require customization to enable a variety of recording types (e.g., electroencephalogram, local field potentials, or multi-unit activity) while meeting the stringent and costly requirements of MRI safety and compatibility. Real-time signal processing tools are an enabling technology for studies of learning, attention, sleep, epilepsy, neurofeedback, and neuropharmacology, yet real-time signal processing tools are difficult to develop. We describe an open source system for simultaneous electrophysiology and fMRI featuring low-noise (< 0.6 uV p-p input noise), electromagnetic compatibility for MRI (tested up to 7 Tesla), and user-programmable real-time signal processing. The hardware distribution provides the complete specifications required to build an MRI-compatible electrophysiological data acquisition system, including circuit schematics, printed circuit board (PCB) layouts, Gerber files for PCB fabrication and robotic assembly, a bill of materials with part numbers, data sheets, and vendor information, and test procedures. The software facilitates rapid implementation of real-time signal processing algorithms. This system has been used in human EEG/fMRI studies at 3 and 7 Tesla examining the auditory system, visual system, sleep physiology, and anesthesia, as well as in intracranial electrophysiological studies of the non-human primate visual system during 3 Tesla fMRI, and in human hyperbaric physiology studies at depths of up to 300 feet below sea level. PMID:18761038
STATs: An Old Story, Yet Mesmerizing.
Abroun, Saeid; Saki, Najmaldin; Ahmadvand, Mohammad; Asghari, Farahnaz; Salari, Fatemeh; Rahim, Fakher
2015-01-01
Signal transducers and activators of transcription (STATs) are cytoplasmic transcription factors that have a key role in cell fate. The STATs are a family of seven latent cytoplasmic transcription factors that convey signals from the cell surface to the nucleus upon activation by cytokines and growth factors. The signaling pathways have diverse biological functions that include roles in cell differentiation, proliferation, development, apoptosis, and inflammation, which place them at the center of a very active area of research. In this review we explain Janus kinase (JAK)/STAT signaling and focus on STAT3, which translocates from the cytoplasm to the nucleus after phosphorylation. This process controls fundamental biological processes by regulating nuclear genes controlling cell proliferation, survival, and development. In some hematopoietic disorders and cancers, overexpression and activation of STAT3 result in high proliferation, suppression of cell differentiation and inhibition of cell maturation. This article focuses on STAT3 and its role in malignancy, in addition to the role of microRNAs (miRNAs) in STAT3 activation in certain cancers.
NASA Astrophysics Data System (ADS)
De Marchi, Luca; Marzani, Alessandro; Moll, Jochen; Kudela, Paweł; Radzieński, Maciej; Ostachowicz, Wiesław
2017-07-01
The performance of Lamb wave based monitoring systems, both in terms of diagnosis time and data complexity, can be enhanced by increasing the number of transducers used to simultaneously actuate the guided waves in the inspected medium. However, in the case of multiple simultaneously operated actuators, the interference among the excited wave modes within the acquired signals has to be considered in the further processing. To this aim, in this work a code division strategy based on the Warped Frequency Transform is presented. At first, the proposed procedure encodes actuation pulses using Gold sequences. Next, for each considered actuator the acquired signals are compensated for dispersion by cross-correlating the warped versions of the actuated and received signals. Compensated signals form the basis for a final wavenumber imaging step intended to emphasize defects and/or anomalies by removing the incident wavefield and edge reflections. The proposed strategy is tested numerically and validated through an experiment in which guided waves are actuated in a plate by four piezoelectric transducers operating simultaneously.
Signal identification in acoustic emission monitoring of fatigue cracking in steel bridges
NASA Astrophysics Data System (ADS)
Yu, Jianguo P.; Ziehl, Paul; Pollock, Adrian
2012-04-01
Signal identification, including noise filtering and reduction of acquired signals, is needed to achieve efficient and accurate data interpretation for remote acoustic emission (AE) monitoring of in-service steel bridges. Noise filtering may ensure that genuine hits from crack growth are involved in the estimation of fatigue damage and remaining fatigue life. Reduction of the data quantity is desirable for the sensing system to conserve energy in the data transmission and processing procedures. Identification and categorization of acquired signals is a promising approach to effectively filter and reduce AE data in bridge monitoring applications. In this study an investigation of waveform features (time domain and frequency domain) and relevant filters is carried out using the results from AE-monitored fatigue tests. It is verified that duration-amplitude (D-A) filters are effective in discriminating against noise in steel fatigue tests. The study helps to identify an appropriate AE data filtering protocol for field implementations.
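A duration-amplitude filter of the kind verified above accepts an AE hit only if its duration is plausible for its amplitude. The sketch below shows the mechanism with hypothetical band limits; the actual limits must be calibrated from fatigue-test data and are not given in the abstract.

```python
def da_filter(hits, d_min_us, d_max_us):
    """Duration-amplitude (D-A) filter sketch: keep only AE hits whose
    duration falls inside an amplitude-dependent acceptance band.
    hits: list of (amplitude_dB, duration_us) tuples;
    d_min_us / d_max_us: callables giving the band limits vs. amplitude."""
    kept = []
    for amp, dur in hits:
        if d_min_us(amp) <= dur <= d_max_us(amp):
            kept.append((amp, dur))
    return kept

# Hypothetical linear band: genuine crack-growth hits are assumed to have
# durations that grow with amplitude; everything outside is treated as noise.
lo = lambda a: 10.0 * (a - 40.0)
hi = lambda a: 40.0 * (a - 40.0) + 200.0
hits = [(45.0, 80.0),    # inside the band -> kept
        (45.0, 800.0),   # too long for its amplitude -> rejected
        (60.0, 5.0)]     # too short -> rejected
kept = da_filter(hits, lo, hi)
```

Because the filter works on two already-extracted waveform features, it is cheap enough to run on the sensing node itself, which supports the energy-conservation goal mentioned in the abstract.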
Clinical system for non-invasive in situ monitoring of gases in the human paranasal sinuses.
Lewander, Märta; Guan, Zuguang; Svanberg, Katarina; Svanberg, Sune; Svensson, Tomas
2009-06-22
We present a portable system for non-invasive, simultaneous sensing of molecular oxygen (O2) and water vapor (H2O) in the human paranasal cavities. The system is based on high-resolution tunable diode laser spectroscopy (TDLAS) and digital wavelength modulation spectroscopy (dWMS). Since optical interference and non-ideal tuning of the diode lasers render signal processing complex, we focus on Fourier analysis of dWMS signals and procedures for removal of background signals. Clinical data are presented, and exhibit a significant improvement in signal-to-noise with respect to earlier work. The in situ detection limit, in terms of absorption fraction, is about 5×10^-5 for oxygen and 5×10^-4 for water vapor, but varies between patients due to differences in light attenuation. In addition, we discuss the use of water vapor as a reference in quantification of in situ oxygen concentration in detail. In particular, light propagation aspects are investigated by employing photon time-of-flight spectroscopy.
Liu, Yan; Song, Yang; Madahar, Vipul; Liao, Jiayu
2012-03-01
Förster resonance energy transfer (FRET) technology has been widely used in biological and biomedical research, and it is a very powerful tool for elucidating protein interactions in either dynamic or steady state. SUMOylation (the process of SUMO [small ubiquitin-like modifier] conjugation to substrates) is an important posttranslational protein modification with critical roles in multiple biological processes. Conjugating SUMO to substrates requires an enzymatic cascade. Sentrin/SUMO-specific proteases (SENPs) act as an endopeptidase to process the pre-SUMO or as an isopeptidase to deconjugate SUMO from its substrate. To fully understand the roles of SENPs in the SUMOylation cycle, it is critical to understand their kinetics. Here, we report a novel development of a quantitative FRET-based protease assay for SENP1 kinetic parameter determination. The assay is based on the quantitative analysis of the FRET signal from the total fluorescent signal at the acceptor emission wavelength, which consists of three components: donor (CyPet-SUMO1) emission, acceptor (YPet) emission, and the FRET signal during the digestion process. Subsequently, we developed novel theoretical and experimental procedures to determine the kinetic parameters, k_cat, K_M, and catalytic efficiency (k_cat/K_M), of the SENP1 catalytic domain toward pre-SUMO1. Importantly, the general principles of this quantitative FRET-based protease kinetic determination can be applied to other proteases. Copyright © 2011 Elsevier Inc. All rights reserved.
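The three-component decomposition of the acceptor-channel signal described above can be sketched as a generic sensitized-emission correction: the FRET component is what remains after subtracting the donor bleed-through and the directly excited acceptor contribution, each scaled by a coefficient calibrated from control samples. The formula and coefficient values below are a standard illustration under those assumptions, not the authors' exact equations.

```python
def fret_signal(total_DA, donor_DD, acceptor_AA, x_bleed, y_cross):
    """Three-component decomposition of the acceptor-channel signal under
    donor excitation:
        total = donor bleed-through + directly excited acceptor + FRET.
    x_bleed and y_cross are ratios calibrated from donor-only and
    acceptor-only control samples (hypothetical values here)."""
    return total_DA - x_bleed * donor_DD - y_cross * acceptor_AA

# Hypothetical calibration: 12% donor bleed-through, 8% acceptor
# cross-excitation, applied to example channel intensities.
fret = fret_signal(total_DA=1000.0, donor_DD=2000.0,
                   acceptor_AA=1500.0, x_bleed=0.12, y_cross=0.08)
```

Tracking this isolated FRET component over a digestion time course is what lets the substrate-depletion curves be converted into k_cat and K_M estimates.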
Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition
NASA Astrophysics Data System (ADS)
Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding
2018-02-01
The photoacoustic (PA) signal of an ideal optically absorbing particle is a single N-shape wave. PA signals of a complicated biological tissue can be considered as the combination of individual N-shape waves. However, the N-shape wave basis not only complicates the subsequent work, but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing applied directly to the raw signals, including deconvolution and empirical mode decomposition (EMD). During the deconvolution procedure, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) which is measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals under two constraints, positive polarity and spectrum consistency. With our proposed method, the reconstructed PA images yield more detailed structural information. Micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point source model and a blood vessel model. In the future, our study might hold potential for clinical PA imaging as it can help to distinguish micro-structures in the optimized images and even measure the size of objects from deconvolved signals.
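Deconvolving a raw signal with a measured PSF is typically done in the frequency domain with some regularization. The sketch below uses Wiener deconvolution as one common choice; the paper does not specify the deconvolution variant, and the PSF, regularization constant, and test signal here are illustrative (the EMD stage is omitted).

```python
import numpy as np

def wiener_deconv(y, psf, snr=100.0):
    """Frequency-domain Wiener deconvolution of a raw signal with a
    measured point spread function. The regularization term 1/snr keeps
    the division stable where the PSF spectrum is small."""
    n = len(y)
    H = np.fft.fft(psf, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(np.fft.fft(y) * G))

# Blur a sparse "micro-structure" signal with a short PSF, then recover it.
x = np.zeros(256)
x[50] = 1.0
x[60] = 0.5
psf = np.array([0.2, 0.6, 0.2])
y = np.convolve(x, psf)[:256]
x_hat = wiener_deconv(y, psf)
```

After deconvolution the two nearby point responses are restored to distinct, correctly located spikes, which is the separation of adjacent micro-structures that the paper targets.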
EARLINET Single Calculus Chain - technical - Part 1: Pre-processing of raw lidar data
NASA Astrophysics Data System (ADS)
D'Amico, G.; Amodeo, A.; Mattis, I.; Freudenthaler, V.; Pappalardo, G.
2015-10-01
In this paper we describe an automatic tool for the pre-processing of lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. The ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, the ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. The ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of the ELPP module, particular attention has been paid to make the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of the ELPP module is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of the ELPP module. The whole SCC has been tested with the same synthetic data sets, which were used for the EARLINET algorithm inter-comparison exercise. The ELPP module has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.
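Two of the instrumental corrections listed above, dead-time correction and background subtraction, have simple canonical forms. The sketch below uses the standard non-paralyzable dead-time formula and a far-range background estimate; these are generic textbook versions, not ELPP's actual implementation, and the dead time and bin values are illustrative.

```python
import numpy as np

def deadtime_correct(counts, bin_s, tau_s):
    """Non-paralyzable dead-time correction for photon-counting profiles:
        n_true = n_obs / (1 - n_obs * tau)
    with n_obs the observed count rate and tau the system dead time."""
    rate = counts / bin_s                       # observed count rate (Hz)
    rate_true = rate / (1.0 - rate * tau_s)
    return rate_true * bin_s

def subtract_background(profile, bg_bins=100):
    """Atmospheric/electronic background subtraction: estimate the
    background from the far-range tail of the profile (assumed
    signal-free) and remove it."""
    return profile - profile[-bg_bins:].mean()

# Illustrative values: 1 ms range bins, 5 ns dead time. The correction is
# largest where the count rate is highest (near range).
counts = np.array([1000.0, 100.0, 10.0])
corrected = deadtime_correct(counts, bin_s=1e-3, tau_s=5e-9)
```

Note that the corrections must be applied in the right order (dead time before background subtraction), since the dead-time formula is only valid for raw observed counts.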
NASA Astrophysics Data System (ADS)
Gupta, A.; Singh, P. J.; Gaikwad, D. Y.; Udupa, D. V.; Topkar, A.; Sahoo, N. K.
2018-02-01
An experimental setup is developed for the trace-level detection of heavy water (HDO) using the off-axis integrated cavity output spectroscopy technique. The absorption spectrum of water samples is recorded in the spectral range of 7190.7 cm-1 to 7191.5 cm-1 with a diode laser as the light source. From the recorded water vapor absorption spectrum, the heavy water concentration is determined from the HDO and water lines. The effect of cavity gain nonlinearity with per-pass absorption is studied. The signal processing and data fitting procedure is devised to obtain linear calibration curves by including nonlinear cavity gain effects in the calculation. Initial calibration of mirror reflectivity is performed by measurements on a natural water sample. The signal processing and data fitting method has been validated by the measurement of the HDO concentration in water samples over a wide range, from 20 ppm to 2280 ppm, showing a linear calibration curve. The average measurement time is about 30 s. The experimental technique presented in this paper could be applied to the development of a portable instrument for the fast measurement of water isotopic composition in heavy water plants and for the detection of heavy water leaks in pressurized heavy water reactors.
Combined analysis of cortical (EEG) and nerve stump signals improves robotic hand control.
Tombini, Mario; Rigosa, Jacopo; Zappasodi, Filippo; Porcaro, Camillo; Citi, Luca; Carpaneto, Jacopo; Rossini, Paolo Maria; Micera, Silvestro
2012-01-01
Interfacing an amputee's upper-extremity stump nerves to control a robotic hand requires training of the individual and algorithms to process interactions between cortical and peripheral signals. The aim was to evaluate, for the first time, whether EEG-driven analysis of peripheral neural signals recorded while an amputee practices could improve the classification of motor commands. Four thin-film longitudinal intrafascicular electrodes (tf-LIFE-4) were implanted in the median and ulnar nerves of the stump in the distal upper arm for 4 weeks. Artificial intelligence classifiers were implemented to analyze LIFE signals recorded while the participant tried to perform 3 different hand and finger movements as pictures representing these tasks were randomly presented on a screen. In the final week, the participant was trained to perform the same movements with a robotic hand prosthesis through modulation of tf-LIFE-4 signals. To improve the classification performance, an event-related desynchronization/synchronization (ERD/ERS) procedure was applied to EEG data to identify the exact timing of each motor command. Real-time control of neural (motor) output was achieved by the participant. By focusing electroneurographic (ENG) signal analysis on an EEG-driven time window, movement classification performance improved. After training, the participant regained normal modulation of background rhythms for movement preparation (α/β band desynchronization) in the sensorimotor area contralateral to the missing limb. Moreover, coherence analysis found a restored α band synchronization of the Rolandic area with frontal and parietal ipsilateral regions, similar to that observed in the opposite hemisphere for movement of the intact hand. Of note, phantom limb pain (PLP) resolved for several months.
Combining information from both cortical (EEG) and stump nerve (ENG) signals improved the classification performance compared with tf-LIFE signals processing alone; training led to cortical reorganization and mitigation of PLP.
NASA Astrophysics Data System (ADS)
Yang, Lei; Gong, Jie; Ume, I. Charles
2014-02-01
In modern surface mount packaging technologies, such as flip chips, chip scale packages, and ball grid arrays (BGA), chips are attached to the substrates/printed wiring board (PWB) using solder bump interconnections. The quality of solder bumps between the chips and the substrate/board is difficult to inspect. The laser ultrasonic-interferometric technique has proved to be a promising approach for solder bump inspection because of its noncontact and nondestructive characteristics. Different indicators extracted from the received signals have been used to predict potential defects, such as the correlation coefficient, error ratio, and frequency shifting. However, a fundamental understanding of the chip behavior under laser ultrasonic inspection is still missing. Specifically, it is not clear whether the out-of-plane displacements detected by the laser interferometer were due to wave propagation or structural vibration when the chip was excited by the pulsed laser. In addition, the received signals are found to be chip dependent. Both challenges impede the interpretation of acquired signals. In this paper, a C-scan method is proposed to study the underlying phenomena during laser ultrasonic inspection. The full chip was inspected, and the response of the chip under laser excitation was visualized in a movie generated from the acquired signals. A BGA chip was investigated to demonstrate the effectiveness of this method. By characterizing signals using the discrete wavelet transform (DWT), both ultrasonic wave propagation and vibration were observed. Their separation was successfully achieved using an ideal band-pass filter and visualized in the resulting movies. The observed ultrasonic waves were characterized and their respective speeds were measured by applying a 2-D FFT. The C-scan method, combined with different digital signal processing techniques, proved to be a very effective methodology for learning the behavior of chips under laser excitation.
This general procedure can be applied to any unknown chip before inspection. A wealth of information can be provided by this learning procedure, which greatly benefits the interpretation of inspection signals afterwards.
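The "ideal band-pass filter" used to separate vibration from wave propagation can be sketched as a brick-wall filter in the frequency domain. The sampling rate and band edges below are illustrative, not the paper's values.

```python
import numpy as np

def ideal_bandpass(x, fs, f_lo, f_hi):
    """Brick-wall band-pass via the FFT: zero every frequency bin outside
    [f_lo, f_hi] and invert. Applied per scan point, this separates
    low-frequency structural vibration from the higher-frequency
    ultrasonic wave packets in the out-of-plane displacement signals."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(x))
```

An ideal filter is acceptable here because the signals are processed offline; a causal real-time filter would instead trade off roll-off steepness against ringing.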
On the effects of signal processing on sample entropy for postural control.
Lubetzky, Anat V; Harel, Daphna; Lubetzky, Eyal
2018-01-01
Sample entropy, a measure of time series regularity, has become increasingly popular in postural control research. We are developing a virtual reality assessment of sensory integration for postural control in people with vestibular dysfunction and wished to apply sample entropy as an outcome measure. However, despite the common use of sample entropy to quantify postural sway, we found a lack of consistency in the literature regarding center-of-pressure signal manipulations prior to the computation of sample entropy. We therefore wished to investigate the effect of parameter choices and signal processing on participants' sample entropy outcomes. For that purpose, we compared center-of-pressure sample entropy data between patients with vestibular dysfunction and age-matched controls. Within our assessment, participants observed virtual reality scenes while standing on the floor or on a compliant surface. We then analyzed the effects of: modification of the radius of similarity (r) and the embedding dimension (m); down-sampling; and filtering, differencing, or detrending. When analyzing the raw center-of-pressure data, we found a significant main effect of surface in the medio-lateral and anterior-posterior directions across r's and m's. We also found a significant group × surface interaction in the medio-lateral direction when r was 0.05 or 0.1, with a monotonic increase in p value with increasing r for both m's. These effects were maintained with down-sampling by 2, 3, and 4 and with detrending, but not with filtering and differencing. Based on these findings, we suggest that for sample entropy to be compared across postural control studies, there needs to be increased consistency, particularly in signal handling prior to the calculation of sample entropy. Procedures such as filtering, differencing, or detrending affect sample entropy values and could artificially alter the time series pattern. Therefore, if such procedures are performed, they should be well justified.
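For reference, the sample entropy statistic discussed above has a standard definition that fits in a few lines. This is a textbook implementation (quadratic in series length), not the authors' code; r is given in the units of the series, whereas studies typically scale it by the series' standard deviation.

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) = -ln(A / B), where B counts pairs of
    length-m templates lying within tolerance r of each other (Chebyshev
    distance) and A counts the same for length m+1; self-matches are
    excluded. Lower values indicate a more regular series."""
    n = len(x)

    def matches(length):
        count = 0
        for i in range(n - length):
            for j in range(i + 1, n - length + 1):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    count += 1
        return count

    b = matches(m)
    a = matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

Because A and B are finite counts, operations that change the sampling rate or the amplitude distribution of the series (down-sampling, filtering, differencing, detrending) directly change the match counts, which is why the pre-processing choices examined in the study matter.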
BayesMotif: de novo protein sorting motif discovery from impure datasets.
Hu, Jianjun; Zhang, Fan
2010-01-18
Protein sorting is the process by which newly synthesized proteins are transported to their target locations within or outside of the cell. This process is precisely regulated by protein sorting signals in different forms. A major category of sorting signals consists of amino acid sub-sequences usually located at the N-terminals or C-terminals of protein sequences. Genome-wide experimental identification of protein sorting signals is extremely time-consuming and costly. Effective computational algorithms for de novo discovery of protein sorting signals are needed to improve the understanding of protein sorting mechanisms. We formulated the protein sorting motif discovery problem as a classification problem and proposed a Bayesian-classifier-based algorithm (BayesMotif) for de novo identification of a common type of protein sorting motif in which a highly conserved anchor is present along with a less conserved motif region. A false positive removal procedure is developed to iteratively remove sequences that are unlikely to contain true motifs, so that the algorithm can identify motifs from impure input sequences. Experiments on both implanted motif datasets and real-world datasets showed that the enhanced BayesMotif algorithm can identify anchored sorting motifs from pure or impure protein sequence datasets. They also show that the false positive removal procedure can help identify true motifs even when only 20% of the input sequences contain true motif instances. We proposed BayesMotif, a novel Bayesian-classification-based algorithm for de novo discovery of a special category of anchored protein sorting motifs from impure datasets. Compared to conventional motif discovery algorithms such as MEME, our algorithm can find less-conserved motifs with short, highly conserved anchors.
Our algorithm also has the advantage of easy incorporation of additional meta-sequence features such as hydrophobicity or charge of the motifs which may help to overcome the limitations of PWM (position weight matrix) motif model.
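The per-sequence scoring step inside such a Bayesian motif classifier is a position-weight-matrix log-odds score. The sketch below is generic (the PWM and background probabilities are hypothetical placeholders, not BayesMotif parameters); in BayesMotif the model is retrained iteratively while low-scoring sequences are removed as likely false positives.

```python
import math

def log_odds(window, pwm, background):
    """Log-odds of a candidate motif window under a position weight
    matrix versus a background residue model: sum over positions of
    log(pwm[i][c] / background[c]). Positive scores favor the motif
    model; both inputs are probability dictionaries learned elsewhere."""
    return sum(math.log(pwm[i][c] / background[c]) for i, c in enumerate(window))
```

Meta-sequence features such as hydrophobicity or charge, mentioned above, would enter as additional likelihood terms multiplied into (added to, in log space) the same score.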
A new calibration code for the JET polarimeter.
Gelfusa, M; Murari, A; Gaudio, P; Boboc, A; Brombin, M; Orsitto, F P; Giovannozzi, E
2010-05-01
An equivalent model of the JET polarimeter is presented, which overcomes the drawbacks of previous versions of the fitting procedures used to provide calibrated results. First, the signal processing electronics has been simulated, confirming that it is still working within the original specifications. Then the effective optical path of both the vertical and lateral chords has been implemented to produce the calibration curves. This principled approach to the model yields a single procedure that can be applied to any manual calibration and remains valid until the following one. The optical model of the chords is then applied to derive the plasma measurements. The results are in good agreement with the estimates of the most advanced full-wave propagation code available and have been benchmarked against other diagnostics. The devised procedure has also proved to work properly for the most recent campaigns and high-current experiments.
Alsep data processing: How we processed Apollo Lunar Seismic Data
NASA Technical Reports Server (NTRS)
Latham, G. V.; Nakamura, Y.; Dorman, H. J.
1979-01-01
The Apollo lunar seismic station network gathered data continuously at a rate of 3 × 10^8 bits per day for nearly eight years, until its termination in September 1977. The data were processed and analyzed using a PDP-15 minicomputer. On average, 1500 long-period seismic events were detected yearly. Automatic event detection and identification schemes proved unsuccessful because of occasional high noise levels and, above all, the risk of overlooking unusual natural events. The processing procedures finally settled on consist of first plotting all the data on a compressed time scale, visually picking events from the plots, transferring event data to separate sets of tapes, and performing detailed analyses using the latter. Many problems remain, especially in the automatic processing of extraterrestrial seismic signals.
Wavelet imaging cleaning method for atmospheric Cherenkov telescopes
NASA Astrophysics Data System (ADS)
Lessard, R. W.; Cayón, L.; Sembroski, G. H.; Gaidos, J. A.
2002-07-01
We present a new method of image cleaning for imaging atmospheric Cherenkov telescopes. The method is based on the utilization of wavelets to identify noise pixels in images of gamma-ray and hadronic induced air showers. This method selects more signal pixels with Cherenkov photons than traditional image processing techniques. In addition, the method is equally efficient at rejecting pixels with noise alone. The inclusion of more signal pixels in an image of an air shower allows for a more accurate reconstruction, especially at lower gamma-ray energies that produce low levels of light. We present the results of Monte Carlo simulations of gamma-ray and hadronic air showers which show improved angular resolution using this cleaning procedure. Data from the Whipple Observatory's 10-m telescope are utilized to show the efficacy of the method for extracting a gamma-ray signal from the background of hadronic generated images.
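A minimal stand-in for the wavelet cleaning idea is a one-level 2-D Haar transform with hard thresholding of the detail bands: uncorrelated noise pixels produce small, sign-alternating detail coefficients, while the correlated Cherenkov-light structure survives in the approximation band. This sketch is illustrative only; the paper's procedure uses a full multiscale wavelet analysis, and the threshold here is an assumption.

```python
import numpy as np

def haar_clean(img, thresh):
    """One-level orthonormal 2-D Haar transform (image dimensions must be
    even), hard-threshold the detail bands, and reconstruct."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 2
    h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 2
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 2
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 2
    for band in (h, v, d):
        band[np.abs(band) < thresh] = 0.0  # suppress noise-like detail
    out = np.empty(img.shape, dtype=float)
    out[0::2, 0::2] = (a + h + v + d) / 2
    out[0::2, 1::2] = (a - h + v - d) / 2
    out[1::2, 0::2] = (a + h - v - d) / 2
    out[1::2, 1::2] = (a - h - v + d) / 2
    return out
```

With thresh = 0 the transform pair reconstructs the image exactly, which is a useful sanity check before tuning the threshold against night-sky noise levels.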
A Novel and Simple Spike Sorting Implementation.
Petrantonakis, Panagiotis C; Poirazi, Panayiota
2017-04-01
Monitoring the activity of multiple individual neurons that fire spikes in the vicinity of an electrode, a procedure known as Spike Sorting (SS), is one of the most important tools in contemporary neuroscience for reverse-engineering the brain. As recording electrode technology rapidly evolves toward integrating thousands of electrodes in a confined spatial setting, the algorithms used to monitor individual neurons from recorded signals must become even more reliable and computationally efficient. In this work, we propose a novel framework for the SS approach in which a single-step processing of the raw (unfiltered) extracellular signal is sufficient for both the detection and the sorting of the activity of individual neurons. Despite its simplicity, the proposed approach exhibits performance comparable to state-of-the-art approaches, especially for spike detection in noisy signals, and paves the way for a new family of SS algorithms with the potential for multi-recording, fast, on-chip implementations.
NASA Astrophysics Data System (ADS)
Hu, Chongqing; Li, Aihua; Zhao, Xingyang
2011-02-01
This paper proposes a multivariate statistical analysis approach to processing the instantaneous engine speed signal for the purpose of locating multiple misfire events in internal combustion engines. The state of each cylinder is described by a characteristic vector extracted from the instantaneous engine speed signal following a three-step procedure. These characteristic vectors are considered as the values of various process parameters of an engine cycle; determination of the occurrence of misfire events and identification of misfiring cylinders can therefore be accomplished by a principal component analysis (PCA) based pattern recognition methodology. The proposed algorithm can be implemented easily in practice because the threshold can be defined adaptively, without information about the operating conditions. In addition, the effect of torsional vibration on the engine speed waveform is interpreted as the presence of a 'super powerful' cylinder, which is also isolated by the algorithm. The misfiring cylinder and the super powerful cylinder are often adjacent in the firing sequence; thus, missed detections and false alarms can be avoided effectively by checking the relationship between the cylinders.
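The PCA-plus-adaptive-threshold idea can be sketched as follows. This is a generic outlier detector, not the paper's algorithm: the three-step feature extraction is not reproduced, and the robust threshold rule is an assumption standing in for the paper's adaptive threshold.

```python
import numpy as np

def flag_abnormal_cylinders(vectors, k=3.0):
    """Project mean-centered characteristic vectors (one per cylinder per
    cycle) onto the first principal component and flag entries whose
    score deviates from the median by more than k robust standard
    deviations (MAD-based). The threshold adapts to the score
    distribution itself, so no operating-condition data are needed."""
    centered = vectors - vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[0]                 # first-principal-component scores
    med = np.median(scores)
    sigma = 1.4826 * np.median(np.abs(scores - med))  # robust scale estimate
    return np.abs(scores - med) > k * sigma
```

Because a misfire and a torsional-vibration "super powerful" cylinder push scores in opposite directions along the same component, both appear as flagged outliers and can then be compared by firing order.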
A large-scale and robust dynamic MRM study of colorectal cancer biomarkers.
You, Jia; Kao, Athit; Dillon, Roslyn; Croner, Lisa J; Benz, Ryan; Blume, John E; Wilcox, Bruce
2018-06-25
Over the past 20 years, mass spectrometry (MS) has emerged as a dynamic tool for proteomics biomarker discovery. However, published MS biomarker candidates often do not translate to the clinic, failing during attempts at independent replication. The cause can be shortcomings in study design, sample quality, assay quantitation, and/or quality/process control. To address these shortcomings, we developed an MS workflow in accordance with Tier 2 measurement requirements for targeted peptides, defined by the Clinical Proteomic Tumor Analysis Consortium (CPTAC) "fit-for-purpose" approach, using dynamic multiple reaction monitoring (dMRM), which measures specific peptide transitions during predefined retention time (RT) windows. We describe the development of a robust multiplex dMRM assay measuring 641 proteotypic peptides from 392 colorectal cancer (CRC) related proteins, and the procedures to track and handle sample processing and instrument variation over a four-month study, during which the assay measured blood samples from 1045 patients with CRC symptoms. After data collection, transitions were filtered by signal quality metrics before entering receiver operating characteristic (ROC) analysis. The results demonstrated CRC signal carried by 127 proteins in the symptomatic population. The workflow might be further developed to build Tier 1 assays for clinical tests identifying symptomatic individuals at elevated risk of CRC. We developed a dMRM MS method with the rigor of a Tier 2 assay as defined by the CPTAC "fit-for-purpose" approach [1]. Using quality and process control procedures, the assay was used to quantify 641 proteotypic peptides representing 392 CRC-related proteins in plasma from 1045 CRC-symptomatic patients. To our knowledge, this is the largest MRM assay applied in the largest study of its kind to date. The results showed that 127 of the proteins carried univariate CRC signal in the symptomatic population.
This large number of single biomarkers bodes well for future development of multivariate classifiers to distinguish CRC in the symptomatic population. Copyright © 2018. Published by Elsevier B.V.
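The per-protein ROC analysis mentioned above reduces, for a single marker, to the rank-based AUC (equivalent to the Mann-Whitney U statistic). This generic implementation is shown for clarity; it is not the authors' pipeline.

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve computed by ranking: the probability that
    a randomly chosen positive (e.g., CRC) sample scores higher than a
    randomly chosen negative (control) sample; ties count as 1/2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 means no univariate signal; markers whose confidence interval excludes 0.5 are the "signal-carrying" proteins in the sense used above.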
Lotte, Fabien; Larrue, Florian; Mühl, Christian
2013-01-01
While recent research on Brain-Computer Interfaces (BCI) has highlighted their potential for many applications, they remain barely used outside laboratories. The main reason is their lack of robustness. Indeed, with current BCI, mental state recognition is usually slow and often incorrect. Spontaneous BCI (i.e., mental imagery-based BCI) often rely on mutual learning efforts by the user and the machine, with BCI users learning to produce stable ElectroEncephaloGraphy (EEG) patterns (spontaneous BCI control being widely acknowledged as a skill) while the computer learns to automatically recognize these EEG patterns using signal processing. Most research so far has focused on signal processing, largely neglecting the human in the loop. However, how well the user masters the BCI skill is also a key element explaining BCI robustness: if the user is not able to produce stable and distinct EEG patterns, then no signal processing algorithm will be able to recognize them. Unfortunately, despite the importance of BCI training protocols, they have scarcely been studied so far and have been used mostly unchanged for years. In this paper, we argue that current human training approaches for spontaneous BCI are most likely inappropriate. We study the instructional design literature in order to identify the key requirements and guidelines for a successful training procedure that promotes good and efficient skill learning. This literature study highlights that current spontaneous BCI user training procedures satisfy very few of these requirements and hence are likely to be suboptimal. We therefore identify the flaws in BCI training protocols according to instructional design principles at several levels: in the instructions provided to the user, in the tasks he/she has to perform, and in the feedback provided.
For each level, we propose new research directions that are theoretically expected to address some of these flaws and to help users learn the BCI skill more efficiently. PMID:24062669
Huang, Ming-Xiong; Anderson, Bill; Huang, Charles W.; Kunde, Gerd J.; Vreeland, Erika C.; Huang, Jeffrey W.; Matlashov, Andrei N.; Karaulanov, Todor; Nettles, Christopher P.; Gomez, Andrew; Minser, Kayla; Weldon, Caroline; Paciotti, Giulio; Harsh, Michael; Lee, Roland R.; Flynn, Edward R.
2017-01-01
Superparamagnetic Relaxometry (SPMR) is a highly sensitive technique for the in vivo detection of tumor cells and may improve early-stage detection of cancers. SPMR employs superparamagnetic iron oxide nanoparticles (SPION). After a brief magnetizing pulse is used to align the SPION, SPMR measures the time decay of the SPION using Superconducting Quantum Interference Device (SQUID) sensors. Substantial research has been carried out on developing the SQUID hardware and improving the properties of the SPION. However, little research has been done on the pre-processing of sensor signals and the post-processing source modeling in SPMR. In the present study, we illustrate new pre-processing tools that were developed to: 1) remove trials contaminated with artifacts, 2) evaluate and ensure that a single decay process associated with bound SPION exists in the data, 3) automatically detect and correct flux jumps, and 4) accurately fit the sensor signals with different decay models. Furthermore, we developed an automated approach based on a multi-start dipole imaging technique to obtain the locations and magnitudes of multiple magnetic sources without initial guesses from the users. A regularization process was implemented to solve the ambiguity issue related to the SPMR source variables. A procedure based on a reduced chi-square cost function was introduced to objectively obtain the adequate number of dipoles that describe the data. The new pre-processing tools and multi-start source imaging approach have been successfully evaluated using phantom data. In conclusion, these tools and the multi-start source modeling approach substantially enhance the accuracy and sensitivity of detecting and localizing sources from SPMR signals. Furthermore, the multi-start approach with regularization provided robust and accurate solutions for a poor-SNR condition corresponding to an SPMR detection sensitivity on the order of 1000 cells.
We believe such algorithms will help establish industrial standards for SPMR when applying the technique in pre-clinical and clinical settings. PMID:28072579
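The reduced chi-square model-order rule described above can be sketched generically (the tolerance value is an assumption; the paper's cost function operates on the fitted field maps):

```python
import numpy as np

def reduced_chi_square(residuals, sigma, n_params):
    """chi^2_red = sum((r_i / sigma)^2) / (N - p), where N is the number
    of sensor measurements and p the number of fitted dipole parameters."""
    r = np.asarray(residuals, dtype=float)
    return float(np.sum((r / sigma) ** 2) / (r.size - n_params))

def adequate_n_dipoles(chi2_by_order, tol=0.1):
    """Smallest model order whose reduced chi-square is within tol of 1:
    fewer dipoles under-fit the measured fields, while additional dipoles
    would begin fitting sensor noise."""
    for order, chi2 in enumerate(chi2_by_order, start=1):
        if abs(chi2 - 1.0) <= tol:
            return order
    return len(chi2_by_order)
```

Stopping near chi^2_red = 1 is what makes the dipole count objective: it ties model complexity to the known noise level rather than to a user's guess.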
Muñoz-Cobo, José Luis; Chiva, Sergio; Méndez, Santos; Monrós, Guillem; Escrivá, Alberto; Cuadros, José Luis
2017-05-10
This paper describes all the procedures and methods currently used at UPV (Universitat Politécnica de Valencia) and UJI (University Jaume I) for the development and use of sensors for multi-phase flow analysis in vertical pipes. It also describes the methods that we use to obtain the values of the two-phase flow magnitudes from the sensor signals, and the validation and cross-verification methods developed to check the consistency of the results obtained for these magnitudes with the sensors. First, we provide information about the procedures used to build the multi-sensor conductivity probes and some of the tests performed with different materials to avoid sensor degradation issues. In addition, we provide information about the characteristics of the electric circuits that feed the sensors. We then describe the data acquisition of the conductivity probes, the signal conditioning, and the data processing, including the device designed to automate the measurement process by moving the sensors inside the channels with computer-controlled stepper motors. Next, we explain the methods used for bubble identification and categorization. Finally, we describe the methodology used to obtain the two-phase flow information from the sensor signals, including the void fraction, gas velocity, Sauter mean diameter, and interfacial area concentration. The last part of this paper is devoted to the conductance probes developed for annular flow analysis, which includes the analysis of the interfacial waves produced in annular flow and requires a different type of sensor.
Moyer, Jason T; Gnatkovsky, Vadym; Ono, Tomonori; Otáhal, Jakub; Wagenaar, Joost; Stacey, William C; Noebels, Jeffrey; Ikeda, Akio; Staley, Kevin; de Curtis, Marco; Litt, Brian; Galanopoulou, Aristea S
2017-11-01
Electroencephalography (EEG)-the direct recording of the electrical activity of populations of neurons-is a tremendously important tool for diagnosing, treating, and researching epilepsy. Although standard procedures for recording and analyzing human EEG exist and are broadly accepted, there are no such standards for research in animal models of seizures and epilepsy-recording montages, acquisition systems, and processing algorithms may differ substantially among investigators and laboratories. The lack of standard procedures for acquiring and analyzing EEG from animal models of epilepsy hinders the interpretation of experimental results and reduces the ability of the scientific community to efficiently translate new experimental findings into clinical practice. Accordingly, the intention of this report is twofold: (1) to review current techniques for the collection and software-based analysis of neural field recordings in animal models of epilepsy, and (2) to offer pertinent standards and reporting guidelines for this research. Specifically, we review current techniques for signal acquisition, signal conditioning, signal processing, data storage, and data sharing, and include applicable recommendations to standardize collection and reporting. We close with a discussion of challenges and future opportunities, and include a supplemental report of currently available acquisition systems and analysis tools. This work represents a collaboration on behalf of the American Epilepsy Society/International League Against Epilepsy (AES/ILAE) Translational Task Force (TASK1-Workgroup 5), and is part of a larger effort to harmonize video-EEG interpretation and analysis methods across studies using in vivo and in vitro seizure and epilepsy models. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
Muñoz-Cobo, José Luis; Chiva, Sergio; Méndez, Santos; Monrós, Guillem; Escrivá, Alberto; Cuadros, José Luis
2017-01-01
This paper describes all the procedures and methods currently used at UPV (Universitat Politécnica de Valencia) and UJI (University Jaume I) for the development and use of sensors for multi-phase flow analysis in vertical pipes. It also describes the methods that we use to obtain the values of the two-phase flow magnitudes from the sensor signals, and the validation and cross-verification methods developed to check the consistency of the results obtained for these magnitudes with the sensors. First, we provide information about the procedures used to build the multi-sensor conductivity probes and some of the tests performed with different materials to avoid sensor degradation issues. In addition, we provide information about the characteristics of the electric circuits that feed the sensors. We then describe the data acquisition of the conductivity probes, the signal conditioning, and the data processing, including the device designed to automate the measurement process by moving the sensors inside the channels with computer-controlled stepper motors. Next, we explain the methods used for bubble identification and categorization. Finally, we describe the methodology used to obtain the two-phase flow information from the sensor signals, including the void fraction, gas velocity, Sauter mean diameter, and interfacial area concentration. The last part of this paper is devoted to the conductance probes developed for annular flow analysis, which includes the analysis of the interfacial waves produced in annular flow and requires a different type of sensor. PMID:28489035
Frequency domain measurement systems
NASA Technical Reports Server (NTRS)
Eischer, M. C.
1978-01-01
Stable frequency sources and signal processing blocks were characterized by their noise spectra, both discrete and random, in the frequency domain. Conventional measures are outlined, and systems for performing the measurements are described. Broad coverage is given of system configurations that were found useful, and their functioning and areas of application are discussed briefly. Particular attention is given to some of the potential error sources in the measurement procedures and system configurations, in double-balanced-mixer phase detectors, and in the application of measuring instruments.
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Coughlin, Chris; Forsyth, David S.; Welter, John T.
2014-02-01
Progress is presented on the development and implementation of automated data analysis (ADA) software to address the burden of interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail; it follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. ADA processing results are presented for test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions.
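The two screening rules named above can be expressed as simple per-A-scan checks. This sketch is illustrative only: the gate, thresholds, and reference amplitude are hypothetical placeholders, and a production ADA system applies calibration and spatial grouping on top of such tests.

```python
import numpy as np

def ada_flags(ascan, gate, backwall_idx, echo_thresh, backwall_ref, dropout_frac=0.5):
    """Screen one ultrasonic A-scan:
    1) time-of-flight indication: any echo inside the inspection gate
       (between front wall and backwall) exceeding echo_thresh;
    2) backwall amplitude dropout: the backwall echo falling below a
       fraction of a reference amplitude from a known-good region."""
    lo, hi = gate
    tof_indication = bool(np.max(np.abs(ascan[lo:hi])) > echo_thresh)
    dropout = bool(abs(ascan[backwall_idx]) < dropout_frac * backwall_ref)
    return tof_indication, dropout
```

A delamination typically trips both flags at once: it reflects energy early (mid-wall echo) and shadows the backwall.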
1988-01-01
Deblurring: This long-standing research area was wrapped up this year with the preparation of a major tutorial paper. This paper summarizes all of the work...that we have done. The iterative procedures were shown to perform significantly better at the deblurring task than Kalman filtering, Wiener filtering...suited to the resolution of multiple impulsive sources on a uniform background. Such applications occur in radio astronomy and in a number of
Method for distinguishing multiple targets using time-reversal acoustics
Berryman, James G.
2004-06-29
A method for distinguishing multiple targets using time-reversal acoustics. Time-reversal acoustics uses an iterative process to determine the optimum signal for locating a strongly reflecting target in a cluttered environment. An acoustic array sends a signal into a medium and then receives the returned/reflected signal. This returned/reflected signal is time-reversed and sent back into the medium, again and again, until the signal being sent and received no longer changes. At that point, the array has isolated the largest eigenvalue/eigenvector combination and has effectively determined the location of a single target in the medium (the one that is most strongly reflecting). After the largest eigenvalue/eigenvector combination has been determined, to determine the location of other targets, the method sends back the time-reversed signals, but with half of them also reversed in sign. There are various possibilities for choosing which half to sign-reverse: the most obvious choices are to reverse every other element in a linear array, or to use a checkerboard pattern in 2D. Then a new send/receive, send-time-reversed/receive iteration can proceed. Often, the first iteration in this sequence will already be close to the desired signal from a second target. In some cases, orthogonalization procedures must be implemented to ensure the returned signals are in fact orthogonal to the first eigenvector found.
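The convergence claim above has a compact linear-algebra reading: for a narrow-band, real-valued model, one send/receive round trip followed by time reversal applies the operator K^T K, so the iteration is power iteration and converges to the dominant eigenvector. The transfer matrix K below is a hypothetical idealization of the medium, not a measured one.

```python
import numpy as np

def time_reversal_focus(K, s0, n_iter=100):
    """Idealized iterative time reversal on an array with transfer matrix
    K: each round emits s, receives K @ s, time-reverses, and re-emits
    with normalization. Equivalent to power iteration on K^T K, so the
    emitted waveform converges to the dominant eigenvector, i.e. the
    signal focusing on the strongest reflector."""
    s = s0 / np.linalg.norm(s0)
    for _ in range(n_iter):
        s = K.T @ (K @ s)      # round trip + time reversal
        s /= np.linalg.norm(s)
    return s
```

Sign-reversing half of the re-emitted elements, as the method proposes, injects a large component roughly orthogonal to this first eigenvector, seeding convergence toward the next-strongest target instead.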
Rapid Prototyping of a Smart Device-based Wireless Reflectance Photoplethysmograph
Ghamari, M.; Aguilar, C.; Soltanpur, C.; Nazeran, H.
2017-01-01
This paper presents the design, fabrication, and testing of a wireless heart rate (HR) monitoring device based on photoplethysmography (PPG) and smart devices. PPG sensors use infrared (IR) light to obtain vital information for assessing cardiac health and other physiologic conditions. The PPG data transferred to a computer undergo further processing to derive the Heart Rate Variability (HRV) signal, which is analyzed to generate quantitative markers of Autonomic Nervous System (ANS) function. The HRV signal has numerous monitoring and diagnostic applications, and wireless connectivity plays an important role in such biomedical instruments. The photoplethysmograph consists of an optical sensor to detect the changes in the light intensity reflected from the illuminated tissue, a signal conditioning unit to prepare the reflected light signal through amplification and filtering, a low-power microcontroller to control and digitize the analog PPG signal, and a Bluetooth module to transmit the digital data to a Bluetooth-based smart device such as a tablet. An Android app then enables the smart device to acquire and digitally display the received PPG signal in real time. The article concludes with the prototyping of the wireless PPG device and the verification procedures for the PPG and HRV signals acquired in a laboratory environment. PMID:28959119
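Deriving HRV markers from a PPG trace reduces to peak detection followed by interval statistics. This is a toy sketch, not the authors' firmware: the local-maximum peak detector and mean-level threshold are assumptions, and a real device would band-pass filter and adapt the threshold.

```python
import numpy as np

def hrv_metrics(ppg, fs):
    """Detect systolic peaks as local maxima above the signal mean,
    convert peak-to-peak spacing to inter-beat intervals (ms), and
    compute two standard time-domain HRV markers:
    SDNN (overall variability) and RMSSD (beat-to-beat variability)."""
    thr = ppg.mean()
    peaks = [i for i in range(1, len(ppg) - 1)
             if ppg[i] > thr and ppg[i] >= ppg[i - 1] and ppg[i] > ppg[i + 1]]
    ibi = np.diff(peaks) / fs * 1000.0            # inter-beat intervals, ms
    sdnn = ibi.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))
    return sdnn, rmssd
```

Both markers are computed on the tablet side after Bluetooth transfer, which is why a stable sampling rate at the microcontroller matters more than on-device processing power.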
Pagan, Marino
2014-01-01
The responses of high-level neurons tend to be mixtures of many different types of signals. While this diversity is thought to allow for flexible neural processing, it presents a challenge for understanding how neural responses relate to task performance and to neural computation. To address these challenges, we have developed a new method to parse the responses of individual neurons into weighted sums of intuitive signal components. Our method computes the weights by projecting a neuron's responses onto a predefined orthonormal basis. Once determined, these weights can be combined into measures of signal modulation; however, in their raw form these signal modulation measures are biased by noise. Here we introduce and evaluate two methods for correcting this bias, and we report that an analytically derived approach produces performance that is robust and superior to a bootstrap procedure. Using neural data recorded from inferotemporal cortex and perirhinal cortex as monkeys performed a delayed-match-to-sample target search task, we demonstrate how the method can be used to quantify the amounts of task-relevant signals in heterogeneous neural populations. We also demonstrate how these intuitive quantifications of signal modulation can be related to single-neuron measures of task performance (d′). PMID:24920017
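The core projection step can be sketched in a few lines. The basis rows and condition labels below are illustrative stand-ins of ours, not the study's actual task variables, and the noise-bias correction that the paper evaluates is omitted.

```python
import numpy as np

# Conditions: 2 images x 2 cognitive states -> 4 mean responses per neuron.
# Rows of B form a predefined orthonormal basis over those conditions.
B = np.array([
    [ 0.5,  0.5,  0.5,  0.5],   # constant (grand mean) component
    [ 0.5,  0.5, -0.5, -0.5],   # image-identity signal
    [ 0.5, -0.5,  0.5, -0.5],   # cognitive-state signal
    [ 0.5, -0.5, -0.5,  0.5],   # image x state interaction
])
assert np.allclose(B @ B.T, np.eye(4))    # rows are orthonormal

r = np.array([10.0, 8.0, 4.0, 2.0])       # one neuron's mean rates (spk/s)
w = B @ r                                  # weights = projections onto basis
# w[1] measures image modulation, w[2] state modulation, etc.
recon = B.T @ w
assert np.allclose(recon, r)               # complete basis: exact reconstruction
```

Squared weights computed from noisy trial averages are biased upward, which is why the paper compares an analytic correction against a bootstrap; this sketch only shows the decomposition itself.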
Estimating proportions of objects from multispectral scanner data
NASA Technical Reports Server (NTRS)
Horwitz, H. M.; Lewis, J. T.; Pentland, A. P.
1975-01-01
Progress is reported in developing and testing methods of estimating, from multispectral scanner data, the proportions of target classes in a scene when there is a significant number of boundary pixels. Procedures were developed to exploit: (1) prior information concerning the number of object classes normally occurring in a pixel, and (2) spectral information extracted from signals of adjoining pixels. Two algorithms, LIMMIX and nine-point mixtures, are described along with supporting processing techniques. An important by-product of the procedures, in contrast to the previous method, is that they are often appropriate when the number of spectral bands is small. Preliminary tests on LANDSAT data sets, where the target classes were (1) lakes and ponds, and (2) agricultural crops, were encouraging.
Solution for the nonuniformity correction of infrared focal plane arrays.
Zhou, Huixin; Liu, Shangqian; Lai, Rui; Wang, Dabao; Cheng, Yubao
2005-05-20
Based on the S-curve model of the detector response of infrared focal plane arrays (IRFPAs), an improved two-point correction algorithm is presented. The algorithm first transforms the nonlinear image data into linear data and then uses the normal two-point algorithm to correct the linear data. The algorithm can effectively overcome the influence of the nonlinearity of the detector's response, and it improves the correction precision and extends the dynamic range of the response. A real-time imaging-signal-processing system for IRFPAs based on a digital signal processor and field-programmable gate arrays is also presented. The nonuniformity correction capability of the presented solution is validated by experimental imaging with a 128 x 128 pixel IRFPA camera prototype.
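The two-stage idea (linearize, then apply the ordinary two-point correction) can be sketched as follows. The logistic S-curve, the per-pixel gain/offset model, and the flux levels are assumptions of ours for illustration, not the paper's calibrated response model.

```python
import numpy as np

rng = np.random.default_rng(1)

S = lambda v: 1.0 / (1.0 + np.exp(-v))           # assumed detector S-curve
S_inv = lambda y: np.log(y / (1.0 - y))          # its inverse (linearization)

gain = 1.0 + 0.1 * rng.standard_normal((8, 8))   # per-pixel nonuniformity
offset = 0.2 * rng.standard_normal((8, 8))

def sense(flux):
    """What the FPA outputs for a uniform incident flux."""
    return S(gain * flux + offset)

# Calibration at two uniform flux levels (e.g. two blackbody temperatures)
x1, x2 = 1.0, 3.0
v1, v2 = S_inv(sense(x1)), S_inv(sense(x2))      # linearized calibration frames
g = (x2 - x1) / (v2 - v1)                        # two-point gain map
o = x1 - g * v1                                  # two-point offset map

def correct(raw):
    """Linearize a raw frame, then apply the two-point correction."""
    return g * S_inv(raw) + o

out = correct(sense(2.0))                        # uniform frame, equal to flux
```

Because the nonlinearity is inverted before the two-point step, the correction is exact for this model at any flux, not just at the two calibration points, which is the claimed advantage over plain two-point correction.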
Thermometric MIP sensor for fructosyl valine.
Rajkumar, Rajagopal; Katterle, Martin; Warsinke, Axel; Möhwald, Helmuth; Scheller, Frieder W
2008-02-28
Interactions of molecularly imprinted polymers containing phenyl boronic acid residues with fructosyl valine, fructose, and pinacol, respectively, are analysed in aqueous solution (pH 11.4) using a flow calorimeter. The reversible formation of (two) cyclic boronic acid diesters per fructosyl molecule generates a 40-fold higher exothermic signal as compared to the control polymer. Binding of pinacol to either the MIP or the control polymer, by contrast, generates a very small endothermic signal, reflecting a negligible contribution of the esterification to the overall process. An "apparent imprinting factor" of 41 is found, which exceeds the respective value from batch binding procedures by a factor of 30. Furthermore, the MIP sensor was used to characterise the cross-reactivity. The influence of shape-selective molecular recognition is discussed.
NASA Technical Reports Server (NTRS)
Praddaude, H. C.; Woskoboinikow, P.
1978-01-01
A thorough discussion of submillimeter laser Thomson scattering for the measurement of ion temperature in plasmas is presented. This technique is very promising and work is being actively pursued on the high power lasers and receivers necessary for its implementation. In this report we perform an overall system analysis of the Thomson scattering technique aimed to: (1) identify problem areas; (2) establish specifications for the main components of the apparatus; (3) study signal processing alternatives and identify the optimum signal handling procedure. Because of its importance for the successful implementation of this technique, we also review the work presently being carried out on the optically pumped submillimeter CH3F and D2O lasers.
47 CFR 80.329 - Safety signals and messages.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 5 2013-10-01 2013-10-01 false Safety signals and messages. 80.329 Section 80... Safety Procedures § 80.329 Safety signals and messages. (a) The safety signal indicates that the station... warnings. (b) In radiotelegraphy, the safety signal consists of three repetitions of the group TTT, sent...
47 CFR 80.329 - Safety signals and messages.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 5 2014-10-01 2014-10-01 false Safety signals and messages. 80.329 Section 80... Safety Procedures § 80.329 Safety signals and messages. (a) The safety signal indicates that the station... warnings. (b) In radiotelegraphy, the safety signal consists of three repetitions of the group TTT, sent...
47 CFR 80.329 - Safety signals and messages.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 5 2012-10-01 2012-10-01 false Safety signals and messages. 80.329 Section 80... Safety Procedures § 80.329 Safety signals and messages. (a) The safety signal indicates that the station... warnings. (b) In radiotelegraphy, the safety signal consists of three repetitions of the group TTT, sent...
An adaptive confidence limit for periodic non-steady conditions fault detection
NASA Astrophysics Data System (ADS)
Wang, Tianzhen; Wu, Hao; Ni, Mengqi; Zhang, Milu; Dong, Jingjing; Benbouzid, Mohamed El Hachemi; Hu, Xiong
2016-05-01
System monitoring has become a major concern in batch processes because the failure rate in non-steady conditions is much higher than in steady ones. A series of PCA-based approaches have already solved problems such as data dimensionality reduction, multivariable decorrelation, and the processing of stationary signals. However, if the data follow a non-Gaussian distribution or the variables contain signal changes, those approaches are not applicable. To address these concerns and to enhance performance in multiperiod data processing, this paper proposes a fault detection method using an adaptive confidence limit (ACL) in periodic non-steady conditions. The proposed ACL method achieves four main enhancements: Longitudinal-Standardization converts non-Gaussian sampling data to Gaussian data; the multiperiod PCA algorithm reduces dimensionality, removes correlation, and improves the monitoring accuracy; the adaptive confidence limit detects faults under non-steady conditions; and the fault-section determination procedure selects the appropriate parameter of the adaptive confidence limit. Analysis of the results clearly shows that the proposed ACL method is superior to other fault detection approaches under periodic non-steady conditions.
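For context, a generic PCA monitoring loop with a fixed Hotelling T² confidence limit looks like the sketch below; the paper's contribution is to make such a limit adaptive across the phases of periodic non-steady operation. All data, dimensions, and the fault magnitude here are illustrative stand-ins of ours.

```python
import numpy as np

rng = np.random.default_rng(2)

# Normal operating data: 3 latent sources observed through 6 variables
Z = rng.standard_normal((500, 3))
A = rng.standard_normal((3, 6))
X = Z @ A + 0.1 * rng.standard_normal((500, 6))

mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd                              # standardize training data
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 3
P = Vt[:k].T                                    # retained loadings (6 x k)
lam = s[:k] ** 2 / (len(X) - 1)                 # score variances

def T2(x):
    """Hotelling T^2 of one new sample in the retained subspace."""
    t = P.T @ ((x - mu) / sd)
    return float(np.sum(t * t / lam))

# Fixed 99% confidence limit estimated from the training distribution
limit = np.percentile([T2(x) for x in X], 99)

faulty = X[0] + np.array([50.0, 0, 0, 0, 0, 0])  # large shift on variable 1
alarm = T2(faulty) > limit                       # fault detected
```

In a periodic non-steady process a single fixed `limit` over-alarms in some phases and under-alarms in others, which is the problem the ACL method addresses by letting the limit vary with the operating phase.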
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolshov, Mikhail A; Kuritsyn, Yu A; Liger, V V
2009-09-30
We report a procedure for temperature and water vapour concentration measurements in an unsteady-state combustion zone using diode laser absorption spectroscopy. The procedure involves measurements of the absorption spectrum of water molecules around 1.39 μm. It has been used to determine hydrogen combustion parameters in M = 2 gas flows in the test section of a supersonic wind tunnel. The relatively high intensities of the absorption lines used have enabled direct absorption measurements. We describe a differential technique for measurements of transient absorption spectra, the procedure we used for primary data processing, and approaches for determining the gas temperature and H2O concentration in the probed zone. The measured absorption spectra are fitted with spectra simulated using parameters from spectroscopic databases. The combustion-time-averaged (~50 ms) gas temperature and water vapour partial pressure in the hot wake region are determined to be 1050 K and 21 Torr, respectively. The large signal-to-noise ratio in our measurements allowed us to assess the temporal behaviour of these parameters. The accuracy of our temperature measurements in the probed zone is ~40 K. (Laser applications and other topics in quantum electronics)
Adaptive near-field beamforming techniques for sound source imaging.
Cho, Yong Thung; Roan, Michael J
2009-02-01
Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for the detection and localization of far-field sound sources. Two sometimes-competing challenges arise in any type of spatial processing: minimizing contributions from directions other than the look direction and minimizing the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side-lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both use near-field beamforming weightings focused at source locations estimated from spherical-wave array manifold vectors with spatial windows). The sound source resolution accuracies of the near-field imaging procedures with different weighting strategies are compared using numerical simulations in both anechoic and reverberant environments with random measurement noise. Experimental results are also given for near-field sound pressure measurements of an enclosed loudspeaker.
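The minimum variance distortionless response weighting mentioned above has a compact closed form, w = R⁻¹d / (dᴴR⁻¹d): unit gain in the look direction, minimum output power from everything else. The sketch below is our far-field toy on a hypothetical 8-element array with one interferer; the geometry and numbers are not the paper's near-field setup.

```python
import numpy as np

rng = np.random.default_rng(3)

m = 8                                                  # number of sensors
# Half-wavelength-spaced line array steering vectors (far-field toy)
d = np.exp(1j * np.pi * np.arange(m) * np.sin(0.3))    # look direction
i = np.exp(1j * np.pi * np.arange(m) * np.sin(-0.8))   # interferer direction

# Sample covariance from interferer-plus-noise snapshots
snaps = np.array([
    i * (rng.standard_normal() + 1j * rng.standard_normal())
    + 0.1 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
    for _ in range(200)
])
R = snaps.conj().T @ snaps / len(snaps)

Rinv_d = np.linalg.solve(R, d)
w = Rinv_d / (d.conj() @ Rinv_d)        # MVDR (Capon) weights

gain_look = abs(w.conj() @ d)           # exactly 1 by construction
gain_intf = abs(w.conj() @ i)           # strongly suppressed
```

The adaptive null placed on the interferer is what the paper trades against main-lobe width when it moves this machinery into the near field with spherical-wave manifold vectors.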
NASA Astrophysics Data System (ADS)
Chang, Seung Jin; Lee, Chun Ku; Shin, Yong-June; Park, Jin Bae
2016-12-01
A multiple chirp reflectometry system with a fault estimation process is proposed to obtain multiple resolution and to measure the degree of a fault in a target cable. The multiple resolution algorithm can localize faults regardless of fault location. The time delay information, derived from the normalized cross-correlation between the incident signal and the bandpass-filtered reflected signals, is converted to a fault location and cable length. The in-phase and quadrature components are obtained by lowpass filtering the mixed product of the incident and reflected signals. Based on these components, the reflection coefficient is estimated by the proposed fault estimation process, including the mixing and filtering procedure. The measurement uncertainty of the experiment is also analyzed according to the Guide to the Expression of Uncertainty in Measurement. To verify the performance of the proposed method, we conduct comparative experiments to detect and measure faults under different conditions. The target cable length and fault positions are designed to reflect the installation environment of the high-voltage cable used in an actual vehicle. To simulate the degree of fault, a variety of termination impedances (10 Ω, 30 Ω, 50 Ω, and 1 kΩ) are used and estimated by the proposed method. The proposed method offers multiple resolution to overcome the blind-spot problem and can assess the state of the fault.
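The time-delay step at the heart of any chirp reflectometry method can be sketched as a cross-correlation peak search. The sampling rate, propagation velocity, echo amplitude, and fault distance below are illustrative values of ours, not the paper's hardware parameters, and the bandpass filtering and I/Q estimation stages are omitted.

```python
import numpy as np

fs = 100e6                                   # sample rate, Hz (assumed)
t = np.arange(0, 50e-6, 1 / fs)              # 50 us record, 5000 samples
f0, k = 1e5, 2e11                            # chirp start frequency, sweep rate
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))   # incident signal

true_delay = 300                             # round-trip delay in samples
reflected = np.zeros_like(chirp)
reflected[true_delay:] = 0.4 * chirp[:-true_delay]        # attenuated echo

xc = np.correlate(reflected, chirp, mode='full')
lag = int(np.argmax(xc)) - (len(chirp) - 1)  # lag of the correlation peak

v = 2e8                                      # assumed propagation speed, m/s
fault_distance = v * (lag / fs) / 2          # one-way distance to the fault
```

The large time-bandwidth product of the chirp is what makes the correlation peak sharp; the echo's relative amplitude (0.4 here) is the quantity the paper's mixing-and-filtering procedure estimates as the reflection coefficient.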
Black, Bryan A; Griffin, Daniel; van der Sleen, Peter; Wanamaker, Alan D; Speer, James H; Frank, David C; Stahle, David W; Pederson, Neil; Copenheaver, Carolyn A; Trouet, Valerie; Griffin, Shelly; Gillanders, Bronwyn M
2016-07-01
High-resolution biogenic and geologic proxies in which one increment or layer is formed per year are crucial to describing natural ranges of environmental variability in Earth's physical and biological systems. However, dating controls are necessary to ensure temporal precision and accuracy; simple counts cannot ensure that all layers are placed correctly in time. Originally developed for tree-ring data, crossdating is the only such procedure that ensures all increments have been assigned the correct calendar year of formation. Here, we use growth-increment data from two tree species, two marine bivalve species, and a marine fish species to illustrate sensitivity of environmental signals to modest dating error rates. When falsely added or missed increments are induced at one and five percent rates, errors propagate back through time and eliminate high-frequency variability, climate signals, and evidence of extreme events while incorrectly dating and distorting major disturbances or other low-frequency processes. Our consecutive Monte Carlo experiments show that inaccuracies begin to accumulate in as little as two decades and can remove all but decadal-scale processes after as little as two centuries. Real-world scenarios may have even greater consequence in the absence of crossdating. Given this sensitivity to signal loss, the fundamental tenets of crossdating must be applied to fully resolve environmental signals, a point we underscore as the frontiers of growth-increment analysis continue to expand into tropical, freshwater, and marine environments.
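A minimal numerical illustration of the dating-error effect (our toy, not the study's data or Monte Carlo design): a single missed increment leaves recent years aligned but shifts every older year by one position, which destroys the correlation with the true signal back through time.

```python
import numpy as np

rng = np.random.default_rng(5)

years = 300
climate = rng.standard_normal(years)                 # true annual signal
tree = climate + 0.1 * rng.standard_normal(years)    # well-dated proxy series

# Counting inward from the most recent increment (index 0), one missed
# ring at position 150 misdates everything older than it by one year.
misdated = np.concatenate([tree[:150], tree[151:], [np.nan]])

r_recent = np.corrcoef(climate[:150], misdated[:150])[0, 1]       # aligned
r_older = np.corrcoef(climate[150:299], misdated[150:299])[0, 1]  # shifted
# r_recent stays near 1 while r_older collapses toward 0
```

Crossdating catches exactly this: the abrupt loss of correlation at the error marks the misplaced increment, which simple ring counting cannot detect.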
Winter, Randolph L; Budke, Christine M
2017-08-15
OBJECTIVE To assess signalment and concurrent disease processes in dogs with aortic thrombotic disease (ATD). DESIGN Retrospective case-control study. ANIMALS Dogs examined at North American veterinary teaching hospitals from 1985 through 2011 with medical records submitted to the Veterinary Medical Database. PROCEDURES Medical records were reviewed to identify dogs with a diagnosis of ATD (case dogs). Five control dogs without a diagnosis of ATD were then identified for every case dog. Data were collected regarding dog age, sex, breed, body weight, and concurrent disease processes. RESULTS ATD was diagnosed in 291 of the 984,973 (0.03%) dogs included in the database. The odds of a dog having ATD did not differ significantly by sex, age, or body weight. Compared with mixed-breed dogs, Shetland Sheepdogs had a significantly higher odds of ATD (OR, 2.59). Protein-losing nephropathy (64/291 [22%]) was the most commonly recorded concurrent disease in dogs with ATD. CONCLUSIONS AND CLINICAL RELEVANCE Dogs with ATD did not differ significantly from dogs without ATD in most signalment variables. Contrary to previous reports, cardiac disease was not a common concurrent diagnosis in dogs with ATD.
Rapid Genetic Analysis of Epithelial-Mesenchymal Signaling During Hair Regeneration
Zhen, Hanson H.; Oro, Anthony E.
2013-01-01
Hair follicle morphogenesis, a complex process requiring interaction between epithelia-derived keratinocytes and the underlying mesenchyme, is an attractive model system to study organ development and tissue-specific signaling. Although hair follicle development is genetically tractable, fast and reproducible analysis of factors essential for this process remains a challenge. Here we describe a procedure to generate targeted overexpression or shRNA-mediated knockdown of factors using lentivirus in a tissue-specific manner. Using a modified version of a hair regeneration model 5, 6, 11, we can achieve robust gain- or loss-of-function analysis in primary mouse keratinocytes or dermal cells to facilitate study of epithelial-mesenchymal signaling pathways that lead to hair follicle morphogenesis. We describe how to isolate fresh primary mouse keratinocytes and dermal cells, which contain dermal papilla cells and their precursors, deliver lentivirus containing either shRNA or cDNA to one of the cell populations, and combine the cells to generate fully formed hair follicles on the backs of nude mice. This approach allows analysis of tissue-specific factors required to generate hair follicles within three weeks and provides a fast and convenient companion to existing genetic models. PMID:23486463
Monaural Sound Localization Based on Reflective Structure and Homomorphic Deconvolution
Park, Yeonseok; Choi, Anthony
2017-01-01
An asymmetric structure around a receiver imposes a particular time delay on sound arriving from a specific direction. This paper presents a monaural sound localization system based on a reflective structure around the microphone. Reflective plates are placed to produce a direction-dependent time delay, which is naturally imposed by convolution with the sound source. The received signal is processed to estimate the dominant time delay by homomorphic deconvolution, which applies the real cepstrum and inverse cepstrum sequentially to derive the propagation response's autocorrelation. Once the localization system accurately estimates this information, the time delay model computes the corresponding reflection for localization. Because of structural limitations, the localization process performs the estimation in two stages: range and angle. A software toolchain spanning propagation physics and algorithm simulation was used to arrive at the optimal 3D-printed structure. Acoustic experiments in an anechoic chamber show that 79.0% of the study-range data from the isotropic signal is correctly detected by the response value, and 87.5% of the specific-direction data from the study-range signal is correctly estimated by the response time. The product of the two rates gives an overall hit rate of 69.1%. PMID:28946625
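The cepstral step can be sketched with a synthetic echo (our illustration; the delay, echo strength, and source signal are assumed): a reflection x[n] = s[n] + a*s[n-d] multiplies the spectrum by (1 + a*exp(-jwd)), and taking log|X| turns that factor into an additive ripple whose cepstrum peaks at quefrency d, the echo delay.

```python
import numpy as np

rng = np.random.default_rng(4)

n, d, a = 4096, 60, 0.5
s = rng.standard_normal(n)                      # broadband source signal
x = s.copy()
x[d:] += a * s[:-d]                             # direct path plus one reflection

spectrum = np.fft.rfft(x)
ceps = np.fft.irfft(np.log(np.abs(spectrum)))   # real cepstrum
# Skip the low-quefrency region, which holds the source's spectral envelope
peak = int(np.argmax(ceps[10:n // 2])) + 10
# peak recovers the echo delay d (60 samples here)
```

Reading the delay off the cepstrum peak, then mapping delay to plate geometry, is the essence of the two-stage range/angle estimation the paper describes.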
Characterization of electroencephalography signals for estimating saliency features in videos.
Liang, Zhen; Hamada, Yasuyuki; Oba, Shigeyuki; Ishii, Shin
2018-05-12
Understanding the functions of the visual system has been one of the major targets in neuroscience for many years. However, the relation between spontaneous brain activity and visual saliency in natural stimuli has yet to be elucidated. In this study, we developed an optimized machine learning-based decoding model to explore possible relationships between electroencephalography (EEG) characteristics and visual saliency. Optimal features were extracted from the EEG signals and from a saliency map computed with an unsupervised saliency model (Tavakoli and Laaksonen, 2017). Subsequently, various unsupervised feature selection/extraction techniques were examined using different supervised regression models. The robustness of the presented model was fully verified by means of a ten-fold or nested cross-validation procedure, and promising results were achieved in the reconstruction of saliency features from the selected EEG characteristics. Through the successful demonstration of using EEG characteristics to predict the real-time saliency distribution in natural videos, we suggest the feasibility of quantifying visual content by measuring brain activity (EEG signals) in real environments, which would facilitate the understanding of cortical involvement in the processing of natural visual stimuli and application development motivated by human visual processing.
New procedures of ergonomics design in a large oil company.
Alhadeff, Cynthia Mossé; Silva, Rosana Fernandes da; Reis, Márcia Sales dos
2012-01-01
This study presents the challenge involved in the negotiation and construction of a standard process in a major petroleum company to guide the implementation of ergonomic studies in the development of projects, systemising the implementation of ergonomics design. The standard was created by a multi-disciplinary working group of specialists in ergonomics who work in a number of different areas of the company. The objective was to guide how to undertake ergonomics in all projects, taking into consideration the development of ergonomic appraisals of work. It also established that the whole process, in each project phase, should be accompanied by a specialist in ergonomics. This process, an innovation in the conception of projects at this company, signals a change of culture and for this reason requires broad dissemination throughout the company's several leadership levels, as well as training of professionals in ergonomics design projects. An implementation plan was also prepared and approved by the corporate governance, complementing the proposed challenge. In this way, this major oil company will implement new procedures of ergonomics design to promote the health, safety, and wellbeing of the workforce, besides improving the performance and reliability of its systems and processes.
Real-time windowing in imaging radar using FPGA technique
NASA Astrophysics Data System (ADS)
Ponomaryov, Volodymyr I.; Escamilla-Hernandez, Enrique
2005-02-01
Imaging radar uses high-frequency electromagnetic waves reflected from different objects to estimate their parameters. Pulse compression is a standard signal-processing technique used to minimize the peak transmission power, maximize SNR, and obtain better resolution; it is usually achieved with a matched filter. The side-lobe level in imaging radar can be reduced by applying a weighting function, and many weighting functions are widely used in signal processing applications: Hamming, Hanning, Blackman, Chebyshev, Blackman-Harris, Kaiser-Bessel, etc. Field Programmable Gate Arrays (FPGAs) offer great benefits such as rapid implementation, dynamic reconfiguration, and field programmability, which makes them an attractive alternative to custom integrated circuits. This work demonstrates a reasonably flexible implementation of linear-FM signal generation and pulse compression using Matlab, Simulink, and System Generator. Using an FPGA and this software, we propose a pulse-compression design that applies classical and novel window techniques to reduce the side-lobe level, which improves the detection of small or closely spaced targets in imaging radar. The parallelism an FPGA provides in real-time processing makes the proposed algorithms realizable. The paper also presents experimental results of the proposed windowing procedure in a marine radar with the following parameters: linear FM (chirp) signal; frequency deviation ΔF of 9.375 MHz; pulse width T of 3.2 μs; 800 taps in the matched filter; sampling frequency of 253.125 MHz. Side-lobe levels were reduced in real time, permitting better resolution of small targets.
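The windowed pulse-compression idea can be sketched as follows. This is our own toy comparison at the stated pulse width and frequency deviation, not the authors' FPGA design; the sample rate is reduced to keep the example small, and the guard interval around the main lobe is an assumed value.

```python
import numpy as np

fs = 40e6                                    # sample rate, Hz (toy value)
T, BW = 3.2e-6, 9.375e6                      # pulse width, frequency deviation
t = np.arange(0, T, 1 / fs)                  # 128 samples
chirp = np.exp(2j * np.pi * (0.5 * BW / T) * t ** 2)   # complex linear FM pulse

def compress(sig, window=None):
    """Matched filter with an optionally windowed reference."""
    ref = chirp if window is None else chirp * window
    return np.abs(np.correlate(sig, ref, mode='full'))

def sidelobe_db(y, guard=8):
    """Highest sidelobe relative to the main peak, in dB (more negative is better)."""
    p = int(np.argmax(y))
    side = max(y[:p - guard].max(), y[p + guard + 1:].max())
    return 20 * np.log10(side / y[p])

rect_sll = sidelobe_db(compress(chirp))                        # rectangular
hamm_sll = sidelobe_db(compress(chirp, np.hamming(len(t))))    # Hamming-weighted
```

The Hamming-weighted reference trades a slightly wider main lobe for a clearly lower side-lobe level than the rectangular (unweighted) matched filter, which is the mechanism behind the improved resolution of closely spaced targets claimed above.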
DOT National Transportation Integrated Search
2008-06-01
This report serves as a comprehensive guide to traffic signal timing and documents the tasks completed in association with its : development. The focus of this document is on traffic signal control principles, practices, and procedures. It describes ...
47 CFR 80.327 - Urgency signals and messages.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Urgency signals and messages. 80.327 Section 80... Safety Procedures § 80.327 Urgency signals and messages. (a) The urgency signal indicates that the... vehicle, or the safety of a person. The urgency signal must be sent only on the authority of the master or...
47 CFR 80.327 - Urgency signals and messages.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Urgency signals and messages. 80.327 Section 80... Safety Procedures § 80.327 Urgency signals and messages. (a) The urgency signal indicates that the... vehicle, or the safety of a person. The urgency signal must be sent only on the authority of the master or...
47 CFR 80.327 - Urgency signals and messages.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 5 2012-10-01 2012-10-01 false Urgency signals and messages. 80.327 Section 80... Safety Procedures § 80.327 Urgency signals and messages. (a) The urgency signal indicates that the... vehicle, or the safety of a person. The urgency signal must be sent only on the authority of the master or...
47 CFR 80.327 - Urgency signals and messages.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 5 2013-10-01 2013-10-01 false Urgency signals and messages. 80.327 Section 80... Safety Procedures § 80.327 Urgency signals and messages. (a) The urgency signal indicates that the... vehicle, or the safety of a person. The urgency signal must be sent only on the authority of the master or...
47 CFR 80.327 - Urgency signals and messages.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 5 2014-10-01 2014-10-01 false Urgency signals and messages. 80.327 Section 80... Safety Procedures § 80.327 Urgency signals and messages. (a) The urgency signal indicates that the... vehicle, or the safety of a person. The urgency signal must be sent only on the authority of the master or...
Characterizing Speech Intelligibility in Noise After Wide Dynamic Range Compression.
Rhebergen, Koenraad S; Maalderink, Thijs H; Dreschler, Wouter A
The effects of nonlinear signal processing on speech intelligibility in noise are difficult to evaluate. Often, the effects are examined by comparing speech intelligibility scores with and without processing measured at fixed signal-to-noise ratios (SNRs), or by comparing adaptively measured speech reception thresholds corresponding to 50% intelligibility (SRT50) with and without processing. These outcome measures might not be optimal. Measuring at fixed SNRs can be affected by ceiling or floor effects, because the range of relevant SNRs is not known in advance. The SRT50 is less time consuming and has a fixed performance level (i.e., 50% correct), but it can give a limited view, because we hypothesize that the effect of most nonlinear signal-processing algorithms at the SRT50 cannot be generalized to other points of the psychometric function. In this article, we tested the value of estimating the entire psychometric function. We studied the effect of wide dynamic range compression (WDRC) on speech intelligibility in stationary and in interrupted speech-shaped noise in normal-hearing subjects, using a fast method based on a local linear fitting approach and two adaptive procedures. The measured performance differences between conditions with and without WDRC for the psychometric functions in stationary and interrupted speech-shaped noise show that the effects of WDRC on speech intelligibility are SNR dependent. We conclude that favorable and unfavorable effects of WDRC on speech intelligibility can be missed if the results are presented in terms of SRT50 values only.
Processing of the Liquid Xenon calorimeter's signals for timing measurements
NASA Astrophysics Data System (ADS)
Epshteyn, L. B.; Yudin, Yu V.
2014-09-01
One of the goals of the Cryogenic Magnetic Detector at the Budker Institute of Nuclear Physics SB RAS (Novosibirsk, Russia) is the study of nucleon production in electron-positron collisions near threshold. Neutron-antineutron pair production events can be detected only by the calorimeters. In the barrel calorimeter the antineutron annihilation typically occurs 5 ns or more after the beam crossing. To identify such events it is necessary to measure the time of flight of particles to the LXe calorimeter with an accuracy of about 3 ns. The LXe calorimeter consists of 14 layers of ionization chambers with anode and cathode readout. The duration of charge collection at the anodes is about 4.5 μs, while the required accuracy of the signal arrival time measurement is less than 1/1000 of that. Moreover, the signal shapes differ substantially from event to event, so the signal arrival time is measured in two stages. In the first stage, the signal arrival time is determined to within 1-2 discretization periods, and initial parameter values for the subsequent fitting procedure are calculated. In the second stage, the signal arrival time is determined with the required accuracy by fitting the signal waveform with a template waveform. To implement this, special electronics have been developed that perform waveform digitization and online measurement of signal arrival times and amplitudes.
Anderson, Melinda C; Arehart, Kathryn H; Souza, Pamela E
2018-02-01
Current guidelines for adult hearing aid fittings recommend the use of a prescriptive fitting rationale with real-ear verification that considers the audiogram for the determination of frequency-specific gain and compression ratios for wide dynamic range compression. However, the guidelines lack recommendations for how other common signal-processing features (e.g., noise reduction, frequency lowering, directional microphones) should be considered during the provision of hearing aid fittings and fine-tunings for adult patients. The purpose of this survey was to identify how audiologists make clinical decisions regarding common signal-processing features for hearing aid provision in adults. An online survey was sent to audiologists across the United States. The 22 survey questions addressed four primary topics: demographics of the responding audiologists, factors affecting selection of hearing aid devices, the approaches used in the fitting of signal-processing features, and the strategies used in the fine-tuning of these features. A total of 251 audiologists who provide hearing aid fittings to adults completed the electronically distributed survey. The respondents worked in a variety of settings including private practice, physician offices, university clinics, and hospitals/medical centers. Data analysis was based on a qualitative analysis of the question responses. The survey results for each of the four topic areas (demographics, device selection, hearing aid fitting, and hearing aid fine-tuning) are summarized descriptively. Survey responses indicate that audiologists vary in the procedures they use in fitting and fine-tuning based on the specific feature, such that the approaches used for the fitting of frequency-specific gain differ from those used for other types of features (i.e., compression time constants, frequency lowering parameters, noise reduction strength, directional microphones, feedback management).
Audiologists commonly rely on prescriptive fitting formulas and probe microphone measures for the fitting of frequency-specific gain and rely on manufacturers' default settings and recommendations for both the initial fitting and the fine-tuning of signal-processing features other than frequency-specific gain. The survey results are consistent with a lack of published protocols and guidelines for fitting and adjusting signal-processing features beyond frequency-specific gain. To streamline current practice, a transparent evidence-based tool that enables clinicians to prescribe the setting of other features from individual patient characteristics would be desirable. American Academy of Audiology
Analysis of swept-sine runs during modal identification
NASA Astrophysics Data System (ADS)
Gloth, G.; Sinapius, M.
2004-11-01
Experimental modal analysis of large aerospace structures in Europe nowadays combines the benefits of the very reliable but time-consuming phase resonance method and the application of phase separation techniques evaluating frequency response functions (FRFs). FRFs of a test structure can be determined by a variety of means. Applied excitation signal waveforms include harmonic signals like stepped-sine excitation, periodic signals like multi-sine excitation, transient signals like impulse and swept-sine excitation, and stochastic signals like random noise. The current article focuses on slow swept-sine excitation, which is a good trade-off between the magnitude of the excitation level needed for large aircraft and the testing time. However, recent ground vibration tests (GVTs) showed that reliable modal data from swept-sine test runs depend on proper data processing. The article elucidates the strategy of modal analysis based on swept-sine excitation. The standards for the application of slowly swept sinusoids defined by the International Organization for Standardization in ISO 7626 Part 2 are critically reviewed. The theoretical background of swept-sine testing is expounded with particular emphasis on the transition through structural resonances. The effects of different standard data-processing procedures, such as tracking filters, the fast Fourier transform (FFT), and data reduction via averaging, are investigated with respect to their influence on the FRFs and modal parameters. Particular emphasis is given to FRF distortions evoked by unsuitable data processing. All data processing methods are investigated on a numerical example. Their practical usefulness is demonstrated on test data taken from a recent GVT on a large aircraft. A revision of ISO 7626 Part 2 is suggested regarding the application of slow swept-sine excitation. Recommendations for proper FRF estimation from slow swept-sine excitation are given in order to enable optimised procedures for future modal survey tests of large aerospace structures.
Triple Negative Breast Cancer and Metabolic Regulation
2015-08-01
about 100 cells in mice whose head hair was removed by Nair or equivalent chemical products. We are in the process of performing the imaging with...tumor-bearing mice, whose head hair has been shaved. We also learned that repeated Nair use is detrimental to the mice and are using a small shaver to...gently remove the hair. Tasks 1B (months 1-18). • Conduct signaling and gene analysis outlined in Aim 1 and Figure 1. All reagents and procedures
A Procedure for Measuring Latencies in Brain-Computer Interfaces
Wilson, J. Adam; Mellinger, Jürgen; Schalk, Gerwin; Williams, Justin
2011-01-01
Brain-computer interface (BCI) systems must process neural signals with consistent timing in order to support adequate system performance. Thus, it is important to have the capability to determine whether a particular BCI configuration (i.e., hardware, software) provides adequate timing performance for a particular experiment. This report presents a method of measuring and quantifying different aspects of system timing in several typical BCI experiments across a range of settings, and presents comprehensive measures of expected overall system latency for each experimental configuration. PMID:20403781
A Self-Diagnostic System for the M6 Accelerometer
NASA Technical Reports Server (NTRS)
Flanagan, Patrick M.; Lekki, John
2001-01-01
The design of a Self-Diagnostic (SD) accelerometer system for the Space Shuttle Main Engine is presented. This retrofit system connects diagnostic electronic hardware and software to the current M6 accelerometer system. This paper discusses the general operation of the M6 accelerometer SD system and procedures for developing and evaluating the SD system. Signal processing techniques using M6 accelerometer diagnostic data are explained. Test results include diagnostic data responding to changing ambient temperature, mounting torque and base mounting impedance.
Horizon sensor errors calculated by computer models compared with errors measured in orbit
NASA Technical Reports Server (NTRS)
Ward, K. A.; Hogan, R.; Andary, J.
1982-01-01
Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.
DOT National Transportation Integrated Search
2008-06-01
This report serves as a comprehensive guide to traffic signal timing and documents the tasks completed in association with its development. The focus of this document is on traffic signal control principles, practices, and procedures. It describes th...
Simulation of flashing signal operations.
DOT National Transportation Integrated Search
1982-01-01
Various guidelines that have been proposed for the operation of traffic signals in the flashing mode were reviewed. The use of existing traffic simulation procedures to evaluate flashing signals was examined and a study methodology for simulating and...
Elementary signaling modes predict the essentiality of signal transduction network components
2011-01-01
Background: Understanding how signals propagate through signaling pathways and networks is a central goal in systems biology. Quantitative dynamic models help to achieve this understanding, but are difficult to construct and validate because of the scarcity of known mechanistic details and kinetic parameters. Structural and qualitative analysis is emerging as a feasible and useful alternative for interpreting signal transduction. Results: In this work, we present an integrative computational method for evaluating the essentiality of components in signaling networks. This approach expands an existing signaling network to a richer representation that incorporates the positive or negative nature of interactions and the synergistic behaviors among multiple components. Our method simulates both knockout and constitutive activation of components as node disruptions, and takes into account the possible cascading effects of a node's disruption. We introduce the concept of the elementary signaling mode (ESM), the minimal set of nodes that can perform signal transduction independently. Our method ranks the importance of signaling components by the effects of their perturbation on the ESMs of the network. Validation on several signaling networks describing the immune response of mammals to bacteria, guard cell abscisic acid signaling in plants, and T cell receptor signaling shows that this method can effectively uncover the essentiality of components mediating a signal transduction process and results in strong agreement with the results of Boolean (logical) dynamic models and experimental observations. Conclusions: This integrative method is an efficient procedure for exploratory analysis of large signaling and regulatory networks where dynamic modeling or experimental tests are impractical. Its results serve as testable predictions, provide insights into signal transduction and regulatory mechanisms, and can guide targeted computational or experimental follow-up studies.
The source codes for the algorithms developed in this study can be found at http://www.phys.psu.edu/~ralbert/ESM. PMID:21426566
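In the simplest setting, with no synergistic (AND-type) interactions, an elementary signaling mode reduces to a simple path from an input node to an output node. The following depth-first enumeration is a minimal sketch of that special case; the example graph and node names are assumptions for illustration, and the sketch omits the paper's handling of composite nodes and negative regulation:

```python
def elementary_paths(graph, source, sink):
    """Enumerate the node sets of all simple paths from source to sink.
    In the absence of synergy, each such set is an elementary signaling
    mode: a minimal set of nodes that can transduce the signal on its own."""
    modes, path = [], [source]

    def dfs(node):
        if node == sink:
            modes.append(set(path))  # record one independent route
            return
        for nxt in graph.get(node, []):
            if nxt not in path:      # keep the path simple (no revisits)
                path.append(nxt)
                dfs(nxt)
                path.pop()

    dfs(source)
    return modes
```

A node that appears in every mode is essential in this reduced picture: knocking it out leaves no independent route for the signal.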
Pilly, Praveen K.; Grossberg, Stephen; Seitz, Aaron R.
2009-01-01
Studies of perceptual learning have focused on aspects of learning that are related to early stages of sensory processing. However, conclusions that perceptual learning results in low-level sensory plasticity are controversial, since such learning may also be attributed to plasticity in later stages of sensory processing or in readout from sensory to decision stages, or to changes in high-level central processing. To address this controversy, we developed a novel random dot motion (RDM) stimulus to target motion cells selective to contrast polarity by ensuring the motion direction information arises only from signal dot onsets and not their offsets, and used these stimuli in the paradigm of task-irrelevant perceptual learning (TIPL). In TIPL, learning is achieved in response to a stimulus by subliminally pairing that stimulus with the targets of an unrelated training task. In this manner, we are able to probe learning for an aspect of motion processing thought to be a function of directional V1 simple cells with a learning procedure that dissociates the learned stimulus from the decision processes relevant to the training task. Our results show direction-selective learning for the designated contrast polarity that does not transfer to the opposite contrast polarity. This polarity specificity was replicated in a double training procedure in which subjects were additionally exposed to the opposite polarity. Taken together, these results suggest that TIPL for motion stimuli may occur at the stage of directional V1 simple cells. Finally, a theoretical explanation is provided to understand the data. PMID:19800358
Ex vivo determination of chewing patterns using FBG and artificial neural networks
NASA Astrophysics Data System (ADS)
Karam, L. Z.; Pegorini, V.; Pitta, C. S. R.; Assmann, T. S.; Cardoso, R.; Kalinowski, H. J.; Silva, J. C. C.
2014-05-01
This paper reports the experimental procedures performed on a bovine head for the determination of chewing patterns during the mastication process. Mandible movements during chewing were simulated either by using two plasticine materials with different textures or without any material. Fibre Bragg grating sensors were fixed in the jaw to monitor the biomechanical forces involved in the chewing process. The acquired sensor signals fed the input of an artificial neural network aiming at the classification of the measured chewing patterns for each material used in the experiment. The results obtained from the simulation of the chewing process presented different patterns for the different textures of plasticine, resulting in the determination of three chewing patterns with a classification error of 5%.
Miyazaki, Shinsuke; Watanabe, Tomonori; Kajiyama, Takatsugu; Iwasawa, Jin; Ichijo, Sadamitsu; Nakamura, Hiroaki; Taniguchi, Hiroshi; Hirao, Kenzo; Iesaka, Yoshito
2017-12-01
Atrial fibrillation ablation is associated with substantial risks of silent cerebral events (SCEs) or silent cerebral lesions. We investigated which procedural processes during cryoballoon procedures carried a risk. Forty paroxysmal atrial fibrillation patients underwent pulmonary vein isolation using second-generation cryoballoons with a single 28-mm balloon and 3-minute freeze techniques. Microembolic signals (MESs) were monitored by transcranial Doppler throughout all procedures. Brain magnetic resonance imaging was obtained pre- and post-procedure in 34 patients (85.0%). Of 158 pulmonary veins, 152 (96.2%) were isolated using cryoablation, and 6 required touch-up radiofrequency ablation. A mean of 5.0±1.2 cryoballoon applications was delivered, and the left atrial dwell time was 76.7±22.4 minutes. The total MES count per procedure was 522 (426-626). Left atrial access and Flexcath sheath insertion generated 25 (11-44) and 34 (24-53) MESs, respectively. Using radiofrequency ablation for transseptal access increased the MES count during transseptal punctures. During cryoapplications, MES counts were greatest during first applications (117 [81-157]), especially after balloon stretch/deflations (43 [21-81]). Pre- and post-pulmonary vein potential mapping with Lasso catheters generated 57 (21-88) and 61 (36-88) MESs, respectively. Reinsertion of once-withdrawn cryoballoons and subsequent applications produced 205 (156-310) MESs. Touch-up ablation generated 32 (19-62) MESs, whereas electric cardioversion generated no MESs. SCEs and silent cerebral lesions were detected in 11 (32.3%) and 4 (11.7%) patients, respectively. The patients with SCEs were older than those without; however, there were no significant factors associated with SCEs. A significant number of MESs and SCE/silent cerebral lesion occurrences were observed during second-generation cryoballoon ablation procedures.
MESs were recorded during a variety of steps throughout the procedure; however, the majority occurred during phases with a high probability of gaseous emboli. © 2017 American Heart Association, Inc.
Ortega, Isabel Cristina Muñoz; Valdivieso, Alher Mauricio Hernández; Lopez, Joan Francesc Alonso; Villanueva, Miguel Ángel Mañanas; Lopez, Luis Horacio Atehortúa
2017-01-01
Objective: The aim of this pilot study was to evaluate the feasibility of surface electromyographic signal-derived indexes for the prediction of weaning outcomes among mechanically ventilated subjects after cardiac surgery. Methods: A sample of 10 postsurgical adult subjects who received cardiovascular surgery and did not meet the criteria for early extubation were included. Surface electromyographic signals from the diaphragm and ventilatory variables were recorded during the weaning process, with the moment determined by the medical staff according to their expertise. Several indexes of respiratory muscle expenditure from surface electromyography using linear and non-linear processing techniques were evaluated. Two groups were compared: successfully and unsuccessfully weaned patients. Results: The obtained indexes allow estimation of the diaphragm activity of each subject, showing a correlation between high expenditure and weaning test failure. Conclusion: Surface electromyography is becoming a promising procedure for assessing the state of mechanically ventilated patients, even in complex situations such as those that involve a patient after cardiovascular surgery. PMID:28977261
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nugraha, Andri Dian; Adisatrio, Philipus Ronnie
2013-09-09
Seismic refraction surveying is a geophysical method useful for imaging the Earth's interior, particularly the near surface. One of the common problems in seismic refraction surveys is weak amplitude due to attenuation at far offsets. This makes it difficult to pick the first refraction arrival, and hence challenging to produce a near-surface image. Seismic interferometry is a technique for manipulating seismic traces to obtain the Green's function between a pair of receivers. One of its uses is improving first refraction arrival quality at far offsets. This research shows that we can estimate physical properties such as seismic velocity and thickness from virtual refraction processing. Virtual refraction can also enhance the far-offset signal amplitude, since a stacking procedure is involved. Our results show that super-virtual refraction processing produces a seismic image with a higher signal-to-noise ratio than the raw seismic image. In the end, the number of reliable first-arrival picks is also increased.
Complex noise suppression using a sparse representation and 3D filtering of images
NASA Astrophysics Data System (ADS)
Kravchenko, V. F.; Ponomaryov, V. I.; Pustovoit, V. I.; Palacios-Enriquez, A.
2017-08-01
A novel method for the filtering of images corrupted by complex noise composed of randomly distributed impulses and additive Gaussian noise has been substantiated for the first time. The method consists of three main stages: the detection and filtering of pixels corrupted by impulsive noise; subsequent image processing to suppress the additive noise, based on 3D filtering and a sparse representation of signals in a wavelet basis; and a concluding image processing procedure to clean the final image of errors that emerged at the previous stages. A physical interpretation of the filtering method under complex noise conditions is given. A filtering block diagram has been developed in accordance with the novel approach. Simulations of the novel image filtering method have shown an advantage of the proposed filtering scheme in terms of generally recognized criteria, such as the structural similarity index measure and the peak signal-to-noise ratio, and when visually comparing the filtered images.
Shankle, William R; Pooley, James P; Steyvers, Mark; Hara, Junko; Mangrola, Tushar; Reisberg, Barry; Lee, Michael D
2013-01-01
Determining how cognition affects functional abilities is important in Alzheimer disease and related disorders. A total of 280 patients (normal or Alzheimer disease and related disorders) received a total of 1514 assessments using the functional assessment staging test (FAST) procedure and the MCI Screen. A hierarchical Bayesian cognitive processing model was created by embedding a signal detection theory model of the MCI Screen-delayed recognition memory task into a hierarchical Bayesian framework. The signal detection theory model used latent parameters of discriminability (memory process) and response bias (executive function) to predict, simultaneously, recognition memory performance for each patient and each FAST severity group. The observed recognition memory data did not distinguish the 6 FAST severity stages, but the latent parameters completely separated them. The latent parameters were also used successfully to transform the ordinal FAST measure into a continuous measure reflecting the underlying continuum of functional severity. Hierarchical Bayesian cognitive processing models applied to recognition memory data from clinical practice settings accurately translated a latent measure of cognition into a continuous measure of functional severity for both individuals and FAST groups. Such a translation links 2 levels of brain information processing and may enable more accurate correlations with other levels, such as those characterized by biomarkers.
A new OTDR based on probe frequency multiplexing
NASA Astrophysics Data System (ADS)
Lu, Lidong; Liang, Yun; Li, Binglin; Guo, Jinghong; Zhang, Xuping
2013-12-01
Two signal multiplexing methods are proposed and experimentally demonstrated in optical time domain reflectometry (OTDR) for fault location along optical fiber transmission lines with high measurement efficiency. Probe signal multiplexing is obtained by phase modulation, generating multi-frequency and time-sequential-frequency probe pulses. The backscattered Rayleigh light of the multiplexed probe signals is converted to corresponding intermediate frequencies (IFs) by heterodyning with a single-frequency local oscillator (LO). The IFs are simultaneously acquired by a data acquisition card (DAQ) with a sampling rate of 100 Msps, and the acquired data are processed by a digital band-pass filtering (BPF), digital down conversion (DDC), and digital low-pass filtering (LPF) procedure. For each probe frequency of the detected signals, the extraction of the time-domain reflected signal power is performed by a parallel computing method. For a comprehensive performance comparison with conventional coherent OTDR, the potential of the probe frequency multiplexing methods for enhancing dynamic range, spatial resolution, and measurement time is analyzed and discussed. Experimental results show that, by use of the probe frequency multiplexing method, the measurement efficiency of coherent OTDR can be enhanced by nearly 40 times.
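The per-channel processing chain described above (mixing one intermediate frequency to baseband, low-pass filtering, and extracting signal power) can be sketched as follows. The windowed-sinc filter design, tap count, and cutoff are illustrative assumptions, not the parameters of the instrument:

```python
import numpy as np

def ddc_power(samples, f_if, fs, taps=101):
    """Digital down conversion sketch: shift one intermediate frequency (IF)
    to baseband with a complex mixer, low-pass filter the result, and return
    the instantaneous power of that channel."""
    n = np.arange(len(samples))
    baseband = samples * np.exp(-2j * np.pi * f_if * n / fs)  # complex mixer
    # Windowed-sinc low-pass FIR; cutoff chosen well below the IF spacing.
    t = np.arange(taps) - (taps - 1) / 2
    cutoff = 0.02  # normalized to fs; an illustrative choice
    h = np.sinc(2 * cutoff * t) * np.hamming(taps)
    h /= h.sum()   # unit DC gain
    filtered = np.convolve(baseband, h, mode="same")
    return np.abs(filtered) ** 2
```

Running this function once per probe frequency on the same acquired record is the parallelizable step the abstract refers to: each channel's extraction is independent of the others.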
Rahimpour, M; Mohammadzadeh Asl, B
2016-07-01
Monitoring atrial activity via P waves is an important feature of the arrhythmia detection procedure. The aim of this paper is to present an algorithm for P wave detection in normal and some abnormal records by improving existing methods in the field of signal processing. In contrast to classical approaches, which are completely blind to signal dynamics, our proposed method uses the extended Kalman filter, EKF25, to estimate the state variables of the equations modeling the dynamics of an ECG signal. This method is a modified version of the nonlinear dynamical model previously introduced for the generation of synthetic ECG signals and fiducial point extraction in normal ones. It is capable of estimating the separate types of activity of the heart with reasonable accuracy and performs well in the presence of morphological variations in the waveforms and ectopic beats. The MIT-BIH Arrhythmia and QT databases have been used to evaluate the performance of the proposed method. The results show that this method achieves Se = 98.38% and Pr = 96.74% over all records (considering normal and abnormal rhythms).
Vibration sensing in smart machine rotors using internal MEMS accelerometers
NASA Astrophysics Data System (ADS)
Jiménez, Samuel; Cole, Matthew O. T.; Keogh, Patrick S.
2016-09-01
This paper presents a novel topology for enhanced vibration sensing in which wireless MEMS accelerometers embedded within a hollow rotor measure vibration in a synchronously rotating frame of reference. Theoretical relations between rotor-embedded accelerometer signals and the vibration of the rotor in an inertial reference frame are derived. It is thereby shown that functionality as a virtual stator-mounted displacement transducer can be achieved through appropriate signal processing. Experimental tests on a prototype rotor confirm that both magnitude and phase information of synchronous vibration can be measured directly without additional stator-mounted key-phasor sensors. Displacement amplitudes calculated from accelerometer signals will become erroneous at low rotational speeds due to accelerometer zero-g offsets, hence a corrective procedure is introduced. Impact tests are also undertaken to examine the ability of the internal accelerometers to measure transient vibration. A further capability is demonstrated, whereby the accelerometer signals are used to measure rotational speed of the rotor by analysing the signal component due to gravity. The study highlights the extended functionality afforded by internal accelerometers and demonstrates the feasibility of internal sensor topologies, which can provide improved observability of rotor vibration at externally inaccessible rotor locations.
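The last capability mentioned, estimating rotational speed from the gravity component seen by a rotor-embedded accelerometer, can be illustrated with a plain FFT peak search: in the rotating frame, gravity appears as a sinusoid at exactly the rotation frequency. The windowing and peak-picking choices here are assumptions for illustration, not the authors' method:

```python
import numpy as np

def speed_from_gravity(accel, fs):
    """Estimate rotor speed (Hz) from the once-per-revolution gravity
    component in a rotor-embedded accelerometer signal."""
    a = accel - accel.mean()                  # remove the zero-g / DC offset
    spectrum = np.abs(np.fft.rfft(a * np.hanning(len(a))))
    freqs = np.fft.rfftfreq(len(a), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]         # dominant component = rotation rate
```

The zero-g offset removal in the first line mirrors the corrective step the abstract describes: without it, a constant accelerometer bias would leak into the low-frequency bins.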
Backscattering measuring system for optimization of intravenous laser irradiation dose
NASA Astrophysics Data System (ADS)
Rusina, Tatyana V.; Popov, V. D.; Melnik, Ivan S.; Dets, Sergiy M.
1996-11-01
Intravenous laser blood irradiation, an effective method of biostimulation and physiotherapy, is becoming a more popular procedure. Optimal irradiation conditions need to be established individually for each patient. A fiber optic feedback system combined with a conventional intravenous laser irradiation system was developed to control the irradiation process. The system consists of a He-Ne laser, a fiber optic probe, and a signal analyzer. Intravenous blood irradiation was performed in 7 healthy volunteers and 19 patients with different diseases. The in vivo measurements were related to in vitro blood irradiation performed under the same conditions with force-circulated venous blood. Comparison of the temporal variations of backscattered light during all irradiation procedures showed a strong discrepancy in the optical properties of blood in patients with various health disorders from the second procedure onward. The best curative effect was achieved when the intensity of backscattered light was constant for at least five minutes. As a result, the optimal irradiation dose was considered to be equal to a 20-minute exposure of 3 mW He-Ne laser light at the end of the fourth procedure.
A Robust Kalman Framework with Resampling and Optimal Smoothing
Kautz, Thomas; Eskofier, Bjoern M.
2015-01-01
The Kalman filter (KF) is an extremely powerful and versatile tool for signal processing that has been applied extensively in various fields. We introduce a novel Kalman-based analysis procedure that encompasses robustness towards outliers, Kalman smoothing and real-time conversion from non-uniformly sampled inputs to a constant output rate. These features have been mostly treated independently, so that not all of their benefits could be exploited at the same time. Here, we present a coherent analysis procedure that combines the aforementioned features and their benefits. To facilitate utilization of the proposed methodology and to ensure optimal performance, we also introduce a procedure to calculate all necessary parameters. Thereby, we substantially expand the versatility of one of the most widely-used filtering approaches, taking full advantage of its most prevalent extensions. The applicability and superior performance of the proposed methods are demonstrated using simulated and real data. The possible areas of applications for the presented analysis procedure range from movement analysis over medical imaging, brain-computer interfaces to robot navigation or meteorological studies. PMID:25734647
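One of the features combined above, real-time handling of non-uniformly sampled inputs, can be sketched with a constant-velocity Kalman filter whose transition and process-noise matrices are rebuilt from each sample gap. The scalar measurement model and the noise parameters are illustrative assumptions; the outlier rejection and smoothing passes of the full framework are omitted:

```python
import numpy as np

def kalman_nonuniform(times, values, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter for non-uniformly sampled scalar
    measurements: F and Q are recomputed from each gap dt."""
    x = np.array([values[0], 0.0])       # state: [position, velocity]
    P = np.eye(2)                        # state covariance
    H = np.array([[1.0, 0.0]])           # we observe position only
    out = [x[0]]
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        x = F @ x                        # predict
        P = F @ P @ F.T + Q
        y = values[k] - H @ x            # innovation
        S = H @ P @ H.T + r              # innovation covariance
        K = P @ H.T / S                  # Kalman gain
        x = x + (K * y).ravel()          # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

Because F and Q depend only on the gap dt, the same loop also serves for resampling: predicting to arbitrary output times between measurements yields a constant output rate from irregular inputs.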
STATs: An Old Story, Yet Mesmerizing
Abroun, Saeid; Saki, Najmaldin; Ahmadvand, Mohammad; Asghari, Farahnaz; Salari, Fatemeh; Rahim, Fakher
2015-01-01
Signal transducers and activators of transcription (STATs) are latent cytoplasmic transcription factors that have a key role in cell fate. The STAT family comprises seven members, which convey signals from the cell surface to the nucleus upon activation by cytokines and growth factors. Their signaling pathways have diverse biological functions, including roles in cell differentiation, proliferation, development, apoptosis, and inflammation, which place them at the center of a very active area of research. In this review we explain Janus kinase (JAK)/STAT signaling and focus on STAT3, which translocates from the cytoplasm to the nucleus after phosphorylation. This process controls fundamental biological processes by regulating nuclear genes controlling cell proliferation, survival, and development. In some hematopoietic disorders and cancers, overexpression and activation of STAT3 result in high proliferation, suppression of cell differentiation, and inhibition of cell maturation. This article focuses on STAT3 and its role in malignancy, in addition to the role of microRNAs (miRNAs) in STAT3 activation in certain cancers. PMID:26464811
NASA Technical Reports Server (NTRS)
Rino, C. L.; Livingston, R. C.; Whitney, H. E.
1976-01-01
This paper presents an analysis of ionospheric scintillation data which shows that the underlying statistical structure of the signal can be accurately modeled by the additive complex Gaussian perturbation predicted by the Born approximation in conjunction with an application of the central limit theorem. By making use of this fact, it is possible to estimate the in-phase, phase quadrature, and cophased scattered power by curve fitting to measured intensity histograms. By using this procedure, it is found that typically more than 80% of the scattered power is in phase quadrature with the undeviated signal component. Thus, the signal is modeled by a Gaussian, but highly non-Rician process. From simultaneous UHF and VHF data, only a weak dependence of this statistical structure on changes in the Fresnel radius is deduced. The signal variance is found to have a nonquadratic wavelength dependence. It is hypothesized that this latter effect is a subtle manifestation of locally homogeneous irregularity structures, a mathematical model proposed by Kolmogorov (1941) in his early studies of incompressible fluid turbulence.
Improving Small Signal Stability through Operating Point Adjustment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu; Zhou, Ning; Tuffner, Francis K.
2010-09-30
ModeMeter techniques for real-time small signal stability monitoring continue to mature, and more and more phasor measurements are available in power systems. It has come to the stage to bring modal information into real-time power system operation. This paper proposes to establish a procedure for Modal Analysis for Grid Operations (MANGO). Complementary to PSSs and other traditional modulation-based controls, MANGO aims to provide suggestions, such as increasing generation or decreasing load, for operators to mitigate low-frequency oscillations. Different from modulation-based control, the MANGO procedure proactively maintains adequate damping at all times, instead of reacting to disturbances when they occur. The effect of operating points on small signal stability is presented in this paper. Implementation with existing operating procedures is discussed. Several approaches for modal sensitivity estimation are investigated to associate modal damping with operating parameters. The effectiveness of the MANGO procedure is confirmed through simulation studies of several test systems.
Surgical evaluation of candidates for cochlear implants
NASA Technical Reports Server (NTRS)
Black, F. O.; Lilly, D. J.; Fowler, L. P.; Stypulkowski, P. H.
1987-01-01
The customary presentation of surgical procedures to patients in the United States consists of discussions on alternative treatment methods, risks of the procedure(s) under consideration, and potential benefits for the patient. Because the contents of the normal speech signal have not been defined in a way that permits a surgeon systematically to provide alternative auditory signals to a deaf patient, the burden is placed on the surgeon to make an arbitrary selection of candidates and available devices for cochlear prosthetic implantation. In an attempt to obtain some information regarding the ability of a deaf patient to use electrical signals to detect and understand speech, the Good Samaritan Hospital and Neurological Sciences Institute cochlear implant team has routinely performed tympanotomies using local anesthesia and has positioned temporary electrodes onto the round windows of implant candidates. The purpose of this paper is to review our experience with this procedure and to provide some observations that may be useful in a comprehensive preoperative evaluation for totally deaf patients who are being considered for cochlear implantation.
Real-time implementation of second generation of audio multilevel information coding
NASA Astrophysics Data System (ADS)
Ali, Murtaza; Tewfik, Ahmed H.; Viswanathan, V.
1994-03-01
This paper describes a real-time implementation of a novel wavelet-based audio compression method. The method is based on the discrete wavelet transform (DWT) representation of signals. A bit allocation procedure is used to allocate bits to the transform coefficients in an adaptive fashion. The bit allocation procedure has been designed to take advantage of the masking effect in human hearing; it minimizes the number of bits required to represent each frame of audio signals at a fixed distortion level. The real-time implementation provides almost transparent compression of monophonic CD-quality audio signals (sampled at 44.1 kHz and quantized using 16 bits/sample) at bit rates of 64-78 kbits/s. Our implementation uses two ASPI Elf boards, each of which is built around a TI TMS320C31 DSP chip. The time required for encoding a mono CD signal is about 92 percent of real time, and that for decoding about 61 percent.
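The adaptive bit allocation step can be sketched in a few lines. Below is a hypothetical, numpy-only illustration: a plain Haar filter bank stands in for the paper's DWT, and a greedy energy-based rule stands in for its psychoacoustic masking model (all function names are ours, not the authors'):

```python
import numpy as np

def haar_subbands(x, levels=3):
    """One-dimensional Haar DWT: returns [detail_1, ..., detail_L, approx]."""
    subbands = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        subbands.append((even - odd) / np.sqrt(2))  # detail coefficients
        a = (even + odd) / np.sqrt(2)               # approximation
    subbands.append(a)
    return subbands

def allocate_bits(subbands, total_bits):
    """Greedy allocation: each extra bit goes to the subband with the
    largest remaining quantization error (energy drops 4x, i.e. 6 dB,
    per added bit under the usual high-rate approximation)."""
    energy = np.array([np.mean(b**2) + 1e-12 for b in subbands])
    bits = np.zeros(len(subbands), dtype=int)
    for _ in range(total_bits):
        err = energy / 4.0**bits
        bits[int(np.argmax(err))] += 1
    return bits

rng = np.random.default_rng(0)
x = np.sin(2*np.pi*0.05*np.arange(1024)) + 0.05*rng.standard_normal(1024)
bands = haar_subbands(x, levels=3)
bits = allocate_bits(bands, total_bits=24)
```

A real codec would drive the allocation from a masking threshold rather than raw subband energy, but the adaptive, per-frame character of the procedure is the same.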
NASA Technical Reports Server (NTRS)
Batcher, K. E.; Eddey, E. E.; Faiss, R. O.; Gilmore, P. A.
1981-01-01
The processing of synthetic aperture radar (SAR) signals using the massively parallel processor (MPP) is discussed. The fast Fourier transform convolution procedures employed in the algorithms are described. The MPP architecture comprises an array unit (ARU) which processes arrays of data; an array control unit which controls the operation of the ARU and performs scalar arithmetic; a program and data management unit which controls the flow of data; and a unique staging memory (SM) which buffers and permutes data. The ARU contains a 128 by 128 array of bit-serial processing elements (PEs). Two-by-four subarrays of PEs are packaged in a custom VLSI HCMOS chip. The staging memory is a large multidimensional-access memory which buffers and permutes data flowing within the system. Efficient SAR processing is achieved via ARU communication paths and SM data manipulation. Real-time processing capability can be realized via a multiple-ARU, multiple-SM configuration.
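The FFT convolution procedure at the heart of such SAR compression is standard: zero-pad, multiply spectra, inverse-transform. A minimal serial sketch in numpy (illustrative only; the MPP implementation is massively parallel and bit-serial):

```python
import numpy as np

def fft_convolve(a, b):
    """Linear convolution via FFT, as used in SAR range/azimuth compression.
    Zero-padding to the full output length avoids circular wrap-around."""
    n = len(a) + len(b) - 1
    nfft = 1 << (n - 1).bit_length()   # next power of two for the FFT
    out = np.fft.ifft(np.fft.fft(a, nfft) * np.fft.fft(b, nfft))
    return out[:n]

sig = np.array([1.0, 2.0, 3.0, 4.0])   # toy received signal
ref = np.array([1.0, 0.0, -1.0])       # toy reference (matched) function
y = fft_convolve(sig, ref)
```

The result matches direct convolution; the payoff of the FFT route is the O(N log N) cost for the long reference functions used in SAR.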
General simulation algorithm for autocorrelated binary processes.
Serinaldi, Francesco; Lombardo, Federico
2017-02-01
The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
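A drastically simplified sketch of the paradigm (simulate a parent continuous process, then map it to the target binary process) is shown below. Here the parent is a Gaussian AR(1) process rather than the authors' beta-distributed transition probabilities, and no IAAFT iteration is performed; it only illustrates why working on the parent process sidesteps the discreteness problem:

```python
import numpy as np
from scipy.stats import norm

def binary_with_acf(n, rho, p=0.5, seed=0):
    """Binary sequence obtained by thresholding a parent Gaussian AR(1)
    process with lag-1 correlation rho. The threshold is chosen at the
    Gaussian quantile so that P(x = 1) = p."""
    rng = np.random.default_rng(seed)
    g = np.empty(n)
    g[0] = rng.standard_normal()
    innov = rng.standard_normal(n) * np.sqrt(1 - rho**2)
    for t in range(1, n):
        g[t] = rho * g[t-1] + innov[t]   # parent continuous process
    return (g > norm.ppf(1 - p)).astype(int)

x = binary_with_acf(200_000, rho=0.8)
```

The binary output inherits a positive, exponentially decaying autocorrelation from the parent (lag-1 value (2/pi)*arcsin(rho) for p = 0.5), without ever manipulating the discrete marginal directly.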
Data reduction procedures for traffic signal systems performance measures.
DOT National Transportation Integrated Search
2011-01-01
Traffic signal systems represent a substantial component of the highway transportation network in the United States. It is challenging for most agencies to find engineering resources to properly update signal policies and timing plans to accommodate ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaut, Arkadiusz; Babak, Stanislav; Krolak, Andrzej
We present data analysis methods used in the detection and estimation of parameters of gravitational-wave signals from the white dwarf binaries in the mock LISA data challenge. Our main focus is on the analysis of challenge 3.1, where the gravitational-wave signals from more than 6×10^7 Galactic binaries were added to the simulated Gaussian instrumental noise. The majority of the signals at low frequencies are not resolved individually. The confusion between the signals is strongly reduced at frequencies above 5 mHz. Our basic data analysis procedure is the maximum likelihood detection method. We filter the data through the template bank at the first step of the search, then we refine parameters using the Nelder-Mead algorithm, we remove the strongest signal found, and we repeat the procedure. We reliably detect, and accurately estimate the parameters of, more than ten thousand signals from white dwarf binaries.
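The detect-refine-subtract loop can be illustrated on a toy problem. In the hypothetical sketch below, plain sinusoids in Gaussian noise stand in for galactic-binary waveforms, one FFT bin per frequency stands in for the template bank, and Nelder-Mead refines the frequency before the strongest signal is removed:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.arange(4096) / 4096.0                     # 1 s of data -> bins in Hz
# two simulated "sources" buried in unit-variance Gaussian noise
data = (2.0*np.sin(2*np.pi*310.3*t + 0.4)
        + 1.5*np.sin(2*np.pi*97.8*t + 1.1)
        + rng.standard_normal(t.size))

def cost(fvec, d):
    """Amplitude and phase enter the model linearly, so project them out
    and minimize the residual power over frequency alone (equivalent to
    maximizing the likelihood for Gaussian noise)."""
    f = fvec[0]
    X = np.column_stack([np.sin(2*np.pi*f*t), np.cos(2*np.pi*f*t)])
    coef = np.linalg.lstsq(X, d, rcond=None)[0]
    return np.sum((d - X @ coef)**2)

residual = data.copy()
found = []
for _ in range(2):
    # coarse "template bank": one template per FFT bin
    f0 = float(np.argmax(np.abs(np.fft.rfft(residual))[1:]) + 1)
    # refine with Nelder-Mead, starting inside the main spectral lobe
    res = minimize(cost, x0=[f0], args=(residual,), method="Nelder-Mead",
                   options={"initial_simplex": [[f0 - 0.5], [f0 + 0.5]]})
    f = res.x[0]
    # subtract the strongest signal found and repeat
    X = np.column_stack([np.sin(2*np.pi*f*t), np.cos(2*np.pi*f*t)])
    residual -= X @ np.linalg.lstsq(X, residual, rcond=None)[0]
    found.append(f)
```

The real search replaces sinusoids with modulated binary waveforms and frequency with a multi-dimensional parameter space, but the iterative structure is the same.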
Measurement of splanchnic photoplethysmographic signals using a new reflectance fiber optic sensor
NASA Astrophysics Data System (ADS)
Hickey, Michelle; Samuels, Neal; Randive, Nilesh; Langford, Richard M.; Kyriacou, Panayiotis A.
2010-03-01
Splanchnic organs are particularly vulnerable to hypoperfusion. Currently, there is no technique that allows for the continuous estimation of splanchnic blood oxygen saturation (SpO2). As a preliminary to developing a suitable splanchnic SpO2 sensor, a new reflectance fiber optic photoplethysmographic (PPG) sensor and processing system are developed. An experimental procedure to examine the effect of fiber source detector separation distance on acquired PPG signals is carried out before finalizing the sensor design. PPG signals are acquired from four volunteers for separation distances of 1 to 8 mm. The separation range of 3 to 6 mm provides the best quality PPG signals with large amplitudes and the highest signal-to-noise ratios (SNRs). Preliminary calculation of SpO2 shows that distances of 3 and 4 mm provide the most realistic values. Therefore, it is suggested that the separation distance in the design of a fiber optic reflectance pulse oximeter be in the range of 3 to 4 mm. Preliminary PPG signals from various splanchnic organs and the periphery are obtained from six anaesthetized patients. The normalized amplitudes of the splanchnic PPGs are, on average, approximately the same as those obtained simultaneously from the periphery. These observations suggest that fiber optic pulse oximetry may be a valid monitoring technique for splanchnic organs.
ERIC Educational Resources Information Center
Hudon, Carol; Belleville, Sylvie; Gauthier, Serge
2009-01-01
This study used the Remember/Know (R/K) procedure combined with signal detection analyses to assess recognition memory in 20 elders with amnestic mild cognitive impairment (aMCI), 10 patients with probable Alzheimer's disease (AD) as well as matched healthy older adults. Signal detection analyses first indicated that aMCI and control participants…
30 CFR 56.14219 - Brakeman signals.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Brakeman signals. 56.14219 Section 56.14219... Safety Practices and Operational Procedures § 56.14219 Brakeman signals. When a train is under the direction of a brakeman and the train operator cannot clearly recognize the brakeman's signals, the train...
30 CFR 56.14219 - Brakeman signals.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Brakeman signals. 56.14219 Section 56.14219... Safety Practices and Operational Procedures § 56.14219 Brakeman signals. When a train is under the direction of a brakeman and the train operator cannot clearly recognize the brakeman's signals, the train...
30 CFR 57.14219 - Brakeman signals.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Brakeman signals. 57.14219 Section 57.14219... Equipment Safety Practices and Operational Procedures § 57.14219 Brakeman signals. When a train is under the direction of a brakeman and the train operator cannot clearly recognize the brakeman's signals, the train...
30 CFR 57.14219 - Brakeman signals.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Brakeman signals. 57.14219 Section 57.14219... Equipment Safety Practices and Operational Procedures § 57.14219 Brakeman signals. When a train is under the direction of a brakeman and the train operator cannot clearly recognize the brakeman's signals, the train...
DOT National Transportation Integrated Search
1968-10-01
Signal lights are presented to an observer as flashes with finite duration; thus, the effect of flash duration on the apparent brightness of the signal is important. The relation of effective signal brightness to flash duration and luminance finds ex...
30 CFR 56.14219 - Brakeman signals.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Brakeman signals. 56.14219 Section 56.14219... Safety Practices and Operational Procedures § 56.14219 Brakeman signals. When a train is under the direction of a brakeman and the train operator cannot clearly recognize the brakeman's signals, the train...
30 CFR 56.14219 - Brakeman signals.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Brakeman signals. 56.14219 Section 56.14219... Safety Practices and Operational Procedures § 56.14219 Brakeman signals. When a train is under the direction of a brakeman and the train operator cannot clearly recognize the brakeman's signals, the train...
30 CFR 56.14219 - Brakeman signals.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Brakeman signals. 56.14219 Section 56.14219... Safety Practices and Operational Procedures § 56.14219 Brakeman signals. When a train is under the direction of a brakeman and the train operator cannot clearly recognize the brakeman's signals, the train...
30 CFR 57.14219 - Brakeman signals.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Brakeman signals. 57.14219 Section 57.14219... Equipment Safety Practices and Operational Procedures § 57.14219 Brakeman signals. When a train is under the direction of a brakeman and the train operator cannot clearly recognize the brakeman's signals, the train...
30 CFR 57.14219 - Brakeman signals.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Brakeman signals. 57.14219 Section 57.14219... Equipment Safety Practices and Operational Procedures § 57.14219 Brakeman signals. When a train is under the direction of a brakeman and the train operator cannot clearly recognize the brakeman's signals, the train...
30 CFR 57.14219 - Brakeman signals.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Brakeman signals. 57.14219 Section 57.14219... Equipment Safety Practices and Operational Procedures § 57.14219 Brakeman signals. When a train is under the direction of a brakeman and the train operator cannot clearly recognize the brakeman's signals, the train...
Concepts for on-board satellite image registration, volume 1
NASA Technical Reports Server (NTRS)
Ruedger, W. H.; Daluge, D. R.; Aanstoos, J. V.
1980-01-01
The NASA-NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. One very specific area of interest is the preprocessing required to register imaging sensor data which have been distorted by anomalies in subsatellite-point position and/or attitude control. The concepts and considerations involved in using state-of-the-art positioning systems such as the Global Positioning System (GPS) in concert with state-of-the-art attitude stabilization and/or determination systems to provide the required registration accuracy are discussed with emphasis on assessing the accuracy to which a given image picture element can be located and identified, determining those algorithms required to augment the registration procedure and evaluating the technology impact on performing these procedures on-board the satellite.
Signal-to-noise ratio application to seismic marker analysis and fracture detection
NASA Astrophysics Data System (ADS)
Xu, Hui-Qun; Gui, Zhi-Xian
2014-03-01
Seismic data with high signal-to-noise ratios (SNRs) are useful in reservoir exploration. To obtain high-SNR seismic data, significant effort is required to achieve noise attenuation in seismic data processing, which is costly in material, human, and financial resources. We introduce a method for improving the SNR of seismic data. The SNR is calculated by using the frequency-domain method. Furthermore, we optimize and discuss the critical parameters and calculation procedure. We applied the proposed method to real data and found that the SNR is high in the seismic marker and low in the fracture zone. Consequently, this can be used to extract detailed information about fracture zones that are inferred by structural analysis but not observed in conventional seismic data.
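A frequency-domain SNR of this general kind compares mean spectral power in a signal band with that in a noise-only band. The sketch below is illustrative; the band edges and the synthetic trace are our assumptions, not the paper's optimized parameters:

```python
import numpy as np

def snr_db(trace, fs, signal_band, noise_band):
    """Frequency-domain SNR: ratio of mean power spectral density in a
    signal band to that in a noise-only band (band edges in Hz)."""
    freqs = np.fft.rfftfreq(trace.size, d=1.0/fs)
    psd = np.abs(np.fft.rfft(trace))**2 / trace.size
    sig = psd[(freqs >= signal_band[0]) & (freqs <= signal_band[1])].mean()
    noi = psd[(freqs >= noise_band[0]) & (freqs <= noise_band[1])].mean()
    return 10.0 * np.log10(sig / noi)

fs = 1000.0                               # 1 kHz sampling, 2 s trace
t = np.arange(2000) / fs
trace = (np.sin(2*np.pi*30.0*t)           # "reflection" energy near 30 Hz
         + 0.1*np.random.default_rng(2).standard_normal(t.size))
snr = snr_db(trace, fs, signal_band=(25, 35), noise_band=(200, 400))
```

Computed trace by trace, such a measure yields exactly the behaviour the abstract describes: high values along a coherent marker, low values where the signal is disrupted by fracturing.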
Real-time, in situ monitoring of nanoporation using electric field-induced acoustic signal
NASA Astrophysics Data System (ADS)
Zarafshani, Ali; Faiz, Rowzat; Samant, Pratik; Zheng, Bin; Xiang, Liangzhong
2018-02-01
The use of nanoporation in reversible or irreversible electroporation, e.g. cancer ablation, is rapidly growing. This technique uses an ultra-short and intense electric pulse to increase the membrane permeability, allowing non-permeant drugs and genes access to the cytosol via nanopores in the plasma membrane. It is vital to create a real-time in situ monitoring technique to characterize this process and support successful electroporation procedures in cancer treatment. All currently suggested monitoring techniques for electroporation address pre- and post-stimulation exposure, with no real-time monitoring during electric field exposure. This study was aimed at developing an innovative technology for real-time in situ monitoring of electroporation based on the acoustic emissions induced by the electric field exposure itself. The acoustic signals are the result of the electric field, and can themselves be used in real time to characterize the process of electroporation. We varied the electric field distribution by varying the electric pulse duration from 1 μs to 100 ns and the voltage intensity from 0 to 1.2 kV to energize two electrodes in a bi-polar set-up. An ultrasound transducer was used to collect acoustic signals around the subject under test. We determined the relative location of the acoustic signals by varying the position of the electrodes relative to the transducer and varying the electric field distribution between the electrodes to capture a variety of acoustic signals. Therefore, the electric field that is utilized in the nanoporation technique also produces a series of corresponding acoustic signals. This offers a novel imaging technique for the real-time in situ monitoring of electroporation that may directly improve treatment efficiency.
Improving the geological interpretation of magnetic and gravity satellite anomalies
NASA Technical Reports Server (NTRS)
Hinze, William J.; Braile, Lawrence W.; Vonfrese, Ralph R. B.
1987-01-01
Quantitative analysis of the geologic component of observed satellite magnetic and gravity fields requires accurate isolation of the geologic component of the observations, theoretically sound and viable inversion techniques, and integration of collateral, constraining geologic and geophysical data. A number of significant contributions were made which make quantitative analysis more accurate. These include procedures for: screening and processing orbital data for lithospheric signals based on signal repeatability and wavelength analysis; producing accurate gridded anomaly values at constant elevations from the orbital data by three-dimensional least squares collocation; increasing the stability of equivalent point source inversion and criteria for the selection of the optimum damping parameter; enhancing inversion techniques through an iterative procedure based on the superposition theorem of potential fields; and modeling efficiently regional-scale lithospheric sources of satellite magnetic anomalies. In addition, these techniques were utilized to investigate regional anomaly sources of North and South America and India and to provide constraints to continental reconstruction. Since the inception of this research study, eleven papers were presented with associated published abstracts, three theses were completed, four papers were published or accepted for publication, and an additional manuscript was submitted for publication.
Plume interference with space shuttle range safety signals
NASA Technical Reports Server (NTRS)
Boynton, F. P.; Rajaseknar, P. S.
1979-01-01
The computational procedure for signal propagation in the presence of an exhaust plume is presented. Comparisons with well-known analytic diffraction solutions indicate that accuracy suffers when mesh spacing is inadequate to resolve the first unobstructed Fresnel zone at the plume edge. Revisions to the procedure to improve its accuracy without requiring very large arrays are discussed. Comparisons to field measurements during a shuttle solid rocket motor (SRM) test firing suggest that the plume is sharper edged than one would expect on the basis of time averaged electron density calculations. The effects, both of revisions to the computational procedure and of allowing for a sharper plume edge, are to raise the signal level near tail aspect. The attenuation levels then predicted are still high enough to be of concern near SRM burnout for northerly launches of the space shuttle.
Subconscious detection of threat as reflected by an enhanced response bias.
Windmann, S; Krüger, T
1998-12-01
Neurobiological and cognitive models of unconscious information processing suggest that subconscious threat detection can lead to cognitive misinterpretations and false alarms, while conscious processing is assumed to be perceptually and conceptually accurate and unambiguous. Furthermore, clinical theories suggest that pathological anxiety results from a crude preattentive warning system predominating over more sophisticated and controlled modes of processing. We investigated the hypothesis that subconscious detection of threat in a cognitive task is reflected by enhanced "false signal" detection rather than by selectively enhanced discrimination of threat items in 30 patients with panic disorder and 30 healthy controls. We presented a tachistoscopic word-nonword discrimination task and a subsequent recognition task and analyzed the data by means of process-dissociation procedures. In line with our expectations, subjects of both groups showed more false signal detection to threat than to neutral stimuli as indicated by an enhanced response bias, whereas indices of discriminative sensitivity did not show this effect. In addition, patients with panic disorder showed a generally enhanced response bias in comparison to healthy controls. They also seemed to have processed the stimuli less elaborately and less differentially. Results are consistent with the assumption that subconscious threat detection can lead to misrepresentations of stimulus significance and that pathological anxiety is characterized by a hyperactive preattentive alarm system that is insufficiently controlled by higher cognitive processes. Copyright 1998 Academic Press.
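Response bias and discriminative sensitivity of the kind reported here are conventionally separated with the equal-variance Gaussian signal detection indices d′ and c. The sketch below uses hypothetical counts and the standard formulas (with a log-linear correction against extreme rates); it illustrates the distinction, not the authors' exact process-dissociation analysis:

```python
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian signal detection indices: d' (discriminative
    sensitivity) and c (response criterion; lower/negative c means a more
    liberal bias, i.e. more 'signal' responses overall)."""
    # log-linear correction keeps hit/false-alarm rates away from 0 and 1
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(hr) - norm.ppf(far)
    criterion = -0.5 * (norm.ppf(hr) + norm.ppf(far))
    return d_prime, criterion

# hypothetical counts: more "signal" responses (hits AND false alarms) for
# threat items -> a more liberal criterion, with sensitivity in a similar range
d_neutral, c_neutral = sdt_indices(30, 10, 8, 32)
d_threat, c_threat = sdt_indices(33, 7, 14, 26)
```

A pattern of c_threat < c_neutral with comparable d′ values is exactly what "enhanced false signal detection without selectively enhanced discrimination" looks like in these indices.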
Optimal spacing between transmitting and receiving optical fibres in reflectance pulse oximetry
NASA Astrophysics Data System (ADS)
Hickey, M.; Kyriacou, P. A.
2007-10-01
Splanchnic ischaemia can ultimately lead to cellular hypoxia and necrosis, and may well contribute to the development of multiple organ failure and increased mortality. Therefore, it is of utmost importance to monitor abdominal organ blood oxygen saturation (SpO2). Pulse oximetry has been widely accepted as a reliable method for monitoring oxygen saturation of arterial blood. Animal studies have also shown it to be effective in the monitoring of blood oxygen saturation in the splanchnic region. However, commercially available pulse oximeter probes are not suitable for the continuous assessment of SpO2 in the splanchnic region. Therefore, there is a need for a new sensor technology that will allow the continuous measurement of SpO2 in the splanchnic area pre-operatively, operatively and post-operatively. For this purpose, a new fibre optic sensor and processing system utilising the principle of reflectance pulse oximetry has been developed. The accuracy in the estimation of SpO2 in pulse oximetry depends on the quality and amplitude of the photoplethysmographic (PPG) signal, and for this reason an experimental procedure was carried out to examine the effect of the source-detector separation distance on the acquired PPG signals, and to ultimately select an optimal separation for the final design of the fibre-optic probe. PPG signals were obtained from the finger for different separation distances between the emitting and detecting fibres. Good quality PPG signals with large amplitudes and high signal-to-noise ratio were detected in the range of 3 mm to 6 mm. At separation distances between 1 mm and 2 mm, PPG signals were erratic with no resemblance to a conventional PPG signal. At separation distances greater than 6 mm, the amplitudes of PPG signals were very small and not appropriate for processing. This investigation indicates the suitability of optical fibres as a new pulse oximetry sensor for estimating blood oxygen saturation (SpO2) in the splanchnic region.
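The SpO2 estimate itself conventionally comes from the "ratio of ratios" of the pulsatile (AC) to non-pulsatile (DC) PPG components at the two wavelengths. A sketch with synthetic red/infrared traces follows; the linear calibration constants are a common textbook approximation, not this group's calibration:

```python
import numpy as np

def spo2_from_ppg(red, infrared):
    """Ratio-of-ratios pulse oximetry estimate from red and infrared PPG
    traces. The 110 - 25*R calibration line is illustrative; real
    oximeters use empirically fitted calibration curves."""
    def ac_dc(x):
        dc = np.mean(x)
        ac = np.ptp(x)            # peak-to-peak pulsatile component
        return ac, dc
    ac_r, dc_r = ac_dc(red)
    ac_ir, dc_ir = ac_dc(infrared)
    R = (ac_r / dc_r) / (ac_ir / dc_ir)
    return 110.0 - 25.0 * R

fs = 100.0
t = np.arange(500) / fs                         # 5 s of PPG at 100 Hz
red = 2.0 + 0.02*np.sin(2*np.pi*1.2*t)          # ~72 bpm pulse
infrared = 2.5 + 0.05*np.sin(2*np.pi*1.2*t)
spo2 = spo2_from_ppg(red, infrared)
```

This dependence on the AC amplitude is why the separation-distance experiment matters: at 1-2 mm or beyond 6 mm the pulsatile component is too distorted or too small for R to be computed reliably.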
Bioactive Molecule Delivery Systems for Dentin-pulp Tissue Engineering.
Shrestha, Suja; Kishen, Anil
2017-05-01
Regenerative endodontic procedures use bioactive molecules (BMs), which are active signaling molecules that initiate and maintain cell responses and interactions. When applied in a bolus form, they may undergo rapid diffusion and denaturation resulting in failure to induce the desired effects on target cells. The controlled release of BMs from a biomaterial carrier is expected to enhance and accelerate functional tissue engineering during regenerative endodontic procedures. This narrative review presents a comprehensive review of different polymeric BM release strategies with relevance to dentin-pulp engineering. Carrier systems designed to allow the preprogrammed release of BMs in a spatial- and temporal-controlled manner would aid in mimicking the natural wound healing process while overcoming some of the challenges faced in clinical translation of regenerative endodontic procedures. Spatial- and temporal-controlled BM release systems have become an exciting option in dentin-pulp tissue engineering; nonetheless, further validation of this concept and knowledge is required for their potential clinical translation. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Chambers, Gloria T.; Meyer, Walter J.; Arceneaux, Lisa L.; Russell, William J.; Seibel, Eric J.; Richards, Todd L.; Sharar, Sam R.; Patterson, David R.
2015-01-01
Introduction: Excessive pain during medical procedures is a widespread problem but is especially problematic during daily wound care of patients with severe burn injuries. Methods: Burn patients report 35-50% reductions in procedural pain while in a distracting immersive virtual reality (VR), and fMRI brain scans show associated reductions in pain-related brain activity during VR. VR distraction appears to be most effective for patients with the highest pain intensity levels. VR is thought to reduce pain by directing patients' attention into the virtual world, leaving less attention available to process incoming neural signals from pain receptors. Conclusions: We review evidence from clinical and laboratory research studies exploring virtual reality analgesia, concentrating primarily on the work ongoing within our group. We briefly describe how VR pain distraction systems have been tailored to the unique needs of burn patients to date, and speculate about how VR systems could be tailored to the needs of other patient populations in the future. PMID:21264690
Stepwise Iterative Fourier Transform: The SIFT
NASA Technical Reports Server (NTRS)
Benignus, V. A.; Benignus, G.
1975-01-01
A program, designed specifically to study the respective effects of some common data problems on results obtained through stepwise iterative Fourier transformation of synthetic data with known waveform composition, was outlined. Included in this group were the problems of gaps in the data, different time-series lengths, periodic but nonsinusoidal waveforms, and noisy (low signal-to-noise) data. Results on sinusoidal data were also compared with results obtained on narrow band noise with similar characteristics. The findings showed that the analytic procedure under study can reliably reduce data in the nature of (1) sinusoids in noise, (2) asymmetric but periodic waves in noise, and (3) sinusoids in noise with substantial gaps in the data. The program was also able to analyze narrow-band noise well, but with increased interpretational problems. The procedure was shown to be a powerful technique for analysis of periodicities, in comparison with classical spectrum analysis techniques. However, informed use of the stepwise procedure nevertheless requires some background of knowledge concerning characteristics of the biological processes under study.
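A stepwise extraction of this general kind can be sketched as follows: at each step, least-squares fit a sinusoid at every candidate frequency using only the observed (possibly gapped) samples, keep the fit that removes the most power, subtract it, and repeat. This is an illustration of the idea, with our own function names and a toy gapped series, not the SIFT program itself:

```python
import numpy as np

def stepwise_fit(t_obs, y, freqs, n_components):
    """Stepwise extraction of periodic components from irregularly
    sampled data. Because the fit is least-squares on the observed
    samples only, gaps in the record are handled naturally."""
    resid = y.astype(float).copy()
    picked = []
    for _ in range(n_components):
        best = None
        for f in freqs:
            X = np.column_stack([np.sin(2*np.pi*f*t_obs),
                                 np.cos(2*np.pi*f*t_obs)])
            coef = np.linalg.lstsq(X, resid, rcond=None)[0]
            rss = np.sum((resid - X @ coef)**2)
            if best is None or rss < best[0]:
                best = (rss, f, X, coef)
        _, f, X, coef = best
        resid -= X @ coef          # remove the dominant component
        picked.append(f)
    return picked, resid

rng = np.random.default_rng(3)
t_full = np.arange(1000) / 100.0               # 10 s nominally at 100 Hz
keep = rng.random(t_full.size) > 0.3           # ~30% of samples missing
t_obs = t_full[keep]
y = (1.0*np.sin(2*np.pi*3.0*t_obs) + 0.5*np.sin(2*np.pi*7.5*t_obs)
     + 0.2*rng.standard_normal(t_obs.size))
picked, resid = stepwise_fit(t_obs, y, np.arange(0.5, 20.0, 0.5), 2)
```

As the abstract cautions, such stepwise procedures remain powerful but not assumption-free: narrow-band noise will also yield "components", so interpretation requires knowledge of the underlying process.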
NASA Astrophysics Data System (ADS)
Pioldi, Fabio; Rizzi, Egidio
2016-08-01
This paper proposes a new output-only element-level system identification and input estimation technique, towards the simultaneous identification of modal parameters, input excitation time history and structural features at the element-level by adopting earthquake-induced structural response signals. The method, named Full Dynamic Compound Inverse Method (FDCIM), releases strong assumptions of earlier element-level techniques, by working with a two-stage iterative algorithm. Jointly, a Statistical Average technique, a modification process and a parameter projection strategy are adopted at each stage to achieve stronger convergence for the identified estimates. The proposed method works in a deterministic way and is completely developed in State-Space form. Further, it does not require continuous- to discrete-time transformations and does not depend on initialization conditions. Synthetic earthquake-induced response signals from different shear-type buildings are generated to validate the implemented procedure, also with noise-corrupted cases. The achieved results provide a necessary condition to demonstrate the effectiveness of the proposed identification method.
Lenggenhager, Bigna; Azevedo, Ruben T; Mancini, Alessandra; Aglioti, Salvatore Maria
2013-10-01
The ultimatum game (UG) is commonly used to study the tension between financial self-interest and social equity motives. Here, we investigated whether experimental exposure to interoceptive signals influences participants' behavior in the UG. Participants were presented with various bodily sounds--i.e., their own heart, another person's heart, or the sound of footsteps--while acting both in the role of responder and proposer. We found that listening to one's own heart sound, compared to the other bodily sounds: (1) increased subjective feelings of unfairness, but not rejection behavior, in response to unfair offers and (2) increased the unfair offers while playing in the proposer role. These findings suggest that heightened feedback of one's own visceral processes may increase a self-centered perspective and drive socioeconomic exchanges accordingly. In addition, this study introduces a valuable procedure to manipulate online the access to interoceptive signals and for exploring the interplay between viscero-sensory information and cognition.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-28
... signal measurement procedures include provisions for the location of the measurement antenna, antenna height, signal measurement method, antenna orientation and polarization, and data recording. 2. On... in the SHVERA NPRM: (1) Which station signals are to be measured; (2) what type of antenna is to be...
Alternative method for determining the constant offset in lidar signal
Vladimir A. Kovalev; Cyle Wold; Alexander Petkov; Wei Min Hao
2009-01-01
We present an alternative method for determining the total offset in lidar signal created by a daytime background-illumination component and electrical or digital offset. Unlike existing techniques, here the signal square-range-correction procedure is initially performed using the total signal recorded by lidar, without subtraction of the offset component. While...
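For contrast, the conventional order of operations (estimate the total offset from the far-range tail, subtract it, then apply the square-range correction) can be sketched as follows. The profile constants are illustrative, and the cited method deliberately departs from this order by range-correcting the total signal first:

```python
import numpy as np

def range_corrected(p, r, offset=None):
    """Square-range correction of a lidar return. The constant offset
    (daytime background illumination plus electronic/digital offset) is
    estimated here from the far-range tail, where the attenuated
    backscatter is negligible and only the offset remains."""
    if offset is None:
        offset = p[-100:].mean()
    return (p - offset) * r**2

# synthetic elastic return: P(r) = C*exp(-2*alpha*r)/r^2 + offset
r = np.linspace(100.0, 10000.0, 2000)           # range gates in m
alpha, C, true_offset = 5e-4, 1e7, 5.0          # extinction, system const.
p = C * np.exp(-2*alpha*r) / r**2 + true_offset
x = range_corrected(p, r)
```

The multiplication by r^2 is what makes the offset so critical: any residual constant grows quadratically with range, which is precisely the error source the alternative method is designed to handle.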
Image Descriptors for Displays
1975-03-01
[Figure-list residue recovered from extraction: (a) signal sampled with composite blanking signal; (c) signal in (a) formed into composite video signal; power spectral density of the signals shown, Curve A: composite video signal formed from 20 Hz to 2.5 MHz band-limited, Gaussian white noise; Curve B: average spectrum of off-the-air video signals.] Our experimental procedure was the following: off-the-air television signals broadcast on VHF channels were analyzed with a commercially…
Optical head tracking for functional magnetic resonance imaging using structured light.
Zaremba, Andrei A; MacFarlane, Duncan L; Tseng, Wei-Che; Stark, Andrew J; Briggs, Richard W; Gopinath, Kaundinya S; Cheshkov, Sergey; White, Keith D
2008-07-01
An accurate motion-tracking technique is needed to compensate for subject motion during functional magnetic resonance imaging (fMRI) procedures. Here, a novel approach to motion metrology is discussed. A structured light pattern specifically coded for digital signal processing is positioned onto a fiduciary of the patient. As the patient undergoes spatial transformations in 6 DoF (degrees of freedom), a high-resolution CCD camera captures successive images for analysis on a computing platform. A high-speed image processing algorithm is used to calculate spatial transformations in a time frame commensurate with patient movements (10-100 ms) and with a precision of at least 0.5 μm for translations and 0.1 deg for rotations.
Removing damped sinusoidal vibrations in adaptive optics systems using a DFT-based estimation method
NASA Astrophysics Data System (ADS)
Kania, Dariusz
2017-06-01
The problem of vibration rejection in adaptive optics systems is still present in publications. These undesirable signals emerge because of shaking of the system structure, the tracking process, etc., and they are usually damped sinusoidal signals. There are some mechanical solutions to reduce the signals, but they are not very effective. Among software solutions, adaptive methods are very popular. An AVC (Adaptive Vibration Cancellation) method has been presented and developed in recent years. The method is based on the estimation of three vibration parameters: values of frequency, amplitude and phase are essential to produce and adjust a proper signal to reduce or eliminate vibration signals. This paper presents a fast (below 10 ms) and accurate estimation method of frequency, amplitude and phase of a multifrequency signal that can be used in the AVC method to increase the AO system performance. The method accuracy depends on several parameters: CiR - number of signal periods in a measurement window, N - number of samples in the FFT procedure, H - time window order, SNR, THD, b - number of A/D converter bits in a real-time system, γ - the damping ratio of the tested signal, φ - the phase of the tested signal. Systematic errors increase when N, CiR, H decrease and when γ increases. The value of the systematic error for γ = 0.1%, CiR = 1.1 and N = 32 is approximately 10^-4 Hz/Hz. This paper focuses on systematic errors and on the effects of the signal phase and of the damping ratio γ on the results.
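An interpolated-DFT estimator of this general family can be sketched as follows: locate the peak of a Hann-windowed FFT, refine the frequency from the ratio of adjacent bin magnitudes, then recover amplitude and phase by a linear least-squares fit at the estimated frequency. This is a generic sketch for an undamped tone, not the paper's exact algorithm (which also handles damping and higher-order windows):

```python
import numpy as np

def estimate_sine(x, fs):
    """Frequency via a Hann-windowed FFT with two-bin interpolation
    (for the Hann window, fractional bin offset delta = (2a-1)/(a+1)
    where a is the larger-neighbour/peak magnitude ratio); amplitude
    and phase then follow from least squares. Model: A*cos(2*pi*f*t + phi)."""
    n = x.size
    X = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = int(np.argmax(X[1:-1])) + 1
    if X[k+1] >= X[k-1]:                       # peak lies toward the right bin
        a = X[k+1] / X[k]
        delta = (2*a - 1) / (a + 1)
    else:                                      # peak lies toward the left bin
        a = X[k-1] / X[k]
        delta = -(2*a - 1) / (a + 1)
    f = (k + delta) * fs / n
    t = np.arange(n) / fs
    c, s = np.linalg.lstsq(np.column_stack([np.cos(2*np.pi*f*t),
                                            np.sin(2*np.pi*f*t)]),
                           x, rcond=None)[0]
    return f, np.hypot(c, s), np.arctan2(-s, c)

fs, n = 1000.0, 4096
t = np.arange(n) / fs
x = 0.8 * np.cos(2*np.pi*123.37*t + 0.7)       # off-bin test tone
f_est, a_est, p_est = estimate_sine(x, fs)
```

With N = 4096 and CiR well above one, such an estimator recovers all three parameters far below one FFT-bin of error, which is the property the AVC loop relies on.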
NASA Astrophysics Data System (ADS)
Migliorelli, Carolina; Alonso, Joan F.; Romero, Sergio; Mañanas, Miguel A.; Nowak, Rafał; Russi, Antonio
2016-04-01
Objective. Medically intractable epilepsy is a common condition that affects 40% of epileptic patients, who generally have to undergo resective surgery. Magnetoencephalography (MEG) has been increasingly used to identify the epileptogenic foci through equivalent current dipole (ECD) modeling, one of the most accepted methods for obtaining an accurate localization of interictal epileptiform discharges (IEDs). Modeling requires that MEG signals be adequately preprocessed to reduce interference, a task that has been greatly improved by the use of blind source separation (BSS) methods. MEG recordings are highly sensitive to metallic interference originating inside the head from implanted intracranial electrodes, dental prostheses, etc., and also coming from external sources such as pacemakers or vagal stimulators. To reduce these artifacts, a BSS-based fully automatic procedure was recently developed and validated, showing an effective reduction of metallic artifacts in simulated and real signals (Migliorelli et al 2015 J. Neural Eng. 12 046001). The main objective of this study was to evaluate its effects on the detection of IEDs and on ECD modeling in patients with focal epilepsy and metallic interference. Approach. The resulting ECD positions were compared across three conditions: without removing metallic interference; rejecting only channels with large metallic artifacts; and after BSS-based reduction. Measures of ECD dispersion and distance were defined to analyze the results. Main results. The relationship between the artifact-to-signal ratio and ECD fitting showed that higher levels of metallic interference produced highly scattered dipoles. Results revealed a significant reduction in dispersion using the BSS-based reduction procedure, yielding feasible ECD locations in contrast to the other two approaches. Significance. The automatic BSS-based method can be applied to MEG datasets affected by metallic artifacts as a processing step to improve the localization of epileptic foci.
Quantized correlation coefficient for measuring reproducibility of ChIP-chip data.
Peng, Shouyong; Kuroda, Mitzi I; Park, Peter J
2010-07-27
Chromatin immunoprecipitation followed by microarray hybridization (ChIP-chip) is used to study protein-DNA interactions and histone modifications on a genome scale. To ensure data quality, these experiments are usually performed in replicates, and a correlation coefficient between replicates is often used to assess reproducibility. However, the correlation coefficient can be misleading because it is affected not only by the reproducibility of the signal but also by the amount of binding signal present in the data. We develop the quantized correlation coefficient (QCC), which is much less dependent on the amount of signal. This involves discretization of the data into a set of quantiles (quantization), a merging procedure to group the background probes, and recalculation of the Pearson correlation coefficient. This procedure reduces the influence of background noise on the statistic, which then properly focuses on the reproducibility of the signal. The performance of this procedure is tested on both simulated and real ChIP-chip data. For replicates with different levels of enrichment over background and different coverage, we find that QCC reflects reproducibility more accurately and is more robust than the standard Pearson or Spearman correlation coefficients. The quantization and merging procedure can also suggest a proper quantile threshold for separating signal from background in further analysis. To measure the reproducibility of ChIP-chip data correctly, a correlation coefficient that is robust to the amount of signal present should be used; QCC is one such measure. The QCC statistic can also be applied in a variety of other contexts for measuring reproducibility, including the analysis of array CGH data for DNA copy number and of gene expression data.
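The three steps the abstract names (quantization, background merging, Pearson recalculation) can be sketched as follows. The quantile count and background fraction are assumed illustrative defaults, not the paper's tuned values.

```python
import numpy as np

def qcc(x, y, n_quantiles=10, background_frac=0.6):
    """Sketch of a quantized correlation coefficient: rank-quantize each
    replicate, merge the low-signal (background) quantiles into a single
    level, then take the Pearson correlation of the quantized values."""
    def quantize(v):
        ranks = np.argsort(np.argsort(v))              # 0 .. n-1
        q = (ranks * n_quantiles) // len(v)            # quantile index 0 .. Q-1
        cutoff = int(n_quantiles * background_frac)
        return np.where(q < cutoff, 0, q - cutoff + 1)  # merge background bins
    a = quantize(np.asarray(x, float)).astype(float)
    b = quantize(np.asarray(y, float)).astype(float)
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```

On simulated replicates with a shared enriched region plus independent noise, the merged-background correlation reflects the reproducibility of the enriched probes rather than the noise floor.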
49 CFR 220.51 - Radio communications and signal indications.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Radio communications and signal indications. 220... RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RAILROAD COMMUNICATIONS Radio and Wireless Communication Procedures § 220.51 Radio communications and signal indications. (a) No information may be given...
Time-Domain Receiver Function Deconvolution using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Moreira, L. P.
2017-12-01
Receiver Functions (RF) are a well-known method for crust modelling using passive seismological signals. Many techniques have been developed to calculate RF traces by applying deconvolution to the radial and vertical seismogram components. A popular method uses a spectral division of the two components, which requires human intervention to apply the water-level procedure that avoids instabilities from division by small numbers. One of the most widely used methods is an iterative procedure that estimates the RF peaks, convolves them with the vertical-component seismogram, and compares the result with the radial component. This method is suitable for automatic processing; however, several RF traces are invalid due to peak-estimation failure. In this work we propose a deconvolution algorithm that uses a Genetic Algorithm (GA) to estimate the RF peaks. The method operates entirely in the time domain, avoiding time-to-frequency transforms (and vice versa), and is fully suitable for automatic processing. Estimated peaks can be used to generate RF traces in seismogram format for visualization. The RF trace quality is similar for high-magnitude events, while there are fewer calculation failures for smaller events, increasing the overall performance for stations with many events.
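The paper replaces the peak search with a GA; as context, the classical iterative time-domain deconvolution the abstract refers to can be sketched in a few lines: repeatedly place the spike that, convolved with the vertical component, best explains the remaining radial component. The wavelet and spike positions below are synthetic assumptions.

```python
import numpy as np

def iterative_deconvolution(radial, vertical, n_peaks=5):
    """Minimal sketch of iterative time-domain RF deconvolution: at each step,
    cross-correlate the residual with the vertical component, place a spike
    at the best lag, and subtract its predicted contribution."""
    n = len(radial)
    rf = np.zeros(n)
    residual = np.asarray(radial, float).copy()
    v_energy = float(np.dot(vertical, vertical))
    for _ in range(n_peaks):
        xc = np.correlate(residual, vertical, mode="full")[n - 1:]  # lags >= 0
        lag = int(np.argmax(np.abs(xc)))
        rf[lag] += xc[lag] / v_energy          # spike amplitude at this lag
        # new residual: radial minus the current RF convolved with vertical
        residual = radial - np.convolve(rf, vertical)[:n]
    return rf
```

With a Gaussian source wavelet and two true spikes, the procedure recovers both lags and amplitudes; the GA variant in the paper searches the same spike parameters globally instead of greedily.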
Minimum depth of investigation for grounded-wire TEM due to self-transients
NASA Astrophysics Data System (ADS)
Zhou, Nannan; Xue, Guoqiang
2018-05-01
The grounded-wire transient electromagnetic method (TEM) has been widely used for near-surface metalliferous prospecting, oil and gas exploration, and hydrogeological surveying. However, it is commonly observed that the TEM signal is contaminated by the self-transient process occurring at the early stage of data acquisition. Correspondingly, there exists a minimum depth of investigation, above which the observed signal is not usable for reliable data processing and interpretation. Therefore, to achieve a more comprehensive understanding of the TEM method, it is necessary to study the self-transient process and to develop an approach for quantifying the minimum detection depth. In this paper, we first analyze the temporal behavior of the equivalent circuit of the TEM method and present a theoretical equation for estimating the self-induction voltage based on the inductance of the transmitting wire. Then, numerical modeling is applied to establish the relationship between the minimum depth of investigation and various properties, including the resistivity of the earth, the offset, and the source length. This provides a guide for the design of survey parameters when grounded-wire TEM is applied to shallow detection. Finally, the approach is verified through applications to a coal field in China.
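A back-of-the-envelope sketch of the mechanism: treating the transmitting wire as an R-L circuit, the self-induction voltage after turn-off decays as V(t) = V0·exp(-tR/L), and the time it takes to fall below the receiver noise floor sets the earliest usable gate, hence a minimum depth via a diffusion-depth rule of thumb. Every numeric value below is an assumption for illustration; the paper derives the actual expression from the wire inductance and calibrates it numerically.

```python
import numpy as np

R = 10.0            # total loop resistance, ohms (assumed)
L = 2.0e-3          # transmitting-wire self-inductance, henries (assumed)
V0 = 50.0           # initial self-induction voltage, volts (assumed)
noise_floor = 1e-6  # receiver noise level, volts (assumed)

tau = L / R                                   # circuit time constant
t_clean = tau * np.log(V0 / noise_floor)      # time for V(t) to reach the noise floor

# rough TEM diffusion-depth rule of thumb: d ~ sqrt(2 * t * rho / mu0)
rho = 100.0                                   # half-space resistivity, ohm-m (assumed)
mu0 = 4e-7 * np.pi
d_min = np.sqrt(2 * t_clean * rho / mu0)
```

The qualitative behavior matches the abstract: a larger inductance (longer wire) or smaller resistance stretches the self-transient, pushing the minimum depth of investigation deeper.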
NASA Astrophysics Data System (ADS)
Meyer, F. J.; McAlpin, D. B.; Gong, W.; Ajadi, O.; Arko, S.; Webley, P. W.; Dehn, J.
2015-02-01
Remote sensing plays a critical role in operational volcano monitoring due to the often remote locations of volcanic systems and the large spatial extent of potential eruption precursor signals. Despite the all-weather capabilities of radar remote sensing and its high performance in monitoring change, the contribution of radar data to operational monitoring activities has been limited in the past. This is largely due to: (1) the high costs associated with radar data; (2) traditionally slow data processing and delivery procedures; and (3) the limited temporal sampling provided by spaceborne radars. With this paper, we present new data processing and data integration techniques that mitigate some of these limitations and allow for a meaningful integration of radar data into operational volcano monitoring decision support systems. Specifically, we present fast data access procedures as well as new approaches to multi-track processing that improve near real-time data access and temporal sampling of volcanic systems with SAR data. We introduce phase-based (coherent) and amplitude-based (incoherent) change detection procedures that are able to extract dense time series of hazard information from these data. For a demonstration, we present an integration of our processing system with an operational volcano monitoring system that was developed for use by the Alaska Volcano Observatory (AVO). Through an application to a historic eruption, we show that the integration of SAR into systems such as AVO can significantly improve the ability of operational systems to detect eruptive precursors. Therefore, the developed technology is expected to improve operational hazard detection, alerting, and management capabilities.
Malaei, Reyhane; Ramezani, Amir M; Absalan, Ghodratollah
2018-05-04
A sensitive and reliable ultrasound-assisted dispersive liquid-liquid microextraction (UA-DLLME) procedure was developed and validated for the extraction and analysis of malondialdehyde (MDA), an important lipid-peroxidation biomarker, in human plasma. To obtain an applicable extraction procedure, the whole optimization process was performed in human plasma. To convert MDA into a readily extractable species, it was derivatized to a hydrazone with 2,4-dinitrophenylhydrazine (DNPH) at 40 °C within 60 min. The influences of experimental variables on the extraction process, including the type and volume of the extraction and disperser solvents, amount of derivatization agent, temperature, pH, ionic strength, and sonication and centrifugation times, were evaluated. Under the optimal experimental conditions, the enhancement factor and extraction recovery were 79.8 and 95.8%, respectively. The analytical signal responded linearly (R² = 0.9988) over a concentration range of 5.00-4000 ng mL⁻¹, with a limit of detection of 0.75 ng mL⁻¹ (S/N = 3) in the plasma sample. To validate the developed procedure, the recommended Food and Drug Administration guidelines for bioanalytical analysis were employed. Copyright © 2018. Published by Elsevier B.V.
Video stereo-laparoscopy system
NASA Astrophysics Data System (ADS)
Xiang, Yang; Hu, Jiasheng; Jiang, Huilin
2006-01-01
Minimally invasive surgery (MIS) has contributed significantly to patient care by reducing the morbidity associated with more invasive procedures. MIS procedures have become standard treatment for gallbladder disease and some abdominal malignancies. The imaging system has played a major role in the evolving field of MIS. The image needs good resolution and large magnification and, in particular, must provide a depth cue while remaining flicker-free and of suitable brightness. A video stereo-laparoscopy system can meet these demands. This paper introduces a 3D video laparoscope with the following characteristics: field frequency 100 Hz, depth range 150 mm, resolution 10 lp/mm. The working principle of the system is described in detail, and the optical system and time-division stereo display system are described briefly. The system has a focusing lens that images onto the CCD chip; the optical signal is converted into a video signal and, through the A/D conversion of the image-processing system, into a digital signal; the polarized images are then displayed on the monitor screen through liquid-crystal shutters. Wearing polarized glasses, surgeons can watch a flicker-free 3D image of the tissue or organ. The 3D video laparoscope system has been applied in the MIS field and praised by surgeons. Compared with the traditional 2D video laparoscopy system, it has merits such as reducing surgery time, surgical complications, and training time.
NASA Astrophysics Data System (ADS)
García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.
2018-07-01
In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models for capturing the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics; they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of fractal processes in the wavelet domain. The method has been validated on simulated signals and on real signals of economic and biological origin. Real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems and for uncovering interesting patterns present in time series.
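The self-similarity the method exploits is easy to see in the wavelet domain: for fractional Brownian motion, the variance of orthonormal wavelet detail coefficients scales as 2^(j(2H+1)) across dyadic scales j, so the slope of log2-variance versus scale estimates the Hurst exponent H. A minimal sketch, assuming a Haar DWT and using ordinary Brownian motion (fBm with H = 0.5) as the test signal:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2 ** 16
bm = np.cumsum(rng.normal(size=n))   # Brownian motion = fBm with H = 0.5

# Haar detail variances across 8 dyadic scales; for fBm the log2-variance
# grows linearly in scale with slope 2H + 1 (orthonormal normalization)
log2_vars = []
x = bm
for _ in range(8):
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # Haar highpass
    x = (x[0::2] + x[1::2]) / np.sqrt(2.0)        # Haar lowpass
    log2_vars.append(np.log2(np.var(detail)))

slope = np.polyfit(np.arange(8), log2_vars, 1)[0]
H_est = (slope - 1.0) / 2.0
```

The paper's Bayesian shrinkage goes further, using this scaling to separate the fractal coefficients from those dominated by a band-limited deterministic component; the sketch only illustrates the scaling law itself (finite-sample effects bias the finest scales slightly).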
Adaptive identification and control of structural dynamics systems using recursive lattice filters
NASA Technical Reports Server (NTRS)
Sundararajan, N.; Montgomery, R. C.; Williams, J. P.
1985-01-01
A new approach for adaptive identification and control of structural dynamic systems using least-squares lattice filters, which are widely used in the signal processing area, is presented. Testing procedures for interfacing the lattice filter identification methods and the modal control method for stable closed-loop adaptive control are presented. The methods are illustrated for a free-free beam and for a complex flexible grid, with the basic control objective being vibration suppression. The approach is validated using both simulations and experimental facilities available at the Langley Research Center.
The UCD/FLWO extensive air shower array at Mt. Hopkins Arizona
NASA Astrophysics Data System (ADS)
Gillanders, G. H.; Fegan, D. J.; McKeown, P. K.; Weekes, T. C.
The design and operation of an extensive air shower (EAS) array being installed around the 10-m optical Cerenkov reflector at F.L. Whipple Observatory on Mt. Hopkins for high-energy gamma-ray astronomy are described. The advantages of an EAS array colocated with a Cerenkov facility at a mountain location are reviewed; the arrangement of the 13 1-sq m scintillation detectors in the array is indicated; the signal-processing and data-acquisition procedures are explained; and preliminary calibration data indicating an effective energy threshold of 60 TeV are presented.
Masoudi, Ali; Newson, Trevor P
2017-01-15
A distributed optical fiber dynamic strain sensor with high spatial and frequency resolution is demonstrated. The sensor, which uses the ϕ-OTDR interrogation technique, exhibited a higher sensitivity thanks to an improved optical arrangement and a new signal processing procedure. The proposed sensing system is capable of fully quantifying multiple dynamic perturbations along a 5 km long sensing fiber with a frequency and spatial resolution of 5 Hz and 50 cm, respectively. The strain resolution of the sensor was measured to be 40 nε.
Investigation of Doppler spectra of laser radiation scattered inside hand skin during occlusion test
NASA Astrophysics Data System (ADS)
Kozlov, I. O.; Zherebtsov, E. A.; Zherebtsova, A. I.; Dremin, V. V.; Dunaev, A. V.
2017-11-01
Laser Doppler flowmetry (LDF) is a method widely used in the diagnosis of microcirculation diseases. It is well known that information about the frequency distribution of the Doppler spectrum of laser radiation scattered by moving red blood cells (RBCs) is usually lost during the standard signal-processing procedure, even though the spectral distribution of the photocurrent contains valuable diagnostic information about the velocity distribution of the RBCs. In this work we propose to compute the microcirculation indexes in sub-ranges of the Doppler spectrum and to investigate the frequency distribution of the computed indexes.
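In conventional LDF the perfusion index is the first moment of the photocurrent power spectrum, roughly the sum of f·P(f) over the Doppler band; the sub-range indexes the abstract proposes simply evaluate that moment per band. A sketch, with white noise standing in for the photocurrent and assumed band edges:

```python
import numpy as np

fs = 40000.0                 # sampling rate, Hz (assumed)
n = 2 ** 14
rng = np.random.default_rng(2)
x = rng.normal(size=n)       # stand-in for the AC photocurrent signal

freqs = np.fft.rfftfreq(n, 1 / fs)
power = np.abs(np.fft.rfft(x)) ** 2 / n        # one-sided periodogram

# first-moment perfusion index per Doppler sub-range (band edges assumed)
bands = [(20, 3000), (3000, 12000), (12000, 20000)]
indexes = []
for lo, hi in bands:
    sel = (freqs >= lo) & (freqs < hi)
    indexes.append(float(np.sum(freqs[sel] * power[sel])))
```

Because the bands partition the full Doppler range, the sub-range indexes add up to the conventional whole-band index, while each one weights a different RBC velocity range.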
Simple Psychological Interventions for Reducing Pain From Common Needle Procedures in Adults
Boerner, Katelynn E.; Birnie, Kathryn A.; Taddio, Anna; McMurtry, C. Meghan; Noel, Melanie; Shah, Vibhuti; Pillai Riddell, Rebecca
2015-01-01
Background: This systematic review evaluated the effectiveness of simple psychological interventions for managing pain and fear in adults undergoing vaccination or related common needle procedures (ie, venipuncture/venous cannulation). Design/Methods: Databases were searched to identify relevant randomized and quasi-randomized controlled trials. Self-reported pain and fear were prioritized as critically important outcomes. Data were combined using standardized mean difference (SMD) or relative risk (RR) with 95% confidence intervals (CI). Results: No studies involving vaccination met inclusion criteria; evidence was drawn from 8 studies of other common needle procedures (eg, venous cannulation, venipuncture) in adults. Two trials evaluating the impact of neutral signaling of the impending procedure (eg, “ready?”) as compared with signaling of impending pain (eg, “sharp scratch”) demonstrated lower pain when signaled about the procedure (n=199): SMD=−0.97 (95% CI, −1.26, −0.68), after removal of 1 trial where self-reported pain was significantly lower than the other 2 included trials. Two trials evaluated music distraction (n=156) and demonstrated no difference in pain: SMD=0.10 (95% CI, −0.48, 0.27), or fear: SMD=−0.25 (95% CI, −0.61, 0.10). Two trials evaluated visual distraction and demonstrated no difference in pain (n=177): SMD=−0.57 (95% CI, −1.82, 0.68), or fear (n=81): SMD=−0.05 (95% CI, −0.50, 0.40). Two trials evaluating breathing interventions found less pain in intervention groups (n=138): SMD=−0.82 (95% CI, −1.21, −0.43). The quality of evidence across all trials was very low. Conclusions: There are no published studies of simple psychological interventions for vaccination pain in adults. There is some evidence of a benefit from other needle procedures for breathing strategies and neutral signaling of the start of the procedure. There is no evidence for use of music or visual distraction. PMID:26352921
S-EMG signal compression based on domain transformation and spectral shape dynamic bit allocation
2014-01-01
Background Surface electromyographic (S-EMG) signal processing has been emerging in the past few years due to its non-invasive assessment of muscle function and structure and because of the fast-growing rate of digital technology, which brings about new solutions and applications. Factors such as sampling rate, quantization word length, number of channels and experiment duration can lead to a potentially large volume of data, so efficient transmission and/or storage of S-EMG signals is an active research issue and the aim of this work. Methods This paper presents an algorithm for the compression of S-EMG signals recorded during an isometric contraction protocol and during dynamic experimental protocols such as cycling. The proposed algorithm is based on the discrete wavelet transform for spectral decomposition and decorrelation, on a dynamic bit allocation procedure to code the wavelet-transformed coefficients, and on entropy coding to minimize the remaining redundancy and pack all data. The bit allocation scheme is based on mathematically decreasing spectral shape models, which assign shorter digital word lengths to high-frequency wavelet-transformed coefficients. Four bit-allocation spectral shapes were implemented and compared: decreasing exponential, decreasing linear, decreasing square-root and rotated hyperbolic tangent. Results The proposed method is demonstrated and evaluated for an isometric protocol and for a dynamic protocol using a real S-EMG signal data bank. Objective performance metrics are presented, along with comparisons with other encoders proposed in the scientific literature. Conclusions The decreasing bit-allocation shape applied to the quantized wavelet coefficients, combined with arithmetic coding, results in an efficient procedure. The performance comparisons of the proposed S-EMG data compression algorithm with established techniques from the scientific literature show promising results. PMID:24571620
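The core of the scheme, DWT decomposition followed by spectral-shape bit allocation, can be sketched as follows. This is a simplified illustration (Haar wavelet, toy signal, one of the four shapes), not the paper's tuned encoder, and it omits the entropy-coding stage.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1024
t = np.arange(n)
semg = np.sin(2 * np.pi * t / 64) + 0.1 * rng.normal(size=n)  # stand-in for S-EMG

# 4-level Haar DWT -> subbands ordered coarse to fine: [a4, d4, d3, d2, d1]
subbands = []
x = semg
for _ in range(4):
    subbands.append((x[0::2] - x[1::2]) / np.sqrt(2.0))  # detail
    x = (x[0::2] + x[1::2]) / np.sqrt(2.0)               # approximation
subbands.append(x)
subbands = subbands[::-1]

# decreasing exponential bit allocation: fewer bits for finer (high-freq) bands
b_max, alpha = 12, 0.5
bits = [max(1, int(round(float(b_max * np.exp(-alpha * k)))))
        for k in range(len(subbands))]

def quantize(coeffs, b):
    """Uniform quantizer with 2**b steps over the coefficient range."""
    lo, hi = float(coeffs.min()), float(coeffs.max())
    step = (hi - lo) / (2 ** b)
    if step == 0.0:
        step = 1.0
    return lo + np.round((coeffs - lo) / step) * step

quantized = [quantize(c, b) for c, b in zip(subbands, bits)]
```

The quantized subbands would then be entropy coded (arithmetic coding in the paper); the decreasing shape keeps the quantization error concentrated where S-EMG spectra carry the least energy.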
A disassembly-free method for evaluation of spiral bevel gear assembly
NASA Astrophysics Data System (ADS)
Jedliński, Łukasz; Jonak, Józef
2017-05-01
The paper presents a novel method for evaluating the assembly of spiral bevel gears. An examination of approaches to gear diagnostics without disassembly has revealed that residual processes in the form of vibrations (or noise) are currently the most suitable for this purpose. According to the literature, the contact pattern is a complex parameter for describing gear position. The task is therefore to determine the correlation between the contact pattern and gear vibrations. Although the vibration signal contains a great deal of information, it also has a complex spectral structure and contains interference. For this reason, the proposed method has three variants, which determine the effect of preliminary signal processing on the results. In Variant 2 (stage 1), the vibration signal is subjected to multichannel denoising using a wavelet transform (WT), and in Variant 3 to a combination of WT and principal component analysis (PCA); this denoising step does not occur in Variant 1. Next, features of the vibration signal are determined in order to focus on the information that is crucial to the objective of the study. Given the lack of unequivocal premises for selecting optimum features, twenty features are calculated and ranked, and the appropriate ones are selected using an algorithm. Diagnostic rules were created using artificial neural networks; the suitability of three network types was investigated: multilayer perceptron (MLP), radial basis function (RBF) and support vector machine (SVM).
PepsNMR for 1H NMR metabolomic data pre-processing.
Martin, Manon; Legat, Benoît; Leenders, Justine; Vanwinsberghe, Julien; Rousseau, Réjane; Boulanger, Bruno; Eilers, Paul H C; De Tullio, Pascal; Govaerts, Bernadette
2018-08-17
In the analysis of biological samples, control over experimental design and data acquisition procedures alone cannot ensure well-conditioned 1H NMR spectra with maximal information recovery for data analysis. A third major element affects the accuracy and robustness of the results: the data pre-processing/pre-treatment, to which not enough attention is usually devoted, particularly in metabolomic studies. The usual approach is to use the proprietary software provided by the analytical instrument's manufacturer to conduct the entire pre-processing strategy. This widespread practice has a number of advantages, such as a user-friendly interface with graphical facilities, but it involves non-negligible drawbacks: a lack of methodological information and automation, a dependency on subjective human choices, only standard processing possibilities and an absence of objective quality criteria for evaluating pre-processing quality. This paper introduces PepsNMR, an R package dedicated to the whole processing chain prior to multivariate data analysis, to meet these needs; it includes, among other tools, solvent signal suppression, internal calibration, phase, baseline and misalignment corrections, bucketing and normalisation. Methodological aspects are discussed and the package is compared to the gold-standard procedure with two metabolomic case studies. The use of PepsNMR on these data shows better information recovery and predictive power based on objective and quantitative quality criteria. Other key assets of the package are workflow processing speed, reproducibility, reporting and flexibility, graphical outputs and documented routines. Copyright © 2018 Elsevier B.V. All rights reserved.
Drift correction of the dissolved signal in single particle ICPMS.
Cornelis, Geert; Rauch, Sebastien
2016-07-01
A method is presented in which drift, the random fluctuation of the signal intensity, is compensated for by estimating the drift function with a moving average. Using single particle ICPMS (spICPMS) measurements of 10 and 60 nm Au NPs, it was shown that drift reduces the accuracy of spICPMS analysis at the calibration stage and during calculation of the particle size distribution (PSD), but that the present method can restore the average signal intensity as well as the signal distribution of particle-containing samples skewed by drift. Moreover, deconvolution, a method that models the distribution of dissolved signals, fails in some cases when standards and samples are affected by drift, and the present method was shown to restore accuracy here as well. Relatively high particle signals have to be removed prior to drift correction, which was done using a 3 × sigma method; these signals are treated separately and added back afterwards. The method can also correct for flicker noise, which increases as the signal intensity increases because of drift. Accuracy improved in many cases when flicker correction was used and, when accurate results were obtained despite drift, the correction procedures did not reduce accuracy. The procedure may be useful for extracting results from experimental runs that would otherwise have to be repeated. Graphical Abstract: A spICP-MS signal affected by drift (left) is corrected (right) by adjusting the local (moving) averages (green) and standard deviations (purple) to their respective values at a reference time (red). Combined with the removal of particle events (blue) in the case of calibration standards, this method is shown to yield particle size distributions where that would otherwise be impossible, even when the deconvolution method is used to discriminate dissolved and particle signals.
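The key steps, a 3-sigma removal of particle events followed by rescaling the local (moving) mean and standard deviation to their values at a reference time, can be sketched as below. Window length, spike handling and the reference point (here the start of the run) are simplifying assumptions, not the paper's exact choices.

```python
import numpy as np

def drift_correct(signal, window=501):
    """Sketch of moving-average drift correction for spICPMS-like data:
    mask particle events with an iterative mean + 3*sigma rule, estimate the
    local mean/std of the dissolved background, and rescale both to their
    values at the reference time (t = 0)."""
    s = np.asarray(signal, float)
    # 1) iterative 3-sigma masking of particle events
    background = s.copy()
    for _ in range(5):
        mu, sd = background.mean(), background.std()
        background = background[background < mu + 3.0 * sd]
    valid = s < background.mean() + 3.0 * background.std()
    # 2) moving average / moving std of the dissolved (background) signal
    kernel = np.ones(window)
    counts = np.convolve(valid.astype(float), kernel, mode="same")
    mean_loc = np.convolve(np.where(valid, s, 0.0), kernel, mode="same") / counts
    sq_loc = np.convolve(np.where(valid, s * s, 0.0), kernel, mode="same") / counts
    sd_loc = np.sqrt(np.maximum(sq_loc - mean_loc ** 2, 1e-12))
    # 3) rescale local mean and std to the reference values
    return (s - mean_loc) / sd_loc * sd_loc[0] + mean_loc[0]
```

On a simulated run with a linear drift and sparse particle spikes, the background level becomes stationary after correction while the particle events remain clearly separated from it.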
Discordance between net analyte signal theory and practical multivariate calibration.
Brown, Christopher D
2004-08-01
Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.
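Lorber's net analyte signal itself is the component of the analyte's pure spectrum orthogonal to the subspace spanned by the interferent spectra, i.e. nas = (I − S·S⁺)·s. A minimal sketch (generic spectra, not tied to either calibration framework discussed in the paper):

```python
import numpy as np

def net_analyte_signal(s, S):
    """Net analyte signal in Lorber's sense: the part of the analyte's pure
    spectrum s orthogonal to the column space of S, whose columns are the
    interferent spectra. ||nas|| is then a sensitivity figure of merit."""
    coeffs, *_ = np.linalg.lstsq(S, s, rcond=None)  # project s onto span(S)
    return s - S @ coeffs
```

The paper's point is that inverse least-squares predictors (PLS, PCR) do not converge to this vector once measurement error is present, so figures of merit built from it should be interpreted with care.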
POWER-ENHANCED MULTIPLE DECISION FUNCTIONS CONTROLLING FAMILY-WISE ERROR AND FALSE DISCOVERY RATES.
Peña, Edsel A; Habiger, Joshua D; Wu, Wensong
2011-02-01
Improved procedures, in terms of smaller missed discovery rates (MDR), for performing multiple hypotheses testing with weak and strong control of the family-wise error rate (FWER) or the false discovery rate (FDR) are developed and studied. The improvement over existing procedures such as the Šidák procedure for FWER control and the Benjamini-Hochberg (BH) procedure for FDR control is achieved by exploiting possible differences in the powers of the individual tests. Results signal the need to take into account the powers of the individual tests and to have multiple hypotheses decision functions which are not limited to simply using the individual p -values, as is the case, for example, with the Šidák, Bonferroni, or BH procedures. They also enhance understanding of the role of the powers of individual tests, or more precisely the receiver operating characteristic (ROC) functions of decision processes, in the search for better multiple hypotheses testing procedures. A decision-theoretic framework is utilized, and through auxiliary randomizers the procedures could be used with discrete or mixed-type data or with rank-based nonparametric tests. This is in contrast to existing p -value based procedures whose theoretical validity is contingent on each of these p -value statistics being stochastically equal to or greater than a standard uniform variable under the null hypothesis. Proposed procedures are relevant in the analysis of high-dimensional "large M , small n " data sets arising in the natural, physical, medical, economic and social sciences, whose generation and creation is accelerated by advances in high-throughput technology, notably, but not limited to, microarray technology.
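For reference, the BH baseline that the proposed procedures improve upon is the classic step-up rule: reject the hypotheses with the k smallest p-values, where k is the largest i such that p(i) ≤ (i/m)·α. A straightforward sketch:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Classic Benjamini-Hochberg step-up FDR procedure: returns a boolean
    rejection mask aligned with the input p-values."""
    p = np.asarray(pvals, float)
    m = p.size
    order = np.argsort(p)
    thresholds = (np.arange(1, m + 1) / m) * alpha   # i/m * alpha
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))        # largest passing index
        reject[order[: k + 1]] = True                # step-up: reject all up to k
    return reject
```

Note the step-up character: a p-value that misses its own threshold can still be rejected if a larger p-value passes a later one. The paper's procedures depart from this by weighting tests according to their individual powers (ROC functions) instead of using the p-values alone.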
Method for laser spot welding monitoring
NASA Astrophysics Data System (ADS)
Manassero, Giorgio
1994-09-01
As more powerful solid-state laser sources appear on the market, new applications become technically possible and economically important. Every process requires a preliminary optimization phase. The main parameters for a welding application with a high-power Nd-YAG laser are pulse energy, pulse width, repetition rate and process duration or speed. In this paper an experimental methodology for the development of an electro-optical laser spot welding monitoring system is presented. The electromagnetic emission from the molten pool was observed and measured with appropriate sensors. The statistical method 'Parameter Design' was used to obtain an accurate analysis of the process parameters that influence process results. A laser station with a solid-state laser coupled to an optical fiber (1 mm in diameter) was used for the welding tests. The main material used for the experimental plan was zinc-coated steel sheet 0.8 mm thick. This material and the related spot welding technique are extensively used in the automotive industry; the introduction of laser technology into the production line will therefore improve the quality of the final product. A correlation between sensor signals and 'through or not-through' welds was established. The investigation has furthermore shown that modern laser production systems need multisensor heads for process monitoring or control, together with more advanced signal elaboration procedures.
Estimation of Fine and Oversize Particle Ratio in a Heterogeneous Compound with Acoustic Emissions.
Nsugbe, Ejay; Ruiz-Carcel, Cristobal; Starr, Andrew; Jennions, Ian
2018-03-13
The final phase of powder production typically involves a mixing process where all of the particles are combined and agglomerated with a binder to form a single compound. The traditional means of inspecting the physical properties of the final product involves an inspection of the particle sizes using an offline sieving and weighing process. The main downside of this technique, in addition to being an offline-only measurement procedure, is its inability to characterise large agglomerates of powders due to sieve blockage. This work assesses the feasibility of a real-time monitoring approach using a benchtop test rig and a prototype acoustic-based measurement approach to provide information that can be correlated to product quality and provide the opportunity for future process optimisation. Acoustic emission (AE) was chosen as the sensing method due to its low cost, simple setup process, and ease of implementation. The performance of the proposed method was assessed in a series of experiments where the offline quality check results were compared to the AE-based real-time estimations using data acquired from a benchtop powder free-flow rig. A purpose-designed time-domain signal processing method was used to extract particle size information from the acquired AE signal, and the results show that this technique is capable of estimating the required fine-to-oversize particle ratio in the washing powder compound with an average absolute error of 6%.
General anesthesia selectively disrupts astrocyte calcium signaling in the awake mouse cortex
Thrane, Alexander Stanley; Zeppenfeld, Douglas; Lou, Nanhong; Xu, Qiwu; Nagelhus, Erlend Arnulf; Nedergaard, Maiken
2012-01-01
Calcium signaling represents the principal pathway by which astrocytes respond to neuronal activity. General anesthetics are routinely used in clinical practice to induce a sleep-like state, allowing otherwise painful procedures to be performed. Anesthetic drugs are thought to mainly target neurons in the brain and act by suppressing synaptic activity. However, the direct effect of general anesthesia on astrocyte signaling in awake animals has not previously been addressed. This is a critical issue, because calcium signaling may represent an essential mechanism through which astrocytes can modulate synaptic activity. In our study, we performed calcium imaging in awake head-restrained mice and found that three commonly used anesthetic combinations (ketamine/xylazine, isoflurane, and urethane) markedly suppressed calcium transients in neocortical astrocytes. Additionally, all three anesthetics masked potentially important features of the astrocyte calcium signals, such as synchronized widespread transients that appeared to be associated with arousal in awake animals. Notably, anesthesia affected calcium transients in both processes and soma and depressed spontaneous signals, as well as calcium responses evoked by whisker stimulation or agonist application. We show that these calcium transients are inositol 1,4,5-trisphosphate type 2 receptor (IP3R2)-dependent but resistant to a local blockade of glutamatergic or purinergic signaling. Finally, we found that doses of anesthesia insufficient to affect neuronal responses to whisker stimulation selectively suppressed astrocyte calcium signals. Taken together, these data suggest that general anesthesia may suppress astrocyte calcium signals independently of neuronal activity. We propose that these glial effects may constitute a nonneuronal mechanism for the sedative action of anesthetic drugs. PMID:23112168
Real-time inspection by submarine images
NASA Astrophysics Data System (ADS)
Tascini, Guido; Zingaretti, Primo; Conte, Giuseppe
1996-10-01
A real-time application of computer vision concerning tracking and inspection of a submarine pipeline is described. The objective is to develop automatic procedures for supporting human operators in the real-time analysis of images acquired by cameras mounted on underwater remotely operated vehicles (ROVs). Implementation of such procedures gives rise to a human-machine system for underwater pipeline inspection that can automatically detect and signal the presence of the pipe, of its structural or accessory elements, and of dangerous or alien objects in its neighborhood. The possibility of modifying the image acquisition rate in the simulations performed on video-recorded images is used to prove that the system performs all necessary processing with acceptable robustness while working in real time up to a speed of about 2.5 kn, well beyond what current ROVs and safety constraints allow.
NASA Astrophysics Data System (ADS)
Wang, Kaiyu; Zhang, Zhiyong; Ding, Xiaoyan; Tian, Fang; Huang, Yuqing; Chen, Zhong; Fu, Riqiang
2018-02-01
The feasibility of using the spin-echo based diagonal peak suppression method in solid-state MAS NMR homonuclear chemical shift correlation experiments is demonstrated. A complete phase cycling is designed in such a way that in the indirect dimension only the spin diffused signals are evolved, while all signals not involved in polarization transfer are refocused for cancellation. A data processing procedure is further introduced to reconstruct this acquired spectrum into a conventional two-dimensional homonuclear chemical shift correlation spectrum. A uniformly 13C, 15N labeled Fmoc-valine sample and the transmembrane domain of a human protein, LR11 (sorLA), in native Escherichia coli membranes have been used to illustrate the capability of the proposed method in comparison with standard 13C-13C chemical shift correlation experiments.
Multistrip western blotting to increase quantitative data output.
Kiyatkin, Anatoly; Aksamitiene, Edita
2009-01-01
The qualitative and quantitative measurements of protein abundance and modification states are essential in understanding their functions in diverse cellular processes. Typical western blotting, though sensitive, is prone to produce substantial errors and is not readily adapted to high-throughput technologies. Multistrip western blotting is a modified immunoblotting procedure based on simultaneous electrophoretic transfer of proteins from multiple strips of polyacrylamide gels to a single membrane sheet. In comparison with the conventional technique, Multistrip western blotting increases the data output per single blotting cycle up to tenfold, allows concurrent monitoring of up to nine different proteins from the same loading of the sample, and substantially improves the data accuracy by reducing immunoblotting-derived signal errors. This approach enables statistically reliable comparison of different or repeated sets of data, and therefore is beneficial to apply in biomedical diagnostics, systems biology, and cell signaling research.
NASA Astrophysics Data System (ADS)
Jiang, Guo-Qian; Xie, Ping; Wang, Xiao; Chen, Meng; He, Qun
2017-11-01
The performance of traditional vibration-based fault diagnosis methods greatly depends on handcrafted features extracted using signal processing algorithms, which require significant amounts of domain knowledge and human labor, and do not generalize well to new diagnosis domains. Recently, unsupervised representation learning has provided a promising alternative to feature extraction in traditional fault diagnosis due to its superior ability to learn from unlabeled data. Given that vibration signals usually contain multiple temporal structures, this paper proposes a multiscale representation learning (MSRL) framework to learn useful features directly from raw vibration signals, with the aim of capturing rich and complementary fault pattern information at different scales. In our proposed approach, a coarse-grained procedure is first employed to obtain multiple scale signals from an original vibration signal. Then, sparse filtering, a newly developed unsupervised learning algorithm, is applied to automatically learn useful features from each scale signal, and the learned features at each scale are concatenated one by one to obtain multiscale representations. Finally, the multiscale representations are fed into a supervised classifier to produce diagnosis results. Our proposed approach is evaluated using two different case studies: motor bearing and wind turbine gearbox fault diagnosis. Experimental results show that the proposed MSRL approach can take full advantage of the availability of unlabeled data to learn discriminative features, and achieves better performance, with higher accuracy and stability, than the traditional approaches.
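The coarse-grained procedure in the first step is commonly implemented as non-overlapping window averaging, as in multiscale entropy analysis; a minimal sketch under that assumption (the paper's exact coarse-graining scheme may differ):

```python
def coarse_grain(signal, scale):
    """Non-overlapping window means: one common coarse-graining scheme."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def multiscale_signals(signal, max_scale):
    """Scale 1 is the raw signal; higher scales are progressively smoother
    versions, each feeding its own feature learner before concatenation."""
    return [coarse_grain(signal, s) for s in range(1, max_scale + 1)]

raw = [1.0, 3.0, 2.0, 4.0, 6.0, 8.0]
scales = multiscale_signals(raw, 3)
# scales[0] is the raw signal; scales[1] == [2.0, 3.0, 7.0]; scales[2] == [2.0, 6.0]
```

Each element of `scales` would then be passed to its own sparse-filtering feature learner, and the per-scale features concatenated.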
Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui
2016-01-01
Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency, [Formula: see text], compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e., ε → 0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals ([Formula: see text]). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
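The Lancaster procedure generalizes Fisher's combined p-value method by weighting each gene's contribution; the unweighted Fisher special case is easy to sketch in pure Python, because with even degrees of freedom (2m) the chi-square survival function has a closed form:

```python
import math

def fisher_combined(pvalues):
    """Fisher's method: T = -2 * sum(log p_i) is chi-square with 2m df
    under the global null. The even-df chi-square survival function has
    a closed form, so no external stats library is needed.
    """
    m = len(pvalues)
    t = -2.0 * sum(math.log(p) for p in pvalues)
    half = t / 2.0
    # P(chi2_{2m} > t) = exp(-t/2) * sum_{i=0}^{m-1} (t/2)^i / i!
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(m))
```

For example, `fisher_combined([0.05, 0.05])` is about 0.017: two moderate p-values combine into stronger joint evidence, the effect the pathway-level stage exploits.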
Specifications of CCITT Signalling System Number 7.
1981-05-01
signalling information destined for the now accessible signalling point. ... 10 Signalling link management 10.1 General ... The transfer-prohibited procedure makes use of the transfer-prohibited message and of the transfer-prohibited-acknowledgement message which ... Abbreviations used in this document: CBA - Changeback-acknowledgement signal; CBD - Changeback-declaration signal; ...
Blackout detection as a multiobjective optimization problem.
Chaudhary, A M; Trachtenberg, E A
1991-01-01
We study new fast computational procedures for pilot blackout (total loss of vision) detection in real time. Their validity is demonstrated by data acquired during experiments with volunteer pilots on a human centrifuge. A new systematic class of very fast suboptimal group filters is employed. The utilization of various inherent group invariances of the signals involved allows us to solve the detection problem via estimation with respect to many performance criteria. The complexity of the procedures, in terms of the number of computer operations required for their implementation, is investigated. Various classes of such prediction procedures are investigated and analyzed, and trade-offs are established. We also investigated the validity of suboptimal filtering using different group filters for different performance criteria, namely: the number of false detections, the number of missed detections, the accuracy of detection, and the closeness of all procedures to a certain benchmark technique in terms of dispersion squared (mean square error). The results are compared to recent studies of detection of evoked potentials using estimation. The group filters compare favorably with conventional techniques in many cases with respect to the above-mentioned criteria. Their main advantage is fast computational processing.
Wavelet-based characterization of gait signal for neurological abnormalities.
Baratin, E; Sugavaneswaran, L; Umapathy, K; Ioana, C; Krishnan, S
2015-02-01
Studies conducted by the World Health Organization (WHO) indicate that over one billion people suffer from neurological disorders worldwide, and a lack of efficient diagnosis procedures affects their therapeutic interventions. Characterizing certain pathologies of motor control to facilitate their diagnosis can be useful in quantitatively monitoring disease progression and efficient treatment planning. As a suitable directive, we introduce a wavelet-based scheme for effective characterization of gait associated with certain neurological disorders. In addition, since the data were recorded from a dynamic process, this work also investigates the need for gait signal re-sampling prior to identification of signal markers in the presence of pathologies. To benefit automated discrimination of gait data, certain characteristic features are extracted from the wavelet-transformed signals. The performance of the proposed approach was evaluated using a database consisting of 15 Parkinson's disease (PD), 20 Huntington's disease (HD), 13 amyotrophic lateral sclerosis (ALS) and 16 healthy control subjects, and an average classification accuracy of 85% is achieved using an unbiased cross-validation strategy. The obtained results demonstrate the potential of the proposed methodology for computer-aided diagnosis and automatic characterization of certain neurological disorders. Copyright © 2015 Elsevier B.V. All rights reserved.
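The abstract does not name the wavelet family or the extracted features; purely as an illustration, here is a Haar-based sketch computing relative detail-band energies, one common class of wavelet features for gait signals (signal length is assumed divisible by 2**levels):

```python
def haar_step(signal):
    """One level of the orthonormal Haar wavelet transform."""
    s = 2 ** -0.5
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def wavelet_energy_features(signal, levels):
    """Relative energy of each detail band, a candidate feature vector
    for feeding a gait classifier."""
    energies, approx = [], list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        energies.append(sum(d * d for d in detail))
    total = sum(energies) or 1.0  # guard against an all-constant signal
    return [e / total for e in energies]
```

A rapidly alternating signal concentrates all of its detail energy at the finest scale, while a constant signal has none; such band-energy ratios are then passed to a classifier.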
LISA Framework for Enhancing Gravitational Wave Signal Extraction Techniques
NASA Technical Reports Server (NTRS)
Thompson, David E.; Thirumalainambi, Rajkumar
2006-01-01
This paper describes the development of a Framework for benchmarking and comparing signal-extraction and noise-interference-removal methods that are applicable to interferometric Gravitational Wave detector systems. The primary use is towards comparing signal and noise extraction techniques at LISA frequencies from multiple (possibly confused) gravitational wave sources. The Framework includes extensive hybrid learning/classification algorithms, as well as post-processing regularization methods, and is based on a unique plug-and-play (component) architecture. Published methods for signal extraction and interference removal at LISA frequencies are being encoded, as well as multiple source noise models, so that the stiffness of GW Sensitivity Space can be explored under each combination of methods. Furthermore, synthetic datasets and source models can be created and imported into the Framework, and specific degraded numerical experiments can be run to test the flexibility of the analysis methods. The Framework also supports use of full current LISA Testbeds, synthetic data systems, and simulators already in existence through plug-ins and wrappers, thus preserving those legacy codes and systems intact. Because of the component-based architecture, all selected procedures can be registered or de-registered at run-time, and are completely reusable, reconfigurable, and modular.
Rodríguez Chialanza, Mauricio; Sierra, Ignacio; Pérez Parada, Andrés; Fornaro, Laura
2018-06-01
There are several techniques used to analyze microplastics, often based on a combination of visual and spectroscopic techniques. Here we introduce an alternative workflow for identification and mass quantitation through a combination of optical microscopy with image analysis (IA) and differential scanning calorimetry (DSC). We studied four synthetic polymers of environmental concern: low- and high-density polyethylene (LDPE and HDPE, respectively), polypropylene (PP), and polyethylene terephthalate (PET). Selected experiments were conducted to investigate (i) particle characterization and counting procedures based on image analysis with open-source software, (ii) chemical identification of microplastics based on DSC signal processing, (iii) the dependence of the DSC signal on particle size, and (iv) quantitation of microplastic mass based on the DSC signal. We describe the potential and limitations of these techniques to increase the reliability of microplastic analysis. Particle size was shown to have a particular influence on the qualitative and quantitative performance of the DSC signals. Both identification (based on characteristic onset temperature) and mass quantitation (based on heat flow) were affected by particle size. As a result, a proper sample treatment that includes sieving of suspended particles is particularly required for this analytical approach.
NASA Astrophysics Data System (ADS)
Feng, Ke; Wang, KeSheng; Zhang, Mian; Ni, Qing; Zuo, Ming J.
2017-03-01
The planetary gearbox, due to its unique mechanical structure, is an important rotating machine for transmission systems. Its engineering applications are often in non-stationary operational conditions, such as helicopters, wind energy systems, etc. The unique physical structures and working conditions make the vibrations measured from planetary gearboxes exhibit complex time-varying modulation and therefore yield complicated spectral structures. As a result, traditional signal processing methods, such as Fourier analysis, and the selection of characteristic fault frequencies for diagnosis face serious challenges. To overcome this drawback, this paper proposes a signal selection scheme for fault-emphasized diagnostics based upon two order tracking techniques. The basic procedures for the proposed scheme are as follows. (1) Computed order tracking is applied to reveal the order contents and identify the order(s) of interest. (2) Vold-Kalman filter order tracking is used to extract the order(s) of interest; these filtered order(s) constitute the so-called selected vibrations. (3) Time-domain statistical indicators are applied to the selected vibrations for fault-information-emphasized diagnostics. The proposed scheme is explained and demonstrated in a signal simulation model and experimental studies, and the method proves to be effective for planetary gearbox fault diagnosis.
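Step (1), computed order tracking, amounts to inverting the measured shaft angle-versus-time curve and resampling the vibration signal at uniform angle increments, so that shaft orders appear at fixed "frequencies" even when the speed varies. A pure-Python sketch (the function names and linear-interpolation helper are illustrative, not from the paper):

```python
def linear_interp(x, xp, fp):
    """Piecewise-linear interpolation of fp over ascending nodes xp,
    evaluated at ascending points x."""
    out, j = [], 0
    for xi in x:
        while j < len(xp) - 2 and xp[j + 1] < xi:
            j += 1
        t = (xi - xp[j]) / (xp[j + 1] - xp[j])
        out.append(fp[j] + t * (fp[j + 1] - fp[j]))
    return out

def angular_resample(times, signal, phase, samples_per_rev):
    """Computed order tracking: resample a time-domain signal at uniform
    shaft-angle steps.

    `phase` is the cumulative shaft angle in revolutions at each time
    stamp, e.g. integrated from a tachometer pulse train.
    """
    n = int(phase[-1] * samples_per_rev)
    uniform_angles = [k / samples_per_rev for k in range(n + 1)]
    # Invert angle(t) to find the time of each uniform angle, then sample.
    resample_times = linear_interp(uniform_angles, phase, times)
    return linear_interp(resample_times, times, signal)
```

At constant speed the resampling is the identity up to interpolation; under speed fluctuations it "straightens out" the orders before the Vold-Kalman filtering of step (2).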
McElree, Brian; Carrasco, Marisa
2012-01-01
Feature and conjunction searches have been argued to delineate parallel and serial operations in visual processing. The authors evaluated this claim by examining the temporal dynamics of the detection of features and conjunctions. The 1st experiment used a reaction time (RT) task to replicate standard mean RT patterns and to examine the shapes of the RT distributions. The 2nd experiment used the response-signal speed–accuracy trade-off (SAT) procedure to measure discrimination (asymptotic detection accuracy) and detection speed (processing dynamics). Set size affected discrimination in both feature and conjunction searches but affected detection speed only in the latter. Fits of models to the SAT data that included a serial component overpredicted the magnitude of the observed dynamics differences. The authors concluded that both features and conjunctions are detected in parallel. Implications for the role of attention in visual processing are discussed. PMID:10641310
Improving patient care by making small sustainable changes: a cardiac telemetry unit's experience.
Braaten, Jane S; Bellhouse, Dorothy E
2007-01-01
With the introduction of each new drug, technology, and regulation, the processes of care become more complicated, creating an elaborate set of procedures connecting various hospital units and departments. Using methods of Adaptive Design and the Toyota Production System, a nursing unit redesigned work systems to achieve sustainable improvements in productivity, staff and patient satisfaction, and quality outcomes. The first hurdle of redesign was identifying problems, to which staff had become so accustomed, through various workarounds, that they had trouble seeing the process bottlenecks. Once the staff identified problems, they assumed they could solve them because they assumed they knew the causes. Utilizing root cause analysis, asking "why, why, why," was essential to unearthing the true cause of a problem. Similarly, identifying solutions that were simple and low cost was an essential step in problem solving. Adopting new procedures and sustaining the commitment to identify and signal problems was a last and critical step toward realizing improvement, requiring a manager to function as "teacher/coach" rather than "fixer/firefighter".
Toppi, J; Petti, M; Vecchiato, G; Cincotti, F; Salinari, S; Mattia, D; Babiloni, F; Astolfi, L
2013-01-01
Partial Directed Coherence (PDC) is a spectral multivariate estimator of effective connectivity relying on the concept of Granger causality. Even though its original definition derived directly from information theory, two modifications were introduced in order to provide better physiological interpretations of the estimated networks: i) normalization of the estimator according to rows, ii) squared transformation. In the present paper we investigated the effect of PDC normalization on the performance achieved by applying the statistical validation process to the investigated connectivity patterns under different conditions of signal-to-noise ratio (SNR) and amount of data available for the analysis. Results of the statistical analysis revealed an effect of PDC normalization only on the percentages of type I and type II errors incurred when using the shuffling procedure for the assessment of connectivity patterns. The PDC formulation had no effect on the performance achieved when the validation process was instead executed by means of the asymptotic statistic approach. Moreover, the percentages of both false positives and false negatives committed by the asymptotic statistic approach are always lower than those achieved by the shuffling procedure for each type of normalization.
Detailed analysis of complex single molecule FRET data with the software MASH
NASA Astrophysics Data System (ADS)
Hadzic, Mélodie C. A. S.; Kowerko, Danny; Börner, Richard; Zelger-Paulus, Susann; Sigel, Roland K. O.
2016-04-01
The processing and analysis of surface-immobilized single molecule FRET (Förster resonance energy transfer) data follows systematic steps (e.g. single molecule localization, clearance of different sources of noise, selection of the conformational and kinetic model, etc.) that require a solid knowledge of optics, photophysics, signal processing and statistics. The present proceeding aims at standardizing and facilitating procedures for single molecule detection by guiding the reader through an optimization protocol for a particular experimental data set. Relevant features were determined from single molecule movies (SMM) of synthetically recreated Cy3- and Cy5-labeled Sc.ai5γ group II intron molecules to test the performance of four different detection algorithms. Up to 120 different parameterizations per method were routinely evaluated to finally establish an optimum detection procedure. The present protocol is adaptable to any movie displaying surface-immobilized molecules, and can be easily reproduced with our home-written software MASH (multifunctional analysis software for heterogeneous data) and script routines (both available in the download section of www.chem.uzh.ch/rna).
Applying simulation model to uniform field space charge distribution measurements by the PEA method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Salama, M.M.A.
1996-12-31
Signals measured under uniform fields by the Pulsed Electroacoustic (PEA) method have been processed by a deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct method has been proposed recently in which the deconvolution is eliminated. However, surface charge cannot be represented well by this method, because surface charge has a bandwidth extending from zero to infinity: the bandwidth of the charge distribution must be much narrower than the bandwidth of the PEA system transfer function in order to apply the direct method properly. When surface charges cannot be distinguished from space charge distributions, the accuracy and the resolution of the obtained space charge distributions decrease. To overcome this difficulty a simulation model is therefore proposed. This paper shows the authors' attempts to apply the simulation model to obtain space charge distributions under plane-plane electrode configurations. Due to the page limitation for the paper, the charge distribution originated by the simulation model is compared to that obtained by the direct method with a set of simulated signals.
Detection of wavelengths in the visible range using fiber optic sensors
NASA Astrophysics Data System (ADS)
Díaz, Leonardo; Morales, Yailteh; Mattos, Lorenzo; Torres, Cesar O.
2013-11-01
This paper shows the design and implementation of a fiber optic sensor for detecting and identifying wavelengths in the visible range. The system consists of a diffuse optical fiber, a conventional 650 nm, 2.5 mW laser diode, an LX1972 ambient light sensor, a PIC 18F2550 microcontroller, and an LCD screen for viewing. The principle used in the detection of the wavelength is based on specular reflection and absorption. The optoelectronic device designed and built uses the absorption and reflection properties of the material under study, having as active optical medium a bifurcated optical fiber, which is optically coupled to an ambient light sensor that converts the light signals to electrical signals, a procedure handled by a microcontroller, which acquires and processes the signal. To verify correct operation of the assembly, color cards of sewing thread and nail polish were used as samples for analysis. This optoelectronic device can be used in many applications such as quality control of industrial processes, classification of corks or bottle caps, color quality of textiles, sugar solutions, polymers, and food, among others.
Application of GPR Method for Detection of Loose Zones in Flood Levee
NASA Astrophysics Data System (ADS)
Gołębiowski, Tomisław; Małysa, Tomasz
2018-02-01
In this paper the results of non-invasive georadar (GPR) surveys carried out to detect loose zones in a flood levee are presented. Terrain measurements were performed on the Vistula river flood levee in the village of Wawrzeńczyce near Cracow. At the investigation site, leakages through the levee were observed during the flood of 2010, so detection of internal water filtration paths was an important matter for the stability of the levee during the next flood. The GPR surveys had a reconnaissance character, so they were carried out using the short-offset reflection profiling (SORP) technique, and the radargrams were subjected to standard signal processing. The survey results made it possible to outline the main loose zones in the levee, which were the cause of the leakages in 2010. Additionally, gravel interbeddings in sand were detected; owing to the higher porosity of such zones, they have an important influence on water filtration inside the levee. Three solutions that increase the quality and resolution of radargrams are also presented: changeable-polarisation surveys, advanced signal processing, and the DHA procedure.
46 CFR 160.037-7 - Procedure for approval.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 6 2010-10-01 2010-10-01 false Procedure for approval. 160.037-7 Section 160.037-7 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) EQUIPMENT, CONSTRUCTION, AND MATERIALS: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Hand Orange Smoke Distress Signals § 160.037-7 Procedure for...
46 CFR 153.953 - Signals during cargo transfer.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Signals during cargo transfer. 153.953 Section 153.953 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES SHIPS... Procedures § 153.953 Signals during cargo transfer. The master shall ensure that: (a) The tankship displays a...
NASA Astrophysics Data System (ADS)
Mutiibwa, D.; Irmak, S.
2011-12-01
The majority of recent climate change studies have largely focused on detection and attribution of anthropogenic forcings of greenhouse gases, aerosols, and stratospheric and tropospheric ozone. However, there is growing evidence that land cover/land use (LULC) change can significantly impact atmospheric processes, from local to regional weather and climate variability. Human activities such as the conversion of natural ecosystems to croplands and urban centers, deforestation, and afforestation affect biophysical properties of the land surface, including albedo, energy balance, the moisture-holding capacity of soil, and surface roughness. Alterations in these properties affect the heat and moisture exchanges between the land surface and the atmospheric boundary layer, and ultimately impact the climate system. The challenge is to demonstrate that LULC changes produce a signal that can be discerned from natural climate noise. In this study, we attempt to detect the signature of anthropogenic forcing of LULC change on climate at the regional scale. The signal projector investigated for detecting the signature of LULC changes on the regional climate of the High Plains of the USA is the Normalized Difference Vegetation Index (NDVI), an indicator that captures the short- and long-term geographical distribution of vegetation surfaces. The study develops an enhanced signal processing procedure to maximize the signal-to-noise ratio by introducing a pre-filtering technique of ARMA processes on the investigated climate and signal variables before applying the optimal fingerprinting technique to detect the signals of LULC changes on observed climate (temperature) in the High Plains. The intent is to filter out as much noise as possible while still retaining the essential features of the signal, by making use of the known characteristics of the noise and the anticipated signal.
The study discusses the approach of identifying and suppressing the autocorrelation in optimal fingerprint analysis by applying a linear transformation of ARMA processes to the analysis variables. Under the assumption that natural climate variability is a near-stationary process, the pre-filters are developed to generate stationary residuals. The High Plains region, although impacted by droughts over the last three decades, has had an increase in agricultural lands, both irrigated and non-irrigated. The study shows that over most of the High Plains region there is a significant influence of evaporative cooling on regional climate during the summer months. As vegetation coverage increases, coupled with increased irrigation application, the regional daytime surface energy in summer is increasingly redistributed into latent heat flux, which strengthens the effect of evaporative cooling on summer temperatures. We included the anthropogenic forcing of CO2 on regional climate with the main purpose of separating the radiative heating effect of greenhouse gases from natural climate noise, to enhance the LULC signal-to-noise ratio. The warming signal due to greenhouse gas forcing is observed to be weakest in the central part of the High Plains. The results showed that the CO2 signal in the region was weak or is being overwhelmed by the evaporative cooling effect.
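The pre-whitening idea above can be illustrated with a minimal AR(1) sketch (the abstract uses full ARMA models; the first-order case and all parameter values here are simplifying assumptions, not the study's actual filters):

```python
import numpy as np

def ar1_prewhiten(x):
    """Pre-whiten a series with an AR(1) filter: e_t = x_t - phi * x_{t-1}.

    phi is estimated from the lag-1 autocorrelation, so the residuals
    approximate stationary white noise -- the assumption required before
    applying an optimal fingerprinting analysis.
    """
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    phi = np.dot(xc[1:], xc[:-1]) / np.dot(xc, xc)
    resid = x[1:] - phi * x[:-1]
    return phi, resid

# red-noise example: AR(1) process with phi = 0.8
rng = np.random.default_rng(0)
n = 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal()
phi, resid = ar1_prewhiten(x)
```

After the transform the residual series is close to white, so standard detection statistics apply.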
Methodology for the AutoRegressive Planet Search (ARPS) Project
NASA Astrophysics Data System (ADS)
Feigelson, Eric; Caceres, Gabriel; ARPS Collaboration
2018-01-01
The detection of periodic signals of transiting exoplanets is often impeded by the presence of aperiodic photometric variations. This variability arises from the host star itself in space-based observations (typically from magnetic activity) and from observational conditions in ground-based observations. The most common statistical procedures to remove stellar variations are nonparametric, such as wavelet decomposition or Gaussian Processes regression. However, many stars display variability with autoregressive properties, wherein later flux values are correlated with previous ones. Provided the time series is evenly spaced, parametric autoregressive models can prove very effective. Here we present the methodology of the Autoregressive Planet Search (ARPS) project, which uses Autoregressive Integrated Moving Average (ARIMA) models to treat a wide variety of stochastic short-memory processes, as well as nonstationarity. Additionally, we introduce a planet-search algorithm to detect periodic transits in the time-series residuals after application of ARIMA models. Our matched-filter algorithm, the Transit Comb Filter (TCF), replaces the traditional box-fitting step. We construct a periodogram based on the TCF to concentrate the signal of these periodic spikes. Various features of the original light curves, the ARIMA fits, the TCF periodograms, and folded light curves at peaks of the TCF periodogram can then be collected to provide constraints for planet detection. These features provide input into a multivariate classifier when a training set is available. The ARPS procedure has been applied to NASA's Kepler mission observations of ~200,000 stars (Caceres, Dissertation Talk, this meeting) and will be applied in the future to other datasets.
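The effect of the differencing step on a transit can be illustrated with a minimal sketch (the box shape, depth, and indices are assumed for illustration; this is not the ARPS pipeline itself):

```python
import numpy as np

# A box-shaped transit of depth d becomes, after first differencing
# (the "I" step of ARIMA), a negative spike at ingress and a positive
# spike at egress -- the double-spike pattern the Transit Comb Filter
# is built to match in the residuals.
n, depth = 200, 1.0
flux = np.zeros(n)
flux[80:120] -= depth          # box transit spanning indices 80..119
diff = np.diff(flux)           # first difference of the light curve

ingress = int(np.argmin(diff))  # location of the -depth spike
egress = int(np.argmax(diff))   # location of the +depth spike
```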
CVD diamond substrate for microelectronics. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burden, J.; Gat, R.
1996-11-01
Chemical Vapor Deposition (CVD) of diamond films has evolved dramatically in recent years, and commercial opportunities for diamond substrates in thermal management applications are promising. The objective of this technology transfer initiative (TTI) is for Applied Science and Technology, Inc. (ASTEX) and AlliedSignal Federal Manufacturing and Technologies (FM&T) to jointly develop and document the manufacturing processes and procedures required for the fabrication of multichip module circuits using CVD diamond substrates, with the major emphasis of the project concentrating on lapping/polishing prior to metallization. ASTEX would provide diamond films for the study, and FM&T would use its experience in lapping, polishing, and substrate metallization to perform secondary processing on the parts. The primary goal of the project was to establish manufacturing processes that lower the manufacturing cost sufficiently to enable broad commercialization of the technology.
Development of a microcomputer-based magnetic heading sensor
NASA Technical Reports Server (NTRS)
Garner, H. D.
1987-01-01
This paper explores the development of a flux-gate magnetic heading reference using a single-chip microcomputer to process heading information and to present it to the pilot in appropriate form. This instrument is intended to replace the conventional combination of mechanical compass and directional gyroscope currently in use in general aviation aircraft, at appreciable savings in cost and reduction in maintenance. Design of the sensing element, the signal processing electronics, and the computer algorithms which calculate the magnetic heading of the aircraft from the magnetometer data have been integrated in such a way as to minimize hardware requirements and simplify calibration procedures. Damping and deviation errors are avoided by the inherent design of the device, and a technique for compensating for northerly-turning-error is described.
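The core heading computation from two orthogonal magnetometer axes can be sketched as follows (the axis convention and the absence of tilt compensation are assumptions of this sketch; the paper's actual algorithms also handle calibration and error compensation):

```python
import math

def magnetic_heading(bx, by):
    """Heading in degrees [0, 360) from two orthogonal flux-gate axes.

    Convention assumed here: bx along the aircraft's longitudinal axis,
    by along the lateral axis, level flight (no tilt compensation).
    """
    heading = math.degrees(math.atan2(by, bx))
    return heading % 360.0
```

A field aligned with the longitudinal axis gives 0 degrees; a purely lateral field gives 90 or 270 degrees.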
Sort entropy-based analysis of EEG during anesthesia
NASA Astrophysics Data System (ADS)
Ma, Liang; Huang, Wei-Zhi
2010-08-01
The monitoring of anesthetic depth is an essential procedure during surgical operations, and judging and controlling the depth of anesthesia has become a clinical issue that urgently needs to be resolved. In this paper, the collected EEG is processed by sort entropy. The signal response of the surface of the cerebral cortex is determined for different stages of anesthesia. The EEG is simulated and analyzed using a fast algorithm for sort entropy. The results show that the pattern of phasic changes in the EEG is detected accurately, and that sort entropy has better noise immunity than approximate entropy in detecting the EEG of anaesthetized patients. In addition, the sort entropy algorithm requires a shorter computing time, offering high efficiency and strong resistance to interference.
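Sort entropy is closely related to permutation entropy, which maps each short window of the signal to the permutation that sorts it; a minimal sketch (the order and delay values are assumed, and this is not the paper's fast algorithm):

```python
import math

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation ("sort") entropy of a 1-D sequence.

    Each length-`order` window is mapped to the permutation that sorts
    it; the Shannon entropy of the pattern distribution, normalized by
    log(order!), lies in [0, 1]. Regular signals score low, noisy
    signals score high.
    """
    counts = {}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = tuple(x[i + j * delay] for j in range(order))
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order))
```

A monotonic sequence produces a single pattern and entropy 0; irregular data approaches 1.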
NASA Astrophysics Data System (ADS)
Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun
2016-05-01
In this paper, an integral design that combines the optical system with image processing is introduced to obtain high-resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, which prevents efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function of optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high-resolution images from the final system. In order to optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. A Wiener filter algorithm is then adopted to process the simulated images, with the mean squared error (MSE) taken as the evaluation criterion. The results show that, although the optical figures of merit for the optical imaging system are not the best, it can provide image signals that are more suitable for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy has obvious advantages in simplifying structure and reducing cost while simultaneously attaining high-resolution images, and it has promising prospects for industrial application.
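The Wiener filtering step can be sketched in one dimension (the PSF and noise-to-signal ratio below are assumed values for illustration, not those of the paper's optical system):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution (1-D sketch).

    Applies G = H* / (|H|^2 + NSR), where H is the FFT of the PSF and
    NSR is the assumed noise-to-signal power ratio.
    """
    H = np.fft.fft(psf, n=len(blurred))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(np.fft.fft(blurred) * G))

# blur an impulse with a small symmetric PSF, then restore it
n = 128
impulse = np.zeros(n)
impulse[50] = 1.0
psf = np.array([0.25, 0.5, 0.25])
blurred = np.real(np.fft.ifft(np.fft.fft(impulse) * np.fft.fft(psf, n)))
restored = wiener_deconvolve(blurred, psf, nsr=0.001)
```

The regularizing NSR term keeps the filter stable where |H| is near zero, at the cost of incomplete restoration there.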
High-Throughput RNA Interference Screening: Tricks of the Trade
Nebane, N. Miranda; Coric, Tatjana; Whig, Kanupriya; McKellip, Sara; Woods, LaKeisha; Sosa, Melinda; Sheppard, Russell; Rasmussen, Lynn; Bjornsti, Mary-Ann; White, E. Lucile
2016-01-01
The process of validating an assay for high-throughput screening (HTS) involves identifying sources of variability and developing procedures that minimize the variability at each step in the protocol. The goal is to produce a robust and reproducible assay with good metrics. In all good cell-based assays, this means coefficient of variation (CV) values of less than 10% and a signal window of fivefold or greater. HTS assays are usually evaluated using the Z′ factor, which incorporates both standard deviation and signal window. A Z′ factor value of 0.5 or higher is acceptable for HTS. We used a standard HTS validation procedure in developing small interfering RNA (siRNA) screening technology at the HTS center at Southern Research. Initially, our assay performance was similar to published screens, with CV values greater than 10% and Z′ factor values of 0.51 ± 0.16 (average ± standard deviation). After optimizing the siRNA assay, we obtained CV values averaging 7.2% and a robust Z′ factor value of 0.78 ± 0.06 (average ± standard deviation). We present an overview of the problems encountered in developing this whole-genome siRNA screening program at Southern Research and how equipment optimization led to improved data quality. PMID:23616418
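The Z′ factor quoted above has a standard closed form, 1 - 3*(sd_pos + sd_neg)/|mean_pos - mean_neg|; a minimal sketch with invented control values:

```python
import numpy as np

def z_prime(pos, neg):
    """Z' factor for assay quality from positive/negative control wells.

    Z' = 1 - 3*(sd_pos + sd_neg)/|mean_pos - mean_neg|;
    values >= 0.5 are generally considered acceptable for HTS.
    """
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

# synthetic controls: means 100 and 10, common SD of 3 -> Z' near 0.8
rng = np.random.default_rng(2)
pos = rng.normal(100.0, 3.0, 1000)
neg = rng.normal(10.0, 3.0, 1000)
z = z_prime(pos, neg)
```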
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willse, Alan R.; Belcher, Ann; Preti, George
2005-04-15
Gas chromatography (GC), combined with mass spectrometry (MS) detection, is a powerful analytical technique that can be used to separate, quantify, and identify volatile compounds in complex mixtures. This paper examines the application of GC-MS in a comparative experiment to identify volatiles that differ in concentration between two groups. A complex mixture might comprise several hundred or even thousands of volatile compounds. Because their number and location in a chromatogram generally are unknown, and because components overlap in populous chromatograms, the statistical problems offer significant challenges beyond traditional two-group screening procedures. We describe a statistical procedure to compare two-dimensional GC-MS profiles between groups, which entails (1) signal processing: baseline correction and peak detection in single ion chromatograms; (2) aligning chromatograms in time; (3) normalizing differences in overall signal intensities; and (4) detecting chromatographic regions that differ between groups. Compared to existing approaches, the proposed method is robust to errors made at earlier stages of analysis, such as missed peaks or slightly misaligned chromatograms. To illustrate the method, we identify differences in GC-MS chromatograms of ether-extracted urine collected from two nearly identical inbred groups of mice, to investigate the relationship between odor and genetics of the major histocompatibility complex.
Sprenger, Richard R; Speijer, Dave; Back, Jaap Willem; De Koster, Chris G; Pannekoek, Hans; Horrevoets, Anton J G
2004-01-01
The human endothelial cell plasma membrane harbors two subdomains of similar lipid composition, caveolae and rafts, both crucially involved in various essential cellular processes like transcytosis, signal transduction and cholesterol homeostasis. Caveolin-enriched membranes, isolated by either cationic silica or buoyant density methods, were explored by comparing large series of two-dimensional (2-D) maps and subsequent identification of over 100 protein spots by matrix-assisted laser desorption/ionization (MALDI) peptide mass fingerprinting. Improved representation and identification of membrane proteins and valuable information on various post-translational modifications was achieved by the presented optimized procedures for solubilization, destaining and database searching/computing. Whereas the cationic silica purification yielded predominantly known endoplasmic reticulum residents, the cold-detergent method yielded a large number of known caveolae residents, including caveolin-1. Thus, a large part of this subproteome was established, including known (trans-)membrane, signal transduction and glycosyl phosphatidylinositol (GPI)-anchored proteins. Several predicted proteins from the human genome were isolated for the first time from biological samples, including SGRP58, SLP-2, C8ORF2, and XRP-2. These findings and various optimized procedures can serve as a reference to study the differential composition of endothelial cell caveolae and rafts, known to be involved in pathologies like cancer and cardiovascular disease.
On recursive least-squares filtering algorithms and implementations. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Hsieh, Shih-Fu
1990-01-01
In many real-time signal processing applications, fast and numerically stable algorithms for solving least-squares problems are necessary and important. In particular, under non-stationary conditions, these algorithms must be able to adapt themselves to reflect changes in the system and make appropriate adjustments to achieve optimum performance. Among existing algorithms, the QR-decomposition (QRD)-based recursive least-squares (RLS) methods have been shown to be useful and effective for adaptive signal processing. In order to increase the speed of processing and achieve a high throughput rate, many algorithms are being vectorized and/or pipelined to facilitate high degrees of parallelism. A time-recursive formulation of RLS filtering employing block QRD is considered first. Several methods, including a new non-continuous windowing scheme based on selectively rejecting contaminated data, were investigated for adaptive processing. Based on systolic triarrays, many other forms of systolic arrays are shown to be capable of implementing different algorithms. Various updating and downdating systolic algorithms and architectures for RLS filtering are examined and compared in detail, including the Householder reflector, the Gram-Schmidt procedure, and Givens rotation. A unified approach encompassing existing square-root-free algorithms is also proposed. For the sinusoidal spectrum estimation problem, a judicious method of separating the noise from the signal is of great interest. Various truncated QR methods are proposed for this purpose and compared to the truncated SVD method. Computer simulations provided for detailed comparisons show the effectiveness of these methods. This thesis deals with fundamental issues of numerical stability, computational efficiency, adaptivity, and VLSI implementation for RLS filtering problems.
In all, various new and modified algorithms and architectures are proposed and analyzed; the significance of each new method depends crucially on the specific application.
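The Givens rotation mentioned above zeroes one element of a vector while preserving its norm, which is the elementary step of QRD-RLS updating; a minimal sketch:

```python
import math

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T.

    Applying such rotations column by column annihilates subdiagonal
    elements, which is how the QRD is updated recursively in systolic
    RLS arrays.
    """
    if b == 0.0:
        return 1.0, 0.0
    r = math.hypot(a, b)
    return a / r, b / r

# rotate (4, 3) onto the first axis: r = 5, second component zeroed
c, s = givens(4.0, 3.0)
r = c * 4.0 + s * 3.0
zero = -s * 4.0 + c * 3.0
```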
Balancing Cognitive Demands: Control Adjustments in the Stop-Signal Paradigm
ERIC Educational Resources Information Center
Bissett, Patrick G.; Logan, Gordon D.
2011-01-01
Cognitive control enables flexible interaction with a dynamic environment. In 2 experiments, the authors investigated control adjustments in the stop-signal paradigm, a procedure that requires balancing speed (going) and caution (stopping) in a dual-task environment. Focusing on the slowing of go reaction times after stop signals, the authors…
A Constrained Genetic Algorithm with Adaptively Defined Fitness Function in MRS Quantification
NASA Astrophysics Data System (ADS)
Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; Graveron-Demilly, D.; van Ormondt, D.
MRS signal quantification is a rather involved procedure and has attracted the interest of the medical engineering community regarding the development of computationally efficient methodologies. Significant contributions based on Computational Intelligence tools, such as Neural Networks (NNs), have demonstrated good performance, but not without drawbacks, as already discussed by the authors. On the other hand, a preliminary application of Genetic Algorithms (GAs) has already been reported in the literature by the authors regarding the peak detection problem encountered in MRS quantification using the Voigt line shape model. This paper investigates a novel constrained genetic algorithm involving a generic and adaptively defined fitness function, which extends the simple genetic algorithm methodology to the case of noisy signals. The applicability of this new algorithm is scrutinized through experimentation on artificial MRS signals mixed with noise, regarding its signal fitting capabilities. Although extensive experiments with real-world MRS signals are still necessary, the performance shown herein illustrates the method's potential to be established as a generic MRS metabolite quantification procedure.
Wang, Shau-Chun; Huang, Chih-Min; Chiang, Shu-Min
2007-08-17
This paper reports a simple chemometric technique that alters the noise spectrum of a liquid chromatography-tandem mass spectrometry (LC-MS-MS) chromatogram between two consecutive matched filter procedures to improve the peak signal-to-noise (S/N) ratio enhancement. The technique multiplies one match-filtered LC-MS-MS chromatogram with an artificial chromatogram containing added thermal noise prior to the second matched filter. Because a matched filter cannot eliminate the low-frequency components inherent in the flicker noise of spike-like sharp peaks randomly riding on LC-MS-MS chromatograms, efficient peak S/N ratio improvement cannot be accomplished using one-step or consecutive matched filter procedures alone. In contrast, when the match-filtered LC-MS-MS chromatogram is conditioned with this multiplication prior to the second matched filter, much more efficient ratio improvement is achieved. The noise frequency spectrum of the match-filtered chromatogram, which originally contains only low-frequency components, is altered by the multiplication operation to span a broader range. When the frequency range of this modified noise spectrum shifts toward the higher-frequency regime, the second matched filter, working as a low-pass filter, is able to provide better filtering efficiency and thus higher peak S/N ratios. For real LC-MS-MS chromatograms containing random spike-like peaks, for which the peak S/N ratio improvement with two consecutive matched filters is typically less than four-fold, approximately 16-fold enhancement is accomplished when the noise frequency spectrum is modified between the two matched filters.
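A basic matched filter is simply correlation with a unit-energy peak template; a minimal sketch (the Gaussian template width, peak position, and noise level are assumed values, and the paper's multiplication step is omitted):

```python
import numpy as np

def matched_filter(chrom, template):
    """Correlate a chromatogram with a unit-energy peak template.

    For Gaussian peaks in white noise this acts as a low-pass filter
    and maximizes the peak S/N ratio at the peak location.
    """
    t = np.asarray(template, float)
    t = t / np.sqrt(np.dot(t, t))          # normalize to unit energy
    return np.correlate(chrom, t, mode="same")

# synthetic chromatogram: one Gaussian peak at index 200 plus white noise
rng = np.random.default_rng(1)
x = np.arange(400.0)
template = np.exp(-0.5 * (np.arange(-20, 21) / 5.0) ** 2)
peak_signal = 5.0 * np.exp(-0.5 * ((x - 200.0) / 5.0) ** 2)
noisy = peak_signal + rng.normal(0.0, 1.0, x.size)
filtered = matched_filter(noisy, template)
```

The filtered trace concentrates the peak energy, so the maximum stands well clear of the baseline noise.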
NASA Astrophysics Data System (ADS)
Chi, Xu; Dongming, Guo; Zhuji, Jin; Renke, Kang
2010-12-01
A signal processing method for the friction-based endpoint detection system of a chemical mechanical polishing (CMP) process is presented. The method uses wavelet threshold denoising to reduce the noise contained in the measured original signal, extracts the Kalman filter innovation from the denoised signal as the feature signal, and judges the CMP endpoint based on the behavior of the Kalman filter innovation sequence during the CMP process. Applying this signal processing method, endpoint detection experiments for the Cu CMP process were carried out. The results show that the method can correctly identify the endpoint of the Cu CMP process.
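The innovation sequence used as the feature signal can be sketched with a one-dimensional random-walk Kalman filter (the process and measurement noise variances are assumed values, and the wavelet denoising step is omitted):

```python
import numpy as np

def kalman_innovations(z, q=1e-4, r=1.0):
    """1-D random-walk Kalman filter; returns the innovation sequence.

    The innovation (measurement minus one-step prediction) stays near
    zero while the signal follows the model and spikes at an abrupt
    change -- the behavior exploited for endpoint detection.
    q and r are assumed process/measurement noise variances.
    """
    x, p = z[0], 1.0
    innov = []
    for zk in z[1:]:
        p = p + q                  # predict
        nu = zk - x                # innovation
        k = p / (p + r)            # Kalman gain
        x = x + k * nu             # update state
        p = (1.0 - k) * p          # update covariance
        innov.append(nu)
    return np.array(innov)

# a step change at sample 100 shows up as a spike in the innovations
z = np.zeros(200)
z[100:] = 5.0
innov = kalman_innovations(z)
```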
Vertical velocity variance in the mixed layer from radar wind profilers
Eng, K.; Coulter, R.L.; Brutsaert, W.
2003-01-01
Vertical velocity variance data were derived from remotely sensed mixed layer turbulence measurements at the Atmospheric Boundary Layer Experiments (ABLE) facility in Butler County, Kansas. These measurements and associated data were provided by a collection of instruments that included two 915 MHz wind profilers, two radio acoustic sounding systems, and two eddy correlation devices. The data from these devices were available through the ABLE database operated by Argonne National Laboratory. A signal processing procedure outlined by Angevine et al. was adapted and further built upon to derive the vertical velocity variance, w′², from 915 MHz wind profiler measurements in the mixed layer. The proposed procedure consisted of the application of a height-dependent signal-to-noise ratio (SNR) filter, removal of outliers more than two standard deviations from the mean of the spectral width squared, and removal of the effects of beam broadening and vertical shearing of horizontal winds. The scatter associated with w′² was mainly affected by the choice of SNR filter cutoff values. Several different sets of cutoff values were considered, and the optimal one was selected that reduced the overall scatter in w′² yet retained a sufficient number of data points to average. A similarity relationship of w′² versus height was established for the mixed layer on the basis of the available data. A strong link between the SNR and the growth/decay phases of turbulence was identified. Thus, the mid to late afternoon hours, when strong surface heating occurred, were observed to produce the highest quality signals.
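The outlier-screening step can be sketched as a plain two-standard-deviation trim (the sample values below are invented for illustration):

```python
import numpy as np

def trim_outliers(values, nsd=2.0):
    """Remove points more than `nsd` standard deviations from the mean,
    as in the spectral-width screening step described above."""
    v = np.asarray(values, float)
    mask = np.abs(v - v.mean()) <= nsd * v.std()
    return v[mask]

# one gross outlier among plausible spectral-width values
data = np.array([1.0, 1.2, 0.9, 1.1, 1.0, 15.0])
clean = trim_outliers(data)
```

In practice such a trim is applied per height gate after the SNR filter, so the variance estimate is not dominated by contaminated spectra.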
The DCU: the detector control unit for SPICA-SAFARI
NASA Astrophysics Data System (ADS)
Clénet, Antoine; Ravera, Laurent; Bertrand, Bernard; den Hartog, Roland H.; Jackson, Brian D.; van Leeuven, Bert-Joost; van Loon, Dennis; Parot, Yann; Pointecouteau, Etienne; Sournac, Anthony
2014-08-01
IRAP is developing the warm electronics, the so-called Detector Control Unit (DCU), in charge of the readout of SPICA-SAFARI's TES-type detectors. The architecture of the electronics used to read out the 3500 sensors of the 3 focal plane arrays is based on the frequency domain multiplexing (FDM) technique. In each of the 24 detection channels the data of up to 160 pixels are multiplexed in the frequency domain between 1 and 3.3 MHz. The DCU provides the AC signals to voltage-bias the detectors; it demodulates the detector data, which are read out at the cold stage by a SQUID; and it computes a feedback signal for the SQUID to linearize the detection chain in order to optimize its dynamic range. The feedback is computed with a specific technique, so-called baseband feedback (BBFB), which ensures that the loop is stable even with long propagation and processing delays (i.e., several µs) and with fast signals (i.e., frequency carriers at 3.3 MHz). This digital signal processing is complex and has to be done at the same time for the 3500 pixels; it thus requires an optimization of the power consumption. We took advantage of the relatively narrow science signal bandwidth (i.e., 20-40 Hz) to decouple the signal sampling frequency (10 MHz) from the data processing rate. Thanks to this method we managed to reduce the total number of operations per second, and thus the power consumption of the digital processing circuit, by a factor of 10. Moreover, we used time multiplexing techniques to share the resources of the circuit (e.g., a single BBFB module processes 32 pixels). The current version of the firmware is under validation in a Xilinx Virtex 5 FPGA; the final version will be developed in a space-qualified digital ASIC. Beyond the firmware architecture, the optimization of the instrument concerns the characterization routines and the definition of the optimal parameters.
Indeed, the operation of the detection and readout chains requires properly defining more than 17,500 parameters (about 5 parameters per pixel), so it is mandatory to work out an automatic procedure to set these optimal values. We defined a fast algorithm that characterizes the phase correction to be applied by the BBFB firmware and the pixel resonance frequencies. We also defined a technique to set the AC-carrier initial phases in such a way that the amplitude of their sum is minimized, for better use of the DAC dynamic range.
Implementation of Wi-Fi Signal Sampling on an Android Smartphone for Indoor Positioning Systems.
Liu, Hung-Huan; Liu, Chun
2017-12-21
Collecting and maintaining radio fingerprints for wireless indoor positioning systems involves considerable time and labor. We previously proposed the quick radio fingerprint collection (QRFC) algorithm, which employs the built-in accelerometer of Android smartphones to implement step detection in order to assist in collecting radio fingerprints. In the present study, we divided the algorithm into moving sampling (MS) and stepped MS (SMS), and we describe the implementation and comparison of both algorithms. Technical details and common errors concerning the use of Android smartphones to collect Wi-Fi radio beacons were surveyed and discussed. The results of signal sampling experiments performed in a hallway measuring 54 m in length showed that, in terms of the amount of time required to complete collection of access point (AP) signals, static sampling (SS; a traditional procedure for collecting Wi-Fi signals) took at least 2 h, whereas MS and SMS took approximately 150 and 300 s, respectively. Notably, AP signals obtained through MS and SMS were comparable to those obtained through SS in terms of the distribution of received signal strength indicator (RSSI) values and positioning accuracy. Therefore, MS and SMS are recommended instead of SS as signal sampling procedures for indoor positioning algorithms.
Implementation of Wi-Fi Signal Sampling on an Android Smartphone for Indoor Positioning Systems
Liu, Chun
2017-01-01
Collecting and maintaining radio fingerprints for wireless indoor positioning systems involves considerable time and labor. We previously proposed the quick radio fingerprint collection (QRFC) algorithm, which employs the built-in accelerometer of Android smartphones to implement step detection in order to assist in collecting radio fingerprints. In the present study, we divided the algorithm into moving sampling (MS) and stepped MS (SMS), and we describe the implementation and comparison of both algorithms. Technical details and common errors concerning the use of Android smartphones to collect Wi-Fi radio beacons were surveyed and discussed. The results of signal sampling experiments performed in a hallway measuring 54 m in length showed that, in terms of the amount of time required to complete collection of access point (AP) signals, static sampling (SS; a traditional procedure for collecting Wi-Fi signals) took at least 2 h, whereas MS and SMS took approximately 150 and 300 s, respectively. Notably, AP signals obtained through MS and SMS were comparable to those obtained through SS in terms of the distribution of received signal strength indicator (RSSI) values and positioning accuracy. Therefore, MS and SMS are recommended instead of SS as signal sampling procedures for indoor positioning algorithms. PMID:29267234
Motion estimation using point cluster method and Kalman filter.
Senesh, M; Wolf, A
2009-05-01
The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact that strongly influences estimates of bone position and orientation and of joint kinematics. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) for the estimation of rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body's long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures--PCT, Kalman filter followed by PCT, and low pass filter followed by PCT--enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted from adding the Kalman filter; however, a closer look at the signal revealed that the angle estimated with the PCT method alone was very noisy, with fluctuations, while the angle estimated with the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the angle estimated with the PCT method alone are more dispersed than those obtained with the Kalman filter followed by the PCT method. Adding a Kalman filter to the PCT method in the estimation of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter.
Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal instantaneous frequencies.
A Heckman selection model for the safety analysis of signalized intersections
Wong, S. C.; Zhu, Feng; Pei, Xin; Huang, Helai; Liu, Youjun
2017-01-01
Purpose: The objective of this paper is to provide a new method for estimating crash rate and severity simultaneously. Methods: This study explores a Heckman selection model of crash rate and severity at different levels simultaneously, using a two-step procedure. The first step uses a probit regression model to determine the sample selection process, and the second step develops a multiple regression model to simultaneously evaluate the crash rate and severity for slight injury and killed/seriously injured (KSI) crashes, respectively. The model uses 555 observations from 262 signalized intersections in the Hong Kong metropolitan area, integrated with information on traffic flow, geometric road design, road environment, traffic control, and any crashes that occurred during two years. Results: The results of the proposed two-step Heckman selection model illustrate the necessity of estimating different crash rates for different crash severity levels. Conclusions: A comparison with existing approaches suggests that the Heckman selection model offers an efficient and convenient alternative method for evaluating the safety performance of signalized intersections. PMID:28732050
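The correction term that links the two steps of the Heckman procedure is the inverse Mills ratio, computed from the first-stage probit index and carried into the second-stage regression; a minimal sketch of that term alone (not a full two-step fit):

```python
import math

def inverse_mills(w):
    """Inverse Mills ratio lambda(w) = phi(w) / Phi(w).

    phi/Phi are the standard normal density and CDF evaluated at the
    first-stage probit index w; including lambda as a regressor in the
    second stage corrects for the sample selection bias.
    """
    phi = math.exp(-0.5 * w * w) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(w / math.sqrt(2.0)))
    return phi / Phi

# at w = 0: phi(0)/Phi(0) = 0.3989.../0.5, roughly 0.798
lam0 = inverse_mills(0.0)
```

Observations with a low selection probability (small w) receive a large correction, which shrinks toward zero as selection becomes near-certain.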
Morales, Rafael; Rincón, Fernando; Gazzano, Julio Dondo; López, Juan Carlos
2014-01-01
Time derivative estimation of signals plays a very important role in several fields, such as signal processing and control engineering, just to name a few of them. For that purpose, a non-asymptotic algebraic procedure for the approximate estimation of the system states is used in this work. The method is based on results from differential algebra and furnishes some general formulae for the time derivatives of a measurable signal in which two algebraic derivative estimators run simultaneously, but in an overlapping fashion. The algebraic derivative algorithm presented in this paper is computed online and in real-time, offering high robustness properties with regard to corrupting noises, versatility and ease of implementation. Besides, in this work, we introduce a novel architecture to accelerate this algebraic derivative estimator using reconfigurable logic. The core of the algorithm is implemented in an FPGA, improving the speed of the system and achieving real-time performance. Finally, this work proposes a low-cost platform for the integration of hardware in the loop in MATLAB. PMID:24859033
Muscle activity characterization by laser Doppler Myography
NASA Astrophysics Data System (ADS)
Scalise, Lorenzo; Casaccia, Sara; Marchionni, Paolo; Ercoli, Ilaria; Primo Tomasini, Enrico
2013-09-01
Electromyography (EMG) is the gold-standard technique used for the evaluation of muscle activity. The technique is used in biomechanics, sport medicine, neurology and rehabilitation therapy, and it records the electrical activity produced by skeletal muscles. Among the quantities measured with EMG, important ones are signal amplitude and duration of muscle contraction, muscle fatigue, and maximum muscle power. Recently, a new measurement procedure, named Laser Doppler Myography (LDMi), has been proposed for the non-contact assessment of muscle activity by measuring the vibro-mechanical behaviour of the muscle. The aim of this study is to present the LDMi technique and to evaluate its capacity to measure some characteristic features of muscle activity. In this paper LDMi is compared with standard surface EMG (sEMG), which requires the application of sensors on the skin of each patient. sEMG and LDMi signals have been simultaneously acquired and processed to test correlations. Three parameters have been analyzed to compare the techniques: muscle activation timing, signal amplitude, and muscle fatigue. LDMi appears to be a reliable and promising measurement technique, allowing measurements without contact with the patient's skin.
The upgrade of the Thomson scattering system for measurement on the C-2/C-2U devices.
Zhai, K; Schindler, T; Kinley, J; Deng, B; Thompson, M C
2016-11-01
The C-2/C-2U Thomson scattering system has been substantially upgraded during the latter phase of C-2/C-2U program. A Rayleigh channel has been added to each of the three polychromators of the C-2/C-2U Thomson scattering system. Onsite spectral calibration has been applied to avoid the issue of different channel responses at different spots on the photomultiplier tube surface. With the added Rayleigh channel, the absolute intensity response of the system is calibrated with Rayleigh scattering in argon gas from 0.1 to 4 Torr, where the Rayleigh scattering signal is comparable to the Thomson scattering signal at electron densities from 1 × 10^13 to 4 × 10^14 cm^-3. A new signal processing algorithm, using a maximum likelihood method and including detailed analysis of different noise contributions within the system, has been developed to obtain electron temperature and density profiles. The system setup, spectral and intensity calibration procedure and its outcome, data analysis, and the results of electron temperature/density profile measurements will be presented.
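The Rayleigh intensity calibration reduces to a simple least-squares fit: at fixed temperature the Rayleigh signal is proportional to gas pressure, so the channel's absolute response is the slope of signal versus pressure over the 0.1-4 Torr scan. The sketch below illustrates that idea with assumed, purely illustrative numbers (the `true_response` value is not the instrument's).

```python
# Minimal sketch of the Rayleigh intensity calibration idea (illustrative
# numbers, not the C-2/C-2U system's data): the Rayleigh signal is linear in
# gas density, so a least-squares slope over the pressure scan gives the
# absolute intensity response of a polychromator channel.

def fit_slope_through_origin(pressures, signals):
    """Least-squares slope for a zero-intercept linear model s = k * p."""
    num = sum(p * s for p, s in zip(pressures, signals))
    den = sum(p * p for p in pressures)
    return num / den

pressures = [0.1, 0.5, 1.0, 2.0, 4.0]          # Torr
true_response = 250.0                           # counts per Torr (assumed)
signals = [true_response * p for p in pressures]
k = fit_slope_through_origin(pressures, signals)
```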
Validating Coherence Measurements Using Aligned and Unaligned Coherence Functions
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2006-01-01
This paper describes a novel approach based on coherence functions and statistical theory for sensor validation in a harsh environment. By using aligned and unaligned coherence functions together with statistical theory, one can test for sensor degradation, total sensor failure, or changes in the signal. The diagnostic approach and the data-processing methodology discussed provide a single number that conveys this information. This number, calculated with standard statistical procedures for comparing the means of two distributions, is compared with results obtained using Yuen's robust statistical method for constructing confidence intervals. Examination of experimental data from Kulite pressure transducers mounted in a Pratt & Whitney PW4098 combustor, using spectrum-analysis methods on aligned and unaligned time histories, has verified the effectiveness of the proposed method. All the procedures produce good results, demonstrating the robustness of the technique.
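The core quantity here, magnitude-squared coherence averaged over segments, can be sketched briefly. This is a generic Welch-style estimator, not the paper's exact pipeline: two healthy sensors observing the same source give coherence near 1 when their records are aligned, while independent (or misaligned, uncorrelated) records sit near the 1/K estimation floor, and that contrast is what a single validation statistic can summarize.

```python
import numpy as np

# Magnitude-squared coherence estimated by averaging cross- and auto-spectra
# over K windowed segments (generic sketch, not the paper's implementation).

def msc(x, y, nseg, nfft):
    Sxx = Syy = Sxy = 0
    for k in range(nseg):
        xs = np.fft.rfft(x[k * nfft:(k + 1) * nfft] * np.hanning(nfft))
        ys = np.fft.rfft(y[k * nfft:(k + 1) * nfft] * np.hanning(nfft))
        Sxx = Sxx + np.abs(xs) ** 2
        Syy = Syy + np.abs(ys) ** 2
        Sxy = Sxy + xs * np.conj(ys)
    return np.abs(Sxy) ** 2 / (Sxx * Syy)

rng = np.random.default_rng(1)
common = rng.standard_normal(64 * 32)
gamma_aligned = msc(common, common, 32, 64)                  # same signal
gamma_unaligned = msc(common, rng.standard_normal(64 * 32), 32, 64)
```

A degraded or failed sensor pulls the aligned coherence down toward the unaligned floor, which is the signature the statistical tests look for.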
Sonification of optical coherence tomography data and images
Ahmad, Adeel; Adie, Steven G.; Wang, Morgan; Boppart, Stephen A.
2010-01-01
Sonification is the process of representing data as non-speech audio signals. In this manuscript, we describe the auditory presentation of OCT data and images. OCT acquisition rates frequently exceed our ability to visually analyze image-based data, and multi-sensory input may therefore facilitate rapid interpretation. This conversion will be especially valuable in time-sensitive surgical or diagnostic procedures. In these scenarios, auditory feedback can complement visual data without requiring the surgeon to constantly monitor the screen, or provide additional feedback in non-imaging procedures such as guided needle biopsies which use only axial-scan data. In this paper we present techniques to translate OCT data and images into sound based on the spatial and spatial frequency properties of the OCT data. Results obtained from parameter-mapped sonification of human adipose and tumor tissues are presented, indicating that audio feedback of OCT data may be useful for the interpretation of OCT images. PMID:20588846
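Parameter-mapped sonification of a one-dimensional axial scan can be illustrated in a few lines. The mapping below (intensity to pitch over an assumed two-octave range) is a hypothetical example of the general technique, not the authors' specific mapping.

```python
import math

# Toy parameter-mapped sonification: each value of a 1-D scan becomes a short
# tone whose pitch rises with signal intensity, so differences in scattering
# become audible without looking at a screen. Sample rate, tone length and
# frequency range are illustrative choices.

def sonify(scan, fs=8000, tone_ms=50, f_lo=220.0, f_hi=880.0):
    lo, hi = min(scan), max(scan)
    span = (hi - lo) or 1.0                      # avoid divide-by-zero
    n = int(fs * tone_ms / 1000)                 # samples per tone
    samples = []
    for v in scan:
        freq = f_lo + (f_hi - f_lo) * (v - lo) / span
        samples.extend(math.sin(2 * math.pi * freq * k / fs) for k in range(n))
    return samples

audio = sonify([0.1, 0.8, 0.3, 1.0])             # four A-scan values -> tones
```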
Chemometric study of Maya Blue from the voltammetry of microparticles approach.
Doménech, Antonio; Doménech-Carbó, María Teresa; de Agredos Pascual, María Luisa Vazquez
2007-04-01
The use of the voltammetry of microparticles at paraffin-impregnated graphite electrodes allows for the characterization of different types of Maya Blue (MB) used in wall paintings from different archaeological sites of Campeche and Yucatán (Mexico). Using voltammetric signals for electron-transfer processes involving palygorskite-associated indigo and quinone functionalities generated by scratching the graphite surface, voltammograms provide information on the composition and texture of MB samples. Application of hierarchical cluster analysis and other chemometric methods allows us to characterize samples from different archaeological sites and to distinguish between samples originating from different chronological periods. Comparison between microscopic, spectroscopic, and electrochemical examination of genuine MB samples and synthetic specimens indicated that the preparation procedure of the pigment evolved over time via successive steps anticipating modern synthetic procedures, namely, hybrid organic-inorganic synthesis, temperature control of chemical reactivity, and template-like synthesis.
Indirect measures as a signal for evaluative change.
Perugini, Marco; Richetin, Juliette; Zogmaister, Cristina
2014-01-01
Implicit and explicit attitudes can be changed by evaluative learning procedures. In this contribution we investigated an asymmetric effect of the order of administration of indirect and direct measures on the detection of evaluative change: a change in explicit attitudes is more likely to be detected if they are measured after implicit attitudes, whereas implicit attitudes change regardless of the order. This effect was demonstrated in two studies (n=270; n=138) using the self-referencing task, whereas it was not found in a third study (n=151) that used a supraliminal sequential evaluative conditioning paradigm. In all studies, evaluative change was present only for contingency-aware participants. We discuss a potential explanation of the order-of-measures effect: in some circumstances, an indirect measure is not only a measure but also a signal that can be detected through self-perception processes and further elaborated at the propositional level.
Practical considerations of image analysis and quantification of signal transduction IHC staining.
Grunkin, Michael; Raundahl, Jakob; Foged, Niels T
2011-01-01
The dramatic increase in computer processing power, combined with the availability of high-quality digital cameras during the last 10 years, has laid the groundwork for quantitative microscopy based on digital image analysis. With the present introduction of robust scanners for whole-slide imaging in both research and routine practice, the benefits of automation and objectivity in the analysis of tissue sections will be even more obvious. For in situ studies of signal transduction, the combination of tissue microarrays, immunohistochemistry, digital imaging, and quantitative image analysis will be central operations. However, immunohistochemistry is a multistep procedure with many technical pitfalls that lead to intra- and interlaboratory variability in its outcome. The resulting variations in staining intensity and disruption of original morphology are an extra challenge for image analysis software, which therefore should preferably be dedicated to the detection and quantification of histomorphometrical end points.
Multistrip Western blotting: a tool for comparative quantitative analysis of multiple proteins.
Aksamitiene, Edita; Hoek, Jan B; Kiyatkin, Anatoly
2015-01-01
The qualitative and quantitative measurement of protein abundance and modification states is essential to understanding protein functions in diverse cellular processes. Typical Western blotting, though sensitive, is prone to substantial errors and is not readily adapted to high-throughput technologies. Multistrip Western blotting is a modified immunoblotting procedure based on the simultaneous electrophoretic transfer of proteins from multiple strips of polyacrylamide gels to a single membrane sheet. In comparison with the conventional technique, Multistrip Western blotting increases the data output per blotting cycle up to tenfold; allows concurrent measurement of up to nine different total and/or posttranslationally modified proteins from the same sample loading; and substantially improves data accuracy by reducing immunoblotting-derived signal errors. This approach enables statistically reliable comparison of different or repeated sets of data, and is therefore advantageous in biomedical diagnostics, systems biology, and cell signaling research.
The muon tomography Diaphane project : recent upgrades and measurements
NASA Astrophysics Data System (ADS)
Jourde, Kevin; Gibert, Dominique; Marteau, Jacques; de Bremond d'Ars, Jean; Gardien, Serge; Girerd, Claude; Ianigro, Jean-Christophe; Carbone, Daniele
2014-05-01
Muon tomography measures the flux of cosmic muons crossing geological bodies to determine their density. Large density heterogeneities were detected on La Soufrière de Guadeloupe, revealing its very active phreatic system. These measurements were made possible by developments in electronics and signal processing: the telescopes used to perform them are exposed to noise fluxes of high intensity relative to the tiny flux of interest. A high-precision clock made it possible to identify upward-going particles entering from the rear of the telescope, which previously mixed with the volcano signal. The energy deposited by particles inside the telescope also shows that particles other than muons contribute to the noise. We present data acquired on La Soufrière, Mount Etna in Italy, and in the Mont Terri tunnel in Switzerland. Biases produced in muon density radiographies are quantified and correction procedures are applied.
On the transmission of partial information: inferences from movement-related brain potentials
NASA Technical Reports Server (NTRS)
Osman, A.; Bashore, T. R.; Coles, M. G.; Donchin, E.; Meyer, D. E.
1992-01-01
Results are reported from a new paradigm that uses movement-related brain potentials to detect response preparation based on partial information. The paradigm uses a hybrid choice-reaction go/nogo procedure in which decisions about response hand and whether to respond are based on separate stimulus attributes. A lateral asymmetry in the movement-related brain potential was found on nogo trials without overt movement. The direction of this asymmetry depended primarily on the signaled response hand rather than on properties of the stimulus. When the asymmetry first appeared was influenced by the time required to select the signaled hand, and when it began to differ on go and nogo trials was influenced by the time to decide whether to respond. These findings indicate that both stimulus attributes were processed in parallel and that the asymmetry reflected preparation of the response hand that began before the go/nogo decision was completed.
NASA Astrophysics Data System (ADS)
Saadat, S. A.; Safari, A.; Needell, D.
2016-06-01
The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth, especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered in which the unknown parameters are sensitive to data perturbations, so an appropriate regularization method must be applied to find a stabilized solution. The SOMP algorithm regularizes the norm of the solution vector while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies the optimal sparsity level of the problem is determined. The results show that the SOMP algorithm finds the stabilized solution for the ill-posed problem at the optimal sparsity level, improving upon existing sparsity-based approaches.
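The greedy core that SOMP builds on, plain orthogonal matching pursuit, can be sketched compactly. The stabilization and stopping logic of the paper's SOMP are not reproduced here; this is the textbook loop: pick the dictionary column most correlated with the residual, then re-fit all selected coefficients by least squares.

```python
import numpy as np

# Plain orthogonal matching pursuit (generic sketch; SOMP adds regularization
# and a convergence/sparsity-level criterion on top of this loop).

def omp(A, y, sparsity):
    residual, support = y.copy(), []
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # re-fit, update residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
A /= np.linalg.norm(A, axis=0)                       # unit-norm atoms
x_true = np.zeros(80)
x_true[[5, 17, 63]] = [1.5, -2.0, 0.7]               # 3-sparse ground truth
x_hat = omp(A, A @ x_true, sparsity=3)
```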
Fluorescence lifetime assays: current advances and applications in drug discovery.
Pritz, Stephan; Doering, Klaus; Woelcke, Julian; Hassiepen, Ulrich
2011-06-01
Fluorescence lifetime assays complement the portfolio of established assay formats available in drug discovery, particularly with the recent advances in microplate readers and the commercial availability of novel fluorescent labels. Fluorescence lifetime assists in lowering complexity of compound screening assays, affording a modular, toolbox-like approach to assay development and yielding robust homogeneous assays. To date, materials and procedures have been reported for biochemical assays on proteases, as well as on protein kinases and phosphatases. This article gives an overview of two assay families, distinguished by the origin of the fluorescence signal modulation. The pharmaceutical industry demands techniques with a robust, integrated compound profiling process and short turnaround times. Fluorescence lifetime assays have already helped the drug discovery field, in this sense, by enhancing productivity during the hit-to-lead and lead optimization phases. Future work will focus on covering other biochemical molecular modifications by investigating the detailed photo-physical mechanisms underlying the fluorescence signal.
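The signal that these assays read out, a fluorescence lifetime, can be illustrated with a deliberately simplified extraction: for a background-free mono-exponential decay I(t) = A exp(-t/tau), a linear regression of ln I against t recovers tau as -1/slope. Real plate readers fit instrument-response-convolved decays; the numbers below are assumed for illustration.

```python
import math

# Toy lifetime extraction: log-linear least squares on a noiseless
# mono-exponential decay (real assays fit convolved, noisy decays).

def fit_lifetime(times, counts):
    logs = [math.log(c) for c in counts]
    n = len(times)
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    return -1.0 / slope                              # tau = -1 / slope

tau_true = 3.5e-9                                    # 3.5 ns (assumed probe)
times = [i * 0.25e-9 for i in range(40)]             # 0.25 ns sampling
counts = [1000.0 * math.exp(-t / tau_true) for t in times]
tau_hat = fit_lifetime(times, counts)
```

A compound that perturbs the labeled enzyme shifts tau, which is why the readout is robust against intensity artifacts that plague amplitude-based formats.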
User expectations for multibeam echo sounders backscatter strength data-looking back into the future
NASA Astrophysics Data System (ADS)
Lucieer, Vanessa; Roche, Marc; Degrendele, Koen; Malik, Mashkoor; Dolan, Margaret; Lamarche, Geoffroy
2018-06-01
With the ability of multibeam echo sounders (MBES) to measure backscatter strength (BS) as a function of the true angle of insonification across the seafloor came a new recognition of the potential of backscatter measurements to remotely characterize the properties of the seafloor. Advances in transducer design, digital electronics, signal processing capabilities, navigation, and graphic display devices have improved the resolution, and particularly the dynamic range, available to sonar and processing software manufacturers. Alongside these improvements, the expectations of what the data can deliver have also grown. In this paper, we identify these user expectations and explore how MBES backscatter is utilized by different communities involved in marine seabed research at present, and the aspirations that these communities have for the data in the future. The results presented here are based on a user survey conducted by the GeoHab (Marine Geological and Biological Habitat Mapping) association. This paper summarises the different processing procedures employed to extract useful information from MBES backscatter data and the various intentions for which the user community collect the data. We show how a range of backscatter output products are generated from the different processing procedures, and how these results are taken up by different scientific disciplines, and also identify common constraints in handling MBES BS data. Finally, we outline our expectations for the future of this unique and important data source for seafloor mapping and characterisation.
NASA Astrophysics Data System (ADS)
Various papers on AE from composite materials are presented. Among the individual topics addressed are: acoustic analysis of transverse lamina cracking in CFRP laminates under tensile loading, characterization of fiber failure in graphite-epoxy (G/E) composites, application of AE in the study of microfissure damage to composites used in the aeronautic and space industries, interfacial shear properties and AE behavior of model aluminum and titanium matrix composites, amplitude distribution modelling and ultimate strength prediction of ASTM D-3039 G/E tensile specimens, an AE prefailure warning system for composite structural tests, characterization of failure mechanisms in G/E tensile test specimens using AE data, development of a standard testing procedure to yield an AE vs. strain curve, and a benchmark exercise on AE measurements from carbon fiber-epoxy composites. Also discussed are: interpretation of optically detected AE signals, acoustic emission monitoring of the fracture process of SiC/Al composites under cyclic loading, application of pattern recognition techniques to acousto-ultrasonic testing of Kevlar composite panels, AE for high-temperature monitoring of the processing of carbon/carbon composites, monitoring the resistance welding of thermoplastic composites through AE, plate wave AE in composite materials, determination of the elastic properties of composite materials using simulated AE signals, AE source location in thin plates using cross-correlation, and propagation of flexural mode AE signals in Gr/Ep composite plates.
Querol, Jorge; Tarongí, José Miguel; Forte, Giuseppe; Gómez, José Javier; Camps, Adriano
2017-05-10
MERITXELL is a ground-based multisensor instrument that includes a multiband dual-polarization radiometer, a GNSS reflectometer, and several optical sensors. Its main goals are twofold: to test data fusion techniques, and to develop Radio-Frequency Interference (RFI) detection, localization and mitigation techniques. The former is necessary to retrieve complementary data useful to develop geophysical models with improved accuracy, whereas the latter aims at solving one of the most important problems of microwave radiometry. This paper describes the hardware design, the instrument control architecture, the calibration of the radiometer, and several captures of RFI signals taken with MERITXELL in urban environment. The multiband radiometer has a dual linear polarization total-power radiometer topology, and it covers the L-, S-, C-, X-, K-, Ka-, and W-band. Its back-end stage is based on a spectrum analyzer structure which allows to perform real-time signal processing, while the rest of the sensors are controlled by a host computer where the off-line processing takes place. The calibration of the radiometer is performed using the hot-cold load procedure, together with the tipping curves technique in the case of the five upper frequency bands. Finally, some captures of RFI signals are shown for most of the radiometric bands under analysis, which evidence the problem of RFI in microwave radiometry, and the limitations they impose in external calibration.
Virtualized MME Design for IoT Support in 5G Systems.
Andres-Maldonado, Pilar; Ameigeiras, Pablo; Prados-Garzon, Jonathan; Ramos-Munoz, Juan Jose; Lopez-Soler, Juan Manuel
2016-08-22
Cellular systems are recently being considered an option to provide support to the Internet of Things (IoT). To enable this support, the 3rd Generation Partnership Project (3GPP) has introduced new procedures specifically targeted for cellular IoT. With one of these procedures, the transmissions of small and infrequent data packets from/to the devices are encapsulated in signaling messages and sent through the control plane. However, these transmissions from/to a massive number of devices may imply a major increase of the processing load on the control plane entities of the network and in particular on the Mobility Management Entity (MME). In this paper, we propose two designs of an MME based on Network Function Virtualization (NFV) that aim at facilitating the IoT support. The first proposed design partially separates the processing resources dedicated to each traffic class. The second design includes traffic shaping to control the traffic of each class. We consider three classes: Mobile Broadband (MBB), low latency Machine to Machine communications (lM2M) and delay-tolerant M2M communications. Our proposals enable reducing the processing resources and, therefore, the cost. Additionally, results show that the proposed designs lessen the impact between classes, so they ease the compliance of the delay requirements of MBB and lM2M communications.
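The per-class traffic shaping in the second design can be illustrated with a generic token-bucket policer. This is a standard sketch of the mechanism, not a 3GPP-specified algorithm, and the rate and burst values are illustrative.

```python
# Minimal token-bucket shaper of the kind a per-class design could use to
# police each traffic class (MBB, lM2M, delay-tolerant M2M); the rate and
# burst parameters here are illustrative, not 3GPP values.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst       # tokens/s, bucket depth
        self.tokens, self.t = burst, 0.0          # start with a full bucket

    def allow(self, now):
        """Admit one signaling message at time `now` if a token is available."""
        self.tokens = min(self.burst, self.tokens + (now - self.t) * self.rate)
        self.t = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2.0, burst=2)           # 2 msg/s, burst of 2
admitted = [bucket.allow(t * 0.1) for t in range(10)]   # 10 arrivals in 0.9 s
```

Shaping the delay-tolerant M2M class this way bounds the signaling load it can impose on the MME's processing instances, protecting the latency of the MBB and lM2M classes.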
Perceptual-center modeling is affected by including acoustic rate-of-change modulations.
Harsin, C A
1997-02-01
This study investigated the acoustic correlates of perceptual centers (p-centers) in CV and VC syllables and developed an acoustic p-center model. In Part 1, listeners located syllables' p-centers by a method-of-adjustment procedure. The CV syllables contained the consonants /s/,/r/,/n/,/t/,/d/,/k/, and /g/; the VCs, the consonants /s/,/r/, and /n/. The vowel in all syllables was /a/. The results of this experiment replicated and extended previous findings regarding the effects of phonetic variation on p-centers. In Part 2, a digital signal processing procedure was used to acoustically model p-center perception. Each stimulus was passed through a six-band digital filter, and the outputs were processed to derive low-frequency modulation components. These components were weighted according to a perceived modulation magnitude function and recombined to create six psychoacoustic envelopes containing modulation energies from 3 to 47 Hz. In this analysis, p-centers were found to be highly correlated with the time-weighted function of the rate-of-change in the psychoacoustic envelopes, multiplied by the psychoacoustic envelope magnitude increment. The results were interpreted as suggesting (1) the probable role of low-frequency energy modulations in p-center perception, and (2) the presence of perceptual processes that integrate multiple articulatory events into a single syllabic event.
Optimal Signal Processing of Frequency-Stepped CW Radar Data
NASA Technical Reports Server (NTRS)
Ybarra, Gary A.; Wu, Shawkang M.; Bilbro, Griff L.; Ardalan, Sasan H.; Hearn, Chase P.; Neece, Robert T.
1995-01-01
An optimal signal processing algorithm is derived for estimating the time delay and amplitude of each scatterer reflection using a frequency-stepped CW system. The channel is assumed to be composed of abrupt changes in the reflection coefficient profile. The optimization technique is intended to maximize the target range resolution achievable from any set of frequency-stepped CW radar measurements made in such an environment. The algorithm is composed of an iterative two-step procedure. First, the amplitudes of the echoes are optimized by solving an overdetermined least squares set of equations. Then, a nonlinear objective function is scanned in an organized fashion to find its global minimum. The result is a set of echo strengths and time delay estimates. Although this paper addresses the specific problem of resolving the time delay between the first two echoes, the derivation is general in the number of echoes. Performance of the optimization approach is illustrated using measured data obtained from an HP-8510 network analyzer. It is demonstrated that the optimization approach offers a significant resolution enhancement over the standard processing approach that employs an IFFT. Degradation in the performance of the algorithm due to suboptimal model order selection and the effects of additive white Gaussian noise are addressed.
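The two-step structure can be sketched for the single-echo case (the paper treats an arbitrary number of echoes). For a frequency-stepped CW system the measured response is y(f_k) = a exp(-j 2π f_k τ); for each candidate delay the best amplitude has a closed-form least-squares solution, so a one-dimensional scan of the residual over a delay grid locates the global minimum. The frequency plan and echo parameters below are assumed for illustration.

```python
import numpy as np

# Single-echo version of the two-step idea: closed-form LS amplitude for each
# candidate delay, then a grid scan of the residual for the global minimum.

def estimate_echo(freqs, y, tau_grid):
    best = None
    for tau in tau_grid:
        h = np.exp(-2j * np.pi * freqs * tau)     # model steering vector
        a = np.vdot(h, y) / np.vdot(h, h)         # LS amplitude for this tau
        res = np.linalg.norm(y - a * h)
        if best is None or res < best[0]:
            best = (res, tau, a)
    return best[1], best[2]

freqs = np.arange(1e9, 2e9, 10e6)                 # 100 steps, 10 MHz apart
tau_true, a_true = 8.3e-9, 0.7                    # assumed echo parameters
y = a_true * np.exp(-2j * np.pi * freqs * tau_true)
tau_hat, a_hat = estimate_echo(freqs, y, np.linspace(0, 20e-9, 2001))
```

With several echoes the amplitude step becomes an overdetermined least-squares solve over all steering vectors at once, and the scan becomes multidimensional, which is where the organized search of the nonlinear objective matters.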
Rabasa, Cristina; Delgado-Morales, Raúl; Muñoz-Abellán, Cristina; Nadal, Roser; Armario, Antonio
2011-02-02
Repeated exposure to the same stressor very often results in a reduction of some prototypical stress responses, namely those related to the hypothalamic-pituitary-adrenal (HPA) and sympatho-medullo-adrenal (SMA) axes. This reduced response to repeated exposure to the same (homotypic) stressor (adaptation) is usually considered a habituation-like process, and therefore a non-associative type of learning. However, there is some evidence that contextual cues, and therefore associative processes, could contribute to adaptation. In the present study, two experiments in adult male rats demonstrated that repeated daily exposure to restraint (REST) or immobilization on boards (IMO) reduced the HPA (plasma levels of ACTH and corticosterone) and glucose responses to the homotypic stressor, and these reduced responses remained intact when all putative cues associated with the procedure (experimenter, way of transporting to the stress room, stress boxes, stress room and, in the case of REST, colour of the restrainer) were modified on the next day. The present results therefore do not favour the view that adaptation after repeated exposure to a stressor involves associative processes related to signals predicting the imminence of the stressor, but more studies are needed on this issue.
A novel coupling of noise reduction algorithms for particle flow simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimoń, M.J., E-mail: malgorzata.zimon@stfc.ac.uk; James Weir Fluids Lab, Mechanical and Aerospace Engineering Department, The University of Strathclyde, Glasgow G1 1XJ; Reese, J.M.
2016-09-15
Proper orthogonal decomposition (POD) and its time-window extension have been shown to greatly improve the effectiveness of recovering smooth ensemble solutions from noisy particle data. However, to successfully de-noise any molecular system, a large number of measurements still need to be provided. In order to process time-dependent fields more efficiently, we have combined POD with a well-established signal processing technique, wavelet-based thresholding. In this novel hybrid procedure, referred to as WAVinPOD, the wavelet filtering is applied within the POD domain. The algorithm exhibits promising results when applied to both synthetically generated signals and particle data. In this work, the simulations compare the performance of the new approach with standard POD or wavelet analysis in extracting smooth profiles from noisy velocity and density fields. Numerical examples include molecular dynamics and dissipative particle dynamics simulations of unsteady force- and shear-driven liquid flows, as well as a phase-separation phenomenon. Simulation results confirm that WAVinPOD preserves the dimensionality reduction obtained using POD, while improving its filtering properties through the sparse representation of the data in a wavelet basis. This paper shows that WAVinPOD outperforms the other estimators for both synthetically generated signals and particle-based measurements, achieving a higher signal-to-noise ratio from a smaller number of samples. The new filtering methodology offers significant computational savings, particularly for multi-scale applications seeking to couple continuum information with atomistic models. This is the first time that a rigorous analysis has compared de-noising techniques for particle-based fluid simulations.
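The POD stage of such a hybrid can be shown in a stripped-down stand-in (the wavelet thresholding inside the POD domain is not reproduced here): stacking noisy field snapshots as matrix columns and truncating the SVD to the dominant modes recovers the smooth ensemble behaviour. The synthetic field below is an assumed rank-1 example.

```python
import numpy as np

# POD denoising by SVD truncation (the POD half of a WAVinPOD-style hybrid;
# wavelet thresholding of the modal coefficients is omitted in this sketch).

def pod_denoise(snapshots, n_modes):
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    return (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes, :]

rng = np.random.default_rng(2)
x = np.linspace(0, 2 * np.pi, 200)
# 50 snapshots of a slowly modulated rank-1 spatial profile (assumed field)
clean = np.column_stack([np.sin(x) * np.cos(0.1 * k) for k in range(50)])
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = pod_denoise(noisy, n_modes=1)
```

Truncation removes the noise energy spread across the discarded modes; thresholding the retained modal coefficients in a wavelet basis, as the hybrid does, attacks the noise that survives within the kept modes.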
CRISM Hyperspectral Data Filtering with Application to MSL Landing Site Selection
NASA Astrophysics Data System (ADS)
Seelos, F. P.; Parente, M.; Clark, T.; Morgan, F.; Barnouin-Jha, O. S.; McGovern, A.; Murchie, S. L.; Taylor, H.
2009-12-01
We report on the development and implementation of a custom filtering procedure for Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) IR hyperspectral data that is suitable for incorporation into the CRISM Reduced Data Record (RDR) calibration pipeline. Over the course of the Mars Reconnaissance Orbiter (MRO) Primary Science Phase (PSP) and the ongoing Extended Science Phase (ESP) CRISM has operated with an IR detector temperature between ~107 K and ~127 K. This ~20 K range in operational temperature has resulted in variable data quality, with observations acquired at higher detector temperatures exhibiting a marked increase in both systematic and stochastic noise. The CRISM filtering procedure consists of two main data processing capabilities. The primary systematic noise component in CRISM IR data appears as along track or column oriented striping. This is addressed by the robust derivation and application of an inter-column ratio correction frame. The correction frame is developed through the serial evaluation of band specific column ratio statistics and so does not compromise the spectral fidelity of the image cube. The dominant CRISM IR stochastic noise components appear as isolated data spikes or column oriented segments of variable length with erroneous data values. The non-systematic noise is identified and corrected through the application of an iterative-recursive kernel modeling procedure which employs a formal statistical outlier test as the iteration control and recursion termination criterion. This allows the filtering procedure to make a statistically supported determination between high frequency (spatial/spectral) signal and high frequency noise based on the information content of a given multidimensional data kernel. The governing statistical test also allows the kernel filtering procedure to be self regulating and adaptive to the intrinsic noise level in the data. 
The CRISM IR filtering procedure is scheduled to be incorporated into the next augmentation of the CRISM IR calibration (version 3). The filtering algorithm will be applied to the I/F data (IF) delivered to the Planetary Data System (PDS), but the radiance on sensor data (RA) will remain unfiltered. The development of CRISM hyperspectral analysis products in support of the Mars Science Laboratory (MSL) landing site selection process has motivated the advance of CRISM-specific data processing techniques. The quantitative results of the CRISM IR filtering procedure as applied to CRISM observations acquired in support of MSL landing site selection will be presented.
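The column-oriented destriping idea can be illustrated with a toy version (the actual CRISM pipeline derives the correction frame per band with robust statistics; the scene and gain values below are assumed): estimate a per-column gain from the ratio of each column's median to the mean level of its neighbourhood, then divide the gain out, leaving smooth scene structure untouched.

```python
import numpy as np

# Toy along-track destriping: per-column gain from the ratio of the column
# median to its neighbourhood level (illustrative, not the CRISM algorithm).

def destripe(img, half_width=5):
    col_med = np.median(img, axis=0)              # per-column level
    gains = np.empty_like(col_med)
    for j in range(img.shape[1]):
        lo, hi = max(0, j - half_width), min(img.shape[1], j + half_width + 1)
        gains[j] = col_med[j] / col_med[lo:hi].mean()
    return img / gains

rng = np.random.default_rng(3)
scene = np.outer(np.ones(64), np.linspace(1.0, 2.0, 64))   # smooth scene ramp
gains_true = 1.0 + 0.1 * rng.standard_normal(64)            # column striping
striped = scene * gains_true
destriped = destripe(striped)
```

Because the correction is a single multiplicative frame derived from column statistics, it suppresses the striping without mixing information across bands, which is what preserves spectral fidelity.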
Wang, Edward H; Sampson, Matthew J
2016-09-01
When performing CT-guided or angiographic procedures, radiologists need to communicate with radiographers at a workstation behind radiation-shielding glass. Because the shielding renders verbal communication impossible, we have developed a set of standardised hand signals for use in our department to achieve clear and efficient communication between radiologists and radiographers during these procedures.
NASA Astrophysics Data System (ADS)
Huang, Hong-bin; Liu, Wei-ping; Chen, Shun-er; Zheng, Liming
2005-02-01
A new type of CATV network management system based on a general-purpose MCU and supporting SNMP is proposed in this paper. The function and implementation of every module in the system, including physical-layer communication, protocol processing, and data processing, are analyzed from both the hardware and the software points of view. In our design, the management system uses an IP metropolitan area network (MAN) as the data transmission channel, and every managed object in the management structure has an SNMP agent. The SNMP agent comprises four functional modules: a physical-layer communication module, a protocol processing module, an internal data processing module and an MIB management module. The structure and function of each module are designed and demonstrated, and the related hardware circuits, software flows and experimental results are presented. Furthermore, by introducing an RTOS into the software, the MCU can conduct multi-threaded tasks such as fast Ethernet controller driving, TCP/IP processing and serial-port signal monitoring, which greatly improves CPU efficiency.
1991-06-01
Sensor Data Validation and Signal Reconstruction, Phase 1: System Architecture Study. Phase I Final Report, Contract NAS3-25883, Report CR-187124. Only the scanned front matter survives extraction; the listed contents include an introduction, an executive summary, and a technical discussion covering a review of SSME test data and validation procedures and the then-current NASA MSFC data review process.
NASA Astrophysics Data System (ADS)
Ushakov, V. N.
1995-10-01
A video-frequency acousto-optical correlator with spatial integration, which widens the functional capabilities of correlation-type acousto-optical processors, is described. The correlator is based on a two-dimensional reference transparency and it can filter arbitrary video signals of spectral width limited by the pass band of an acousto-optical modulator. The calculated pulse characteristic is governed by the structure of the reference transparency. A procedure for the synthesis of this transparency is considered and experimental results are reported.
Dental technology over 150 years: evolution and revolution.
Feuerstein, Paul
2014-01-01
A patient entering a dental office is often greeted and then checked in through the practice management system's digital appointment book. The provider is notified by an electronic signal that is visual, audible, or both. The patient is led to the treatment area and sits in a dental chair which is adjusted to the individual's size and position for the treatment, and the light is positioned. Sometimes a radiograph is taken, local anesthetic is delivered, and a handpiece--air turbine or electric--is used for the procedure. How different is this process today from a dentist treating a patient in 1864?
2015-12-31
Only fragments of this scanned report survive extraction: a ray-cone code that simulates the CAS signal received after reflection from two different targets; a Bayesian-network figure showing a node X with its parent and child nodes (after Duda, Hart, & Stork, 2001); and a Kalman-filter procedure for estimating the real state x of a discrete-time controlled process (after Welch & Bishop, 2006; Sorenson, 1970).
NASA Astrophysics Data System (ADS)
Dindar, Cigdem; Kiran, Erdogan
2002-10-01
We present a new sensor configuration and data reduction process to improve the accuracy and reliability of determining the terminal velocity of a falling sinker in falling-body viscometers. The procedure is based on the use of multiple linear variable differential transformer (LVDT) sensors and precise mapping of the sensor signal and position along with the time of fall, which is then converted to distance versus fall time along the complete fall path. The method and its use in the determination of the high-pressure viscosity of n-pentane and carbon dioxide are described.
A posteriori noise estimation in variable data sets. With applications to spectra and light curves
NASA Astrophysics Data System (ADS)
Czesla, S.; Molle, T.; Schmitt, J. H. M. M.
2018-01-01
Most physical data sets contain a stochastic contribution produced by measurement noise or other random sources along with the signal. Usually, neither the signal nor the noise is accurately known prior to the measurement, so both have to be estimated a posteriori. We have studied a procedure to estimate the standard deviation of the stochastic contribution assuming normality and independence, requiring a sufficiently well-sampled data set to yield reliable results. This procedure is based on estimating the standard deviation in a sample of weighted sums of arbitrarily sampled data points and is identical to the so-called DER_SNR algorithm for specific parameter settings. To demonstrate the applicability of our procedure, we present applications to synthetic data, high-resolution spectra, and a large sample of space-based light curves and, finally, give guidelines for applying the procedure in situations not explicitly considered here to promote its adoption in data analysis.
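For the specific parameter settings mentioned, the estimator reduces to the published DER_SNR formula, which uses the weighted sum 2*f[i] - f[i-2] - f[i+2] to cancel smooth signal and retain only noise. A minimal sketch:

```python
import numpy as np

def der_snr_noise(flux):
    """Estimate the per-point noise standard deviation of a
    well-sampled series (DER_SNR-style estimator).

    The combination 2*f[i] - f[i-2] - f[i+2] suppresses slowly
    varying signal; its variance for white noise of std sigma is
    6*sigma**2, and the leading factor converts the median absolute
    value back to a Gaussian standard deviation.
    """
    f = np.asarray(flux, dtype=float)
    diff = 2.0 * f[2:-2] - f[:-4] - f[4:]
    return 1.482602 / np.sqrt(6.0) * np.median(np.abs(diff))
```

Using the median of absolute values rather than the sample standard deviation makes the estimate robust to the occasional outlier, consistent with the a-posteriori spirit of the procedure.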
Development of traffic control and queue management procedures for oversaturated arterials
DOT National Transportation Integrated Search
1997-01-01
The formulation and solution of a new algorithm for queue management and coordination of traffic signals along oversaturated arterials are presented. Existing traffic-control and signal-coordination algorithms deal only with undersaturated steady-sta...
Conductive interference in rapid transit signaling systems. volume 2. suggested test procedures
DOT National Transportation Integrated Search
1987-05-31
Methods for detecting and quantifying the levels of conductive electromagnetic interference produced by solid state rapid transit propulsion equipment and for determining the susceptibility of signaling systems to these emissions are presented. These...
Estimating the signal-to-noise ratio of AVIRIS data
NASA Technical Reports Server (NTRS)
Curran, Paul J.; Dungan, Jennifer L.
1988-01-01
To make the best use of narrowband airborne visible/infrared imaging spectrometer (AVIRIS) data, an investigator needs to know the ratio of signal to random variability or noise (signal-to-noise ratio or SNR). The signal is land-cover dependent and varies with both wavelength and atmospheric absorption; random noise comprises sensor noise and intrapixel variability (i.e., variability within a pixel). The three existing methods for estimating the SNR are inadequate, since typical laboratory methods inflate the SNR while dark-current and image methods deflate it. A new procedure called the geostatistical method is proposed. It is based on the removal of periodic noise by notch filtering in the frequency domain and the isolation of sensor noise and intrapixel variability using the semi-variogram. This procedure was applied easily and successfully to five sets of AVIRIS data from the 1987 flying season and could be applied to remotely sensed data from broadband sensors.
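The semi-variogram idea above can be illustrated on a one-dimensional transect: the semivariogram of signal-plus-noise approaches the noise variance (the "nugget") as the lag goes to zero, so extrapolating the first lags back to lag 0 isolates the sensor noise. This is a simplified sketch, not the paper's full notch-filtered, image-based procedure:

```python
import numpy as np

def semivariogram(z, max_lag=10):
    """Empirical semivariogram of a 1-D transect for lags 1..max_lag."""
    z = np.asarray(z, dtype=float)
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

def nugget_noise_std(z, max_lag=10):
    """Estimate noise as the square root of the nugget: the
    semivariogram extrapolated linearly back to lag zero."""
    gamma = semivariogram(z, max_lag)
    # line through the first two lags, evaluated at h = 0
    nugget = gamma[0] - (gamma[1] - gamma[0])
    return np.sqrt(max(float(nugget), 0.0))
```

For spatially smooth scene signal, the signal contribution to the semivariogram vanishes at small lags, leaving the nugget dominated by sensor noise and intrapixel variability.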
Recollection can be Weak and Familiarity can be Strong
Ingram, Katherine M.; Mickes, Laura; Wixted, John T.
2012-01-01
The Remember/Know procedure is widely used to investigate recollection and familiarity in recognition memory, but almost all of the results obtained using that procedure can be readily accommodated by a unidimensional model based on signal-detection theory. The unidimensional model holds that Remember judgments reflect strong memories (associated with high confidence, high accuracy, and fast reaction times), whereas Know judgments reflect weaker memories (associated with lower confidence, lower accuracy, and slower reaction times). Although this is invariably true on average, a new two-dimensional account (the Continuous Dual-Process model) suggests that Remember judgments made with low confidence should be associated with lower old/new accuracy, but higher source accuracy, than Know judgments made with high confidence. We tested this prediction – and found evidence to support it – using a modified Remember/Know procedure in which participants were first asked to indicate a degree of recollection-based or familiarity-based confidence for each word presented on a recognition test and were then asked to recollect the color (red or blue) and screen location (top or bottom) associated with the word at study. For familiarity-based decisions, old/new accuracy increased with old/new confidence, but source accuracy did not (suggesting that stronger old/new memory was supported by higher degrees of familiarity). For recollection-based decisions, both old/new accuracy and source accuracy increased with old/new confidence (suggesting that stronger old/new memory was supported by higher degrees of recollection). These findings suggest that recollection and familiarity are continuous processes and that participants can indicate which process mainly contributed to their recognition decisions. PMID:21967320
Real-Time Prediction of Temperature Elevation During Robotic Bone Drilling Using the Torque Signal.
Feldmann, Arne; Gavaghan, Kate; Stebinger, Manuel; Williamson, Tom; Weber, Stefan; Zysset, Philippe
2017-09-01
Bone drilling is a surgical procedure commonly required in many surgical fields, particularly orthopedics, dentistry and head and neck surgery. While the long-term effects of thermal bone necrosis are unknown, thermal damage to nerves in spinal or otolaryngological surgeries might lead to partial paralysis. Previous models to predict the temperature elevation have been suggested, but were either not validated or suffer from computation time and complexity that do not allow real-time prediction. Within this study, an analytical temperature prediction model is proposed which uses the torque signal of the drilling process to model the heat production of the drill bit. A simple Green's disk source function is used to solve the three-dimensional heat equation along the drilling axis. Additionally, an extensive experimental study was carried out to validate the model. A custom CNC setup with a load cell and a thermal camera was used to measure the axial drilling torque and force as well as temperature elevations. Bones with different bone volume fractions were drilled with two drill bits (⌀1.8 mm and ⌀2.5 mm), with eight repetitions per condition. The model was calibrated with 5 of 40 measurements and successfully validated with the rest of the data ([Formula: see text]C). It was also found that the temperature elevation can be predicted using only the torque signal of the drilling process. In the future, the model could be used to monitor and control the drilling process in surgeries close to vulnerable structures.
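The core idea of driving a heat-conduction Green's function with the measured torque can be sketched with a point-source simplification (the paper uses a disk source; the heat-conversion efficiency eta and all material parameters below are illustrative assumptions, not the calibrated values):

```python
import numpy as np

def drill_temperature_rise(torque, omega, dt, dist, alpha, rho_c, eta=0.9):
    """Temperature rise at a point `dist` metres from the drill tip,
    by superposing instantaneous point heat sources along time
    (a simplification of the disk-source Green's function).

    torque : array of drilling torque samples [N*m]
    omega  : spindle speed [rad/s]; heat rate Q = eta * torque * omega
    alpha  : thermal diffusivity [m^2/s]; rho_c : volumetric heat
             capacity [J/(m^3 K)]
    """
    torque = np.asarray(torque, dtype=float)
    q = eta * torque * omega * dt           # heat released per step [J]
    n = len(q)
    t_age = dt * np.arange(n, 0, -1)        # age of each heat pulse [s]
    # instantaneous point-source solution of the 3-D heat equation
    kernel = np.exp(-dist**2 / (4.0 * alpha * t_age)) \
             / (rho_c * (4.0 * np.pi * alpha * t_age) ** 1.5)
    return float(np.sum(q * kernel))
```

Because the superposition is linear in the heat input, the predicted temperature rise scales directly with the torque signal, which is what makes a torque-only real-time prediction plausible.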
Fabrication et caracterisation d'hybrides optiques tout-fibre 120° et 90° achromatiques
NASA Astrophysics Data System (ADS)
Khettal, Elyes
This thesis presents the fabrication and characterization of optical hybrids as all-fiber 3 x 3 and 4 x 4 couplers. A hybrid does two things: it splits power equally and acts as an interferometer. As an interferometer, it allows accurate measurement of the amplitude and phase of an optical signal with respect to a reference signal. As in a radio receiver, a local oscillator is made to interfere with the incoming signal to produce a beating signal. The complex amplitude is then rebuilt using the output signals of the hybrid. This is known as coherent detection. Since this thesis is a follow-up to a previous project, the main goal is to improve the fabrication process of the couplers in order to give it a certain level of repeatability and reproducibility. The 3 x 3 coupler is used as a development platform since the fabrication process is essentially the same for both couplers. The secondary objective is to validate the theoretical concepts of a broadband hybrid in the form of an asymmetric 4 x 4 coupler. The theory explaining the functioning of these couplers is presented and the experimental parameters necessary for their fabrication are derived. The fabrication method used is fusion-tapering, which has been used for many years to produce 2 x 2 couplers and fiber tapers. The procedure consists of holding fibers together tangentially and fusing them into a monolithic structure with the help of a propane flame. The structure is then tapered by linear motorized stages and the procedure is stopped when the desired optical response is achieved. The component is then securely packaged in a hollow metal tube. The critical step of the procedure is holding the fibers together in a desired pattern: a triangle for 3 x 3 couplers and a square or a diamond for 4 x 4 couplers. New methods to make this step more repeatable are highlighted. Several cross-sections of fused couplers are shown and the level of success of the new methods is discussed.
The characterization methods in transmission and phase are described and the experimental results are presented. The transmission spectra of the 3 x 3 coupler that was built are presented. Its phase performance at several wavelengths of the C band (1530-1565 nm) is measured and analyzed. The built hybrid has low loss (<0.8 dB) and shows a phase drift lower than 5° over about 40 nm. Its ability to measure phase accurately is demonstrated by demodulating a digital QPSK signal. In order to validate the theory of the broadband 4 x 4 hybrid, a new fusion-tapering approach is developed and tested. It is used to make biconical 2 x 2 couplers that allow testing of the adiabatic transfer of supermodes, a core concept of broadband hybrids. This, however, does not yield the expected result, and an alternative approach is proposed and tested. The new approach gives more encouraging results, confirming the hypothesis and suggesting a viable way to build broadband hybrids. The main goal of the project cannot be considered achieved, since the procedure for holding the fibers together does not guarantee that they stay in the desired pattern. Since this step is so crucial for the hybrids to work correctly, it casts doubt on whether it is possible to build a broadband hybrid that requires a very precise structure made of four fibers. Despite this, the results show that such a component is possible and the question is only about how to build it.
Clark, Melody S; Thorne, Michael AS; Purać, Jelena; Burns, Gavin; Hillyard, Guy; Popović, Željko D; Grubor-Lajšić, Gordana; Worland, M Roger
2009-01-01
Background Insects provide tractable models for enhancing our understanding of the physiological and cellular processes that enable survival at extreme low temperatures. They possess three main strategies to survive the cold: freeze tolerance, freeze avoidance or cryoprotective dehydration, of which the latter method is exploited by our model species, the Arctic springtail Megaphorura arctica, formerly Onychiurus arcticus (Tullberg 1876). The physiological mechanisms underlying cryoprotective dehydration have been well characterised in M. arctica, and to date this process has been described in only a few other species: the Antarctic nematode Panagrolaimus davidi, an enchytraeid worm, the larvae of the Antarctic midge Belgica antarctica and the cocoons of the earthworm Dendrobaena octaedra. There are no in-depth molecular studies on the underlying cold survival mechanisms in any species. Results A cDNA microarray was generated using 6,912 M. arctica clones printed in duplicate. Analysis of clones up-regulated during dehydration procedures (using both cold- and salt-induced dehydration) has identified a number of significant cellular processes, namely the production and mobilisation of trehalose, protection of cellular systems via small heat shock proteins and tissue/cellular remodelling during the dehydration process. Energy production, initiation of protein translation and cell division, plus potential tissue repair processes, dominate genes identified during recovery. Heat map analysis identified a duplication of the trehalose-6-phosphate synthase (TPS) gene in M. arctica and also 53 clones co-regulated with TPS, including a number of membrane-associated and cell signalling proteins. Q-PCR on selected candidate genes has also contributed to our understanding, with glutathione-S-transferase identified as the major antioxidant enzyme protecting the cells during these stressful procedures, and a number of protein kinase signalling molecules involved in recovery. 
Conclusion Microarray analysis has proved to be a powerful technique for understanding the processes and genes involved in cryoprotective dehydration, beyond the few candidate genes identified in the current literature. Dehydration is associated with the mobilisation of trehalose, cell protection and tissue remodelling. Energy production, leading to protein production, and cell division characterise the recovery process. Novel membrane proteins, along with aquaporins and desaturases, have been identified as promising candidates for future functional analyses to better understand membrane remodelling during cellular dehydration. PMID:19622137
Eum, Hyung-Il; Gachon, Philippe; Laprise, René
2016-01-01
This study examined the impact of model biases on climate change signals for daily precipitation and for minimum and maximum temperatures. Through the use of multiple climate scenarios from 12 regional climate model simulations, the ensemble mean, and three synthetic simulations generated by a weighting procedure, we investigated intermodel seasonal climate change signals between current and future periods, for both median and extreme precipitation/temperature values. A significant dependence of seasonal climate change signals on the model biases over southern Québec in Canada was detected for temperatures, but not for precipitation. This suggests that the regional temperature change signal is affected by local processes. Seasonally, model bias affects future mean and extreme values in winter and summer. In addition, potentially large increases in future extremes of temperature and precipitation values were projected. For three synthetic scenarios, systematically less bias and a narrow range of mean change for all variables were projected compared to those of climate model simulations. In addition, synthetic scenarios were found to better capture the spatial variability of extreme cold temperatures than the ensemble mean scenario. Finally, these results indicate that the synthetic scenarios have greater potential to reduce the uncertainty of future climate projections and capture the spatial variability of extreme climate events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Kadi, Mahmoud I.; Reaz, Mamun Bin Ibne; Ali, Mohd Alauddin Mohd; Liu, Chian Yong
2014-01-01
This paper presents a comparison between the electroencephalogram (EEG) channels during scoliosis correction surgeries. Surgeons use many hand tools and electronic devices that directly affect the EEG channels, and these noises do not affect the channels uniformly. This research provides a complete system to find the channel least affected by the noise. The presented system consists of five stages: filtering, wavelet decomposition (Level 4), processing the signal bands using four different criteria (mean, energy, entropy and standard deviation), selecting the most useful channel according to the criterion values and, finally, generating a combinational signal from Channels 1 and 2. Experimentally, two channels of EEG data were recorded from six patients who underwent scoliosis correction surgeries in the Pusat Perubatan Universiti Kebangsaan Malaysia (PPUKM) (the Medical Centre of the National University of Malaysia). The combinational signal was tested by power spectral density, the cross-correlation function and wavelet coherence. The experimental results show that the system-outputted EEG signals are cleanly switched without any substantial changes in the consistency of EEG components. This paper provides an efficient procedure for analyzing EEG signals in order to avoid averaging the channels, which leads to redistribution of the noise on both channels, reducing the dimensionality of the EEG features and preparing the best EEG stream for the classification and monitoring stage. PMID:25051031
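The decomposition-and-criteria stages above can be sketched with the simplest wavelet. The Haar filters below stand in for whatever mother wavelet the authors used (unspecified in the abstract), and the four criteria are computed per band exactly as named:

```python
import numpy as np

def haar_decompose(signal, levels=4):
    """Level-4 Haar wavelet decomposition: returns detail bands
    D1..D4 followed by the final approximation A4."""
    a = np.asarray(signal, dtype=float)
    bands = []
    for _ in range(levels):
        a = a[:len(a) - len(a) % 2]            # even length per level
        approx = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        bands.append(detail)
        a = approx
    bands.append(a)
    return bands

def band_criteria(band):
    """The four per-band criteria used to rank channels."""
    p = band ** 2 / np.sum(band ** 2)          # normalized band power
    return {
        'mean': float(np.mean(band)),
        'energy': float(np.sum(band ** 2)),
        'entropy': float(-np.sum(p * np.log2(p + 1e-12))),
        'std': float(np.std(band)),
    }
```

Because the Haar transform is orthonormal, the band energies sum to the signal energy, which makes the energy criterion directly comparable across channels.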
Micro-Pulse Lidar Signals: Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Welton, Ellsworth J.; Campbell, James R.; Starr, David OC. (Technical Monitor)
2002-01-01
Micro-pulse lidar (MPL) systems are small, autonomous, eye-safe lidars used for continuous observations of the vertical distribution of cloud and aerosol layers. Since the construction of the first MPL in 1993, procedures have been developed to correct for various instrument effects present in MPL signals. The primary instrument effects include afterpulse (laser-detector cross-talk) and overlap (poor near-range, less than 6 km, focusing). The accurate correction of both afterpulse and overlap effects is required to study both clouds and aerosols. Furthermore, the outgoing energy of the laser pulses and the statistical uncertainty of the MPL detector must also be correctly determined in order to assess the accuracy of MPL observations. The uncertainties associated with the afterpulse, overlap, pulse energy, detector noise, and all remaining quantities affecting measured MPL signals are determined in this study. The uncertainties are propagated through the entire MPL correction process to give a net uncertainty on the final corrected MPL signal. The results show that in the near range, the overlap uncertainty dominates. At altitudes above the overlap region, the dominant source of uncertainty is uncertainty in the pulse energy. However, if the laser energy is low, then during mid-day, high solar background levels can significantly reduce the signal-to-noise ratio of the detector. In such a case, the statistical uncertainty of the detector count rate becomes dominant at altitudes above the overlap region.
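A common form of the corrected MPL signal is the normalized relative backscatter, NRB = (S - B - A) * r^2 / (O * E), with background B, afterpulse A, overlap O and pulse energy E. Assuming independent error terms added in quadrature (the function signature and simplifications are mine, not the paper's), the propagation can be sketched as:

```python
import numpy as np

def nrb_uncertainty(S, dS, A, dA, B, dB, O, dO, E, dE, r):
    """Propagate 1-sigma uncertainties through a simplified
    normalized-relative-backscatter correction:
        NRB = (S - B - A) * r**2 / (O * E)
    Inputs may be scalars or arrays over range r; independent
    contributions are combined in quadrature.
    """
    num = S - B - A
    nrb = num * r**2 / (O * E)
    # absolute uncertainty of the numerator (additive terms)
    d_num = np.sqrt(dS**2 + dB**2 + dA**2)
    # relative uncertainties of numerator, overlap, and pulse energy
    rel = np.sqrt((d_num / num)**2 + (dO / O)**2 + (dE / E)**2)
    return nrb, np.abs(nrb) * rel
```

The structure makes the abstract's conclusions easy to see: near the instrument dO/O dominates the relative error, while far above the overlap region dE/E (or, at low laser energy, the detector count-rate term inside dS) takes over.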
Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing
2018-06-01
Feature selection plays an important role in the field of EEG signals based motor imagery pattern classification. It is a process that aims to select an optimal feature subset from the original set. Two significant advantages involved are: lowering the computational burden so as to speed up the learning procedure and removing redundant and irrelevant features so as to improve the classification performance. Therefore, feature selection is widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model to select the optimal feature subset based on the Kullback-Leibler divergence measure, and automatically select the optimal subject-specific time segment. The proposed method comprises four successive stages: a broad frequency band filtering and common spatial pattern enhancement as preprocessing, feature extraction by autoregressive model and log-variance, the Kullback-Leibler divergence based optimal feature and time segment selection and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signals classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segment, but also show that the proposed method yields relatively better classification results in comparison with other competitive methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
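One standard way to realise KL-divergence-based feature ranking, sketched here under a Gaussian assumption for each class-conditional feature distribution (the closed form below is the exact KL between two univariate Gaussians; the symmetrisation and the Gaussian model are my simplifications, not necessarily the paper's exact statistical model):

```python
import numpy as np

def gaussian_kl(m1, s1, m2, s2):
    """Closed-form KL( N(m1, s1^2) || N(m2, s2^2) ), elementwise."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2.0 * s2**2) - 0.5

def rank_features_by_kl(X1, X2):
    """Rank features (columns) by a symmetrised Gaussian KL
    divergence between the two class-conditional distributions;
    larger scores indicate more discriminative features.

    X1, X2 : (n_trials, n_features) feature matrices per class.
    """
    m1, s1 = X1.mean(axis=0), X1.std(axis=0) + 1e-12
    m2, s2 = X2.mean(axis=0), X2.std(axis=0) + 1e-12
    score = gaussian_kl(m1, s1, m2, s2) + gaussian_kl(m2, s2, m1, s1)
    order = np.argsort(score)[::-1]        # best feature first
    return order, score
```

Keeping only the top-ranked columns implements the two advantages named above: fewer features to train on, and removal of features whose class distributions overlap almost completely.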
Imaging Local Ca2+ Signals in Cultured Mammalian Cells
Lock, Jeffrey T.; Ellefsen, Kyle L.; Settle, Bret; Parker, Ian; Smith, Ian F.
2015-01-01
Cytosolic Ca2+ ions regulate numerous aspects of cellular activity in almost all cell types, controlling processes as wide-ranging as gene transcription, electrical excitability and cell proliferation. The diversity and specificity of Ca2+ signaling derives from mechanisms by which Ca2+ signals are generated to act over different time and spatial scales, ranging from cell-wide oscillations and waves occurring over the periods of minutes to local transient Ca2+ microdomains (Ca2+ puffs) lasting milliseconds. Recent advances in electron multiplied CCD (EMCCD) cameras now allow for imaging of local Ca2+ signals with a 128 x 128 pixel spatial resolution at rates of >500 frames sec-1 (fps). This approach is highly parallel and enables the simultaneous monitoring of hundreds of channels or puff sites in a single experiment. However, the vast amounts of data generated (ca. 1 Gb per min) render visual identification and analysis of local Ca2+ events impracticable. Here we describe and demonstrate the procedures for the acquisition, detection, and analysis of local IP3-mediated Ca2+ signals in intact mammalian cells loaded with Ca2+ indicators using both wide-field epi-fluorescence (WF) and total internal reflection fluorescence (TIRF) microscopy. Furthermore, we describe an algorithm developed within the open-source software environment Python that automates the identification and analysis of these local Ca2+ signals. The algorithm localizes sites of Ca2+ release with sub-pixel resolution; allows user review of data; and outputs time sequences of fluorescence ratio signals together with amplitude and kinetic data in an Excel-compatible table. PMID:25867132
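The detection stage of such an analysis can be sketched for a single-pixel fluorescence trace: compute dF/F against a baseline window, then threshold at a multiple of the baseline noise. This is a minimal stand-in for the published Python algorithm, whose sub-pixel localization and kinetic fitting are not reproduced here; the window length and threshold are illustrative:

```python
import numpy as np

def detect_events(trace, baseline_frames=50, z_thresh=4.0):
    """Flag candidate local Ca2+ events in a fluorescence trace.

    Computes dF/F against the mean of an initial baseline window and
    returns the dF/F trace plus the frame indices where the signal
    first rises above z_thresh baseline standard deviations.
    """
    f = np.asarray(trace, dtype=float)
    f0 = np.mean(f[:baseline_frames])
    dff = (f - f0) / f0
    sigma = np.std(dff[:baseline_frames])
    mask = dff > z_thresh * sigma
    # event onsets are the rising edges of the suprathreshold mask
    onsets = np.flatnonzero(mask[1:] & ~mask[:-1]) + 1
    return dff, onsets
```

At >500 fps this per-pixel test is cheap enough to run over every pixel of a 128 x 128 frame, which is what makes automated screening of the ~1 Gb/min data stream practical.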
Inductive interference in rapid transit signaling systems. volume 2. suggested test procedures.
DOT National Transportation Integrated Search
1987-03-31
These suggested test procedures have been prepared in order to develop standard methods of analysis and testing to quantify and resolve issues of electromagnetic compatibility in rail transit operations. Electromagnetic interference, generated by rai...
Energy Reconstruction for Events Detected in TES X-ray Detectors
NASA Astrophysics Data System (ADS)
Ceballos, M. T.; Cardiel, N.; Cobo, B.
2015-09-01
The processing of X-ray events detected by a TES (Transition Edge Sensor) device (such as the one to be proposed in the ESA AO call for instruments for the Athena mission (Nandra et al. 2013) as the high-spectral-resolution instrument X-IFU (Barret et al. 2013)) is a multi-step procedure that starts with the detection of current pulses in a noisy signal and ends with their energy reconstruction. For this last stage, an energy calibration process is required to convert the pseudo-energies measured in the detector into the real energies of the incoming photons, accounting for possible nonlinearity effects in the detector. We present the details of the energy calibration algorithm we implemented as the last part of the Event Processing software that we are developing for the X-IFU instrument, which permits the calculation of the calibration constants in an analytical way.
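The pseudo-energy-to-energy conversion can be sketched as a polynomial gain scale fitted to calibration lines. The line energies and pseudo-energy values below are invented for illustration (they are not X-IFU calibration data), and a cubic is an arbitrary stand-in for the instrument's actual nonlinearity model:

```python
import numpy as np

# Hypothetical calibration: pseudo-energies measured for photons of
# known line energies (values illustrative only).
pseudo = np.array([1.02, 2.08, 4.30, 6.75, 9.40])   # detector scale
true_e = np.array([1.00, 2.00, 4.00, 6.00, 8.00])   # photon keV

def fit_gain_scale(pseudo, true_energy, degree=3):
    """Fit a polynomial mapping pseudo-energies to photon energies,
    absorbing the detector's nonlinearity into the coefficients."""
    return np.polynomial.Polynomial.fit(pseudo, true_energy, degree)

cal = fit_gain_scale(pseudo, true_e)
# reconstruct the energy of a newly detected pulse
energy = float(cal(2.08))
```

Once the coefficients are in hand, every reconstructed pulse amplitude is converted with a single polynomial evaluation, which is why the calibration constants can be applied analytically in the pipeline.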
Forest fire autonomous decision system based on fuzzy logic
NASA Astrophysics Data System (ADS)
Lei, Z.; Lu, Jianhua
2010-11-01
The proposed system integrates GPS / pseudolite / IMU and a thermal camera in order to autonomously process the images by identification, extraction and tracking of forest fires or hot spots. The airborne detection platform, the graph-based algorithms and the signal processing framework are analyzed in detail; in particular, the rules of the decision function are expressed in terms of fuzzy logic, which is an appropriate method for expressing imprecise knowledge. The membership functions and the weights of the rules are fixed through a supervised learning process. The perception system in this paper is based on a network of sensorial stations and central stations. The sensorial stations collect data including infrared and visual images and meteorological information. The central stations exchange data to perform distributed analysis. The experimental results show that the working procedure of the detection system is reasonable and that it can accurately output detection alarms and compute infrared oscillations.
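A fuzzy decision rule of the kind described can be sketched with triangular membership functions and weighted rules. The variables, breakpoints and weights below are toy values for illustration; in the paper they are fixed by supervised learning:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def fire_alarm_degree(ir_intensity, temperature):
    """Toy two-rule fuzzy decision (illustrative values only):

    R1: IF ir is HIGH   AND temp is HIGH THEN alarm (weight 0.9)
    R2: IF ir is MEDIUM AND temp is HIGH THEN alarm (weight 0.5)
    """
    ir_med = tri(ir_intensity, 0.2, 0.5, 0.8)
    ir_high = tri(ir_intensity, 0.6, 1.0, 1.4)
    t_high = tri(temperature, 40.0, 80.0, 120.0)
    # fuzzy AND -> min; aggregate the weighted rules -> max
    r1 = 0.9 * min(ir_high, t_high)
    r2 = 0.5 * min(ir_med, t_high)
    return float(max(r1, r2))
```

The continuous output (a degree of alarm between 0 and 1, rather than a hard yes/no) is exactly what makes fuzzy rules suitable for the imprecise knowledge mentioned in the abstract.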
NASA Technical Reports Server (NTRS)
Lund, D.
1998-01-01
This report presents a description of the tests performed, and the test data, for the AI METSAT Signal Processor Assembly P/N 1331670-2, S/N F05. The assembly was tested in accordance with AE-26754, "METSAT Signal Processor Scan Drive and Integration Procedure." The objective is to demonstrate functionality of the signal processor prior to instrument integration.
NASA Technical Reports Server (NTRS)
Lund, D.
1998-01-01
This report presents a description of tests performed, and the test data, for the A1 METSAT Signal Processor Assembly PN: 1331679-2, S/N F03. This assembly was tested in accordance with AE-26754, "METSAT Signal Processor Scan Drive Test and Integration Procedure." The objective is to demonstrate functionality of the signal processor prior to instrument integration.
Electromagnetic spectrum management system
Seastrand, Douglas R.
2017-01-31
A system for transmitting a wireless countermeasure signal to disrupt third-party communications is disclosed that includes an antenna configured to receive wireless signals and transmit wireless countermeasure signals such that the countermeasure signals are responsive to the received signals. A receiver processes the received wireless signals to create processed received signal data, while a spectrum control module subtracts known source signal data from the processed received signal data to generate unknown source signal data. The unknown source signal data is based on unknown wireless signals, such as enemy signals. A transmitter is configured to process the unknown source signal data to create countermeasure signals and transmit a wireless countermeasure signal over the first antenna or a second antenna to thereby interfere with the unknown wireless signals.
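The spectrum-control step, subtracting known sources to expose unknown ones, can be sketched on power spectra. This is a simplified reading of the patent's idea; a real system would work on complex baseband samples with alignment and gain estimation, none of which is modelled here:

```python
import numpy as np

def isolate_unknown(received_psd, known_psds, floor=0.0):
    """Subtract the power spectra of known (friendly) emitters from
    the received spectrum to expose unknown-source energy, clipping
    negative residuals at a noise floor."""
    residual = np.asarray(received_psd, dtype=float).copy()
    for psd in known_psds:
        residual -= psd
    return np.maximum(residual, floor)
```

Whatever energy survives the subtraction is attributed to unknown emitters, and its spectral location tells the transmitter where to place the countermeasure signal.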
NASA Astrophysics Data System (ADS)
Wang, Y. S.; Shen, G. Q.; Guo, H.; Tang, X. L.; Hamade, T.
2013-08-01
In this paper, a roughness model based on human auditory perception (HAP), known as HAP-RM, is developed for the sound quality evaluation (SQE) of vehicle noise. First, interior noise signals are measured for a sample vehicle and prepared for roughness modelling. The HAP-RM model follows the process of sound transfer and perception in the human auditory system, combining the structural filtering function and the nonlinear perception characteristics of the ear. The model is applied to the measured vehicle interior noise signals by considering the factors that affect hearing, such as the modulation and carrier frequencies, time and frequency masking effects, and the correlations of the critical bands. The HAP-RM model is validated by jury tests, in which an anchor-scaled scoring method (ASM) is used for the subjective evaluations. The verification results show that the newly developed model can accurately calculate vehicle noise roughness below 0.6 asper. Further investigation shows that the total roughness of the vehicle interior noise can mainly be attributed to frequency components below 12 Bark. The time masking effects in the modelling procedure enable the application of the HAP-RM model to stationary and nonstationary vehicle noise signals, as well as to the SQE of other sound-related signals in engineering problems.
Fabrication and surface-enhanced Raman scattering (SERS) of Ag/Au bimetallic films on Si substrates
NASA Astrophysics Data System (ADS)
Wang, Chaonan; Fang, Jinghuai; Jin, Yonglong; Cheng, Mingfei
2011-11-01
Ag films on Si substrates were fabricated by immersion plating and served as sacrificial materials for the preparation of Ag/Au bimetallic films by galvanic replacement reaction. The formation of the films on the Si surface was studied by scanning electron microscopy (SEM), which revealed that Ag films with island and dendritic morphologies underwent a novel structural evolution during the galvanic replacement reaction, producing nanostructures with holes within the resultant Ag/Au bimetallic films. The SERS activity of both the sacrificial Ag films and the resultant Ag/Au bimetallic films was investigated using crystal violet as an analyte. The SERS signals increased over the course of the galvanic substitution and reached intensities significantly stronger than those obtained from pure Ag films.
MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING
ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN
2013-01-01
In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963
A simple method for the computation of first neighbour frequencies of DNAs from CD spectra
Marck, Christian; Guschlbauer, Wilhelm
1978-01-01
A procedure for the computation of the first neighbour frequencies of DNAs is presented. This procedure is based on the first neighbour approximation of Gray and Tinoco. We show that knowledge of all ten elementary CD signals attached to the ten double-stranded first neighbour configurations is not necessary: the ten frequencies of an unknown DNA can be obtained using eight elementary CD signals corresponding to eight linearly independent polymer sequences. These signals can be extracted very simply from any eight or more CD spectra of double-stranded DNAs of known frequencies. The ten frequencies of a DNA are then obtained by a least-squares fit of its CD spectrum with these elementary signals. One advantage of this procedure is that it does not require linear programming; it can be used with CD data digitized at a large number of wavelengths, thus permitting an accurate resolution of the CD spectra. In favourable cases, the ten frequencies of a DNA (not used as input data) can be determined with an average absolute error < 2%. We have also observed that certain satellite DNAs, those of Drosophila virilis and Callinectes sapidus, have CD spectra compatible with those of DNAs of quasi-random sequence; these satellite DNAs should therefore also adopt the B-form in solution. PMID:673843
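The core least-squares step can be sketched numerically; the "elementary signals" below are random stand-ins for real CD basis spectra, and the dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 8 elementary CD signals sampled at 200 wavelengths.
n_wl, n_elem = 200, 8
E = rng.normal(size=(n_wl, n_elem))     # columns: elementary CD signals

# A 'true' composition and the noisy spectrum it would produce.
f_true = np.array([0.2, 0.1, 0.15, 0.05, 0.1, 0.2, 0.1, 0.1])
y = E @ f_true + rng.normal(scale=0.01, size=n_wl)

# The least-squares fit recovers the coefficients from the spectrum alone.
f_fit, *_ = np.linalg.lstsq(E, y, rcond=None)
print(np.max(np.abs(f_fit - f_true)))   # small residual error
```

Because the fit uses every digitized wavelength, no linear programming is needed, which mirrors the advantage the abstract claims for the procedure.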
DOT National Transportation Integrated Search
1996-04-01
This report also describes the procedures for direct estimation of intersection capacity with simulation, including a set of rigorous statistical tests for simulation parameter calibration from field data.
DOT National Transportation Integrated Search
2012-01-01
Traffic signal systems represent a substantial component of the highway transportation network in the United : States. It is challenging for most agencies to find engineering resources to properly update signal policies and : timing plans to accommod...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-22
.... On its domestic sales, Signal would be able to choose the duty rate during customs entry procedures... foreign origin steel mill products (e.g., angles, pipe, plate), which requires that all applicable duties...
40-Gbps optical backbone network deep packet inspection based on FPGA
NASA Astrophysics Data System (ADS)
Zuo, Yuan; Huang, Zhiping; Su, Shaojing
2014-11-01
In the information era, big data brings with it problems of high-speed transmission, storage, and real-time analysis and processing. As the principal medium for data transmission, the Internet is a central subject of big data processing research. With its large-scale usage, network data streaming is increasing rapidly: speeds on main fiber-optic links have reached 40 Gbps, and even 100 Gbps, so traffic on the optical backbone network exhibits the characteristics of massive data. Data services on the optical backbone, which is built on SDH (Synchronous Digital Hierarchy), are generally provided via IP packets; mapping IP packets directly into the SDH payload is known as POS (Packet over SDH) technology. To address the real-time processing of high-speed massive data, this paper designs a processing platform based on ATCA, with an FPGA as the central processor, for 40 Gbps POS signal data stream recognition and packet content capture. The platform provides pre-processing for clustering algorithms, service traffic identification, and data mining to support subsequent big data storage and analysis with high efficiency. The operational procedure is also proposed: four channels of 10 Gbps POS signals, decomposed by an FPGA-based analysis module, are fed to a flow classification module and a TCAM-based pattern matching component. Based on payload length and net-flow properties, buffer management is added to the platform to retain key flow information. After data stream analysis, DPI (deep packet inspection), and flow load balancing, the traffic is transmitted to the back-end machine through gigabit Ethernet ports on the back board. Practice shows that the proposed platform is superior to traditional applications based on ASICs and NPs.
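The flow classification step can be sketched in software; hashing the 5-tuple is a common stand-in for the FPGA/TCAM classification stage described above, and the packets below are invented:

```python
import hashlib
from collections import defaultdict

def flow_key(src_ip, dst_ip, src_port, dst_port, proto):
    """Map a packet's 5-tuple to a flow identifier (a software stand-in
    for hardware flow classification)."""
    tup = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return hashlib.sha1(tup).hexdigest()[:8]

# Group packet payloads by flow; same 5-tuple -> same flow bucket.
flows = defaultdict(list)
packets = [
    ("10.0.0.1", "10.0.0.2", 1234, 80, "TCP", b"GET / HTTP/1.1"),
    ("10.0.0.1", "10.0.0.2", 1234, 80, "TCP", b"Host: example.com"),
    ("10.0.0.3", "10.0.0.2", 4321, 53, "UDP", b"\x12\x34"),
]
for src, dst, sp, dp, proto, payload in packets:
    flows[flow_key(src, dst, sp, dp, proto)].append(payload)

print(len(flows))   # two distinct flows
```

Once packets are bucketed per flow, downstream stages such as pattern matching and DPI can operate on reassembled flow payloads rather than isolated packets.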
Electromagnetic spectrum management system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seastrand, Douglas R.
A system for transmitting a wireless countermeasure signal to disrupt third-party communications is disclosed that includes an antenna configured to receive wireless signals and transmit wireless countermeasure signals, such that the countermeasure signals are responsive to the received wireless signals. A receiver processes the received wireless signals to create processed received signal data, while a spectrum control module subtracts known source signal data from the processed received signal data to generate unknown source signal data. The unknown source signal data is based on unknown wireless signals, such as enemy signals. A transmitter is configured to process the unknown source signal data to create countermeasure signals and transmit a wireless countermeasure signal over the first antenna or a second antenna to thereby interfere with the unknown wireless signals.
One-year-old fear memories rapidly activate human fusiform gyrus
Pizzagalli, Diego A.
2016-01-01
Fast threat detection is crucial for survival. In line with such evolutionary pressure, threat-signaling fear-conditioned faces have been found to rapidly (<80 ms) activate visual brain regions including the fusiform gyrus on the conditioning day. Whether remotely fear conditioned stimuli (CS) evoke similar early processing enhancements is unknown. Here, 16 participants who underwent a differential face fear-conditioning and extinction procedure on day 1 were presented the initial CS 24 h after conditioning (Recent Recall Test) as well as 9-17 months later (Remote Recall Test) while EEG was recorded. Using a data-driven segmentation procedure of CS evoked event-related potentials, five distinct microstates were identified for both the recent and the remote memory test. To probe intracranial activity, EEG activity within each microstate was localized using low resolution electromagnetic tomography analysis (LORETA). In both the recent (41–55 and 150–191 ms) and remote (45–90 ms) recall tests, fear conditioned faces potentiated rapid activation in proximity of fusiform gyrus, even in participants unaware of the contingencies. These findings suggest that rapid processing enhancements of conditioned faces persist over time. PMID:26416784
Response-cue interval effects in extended-runs task switching: memory, or monitoring?
Altmann, Erik M
2017-09-26
This study investigated effects of manipulating the response-cue interval (RCI) in the extended-runs task-switching procedure. In this procedure, a task cue is presented at the start of a run of trials and then withdrawn, such that the task has to be stored in memory to guide performance until the next task cue is presented. The effects of the RCI manipulation were not as predicted by an existing model of memory processes in task switching (Altmann and Gray, Psychol Rev 115:602-639, 2008), suggesting that either the model is incorrect or the RCI manipulation did not have the intended effect. The manipulation did produce a theoretically meaningful pattern, in the form of a main effect on response time that was not accompanied by a similar effect on the error rate. This pattern, which replicated across two experiments, is interpreted here in terms of a process that monitors for the next task cue, with a longer RCI acting as a stronger signal that a cue is about to appear. The results have implications for the human factors of dynamic task environments in which critical events occur unpredictably.
Liu, Keyin; Kong, Xiuqi; Ma, Yanyan; Lin, Weiying
2018-05-01
Carbon monoxide (CO) is a key gaseous signaling molecule in living cells and organisms. This protocol illustrates the synthesis of a highly sensitive Nile Red (NR)-Pd-based fluorescent probe, NR-PdA, and its applications for detecting endogenous CO in tissue culture cells, ex vivo organs, and zebrafish embryos. In the NR-PdA synthesis process, 3-diethylamine phenol reacts with sodium nitrite under acidic conditions to afford 5-(diethylamino)-2-nitrosophenol hydrochloride (compound 1), which is further treated with 1-naphthalenol at high temperature to provide the NR dye via a cyclization reaction. Finally, NR is reacted with palladium acetate to obtain the desired Pd-based fluorescent probe NR-PdA. NR-PdA possesses excellent two-photon excitation and near-IR emission properties, high stability, low background fluorescence, and a low detection limit. In addition to the chemical synthesis procedures, we provide step-by-step procedures for imaging endogenous CO in RAW 264.7 cells, mouse organs ex vivo, and live zebrafish embryos. The synthesis process for the probe requires ∼4 d, and the biological imaging experiments take ∼14 d.
2015-01-01
We report the implementation of high-quality signal processing algorithms into ProteoWizard, an efficient, open-source software package designed for analyzing proteomics tandem mass spectrometry data. Specifically, a new wavelet-based peak-picker (CantWaiT) and a precursor charge determination algorithm (Turbocharger) have been implemented. These additions into ProteoWizard provide universal tools that are independent of vendor platform for tandem mass spectrometry analyses and have particular utility for intralaboratory studies requiring the advantages of different platforms convergent on a particular workflow or for interlaboratory investigations spanning multiple platforms. We compared results from these tools to those obtained using vendor and commercial software, finding that in all cases our algorithms resulted in a comparable number of identified peptides for simple and complex samples measured on Waters, Agilent, and AB SCIEX quadrupole time-of-flight and Thermo Q-Exactive mass spectrometers. The mass accuracy of matched precursor ions also compared favorably with vendor and commercial tools. Additionally, typical analysis runtimes (∼1–100 ms per MS/MS spectrum) were short enough to enable the practical use of these high-quality signal processing tools for large clinical and research data sets. PMID:25411686
Power cepstrum technique with application to model helicopter acoustic data
NASA Technical Reports Server (NTRS)
Martin, R. M.; Burley, C. L.
1986-01-01
The application of the power cepstrum to measured helicopter-rotor acoustic data is investigated. A previously applied correction to the reconstructed spectrum is shown to be incorrect. For an exact echoed signal, the amplitude of the cepstrum echo spike at the delay time is linearly related to the echo's relative amplitude in the time domain. If the measured spectrum is not entirely from the source signal, the cepstrum will not yield the desired echo characteristics, and cepstral aliasing may occur because of the effective sample rate in the frequency domain: the spectral analysis bandwidth must be less than one-half the echo ripple frequency, or cepstral aliasing can occur. The power cepstrum editing technique is a useful tool for removing some of the contamination due to acoustic reflections from measured rotor acoustic spectra. The editing yields an improved estimate of the free-field spectrum, but the correction process is limited by the lack of accurate knowledge of the echo transfer function. An alternate procedure, which does not require cepstral editing, is proposed that allows complete correction of a contaminated spectrum through use of both the transfer function and the delay time of the echo process.
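The power cepstrum underlying the technique can be sketched directly; the white-noise source, echo delay, and relative amplitude below are illustrative:

```python
import numpy as np

fs = 1000                        # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)

# Source signal plus an echo at 50 ms with relative amplitude 0.5.
src = rng.normal(size=t.size)
delay = int(0.05 * fs)
sig = src.copy()
sig[delay:] += 0.5 * src[:-delay]

# Power cepstrum: inverse FFT of the log power spectrum.
spectrum = np.abs(np.fft.rfft(sig)) ** 2
cepstrum = np.fft.irfft(np.log(spectrum))

# The echo produces a spike at the delay time (quefrency of 50 samples).
peak = int(np.argmax(cepstrum[10:200])) + 10   # skip the low-quefrency region
print(peak)
```

Editing out this spike (and its harmonics) and inverting the cepstrum is the basis of the spectrum-correction procedure the report evaluates.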
Ipsen, Andreas
2015-02-03
Despite the widespread use of mass spectrometry (MS) in a broad range of disciplines, the nature of MS data remains very poorly understood, and this places important constraints on the quality of MS data analysis as well as on the effectiveness of MS instrument design. In the following, a procedure for calculating the statistical distribution of the mass peak intensity for MS instruments that use analog-to-digital converters (ADCs) and electron multipliers is presented. It is demonstrated that the physical processes underlying the data-generation process, from the generation of the ions to the signal induced at the detector, and on to the digitization of the resulting voltage pulse, result in data that can be well-approximated by a Gaussian distribution whose mean and variance are determined by physically meaningful instrumental parameters. This allows for a very precise understanding of the signal-to-noise ratio of mass peak intensities and suggests novel ways of improving it. Moreover, it is a prerequisite for being able to address virtually all data analytical problems in downstream analyses in a statistically rigorous manner. The model is validated with experimental data.
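The compound data-generation process described here (Poisson ion arrivals, a random multiplier gain per ion, summed at the ADC) can be simulated numerically; all parameter values are illustrative, and the near-Gaussian behavior follows from the compound-Poisson mean and variance:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model of one mass peak: a Poisson number of ions arrives per scan,
# each ion deposits a random pulse area (multiplier gain), and the ADC
# integrates the total.
mean_ions = 200                  # expected ions contributing per scan
gain_mean, gain_sd = 1.0, 0.4    # per-ion pulse-area distribution

def peak_intensity():
    n = rng.poisson(mean_ions)
    return rng.normal(gain_mean, gain_sd, size=n).sum()

samples = np.array([peak_intensity() for _ in range(20000)])

# Compound-Poisson moments; with many ions the distribution is
# well-approximated by a Gaussian with these parameters.
mean_theory = mean_ions * gain_mean
var_theory = mean_ions * (gain_sd ** 2 + gain_mean ** 2)
print(samples.mean(), samples.var())
```

The theoretical mean and variance are determined entirely by physically meaningful parameters (ion flux and gain distribution), which is what makes the signal-to-noise ratio of peak intensities analytically tractable.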
French, William R; Zimmerman, Lisa J; Schilling, Birgit; Gibson, Bradford W; Miller, Christine A; Townsend, R Reid; Sherrod, Stacy D; Goodwin, Cody R; McLean, John A; Tabb, David L
2015-02-06
We report the implementation of high-quality signal processing algorithms into ProteoWizard, an efficient, open-source software package designed for analyzing proteomics tandem mass spectrometry data. Specifically, a new wavelet-based peak-picker (CantWaiT) and a precursor charge determination algorithm (Turbocharger) have been implemented. These additions into ProteoWizard provide universal tools that are independent of vendor platform for tandem mass spectrometry analyses and have particular utility for intralaboratory studies requiring the advantages of different platforms convergent on a particular workflow or for interlaboratory investigations spanning multiple platforms. We compared results from these tools to those obtained using vendor and commercial software, finding that in all cases our algorithms resulted in a comparable number of identified peptides for simple and complex samples measured on Waters, Agilent, and AB SCIEX quadrupole time-of-flight and Thermo Q-Exactive mass spectrometers. The mass accuracy of matched precursor ions also compared favorably with vendor and commercial tools. Additionally, typical analysis runtimes (∼1-100 ms per MS/MS spectrum) were short enough to enable the practical use of these high-quality signal processing tools for large clinical and research data sets.
Design, fabrication and skin-electrode contact analysis of polymer microneedle-based ECG electrodes
NASA Astrophysics Data System (ADS)
O'Mahony, Conor; Grygoryev, Konstantin; Ciarlone, Antonio; Giannoni, Giuseppe; Kenthao, Anan; Galvin, Paul
2016-08-01
Microneedle-based ‘dry’ electrodes have immense potential for use in diagnostic procedures such as electrocardiography (ECG) analysis, as they eliminate several of the drawbacks associated with the conventional ‘wet’ electrodes currently used for physiological signal recording. To be commercially successful in such a competitive market, it is essential that dry electrodes are manufacturable in high volumes and at low cost. In addition, the topographical nature of these emerging devices means that electrode performance is likely to be highly dependent on the quality of the skin-electrode contact. This paper presents a low-cost, wafer-level micromoulding technology for the fabrication of polymeric ECG electrodes that use microneedle structures to make a direct electrical contact to the body. The double-sided moulding process can be used to eliminate post-process via creation and wafer dicing steps. In addition, measurement techniques have been developed to characterize the skin-electrode contact force. We perform the first analysis of signal-to-noise ratio dependency on contact force, and show that although microneedle-based electrodes can outperform conventional gel electrodes, the quality of ECG recordings is significantly dependent on temporal and mechanical aspects of the skin-electrode interface.
Modeling Fan Effects on the Time Course of Associative Recognition
Schneider, Darryl W.; Anderson, John R.
2011-01-01
We investigated the time course of associative recognition using the response signal procedure, whereby a stimulus is presented and followed after a variable lag by a signal indicating that an immediate response is required. More specifically, we examined the effects of associative fan (the number of associations that an item has with other items in memory) on speed–accuracy tradeoff functions obtained in a previous response signal experiment involving briefly studied materials and in a new experiment involving well-learned materials. High fan lowered asymptotic accuracy or the rate of rise in accuracy across lags, or both. We developed an Adaptive Control of Thought–Rational (ACT-R) model for the response signal procedure to explain these effects. The model assumes that high fan results in weak associative activation that slows memory retrieval, thereby decreasing the probability that retrieval finishes in time and producing a speed–accuracy tradeoff function. The ACT-R model provided an excellent account of the data, yielding quantitative fits that were as good as those of the best descriptive model for response signal data. PMID:22197797
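A descriptive speed-accuracy tradeoff function of the kind commonly fitted to response signal data (a shifted exponential with asymptote, intercept, and rate parameters) can be sketched as follows; the parameter values are illustrative, not fits from the study:

```python
import math

def sat_accuracy(t, lam, delta, beta):
    """Shifted-exponential speed-accuracy tradeoff: accuracy (e.g. d')
    rises from an intercept delta toward asymptote lam at rate 1/beta."""
    if t <= delta:
        return 0.0
    return lam * (1.0 - math.exp(-(t - delta) / beta))

# High fan modeled as a lower asymptote and slower rise than low fan.
lags = (0.5, 1.0, 3.0)   # seconds from stimulus onset to response signal
low_fan = [sat_accuracy(t, lam=2.5, delta=0.3, beta=0.4) for t in lags]
high_fan = [sat_accuracy(t, lam=2.0, delta=0.3, beta=0.6) for t in lags]
print([round(a, 2) for a in low_fan])
print([round(a, 2) for a in high_fan])
```

The ACT-R account in the paper generates this kind of curve mechanistically, with weak associative activation under high fan lowering the probability that retrieval finishes before the response deadline.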
Jan, Shau-Shiun; Sun, Chih-Cheng
2010-01-01
The detection of low received power global positioning system (GPS) signals in the signal acquisition process is an important issue for GPS applications. Mitigating the miss-detection of low received power signals is crucial, especially in urban or indoor environments. This paper proposes a signal existence verification (SEV) process to detect and subsequently verify low received power GPS signals. The SEV process is based on the time-frequency representation of the GPS signal: it captures the signal's characteristics in the time-frequency plane to enhance acquisition performance. Several simulations and experiments demonstrate the effectiveness of the proposed method for low received power signal detection. The contribution of this work is that the SEV process is an additional scheme that assists GPS signal acquisition in low received power signal detection without changing the original acquisition or tracking algorithms.
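The idea of verifying a weak signal through its time-frequency signature can be sketched with a plain short-time FFT; the weak tone stands in for a low-power signal and all numbers are illustrative:

```python
import numpy as np

fs = 4096
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(3)

# Weak tone buried in noise (illustrative stand-in for a low received
# power signal that a single detection pass would miss).
f0 = 512.0
x = 0.25 * np.sin(2 * np.pi * f0 * t) + rng.normal(scale=1.0, size=t.size)

# Simple time-frequency representation: magnitudes of short-time FFTs.
win = 512
frames = x[: (x.size // win) * win].reshape(-1, win)
tf = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1))

# A real signal stays at the same frequency across frames, so averaging
# the time-frequency plane over time lifts it above the noise floor.
avg = tf.mean(axis=0)
peak_bin = int(np.argmax(avg))
print(peak_bin * fs / win)   # recovered signal frequency, Hz
```

Persistence across time frames is the distinguishing feature here: noise peaks wander between frames while a genuine signal's energy stays in one frequency track, which is the intuition behind verifying candidate detections in the time-frequency plane.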
NASA Astrophysics Data System (ADS)
Azarpour, Masoumeh; Enzner, Gerald
2017-12-01
Binaural noise reduction, with applications for instance in hearing aids, has been a very significant challenge. This task relates to the optimal utilization of the available microphone signals for the estimation of the ambient noise characteristics and for the optimal filtering algorithm to separate the desired speech from the noise. The additional requirements of low computational complexity and low latency further complicate the design. A particular challenge results from the desired reconstruction of binaural speech input with spatial cue preservation. The latter essentially diminishes the utility of multiple-input/single-output filter-and-sum techniques such as beamforming. In this paper, we propose a comprehensive and effective signal processing configuration with which most of the aforementioned criteria can be met suitably. This relates especially to the requirement of efficient online adaptive processing for noise estimation and optimal filtering while preserving the binaural cues. Regarding noise estimation, we consider three different architectures: interaural (ITF), cross-relation (CR), and principal-component (PCA) target blocking. An objective comparison with two other noise PSD estimation algorithms demonstrates the superiority of the blocking-based noise estimators, especially the CR-based and ITF-based blocking architectures. Moreover, we present a new noise reduction filter based on minimum mean-square error (MMSE), which belongs to the class of common gain filters, hence being rigorous in terms of spatial cue preservation but also efficient and competitive for the acoustic noise reduction task. A formal real-time subjective listening test procedure is also developed in this paper. The proposed listening test enables a real-time assessment of the proposed computationally efficient noise reduction algorithms in a realistic acoustic environment, e.g., considering time-varying room impulse responses and the Lombard effect. 
The listening test outcome reveals that the signals processed by the blocking-based algorithms are significantly preferred over the noisy signal in terms of instantaneous noise attenuation. Furthermore, the listening test data analysis confirms the conclusions drawn based on the objective evaluation.
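A minimal sketch of the common-gain idea, assuming a simple Wiener-style gain rather than the paper's specific MMSE estimator: one spectral gain is computed from both channels and applied identically to left and right, so interaural level and phase cues are preserved:

```python
import numpy as np

def common_gain_filter(left, right, noise_psd, eps=1e-12, floor=0.1):
    """Apply one spectral gain to both channels. Because both spectra
    are scaled identically, interaural differences are unchanged."""
    L = np.fft.rfft(left)
    R = np.fft.rfft(right)
    # Channel-averaged power as a rough speech-plus-noise PSD estimate.
    mix_psd = 0.5 * (np.abs(L) ** 2 + np.abs(R) ** 2)
    snr = np.maximum(mix_psd - noise_psd, 0.0) / (noise_psd + eps)
    gain = np.maximum(snr / (1.0 + snr), floor)  # gain floor limits artifacts
    return (np.fft.irfft(gain * L, len(left)),
            np.fft.irfft(gain * R, len(right)))

# Toy binaural input: same source, attenuated and delayed at one ear.
rng = np.random.default_rng(4)
n = 1024
s = rng.normal(size=n)
left = s + 0.05 * rng.normal(size=n)
right = 0.7 * np.roll(s, 3) + 0.05 * rng.normal(size=n)
noise_psd = np.full(n // 2 + 1, 0.05 ** 2 * n)   # assumed-known noise PSD
out_left, out_right = common_gain_filter(left, right, noise_psd)
```

In the paper the noise PSD is not assumed known but estimated online via the blocking architectures (ITF, CR, or PCA target blocking); the sketch isolates only the common-gain filtering stage.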
Kaneko, Hidekazu; Tamura, Hiroshi; Tate, Shunta; Kawashima, Takahiro; Suzuki, Shinya S; Fujita, Ichiro
2010-08-01
In order for patients with disabilities to control assistive devices with their own neural activity, multineuronal spike trains must be efficiently decoded because only limited computational resources can be used to generate prosthetic control signals in portable real-time applications. In this study, we compare the abilities of two vectorizing procedures (multineuronal and time-segmental) to extract information from spike trains during the same total neuron-seconds. In the multineuronal vectorizing procedure, we defined a response vector whose components represented the spike counts of one to five neurons. In the time-segmental vectorizing procedure, a response vector consisted of components representing a neuron's spike counts for one to five time-segment(s) of a response period of 1 s. Spike trains were recorded from neurons in the inferior temporal cortex of monkeys presented with visual stimuli. We examined whether the amount of information of the visual stimuli carried by these neurons differed between the two vectorizing procedures. The amount of information calculated with the multineuronal vectorizing procedure, but not the time-segmental vectorizing procedure, significantly increased with the dimensions of the response vector. We conclude that the multineuronal vectorizing procedure is superior to the time-segmental vectorizing procedure in efficiently extracting information from neuronal signals. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
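The two vectorizing procedures can be sketched as follows; the spike times are invented toy data for a 1 s response period:

```python
import numpy as np

# Toy data: spike times (in seconds) of three neurons within a 1 s
# response period. Values are illustrative.
spike_times = [
    np.array([0.05, 0.12, 0.40, 0.71]),   # neuron 0
    np.array([0.30, 0.33, 0.90]),         # neuron 1
    np.array([0.10, 0.55]),               # neuron 2
]

def multineuronal_vector(trains):
    """One component per neuron: each neuron's total spike count over
    the full response period."""
    return np.array([len(t) for t in trains])

def time_segmental_vector(train, n_segments):
    """One neuron, one component per time segment: spike counts in
    equal subdivisions of the 1 s response period."""
    counts, _ = np.histogram(train, bins=n_segments, range=(0.0, 1.0))
    return counts

print(multineuronal_vector(spike_times))          # one count per neuron
print(time_segmental_vector(spike_times[0], 5))   # counts per 200 ms bin
```

Both procedures consume the same total neuron-seconds for equal vector dimensions, which is what makes the information comparison in the study well-posed.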
The principles of quantification applied to in vivo proton MR spectroscopy.
Helms, Gunther
2008-08-01
Following the identification of metabolite signals in the in vivo MR spectrum, quantification is the procedure to estimate numerical values of their concentrations. The two essential steps are discussed in detail: analysis by fitting a model of prior knowledge, that is, the decomposition of the spectrum into the signals of singular metabolites; then, normalization of these signals to yield concentration estimates. Special attention is given to using the in vivo water signal as internal reference.
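A minimal sketch of internal water referencing, assuming a simple proportionality between fitted signal amplitudes and concentrations; the proton counts, tissue water content, and correction factors are illustrative placeholders, not a calibrated quantification pipeline:

```python
# Internal water referencing in proton MRS: the fitted metabolite signal
# amplitude is normalized to the water signal from the same voxel.
WATER_CONC_MM = 55510.0        # concentration of pure water, mM

def metabolite_concentration(s_met, s_water, n_protons,
                             water_content=0.78, relax_corr=1.0):
    """Estimate a metabolite concentration (mM) from fitted amplitudes.

    s_met, s_water : fitted amplitudes of metabolite and water signals
    n_protons      : protons contributing to the metabolite resonance
    water_content  : assumed tissue water fraction (illustrative)
    relax_corr     : combined T1/T2 relaxation correction (illustrative)
    """
    # Water contributes 2 protons per molecule, hence the 2/n scaling.
    return (s_met / s_water) * (2.0 / n_protons) \
        * WATER_CONC_MM * water_content * relax_corr

# E.g. a 3-proton resonance with amplitude 4e-4 relative to water:
print(round(metabolite_concentration(4e-4, 1.0, 3), 1))
```

The two steps in the abstract map directly onto this sketch: model fitting supplies `s_met` and `s_water`, and normalization converts their ratio into a concentration estimate via the water reference.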
NASA Technical Reports Server (NTRS)
Lund, D.
1998-01-01
This report presents a description of the tests performed, and the test data, for the A1 METSAT Signal Processor Assembly PN: 1331679-2, S/N F04. The assembly was tested in accordance with AE-26754, "METSAT Signal Processor Scan Drive Test and Integration Procedure." The objective is to demonstrate functionality of the signal processor prior to instrument integration.