Xiang, Suyun; Wang, Wei; Xia, Jia; Xiang, Bingren; Ouyang, Pingkai
2009-09-01
The stochastic resonance algorithm is applied to the trace analysis of alkyl halides and alkyl benzenes in water samples. Compared with applying the algorithm to a single signal, optimizing the system parameters for a multicomponent sample is more complex. In this article, the resolution of adjacent chromatographic peaks is, for the first time, included in the parameter optimization. With the optimized parameters, the algorithm gave an ideal output with good resolution as well as an enhanced signal-to-noise ratio. Applying the enhanced signals, the method improved the limit of detection and exhibited good linearity, which ensures accurate determination of the multiple components.
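As a rough illustration of the underlying technique (not the authors' implementation), stochastic resonance is often realized with an overdamped bistable system dx/dt = a*x - b*x^3 + s(t); the sketch below integrates such a system with a fourth-order Runge-Kutta step, with a and b as the tunable system parameters.

```python
import numpy as np

def stochastic_resonance(signal, a, b, dt=1.0):
    """Pass a noisy 1-D signal through the overdamped bistable system
    dx/dt = a*x - b*x**3 + s(t) (a common stochastic resonance model)."""
    x = np.zeros(len(signal))
    for i in range(1, len(signal)):
        s = signal[i - 1]                      # input held constant over the step
        f = lambda u: a * u - b * u**3 + s
        k1 = f(x[i - 1])
        k2 = f(x[i - 1] + 0.5 * dt * k1)
        k3 = f(x[i - 1] + 0.5 * dt * k2)
        k4 = f(x[i - 1] + dt * k3)
        x[i] = x[i - 1] + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return x
```

In a multicomponent setting such as the one described above, a and b could be scanned over a grid and each output scored jointly on signal-to-noise ratio and on the resolution of adjacent peaks.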
Epstein, F H; Mugler, J P; Brookeman, J R
1994-02-01
A number of pulse sequence techniques, including magnetization-prepared gradient echo (MP-GRE), segmented GRE, and hybrid RARE, employ a relatively large number of variable pulse sequence parameters and acquire the image data during a transient signal evolution. These sequences have recently been proposed and/or used for clinical applications in the brain, spine, liver, and coronary arteries. Thus, the need for a method of deriving optimal pulse sequence parameter values for this class of sequences now exists. Due to the complexity of these sequences, conventional optimization approaches, such as applying differential calculus to signal difference equations, are inadequate. We have developed a general framework for adapting the simulated annealing algorithm to pulse sequence parameter value optimization, and applied this framework to the specific case of optimizing the white matter-gray matter signal difference for a T1-weighted variable flip angle 3D MP-RAGE sequence. Using our algorithm, the values of 35 sequence parameters, including the magnetization-preparation RF pulse flip angle and delay time, 32 flip angles in the variable flip angle gradient-echo acquisition sequence, and the magnetization recovery time, were derived. Optimized 3D MP-RAGE achieved up to a 130% increase in white matter-gray matter signal difference compared with optimized 3D RF-spoiled FLASH with the same total acquisition time. The simulated annealing approach was effective at deriving optimal parameter values for a specific 3D MP-RAGE imaging objective, and may be useful for other imaging objectives and sequences in this general class.
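Since the approach above rests on a generic simulated annealing loop, a minimal sketch of such an optimizer is given below; the objective function (e.g., a Bloch-simulated white matter-gray matter signal difference as a function of the sequence parameters) and the neighbor move are assumptions, not the authors' implementation.

```python
import math
import random

def simulated_annealing(objective, x0, neighbor, t0=1.0, cooling=0.999, n_iter=20000):
    """Generic simulated annealing maximizer. `objective` maps a parameter
    vector to a score; `neighbor` proposes a small random perturbation."""
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_iter):
        y = neighbor(x)
        fy = objective(y)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if fy >= fx or random.random() < math.exp((fy - fx) / t):
            x, fx = y, fy
            if fx > fbest:
                best, fbest = x, fx
        t *= cooling        # geometric cooling schedule
    return best, fbest
```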
Optimal Design of Calibration Signals in Space-Borne Gravitational Wave Detectors
NASA Technical Reports Server (NTRS)
Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Ferroni, Valerio;
2016-01-01
Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterisation of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
Optimal Design of Calibration Signals in Space Borne Gravitational Wave Detectors
NASA Technical Reports Server (NTRS)
Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Thorpe, James I.
2014-01-01
Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterization of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
Strahl, Stefan; Mertins, Alfred
2008-07-18
Evidence that neurosensory systems use sparse signal representations as well as improved performance of signal processing algorithms using sparse signal models raised interest in sparse signal coding in recent years. For natural audio signals like speech and environmental sounds, gammatone atoms have been derived as expansion functions that generate a nearly optimal sparse signal model (Smith, E., Lewicki, M., 2006. Efficient auditory coding. Nature 439, 978-982). Furthermore, gammatone functions are established models for the human auditory filters. Thus far, a practical application of a sparse gammatone signal model has been prevented by the fact that deriving the sparsest representation is, in general, computationally intractable. In this paper, we applied an accelerated version of the matching pursuit algorithm for gammatone dictionaries allowing real-time and large data set applications. We show that a sparse signal model in general has advantages in audio coding and that a sparse gammatone signal model encodes speech more efficiently in terms of sparseness than a sparse modified discrete cosine transform (MDCT) signal model. We also show that the optimal gammatone parameters derived for English speech do not match the human auditory filters, suggesting that, for signal processing applications, the parameters should be derived individually for each signal class rather than taken from psychometrically derived values. For brain research, this means that care should be taken when directly transferring findings of optimality from technical to biological systems.
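The accelerated gammatone-specific pursuit described above exploits the structure of the dictionary; the sketch below shows only the generic matching pursuit iteration on a precomputed dictionary of unit-norm atoms, as a rough illustration rather than the authors' optimized implementation.

```python
import numpy as np

def matching_pursuit(x, dictionary, n_atoms=50):
    """Greedy matching pursuit. `dictionary` is an (n_samples, n_total_atoms)
    array of unit-norm atoms (e.g., gammatone functions at various centre
    frequencies and onsets). Returns a sparse coefficient vector."""
    residual = np.asarray(x, dtype=float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual       # inner products with all atoms
        k = int(np.argmax(np.abs(correlations)))     # best-matching atom
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs
```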
Optimal wavelets for biomedical signal compression.
Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario
2006-07-01
Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, for 50% compression rate, optimal wavelet, mean+/-SD, 5.46+/-1.01%; worst wavelet 12.76+/-2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
NASA Astrophysics Data System (ADS)
Clenet, A.; Ravera, L.; Bertrand, B.; den Hartog, R.; Jackson, B.; van Leeuwen, B.-J.; van Loon, D.; Parot, Y.; Pointecouteau, E.; Sournac, A.
2014-11-01
IRAP is developing the readout electronics of SPICA-SAFARI's TES bolometer arrays. Based on the frequency-domain multiplexing technique, the readout electronics provides the AC signals to voltage-bias the detectors, demodulates the data, and computes a feedback to linearize the detection chain. The feedback is computed with a specific technique, the so-called baseband feedback (BBFB), which ensures that the loop is stable even with long propagation and processing delays (i.e., several μs) and with fast signals (i.e., frequency carriers of the order of 5 MHz). To optimize the power consumption we took advantage of the reduced science signal bandwidth to decouple the signal sampling frequency and the data processing rate. This technique allowed a reduction of the power consumption of the circuit by a factor of 10. Beyond the firmware architecture, the optimization of the instrument concerns the characterization routines and the definition of the optimal parameters. Indeed, to operate a TES array one has to properly define about 21000 parameters. We defined a set of procedures to automatically characterize these parameters and find the optimal settings.
Optimal experimental design for parameter estimation of a cell signaling model.
Bandara, Samuel; Schlöder, Johannes P; Eils, Roland; Bock, Hans Georg; Meyer, Tobias
2009-11-01
Differential equation models that describe the dynamic changes of biochemical signaling states are important tools to understand cellular behavior. An essential task in building such representations is to infer the affinities, rate constants, and other parameters of a model from actual measurement data. However, intuitive measurement protocols often fail to generate data that restrict the range of possible parameter values. Here we utilized a numerical method to iteratively design optimal live-cell fluorescence microscopy experiments in order to reveal pharmacological and kinetic parameters of a phosphatidylinositol 3,4,5-trisphosphate (PIP(3)) second messenger signaling process that is deregulated in many tumors. The experimental approach included the activation of endogenous phosphoinositide 3-kinase (PI3K) by chemically induced recruitment of a regulatory peptide, reversible inhibition of PI3K using a kinase inhibitor, and monitoring of the PI3K-mediated production of PIP(3) lipids using the pleckstrin homology (PH) domain of Akt. We found that an intuitively planned and established experimental protocol did not yield data from which relevant parameters could be inferred. Starting from a set of poorly defined model parameters derived from the intuitively planned experiment, we calculated concentration-time profiles for both the inducing and the inhibitory compound that would minimize the predicted uncertainty of parameter estimates. Two cycles of optimization and experimentation were sufficient to narrowly confine the model parameters, with the mean variance of estimates dropping more than sixty-fold. Thus, optimal experimental design proved to be a powerful strategy to minimize the number of experiments needed to infer biological parameters from a cell signaling assay.
NASA Astrophysics Data System (ADS)
Liu, Weiqi; Huang, Peng; Peng, Jinye; Fan, Jianping; Zeng, Guihua
2018-02-01
To support practical quantum key distribution (QKD), it is critical to stabilize the physical parameters of the signals, e.g., the intensity, phase, and polarization of the laser signals, so that such QKD systems can achieve better performance and practical security. In this paper, an approach is developed by integrating a support vector regression (SVR) model to optimize the performance and practical security of the QKD system. First, an SVR model is learned to precisely predict the time evolution of the physical parameters of the signals. Second, the predicted time evolution is employed as feedback to control the QKD system so that it achieves optimal performance and practical security. Finally, our proposed approach is exemplified using the intensity evolution of the laser light and the local oscillator pulse in a Gaussian modulated coherent state QKD system. Our experimental results demonstrate three significant benefits of the SVR-based approach: (1) it allows the QKD system to achieve optimal performance and practical security, (2) it does not require any additional resources or any real-time monitoring module to support automatic prediction of the time evolution of the physical parameters of the signals, and (3) it is applicable to any measurable physical parameter of the signals in a practical QKD system.
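A minimal sketch of the feedback-prediction idea, using scikit-learn's SVR on made-up intensity samples (the feature construction, kernel, and hyperparameters here are assumptions, not the settings used in the paper):

```python
import numpy as np
from sklearn.svm import SVR

# Placeholder training data: past local-oscillator intensity samples indexed by time.
t_train = np.arange(1000, dtype=float).reshape(-1, 1)
intensity_train = 1.0 + 0.01 * np.sin(t_train.ravel() / 50.0)

model = SVR(kernel='rbf', C=10.0, epsilon=1e-3)
model.fit(t_train, intensity_train)

# Predicted drift over the next interval, to be fed back to the control system.
t_future = np.arange(1000, 1100, dtype=float).reshape(-1, 1)
predicted_intensity = model.predict(t_future)
```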
Performance comparison of extracellular spike sorting algorithms for single-channel recordings.
Wild, Jiri; Prekopcsak, Zoltan; Sieger, Tomas; Novak, Daniel; Jech, Robert
2012-01-30
Proper classification of action potentials from extracellular recordings is essential for making an accurate study of neuronal behavior. Many spike sorting algorithms have been presented in the technical literature. However, no comparative analysis has hitherto been performed. In our study, three widely-used publicly-available spike sorting algorithms (WaveClus, KlustaKwik, OSort) were compared with regard to their parameter settings. The algorithms were evaluated using 112 artificial signals (publicly available online) with 2-9 different neurons and varying noise levels between 0.00 and 0.60. An optimization technique based on Adjusted Mutual Information was employed to find near-optimal parameter settings for a given artificial signal and algorithm. All three algorithms performed significantly better (p<0.01) with optimized parameters than with the default ones. WaveClus was the most accurate spike sorting algorithm, receiving the best evaluation score for 60% of all signals. OSort operated at almost five times the speed of the other algorithms. In terms of accuracy, OSort performed significantly less well (p<0.01) than WaveClus for signals with a noise level in the range 0.15-0.30. KlustaKwik achieved similar scores to WaveClus for signals with low noise level 0.00-0.15 and was worse otherwise. In conclusion, none of the three compared algorithms was optimal in general. The accuracy of the algorithms depended on proper choice of the algorithm parameters and also on specific properties of the examined signal.
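As an illustration of the parameter-tuning idea (the study's optimizer is described only as being based on Adjusted Mutual Information, so the exhaustive grid search and the sorter interface below are assumptions):

```python
import itertools
import numpy as np
from sklearn.metrics import adjusted_mutual_info_score

def tune_sorter(sorter, signal, true_labels, param_grid):
    """Score each parameter combination of a spike sorter against ground-truth
    cluster labels of an artificial signal using Adjusted Mutual Information.
    `sorter(signal, **params)` is a hypothetical callable returning one label
    per spike, aligned with `true_labels`."""
    best_params, best_ami = None, -np.inf
    keys = list(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        ami = adjusted_mutual_info_score(true_labels, sorter(signal, **params))
        if ami > best_ami:
            best_params, best_ami = params, ami
    return best_params, best_ami
```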
Multidimensional Optimization of Signal Space Distance Parameters in WLAN Positioning
Brković, Milenko; Simić, Mirjana
2014-01-01
Accurate indoor localization of mobile users is one of the challenging problems of the last decade. Besides delivering high speed Internet, Wireless Local Area Network (WLAN) can be used as an effective indoor positioning system, being competitive both in terms of accuracy and cost. Among the localization algorithms, nearest neighbor fingerprinting algorithms based on Received Signal Strength (RSS) parameter have been extensively studied as an inexpensive solution for delivering indoor Location Based Services (LBS). In this paper, we propose the optimization of the signal space distance parameters in order to improve precision of WLAN indoor positioning, based on nearest neighbor fingerprinting algorithms. Experiments in a real WLAN environment indicate that proposed optimization leads to substantial improvements of the localization accuracy. Our approach is conceptually simple, is easy to implement, and does not require any additional hardware.
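A rough sketch of weighted nearest-neighbour fingerprinting with a parameterized signal-space distance (the specific weighting and Minkowski order optimized in the paper may differ; the interface below is illustrative):

```python
import numpy as np

def estimate_position(rss_online, fingerprints, positions, weights, p=2.0, k=3):
    """Weighted k-nearest-neighbour fingerprinting.
    fingerprints: (n_points, n_aps) calibration RSS vectors
    positions:    (n_points, 2) coordinates of the calibration points
    weights, p:   per-AP weights and Minkowski order -- the signal-space
                  distance parameters being optimized."""
    d = np.sum(weights * np.abs(fingerprints - rss_online) ** p, axis=1) ** (1.0 / p)
    nearest = np.argsort(d)[:k]
    return positions[nearest].mean(axis=0)   # centroid of the k closest reference points
```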
NASA Astrophysics Data System (ADS)
Gromov, V. A.; Sharygin, G. S.; Mironov, M. V.
2012-08-01
An interval method of radar signal detection and selection based on a non-energetic polarization parameter, the ellipticity angle, is suggested. The examined method is optimal by the Neyman-Pearson criterion. The probability of correct detection for a preset probability of false alarm is calculated for different signal-to-noise ratios. Recommendations for optimization of the given method are provided.
NASA Astrophysics Data System (ADS)
Fernandes, Virgínia C.; Vera, Jose L.; Domingues, Valentina F.; Silva, Luís M. S.; Mateus, Nuno; Delerue-Matos, Cristina
2012-12-01
A multiclass analysis method was optimized in order to analyze pesticide traces by gas chromatography with ion-trap and tandem mass spectrometry (GC-MS/MS). The influence of some analytical parameters on pesticide signal response was explored. Five ion trap mass spectrometry (IT-MS) operating parameters, including isolation time (IT), excitation voltage (EV), excitation time (ET), maximum excitation energy or "q" value (q), and isolation mass window (IMW), were numerically tested in order to maximize the instrument's analytical signal response. For this, multiple linear regression was used in the data analysis to evaluate the influence of the five parameters on the analytical response of the ion trap mass spectrometer and to predict its response. The assessment of the five parameters based on the regression equations substantially increased the sensitivity of IT-MS/MS in the MS/MS mode. The results obtained show that for most of the pesticides, these parameters have a strong influence on both signal response and detection limit. Using the optimized method, a multiclass pesticide analysis was performed for 46 pesticides in a strawberry matrix. Levels higher than the limit established for strawberries by the European Union were found in some samples.
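The regression step can be illustrated with an ordinary least-squares fit of the signal response against the five IT-MS parameters; the numbers below are placeholders, not data from the study:

```python
import numpy as np

# Each row: [IT, EV, ET, q, IMW] for one experimental run; y: measured peak response.
X = np.array([[6.0, 0.5, 15.0, 0.30, 2.0],
              [8.0, 0.7, 20.0, 0.35, 2.5],
              [10.0, 0.9, 25.0, 0.40, 3.0],
              [12.0, 1.1, 30.0, 0.45, 3.5],
              [14.0, 1.3, 35.0, 0.50, 4.0]])
y = np.array([1.2e5, 1.8e5, 2.1e5, 1.9e5, 1.6e5])

A = np.hstack([np.ones((X.shape[0], 1)), X])      # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)      # multiple linear regression fit
predicted_response = A @ coef                     # model-predicted signal response
```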
Prediction of acoustic feature parameters using myoelectric signals.
Lee, Ki-Seung
2010-07-01
It is well-known that a clear relationship exists between human voices and myoelectric signals (MESs) from the area of the speaker's mouth. In this study, we utilized this information to implement a speech synthesis scheme in which MES alone was used to predict the parameters characterizing the vocal-tract transfer function of specific speech signals. Several feature parameters derived from MES were investigated to find the optimal feature for maximization of the mutual information between the acoustic and the MES features. After the optimal feature was determined, an estimation rule for the acoustic parameters was proposed, based on a minimum mean square error (MMSE) criterion. In a preliminary study, 60 isolated words were used for both objective and subjective evaluations. The results showed that the average Euclidean distance between the original and predicted acoustic parameters was reduced by about 30% compared with the average Euclidean distance of the original parameters. The intelligibility of the synthesized speech signals using the predicted features was also evaluated. A word-level identification ratio of 65.5% and a syllable-level identification ratio of 73% were obtained through a listening test.
Vogel, Michael W; Vegh, Viktor; Reutens, David C
2013-05-01
This paper investigates optimal placement of a localized single-axis magnetometer for ultralow field (ULF) relaxometry in view of various sample shapes and sizes. The authors used finite element method for the numerical analysis to determine the sample magnetic field environment and evaluate the optimal location of the single-axis magnetometer. Given the different samples, the authors analysed the magnetic field distribution around the sample and determined the optimal orientation and possible positions of the sensor to maximize signal strength, that is, the power of the free induction decay. The authors demonstrate that a glass vial with flat bottom and 10 ml volume is the best structure to achieve the highest signal out of samples studied. This paper demonstrates the importance of taking into account the combined effects of sensor configuration and sample parameters for signal generation prior to designing and constructing ULF systems with a single-axis magnetometer. Through numerical simulations the authors were able to optimize structural parameters, such as sample shape and size, sensor orientation and location, to maximize the measured signal in ultralow field relaxometry.
Fault Detection of Bearing Systems through EEMD and Optimization Algorithm
Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan
2017-01-01
This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD) based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner-race, outer-race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, PCA and Isomap algorithm are used to classify and visualize this parameter vector, to separate damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, to maximize the visualization effect of separating and grouping of parameter vectors in three-dimensional space.
Zhang, Tao; Gao, Feng; Muhamedsalih, Hussam; Lou, Shan; Martin, Haydn; Jiang, Xiangqian
2018-03-20
The phase slope method, which estimates height through the fringe pattern frequency, and the phase method, which estimates height through the fringe phase, are the fringe analysis algorithms widely used in interferometry. Generally, they both extract the phase information by filtering the signal in the frequency domain after a Fourier transform. Among the numerous papers in the literature about these algorithms, it is found that the design of the filter, which plays an important role, has never been discussed in detail. This paper focuses on the filter design in these algorithms for wavelength scanning interferometry (WSI), trying to optimize the parameters to acquire the optimal results. The spectral characteristics of the interference signal are analyzed first. The effective signal is found to be narrow-band (near single frequency), and the central frequency is calculated theoretically. Therefore, the position of the filter pass-band is determined. The width of the filter window is optimized through simulation to balance noise elimination against filter ringing. Experimental validation of the approach is provided, and the results agree very well with the simulation. The experiment shows that accuracy can be improved by optimizing the filter design, especially when the signal quality, i.e., the signal-to-noise ratio (SNR), is low. The proposed method also shows the potential to improve immunity to environmental noise by designing an adaptive filter that adapts to the signal to acquire optimal results, once the signal SNR can be estimated accurately.
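A minimal sketch of the band-selection step for a WSI fringe signal, assuming a Gaussian pass-band centred on the theoretically calculated carrier frequency (the optimal window width is exactly what the paper tunes):

```python
import numpy as np

def fringe_phase(signal, fs, f_center, half_width):
    """Keep a narrow Gaussian window around +f_center in the spectrum and
    return the unwrapped phase of the filtered (complex) signal; the phase
    slope or phase value then yields the height estimate."""
    n = len(signal)
    spectrum = np.fft.fft(signal)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    window = np.exp(-0.5 * ((freqs - f_center) / half_width) ** 2)
    filtered = np.fft.ifft(spectrum * window)        # analytic-like narrow-band signal
    return np.unwrap(np.angle(filtered))
```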
Parameter tuning method for dither compensation of a pneumatic proportional valve with friction
NASA Astrophysics Data System (ADS)
Wang, Tao; Song, Yang; Huang, Leisheng; Fan, Wei
2016-05-01
In the practical application of pneumatic control devices, the nonlinearity of a pneumatic control valve becomes the main factor affecting the control performance, and it arises mainly from the dynamic friction force. The dynamic friction inside the valve may cause hysteresis and a dead zone. In this paper, a dither compensation mechanism is proposed to reduce these negative effects on the basis of an analysis of the friction mechanism. A specific dither signal (a sinusoidal signal) was superimposed on the control signal of the valve. Based on the relationship between the parameters of the dither signal and the inherent characteristics of the proportional servo valve, a parameter tuning method was proposed, which uses a displacement sensor to measure the maximum static friction inside the valve. According to the experimental results, the proper amplitude ranges are determined for different pressures. To obtain the optimal parameters of the dither signal, dither compensation experiments were carried out under different signal amplitudes and gas pressures. Optimal parameters are determined under two kinds of pressure conditions. Using the tuned parameters, a valve spool displacement experiment was performed. The experimental results show that the hysteresis of the proportional servo valve is significantly reduced. Through simulation and experiments, the cut-off frequency of the proportional valve was also shown to be widened. Therefore, after adding the dither signal, both the static and dynamic characteristics of the proportional valve are improved to a certain degree. This research proposes a parameter tuning method for the dither signal, and the validity of the method is verified experimentally.
Optimization of the Design of Pre-Signal System Using Improved Cellular Automaton
Li, Yan; Li, Ke; Tao, Siran; Chen, Kuanmin
2014-01-01
The pre-signal system can improve the efficiency of an intersection approach under rational design. One of the main obstacles in optimizing the design of a pre-signal system is that driving behaviors in the sorting area cannot be well evaluated. The NaSch model was modified by considering slow probability, turning-deceleration rules, and lane-changing rules. It was calibrated with field-observed data to explore the interactions among design parameters. The simulation results of the proposed model indicate that the length of the sorting area, traffic demand, signal timing, and lane allocation are the most important influencing factors. Recommendations for these design parameters are provided. The findings of this paper can serve as a foundation for the design of pre-signal systems and show promising improvement in traffic mobility.
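For reference, one update of the classic NaSch rules on a single circular lane is sketched below; the paper's modifications (turning deceleration, lane changing, sorting-area boundaries) are omitted, so this is only the baseline model.

```python
import numpy as np

def nasch_step(positions, velocities, v_max, p_slow, road_length):
    """One NaSch update: accelerate, keep the gap, random slowdown, move.
    `positions` are integer cell indices on a ring of `road_length` cells."""
    order = np.argsort(positions)
    positions, velocities = positions[order], velocities[order]
    gaps = np.roll(positions, -1) - positions - 1
    gaps[-1] += road_length                                  # wrap-around gap
    velocities = np.minimum(velocities + 1, v_max)           # rule 1: acceleration
    velocities = np.minimum(velocities, gaps)                # rule 2: collision avoidance
    slowdown = np.random.random(len(velocities)) < p_slow
    velocities = np.maximum(velocities - slowdown, 0)        # rule 3: random slowdown
    positions = (positions + velocities) % road_length       # rule 4: movement
    return positions, velocities
```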
Parameter optimization for reproducible cardiac 1H-MR spectroscopy at 3 Tesla.
de Heer, Paul; Bizino, Maurice B; Lamb, Hildo J; Webb, Andrew G
2016-11-01
To optimize data acquisition parameters in cardiac proton MR spectroscopy, and to evaluate the intra- and intersession variability in myocardial triglyceride content. Data acquisition parameters at 3 Tesla (T) were optimized and reproducibility measured using, in total, 49 healthy subjects. The signal-to-noise ratio (SNR) and the variance in metabolite amplitude between averages were measured for: (i) global versus local power optimization; (ii) static magnetic field (B0) shimming performed during free-breathing or within breathholds; (iii) post R-wave peak measurement times between 50 and 900 ms; (iv) without respiratory compensation, with breathholds and with navigator triggering; and (v) frequency selective excitation, Chemical Shift Selective (CHESS) and Multiply Optimized Insensitive Suppression Train (MOIST) water suppression techniques. Using the optimized parameters, intra- and intersession myocardial triglyceride content reproducibility was measured. Two cardiac proton spectra were acquired with the same parameters and compared (intrasession reproducibility), after which the subject was removed from and placed back in the scanner and a third spectrum was acquired, which was compared with the first measurement (intersession reproducibility). Local power optimization increased SNR on average by 22% compared with global power optimization (P = 0.0002). The average linewidth was not significantly different for pencil-beam B0 shimming using free-breathing or breathholds (19.1 Hz versus 17.5 Hz; P = 0.15). The highest signal stability occurred at a cardiac trigger delay around 240 ms. The mean amplitude variation was significantly lower for breathholds versus free-breathing (P = 0.03) and for navigator triggering versus free-breathing (P = 0.03) as well as for navigator triggering versus breathhold (P = 0.02). The mean residual water signal using CHESS (1.1%, P = 0.01) or MOIST (0.7%, P = 0.01) water suppression was significantly lower than using frequency selective excitation water suppression (7.0%). Using the optimized parameters, intrasession limits of agreement for the myocardial triglyceride content of -0.11% to +0.04%, and intersession limits of -0.15% to +0.9%, were achieved. The coefficient of variation was 5% for the intrasession reproducibility and 6.5% for the intersession reproducibility. Using approaches designed to optimize SNR and minimize the variation in inter-average signal intensities and frequencies/phases, a protocol was developed to perform cardiac MR spectroscopy on a clinical 3T system with high reproducibility.
Selection of fluorescence lidar operating parameters for SNR maximization
NASA Technical Reports Server (NTRS)
Heaps, W. S.
1981-01-01
Fluorescence lidar, when applicable, offers one of the most sensitive methods for measuring the concentration of trace constituents of the atmosphere. In a fluorescence lidar experiment, a number of controllable parameters can be used to optimize the SNR. In this paper the optimum division of laser pulses centered on and off the fluorescence excitation wavelength is calculated as a function of the ratio of the fluorescence signal strength to the strength of fluorescence from interfering species. For strong interference signals the time should be divided equally on and off the line. For strong fluorescence signals the time on line is proportional to the square root of the on-line/off-line signal ratio. The optimization of the integration time for varying values of signal-to-background and signal-to-interference ratios, atmospheric attenuation, laser energy variations, background measurement time, and on-line/off-line time division is also considered.
An optimized ensemble local mean decomposition method for fault detection of mechanical components
NASA Astrophysics Data System (ADS)
Zhang, Chao; Li, Zhixiong; Hu, Chao; Chen, Shuai; Wang, Jianguo; Zhang, Xiaogang
2017-03-01
Mechanical transmission systems have been widely adopted in most industrial applications, and issues related to the maintenance of these systems have attracted considerable attention in the past few decades. The recently developed ensemble local mean decomposition (ELMD) method shows satisfactory performance in fault detection of mechanical components for preventing catastrophic failures and reducing maintenance costs. However, the performance of ELMD often heavily depends on proper selection of its model parameters. To this end, this paper proposes an optimized ensemble local mean decomposition (OELMD) method to determine an optimum set of ELMD parameters for vibration signal analysis. In OELMD, an error index termed the relative root-mean-square error (Relative RMSE) is used to evaluate the decomposition performance of ELMD with a certain amplitude of the added white noise. Once a maximum Relative RMSE, corresponding to an optimal noise amplitude, is determined, OELMD then identifies the optimal noise bandwidth and ensemble number based on the Relative RMSE and signal-to-noise ratio (SNR), respectively. Thus, all three critical parameters of ELMD (i.e. noise amplitude and bandwidth, and ensemble number) are optimized by OELMD. The effectiveness of OELMD was evaluated using experimental vibration signals measured from three different mechanical components (i.e. the rolling bearing, gear and diesel engine) under faulty operation conditions.
Adaptive photoacoustic imaging quality optimization with EMD and reconstruction
NASA Astrophysics Data System (ADS)
Guo, Chengwen; Ding, Yao; Yuan, Jie; Xu, Guan; Wang, Xueding; Carson, Paul L.
2016-10-01
Biomedical photoacoustic (PA) signals are characterized by an extremely low signal-to-noise ratio, which yields significant artifacts in photoacoustic tomography (PAT) images. Since PA signals acquired by ultrasound transducers are non-linear and non-stationary, traditional data analysis methods such as Fourier and wavelet methods cannot provide adequate information for further analysis. In this paper, we introduce an adaptive method to improve the quality of PA imaging based on empirical mode decomposition (EMD) and reconstruction. Data acquired by ultrasound transducers are adaptively decomposed into several intrinsic mode functions (IMFs) after a sifting pre-process. Since noise is randomly distributed among the different IMFs, suppressing IMFs with more noise while enhancing IMFs with less noise can effectively enhance the quality of reconstructed PAT images. However, searching for optimal parameters by means of brute-force search algorithms costs too much time, which prevents this method from practical use. To find parameters within a reasonable time, heuristic algorithms, which are designed to find good solutions more efficiently when traditional methods are too slow, are adopted in our method. Two heuristic algorithms, the Simulated Annealing Algorithm, a probabilistic method to approximate the global optimal solution, and the Artificial Bee Colony Algorithm, an optimization method inspired by the foraging behavior of bee swarms, are selected to search for the optimal IMF parameters in this paper. The effectiveness of our proposed method is demonstrated on both simulated data and PA signals from real biomedical tissue, which shows potential for future clinical PA image de-noising.
Optimizing spectral CT parameters for material classification tasks
NASA Astrophysics Data System (ADS)
Rigie, D. S.; La Rivière, P. J.
2016-06-01
In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POC’s) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POC’s predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies.
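The detectability metric at the core of this framework can be sketched as the Hotelling (prewhitened) separability between the two material classes' channel signals; the two-class form below is a simplified illustration of the observer formalism, not the full model used in the paper.

```python
import numpy as np

def hotelling_separability(mean_a, mean_b, cov_a, cov_b):
    """Hotelling-observer style class separability for a binary material
    classification task: d'^2 = ds^T K^{-1} ds, with ds the difference of the
    class-mean channel signals and K the average channel covariance."""
    ds = np.asarray(mean_a, float) - np.asarray(mean_b, float)
    K = 0.5 * (np.asarray(cov_a, float) + np.asarray(cov_b, float))
    return float(np.sqrt(ds @ np.linalg.solve(K, ds)))

# Sweeping this index over candidate system settings (spectral channels, dose
# split, bin thresholds) traces out parameter optimization curves (POCs).
```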
Optimizing Spectral CT Parameters for Material Classification Tasks
Rigie, D. S.; La Rivière, P. J.
2017-01-01
In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POC’s) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POC’s predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies.
Optimizing signal recycling for detecting a stochastic gravitational-wave background
NASA Astrophysics Data System (ADS)
Tao, Duo; Christensen, Nelson
2018-06-01
Signal recycling is applied in laser interferometers such as the Advanced Laser Interferometer Gravitational-Wave Observatory (aLIGO) to increase their sensitivity to gravitational waves. In this study, signal recycling configurations for detecting a stochastic gravitational wave background are optimized based on aLIGO parameters. Optimal transmission of the signal recycling mirror (SRM) and detuning phase of the signal recycling cavity under a fixed laser power and low-frequency cutoff are calculated. Based on the optimal configurations, the compatibility with a binary neutron star (BNS) search is discussed. Then, different laser powers and low-frequency cutoffs are considered. Two models for the dimensionless energy density of gravitational waves Ω_GW, a flat model and a power-law model, are studied. For a stochastic background search, it is found that an interferometer using signal recycling has a better sensitivity than an interferometer not using it. The optimal stochastic search configurations are typically found when both the SRM transmission and the signal recycling detuning phase are low. In this region, the BNS range mostly lies between 160 and 180 Mpc. When a lower laser power is used, the optimal signal recycling detuning phase increases, the optimal SRM transmission increases, and the optimal sensitivity improves. A reduced low-frequency cutoff gives a better sensitivity limit. For both models of Ω_GW, a typical optimal sensitivity limit on the order of 10^-10 is achieved at the chosen reference frequency.
Hwang, Bosun; You, Jiwoo; Vaessen, Thomas; Myin-Germeys, Inez; Park, Cheolsoo; Zhang, Byoung-Tak
2018-02-08
Stress recognition using electrocardiogram (ECG) signals requires the intractable long-term heart rate variability (HRV) parameter extraction process. This study proposes a novel deep learning framework to recognize the stressful states, the Deep ECGNet, using ultra short-term raw ECG signals without any feature engineering methods. The Deep ECGNet was developed through various experiments and analysis of ECG waveforms. We proposed the optimal recurrent and convolutional neural networks architecture, and also the optimal convolution filter length (related to the P, Q, R, S, and T wave durations of ECG) and pooling length (related to the heart beat period) based on the optimization experiments and analysis on the waveform characteristics of ECG signals. The experiments were also conducted with conventional methods using HRV parameters and frequency features as a benchmark test. The data used in this study were obtained from Kwangwoon University in Korea (13 subjects, Case 1) and KU Leuven University in Belgium (9 subjects, Case 2). Experiments were designed according to various experimental protocols to elicit stressful conditions. The proposed framework to recognize stress conditions, the Deep ECGNet, outperformed the conventional approaches with the highest accuracy of 87.39% for Case 1 and 73.96% for Case 2, respectively, that is, 16.22% and 10.98% improvements compared with those of the conventional HRV method. We proposed an optimal deep learning architecture and its parameters for stress recognition, and the theoretical consideration on how to design the deep learning structure based on the periodic patterns of the raw ECG data. Experimental results in this study have proved that the proposed deep learning model, the Deep ECGNet, is an optimal structure to recognize the stress conditions using ultra short-term ECG data.
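A minimal Keras sketch of the design rationale described above (a convolution kernel spanning roughly one P-QRS-T complex and pooling spanning roughly one beat period). The sampling rate, segment length, layer sizes, and GRU choice here are assumptions for illustration, not the published Deep ECGNet configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

FS = 256                  # assumed ECG sampling rate (Hz)
SEG_LEN = 10 * FS         # assumed ultra short-term segment of ~10 s

model = tf.keras.Sequential([
    layers.Conv1D(32, kernel_size=int(0.6 * FS), activation='relu',
                  input_shape=(SEG_LEN, 1)),          # ~one P-QRS-T complex
    layers.MaxPooling1D(pool_size=int(0.8 * FS)),     # ~one heart-beat period
    layers.GRU(64),                                   # recurrent summary over beats
    layers.Dense(1, activation='sigmoid'),            # stressed vs. non-stressed
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```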
Automatic Parametrization of Somatosensory Evoked Potentials With Chirp Modeling.
Vayrynen, Eero; Noponen, Kai; Vipin, Ashwati; Thow, X Y; Al-Nashash, Hasan; Kortelainen, Jukka; All, Angelo
2016-09-01
In this paper, an approach using polynomial phase chirp signals to model somatosensory evoked potentials (SEPs) is proposed. SEP waveforms are assumed as impulses undergoing group velocity dispersion while propagating along a multipath neural connection. Mathematical analysis of pulse dispersion resulting in chirp signals is performed. An automatic parameterization of SEPs is proposed using chirp models. A Particle Swarm Optimization algorithm is used to optimize the model parameters. Features describing the latencies and amplitudes of SEPs are automatically derived. A rat model is then used to evaluate the automatic parameterization of SEPs in two experimental cases, i.e., anesthesia level and spinal cord injury (SCI). Experimental results show that chirp-based model parameters and the derived SEP features are significant in describing both anesthesia level and SCI changes. The proposed automatic optimization based approach for extracting chirp parameters offers potential for detailed SEP analysis in future studies. The method implementation in Matlab technical computing language is provided online.
MIMO-OFDM signal optimization for SAR imaging radar
NASA Astrophysics Data System (ADS)
Baudais, J.-Y.; Méric, S.; Riché, V.; Pottier, É.
2016-12-01
This paper investigates the optimization of the coded orthogonal frequency division multiplexing (OFDM) transmitted signal in a synthetic aperture radar (SAR) context. We propose to design OFDM signals to achieve range ambiguity mitigation. Indeed, range ambiguities are well known to be a limitation for SAR systems that operate with pulsed transmitted signals. The ambiguous reflected signal corresponding to one pulse is then detected after the radar has already transmitted the next pulse. In this paper, we demonstrate that range ambiguity mitigation is possible by using orthogonal transmitted waves as OFDM pulses. The coded OFDM signal is optimized through genetic optimization procedures based on radar image quality parameters. Moreover, we propose a multiple-input multiple-output (MIMO) configuration to enhance the noise robustness of the radar system, and this configuration is mainly effective when orthogonal waves are used as OFDM pulses. The results we obtain show that OFDM signals outperform conventional radar chirps for range ambiguity suppression and for robustness enhancement in a 2×2 MIMO configuration.
Carl, Michael; Bydder, Graeme M; Du, Jiang
2016-08-01
The long repetition time and inversion time with inversion recovery preparation ultrashort echo time (UTE) often cause prohibitively long scan times. We present an optimized method for long T2 signal suppression in which several k-space spokes are acquired after each inversion preparation. Using the Bloch equations, sequence parameters such as TI and flip angle were optimized to suppress the long T2 water and fat signals and to maximize short T2 contrast. Volunteer imaging was performed on a healthy male volunteer. Inversion recovery preparation was performed using a Silver-Hoult adiabatic inversion pulse together with a three-dimensional (3D) UTE (3D Cones) acquisition. The theoretical signal curves generally agreed with the experimentally measured region of interest curves. The multispoke inversion recovery method showed good muscle and fatty bone marrow suppression, and highlighted short T2 signals such as those from the femoral and tibial cortex. Inversion recovery 3D UTE imaging with multiple spoke acquisitions can be used to effectively suppress long T2 signals and highlight short T2 signals within clinical scan times. Theoretical modeling can be used to determine sequence parameters to optimize long T2 signal suppression and maximize short T2 signals. Experimental results on a volunteer confirmed the theoretical predictions.
Sherwood, Carly A; Eastham, Ashley; Lee, Lik Wee; Risler, Jenni; Mirzaei, Hamid; Falkner, Jayson A; Martin, Daniel B
2009-07-01
Multiple reaction monitoring (MRM) is a highly sensitive method of targeted mass spectrometry (MS) that can be used to selectively detect and quantify peptides based on the screening of specified precursor peptide-to-fragment ion transitions. MRM-MS sensitivity depends critically on the tuning of instrument parameters, such as collision energy and cone voltage, for the generation of maximal product ion signal. Although generalized equations and values exist for such instrument parameters, there is no clear indication that optimal signal can be reliably produced for all types of MRM transitions using such an algorithmic approach. To address this issue, we have devised a workflow functional on both Waters Quattro Premier and ABI 4000 QTRAP triple quadrupole instruments that allows rapid determination of the optimal value of any programmable instrument parameter for each MRM transition. Here, we demonstrate the strategy for the optimizations of collision energy and cone voltage, but the method could be applied to other instrument parameters, such as declustering potential, as well. The workflow makes use of the incremental adjustment of the precursor and product m/z values at the hundredth decimal place to create a series of MRM targets at different collision energies that can be cycled through in rapid succession within a single run, avoiding any run-to-run variability in execution or comparison. Results are easily visualized and quantified using the MRM software package Mr. M to determine the optimal instrument parameters for each transition.
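The m/z-offset trick described above can be sketched as follows; the function name and the example transition values are illustrative only.

```python
def build_ce_series(precursor_mz, product_mz, collision_energies):
    """Encode a collision-energy series for one transition by offsetting the
    precursor and product m/z at the hundredth decimal place, so every test
    point can be cycled through within a single run and later identified."""
    targets = []
    for i, ce in enumerate(collision_energies):
        offset = 0.01 * i
        targets.append({
            'precursor_mz': round(precursor_mz + offset, 2),
            'product_mz': round(product_mz + offset, 2),
            'collision_energy': ce,
        })
    return targets

# Example: probe collision energies from 10 to 40 eV in 2 eV steps for one transition.
targets = build_ce_series(523.77, 684.35, range(10, 42, 2))
```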
Sherwood, Carly A.; Eastham, Ashley; Lee, Lik Wee; Risler, Jenni; Mirzaei, Hamid; Falkner, Jayson A.; Martin, Daniel B.
2009-01-01
Multiple reaction monitoring (MRM) is a highly sensitive method of targeted mass spectrometry (MS) that can be used to selectively detect and quantify peptides based on the screening of specified precursor peptide-to-fragment ion transitions. MRM-MS sensitivity depends critically on the tuning of instrument parameters, such as collision energy and cone voltage, for the generation of maximal product ion signal. Although generalized equations and values exist for such instrument parameters, there is no clear indication that optimal signal can be reliably produced for all types of MRM transitions using such an algorithmic approach. To address this issue, we have devised a workflow functional on both Waters Quattro Premier and ABI 4000 QTRAP triple quadrupole instruments that allows rapid determination of the optimal value of any programmable instrument parameter for each MRM transition. Here, we demonstrate the strategy for the optimizations of collision energy and cone voltage, but the method could be applied to other instrument parameters, such as declustering potential, as well. The workflow makes use of the incremental adjustment of the precursor and product m/z values at the hundredth decimal place to create a series of MRM targets at different collision energies that can be cycled through in rapid succession within a single run, avoiding any run-to-run variability in execution or comparison. Results are easily visualized and quantified using the MRM software package Mr. M to determine the optimal instrument parameters for each transition.
Gomez-Cardona, Daniel; Hayes, John W; Zhang, Ran; Li, Ke; Cruz-Bastida, Juan Pablo; Chen, Guang-Hong
2018-05-01
Different low-signal correction (LSC) methods have been shown to efficiently reduce noise streaks and noise level in CT to provide acceptable images at low-radiation dose levels. These methods usually result in CT images with highly shift-variant and anisotropic spatial resolution and noise, which makes the parameter optimization process highly nontrivial. The purpose of this work was to develop a local task-based parameter optimization framework for LSC methods. Two well-known LSC methods, the adaptive trimmed mean (ATM) filter and the anisotropic diffusion (AD) filter, were used as examples to demonstrate how to use the task-based framework to optimize filter parameter selection. Two parameters, denoted by the set P, for each LSC method were included in the optimization problem. For the ATM filter, these parameters are the low- and high-signal threshold levels p_l and p_h; for the AD filter, the parameters are the exponents δ and γ in the brightness gradient function. The detectability index d' under the non-prewhitening (NPW) mathematical observer model was selected as the metric for parameter optimization. The optimization problem was formulated as an unconstrained optimization problem that consisted of maximizing an objective function d'_ij(P), where i and j correspond to the i-th imaging task and j-th spatial location, respectively. Since there is no explicit mathematical function to describe the dependence of d' on the set of parameters P for each LSC method, the optimization problem was solved via an experimentally measured d' map over a densely sampled parameter space. In this work, three high-contrast-high-frequency discrimination imaging tasks were defined to explore the parameter space of each of the LSC methods: a vertical bar pattern (task I), a horizontal bar pattern (task II), and a multidirectional feature (task III). Two spatial locations were considered for the analysis, a posterior region-of-interest (ROI) located within the noise streaks region and an anterior ROI, located further from the noise streaks region. Optimal results derived from the task-based detectability index metric were compared to other operating points in the parameter space with different noise and spatial resolution trade-offs. The optimal operating points determined through the d' metric depended on the interplay between the major spatial frequency components of each imaging task and the highly shift-variant and anisotropic noise and spatial resolution properties associated with each operating point in the LSC parameter space. This interplay influenced imaging performance the most when the major spatial frequency component of a given imaging task coincided with the direction of spatial resolution loss or with the dominant noise spatial frequency component; this was the case of imaging task II. The performance of imaging tasks I and III was influenced by this interplay in a smaller scale than imaging task II, since the major frequency component of task I was perpendicular to imaging task II, and because imaging task III did not have strong directional dependence. For both LSC methods, there was a strong dependence of the overall d' magnitude and shape of the contours on the spatial location within the phantom, particularly for imaging tasks II and III. The d' value obtained at the optimal operating point for each spatial location and imaging task was similar when comparing the LSC methods studied in this work.
A local task-based detectability framework to optimize the selection of parameters for LSC methods was developed. The framework takes into account the potential shift-variant and anisotropic spatial resolution and noise properties to maximize the imaging performance of the CT system. Optimal parameters for a given LSC method depend strongly on the spatial location within the image object. © 2018 American Association of Physicists in Medicine.
Taguchi Method Applied in Optimization of Shipley SJR 5740 Positive Resist Deposition
NASA Technical Reports Server (NTRS)
Hui, A.; Blosiu, J. O.; Wiberg, D. V.
1998-01-01
Taguchi Methods of Robust Design present a way to optimize output process performance through an organized set of experiments using orthogonal arrays. Analysis of variance and the signal-to-noise ratio are used to evaluate the contribution of each of the controllable process parameters to the process optimization. In the photoresist deposition process, there are numerous controllable parameters that can affect the surface quality and thickness of the final photoresist layer.
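For illustration, the larger-the-better Taguchi signal-to-noise ratio and a simple main-effects summary over an orthogonal array can be computed as below (the response values and factor coding are placeholders, not the resist-deposition data):

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio for a larger-the-better response:
    SN = -10*log10(mean(1/y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def main_effect(factor_levels, run_responses):
    """Average per-run S/N at each level of one control factor
    (one column of the orthogonal array)."""
    sn = np.array([sn_larger_is_better(r) for r in run_responses])
    levels = np.asarray(factor_levels)
    return {lv: float(sn[levels == lv].mean()) for lv in np.unique(levels)}
```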
Optimizing Methods of Obtaining Stellar Parameters for the H3 Survey
NASA Astrophysics Data System (ADS)
Ivory, KeShawn; Conroy, Charlie; Cargile, Phillip
2018-01-01
The Stellar Halo at High Resolution with Hectochelle Survey (H3) is in the process of observing and collecting stellar parameters for stars in the Milky Way's halo. With a goal of measuring radial velocities for fainter stars, it is crucial that we have optimal methods of obtaining this and other parameters from the data for these stars. The method currently developed is The Payne, named after Cecilia Payne-Gaposchkin, a code that uses neural networks and Markov Chain Monte Carlo methods to utilize both spectra and photometry to obtain values for stellar parameters. This project investigated the benefit of fitting both spectra and spectral energy distributions (SEDs). Mock spectra using the parameters of the Sun were created and noise was inserted at various signal-to-noise values. The Payne then fit each mock spectrum with and without a mock SED also generated from solar parameters. The result was that at high signal-to-noise, the spectrum dominated and the effect of fitting the SED was minimal. But at low signal-to-noise, the addition of the SED greatly decreased the standard deviation of the data and resulted in more accurate values for temperature and metallicity.
Novel multireceiver communication systems configurations based on optimal estimation theory
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1992-01-01
A novel multireceiver configuration for carrier arraying and/or signal arraying is presented. The proposed configuration is obtained by formulating the carrier and/or signal arraying problem as an optimal estimation problem, and it consists of two stages. The first stage optimally estimates various phase processes received at different receivers with coupled phase-locked loops wherein the individual loops acquire and track their respective receivers' phase processes but are aided by each other in an optimal manner via LF error signals. The proposed configuration results in the minimization of the effective radio loss at the combiner output, and thus maximization of the energy-per-bit to noise power spectral density ratio is achieved. A novel adaptive algorithm for the estimator of the signal model parameters when these are not known a priori is also presented.
Parametric optimization of optical signal detectors employing the direct photodetection scheme
NASA Astrophysics Data System (ADS)
Kirakosiants, V. E.; Loginov, V. A.
1984-08-01
The problem of optimization of the optical signal detection scheme parameters is addressed using the concept of a receiver with direct photodetection. An expression is derived which accurately approximates the field of view (FOV) values obtained by a direct computer minimization of the probability of missing a signal; optimum values of the receiver FOV were found for different atmospheric conditions characterized by the number of coherence spots and the intensity fluctuations of a plane wave. It is further pointed out that the criterion presented can possibly be used for parametric optimization of detectors operating in accordance with the Neyman-Pearson criterion.
Exponential Modelling for Mutual-Cohering of Subband Radar Data
NASA Astrophysics Data System (ADS)
Siart, U.; Tejero, S.; Detlefsen, J.
2005-05-01
Increasing resolution and accuracy is an important issue in almost any type of radar sensor application. However, both resolution and accuracy are strongly related to the available signal bandwidth and energy that can be used. Nowadays, several sensors operating in different frequency bands are often available on a sensor platform. It is an attractive goal to exploit the potential of advanced signal modelling and optimization procedures by making proper use of information stemming from different frequency bands at the RF signal level. An important prerequisite for optimal use of signal energy is coherence between all contributing sensors. Coherent multi-sensor platforms are very expensive and are thus not generally available. This paper presents an approach for accurately estimating object radar responses using subband measurements at different RF frequencies. An exponential model approach makes it possible to compensate for the lack of mutual coherence between independently operating sensors. Mutual coherence is recovered from the a priori information that both sensors have common scattering centers in view. Minimizing the total squared deviation between measured data and a full-range exponential signal model leads to more accurate pole angles and pole magnitudes compared to single-band optimization. The model parameters (range and magnitude of point scatterers) after this full-range optimization process are also more accurate than the parameters obtained from a commonly used super-resolution procedure (root-MUSIC) applied to the non-coherent subband data.
Lu, Wenlong; Xie, Junwei; Wang, Heming; Sheng, Chuan
2016-01-01
Inspired by track-before-detection technology in radar, a novel time-frequency transform, namely the polynomial chirping Fourier transform (PCFT), is exploited to extract components from a noisy multicomponent signal. The PCFT combines the advantages of the Fourier transform and the polynomial chirplet transform to accumulate component energy along a polynomial chirping curve in the time-frequency plane. The particle swarm optimization algorithm is employed to search for optimal polynomial parameters with which the PCFT achieves the most concentrated energy ridge in the time-frequency plane for the target component. The component can then be well separated in the polynomial chirping Fourier domain with a narrow-band filter and reconstructed by the inverse PCFT. Furthermore, an iterative procedure, involving parameter estimation, PCFT, filtering and recovery, is introduced to extract components from a noisy multicomponent signal successively. Simulations and experiments show that the proposed method performs better in component extraction from a noisy multicomponent signal and provides more time-frequency details about the analyzed signal than conventional methods.
Hierarchical multistage MCMC follow-up of continuous gravitational wave candidates
NASA Astrophysics Data System (ADS)
Ashton, G.; Prix, R.
2018-05-01
Leveraging Markov chain Monte Carlo optimization of the F statistic, we introduce a method for the hierarchical follow-up of continuous gravitational wave candidates identified by wide-parameter space semicoherent searches. We demonstrate parameter estimation for continuous wave sources and develop a framework and tools to understand and control the effective size of the parameter space, critical to the success of the method. Monte Carlo tests of simulated signals in noise demonstrate that this method is close to the theoretical optimal performance.
PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata
We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal not only increases the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients in order to further improve the system performance. First, we develop a parametric OFDM-STAP measurement model by considering the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of output SINR depends on the estimated values of the spatial and temporal frequencies and target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramer-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the target scattering coefficient estimates. We present several numerical examples to demonstrate the achieved performance improvement due to the adaptive waveform design.
Uga, Minako; Dan, Ippeita; Sano, Toshifumi; Dan, Haruka; Watanabe, Eiju
2014-01-01
An increasing number of functional near-infrared spectroscopy (fNIRS) studies utilize a general linear model (GLM) approach, which serves as a standard statistical method for functional magnetic resonance imaging (fMRI) data analysis. While fMRI solely measures the blood oxygen level dependent (BOLD) signal, fNIRS measures the changes of oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) signals at a temporal resolution severalfold higher. This suggests the necessity of adjusting the temporal parameters of a GLM for fNIRS signals. Thus, we devised a GLM-based method utilizing an adaptive hemodynamic response function (HRF). We sought the optimum temporal parameters to best explain the observed time series data during verbal fluency and naming tasks. The peak delay of the HRF was systematically changed to achieve the best-fit model for the observed oxy- and deoxy-Hb time series data. The optimized peak delay showed different values for each Hb signal and task. When the optimized peak delays were adopted, the deoxy-Hb data yielded activations comparable to the oxy-Hb data, with similar statistical power and spatial patterns. The adaptive HRF method could suitably explain the behaviors of both Hb parameters during tasks with different cognitive loads over the time course, and thus would serve as an objective method to fully utilize the temporal structure of fNIRS data. PMID:26157973
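To make the adaptive-HRF idea above concrete, the following minimal sketch (not the authors' implementation) sweeps the peak delay of a single-gamma HRF, builds a GLM regressor for each candidate delay, and keeps the delay that best fits a measured Hb time series. The gamma parameterization, sampling rate and delay grid are illustrative assumptions.

    import numpy as np
    from scipy.stats import gamma
    from scipy.signal import fftconvolve

    def gamma_hrf(peak_delay, fs, duration=30.0):
        # Single-gamma HRF; for scale = 1 the mode (peak) sits at a - 1 seconds.
        t = np.arange(0, duration, 1.0 / fs)
        h = gamma.pdf(t, a=peak_delay + 1.0, scale=1.0)
        return h / h.sum()

    def best_peak_delay(y, onsets, fs, delays=np.arange(3.0, 10.5, 0.5)):
        # Fit y with a [task regressor, constant] GLM for each candidate peak delay
        # and return the delay giving the smallest residual sum of squares.
        best = None
        for d in delays:
            reg = fftconvolve(onsets, gamma_hrf(d, fs))[:len(y)]
            X = np.column_stack([reg, np.ones(len(y))])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            sse = float(np.sum((y - X @ beta) ** 2))
            if best is None or sse < best[1]:
                best = (d, sse)
        return best  # (optimized peak delay in seconds, residual sum of squares)

    # usage: fs = 10.0 (Hz, assumed); onsets = boxcar task vector; y = one oxy- or deoxy-Hb channel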
NASA Astrophysics Data System (ADS)
Xiang, Suyun; Wang, Wei; Xiang, Bingren; Deng, Haishan; Xie, Shaofei
2007-05-01
The periodic modulation-based stochastic resonance algorithm (PSRA) was used to amplify and detect the weak liquid chromatography-mass spectrometry (LC-MS) signal of granisetron in plasma. In the algorithm, stochastic resonance (SR) was achieved by introducing an external periodic force to the nonlinear system. The optimization of parameters was carried out in two steps to give attention to both the signal-to-noise ratio (S/N) and the peak shape of the output signal. By applying PSRA with the optimized parameters, the signal-to-noise ratio of the LC-MS peak was enhanced significantly, and the distorted peak shape that often appears in the traditional stochastic resonance algorithm was corrected by the added periodic force. Using the signals enhanced by PSRA, this method extended the limit of detection (LOD) and limit of quantification (LOQ) of granisetron in plasma from 0.05 and 0.2 ng/mL, respectively, to 0.01 and 0.02 ng/mL, and exhibited good linearity, accuracy and precision, which ensure accurate determination of the target analyte.
Evaluation of the MV (CAPON) Coherent Doppler Lidar Velocity Estimator
NASA Technical Reports Server (NTRS)
Lottman, B.; Frehlich, R.
1997-01-01
The performance of the CAPON velocity estimator for coherent Doppler lidar is determined for typical space-based and ground-based parameter regimes. Optimal input parameters for the algorithm were determined for each regime. For weak signals, performance is described by the standard deviation of the good estimates and the fraction of outliers. For strong signals, the fraction of outliers is zero. Numerical effort was also determined.
Lemonakis, Nikolaos; Skaltsounis, Alexios-Leandros; Tsarbopoulos, Anthony; Gikas, Evagelos
2016-01-15
A multistage optimization of all the parameters affecting detection/response in an LTQ-Orbitrap analyzer was performed using a design of experiments methodology. The signal intensity, a critical issue for mass analysis, was investigated, and the optimization process was completed in three successive steps, taking into account the three main regions of an Orbitrap: the ion generation, ion transmission and ion detection regions. Oleuropein and hydroxytyrosol were selected as the model compounds. Overall, applying this methodology the sensitivity was increased by more than 24% and the resolution by more than 6.5%, whereas the elapsed scan time was reduced to nearly half. A high-resolution LTQ Orbitrap Discovery mass spectrometer was used for the determination of the analytes of interest. Thus, oleuropein and hydroxytyrosol were infused via the instrument's syringe pump and analyzed employing electrospray ionization (ESI) in the negative high-resolution full-scan ion mode. The parameters of the three main regions of the LTQ-Orbitrap were independently optimized in terms of maximum sensitivity. In this context, factorial design, response surface model and Plackett-Burman experiments were performed, and analysis of variance was carried out to evaluate the validity of the statistical model and to determine the most significant parameters for signal intensity. The optimum MS conditions for each analyte were summarized, and the overall optimum condition was achieved by maximizing the desirability function. Our observations showed good agreement between the predicted optimum response and the responses collected at the predicted optimum conditions.
Inferring neural activity from BOLD signals through nonlinear optimization.
Vakorin, Vasily A; Krakovska, Olga O; Borowsky, Ron; Sarty, Gordon E
2007-11-01
The blood oxygen level-dependent (BOLD) fMRI signal does not measure neuronal activity directly. This fact is a key concern for interpreting functional imaging data based on BOLD. Mathematical models describing the path from neural activity to the BOLD response allow us to numerically solve the inverse problem of estimating the timing and amplitude of the neuronal activity underlying the BOLD signal. In fact, these models can be viewed as an advanced substitute for the impulse response function. In this work, the issue of estimating the dynamics of neuronal activity from the observed BOLD signal is considered within the framework of optimization problems. The model is based on the extended "balloon" model and describes the conversion of neuronal signals into the BOLD response through the transitional dynamics of the blood flow-inducing signal, cerebral blood flow, cerebral blood volume and deoxyhemoglobin concentration. Global optimization techniques are applied to find a control input (the neuronal activity and/or the biophysical parameters in the model) that causes the system to follow an admissible solution to minimize discrepancy between model and experimental data. As an alternative to a local linearization (LL) filtering scheme, the optimization method escapes the linearization of the transition system and provides a possibility to search for the global optimum, avoiding spurious local minima. We have found that the dynamics of the neural signals and the physiological variables as well as the biophysical parameters can be robustly reconstructed from the BOLD responses. Furthermore, it is shown that spiking off/on dynamics of the neural activity is the natural mathematical solution of the model. Incorporating, in addition, the expansion of the neural input by smooth basis functions, representing a low-pass filtering, allows us to model local field potential (LFP) solutions instead of spiking solutions.
Flight instrument and telemetry response and its inversion
NASA Technical Reports Server (NTRS)
Weinberger, M. R.
1971-01-01
Mathematical models of rate gyros, servo accelerometers, pressure transducers, and telemetry systems were derived, and their parameters were obtained from laboratory tests. Analog computer simulations were used extensively to verify model validity for fast and large input signals. An optimal inversion method was derived to reconstruct input signals from noisy output signals, and a computer program was prepared.
Optimization of CW Fiber Lasers With Strong Nonlinear Cavity Dynamics
NASA Astrophysics Data System (ADS)
Shtyrina, O. V.; Efremov, S. A.; Yarutkina, I. A.; Skidin, A. S.; Fedoruk, M. P.
2018-04-01
In the present work, the equation for the saturated gain is derived from one-level gain equations describing the energy evolution inside the laser cavity. It is shown how to derive the parameters of the mathematical model from experimental results. The numerically estimated energy and spectrum of the signal are in good agreement with the experiment. The optimization of the output energy is also performed for a given set of model parameters.
Parallel optimization of signal detection in active magnetospheric signal injection experiments
NASA Astrophysics Data System (ADS)
Gowanlock, Michael; Li, Justin D.; Rude, Cody M.; Pankratius, Victor
2018-05-01
Signal detection and extraction requires substantial manual parameter tuning at different stages in the processing pipeline. Time-series data depends on domain-specific signal properties, necessitating unique parameter selection for a given problem. The large potential search space makes this parameter selection process time-consuming and subject to variability. We introduce a technique to search and prune such parameter search spaces in parallel and select parameters for time series filters using breadth- and depth-first search strategies to increase the likelihood of detecting signals of interest in the field of magnetospheric physics. We focus on studying geomagnetic activity in the extremely and very low frequency ranges (ELF/VLF) using ELF/VLF transmissions from Siple Station, Antarctica, received at Québec, Canada. Our technique successfully detects amplified transmissions and achieves substantial speedup performance gains as compared to an exhaustive parameter search. We present examples where our algorithmic approach reduces the search from hundreds of seconds down to less than 1 s, with a ranked signal detection in the top 99th percentile, thus making it valuable for real-time monitoring. We also present empirical performance models quantifying the trade-off between the quality of signal recovered and the algorithm response time required for signal extraction. In the future, improved signal extraction in scenarios like the Siple experiment will enable better real-time diagnostics of conditions of the Earth's magnetosphere for monitoring space weather activity.
Constraining neutron guide optimizations with phase-space considerations
NASA Astrophysics Data System (ADS)
Bertelsen, Mads; Lefmann, Kim
2016-09-01
We introduce a method named the Minimalist Principle that serves to reduce the parameter space for neutron guide optimization when the required beam divergence is limited. The reduced parameter space restricts the optimization to guides with a minimal neutron intake that are still theoretically able to deliver the maximal possible performance. The geometrical constraints are derived using phase-space propagation from moderator to guide and from guide to sample, while assuming that the optimized guides will achieve perfect transport of the limited neutron intake. Guide systems optimized using these constraints are shown to provide performance close to guides optimized without any constraints; however, the divergence received at the sample is limited to the desired interval, even when the neutron transport is not limited by the supermirrors used in the guide. As the constraints strongly limit the parameter space for the optimizer, two control parameters are introduced that can be used to adjust the selected subspace, effectively balancing between maximizing neutron transport and avoiding background from unnecessary neutrons. One parameter describes the expected focusing abilities of the guide to be optimized, ranging from perfectly focusing to no correlation between position and velocity. The second parameter controls the neutron intake into the guide, so that one can select exactly how aggressively the background should be limited. We show examples of guides optimized using these constraints, which demonstrate higher signal-to-noise ratios than conventional optimizations. Furthermore, the parameter controlling neutron intake is explored, showing that the simulated optimal neutron intake is close to the analytical prediction when the guide is assumed to be dominated by multiple scattering events.
Spaceborne SAR Imaging Algorithm for Coherence Optimized
Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun
2016-01-01
This paper proposes a SAR imaging algorithm that maximizes coherence, building on existing SAR imaging algorithms. The basic idea of SAR imaging is that the output signal can achieve the maximum signal-to-noise ratio (SNR) by using the optimal imaging parameters. Traditional imaging algorithms achieve the best focusing effect but introduce decoherence in the subsequent interferometric processing. The algorithm proposed in this paper focuses the SAR echoes using consistent imaging parameters. Although the SNR of the output signal is reduced slightly, the coherence is largely preserved, and an interferogram of high quality is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446
Analysis on design and optimization of dispersion-managed communication systems
NASA Astrophysics Data System (ADS)
El-Aasser, Mostafa A.; Dua, Puneit; Dutta, Niloy K.
2002-07-01
The variational method is a useful tool that can be used for design and optimization of dispersion-managed communication systems. Using this powerful tool, we evaluate the characteristics of a carrier signal for certain system parameters and describe several features of a dispersion-managed soliton.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to any other hydrological problem.
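As a rough illustration of one of the information metrics named above, the sketch below scores a candidate measurement by the Shannon entropy difference between Gaussian approximations of the prior and posterior parameter ensembles; the EnKF update that produces the posterior ensemble is assumed to exist elsewhere.

    import numpy as np

    def shannon_entropy_difference(prior_ens, post_ens):
        # Rows are ensemble members, columns are parameters. For Gaussian
        # approximations the entropy difference reduces to half the log-determinant
        # ratio of the prior and posterior covariance matrices.
        cov_prior = np.atleast_2d(np.cov(prior_ens, rowvar=False))
        cov_post = np.atleast_2d(np.cov(post_ens, rowvar=False))
        _, logdet_prior = np.linalg.slogdet(cov_prior)
        _, logdet_post = np.linalg.slogdet(cov_post)
        return 0.5 * (logdet_prior - logdet_post)  # larger = more informative design

    # usage: evaluate the metric for each candidate sampling design and pick the maximum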
FPGA based hardware optimized implementation of signal processing system for LFM pulsed radar
NASA Astrophysics Data System (ADS)
Azim, Noor ul; Jun, Wang
2016-11-01
Signal processing is one of the main parts of any radar system. Different signal processing algorithms are used to extract information about parameters such as range, speed and direction of a target in the field of radar communication. This paper presents LFM (Linear Frequency Modulation) pulsed radar signal processing algorithms which are used to improve target detection and range resolution and to estimate the speed of a target. Firstly, these algorithms are simulated in MATLAB to verify the concept and theory. After the conceptual verification in MATLAB, the simulation is converted into a hardware implementation using a Xilinx FPGA. The chosen FPGA is a Xilinx Virtex-6 (XC6LVX75T). For the hardware implementation, pipeline optimization is adopted, and other factors are considered for resource optimization in the implementation process. The algorithms addressed in this work for improving target detection, range resolution and speed estimation are hardware-optimized, fast-convolution-based pulse compression and pulse Doppler processing.
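As a host-side illustration of the fast-convolution pulse compression stage described above (not the FPGA code), the sketch below matched-filters a delayed LFM echo in the frequency domain; the chirp parameters and noise level are assumptions.

    import numpy as np

    fs, T, B = 10e6, 20e-6, 5e6                      # sample rate, pulse width, sweep bandwidth (assumed)
    t = np.arange(0, T, 1 / fs)
    chirp = np.exp(1j * np.pi * (B / T) * t ** 2)    # baseband LFM reference pulse

    def pulse_compress(rx, ref):
        # Fast-convolution matched filter: multiply spectra by the conjugate reference spectrum.
        n = len(rx) + len(ref) - 1
        nfft = 1 << (n - 1).bit_length()             # next power of two for the FFT
        spec = np.fft.fft(rx, nfft) * np.conj(np.fft.fft(ref, nfft))
        return np.fft.ifft(spec)[:len(rx)]

    # toy example: an echo delayed by 300 samples plus complex noise
    rx = np.concatenate([np.zeros(300), chirp, np.zeros(200)])
    rx = rx + 0.1 * (np.random.randn(len(rx)) + 1j * np.random.randn(len(rx)))
    print(np.abs(pulse_compress(rx, chirp)).argmax())   # ~300, i.e. the echo delay in samples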
NASA Astrophysics Data System (ADS)
Cheng, Junsheng; Peng, Yanfeng; Yang, Yu; Wu, Zhantao
2017-02-01
Enlightened by the ASTFA method, an adaptive sparsest narrow-band decomposition (ASNBD) method is proposed in this paper. In the ASNBD method, an optimized filter must first be established. The parameters of the filter are determined by solving a nonlinear optimization problem. A regulated differential operator is used as the objective function so that each component is constrained to be a local narrow-band signal. Afterwards, the signal is filtered by the optimized filter to generate an intrinsic narrow-band component (INBC). ASNBD is proposed to solve the problems existing in ASTFA. The Gauss-Newton type method, which is applied to solve the optimization problem in ASTFA, is irreplaceable there and very sensitive to initial values, whereas a more appropriate optimization method, such as a genetic algorithm (GA), can be utilized to solve the optimization problem in ASNBD. Meanwhile, compared with ASTFA, the decomposition results generated by ASNBD have better physical meaning because the components are constrained to be local narrow-band signals. Comparisons are made between ASNBD, ASTFA and EMD by analyzing simulated and experimental signals. The results indicate that the ASNBD method is superior to the other two methods in generating more accurate components from noisy signals, restraining the boundary effect, possessing better orthogonality and diagnosing rolling element bearing faults.
Optimal control of information epidemics modeled as Maki Thompson rumors
NASA Astrophysics Data System (ADS)
Kandhway, Kundan; Kuri, Joy
2014-12-01
We model the spread of information in a homogeneously mixed population using the Maki Thompson rumor model. We formulate an optimal control problem, from the perspective of a single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when the campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of the forward-backward sweep technique for numerical computation to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We allow the spreading rate of the information epidemic to vary over the campaign duration to model practical situations in which the interest level of the population in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles. We also study the variation of the optimal campaigning costs with respect to various model parameters. Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy. The static strategy respects the same budget constraint as the optimal strategy and has a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion and crowdfunding campaigns.
AC signal characterization for optimization of a CMOS single-electron pump
NASA Astrophysics Data System (ADS)
Murray, Roy; Perron, Justin K.; Stewart, M. D., Jr.; Zimmerman, Neil M.
2018-02-01
Pumping single electrons at a set rate is being widely pursued as an electrical current standard. Semiconductor charge pumps have been operated in a variety of modes, including the single-gate ratchet, several 2-gate ratchet schemes, and 2-gate turnstiles. Whether pumping with one or two AC signals, lower error rates can result from better knowledge of the properties of the AC signal at the device. In this work, we operated a CMOS single-electron pump with a 2-gate ratchet style measurement and used the results to characterize and optimize our two AC signals. Fitting these data at various frequencies revealed both a difference in signal path length and a difference in attenuation between our two AC lines. Using these data, we corrected for the differences in signal path length and attenuation by applying offsets in both the phase and the amplitude at the signal generator. Operating the device as a turnstile while using the optimized parameters determined from the 2-gate ratchet measurement led to much flatter, more robust charge pumping plateaus. This method was useful in tuning our device for optimal charge pumping, and may prove useful to the semiconductor quantum dot community for determining signal attenuation and path differences at the device.
NASA Astrophysics Data System (ADS)
Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum
2017-04-01
We analyze the relations among the parameters of the moving average method to enhance the event detectability of a phase-sensitive optical time domain reflectometer (OTDR). If the external events have a unique vibration frequency, the control parameters of the moving average method should be optimized in order to detect these events efficiently. A phase-sensitive OTDR was implemented with a pulsed light source, composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier and a fiber Bragg grating filter, and a light receiving part, which has a photo-detector and a high-speed data acquisition system. The moving average method is operated with the control parameters: total number of raw traces, M, number of averaged traces, N, and step size of moving, n. The raw traces are obtained by the phase-sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation of the control parameters is analyzed. As a result, if the event signal has one frequency, optimal values of N and n exist for efficient event detection.
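A minimal sketch of the moving average scheme with the control parameters named above (M raw traces, N averaged traces per output, step size n); the array shapes and example values are assumptions.

    import numpy as np

    def moving_average_traces(raw_traces, N, n):
        # raw_traces: (M, L) array holding M raw OTDR traces of length L.
        # Average N consecutive traces and advance the window by n traces per step.
        M, _ = raw_traces.shape
        starts = range(0, M - N + 1, n)
        return np.array([raw_traces[s:s + N].mean(axis=0) for s in starts])

    # usage: M = 1000 traces, N = 50, n = 10 -> 96 averaged traces
    averaged = moving_average_traces(np.random.randn(1000, 2048), N=50, n=10)
    print(averaged.shape)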
NASA Astrophysics Data System (ADS)
Potters, M. G.; Bombois, X.; Mansoori, M.; Hof, Paul M. J. Van den
2016-08-01
Estimation of physical parameters in dynamical systems driven by linear partial differential equations is an important problem. In this paper, we introduce the least costly experiment design framework for these systems. It enables parameter estimation with an accuracy that is specified by the experimenter prior to the identification experiment, while at the same time minimising the cost of the experiment. We show how to adapt the classical framework for these systems and take into account scaling and stability issues. We also introduce a progressive subdivision algorithm that further generalises the experiment design framework in the sense that it returns the lowest cost by finding the optimal input signal, and optimal sensor and actuator locations. Our methodology is then applied to a relevant problem in heat transfer studies: estimation of conductivity and diffusivity parameters in front-face experiments. We find good correspondence between numerical and theoretical results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Yan; Mohanty, Soumya D.; Center for Gravitational Wave Astronomy, Department of Physics and Astronomy, University of Texas at Brownsville, 80 Fort Brown, Brownsville, Texas 78520
2010-03-15
The detection and estimation of gravitational wave signals belonging to a parameterized family of waveforms requires, in general, the numerical maximization of a data-dependent function of the signal parameters. Because of noise in the data, the function to be maximized is often highly multimodal with numerous local maxima. Searching for the global maximum then becomes computationally expensive, which in turn can limit the scientific scope of the search. Stochastic optimization is one possible approach to reducing computational costs in such applications. We report results from a first investigation of the particle swarm optimization method in this context. The method is applied to a test bed motivated by the problem of detection and estimation of a binary inspiral signal. Our results show that particle swarm optimization works well in the presence of high multimodality, making it a viable candidate method for further applications in gravitational wave data analysis.
ECG Based Heart Arrhythmia Detection Using Wavelet Coherence and Bat Algorithm
NASA Astrophysics Data System (ADS)
Kora, Padmavathi; Sri Rama Krishna, K.
2016-12-01
Atrial fibrillation (AF) is a type of heart abnormality; during AF, electrical discharges in the atrium are rapid, resulting in an abnormal heart beat. The morphology of the ECG changes due to the abnormalities in the heart. This paper consists of three major steps for the detection of heart diseases: signal pre-processing, feature extraction and classification. Feature extraction is the key process in detecting heart abnormality. Most ECG detection systems depend on time domain features for cardiac signal classification. In this paper we propose a wavelet coherence (WTC) technique for ECG signal analysis. The WTC calculates the similarity between two waveforms in the frequency domain. Parameters extracted from the WTC function are used as the features of the ECG signal. These features are optimized using the Bat algorithm. The Levenberg-Marquardt neural network classifier is used to classify the optimized features. The performance of the classifier can be improved with the optimized features.
NASA Astrophysics Data System (ADS)
Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei
2018-03-01
A time varying filtering based empirical mode decomposition (TVF-EMD) method was proposed recently to solve the mode mixing problem of the EMD method. Compared with classical EMD, TVF-EMD was proven to improve the frequency separation performance and to be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal can be obtained by the GWO algorithm using the maximum weighted kurtosis index as the objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) with the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition and verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
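A small sketch of the measurement index described above; the exact weighting used by the authors is not reproduced here, so a simple product of kurtosis and correlation coefficient is assumed.

    import numpy as np
    from scipy.stats import kurtosis

    def weighted_kurtosis_index(imf, raw_signal):
        # Kurtosis of the IMF weighted by its correlation with the raw signal
        # (assumed product form); used as the GWO objective in the described method.
        k = kurtosis(imf, fisher=False)                    # fourth standardized moment
        rho = abs(np.corrcoef(imf, raw_signal)[0, 1])      # correlation coefficient
        return k * rho

    # the sensitive IMF is the one maximizing the index:
    # best_imf = max(imfs, key=lambda c: weighted_kurtosis_index(c, raw_signal))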
NASA Astrophysics Data System (ADS)
Jaranowski, Piotr; Królak, Andrzej
2000-03-01
We develop the analytic and numerical tools for data analysis of the continuous gravitational-wave signals from spinning neutron stars for ground-based laser interferometric detectors. The statistical data analysis method that we investigate is maximum likelihood detection, which for the case of Gaussian noise reduces to matched filtering. We study in detail the statistical properties of the optimum functional that needs to be calculated in order to detect the gravitational-wave signal and estimate its parameters. We find it particularly useful to divide the parameter space into elementary cells such that the values of the optimal functional are statistically independent in different cells. We derive formulas for false alarm and detection probabilities both for the optimal and the suboptimal filters. We assess the computational requirements needed to perform the signal search. We compare a number of criteria for building sufficiently accurate templates for our data analysis scheme. We verify the validity of our concepts and formulas by means of Monte Carlo simulations. We present algorithms by which one can estimate the parameters of the continuous signals accurately. We find, confirming earlier work of other authors, that given 100 Gflops of computational power, an all-sky search for an observation time of 7 days and a directed search for an observation time of 120 days are possible, whereas an all-sky search for 120 days of observation time is computationally prohibitive.
OPTIMAL EXPERIMENT DESIGN FOR MAGNETIC RESONANCE FINGERPRINTING
Zhao, Bo; Haldar, Justin P.; Setsompop, Kawin; Wald, Lawrence L.
2017-01-01
Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance. PMID:28268369
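For illustration, the CRB referred to above can be sketched for a generic signal model s(theta) observed in white Gaussian noise: the Fisher information is built from the Jacobian of the model and then inverted. The finite-difference Jacobian and the noise convention below are assumptions, not the authors' formulation.

    import numpy as np

    def cramer_rao_bound(signal_model, theta, noise_var, eps=1e-6):
        # signal_model(theta) -> signal vector; theta -> 1-D parameter array.
        theta = np.asarray(theta, dtype=float)
        s0 = np.asarray(signal_model(theta))
        jac = np.empty((s0.size, theta.size), dtype=s0.dtype)
        for i in range(theta.size):
            step = np.zeros_like(theta)
            step[i] = eps
            jac[:, i] = (np.asarray(signal_model(theta + step)) - s0) / eps
        # Fisher information for white Gaussian noise (real-valued convention;
        # a factor of 2 appears for circular complex noise).
        fisher = np.real(jac.conj().T @ jac) / noise_var
        return np.linalg.inv(fisher)    # lower bound on the estimator covariance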
Method of multi-dimensional moment analysis for the characterization of signal peaks
Pfeifer, Kent B; Yelton, William G; Kerr, Dayle R; Bouchier, Francis A
2012-10-23
A method of multi-dimensional moment analysis for the characterization of signal peaks can be used to optimize the operation of an analytical system. With a two-dimensional Peclet analysis, the quality and signal fidelity of peaks in a two-dimensional experimental space can be analyzed and scored. This method is particularly useful in determining optimum operational parameters for an analytical system which requires the automated analysis of large numbers of analyte data peaks. For example, the method can be used to optimize analytical systems including an ion mobility spectrometer that uses a temperature stepped desorption technique for the detection of explosive mixtures.
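The moment analysis above can be illustrated by a short sketch that computes the area, centroid and variance of a sampled peak and a Peclet-like sharpness score; the specific score (centroid squared over variance) is an assumption for illustration, not the patented scoring rule.

    import numpy as np

    def peak_moments(t, y):
        # Zeroth moment (area), first moment (centroid) and second central moment (variance).
        m0 = np.trapz(y, t)
        centroid = np.trapz(t * y, t) / m0
        variance = np.trapz((t - centroid) ** 2 * y, t) / m0
        return m0, centroid, variance

    def peclet_like_score(t, y):
        # Dimensionless sharpness figure of merit (assumed form): centroid^2 / variance.
        _, centroid, variance = peak_moments(t, y)
        return centroid ** 2 / variance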
NASA Astrophysics Data System (ADS)
Qarib, Hossein; Adeli, Hojjat
2015-12-01
In this paper, the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative 3-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as the multiple signal classification, matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on an output-only system identification technique. The third stage is the optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and a genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method is successful in estimating the frequencies accurately and, further, estimates the damping exponents. The proposed adaptive filtration method does not include any frequency domain manipulation. Consequently, the time domain signal is not affected as a result of frequency domain and inverse transformations.
NASA Astrophysics Data System (ADS)
Wu, Lifu; Qiu, Xiaojun; Burnett, Ian S.; Guo, Yecai
2015-08-01
Hybrid feedforward and feedback structures are useful for active noise control (ANC) applications where the noise can only be partially obtained with reference sensors. The traditional method uses the secondary signals of both the feedforward and feedback structures to synthesize a reference signal for the feedback structure in the hybrid structure. However, this approach introduces coupling between the feedforward and feedback structures and parameter changes in one structure affect the other during adaptation such that the feedforward and feedback structures must be optimized simultaneously in practical ANC system design. Two methods are investigated in this paper to remove such coupling effects. One is a simplified method, which uses the error signal directly as the reference signal in the feedback structure, and the second method generates the reference signal for the feedback structure by using only the secondary signal from the feedback structure and utilizes the generated reference signal as the error signal of the feedforward structure. Because the two decoupling methods can optimize the feedforward and feedback structures separately, they provide more flexibility in the design and optimization of the adaptive filters in practical ANC applications.
Robust input design for nonlinear dynamic modeling of AUV.
Nouri, Nowrouz Mohammad; Valadi, Mehrdad
2017-09-01
Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to obtain a good-quality dynamic model of an AUV. In optimal input design, the desired input signal depends on the unknown system that is to be identified. In this paper, an input design approach that is robust to uncertainties in the model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design satisfies both constraint robustness and optimality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abrecht, David G.; Schwantes, Jon M.; Kukkadapu, Ravi K.
2015-02-01
Spectrum-processing software that incorporates a Gaussian smoothing kernel within the statistics of first-order Kalman filtration has been developed to provide cross-channel spectral noise reduction for increased real-time signal-to-noise ratios in Mössbauer spectroscopy. The filter was optimized for the breadth of the Gaussian using the Mössbauer spectrum of natural iron foil, and comparisons between the peak broadening, signal-to-noise ratios, and shifts in the calculated hyperfine parameters are presented. The results of the optimization give a maximum improvement in the signal-to-noise ratio of 51.1% over the unfiltered spectrum at a Gaussian breadth of 27 channels, or 2.5% of the total spectrum width. The full-width half-maximum of the spectrum peaks showed an increase of 19.6% at this optimum point, indicating a relatively weak increase in peak broadening relative to the signal enhancement and leading to an overall increase in the observable signal. Calculations of the hyperfine parameters showed that no statistically significant deviations were introduced by the application of the filter, confirming the utility of this filter for spectroscopy applications.
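A hedged sketch of the cross-channel Gaussian smoothing step (the Kalman-filtration statistics are omitted); the 27-channel breadth mirrors the optimum quoted above and is treated here as a FWHM, which is an assumption.

    import numpy as np

    def gaussian_smooth(spectrum, breadth_channels=27):
        # Smooth a Mössbauer spectrum across channels with a Gaussian kernel whose
        # FWHM equals `breadth_channels` (assumed interpretation of "breadth").
        sigma = breadth_channels / 2.355          # FWHM -> standard deviation
        half = int(np.ceil(3 * sigma))
        x = np.arange(-half, half + 1)
        kernel = np.exp(-0.5 * (x / sigma) ** 2)
        kernel /= kernel.sum()
        return np.convolve(spectrum, kernel, mode="same")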
NASA Astrophysics Data System (ADS)
Sládková, Lucia; Prochazka, David; Pořízka, Pavel; Škarková, Pavlína; Remešová, Michaela; Hrdlička, Aleš; Novotný, Karel; Čelko, Ladislav; Kaiser, Jozef
2017-01-01
In this work we studied the effect of vacuum (low pressure) conditions on the behavior of laser-induced plasma (LIP) created on a sample surface covered with silver nanoparticles (Ag-NPs), i.e. a Nanoparticle-Enhanced Laser-Induced Breakdown Spectroscopy (NELIBS) experiment in a vacuum. The focus was put on the step-by-step optimization of the measurement parameters, such as the laser pulse energy, temporally resolved detection, ambient pressure, and the content of Ag-NPs applied on the sample surface. The measurement parameters were optimized in order to achieve the greatest enhancement, expressed as the ratio of the signal-to-noise ratio (SNR) of the NELIBS signal to the SNR of the LIBS signal. The presence of NPs in the ablation process enhances the LIP intensity; hence an improvement in analytical sensitivity was obtained. A leaded brass standard was analyzed with emphasis on the signal enhancement of Pb traces, and an enhancement by a factor of four was obtained. Although the low pressure had no significant influence on the LIP signal enhancement compared to ambient conditions, the SNR values were noticeably improved by the implementation of the NPs.
Robust optimal design of diffusion-weighted magnetic resonance experiments for skin microcirculation
NASA Astrophysics Data System (ADS)
Choi, J.; Raguin, L. G.
2010-10-01
Skin microcirculation plays an important role in several diseases including chronic venous insufficiency and diabetes. Magnetic resonance (MR) has the potential to provide quantitative information and a better penetration depth compared with other non-invasive methods such as laser Doppler flowmetry or optical coherence tomography. The continuous progress in hardware resulting in higher sensitivity must be coupled with advances in data acquisition schemes. In this article, we first introduce a physical model for quantifying skin microcirculation using diffusion-weighted MR (DWMR) based on an effective dispersion model for skin leading to a q-space model of the DWMR complex signal, and then design the corresponding robust optimal experiments. The resulting robust optimal DWMR protocols improve the worst-case quality of parameter estimates using nonlinear least squares optimization by exploiting available a priori knowledge of model parameters. Hence, our approach optimizes the gradient strengths and directions used in DWMR experiments to robustly minimize the size of the parameter estimation error with respect to model parameter uncertainty. Numerical evaluations are presented to demonstrate the effectiveness of our approach as compared to conventional DWMR protocols.
NASA Astrophysics Data System (ADS)
Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei
2018-07-01
Condition monitoring and fault diagnosis of rolling element bearings are significant to guarantee the reliability and functionality of a mechanical system, production efficiency, and plant safety. However, this is almost invariably a formidable challenge because the fault features are often buried by strong background noise and other unstable interference components. To satisfactorily extract the bearing fault features, a whale optimization algorithm (WOA)-optimized orthogonal matching pursuit (OMP) with a combined time-frequency atom dictionary is proposed in this paper. Firstly, a combined time-frequency atom dictionary, whose atoms combine Fourier dictionary atoms and impact time-frequency dictionary atoms, is designed according to the properties of the bearing fault vibration signal. Furthermore, to improve the efficiency and accuracy of the sparse signal representation, the WOA is introduced into the OMP algorithm to optimize the atom parameters so as to best approximate the original signal with the dictionary atoms. The proposed method is validated by analyzing a bearing fault simulation signal and real vibration signals collected from an experimental bearing and a wheelset bearing of high-speed trains. Comparisons with the state of the art in the field are illustrated in detail, highlighting the advantages of the proposed method.
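For reference, the inner sparse-approximation loop of orthogonal matching pursuit is sketched below over a fixed dictionary; the combined Fourier/impact dictionary construction and the WOA tuning of the atom parameters are outside this sketch.

    import numpy as np

    def omp(dictionary, x, n_atoms):
        # dictionary: (L, K) matrix of unit-norm atoms (columns); x: signal of length L.
        # Greedily pick n_atoms atoms, re-fitting all coefficients by least squares.
        residual = x.copy()
        support = []
        coef = np.array([])
        for _ in range(n_atoms):
            j = int(np.argmax(np.abs(dictionary.T @ residual)))   # most correlated atom
            if j not in support:
                support.append(j)
            sub = dictionary[:, support]
            coef, *_ = np.linalg.lstsq(sub, x, rcond=None)
            residual = x - sub @ coef
        return support, coef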
Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan
2014-01-01
Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has established the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the best combination of all the available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model.
Optimized linear motor and digital PID controller setup used in Mössbauer spectrometer
NASA Astrophysics Data System (ADS)
Kohout, Pavel; Kouřil, Lukáš; Navařík, Jakub; Novák, Petr; Pechoušek, Jiří
2014-10-01
Optimization of a linear motor and digital PID controller setup used in a Mössbauer spectrometer is presented. A velocity driving system with a digital PID feedback subsystem was developed in the LabVIEW graphical environment and deployed on an sbRIO real-time hardware device (National Instruments). The most important data acquisition processes are performed as real-time deterministic tasks on an FPGA chip. A velocity transducer of the double-loudspeaker type with a power amplifier circuit is driven by the system. A series of calibration measurements was performed to find the optimal setup of the P, I, D parameters, together with an analysis of the velocity error signal. The shape and characteristics of the velocity error signal are analyzed in detail. Remote applications for controlling and monitoring the PID system from a computer or a smart phone, respectively, were also developed. With the best setup of the P, I, D parameters, a calibration spectrum of an α-Fe sample with an average nonlinearity of the velocity scale below 0.08% was collected. Furthermore, a spectral line width below 0.30 mm/s was observed. A powerful and complex velocity driving system was thus designed.
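A textbook discrete PID sketch of the kind described above (the LabVIEW/sbRIO deployment and the loudspeaker drive electronics are outside its scope); the gains and sample time below are placeholders, not the reported optimum.

    class DigitalPID:
        # u = Kp*e + Ki*integral(e) + Kd*de/dt, evaluated at a fixed sample time dt.
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # usage: pid = DigitalPID(kp=1.2, ki=0.4, kd=0.05, dt=1e-4)
    #        drive = pid.update(v_reference, v_measured)   # placeholder signal names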
Human-in-the-loop Bayesian optimization of wearable device parameters
Malcolm, Philippe; Speeckaert, Jozefien; Siviy, Christoper J.; Walsh, Conor J.; Kuindersma, Scott
2017-01-01
The increasing capabilities of exoskeletons and powered prosthetics for walking assistance have paved the way for more sophisticated and individualized control strategies. In response to this opportunity, recent work on human-in-the-loop optimization has considered the problem of automatically tuning control parameters based on realtime physiological measurements. However, the common use of metabolic cost as a performance metric creates significant experimental challenges due to its long measurement times and low signal-to-noise ratio. We evaluate the use of Bayesian optimization—a family of sample-efficient, noise-tolerant, and global optimization methods—for quickly identifying near-optimal control parameters. To manage experimental complexity and provide comparisons against related work, we consider the task of minimizing metabolic cost by optimizing walking step frequencies in unaided human subjects. Compared to an existing approach based on gradient descent, Bayesian optimization identified a near-optimal step frequency with a faster time to convergence (12 minutes, p < 0.01), smaller inter-subject variability in convergence time (± 2 minutes, p < 0.01), and lower overall energy expenditure (p < 0.01). PMID:28926613
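A minimal sketch of the human-in-the-loop idea using scikit-optimize's gp_minimize as a stand-in optimizer, assuming that library is available; the metabolic-cost function below is synthetic (a noisy quadratic around an assumed optimum), since the real objective would come from respirometry.

    import numpy as np
    from skopt import gp_minimize   # assumes scikit-optimize is installed

    def metabolic_cost(params):
        # Synthetic stand-in for a noisy metabolic-cost measurement at a candidate
        # relative step frequency (assumed optimum at 1.05, plus measurement noise).
        f, = params
        return (f - 1.05) ** 2 + 0.01 * np.random.randn()

    result = gp_minimize(metabolic_cost,
                         dimensions=[(0.7, 1.3)],   # step frequency relative to preferred (assumed range)
                         n_calls=20,                # number of evaluations a subject can tolerate
                         random_state=0)
    print(result.x, result.fun)   # near-optimal step frequency and its estimated cost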
An Evolutionary Firefly Algorithm for the Estimation of Nonlinear Biological Model Parameters
Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N. V.
2013-01-01
The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test. PMID:23469172
Yamashita, Shozo; Yokoyama, Kunihiko; Onoguchi, Masahisa; Yamamoto, Haruki; Hiko, Shigeaki; Horita, Akihiro; Nakajima, Kenichi
2014-01-01
Deep-inspiration breath-hold (DIBH) PET/CT with short-time acquisition and respiratory-gated (RG) PET/CT are performed for pulmonary lesions to reduce respiratory motion artifacts and to obtain a more accurate standardized uptake value (SUV). DIBH PET/CT demonstrates significant advantages in terms of rapid examination, good CT image quality and low radiation exposure. On the other hand, the image quality of DIBH PET is generally inferior to that of RG PET because of the short-time acquisition, resulting in a poor signal-to-noise ratio. In this study, RG PET was regarded as the gold standard, and the detectability of DIBH PET was compared with that of RG PET using the optimal reconstruction parameters for each. In the phantom study, the optimal reconstruction parameters for DIBH and RG PET were determined. In the clinical study, 19 cases were examined using these optimal reconstruction parameters. In the phantom study, the optimal reconstruction parameters for DIBH and RG PET were different: the reconstruction parameters for DIBH PET could be obtained by reducing the number of subsets used for RG PET while keeping the number of iterations fixed. In the clinical study, a high correlation in the maximum SUV was observed between the DIBH and RG PET studies. The clinical result was consistent with that of the phantom surrounded by air, since most of the lesions were located in regions of low pulmonary radioactivity. DIBH PET/CT may be the most practical method, and can be the first choice to reduce respiratory motion artifacts, if the detectability of DIBH PET is equivalent to that of RG PET. Although DIBH PET may be limited by a suboptimal signal-to-noise ratio, most lesions surrounded by low background radioactivity could provide nearly equivalent image quality between DIBH and RG PET studies when the optimal reconstruction parameters for each were used.
Theoretical analysis of chirp excitation of contrast agents
NASA Astrophysics Data System (ADS)
Barlow, Euan; Mulholland, Anthony J.; Nordon, Alison; Gachagan, Anthony
2010-01-01
Analytic expressions are found for the amplitude of the first and second harmonics of the Ultrasound Contrast Agent's (UCA's) dynamics when excited by a chirp. The dependency of the second harmonic amplitude on the system parameters, the UCA shell parameters, and the insonifying signal parameters is then investigated. It is shown that optimal parameter values exist which give rise to a clear increase in the second harmonic component of the UCA's motion.
A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process
NASA Astrophysics Data System (ADS)
Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa
2017-06-01
High-density polyethylene (HDPE) pipes find versatile applicability for the transportation of water, sewage and slurry from one place to another. Hence, these pipes undergo tremendous pressure from the fluid carried. The present work entails the optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology stresses a trial-and-error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters, which results in the setting of less-than-optimal values. Hence, there arises a necessity to determine optimal process control parameters for the pipe extrusion process, which can ensure robust pipe quality and process reliability. In the proposed optimization strategy, designed experiments (DoE) are conducted wherein different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise ratio (S/N ratio) is applied, and the optimum values of the process control parameters are obtained as: a pushing zone temperature of 166 °C, a dimmer speed of 8 rpm, and a die head temperature of 192 °C. A confirmation experimental run was also conducted to verify the analysis, and the results proved to be consistent with the main experimental findings, with the withstanding pressure showing a significant improvement from 0.60 to 1.004 MPa.
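For the larger-the-better withstanding-pressure response used here, the Taguchi S/N ratio for one control-parameter combination is commonly computed as sketched below; the replicate values are placeholders.

    import numpy as np

    def sn_larger_the_better(replicates):
        # Taguchi signal-to-noise ratio (dB) for a larger-the-better response:
        # S/N = -10 * log10( mean(1 / y_i^2) )
        y = np.asarray(replicates, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y ** 2))

    # e.g. replicated withstanding-pressure measurements (MPa) for one DoE run
    print(sn_larger_the_better([0.98, 1.00, 1.02]))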
NASA Astrophysics Data System (ADS)
Li, Gang; Yu, Yue; Zhang, Cui; Lin, Ling
2017-09-01
The oxygen saturation is one of the important parameters used to evaluate human health. This paper presents an efficient optimization method that can improve the accuracy of oxygen saturation measurement; it employs an optical frequency division triangular wave signal as the excitation signal to obtain the dynamic spectrum and calculate oxygen saturation. Compared with the traditional method, whose measured root mean square error (RMSE) of SpO2 is 0.1705, the proposed method reduces the measured RMSE to 0.0965, a notable improvement in measurement accuracy. The method can also simplify the circuit and reduce the component requirements. Furthermore, it provides a useful reference for improving the signal-to-noise ratio of other physiological signals.
Electrocardiographic signals and swarm-based support vector machine for hypoglycemia detection.
Nuryani, Nuryani; Ling, Steve S H; Nguyen, H T
2012-04-01
Cardiac arrhythmia relating to hypoglycemia is suggested as a cause of death in diabetic patients. This article introduces electrocardiographic (ECG) parameters for artificially induced hypoglycemia detection. In addition, a hybrid technique of swarm-based support vector machine (SVM) is introduced for hypoglycemia detection using the ECG parameters as inputs. In this technique, a particle swarm optimization (PSO) is proposed to optimize the SVM to detect hypoglycemia. In an experiment using medical data of patients with Type 1 diabetes, the introduced ECG parameters show significant contributions to the performance of the hypoglycemia detection and the proposed detection technique performs well in terms of sensitivity and specificity.
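A minimal sketch of the general idea, swarm-optimized SVM hyperparameters, is given below; it uses synthetic features and plain cross-validated accuracy as the fitness, whereas the paper uses real ECG parameters and sensitivity/specificity, so treat it only as an illustration of the PSO-SVM coupling.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Synthetic stand-in for ECG-derived features (the paper uses real patient ECG parameters)
    X, y = make_classification(n_samples=300, n_features=6, n_informative=4, random_state=0)

    def fitness(params):
        # Cross-validated accuracy of an RBF SVM with log-scaled (C, gamma)
        C, gamma = 10.0 ** params
        return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

    # Minimal particle swarm over log10(C) in [-2, 3] and log10(gamma) in [-4, 1]
    lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])
    n_particles, n_iter, w, c1, c2 = 15, 30, 0.7, 1.5, 1.5
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()

    print("best log10(C), log10(gamma):", gbest, "CV accuracy:", pbest_val.max())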
NASA Astrophysics Data System (ADS)
López, Cristian; Zhong, Wei; Lu, Siliang; Cong, Feiyun; Cortese, Ignacio
2017-12-01
Vibration signals are widely used for bearing fault detection and diagnosis. When signals are acquired in the field, the faulty periodic signal is usually weak and concealed by noise. Various de-noising methods have been developed to extract the target signal from the raw signal. Stochastic resonance (SR) is a technique that departs from the traditional de-noising process: the weak periodic fault signal is identified by adding a potential term to the system driven by the raw signal and solving the resulting differential equation. However, current SR methods have deficiencies such as limited filtering performance, the requirement of a low-frequency input signal, and sequential search for the optimum parameters. Consequently, in this study, we explore the application of SR based on the FitzHugh-Nagumo (FHN) potential to rolling bearing vibration signals. In addition, we improve the search for the SR optimum parameters by using particle swarm optimization (PSO). The effectiveness of the proposed method is verified using both simulated and real bearing data sets.
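The paper couples the FitzHugh-Nagumo potential with PSO; the core stochastic-resonance step is easier to see with the classical bistable potential, so the toy below (made-up signal and system parameters) integrates dx/dt = a*x - b*x^3 + s(t) and scores the output by its spectral amplitude at the known fault frequency. A coarse grid search stands in for the PSO search over (a, b).

    import numpy as np

    rng = np.random.default_rng(1)
    fs, fault_freq, T = 2000.0, 20.0, 4.0     # sampling rate (Hz), fault frequency (Hz), duration (s)
    t = np.arange(0.0, T, 1.0 / fs)
    s = 0.2 * np.sin(2 * np.pi * fault_freq * t) + 1.0 * rng.standard_normal(t.size)  # weak tone in noise

    def bistable_sr(signal, a, b, dt):
        # Euler integration of the classical bistable SR system dx/dt = a*x - b*x^3 + s(t)
        x, out = 0.0, np.empty_like(signal)
        for i, u in enumerate(signal):
            x += dt * (a * x - b * x**3 + u)
            out[i] = x
        return out

    def score(a, b):
        # Spectral amplitude of the SR output at the fault frequency (the fitness an optimizer would use)
        y = bistable_sr(s, a, b, 1.0 / fs)
        Y = np.abs(np.fft.rfft(y - y.mean())) / y.size
        freqs = np.fft.rfftfreq(y.size, d=1.0 / fs)
        return Y[np.argmin(np.abs(freqs - fault_freq))]

    # Coarse grid search as a stand-in for the PSO search over (a, b)
    grid = [(a, b) for a in (0.1, 0.5, 1.0, 2.0) for b in (0.1, 0.5, 1.0, 2.0)]
    best = max(grid, key=lambda ab: score(*ab))
    print("best (a, b):", best, "score:", score(*best))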
Optimal galaxy survey for detecting the dipole in the cross-correlation with 21 cm Intensity Mapping
NASA Astrophysics Data System (ADS)
Lepori, Francesca; Di Dio, Enea; Villa, Eleonora; Viel, Matteo
2018-05-01
We investigate the future perspectives of the detection of the relativistic dipole by cross-correlating the 21 cm emission in Intensity Mapping (IM) and galaxy surveys at low redshift. We model the neutral hydrogen (HI) and the galaxy population by means of the halo model to relate the parameters that affect the dipole signal, such as the biases of the two tracers and the Poissonian noise. We investigate the behavior of the signal-to-noise ratio as a function of the galaxy and magnification biases, for two fixed models of the neutral hydrogen. In both cases we find that the signal-to-noise ratio does not grow by increasing the difference between the biases of the two tracers, due to the larger shot noise yielded by highly biased tracers. We also study and provide an optimal luminosity-threshold galaxy catalogue to enhance the signal-to-noise ratio of the relativistic dipole. Interestingly, we show that the maximum magnitude provided by the survey does not lead to the maximum signal-to-noise ratio for detecting relativistic effects, and we predict the optimal value for the limiting magnitude. Our work suggests that an optimal analysis could increase the signal-to-noise ratio by up to a factor of five compared to a standard one.
Using constraints and their value for optimization of large ODE systems
Domijan, Mirela; Rand, David A.
2015-01-01
We provide analytical tools to facilitate a rigorous assessment of the quality and value of the fit of a complex model to data. We use this to provide approaches to model fitting, parameter estimation, the design of optimization functions and experimental optimization. This is in the context where multiple constraints are used to select or optimize a large model defined by differential equations. We illustrate the approach using models of circadian clocks and the NF-κB signalling system. PMID:25673300
NASA Astrophysics Data System (ADS)
Bharti, P. K.; Khan, M. I.; Singh, Harbinder
2010-10-01
Off-line quality control is considered to be an effective approach to improve product quality at a relatively low cost. The Taguchi method is one of the conventional approaches for this purpose. Through this approach, engineers can determine a feasible combination of design parameters such that the variability of a product's response is reduced and the mean is close to the desired target. The traditional Taguchi method focused on ensuring good performance at the parameter design stage for a single quality characteristic, but most products and processes have multiple quality characteristics. The optimal parameter design minimizes the total quality loss for multiple quality characteristics. Several studies have presented approaches addressing multiple quality characteristics. Most of these papers were concerned with finding the parameter combination that maximizes the signal-to-noise (S/N) ratios. The results reveal two advantages of this approach: for a single quality characteristic the optimal parameter design coincides with that of the traditional Taguchi method, and for multiple quality characteristics the optimal design maximizes the reduction in total quality loss. This paper presents a literature review on solving multi-response problems in the Taguchi method and its successful implementation in various industries.
On the pilot's behavior of detecting a system parameter change
NASA Technical Reports Server (NTRS)
Morizumi, N.; Kimura, H.
1986-01-01
The reaction of a human pilot, engaged in compensatory control, to a sudden change in the controlled element's characteristics is described. Taking the case where the change manifests itself as a variance change of the monitored signal, it is shown that the detection time, defined to be the time elapsed until the pilot detects the change, is related to the monitored signal and its derivative. Then, the detection behavior is modeled by an optimal controller, an optimal estimator, and a variance-ratio test mechanism that is performed for the monitored signal and its derivative. Results of a digital simulation show that the pilot's detection behavior can be well represented by the model proposed here.
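A schematic of the variance-ratio test idea only (the authors' model also includes an optimal controller and estimator, omitted here): compare the variance of the monitored signal in a sliding window against a reference variance and declare a detection when the ratio exceeds a threshold. The window length, threshold and simulated change below are assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    fs, t_change = 100, 5.0                       # samples per second, time of the variance change (s)
    n = 10 * fs
    x = rng.standard_normal(n)
    x[int(t_change * fs):] *= 2.0                 # variance of the monitored signal quadruples at t_change

    ref_var = x[: 2 * fs].var()                   # reference variance from an early, pre-change segment
    win, threshold = fs, 2.5                      # 1 s sliding window; ratio threshold (assumed)

    detect_time = None
    for k in range(win, n):
        if x[k - win:k].var() / ref_var > threshold:
            detect_time = k / fs
            break

    print("change at", t_change, "s, detected at", detect_time, "s")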
Experimental optimization of directed field ionization
NASA Astrophysics Data System (ADS)
Liu, Zhimin Cheryl; Gregoric, Vincent C.; Carroll, Thomas J.; Noel, Michael W.
2017-04-01
The state distribution of an ensemble of Rydberg atoms is commonly measured using selective field ionization. The resulting time resolved ionization signal from a single energy eigenstate tends to spread out due to the multiple avoided Stark level crossings atoms must traverse on the way to ionization. The shape of the ionization signal can be modified by adding a perturbation field to the main field ramp. Here, we present experimental results of the manipulation of the ionization signal using a genetic algorithm. We address how both the genetic algorithm and the experimental parameters were adjusted to achieve an optimized result. This work was supported by the National Science Foundation under Grants No. 1607335 and No. 1607377.
Extended Kalman smoother with differential evolution technique for denoising of ECG signal.
Panigrahy, D; Sahu, P K
2016-09-01
Electrocardiogram (ECG) signals give a lot of information on the physiology of the heart. In reality, noise from various sources interferes with the ECG signal. To obtain correct information on the physiology of the heart, noise cancellation of the ECG signal is required. In this paper, the effectiveness of an extended Kalman smoother (EKS) with the differential evolution (DE) technique for noise cancellation of the ECG signal is investigated. DE is used as an automatic parameter selection method to select ten optimized components of the ECG signal, which are then used to construct a model ECG signal matching the real one. These parameters are used by the EKS for the development of the state equation and also for initialization of the EKS parameters. The EKS framework is used for denoising the single-channel ECG signal. The effectiveness of the proposed noise cancellation technique has been evaluated by adding white and colored Gaussian noise and real muscle artifact noise at different SNRs to visually clean ECG signals from the MIT-BIH arrhythmia database. The proposed noise cancellation technique shows better signal-to-noise ratio (SNR) improvement, lower mean square error (MSE) and percent of distortion (PRD) compared to other well-known methods.
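As a reduced sketch of the DE-driven parameter selection (three Gaussian components fitted to a synthetic beat rather than the paper's ten ECG components), scipy's differential evolution can estimate the component parameters that would then seed the smoother's state model; everything below is illustrative.

    import numpy as np
    from scipy.optimize import differential_evolution

    t = np.linspace(0.0, 1.0, 400)                          # one beat, normalized time

    def beat(params, t):
        # Sum of Gaussian waves; each component has (amplitude, centre, width)
        a = np.asarray(params).reshape(-1, 3)
        return sum(amp * np.exp(-0.5 * ((t - mu) / sig) ** 2) for amp, mu, sig in a)

    # Synthetic "clean" beat (three components standing in for P, QRS, T) plus noise
    true = [0.15, 0.2, 0.04,   1.0, 0.5, 0.02,   0.3, 0.75, 0.06]
    rng = np.random.default_rng(3)
    observed = beat(true, t) + 0.05 * rng.standard_normal(t.size)

    def cost(params):
        return np.mean((beat(params, t) - observed) ** 2)   # fit error that DE minimizes

    bounds = [(0.0, 2.0), (0.0, 1.0), (0.005, 0.2)] * 3     # (amplitude, centre, width) per component
    result = differential_evolution(cost, bounds, seed=3, maxiter=200, tol=1e-7)
    print("fitted parameters:", np.round(result.x, 3))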
NASA Astrophysics Data System (ADS)
Oby, Emily R.; Perel, Sagi; Sadtler, Patrick T.; Ruff, Douglas A.; Mischel, Jessica L.; Montez, David F.; Cohen, Marlene R.; Batista, Aaron P.; Chase, Steven M.
2016-06-01
Objective. A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain-computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). Approach. We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. Main Results. The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. Significance. How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent, and thus enhance BCI control. Further, by sweeping the detection threshold, one can gain insights into the topographic organization of the nearby neural tissue.
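The threshold-sweeping procedure can be caricatured as follows on synthetic data: generate voltage snippets whose threshold-crossing counts depend on a binary stimulus, sweep the detection threshold, and estimate the mutual information between crossing counts and the stimulus. The signal model and the simple histogram-based information estimator are assumptions, not the authors' analysis.

    import numpy as np

    rng = np.random.default_rng(4)
    n_trials, n_samples = 400, 200
    stim = rng.integers(0, 2, n_trials)                     # binary "stimulus" label per trial
    # Synthetic extracellular snippets: background noise plus stimulus-dependent spike-like deflections
    volts = rng.standard_normal((n_trials, n_samples))
    for i, s in enumerate(stim):
        for _ in range(rng.poisson(3 + 5 * s)):             # more events when stim == 1
            j = rng.integers(0, n_samples)
            volts[i, j] -= rng.uniform(3.0, 6.0)            # negative-going spikes

    def mutual_info(counts, labels, bins=8):
        joint, _, _ = np.histogram2d(counts, labels, bins=[bins, 2])
        p = joint / joint.sum()
        px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
        nz = p > 0
        return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

    # Sweep the detection threshold and quantify information carried by threshold-crossing counts
    for thr in (-2.0, -3.0, -4.0, -5.0):
        crossings = (volts < thr).sum(axis=1)
        print(f"threshold {thr:+.1f} sd: MI = {mutual_info(crossings, stim):.3f} bits")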
ERIC Educational Resources Information Center
Delaney, Michael F.
1984-01-01
This literature review on chemometrics (covering December 1981 to December 1983) is organized under these headings: personal supermicrocomputers; education and books; statistics; modeling and parameter estimation; resolution; calibration; signal processing; image analysis; factor analysis; pattern recognition; optimization; artificial…
Chaos-based wireless communication resisting multipath effects.
Yao, Jun-Liang; Li, Chen; Ren, Hai-Peng; Grebogi, Celso
2017-09-01
In an additive white Gaussian noise channel, chaos has been shown to be the optimal coherent communication waveform in the sense of using a very simple matched filter to maximize the signal-to-noise ratio. Recently, the Lyapunov exponent spectrum of chaotic signals after being transmitted through a wireless channel has been shown to be unaltered, paving the way for wireless communication using chaos. In wireless communication systems, inter-symbol interference caused by multipath propagation is one of the main obstacles to achieving a high bit transmission rate and a low bit-error rate (BER). How to resist the multipath effect is a fundamental problem in a chaos-based wireless communication system (CWCS). In this paper, a CWCS is built to transmit chaotic signals generated by a hybrid dynamical system and then to filter the received signals by using the corresponding matched filter to decrease the noise effect and to detect the binary information. We find that the multipath effect can be effectively resisted by regrouping the return map of the received signal and by setting the corresponding threshold based on the available information. We show that the optimal threshold is a function of the channel parameters and of the information symbols. In practice, the channel parameters are time-variant and the future information symbols are unavailable. In this case, a suboptimal threshold is proposed, and the BER using the suboptimal threshold is derived analytically. Simulation results show that the CWCS achieves a remarkable competitive performance even under inaccurate channel parameters.
An algorithm for control system design via parameter optimization. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sinha, P. K.
1972-01-01
An algorithm for design via parameter optimization has been developed for linear-time-invariant control systems based on the model reference adaptive control concept. A cost functional is defined to evaluate the system response relative to nominal, which involves in general the error between the system and nominal response, its derivatives and the control signals. A program for the practical implementation of this algorithm has been developed, with the computational scheme for the evaluation of the performance index based on Lyapunov's theorem for stability of linear invariant systems.
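In the same spirit (though with modern tools rather than the thesis's computational scheme), the sketch below tunes a single proportional gain so that an assumed first-order plant tracks a nominal reference response, minimizing a cost built from the squared error and a small control-effort penalty; the plant, reference model and weights are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize_scalar

    dt, T = 0.01, 5.0
    t = np.arange(0.0, T, dt)
    ref = 1.0 - np.exp(-2.0 * t)                 # nominal (reference-model) step response, assumed

    def closed_loop_response(k):
        # Unit-step response of plant 1/(s+1) under proportional gain k, Euler-discretized
        y, out, u_sq = 0.0, np.empty_like(t), 0.0
        for i in range(t.size):
            u = k * (1.0 - y)                    # control signal acting on the tracking error
            y += dt * (-y + u)
            out[i] = y
            u_sq += dt * u * u
        return out, u_sq

    def cost(k):
        y, u_sq = closed_loop_response(k)
        # cost functional: integrated squared error to the nominal response plus a small control penalty
        return dt * np.sum((y - ref) ** 2) + 1e-3 * u_sq

    res = minimize_scalar(cost, bounds=(0.1, 20.0), method="bounded")
    print("optimized gain k =", round(res.x, 3), "cost =", round(res.fun, 5))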
A theoretical investigation of chirp insonification of ultrasound contrast agents.
Barlow, Euan; Mulholland, Anthony J; Gachagan, Anthony; Nordon, Alison
2011-08-01
A theoretical investigation of second harmonic imaging of an Ultrasound Contrast Agent (UCA) under chirp insonification is considered. By solving the UCA's dynamical equation analytically, the effect that the chirp signal parameters and the UCA shell parameters have on the amplitude of the second harmonic frequency is examined. This allows optimal parameter values to be identified which maximise the UCA's second harmonic response. A relationship is found for the chirp parameters which ensures that a signal can be designed to resonate a UCA for a given set of shell parameters. It is also shown that the shell thickness, shell viscosity and shell elasticity parameter should be as small as realistically possible in order to maximise the second harmonic amplitude. Keywords: Keller-Herring, second harmonic, chirp, ultrasound contrast agent. Copyright © 2011 Elsevier B.V. All rights reserved.
Iterative optimization method for design of quantitative magnetization transfer imaging experiments.
Levesque, Ives R; Sled, John G; Pike, G Bruce
2011-09-01
Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.
On Revenue-Optimal Dynamic Auctions for Bidders with Interdependent Values
NASA Astrophysics Data System (ADS)
Constantin, Florin; Parkes, David C.
In a dynamic market, being able to update one's value based on information available to other bidders currently in the market can be critical to having profitable transactions. This is nicely captured by the model of interdependent values (IDV): a bidder's value can explicitly depend on the private information of other bidders. In this paper we present preliminary results about the revenue properties of dynamic auctions for IDV bidders. We adopt a computational approach to design single-item revenue-optimal dynamic auctions with known arrivals and departures but (private) signals that arrive online. In leveraging a characterization of truthful auctions, we present a mixed-integer programming formulation of the design problem. Although a discretization is imposed on bidder signals the solution is a mechanism applicable to continuous signals. The formulation size grows exponentially in the dependence of bidders' values on other bidders' signals. We highlight general properties of revenue-optimal dynamic auctions in a simple parametrized example and study the sensitivity of prices and revenue to model parameters.
Barnes, Samuel R; Ng, Thomas S C; Montagne, Axel; Law, Meng; Zlokovic, Berislav V; Jacobs, Russell E
2016-05-01
To determine optimal parameters for acquisition and processing of dynamic contrast-enhanced MRI (DCE-MRI) to detect small changes in near-normal, low blood-brain barrier (BBB) permeability. Using a contrast-to-noise ratio metric (K-CNR) for Ktrans precision and accuracy, the effects of kinetic model selection, scan duration, temporal resolution, signal drift, and length of baseline on the estimation of low permeability values were evaluated with simulations. The Patlak model was shown to give the highest K-CNR at low Ktrans. The Ktrans transition point, above which other models yielded superior results, was highly dependent on scan duration and tissue extravascular extracellular volume fraction (ve). The highest K-CNR for low Ktrans was obtained when Patlak model analysis was combined with long scan times (10-30 min), modest temporal resolution (<60 s/image), and long baseline scans (1-4 min). Signal drift as low as 3% was shown to affect the accuracy of Ktrans estimation with Patlak analysis. DCE acquisition and modeling parameters are interdependent and should be optimized together for the tissue being imaged. Appropriately optimized protocols can detect even the subtlest changes in BBB integrity and may be used to probe the earliest changes in neurodegenerative diseases such as Alzheimer's disease and multiple sclerosis. © 2015 Wiley Periodicals, Inc.
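The Patlak model mentioned above is linear in its two parameters, which is part of what makes it robust at low permeability; a small sketch with an assumed arterial input function and noise level illustrates the fit.

    import numpy as np

    t = np.arange(0.0, 20.0, 0.5)                          # minutes; modest temporal resolution
    Cp = 6.0 * (np.exp(-0.6 * t) - np.exp(-3.0 * t))       # assumed arterial input function (mM)
    Cp_int = np.concatenate(([0.0], np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * np.diff(t))))

    Ktrans_true, vp_true = 2e-3, 0.02                      # low-permeability tissue (min^-1), plasma fraction
    rng = np.random.default_rng(5)
    Ct = Ktrans_true * Cp_int + vp_true * Cp + 5e-4 * rng.standard_normal(t.size)

    # Patlak model is linear in (Ktrans, vp): Ct(t) = Ktrans * int_0^t Cp + vp * Cp(t)
    A = np.column_stack([Cp_int, Cp])
    Ktrans_fit, vp_fit = np.linalg.lstsq(A, Ct, rcond=None)[0]
    print(f"Ktrans: true {Ktrans_true:.4f}, fit {Ktrans_fit:.4f};  vp: true {vp_true:.3f}, fit {vp_fit:.3f}")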
Liu, Yan-Jun; Tang, Li; Tong, Shaocheng; Chen, C L Philip; Li, Dong-Juan
2015-01-01
Based on the neural network (NN) approximator, an online reinforcement learning algorithm is proposed for a class of affine multiple-input multiple-output (MIMO) nonlinear discrete-time systems with unknown functions and disturbances. In the design procedure, two networks are provided: one is an action network to generate an optimal control signal, and the other is a critic network to approximate the cost function. An optimal control signal and adaptation laws can be generated based on the two NNs. In previous approaches, the weights of the critic and action networks are updated based on the gradient descent rule and the estimates of the optimal weight vectors are directly adjusted in the design. Consequently, compared with the existing results, the main contributions of this paper are: 1) only two parameters need to be adjusted, so the number of adaptation laws is smaller than in previous results; and 2) the updating of parameters does not depend on the number of subsystems for MIMO systems, and the tuning rules are replaced by adjusting the norms of the optimal weight vectors in both the action and critic networks. It is proven that the tracking errors, the adaptation laws, and the control inputs are uniformly bounded using the Lyapunov analysis method. Simulation examples are employed to illustrate the effectiveness of the proposed algorithm.
Evolutionary design optimization of traffic signals applied to Quito city.
Armas, Rolando; Aguirre, Hernán; Daolio, Fabio; Tanaka, Kiyoshi
2017-01-01
This work applies evolutionary computation and machine learning methods to study the transportation system of Quito from a design optimization perspective. It couples an evolutionary algorithm with a microscopic transport simulator and uses the outcome of the optimization process to deepen our understanding of the problem and gain knowledge about the system. The work focuses on the optimization of a large number of traffic lights deployed on a wide area of the city and studies their impact on travel time, emissions and fuel consumption. An evolutionary algorithm with specialized mutation operators is proposed to search effectively in large decision spaces, evolving small populations for a short number of generations. The effects of the operators combined with a varying mutation schedule are studied, and an analysis of the parameters of the algorithm is also included. In addition, hierarchical clustering is performed on the best solutions found in several runs of the algorithm. An analysis of signal clusters and their geolocation, estimation of fuel consumption, spatial analysis of emissions, and an analysis of signal coordination provide an overall picture of the systemic effects of the optimization process.
A robust approach to optimal matched filter design in ultrasonic non-destructive evaluation (NDE)
NASA Astrophysics Data System (ADS)
Li, Minghui; Hayward, Gordon
2017-02-01
The matched filter has been demonstrated to be a powerful yet efficient technique to enhance defect detection and imaging in ultrasonic non-destructive evaluation (NDE) of coarse-grain materials, provided that the filter is properly designed and optimized. In the literature, in order to accurately approximate the defect echoes, the design utilized the real excitation signals, which made it time consuming and less straightforward to implement in practice. In this paper, we present a more robust and flexible approach to optimal matched filter design using simulated excitation signals; the control parameters are chosen and optimized based on the actual array transducer, the transmitter-receiver system response, and the test sample, so that the filter response is optimized for and depends on the material characteristics. Experiments on industrial samples are conducted and the results confirm the great benefits of the method.
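A bare-bones illustration of the matched-filter step (with an assumed tone-burst excitation and noise level, not the paper's transducer or sample model): correlating the received trace with the time-reversed excitation pulls a weak echo out of grain-like noise.

    import numpy as np

    fs, fc, cycles = 100e6, 5e6, 10                           # sample rate, centre frequency, burst length
    t = np.arange(0, cycles / fc, 1 / fs)
    pulse = np.sin(2 * np.pi * fc * t) * np.hanning(t.size)   # simulated excitation (Hann-windowed tone burst)

    rng = np.random.default_rng(6)
    trace = 0.5 * rng.standard_normal(4000)                   # coarse-grain backscatter modeled as noise
    delay = 2500
    trace[delay:delay + pulse.size] += 0.5 * pulse            # weak defect echo buried in the noise

    # Matched filter: correlate the received trace with the time-reversed excitation
    mf = np.convolve(trace, pulse[::-1], mode="same")
    print("approximate echo centre:", delay + pulse.size // 2,
          "matched-filter peak index:", int(np.argmax(np.abs(mf))))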
The fully actuated traffic control problem solved by global optimization and complementarity
NASA Astrophysics Data System (ADS)
Ribeiro, Isabel M.; de Lurdes de Oliveira Simões, Maria
2016-02-01
Global optimization and complementarity are used to determine the signal timing for fully actuated traffic control, regarding effective green and red times on each cycle. The average values of these parameters can be used to estimate the control delay of vehicles. In this article, a two-phase queuing system for a signalized intersection is outlined, based on the principle of minimization of the total waiting time for the vehicles. The underlying model results in a linear program with linear complementarity constraints, solved by a sequential complementarity algorithm. Departure rates of vehicles during green and yellow periods were treated as deterministic, while arrival rates of vehicles were assumed to follow a Poisson distribution. Several traffic scenarios were created and solved. The numerical results reveal that it is possible to use global optimization and complementarity over a reasonable number of cycles and determine with efficiency effective green and red times for a signalized intersection.
SNR-optimized phase-sensitive dual-acquisition turbo spin echo imaging: a fast alternative to FLAIR.
Lee, Hyunyeol; Park, Jaeseok
2013-07-01
Phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo imaging was recently introduced, producing high-resolution isotropic cerebrospinal fluid attenuated brain images without long inversion recovery preparation. Despite the advantages, the weighted-averaging-based technique suffers from noise amplification resulting from different levels of cerebrospinal fluid signal modulations over the two acquisitions. The purpose of this work is to develop a signal-to-noise ratio-optimized version of the phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo. Variable refocusing flip angles in the first acquisition are calculated using a three-step prescribed signal evolution while those in the second acquisition are calculated using a two-step pseudo-steady state signal transition with a high flip-angle pseudo-steady state at a later portion of the echo train, balancing the levels of cerebrospinal fluid signals in both the acquisitions. Low spatial frequency signals are sampled during the high flip-angle pseudo-steady state to further suppress noise. Numerical simulations of the Bloch equations were performed to evaluate signal evolutions of brain tissues along the echo train and optimize imaging parameters. In vivo studies demonstrate that compared with conventional phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo, the proposed optimization yields 74% increase in apparent signal-to-noise ratio for gray matter and 32% decrease in imaging time. The proposed method can be a potential alternative to conventional fluid-attenuated imaging. Copyright © 2012 Wiley Periodicals, Inc.
Henriques, David; Rocha, Miguel; Saez-Rodriguez, Julio; Banga, Julio R.
2015-01-01
Motivation: Systems biology models can be used to test new hypotheses formulated on the basis of previous knowledge or new experimental data, contradictory with a previously existing model. New hypotheses often come in the shape of a set of possible regulatory mechanisms. This search is usually not limited to finding a single regulation link, but rather a combination of links subject to great uncertainty or no information about the kinetic parameters. Results: In this work, we combine a logic-based formalism, to describe all the possible regulatory structures for a given dynamic model of a pathway, with mixed-integer dynamic optimization (MIDO). This framework aims to simultaneously identify the regulatory structure (represented by binary parameters) and the real-valued parameters that are consistent with the available experimental data, resulting in a logic-based differential equation model. The alternative to this would be to perform real-valued parameter estimation for each possible model structure, which is not tractable for models of the size presented in this work. The performance of the method presented here is illustrated with several case studies: a synthetic pathway problem of signaling regulation, a two-component signal transduction pathway in bacterial homeostasis, and a signaling network in liver cancer cells. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: julio@iim.csic.es or saezrodriguez@ebi.ac.uk PMID:26002881
NASA Astrophysics Data System (ADS)
Quan, Naicheng; Zhang, Chunmin; Mu, Tingkui
2018-05-01
We address the optimal configuration of a partial Mueller matrix polarimeter used to determine the ellipsometric parameters in the presence of additive Gaussian noise and signal-dependent shot noise. The numerical results show that, for a PSG/PSA consisting of a variable retarder and a fixed polarizer, a detection process immune to these two types of noise can be optimally composed of a 121.2° retardation with a pair of azimuths of ±71.34° and a 144.48° retardation with a pair of azimuths of ±31.56° for the measurement of four Mueller matrix elements. Compared with existing configurations, the configuration presented in this paper can effectively decrease the measurement variance and thus statistically improve the measurement precision of the ellipsometric parameters.
Radar signal transmission and switching over optical networks
NASA Astrophysics Data System (ADS)
Esmail, Maged A.; Ragheb, Amr; Seleem, Hussein; Fathallah, Habib; Alshebeili, Saleh
2018-03-01
In this paper, we experimentally demonstrate a radar signal distribution over optical networks. The use of fiber enables us to distribute radar signals to distant sites with a low power loss. Moreover, fiber networks can reduce the radar system cost, by sharing precise and expensive radar signal generation and processing equipment. In order to overcome the bandwidth challenges in electrical switches, a semiconductor optical amplifier (SOA) is used as an all-optical device for wavelength conversion to the desired port (or channel) of a wavelength division multiplexing (WDM) network. Moreover, the effect of chromatic dispersion in double sideband (DSB) signals is combated by generating optical single sideband (OSSB) signals. The optimal values of the SOA device parameters required to generate an OSSB with a high sideband suppression ratio (SSR) are determined. We considered various parameters such as injection current, pump power, and probe power. In addition, the effect of signal wavelength conversion and transmission over fiber are studied in terms of signal dynamic range.
Non-linear auto-regressive models for cross-frequency coupling in neural time series
Tallot, Lucille; Grabot, Laetitia; Doyère, Valérie; Grenier, Yves; Gramfort, Alexandre
2017-01-01
We address the issue of reliably detecting and quantifying cross-frequency coupling (CFC) in neural time series. Based on non-linear auto-regressive models, the proposed method provides a generative and parametric model of the time-varying spectral content of the signals. As this method models the entire spectrum simultaneously, it avoids the pitfalls related to incorrect filtering or the use of the Hilbert transform on wide-band signals. As the model is probabilistic, it also provides a score of the model “goodness of fit” via the likelihood, enabling easy and legitimate model selection and parameter comparison; this data-driven feature is unique to our model-based approach. Using three datasets obtained with invasive neurophysiological recordings in humans and rodents, we demonstrate that these models are able to replicate previous results obtained with other metrics, but also reveal new insights such as the influence of the amplitude of the slow oscillation. Using simulations, we demonstrate that our parametric method can reveal neural couplings with shorter signals than non-parametric methods. We also show how the likelihood can be used to find optimal filtering parameters, suggesting new properties on the spectrum of the driving signal, but also to estimate the optimal delay between the coupled signals, enabling a directionality estimation in the coupling. PMID:29227989
Gerdes, Lars; Iwobi, Azuka; Busch, Ulrich; Pecoraro, Sven
2016-01-01
Digital PCR in droplets (ddPCR) is an emerging method for more and more applications in DNA (and RNA) analysis. Special requirements when establishing ddPCR for analysis of genetically modified organisms (GMO) in a laboratory include the choice between validated official qPCR methods and the optimization of these assays for a ddPCR format. Differentiation between droplets with positive reaction and negative droplets, that is setting of an appropriate threshold, can be crucial for a correct measurement. This holds true in particular when independent transgene and plant-specific reference gene copy numbers have to be combined to determine the content of GM material in a sample. Droplets which show fluorescent units ranging between those of explicit positive and negative droplets are called ‘rain’. Signals of such droplets can hinder analysis and the correct setting of a threshold. In this manuscript, a computer-based algorithm has been carefully designed to evaluate assay performance and facilitate objective criteria for assay optimization. Optimized assays in return minimize the impact of rain on ddPCR analysis. We developed an Excel based ‘experience matrix’ that reflects the assay parameters of GMO ddPCR tests performed in our laboratory. Parameters considered include singleplex/duplex ddPCR, assay volume, thermal cycler, probe manufacturer, oligonucleotide concentration, annealing/elongation temperature, and a droplet separation evaluation. We additionally propose an objective droplet separation value which is based on both absolute fluorescence signal distance of positive and negative droplet populations and the variation within these droplet populations. The proposed performance classification in the experience matrix can be used for a rating of different assays for the same GMO target, thus enabling employment of the best suited assay parameters. Main optimization parameters include annealing/extension temperature and oligonucleotide concentrations. The droplet separation value allows for easy and reproducible assay performance evaluation. The combination of separation value with the experience matrix simplifies the choice of adequate assay parameters for a given GMO event. PMID:27077048
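The exact separation formula is not reproduced in the abstract above, so the snippet below is only a plausible sketch of the idea on simulated droplet amplitudes: the distance between the positive and negative fluorescence populations scaled by their combined spread, with "rain" droplets falling in between.

    import numpy as np

    rng = np.random.default_rng(7)
    # Simulated droplet fluorescence amplitudes: a negative and a positive population plus some "rain"
    negatives = rng.normal(1000, 80, 12000)
    positives = rng.normal(4500, 150, 800)
    rain = rng.uniform(1500, 4000, 60)
    amplitudes = np.concatenate([negatives, positives, rain])

    def separation_value(amp, threshold):
        # Population distance divided by combined spread -- one plausible separation score (assumption)
        neg, pos = amp[amp < threshold], amp[amp >= threshold]
        return (pos.mean() - neg.mean()) / (pos.std() + neg.std())

    threshold = 2500.0                       # assumed threshold between populations
    print("separation value:", round(separation_value(amplitudes, threshold), 1))
    # Higher values indicate well-separated populations and little rain; low values flag assays to re-optimize.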
Autopilot for frequency-modulation atomic force microscopy.
Kuchuk, Kfir; Schlesinger, Itai; Sivan, Uri
2015-10-01
One of the most challenging aspects of operating an atomic force microscope (AFM) is finding optimal feedback parameters. This statement applies particularly to frequency-modulation AFM (FM-AFM), which utilizes three feedback loops to control the cantilever excitation amplitude, cantilever excitation frequency, and z-piezo extension. These loops are regulated by a set of feedback parameters, tuned by the user to optimize stability, sensitivity, and noise in the imaging process. Optimization of these parameters is difficult due to the coupling between the frequency and z-piezo feedback loops by the non-linear tip-sample interaction. Four proportional-integral (PI) parameters and two lock-in parameters regulating these loops require simultaneous optimization in the presence of a varying unknown tip-sample coupling. Presently, this optimization is done manually in a tedious process of trial and error. Here, we report on the development and implementation of an algorithm that computes the control parameters automatically. The algorithm reads the unperturbed cantilever resonance frequency, its quality factor, and the z-piezo driving signal power spectral density. It analyzes the poles and zeros of the total closed loop transfer function, extracts the unknown tip-sample transfer function, and finds four PI parameters and two lock-in parameters for the frequency and z-piezo control loops that optimize the bandwidth and step response of the total system. Implementation of the algorithm in a home-built AFM shows that the calculated parameters are consistently excellent and rarely require further tweaking by the user. The new algorithm saves the precious time of experienced users, facilitates utilization of FM-AFM by casual users, and removes the main hurdle on the way to fully automated FM-AFM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manungu Kiveni, Joseph
2012-12-01
This dissertation describes the results of a WIMP search using CDMS II data sets accumulated at the Soudan Underground Laboratory in Minnesota. Results from the original analysis of these data were published in 2009; two events were observed in the signal region with an expected leakage of 0.9 events. Further investigation revealed an issue with the ionization-pulse reconstruction algorithm leading to a software upgrade and a subsequent reanalysis of the data. As part of the reanalysis, I performed an advanced discrimination technique to better distinguish (potential) signal events from backgrounds using a 5-dimensional chi-square method. This data analysis technique combines the event information recorded for each WIMP-search event to derive a background discrimination parameter capable of reducing the expected background to less than one event, while maintaining high efficiency for signal events. Furthermore, optimizing the cut positions of this 5-dimensional chi-square parameter for the 14 viable germanium detectors yields an improved expected sensitivity to WIMP interactions relative to previous CDMS results. This dissertation describes my improved (and optimized) discrimination technique and the results obtained from a blind application to the reanalyzed CDMS II WIMP-search data.
Adam, Asrul; Mohd Tumari, Mohd Zaidi; Mohamad, Mohd Saberi
2014-01-01
Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has established the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in the time domain. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework searches for the combination of available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model. PMID:25243236
Huang, Wentao; Sun, Hongjian; Wang, Weijie
2017-06-03
Mechanical equipment is the heart of industry. For this reason, mechanical fault diagnosis has drawn considerable attention. Given the rich information hidden in fault vibration signals, the processing and analysis techniques of vibration signals have become a crucial research issue in the field of mechanical fault diagnosis. Based on the theory of sparse decomposition, Selesnick proposed a novel nonlinear signal processing method: resonance-based sparse signal decomposition (RSSD). Since being put forward, RSSD has become widely recognized, and many RSSD-based methods have been developed to guide mechanical fault diagnosis. This paper attempts to summarize and review the theoretical developments and application advances of RSSD in mechanical fault diagnosis, and to provide a more comprehensive reference for those interested in RSSD and mechanical fault diagnosis. Following a brief introduction of RSSD's theoretical foundation, applications of RSSD in mechanical fault diagnosis are categorized, according to different optimization directions, into five aspects: original RSSD, parameter-optimized RSSD, subband-optimized RSSD, integrated optimized RSSD, and RSSD combined with other methods. On this basis, outstanding issues in current RSSD research are also pointed out, along with corresponding instructional solutions. We hope this review will provide an insightful reference for researchers and readers who are interested in RSSD and mechanical fault diagnosis.
Optimal time points sampling in pathway modelling.
Hu, Shiyan
2004-01-01
Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the selection of time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulties of selecting good initial values and getting stuck in local optima that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
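To make the idea of variance-minimizing time points concrete, the sketch below evaluates a Fisher-information criterion for candidate designs of a toy one-parameter decay model and picks the design with the smallest predicted parameter variance. The model, noise level and candidate grid are assumptions for illustration; the paper's quantum-inspired evolutionary search is replaced here by exhaustive enumeration.

```python
# Illustrative sketch (not the paper's algorithm): choosing sampling times that
# minimize parameter variance for a toy decay model y(t) = exp(-k * t).
import numpy as np
from itertools import combinations

k_nominal, sigma = 0.5, 0.05            # assumed nominal parameter and noise level
candidates = np.linspace(0.2, 10.0, 25)

def sensitivity(t, k=k_nominal):
    # d/dk of exp(-k * t)
    return -t * np.exp(-k * t)

def fim(times):
    s = sensitivity(np.asarray(times))
    return (s @ s) / sigma**2            # scalar Fisher information for one parameter

# Exhaustively score all 3-point designs and keep the most informative one.
best = max(combinations(candidates, 3), key=fim)
print("optimal 3-point design:", np.round(best, 2),
      "estimated std of k:", 1 / np.sqrt(fim(best)))
```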
NASA Astrophysics Data System (ADS)
Wallace, Tess E.; Manavaki, Roido; Graves, Martin J.; Patterson, Andrew J.; Gilbert, Fiona J.
2017-01-01
Physiological fluctuations are expected to be a dominant source of noise in blood oxygenation level-dependent (BOLD) magnetic resonance imaging (MRI) experiments to assess tumour oxygenation and angiogenesis. This work investigates the impact of various physiological noise regressors: retrospective image correction (RETROICOR), heart rate (HR) and respiratory volume per unit time (RVT), on signal variance and the detection of BOLD contrast in the breast in response to a modulated respiratory stimulus. BOLD MRI was performed at 3 T in ten volunteers at rest and during cycles of oxygen and carbogen gas breathing. RETROICOR was optimized using F-tests to determine which cardiac and respiratory phase terms accounted for a significant amount of signal variance. A nested regression analysis was performed to assess the effect of RETROICOR, HR and RVT on the model fit residuals, temporal signal-to-noise ratio, and BOLD activation parameters. The optimized RETROICOR model accounted for the largest amount of signal variance (ΔR²adj = 3.3 ± 2.1%) and improved the detection of BOLD activation (P = 0.002). Inclusion of HR and RVT regressors explained additional signal variance, but had a negative impact on activation parameter estimation (P < 0.001). Fluctuations in HR and RVT appeared to be correlated with the stimulus and may contribute to apparent BOLD signal reactivity.
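A minimal sketch of RETROICOR-style nuisance regression is given below: Fourier regressors are built from cardiac and respiratory phases and regressed out of a synthetic BOLD time course together with a stimulus regressor. The phases, TR, amplitudes and model order are invented for illustration; this is not the study's analysis pipeline.

```python
# Minimal sketch of RETROICOR-style nuisance regression, assuming cardiac and
# respiratory phase time series are already available (synthetic here).
import numpy as np

rng = np.random.default_rng(1)
n = 300                                         # number of dynamic volumes
t = np.arange(n) * 2.0                          # TR = 2 s (assumed)
phi_card = 2 * np.pi * ((1.05 * t) % 1.0)       # toy (aliased) cardiac phase
phi_resp = 2 * np.pi * ((0.27 * t) % 1.0)       # toy (aliased) respiratory phase

# BOLD time course = stimulus response + physiological fluctuations + noise
stimulus = np.sin(2 * np.pi * t / 120.0)
signal = 2.0 * stimulus + 0.8 * np.cos(phi_card) + 0.5 * np.sin(phi_resp) \
         + 0.3 * rng.normal(size=n)

def fourier_regressors(phase, order=2):
    # cos/sin terms of the phase up to the requested harmonic order
    return np.column_stack([f(m * phase) for m in range(1, order + 1)
                            for f in (np.cos, np.sin)])

X = np.column_stack([np.ones(n), stimulus,
                     fourier_regressors(phi_card), fourier_regressors(phi_resp)])
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
residual = signal - X @ beta
print("stimulus beta:", round(beta[1], 3),
      "residual std after regression:", round(residual.std(), 3))
```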
Control and optimization system and method for chemical looping processes
Lou, Xinsheng; Joshi, Abhinaya; Lei, Hao
2014-06-24
A control system for optimizing a chemical loop system includes one or more sensors for measuring one or more parameters in a chemical loop. The sensors are disposed on or in a conduit positioned in the chemical loop. The sensors generate one or more data signals representative of an amount of solids in the conduit. The control system includes a data acquisition system in communication with the sensors and a controller in communication with the data acquisition system. The data acquisition system receives the data signals and the controller generates the control signals. The controller is in communication with one or more valves positioned in the chemical loop. The valves are configured to regulate a flow of the solids through the chemical loop.
Complexity in congestive heart failure: A time-frequency approach
NASA Astrophysics Data System (ADS)
Banerjee, Santo; Palit, Sanjay K.; Mukherjee, Sayan; Ariffin, MRK; Rondoni, Lamberto
2016-03-01
Reconstruction of phase space is an effective method to quantify the dynamics of a signal or a time series. Various phase space reconstruction techniques have been investigated. However, open issues remain regarding optimal reconstruction and the best possible choice of the reconstruction parameters. This research introduces the ideas of gradient cross recurrence (GCR) and mean gradient cross recurrence density, which show that reconstructions in the time-frequency domain preserve more information about the dynamics than the optimal reconstructions in the time domain. This analysis is further extended to ECG signals of normal and congestive heart failure patients. Using another newly introduced measure, the gradient cross recurrence period density entropy, the two classes of ECG signals can be classified with a proper threshold. This analysis can be applied to quantifying and distinguishing biomedical and other nonlinear signals.
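As background for the recurrence-based measures above, the sketch below shows a plain time-delay embedding and a thresholded cross recurrence matrix for two synthetic signals. The delay, embedding dimension and threshold are illustrative defaults, and the gradient-based quantities introduced in the paper are not reproduced here.

```python
# Sketch: time-delay embedding and a (cross) recurrence matrix for two signals.
import numpy as np

def embed(x, dim=3, tau=5):
    # Stack delayed copies of x into phase-space vectors.
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def cross_recurrence(x, y, dim=3, tau=5, eps=0.2):
    X, Y = embed(x, dim, tau), embed(y, dim, tau)
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return (d < eps).astype(int)            # 1 where the trajectories come close

t = np.linspace(0, 20 * np.pi, 800)
x = np.sin(t)
y = np.sin(t + 0.3) + 0.05 * np.random.default_rng(2).normal(size=t.size)
CR = cross_recurrence(x, y)
print("cross-recurrence density:", CR.mean())
```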
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, He; Lv, Hongliang; Guo, Hui, E-mail: hguan@stu.xidian.edu.cn
2015-11-21
Impact ionization affects the radio-frequency (RF) behavior of high-electron-mobility transistors (HEMTs), which have narrow-bandgap semiconductor channels, and this necessitates complex parameter extraction procedures for HEMT modeling. In this paper, an enhanced small-signal equivalent circuit model is developed to investigate the impact ionization, and an improved method is presented in detail for direct extraction of intrinsic parameters using two-step measurements in low-frequency and high-frequency regimes. The practicability of the enhanced model and the proposed direct parameter extraction method are verified by comparing the simulated S-parameters with published experimental data from an InAs/AlSb HEMT operating over a wide frequency range. The results demonstrate that the enhanced model with optimal intrinsic parameter values that were obtained by the direct extraction approach can effectively characterize the effects of impact ionization on the RF performance of HEMTs.
Fiedler, Anna; Raeth, Sebastian; Theis, Fabian J; Hausser, Angelika; Hasenauer, Jan
2016-08-22
Ordinary differential equation (ODE) models are widely used to describe (bio-)chemical and biological processes. To enhance the predictive power of these models, their unknown parameters are estimated from experimental data. These experimental data are mostly collected in perturbation experiments, in which the processes are pushed out of steady state by applying a stimulus. The fact that the initial condition is a steady state of the unperturbed process provides valuable information, as it restricts the dynamics of the process and thereby the parameters. However, implementing steady-state constraints in the optimization often results in convergence problems. In this manuscript, we propose two new methods for solving optimization problems with steady-state constraints. The first method exploits ideas from optimization algorithms on manifolds and introduces a retraction operator, essentially reducing the dimension of the optimization problem. The second method is based on the continuous analogue of the optimization problem. This continuous analogue is an ODE whose equilibrium points are the optima of the constrained optimization problem. This equivalence enables the use of adaptive numerical methods for solving optimization problems with steady-state constraints. Both methods are tailored to the problem structure and exploit the local geometry of the steady-state manifold and its stability properties. A parameterization of the steady-state manifold is not required. The efficiency and reliability of the proposed methods are evaluated using one toy example and two applications. The first application example uses published data while the second uses a novel dataset for Raf/MEK/ERK signaling. The proposed methods demonstrated better convergence properties than state-of-the-art methods employed in systems and computational biology. Furthermore, the average computation time per converged start is significantly lower. In addition to the theoretical results, the analysis of the dataset for Raf/MEK/ERK signaling provides novel biological insights regarding the existence of feedback regulation. Many optimization problems considered in systems and computational biology are subject to steady-state constraints. While most optimization methods have convergence problems if these steady-state constraints are highly nonlinear, the methods presented recover the convergence properties of optimizers that can exploit an analytical expression for the parameter-dependent steady state. This renders them an excellent alternative to methods that are currently employed in systems and computational biology.
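The "continuous analogue" idea can be illustrated with a toy fit: write an ODE in parameter space whose right-hand side is the negative objective gradient, so that its equilibrium is the optimum, and let an adaptive ODE solver track it. The one-parameter model, the analytically known steady state and the synthetic data below are assumptions purely for illustration; the paper's retraction-based and constrained formulations are not reproduced.

```python
# Toy sketch of the "continuous analogue" idea: an ODE in parameter space whose
# equilibrium coincides with the optimum of the fitting problem, solved with an
# adaptive integrator. Model and data are synthetic.
import numpy as np
from scipy.integrate import solve_ivp

# Process: dx/dt = -k*x + u, so the (parameter-dependent) steady state is x_ss = u/k.
u_levels = np.array([1.0, 2.0, 4.0])
k_true = 0.8
data = u_levels / k_true + 0.02 * np.random.default_rng(3).normal(size=3)

def objective_grad(k):
    # d/dk of 0.5 * sum((u/k - data)^2)
    resid = u_levels / k - data
    return np.sum(resid * (-u_levels / k**2))

# Continuous analogue: dk/ds = -grad J(k); its equilibrium is the least-squares optimum.
sol = solve_ivp(lambda s, k: [-objective_grad(k[0])], [0, 50.0], [2.0],
                method="LSODA", rtol=1e-8)
print("estimated k:", sol.y[0, -1], " (true 0.8)")
```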
NASA Astrophysics Data System (ADS)
Braun, David J.; Sutas, Andrius; Vijayakumar, Sethu
2017-01-01
Theory predicts that parametrically excited oscillators, tuned to operate under resonant conditions, are capable of the large-amplitude oscillation useful in diverse applications such as signal amplification, communication, and analog computation. However, due to amplitude saturation caused by nonlinearity, lack of robustness to model uncertainty, and limited sensitivity to parameter modulation, these oscillators require fine-tuning and strong modulation to generate robust large-amplitude oscillation. Here we present a principle of self-tuning parametric feedback excitation that alleviates the above-mentioned limitations. This is achieved using a minimalistic control implementation that performs (i) self-tuning (slow parameter adaptation) and (ii) feedback pumping (fast parameter modulation), without sophisticated signal processing of past observations. The proposed approach provides near-optimal amplitude maximization without requiring model-based control computation, previously perceived as inevitable for implementing optimal control principles in practical applications. Experimental implementation of the theory shows that the oscillator tunes itself near the onset of dynamic bifurcation to achieve extreme sensitivity to small resonant parametric perturbations. As a result, it achieves large-amplitude oscillations by capitalizing on the effect of nonlinearity, despite substantial model uncertainties and strong unforeseen external perturbations. We envision the present finding to provide an effective and robust approach to parametric excitation in real-world applications.
Tuning Parameters in Heuristics by Using Design of Experiments Methods
NASA Technical Reports Server (NTRS)
Arin, Arif; Rabadi, Ghaith; Unal, Resit
2010-01-01
With the growing complexity of today's large-scale problems, it has become more difficult to find optimal solutions by using exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, most heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then turns into "finding the best parameter setting" for the heuristics to solve the problems efficiently and in a timely manner. The One-Factor-At-a-Time (OFAT) approach for parameter tuning neglects the interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. After verification runs using the tuned parameter settings, preliminary results show that optimal solutions for multiple instances were found efficiently.
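A compact illustration of the factorial-design idea is sketched below: a 2-level full factorial over two hypothetical GA parameters is run, and a coded regression model estimates the main and interaction effects. The placeholder run_ga function stands in for actually running the GA on benchmark instances; parameter names, levels and the response surface are invented for illustration.

```python
# Sketch of a 2-level full factorial experiment for tuning two GA parameters
# (population size, mutation rate); run_ga is a placeholder response function.
import numpy as np
from itertools import product

levels = {"pop_size": (50, 200), "mutation_rate": (0.01, 0.10)}

def run_ga(pop_size, mutation_rate):
    # Placeholder: in a real study this would run the GA and report tardiness.
    rng = np.random.default_rng(int(pop_size * 1000 + mutation_rate * 1e5))
    return 500 - 0.5 * pop_size + 800 * mutation_rate + rng.normal(scale=5)

# Coded design matrix (-1/+1) for the 2^2 full factorial, with interaction term.
runs, y = [], []
for a, b in product((-1, 1), repeat=2):
    p = levels["pop_size"][(a + 1) // 2]
    m = levels["mutation_rate"][(b + 1) // 2]
    runs.append([1, a, b, a * b])
    y.append(run_ga(p, m))

coef, *_ = np.linalg.lstsq(np.array(runs, float), np.array(y), rcond=None)
print("effects [intercept, pop, mutation, interaction]:", np.round(coef, 2))
# The signs of the main effects point to the better level of each parameter.
```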
NASA Astrophysics Data System (ADS)
Kozioł, Michał
2017-10-01
The article presents a parametric model describing the registered spectra of optical radiation emitted by electrical discharges generated in needle-needle and needle-plate systems and in a surface-discharge system. Generation of the electrical discharges and registration of the emitted radiation were carried out in three different electrical insulating oils: factory-new, used, and used with air bubbles. For registration of the optical spectra in the ultraviolet, visible and near-infrared ranges, a high-resolution spectrophotometer was used. The proposed mathematical model was developed in a regression procedure using a Gaussian-sigmoid type function. The dependent variable was the intensity of the recorded optical signals. In order to estimate the optimal parameters of the model, an evolutionary algorithm was used. The optimization procedure was performed in the Matlab environment. To quantify how well the theoretical regression function matches the empirical data, the coefficient of determination R² was applied.
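The sketch below mirrors this kind of fit in Python rather than Matlab: a Gaussian-times-sigmoid curve (one plausible reading of a "Gaussian-sigmoid type function") is fitted to a synthetic spectrum-like profile with an evolutionary optimizer, and R² is reported. The functional form, wavelength grid and bounds are assumptions, not the article's model.

```python
# Illustrative fit of a Gaussian-times-sigmoid regression curve to a spectrum-like
# signal using an evolutionary optimizer; the functional form is an assumption.
import numpy as np
from scipy.optimize import differential_evolution

def model(lmbda, a, mu, s, c, k):
    gauss = a * np.exp(-0.5 * ((lmbda - mu) / s) ** 2)
    sigm = 1.0 / (1.0 + np.exp(-k * (lmbda - c)))
    return gauss * sigm

lmbda = np.linspace(300, 900, 400)                 # wavelength grid, nm
rng = np.random.default_rng(4)
y = model(lmbda, 1.0, 520, 60, 420, 0.05) + 0.02 * rng.normal(size=lmbda.size)

def sse(p):
    return np.sum((model(lmbda, *p) - y) ** 2)

bounds = [(0.1, 5), (350, 800), (5, 200), (300, 900), (0.001, 0.5)]
res = differential_evolution(sse, bounds, seed=0, tol=1e-8)
r2 = 1 - res.fun / np.sum((y - y.mean()) ** 2)
print("fitted parameters:", np.round(res.x, 3), " R^2:", round(r2, 4))
```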
NASA Astrophysics Data System (ADS)
Kumar, Ajay; Raghuwanshi, Sanjeev Kumar
2016-06-01
Optical switching is one of the most essential phenomena in the optical domain. Electro-optic effect-based switching phenomena can be applied to generate effective combinational and sequential logic circuits. Digital computation in the optical domain offers the considerable advantages of optical communication technology, e.g. immunity to electromagnetic interference, compact size, signal security, parallel computing and larger bandwidth. The paper describes efficient techniques to implement a single-bit magnitude comparator and a 1's complement calculator using the electro-optic effect. The proposed techniques are simulated in the MATLAB software, and their suitability is verified using the highly reliable Opti-BPM software. The circuits are analyzed in order to specify optimized device parameters with respect to performance-affecting quantities such as crosstalk, extinction ratio and signal losses through the curved and straight waveguide sections.
Limit characteristics of digital optoelectronic processor
NASA Astrophysics Data System (ADS)
Kolobrodov, V. G.; Tymchik, G. S.; Kolobrodov, M. S.
2018-01-01
In this article, the limiting characteristics of a digital optoelectronic processor are explored. The limits are defined by diffraction effects and the matrix structure of the devices for input and output of optical signals. The purpose of the present research is to optimize the parameters of the processor's components. The developed physical and mathematical model of the DOEP allowed us to establish the limiting characteristics of the processor, restricted by diffraction effects and the array structure of the input and output equipment, and to optimize the parameters of the processor's components. The diameter of the entrance pupil of the Fourier lens is determined by the size of the SLM and the pixel size of the modulator. To determine the spectral resolution, we propose using the concept of an optimal phase, at which the resolved diffraction maxima coincide with the pixel centers of the radiation detector.
NASA Astrophysics Data System (ADS)
Henry, Christine; Kramb, Victoria; Welter, John T.; Wertz, John N.; Lindgren, Eric A.; Aldrin, John C.; Zainey, David
2018-04-01
Advances in NDE method development are greatly aided by model-guided experimentation. In the case of ultrasonic inspections, models which provide insight into complex mode conversion processes and sound propagation paths are essential for understanding the experimental data and for inverting those data into relevant information. However, models must also be verified using experimental data obtained under well-documented and understood conditions. Ideally, researchers would utilize model simulations and the experimental approach together to efficiently converge on the optimal solution. However, variability in experimental parameters introduces extraneous signals that are difficult to differentiate from the anticipated response. This paper discusses the results of an ultrasonic experiment designed to evaluate the effect of controllable variables on the anticipated signal, and the effect of unaccounted-for experimental variables on the uncertainty in those results. Controlled experimental parameters include the transducer frequency, incidence beam angle and focal depth.
Optimization of Shielding- Collimator Parameters for ING-27 Neutron Generator Using MCNP5
NASA Astrophysics Data System (ADS)
Hegazy, Aya Hamdy; Skoy, V. R.; Hossny, K.
2018-04-01
Neutron generators are now used in various fields. They produce only fast neutrons; a D-D neutron generator produces 2.45 MeV neutrons and a D-T generator produces 14.1 MeV neutrons. In order to optimize the shielding-collimator parameters to achieve a higher neutron flux at the investigated sample (the signal) with a lower neutron and gamma-ray flux at the area of the detectors, design iterations are widely used. This work was applied to the ROMASHA setup of the TANGRA project at FLNP, Joint Institute for Nuclear Research. The studied parameters were: (1) the shielding-collimator material, (2) the distance between the first plate of the shielding-collimator assembly and the center of the neutron beam, and (3) the thickness of the collimator sheets. MCNP5 was used to simulate the ROMASHA setup after it was validated against the experimental results of a one-hour irradiation of a carbon-12 sample to detect its 4.44 MeV characteristic gamma line. The ratio between the signal and the total neutron flux entering each detector was calculated and plotted, leading to the conclusion that the optimum shielding-collimator assembly uses tungsten plates of 5 cm thickness each at a distance of 2.3 cm. The ratio between the signal and the total gamma-ray flux was also calculated and plotted for each detector, leading to the same conclusion except that the optimal distance was 1 cm.
Tcherniavski, Iouri; Kahrizi, Mojtaba
2008-11-20
Using a gradient optimization method with objective functions formulated in terms of a signal-to-noise ratio (SNR) calculated at given values of the prescribed spatial ground resolution, optimization problems for the geometrical parameters of a distributed optical system and a charge-coupled device of a space-based optical-electronic system are solved for sample optical systems consisting of two and three annular subapertures. The modulation transfer function (MTF) of the distributed aperture is expressed in terms of an average MTF taking residual image alignment (IA) and optical path difference (OPD) errors into account. The results show optimal solutions of the optimization problems depending on diverse variable parameters. The information on the magnitudes of the SNR can be used to determine the number of subapertures and their sizes, while the information on the SNR decrease as a function of the IA and OPD errors can be useful in the design of a beam combination control system, to set the necessary requirements on its accuracy on the basis of the permissible deterioration in image quality.
SNDR enhancement in noisy sinusoidal signals by non-linear processing elements
NASA Astrophysics Data System (ADS)
Martorell, Ferran; McDonnell, Mark D.; Abbott, Derek; Rubio, Antonio
2007-06-01
We investigate the possibility of building linear amplifiers capable of enhancing the Signal-to-Noise and Distortion Ratio (SNDR) of sinusoidal input signals using simple non-linear elements. Other works have proven that it is possible to enhance the Signal-to-Noise Ratio (SNR) by using limiters. In this work we study a soft limiter non-linear element with and without hysteresis. We show that the SNDR of sinusoidal signals can be enhanced by 0.94 dB using a wideband soft limiter and up to 9.68 dB using a wideband soft limiter with hysteresis. These results indicate that linear amplifiers could be constructed using non-linear circuits with hysteresis. This paper presents mathematical descriptions for the non-linear elements using statistical parameters. Using these models, the input-output SNDR enhancement is obtained by optimizing the non-linear transfer function parameters to maximize the output SNDR.
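To make the limiter idea concrete, the sketch below passes a noisy sinusoid through a memoryless soft limiter and estimates SNDR before and after by projecting onto the fundamental; everything that is not the fundamental counts as noise plus distortion. The threshold, noise level and Monte Carlo length are illustrative, and the hysteretic element studied in the paper is not modelled.

```python
# Monte Carlo sketch: SNDR of a noisy sinusoid before and after a soft limiter.
import numpy as np

rng = np.random.default_rng(5)
fs, f0, n = 10_000, 100.0, 100_000
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * f0 * t)
x = clean + 0.5 * rng.normal(size=n)             # noisy input

def soft_limit(u, thr=0.6):
    # Memoryless soft limiter (clipping); threshold is an illustrative choice.
    return np.clip(u, -thr, thr)

def sndr_db(y):
    # Project onto the fundamental; everything else counts as noise + distortion.
    ref = np.column_stack([np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)])
    coef, *_ = np.linalg.lstsq(ref, y, rcond=None)
    fund = ref @ coef
    return 10 * np.log10(np.sum(fund**2) / np.sum((y - fund) ** 2))

print("input SNDR  (dB):", round(sndr_db(x), 2))
print("output SNDR (dB):", round(sndr_db(soft_limit(x)), 2))
```

Whether the limiter improves or degrades the SNDR in this sketch depends on the chosen threshold and noise level, which is exactly the trade-off the paper optimizes.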
Nakamura, Manami; Makabe, Takeshi; Tezuka, Hideomi; Miura, Takahiro; Umemura, Takuma; Sugimori, Hiroyuki; Sakata, Motomichi
2013-04-01
The purpose of this study was to optimize scan parameters for evaluation of carotid plaque characteristics by k-space trajectory (radial scan method), using a custom-made carotid plaque phantom. The phantom was composed of simulated sternocleidomastoid muscle and four types of carotid plaque. The effect of chemical shift artifact was compared using T1 weighted images (T1WI) of the phantom obtained with and without fat suppression, and using two types of k-space trajectory (the radial scan method and the Cartesian method). The ratio of signal intensity of simulated sternocleidomastoid muscle to the signal intensity of hematoma, blood (including heparin), lard, and mayonnaise was compared among various repetition times (TR) using T1WI and T2 weighted imaging (T2WI). In terms of chemical shift artifacts, image quality was improved using fat suppression for both the radial scan and Cartesian methods. In terms of signal ratio, the highest values were obtained for the radial scan method with TR of 500 ms for T1WI, and TR of 3000 ms for T2WI. For evaluation of carotid plaque characteristics using the radial scan method, chemical shift artifacts were reduced with fat suppression. Signal ratio was improved by optimizing the TR settings for T1WI and T2WI. These results suggest the potential for using magnetic resonance imaging for detailed evaluation of carotid plaque.
A concept for adaptive performance optimization on commercial transport aircraft
NASA Technical Reports Server (NTRS)
Jackson, Michael R.; Enns, Dale F.
1995-01-01
An adaptive control method is presented for the minimization of drag during flight for transport aircraft. The minimization of drag is achieved by taking advantage of the redundant control capability available in the pitch axis, with the horizontal tail used as the primary surface and symmetric deflection of the ailerons and cruise flaps used as additional controls. The additional control surfaces are excited with sinusoidal signals, while the altitude and velocity loops are closed with guidance and control laws. A model of the throttle response as a function of the additional control surfaces is formulated and the parameters in the model are estimated from the sensor measurements using a least squares estimation method. The estimated model is used to determine the minimum drag positions of the control surfaces. The method is presented for the optimization of one and two additional control surfaces. The adaptive control method is extended to optimize rate of climb with the throttle fixed. Simulations that include realistic disturbances are presented, as well as the results of a Monte Carlo simulation analysis that shows the effects of changing the disturbance environment and the excitation signal parameters.
The Shock Pulse Index and Its Application in the Fault Diagnosis of Rolling Element Bearings
Sun, Peng; Liao, Yuhe; Lin, Jin
2017-01-01
The properties of the time domain parameters of vibration signals have been extensively studied for the fault diagnosis of rolling element bearings (REBs). Parameters like kurtosis and the Envelope Harmonic-to-Noise Ratio are the most widely applied in this field and some important progress has been made. However, since only one-sided information is contained in these parameters, problems still exist in practice when the collected signals have a complicated structure and/or are contaminated by strong background noise. A new parameter, named the Shock Pulse Index (SPI), is proposed in this paper. It integrates the mutual advantages of both the parameters mentioned above and can help effectively identify fault-related impulse components under conditions of interference from strong background noise, unrelated harmonic components and random impulses. The SPI optimizes the parameters of Maximum Correlated Kurtosis Deconvolution (MCKD), which is used to filter the signals under consideration. Finally, the transient information of interest contained in the filtered signal can be highlighted through demodulation with the Teager Energy Operator (TEO). Fault-related impulse components can therefore be extracted accurately. Simulations show the SPI can correctly indicate the fault impulses under the influence of strong background noise, other harmonic components and aperiodic impulses, and experimental analyses verify the effectiveness and correctness of the proposed method. PMID:28282883
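The demodulation step mentioned above is easy to sketch: the discrete Teager Energy Operator, psi[n] = x[n]^2 - x[n-1]*x[n+1], is applied to a synthetic train of decaying bursts in noise, and the spectrum of the energy signal is inspected for the burst repetition rate. The burst parameters and noise level are invented; the SPI and MCKD filtering stages are not reproduced.

```python
# Sketch of Teager Energy Operator (TEO) demodulation on a synthetic fault signal.
import numpy as np

def teager(x):
    # psi[n] = x[n]^2 - x[n-1] * x[n+1]
    return x[1:-1] ** 2 - x[:-2] * x[2:]

fs = 20_000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(6)

# Repetitive exponentially decaying bursts (10 ms fault period) buried in noise.
fault = np.zeros_like(t)
for t0 in np.arange(0.005, 1.0, 0.010):
    idx = (t >= t0) & (t < t0 + 0.003)
    fault[idx] += np.exp(-800 * (t[idx] - t0)) * np.sin(2 * np.pi * 3000 * (t[idx] - t0))
x = fault + 0.3 * rng.normal(size=t.size)

energy = teager(x)
spec = np.abs(np.fft.rfft(energy - energy.mean()))
freqs = np.fft.rfftfreq(energy.size, 1 / fs)
# Strong components are expected at the ~100 Hz burst rate and its harmonics.
print("dominant energy-spectrum frequency (Hz):", freqs[spec[1:].argmax() + 1])
```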
Influence of signal processing strategy in auditory abilities.
Melo, Tatiana Mendes de; Bevilacqua, Maria Cecília; Costa, Orozimbo Alves; Moret, Adriane Lima Mortari
2013-01-01
The signal processing strategy is a parameter that may influence the auditory performance of a cochlear implant, and it is important to optimize this parameter to provide better speech perception, especially in difficult listening situations. The aim was to evaluate individual auditory performance using two different signal processing strategies. Prospective study with 11 prelingually deafened children with open-set speech recognition. A within-subjects design was used to compare performance with standard HiRes and HiRes 120 at three different moments. During test sessions, each subject's performance was evaluated by warble-tone sound-field thresholds and speech perception testing in quiet and in noise. In quiet, children S1, S4, S5 and S7 showed better performance with the HiRes 120 strategy and children S2, S9 and S11 showed better performance with the HiRes strategy. In noise, it was also observed that some children performed better using the HiRes 120 strategy and others with HiRes. Not all children presented the same pattern of response to the different strategies used in this study, which reinforces the need to consider optimization of cochlear implant clinical programming.
NASA Astrophysics Data System (ADS)
Kar, Siddhartha; Chakraborty, Sujoy; Dey, Vidyut; Ghosh, Subrata Kumar
2017-10-01
This paper investigates the application of the Taguchi method with fuzzy logic for multi-objective optimization of roughness parameters in the electro-discharge coating of Al-6351 alloy with a powder-metallurgically compacted SiC/Cu tool. A Taguchi L16 orthogonal array was employed to investigate the roughness parameters by varying tool parameters such as composition and compaction load, and electro-discharge machining parameters such as pulse-on time and peak current. Crucial roughness parameters, namely centre-line average roughness, average maximum height of the profile and mean spacing of local peaks of the profile, were measured on the coated specimens. The signal-to-noise ratios were fuzzified to optimize the roughness parameters through a single comprehensive output measure (COM). The best COM was obtained with lower values of compaction load, pulse-on time and current, and a 30:70 (SiC:Cu) tool composition. Analysis of variance was carried out, and a significant COM model was observed with peak current yielding the highest contribution, followed by pulse-on time, compaction load and composition. The deposited layer was characterised by X-ray diffraction analysis, which confirmed the presence of tool materials on the workpiece surface.
Yudin, V I; Taichenachev, A V; Basalaev, M Yu; Kovalenko, D V
2017-02-06
We theoretically investigate the dynamic regime of coherent population trapping (CPT) in the presence of frequency modulation (FM). We have formulated the criteria for quasi-stationary (adiabatic) and dynamic (non-adiabatic) responses of an atomic system driven by this FM. Using the density matrix formalism for a Λ system, the error signal is exactly calculated and optimized. It is shown that the optimal FM parameters correspond to the dynamic regime of atom-field interaction, which differs significantly from the conventional description of CPT resonances in the frame of the quasi-stationary approach (under small modulation frequency). The obtained theoretical results are in good qualitative agreement with various experiments. We have also found a CPT analogue of the Pound-Drever-Hall regime of frequency stabilization.
Sign epistasis caused by hierarchy within signalling cascades.
Nghe, Philippe; Kogenaru, Manjunatha; Tans, Sander J
2018-04-13
Sign epistasis is a central evolutionary constraint, but its causal factors remain difficult to predict. Here we use the notion of parameterised optima to explain epistasis within a signalling cascade, and test these predictions in Escherichia coli. We show that sign epistasis arises from the benefit of tuning phenotypic parameters of cascade genes with respect to each other, rather than from their complex and incompletely known genetic bases. Specifically, sign epistasis requires only that the optimal phenotypic parameters of one gene depend on the phenotypic parameters of another, independent of other details, such as activating or repressing nature, position within the cascade, intra-genic pleiotropy or genotype. Mutational effects change sign more readily in downstream genes, indicating that optimising downstream genes is more constrained. The findings show that sign epistasis results from the inherent upstream-downstream hierarchy between signalling cascade genes, and can be addressed without exhaustive genotypic mapping.
A simple analytical model for signal amplification by reversible exchange (SABRE) process.
Barskiy, Danila A; Pravdivtsev, Andrey N; Ivanov, Konstantin L; Kovtunov, Kirill V; Koptyug, Igor V
2016-01-07
We demonstrate an analytical model for the description of the signal amplification by reversible exchange (SABRE) process. The model relies on a combined analysis of chemical kinetics and the evolution of the nuclear spin system during the hyperpolarization process. The presented model for the first time provides a rationale for deciding which system parameters (i.e., J-couplings, relaxation rates, reaction rate constants) have to be optimized in order to achieve higher signal enhancement for a substrate of interest in SABRE experiments.
Shen, Changqing; Liu, Fang; Wang, Dong; Zhang, Ao; Kong, Fanrang; Tse, Peter W.
2013-01-01
The condition of locomotive bearings, which are essential components in trains, is crucial to train safety. The Doppler effect significantly distorts acoustic signals during high movement speeds, substantially increasing the difficulty of monitoring locomotive bearings online. In this study, a new Doppler transient model based on the acoustic theory and the Laplace wavelet is presented for the identification of fault-related impact intervals embedded in acoustic signals. An envelope spectrum correlation assessment is conducted between the transient model and the real fault signal in the frequency domain to optimize the model parameters. The proposed method can identify the parameters used for simulated transients (periods in simulated transients) from acoustic signals. Thus, localized bearing faults can be detected successfully based on identified parameters, particularly period intervals. The performance of the proposed method is tested on a simulated signal suffering from the Doppler effect. Besides, the proposed method is used to analyze real acoustic signals of locomotive bearings with inner race and outer race faults, respectively. The results confirm that the periods between the transients, which represent locomotive bearing fault characteristics, can be detected successfully. PMID:24253191
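The model-matching step can be sketched briefly: a periodic train of Laplace-wavelet transients is generated for a grid of candidate impact intervals, and the interval whose envelope spectrum correlates best with that of the measured signal is selected. The resonance frequency, damping ratio, noise level and search grid below are assumptions for illustration and do not reproduce the authors' full parameter optimization.

```python
# Sketch: pick the transient period by maximizing the correlation between the
# envelope spectra of a Laplace-wavelet transient model and the measured signal.
import numpy as np
from scipy.signal import hilbert

fs = 20_000
t = np.arange(0, 1.0, 1 / fs)

def laplace_wavelet_train(period, f_res=3000.0, zeta=0.05):
    # Periodic train of damped sinusoids (Laplace-wavelet-like transients).
    out = np.zeros_like(t)
    for t0 in np.arange(0, t[-1], period):
        tau = t[t >= t0] - t0
        out[t >= t0] += np.exp(-zeta / np.sqrt(1 - zeta**2) * 2 * np.pi * f_res * tau) \
                        * np.sin(2 * np.pi * f_res * tau)
    return out

def envelope_spectrum(x):
    env = np.abs(hilbert(x))
    return np.abs(np.fft.rfft(env - env.mean()))

rng = np.random.default_rng(7)
measured = laplace_wavelet_train(0.012) + 0.5 * rng.normal(size=t.size)  # 12 ms spacing
target = envelope_spectrum(measured)

candidates = np.arange(0.008, 0.020, 0.0005)
corr = [np.corrcoef(envelope_spectrum(laplace_wavelet_train(p)), target)[0, 1]
        for p in candidates]
print("estimated impact interval (s):", candidates[int(np.argmax(corr))])
```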
NASA Astrophysics Data System (ADS)
Zeqiri, F.; Alkan, M.; Kaya, B.; Toros, S.
2018-01-01
In this paper, the effects of cutting parameters on cutting forces and surface roughness are determined using the Taguchi experimental design method. A Taguchi L9 orthogonal array is used to investigate the effects of the machining parameters. Optimal cutting conditions are determined using the signal-to-noise (S/N) ratio, which is calculated from average surface roughness and cutting force. Using the results of this analysis, the effects of the parameters on both average surface roughness and cutting forces are calculated in Minitab 17 using the ANOVA method. The investigated material was Inconel 625, considered in two cases: with heat treatment and without heat treatment. The predicted values and the measured values are very close to each other. Confirmation tests showed that the Taguchi method was very successful in optimizing the machining parameters with respect to surface roughness and cutting forces in the CNC turning process.
Parameter optimization of flux-aided backing-submerged arc welding by using Taguchi method
NASA Astrophysics Data System (ADS)
Pu, Juan; Yu, Shengfu; Li, Yuanyuan
2017-07-01
Flux-aided backing-submerged arc welding has been conducted on D36 steel with a thickness of 20 mm. The effects of processing parameters such as welding current, voltage, welding speed and groove angle on welding quality were investigated by the Taguchi method. The optimal welding parameters were predicted, and the individual importance of each parameter for welding quality was evaluated by examining the signal-to-noise ratio and analysis of variance (ANOVA) results. The order of importance of the welding parameters for the weld bead quality was: welding current > welding speed > groove angle > welding voltage. The weld bead quality increased gradually with increasing welding current and welding speed and with decreasing groove angle. The optimum values of the welding current, welding speed, groove angle and welding voltage were found to be 1050 A, 27 cm/min, 40° and 34 V, respectively.
Systematic parameter estimation in data-rich environments for cell signalling dynamics
Nim, Tri Hieu; Luo, Le; Clément, Marie-Véronique; White, Jacob K.; Tucker-Kellogg, Lisa
2013-01-01
Motivation: Computational models of biological signalling networks, based on ordinary differential equations (ODEs), have generated many insights into cellular dynamics, but the model-building process typically requires estimating rate parameters based on experimentally observed concentrations. New proteomic methods can measure concentrations for all molecular species in a pathway; this creates a new opportunity to decompose the optimization of rate parameters. Results: In contrast with conventional parameter estimation methods that minimize the disagreement between simulated and observed concentrations, the SPEDRE method fits spline curves through observed concentration points, estimates derivatives and then matches the derivatives to the production and consumption of each species. This reformulation of the problem permits an extreme decomposition of the high-dimensional optimization into a product of low-dimensional factors, each factor enforcing the equality of one ODE at one time slice. Coarsely discretized solutions to the factors can be computed systematically. Then the discrete solutions are combined using loopy belief propagation, and refined using local optimization. SPEDRE has unique asymptotic behaviour with runtime polynomial in the number of molecules and timepoints, but exponential in the degree of the biochemical network. SPEDRE performance is comparatively evaluated on a novel model of Akt activation dynamics including redox-mediated inactivation of PTEN (phosphatase and tensin homologue). Availability and implementation: Web service, software and supplementary information are available at www.LtkLab.org/SPEDRE Supplementary information: Supplementary data are available at Bioinformatics online. Contact: LisaTK@nus.edu.sg PMID:23426255
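The derivative-matching idea behind SPEDRE can be shown on a one-species toy problem: a smoothing spline is fitted to noisy concentration measurements, differentiated, and the rate constant is obtained by matching the spline derivative to the assumed ODE right-hand side. The model dX/dt = -k*X, the noise level and the spline settings are illustrative assumptions, not the SPEDRE implementation.

```python
# Toy sketch of derivative matching: spline-fit the observed concentration,
# differentiate it, and match the derivative to the ODE right-hand side.
import numpy as np
from scipy.interpolate import UnivariateSpline

k_true = 0.7
t_obs = np.linspace(0, 5, 15)
rng = np.random.default_rng(8)
x_obs = np.exp(-k_true * t_obs) + 0.01 * rng.normal(size=t_obs.size)

# Smoothing spline through the noisy observations (smoothing level is a guess).
spline = UnivariateSpline(t_obs, x_obs, k=4, s=len(t_obs) * 0.01**2)
t_fine = np.linspace(0.2, 4.8, 100)
x_fit, dx_fit = spline(t_fine), spline.derivative()(t_fine)

# Least-squares match of dx/dt = -k*x  =>  k = -sum(dx*x) / sum(x*x)
k_est = -np.sum(dx_fit * x_fit) / np.sum(x_fit * x_fit)
print("estimated k:", round(k_est, 3), " (true", k_true, ")")
```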
Sung, Wen-Tsai; Chiang, Yen-Chun
2012-12-01
This study examines a wireless sensor network with real-time remote identification using the Android-based HCIOT platform in community healthcare. An improved particle swarm optimization (PSO) method is proposed to efficiently enhance the measurement precision of physiological multi-sensor data fusion in the Internet of Things (IoT) system. The improved PSO (IPSO) includes inertia weight factor design and shrinkage factor adjustment to improve the algorithm's data fusion performance. The Android platform is employed to build multi-physiological signal processing and timely medical care analysis. Wireless sensor network signal transmission and Internet links allow community or family members to access timely medical care network services.
Panigrahy, D; Sahu, P K
2017-03-01
This paper proposes a five-stage based methodology to extract the fetal electrocardiogram (FECG) from the single channel abdominal ECG using differential evolution (DE) algorithm, extended Kalman smoother (EKS) and adaptive neuro fuzzy inference system (ANFIS) framework. The heart rate of the fetus can easily be detected after estimation of the fetal ECG signal. The abdominal ECG signal contains fetal ECG signal, maternal ECG component, and noise. To estimate the fetal ECG signal from the abdominal ECG signal, removal of the noise and the maternal ECG component presented in it is necessary. The pre-processing stage is used to remove the noise from the abdominal ECG signal. The EKS framework is used to estimate the maternal ECG signal from the abdominal ECG signal. The optimized parameters of the maternal ECG components are required to develop the state and measurement equation of the EKS framework. These optimized maternal ECG parameters are selected by the differential evolution algorithm. The relationship between the maternal ECG signal and the available maternal ECG component in the abdominal ECG signal is nonlinear. To estimate the actual maternal ECG component present in the abdominal ECG signal and also to recognize this nonlinear relationship the ANFIS is used. Inputs to the ANFIS framework are the output of EKS and the pre-processed abdominal ECG signal. The fetal ECG signal is computed by subtracting the output of ANFIS from the pre-processed abdominal ECG signal. Non-invasive fetal ECG database and set A of 2013 physionet/computing in cardiology challenge database (PCDB) are used for validation of the proposed methodology. The proposed methodology shows a sensitivity of 94.21%, accuracy of 90.66%, and positive predictive value of 96.05% from the non-invasive fetal ECG database. The proposed methodology also shows a sensitivity of 91.47%, accuracy of 84.89%, and positive predictive value of 92.18% from the set A of PCDB.
NASA Astrophysics Data System (ADS)
Królak, Andrzej; Trzaskoma, Pawel
1996-05-01
The application of wavelet analysis to the estimation of the parameters of the broad-band gravitational-wave signal emitted by a binary system is investigated. A method of instantaneous frequency extraction, first proposed in this context by Innocent and Vinet, is used. The gravitational-wave signal from a binary is examined from the point of view of signal analysis theory, and it is shown that such a signal is characterized by a large time-bandwidth product. This property enables the extraction of the frequency modulation from the wavelet transform of the signal. The wavelet transform of the chirp signal from a binary is calculated analytically. Numerical simulations with the noisy chirp signal are performed: the gravitational-wave signal from a binary is taken in the quadrupole approximation, it is buried in noise corresponding to three different values of the signal-to-noise ratio, and the wavelet method is applied to extract the frequency modulation of the signal. Then, from the frequency modulation, the chirp mass parameter of the binary is estimated. It is found that the chirp mass can be estimated to good accuracy, typically of the order of (20/SNR)%, where SNR is the optimal signal-to-noise ratio. It is also shown that the post-Newtonian effects in the gravitational-wave signal from a binary can be discriminated to a satisfactory accuracy.
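Once the frequency modulation has been extracted, the chirp mass follows from the quadrupole-order evolution law df/dt = (96/5) pi^(8/3) (G*Mc/c^3)^(5/3) f^(11/3). The short sketch below inverts this relation for a synthetic frequency/derivative pair; the numbers are illustrative and the wavelet-based frequency extraction itself is not reproduced.

```python
# Sketch: chirp mass from the instantaneous frequency and its time derivative,
# using the quadrupole-order chirp evolution law. Numbers are synthetic.
import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def chirp_mass(f, dfdt):
    """Chirp mass (kg) from instantaneous frequency f (Hz) and its derivative (Hz/s)."""
    return (dfdt * 5 / 96 * np.pi**(-8 / 3) * f**(-11 / 3)) ** (3 / 5) * c**3 / G

# Synthetic example: a 1.2 solar-mass chirp-mass binary observed at 100 Hz.
Mc_true = 1.2 * M_sun
dfdt = 96 / 5 * np.pi**(8 / 3) * (G * Mc_true / c**3) ** (5 / 3) * 100.0 ** (11 / 3)
print("recovered chirp mass (M_sun):", chirp_mass(100.0, dfdt) / M_sun)
```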
Approximated mutual information training for speech recognition using myoelectric signals.
Guo, Hua J; Chan, A D C
2006-01-01
A new training algorithm called approximated maximum mutual information (AMMI) training is proposed to improve the accuracy of myoelectric speech recognition using hidden Markov models (HMMs). Previous studies have demonstrated that automatic speech recognition can be performed using myoelectric signals from articulatory muscles of the face. Classification of facial myoelectric signals can be performed using HMMs that are trained with the maximum likelihood (ML) algorithm; however, this algorithm maximizes the likelihood of the observations in the training sequence, which is not directly associated with optimal classification accuracy. The AMMI training algorithm attempts to maximize the mutual information, thereby training the HMMs to optimize their parameters for discrimination. Our results show that AMMI training consistently reduces the error rates compared to those obtained with ML training, increasing the accuracy by approximately 3% on average.
Channel modeling, signal processing and coding for perpendicular magnetic recording
NASA Astrophysics Data System (ADS)
Wu, Zheng
With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.
Apparatus and methods for manipulation and optimization of biological systems
NASA Technical Reports Server (NTRS)
Sun, Ren (Inventor); Ho, Chih-Ming (Inventor); Wong, Pak Kin (Inventor); Yu, Fuqu (Inventor)
2012-01-01
The invention provides systems and methods for manipulating, e.g., optimizing and controlling, biological systems, e.g., for eliciting a more desired biological response from a biological sample, such as a tissue, organ, and/or a cell. In one aspect, systems and methods of the invention operate by efficiently searching through a large parametric space of stimuli and system parameters to manipulate, control, and optimize the response of biological samples sustained in the system, e.g., a bioreactor. In alternative aspects, systems include a device for sustaining cells or tissue samples, one or more actuators for stimulating the samples via biochemical, electromagnetic, thermal, mechanical, and/or optical stimulation, and one or more sensors for measuring a biological response signal of the samples resulting from the stimulation. In one aspect, the systems and methods of the invention use at least one optimization algorithm to modify the actuator's control inputs for stimulation, responsive to the sensor's output of response signals. The compositions and methods of the invention can be used, e.g., for systems optimization of any biological manufacturing or experimental system, e.g., bioreactors for proteins (e.g., therapeutic proteins, or polypeptides or peptides for vaccines), small molecules (e.g., antibiotics), polysaccharides, lipids, and the like. Other uses of the apparatus and methods include combination drug therapy (e.g., optimal drug cocktails), directed cell proliferation and differentiation (e.g., neural progenitor cell differentiation in tissue engineering), and discovery of key parameters in complex biological systems.
Eliciting Naturalistic Cortical Responses with a Sensory Prosthesis via Optimized Microstimulation
2016-08-12
... error and correlation as metrics amenable to highly efficient convex optimization. This study concentrates on characterizing the neural responses to both [...] spiking signal. For LFP, distance measures such as the traditional mean-squared error and cross-correlation can be used, whereas distances between spike [...] with parameters that describe their associated temporal dynamics and relations to the observed output. [...]
Reference tissue quantification of DCE-MRI data without a contrast agent calibration
NASA Astrophysics Data System (ADS)
Walker-Samuel, Simon; Leach, Martin O.; Collins, David J.
2007-02-01
The quantification of dynamic contrast-enhanced (DCE) MRI data conventionally requires a conversion from signal intensity to contrast agent concentration by measuring a change in the tissue longitudinal relaxation rate, R1. In this paper, it is shown that the use of a spoiled gradient-echo acquisition sequence (optimized so that signal intensity scales linearly with contrast agent concentration) in conjunction with a reference tissue-derived vascular input function (VIF), avoids the need for the conversion to Gd-DTPA concentration. This study evaluates how to optimize such sequences and which dynamic time-series parameters are most suitable for this type of analysis. It is shown that signal difference and relative enhancement provide useful alternatives when full contrast agent quantification cannot be achieved, but that pharmacokinetic parameters derived from both contain sources of error (such as those caused by differences between reference tissue and region of interest proton density and native T1 values). It is shown in a rectal cancer study that these sources of uncertainty are smaller when using signal difference, compared with relative enhancement (15 ± 4% compared with 33 ± 4%). Both of these uncertainties are of the order of those associated with the conversion to Gd-DTPA concentration, according to literature estimates.
How quantitative measures unravel design principles in multi-stage phosphorylation cascades.
Frey, Simone; Millat, Thomas; Hohmann, Stefan; Wolkenhauer, Olaf
2008-09-07
We investigate design principles of linear multi-stage phosphorylation cascades by using quantitative measures for signaling time, signal duration and signal amplitude. We compare alternative pathway structures by varying the number of phosphorylations and the length of the cascade. We show that a model for a weakly activated pathway does not reflect the biological context well, unless it is restricted to certain parameter combinations. Focusing therefore on a more general model, we compare alternative structures with respect to a multivariate optimization criterion. We test the hypothesis that the structure of a linear multi-stage phosphorylation cascade is the result of an optimization process aiming for a fast response, defined by the minimum of the product of signaling time and signal duration. It is then shown that certain pathway structures minimize this criterion. Several popular models of MAPK cascades form the basis of our study. These models represent different levels of approximation, which we compare and discuss with respect to the quantitative measures.
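The integral-based measures used in such comparisons are compact enough to state directly. The sketch below computes signaling time, signal duration and signal amplitude for a synthetic activation profile; the definitions follow the common Heinrich-type formulation, and the test signal and the product criterion shown at the end are illustrative.

```python
# Sketch of the quantitative measures (signaling time, duration, amplitude)
# for a transient activation profile; test signal is synthetic.
import numpy as np

t = np.linspace(0, 100, 2000)
x = t * np.exp(-t / 10.0)                      # a transient activation profile

dt = t[1] - t[0]
T0 = np.sum(x) * dt                            # total integrated signal
T1 = np.sum(t * x) * dt
T2 = np.sum(t**2 * x) * dt

signaling_time = T1 / T0                       # "center of mass" of the signal
duration = np.sqrt(T2 / T0 - signaling_time**2)
amplitude = T0 / (2 * duration)

print(f"signaling time {signaling_time:.2f}, duration {duration:.2f}, "
      f"amplitude {amplitude:.2f}")
print("fast-response criterion (time * duration):",
      round(signaling_time * duration, 2))
```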
Parameter Analysis of the VPIN (Volume synchronized Probability of Informed Trading) Metric
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Jung Heon; Wu, Kesheng; Simon, Horst D.
2014-03-01
VPIN (Volume synchronized Probability of Informed trading) is a leading indicator of liquidity-induced volatility. It is best known for having produced a signal hours before the Flash Crash of 2010. On that day, the market saw the biggest one-day point decline in the Dow Jones Industrial Average, which culminated in roughly $1 trillion of market value disappearing, only for those losses to be recovered twenty minutes later (Lauricella 2010). The computation of VPIN requires the user to set up a handful of free parameters. The values of these parameters significantly affect the effectiveness of VPIN as measured by the false positive rate (FPR). An earlier publication reported that a brute-force search of simple parameter combinations yielded a number of parameter combinations with an FPR of 7%. This work is a systematic attempt to find an optimal parameter set using an optimization package, NOMAD (Nonlinear Optimization by Mesh Adaptive Direct Search) by Audet, Le Digabel, and Tribes (2009) and Le Digabel (2011). We have implemented a number of techniques to reduce the computation time with NOMAD. Tests show that we can reduce the FPR to only 2%. To better understand the parameter choices, we have conducted a series of sensitivity analyses via uncertainty quantification on the parameter spaces using UQTK (Uncertainty Quantification Toolkit). Results have shown the dominance of two parameters in the computation of FPR. Using the outputs from the NOMAD optimization and the sensitivity analysis, we recommend a range of values for each of the free parameters that performs well on a large set of futures trading records.
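For orientation, the sketch below computes a simplified VPIN estimate on synthetic trade bars: buy and sell volumes are assigned by bulk volume classification, grouped into approximately equal-volume buckets, and the order-flow imbalance is averaged over a rolling window of buckets. The bucket size and window length are exactly the kind of free parameters the report tunes; the values, the data and the simplified bucketing (bars are not split across buckets) are illustrative assumptions.

```python
# Simplified sketch of a VPIN-style calculation on synthetic trade bars.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
n_bars = 5000
price_change = rng.normal(scale=0.1, size=n_bars)        # per-bar price changes
volume = rng.integers(50, 500, size=n_bars).astype(float)

# Bulk volume classification: split each bar's volume into buy/sell fractions.
buy_frac = norm.cdf(price_change / price_change.std())
v_buy, v_sell = volume * buy_frac, volume * (1 - buy_frac)

bucket_size = 10_000.0                                    # volume per bucket (free parameter)
cum_vol = np.cumsum(volume)
bucket_id = (cum_vol // bucket_size).astype(int)

# Order-flow imbalance per completed bucket.
imbalance = np.array([abs(v_buy[bucket_id == b].sum() - v_sell[bucket_id == b].sum())
                      for b in range(bucket_id.max())])

window = 50                                               # buckets per estimate (free parameter)
vpin = np.convolve(imbalance, np.ones(window) / window, mode="valid") / bucket_size
print("latest VPIN estimate:", round(vpin[-1], 3))
```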
Zhang, Mei; Zhang, Yong; Ren, Siqi; Zhang, Zunjian; Wang, Yongren; Song, Rui
2018-06-06
A method for monitoring l-asparagine (ASN) depletion in patients' serum using reversed-phase high-performance liquid chromatography with precolumn o-phthalaldehyde and ethanethiol (ET) derivatization is described. In order to improve the signal and stability of analytes, several important factors including precipitant reagent, derivatization conditions and detection wavelengths were optimized. The recovery of the analytes in biological matrix was the highest when 4% sulfosalicylic acid (1:1, v/v) was used as a precipitant reagent. Optimal fluorescence detection parameters were determined as λex = 340 nm and λem = 444 nm for maximal signal. The signal of analytes was the highest when the reagent ET and borate buffer of pH 9.9 were used in the derivatization solution. And the corresponding derivative products were stable up to 19 h. The validated method had been successfully applied to monitor ASN depletion and l-aspartic acid, l-glutamine, l-glutamic acid levels in pediatric patients during l-asparaginase therapy.
NASA Astrophysics Data System (ADS)
Zhang, George Z.; Myers, Kyle J.; Park, Subok
2013-03-01
Digital breast tomosynthesis (DBT) has shown promise for improving the detection of breast cancer, but it has not yet been fully optimized due to a large space of system parameters to explore. A task-based statistical approach [1] is a rigorous method for evaluating and optimizing this promising imaging technique with the use of optimal observers such as the Hotelling observer (HO). However, the high data dimensionality found in DBT has been the bottleneck for the use of a task-based approach in DBT evaluation. To reduce data dimensionality while extracting salient information for performing a given task, efficient channels have to be used for the HO. In the past few years, 2D Laguerre-Gauss (LG) channels, which are a complete basis for stationary backgrounds and rotationally symmetric signals, have been utilized for DBT evaluation [2, 3]. But since background and signal statistics from DBT data are neither stationary nor rotationally symmetric, LG channels may not be efficient in providing reliable performance trends as a function of system parameters. Recently, partial least squares (PLS) has been shown to generate efficient channels for the Hotelling observer in detection tasks involving random backgrounds and signals [4]. In this study, we investigate the use of PLS as a method for extracting salient information from DBT in order to better evaluate such systems.
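The channelized Hotelling observer underlying this comparison can be sketched compactly: images are projected onto a small channel matrix, the Hotelling template is built from the channelized means and covariance, and detectability is summarized by an SNR. In the sketch below the images are white-noise backgrounds with a Gaussian signal, and the channels are a few radial Gaussian profiles standing in for LG- or PLS-derived channels; all of this is illustrative rather than the study's DBT data.

```python
# Sketch of a channelized Hotelling observer on synthetic signal-absent/present images.
import numpy as np

rng = np.random.default_rng(10)
npix, n_imgs = 32, 400
yy, xx = np.mgrid[:npix, :npix]
signal = np.exp(-((xx - npix / 2) ** 2 + (yy - npix / 2) ** 2) / (2 * 2.0**2)).ravel()

def make_images(with_signal):
    backgrounds = rng.normal(size=(n_imgs, npix * npix))      # stand-in backgrounds
    return backgrounds + (0.8 * signal if with_signal else 0.0)

g0, g1 = make_images(False), make_images(True)

# Stand-in channels: radially symmetric Gaussian profiles of increasing width
# (PLS- or LG-derived channels would replace this matrix in a real study).
widths = [1.5, 3.0, 6.0, 12.0]
T = np.column_stack([np.exp(-((xx - npix / 2) ** 2 + (yy - npix / 2) ** 2) / (2 * w**2)).ravel()
                     for w in widths])

v0, v1 = g0 @ T, g1 @ T                                       # channelized data
S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))                       # pooled channel covariance
w_hot = np.linalg.solve(S, v1.mean(0) - v0.mean(0))           # Hotelling template
t0, t1 = v0 @ w_hot, v1 @ w_hot
snr = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t0.var() + t1.var()))
print("channelized Hotelling SNR:", round(snr, 2))
```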
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Shi, Xiaodong; Udpa, Lalita; Deng, Yiming
2018-05-01
Magnetic Barkhausen noise (MBN) is measured in low-carbon steels, and the relationship between carbon content and a parameter extracted from the MBN signal has been investigated. The parameter is extracted experimentally by fitting the original profiles with two Gaussian curves. The gap between the two peaks (ΔG) of the fitted Gaussian curves shows a good linear relationship with the carbon content of the samples in the experiment. The result has been validated with simulations based on the Monte Carlo method. To ensure measurement sensitivity, the advanced multi-objective optimization algorithm Non-dominated Sorting Genetic Algorithm III (NSGA-III) has been used to optimize the magnetic core of the sensor.
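The two-Gaussian fit and the ΔG feature are straightforward to sketch: the snippet below fits a synthetic MBN-like envelope with two Gaussian components and reports the gap between the fitted peak positions. The profile shape, field axis and initial guesses are illustrative assumptions.

```python
# Sketch: fit an MBN envelope with two Gaussians and report the peak gap (ΔG).
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(h, a1, m1, s1, a2, m2, s2):
    return (a1 * np.exp(-0.5 * ((h - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((h - m2) / s2) ** 2))

h = np.linspace(-1, 1, 400)                      # normalized field axis
rng = np.random.default_rng(11)
profile = two_gauss(h, 1.0, -0.15, 0.12, 0.7, 0.25, 0.18) + 0.02 * rng.normal(size=h.size)

p0 = [1, -0.2, 0.1, 0.7, 0.2, 0.2]               # initial guesses
popt, _ = curve_fit(two_gauss, h, profile, p0=p0)
delta_g = abs(popt[4] - popt[1])                 # gap between the two fitted peaks
print("peak gap ΔG:", round(delta_g, 3))
```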
Apparatus and Methods for Manipulation and Optimization of Biological Systems
NASA Technical Reports Server (NTRS)
Sun, Ren (Inventor); Ho, Chih-Ming (Inventor); Wong, Pak Kin (Inventor); Yu, Fuqu (Inventor)
2014-01-01
The invention provides systems and methods for manipulating biological systems, for example to elicit a more desired biological response from a biological sample, such as a tissue, organ, and/or a cell. In one aspect, the invention operates by efficiently searching through a large parametric space of stimuli and system parameters to manipulate, control, and optimize the response of biological samples sustained in the system. In one aspect, the systems and methods of the invention use at least one optimization algorithm to modify the actuator's control inputs for stimulation, responsive to the sensor's output of response signals. The invention can be used, e.g., to optimize any biological system, such as bioreactors for proteins, small molecules, polysaccharides, lipids, and the like. Another use of the apparatus and methods is the discovery of key parameters in complex biological systems.
Kimura, Atsuomi; Narazaki, Michiko; Kanazawa, Yoko; Fujiwara, Hideaki
2004-07-01
The tissue distribution of perfluorooctanoic acid (PFOA), which is known to show unique biological responses, has been visualized in female mice by (19)F magnetic resonance imaging (MRI) incorporating recent advances in microimaging technique. The chemical shift selected fast spin-echo method was applied to acquire in vivo (19)F MR images of PFOA. The in vivo T(1) and T(2) relaxation times of PFOA were proven to be extremely short, at 140 (+/- 20) ms and 6.3 (+/- 2.2) ms, respectively. To acquire the in vivo (19)F MR images of PFOA, it was necessary to optimize the parameters of signal selection and echo train length. The chemical shift selection was effectively performed by using the (19)F NMR signal of the CF(3) group of PFOA without signal overlapping, because the chemical shift difference between the CF(3) and neighboring signals reaches 14 kHz. The optimal echo train length for obtaining (19)F images efficiently was determined so that the maximum echo time (TE) value in the fast spin-echo sequence was comparable to the in vivo T(2) value. By optimizing these parameters, the in vivo (19)F MR image of PFOA could be obtained efficiently in 12 minutes. As a result, the time course of the accumulation of PFOA in the mouse liver was clearly followed in the (19)F MR images. Thus, it was concluded that (19)F MRI is an effective method for future pharmacological and toxicological studies of perfluorocarboxylic acids.
The DCU: the detector control unit for SPICA-SAFARI
NASA Astrophysics Data System (ADS)
Clénet, Antoine; Ravera, Laurent; Bertrand, Bernard; den Hartog, Roland H.; Jackson, Brian D.; van Leeuven, Bert-Joost; van Loon, Dennis; Parot, Yann; Pointecouteau, Etienne; Sournac, Anthony
2014-08-01
IRAP is developing the warm electronics, the so-called "Detector Control Unit" (DCU), in charge of the readout of SPICA-SAFARI's TES-type detectors. The architecture of the electronics used to read out the 3,500 sensors of the 3 focal plane arrays is based on the frequency domain multiplexing (FDM) technique. In each of the 24 detection channels the data of up to 160 pixels are multiplexed in the frequency domain between 1 and 3.3 MHz. The DCU provides the AC signals to voltage-bias the detectors; it demodulates the detector data, which are read out in the cold by a SQUID; and it computes a feedback signal for the SQUID to linearize the detection chain in order to optimize its dynamic range. The feedback is computed with a specific technique, so-called baseband feedback (BBFB), which ensures that the loop is stable even with long propagation and processing delays (i.e. several µs) and with fast signals (i.e. frequency carriers at 3.3 MHz). This digital signal processing is complex and has to be done at the same time for the 3,500 pixels. It thus requires an optimisation of the power consumption. We took advantage of the relatively reduced science signal bandwidth (i.e. 20 - 40 Hz) to decouple the signal sampling frequency (10 MHz) from the data processing rate. Thanks to this method we managed to reduce the total number of operations per second, and thus the power consumption of the digital processing circuit, by a factor of 10. Moreover we used time multiplexing techniques to share the resources of the circuit (e.g. a single BBFB module processes 32 pixels). The current version of the firmware is under validation in a Xilinx Virtex 5 FPGA; the final version will be developed in a space-qualified digital ASIC. Beyond the firmware architecture, the optimization of the instrument concerns the characterization routines and the definition of the optimal parameters. Indeed, the operation of the detection and readout chains requires the proper definition of more than 17,500 parameters (about 5 parameters per pixel). It is thus mandatory to work out an automatic procedure to set up these optimal values. We defined a fast algorithm which characterizes the phase correction to be applied by the BBFB firmware and the pixel resonance frequencies. We also defined a technique to set the AC-carrier initial phases in such a way that the amplitude of their sum is minimized (for a better use of the DAC dynamic range).
Inverse analysis of water profile in starch by non-contact photopyroelectric method
NASA Astrophysics Data System (ADS)
Frandas, A.; Duvaut, T.; Paris, D.
2000-07-01
The photopyroelectric (PPE) method in a non-contact configuration was proposed to study water migration in starch sheets used for biodegradable packaging. A 1-D theoretical model was developed, allowing the study of samples having a water profile characterized by an arbitrary continuous function. An experimental setup was designed for this purpose, which included the choice of excitation source, detection of signals, signal and data processing, and cells for conditioning the samples. We report here the development of an inversion procedure allowing the determination of the parameters that influence the PPE signal. This procedure led to the optimization of experimental conditions in order to identify the parameters related to the water profile in the sample, and to monitor the dynamics of the process.
Optimization of an optically implemented on-board FDMA demultiplexer
NASA Technical Reports Server (NTRS)
Fargnoli, J.; Riddle, L.
1991-01-01
Performance of a 30 GHz frequency division multiple access (FDMA) uplink to a processing satellite is modelled for the case where the onboard demultiplexer is implemented optically. Included in the performance model are the effects of adjacent channel interference, intersymbol interference, and spurious signals associated with the optical implementation. Demultiplexer parameters are optimized to provide the minimum bit error probability at a given bandwidth efficiency when filtered QPSK modulation is employed.
Automation of extrusion of porous cable products based on a digital controller
NASA Astrophysics Data System (ADS)
Chostkovskii, B. K.; Mitroshin, V. N.
2017-07-01
This paper presents a new approach to designing an automated system for monitoring and controlling the process of applying porous insulation material on a conductive cable core, based on structurally and parametrically optimized digital controllers of arbitrary order instead of typical PID controllers calculated using known methods. The digital controller is clocked by signals from the clock-length sensor of a measuring wheel instead of a timer signal, which provides robustness of the system with respect to the changing insulation speed. Digital controller parameters are tuned to meet the operating parameters of the manufactured cable using a simulation model of stochastic extrusion, and the tuning objective is minimized by moving a regular simplex in the parameter space of the tuned controller.
NASA Astrophysics Data System (ADS)
Nkuissi Tchognia, Joël Hervé; Hartiti, Bouchaib; Ridah, Abderraouf; Ndjaka, Jean-Marie; Thevenin, Philippe
2016-07-01
The present research deals with the optimal deposition parameter configuration for the synthesis of Cu2ZnSnS4 (CZTS) thin films using the sol-gel method combined with spin coating on ordinary glass substrates without sulfurization. The Taguchi design with an L9 (3^4) orthogonal array, a signal-to-noise (S/N) ratio and an analysis of variance (ANOVA) are used to optimize the performance characteristic (optical band gap) of the CZTS thin films. Four deposition parameters, called factors, were chosen: the annealing temperature, the annealing time, and the Cu/(Zn + Sn) and Zn/Sn ratios. To conduct the tests using the Taguchi method, three levels were chosen for each factor. The effects of the deposition parameters on structural and optical properties are studied. The most significant factors of the deposition process for the optical properties of the as-prepared films are also determined. Applying the Taguchi method showed that the significant parameters are the Zn/Sn ratio and the annealing temperature.
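For orientation, a small sketch of the Taguchi signal-to-noise computation on an L9 layout; the larger-the-better form is shown as one common choice, and the array layout and response values are placeholders rather than the paper's data.

```python
import numpy as np

def sn_larger_the_better(y):
    """Taguchi larger-the-better signal-to-noise ratio for replicate responses y."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Illustrative L9 layout: 9 runs, 4 factors at 3 levels, replicate responses per run.
levels = np.array([  # columns: factor A, B, C, D (levels 0..2) for each of the 9 runs
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
responses = np.random.default_rng(1).uniform(1.0, 2.0, size=(9, 3))  # placeholder data

sn = np.array([sn_larger_the_better(r) for r in responses])
# Mean S/N per level of each factor: the level with the largest mean is preferred.
for f in range(levels.shape[1]):
    means = [sn[levels[:, f] == lvl].mean() for lvl in range(3)]
    print(f"factor {f}: mean S/N per level = {np.round(means, 2)}")
```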
Solid state light engines for bioanalytical instruments and biomedical devices
NASA Astrophysics Data System (ADS)
Jaffe, Claudia B.; Jaffe, Steven M.
2010-02-01
Lighting subsystems to drive 21st century bioanalysis and biomedical diagnostics face stringent requirements. Industry-wide demands for speed, accuracy and portability mean illumination must be intense as well as spectrally pure, switchable, stable, durable and inexpensive. Ideally a common lighting solution could service these needs for numerous research and clinical applications. While this is a noble objective, the current technologies of arc lamps, lasers, LEDs and, most recently, light pipes have intrinsic spectral and angular traits that make a common solution untenable. Clearly a hybrid solution is required to service the varied needs of the life sciences. Any solution begins with a critical understanding of the instrument architecture and the specifications for illumination regarding power, illumination area, illumination and emission wavelengths and numerical aperture. Optimizing signal-to-noise ratio requires careful tuning of these parameters within the additional constraints of instrument footprint and cost. Often the illumination design process is confined to maximizing signal-to-noise ratio without the ability to adjust any of the above parameters. A hybrid solution leverages the best of the existing lighting technologies. This paper will review the design process for this highly constrained, but typical, optical optimization scenario for numerous bioanalytical instruments and biomedical devices.
Li, Dongsheng; Yang, Wei; Zhang, Wenyao
2017-05-01
Stress corrosion is the major failure type of bridge cable damage. The acoustic emission (AE) technique was applied to monitor the stress corrosion process of steel wires used in bridge cable structures. The damage evolution of stress corrosion in bridge cables was obtained according to the AE characteristic parameter figure. A particle swarm optimization cluster method was developed to determine the relationship between the AE signal and stress corrosion mechanisms. Results indicate that the main AE sources of stress corrosion in bridge cables included four types: passive film breakdown and detachment of the corrosion product, crack initiation, crack extension, and cable fracture. By analyzing different types of clustering data, the mean value of each damage pattern's AE characteristic parameters was determined. Different corrosion damage source AE waveforms and the peak frequency were extracted. AE particle swarm optimization cluster analysis based on principal component analysis was also proposed. This method can completely distinguish the four types of damage sources and simplifies the determination of the evolution process of corrosion damage and broken wire signals. Copyright © 2017. Published by Elsevier B.V.
At what wavelengths should we search for signals from extraterrestrial intelligence?
Townes, C. H.
1983-01-01
It has often been concluded that searches for extraterrestrial intelligence (SETI) should concentrate on attempts to receive signals in the microwave region, the argument being given that communication can occur there at minimum broadcasted power. Such a conclusion is shown to result only under a restricted set of assumptions. If generalized types of detection are considered—in particular, photon detection rather than linear detection alone—and if advantage is taken of the directivity of telescopes at short wavelengths, then somewhat less power is required for communication at infrared wavelengths than in the microwave region. Furthermore, a variety of parameters other than power alone may be chosen for optimization by an extraterrestrial civilization. Hence, while partially satisfying arguments may be given about optimal wavelengths for a search for signals from extraterrestrial intelligence, considerable uncertainty must remain. PMID:16593279
Optimization of wireless Bluetooth sensor systems.
Lonnblad, J; Castano, J; Ekstrom, M; Linden, M; Backlund, Y
2004-01-01
Within this study, three different Bluetooth sensor systems, replacing cables for transmission of biomedical sensor data, have been designed and evaluated. The three sensor architectures are built on 1-, 2- and 3-chip solutions, and depending on the monitoring situation and signal character, different solutions are optimal. Essential parameters for all systems have been low physical weight and small size, resistance to interference, and interoperability with other technologies such as global or local networks, PCs and mobile phones. Two different biomedical input signals, ECG and PPG (photoplethysmography), have been used to evaluate the three solutions. The study shows that it is possible to continuously transmit an analogue signal. At low sampling rates and for slowly varying parameters, such as monitoring the heart rate with PPG, the 1-chip solution is the most suitable, offering low power consumption and thus a longer battery lifetime or a smaller battery, minimizing the weight of the sensor system. On the other hand, when a higher sampling rate is required, as for an ECG, the 3-chip architecture, with an FPGA or micro-controller, offers the best solution and performance. Our conclusion is that Bluetooth might be useful in replacing cables of medical monitoring systems.
Artifacts Quantification of Metal Implants in MRI
NASA Astrophysics Data System (ADS)
Vrachnis, I. N.; Vlachopoulos, G. F.; Maris, T. G.; Costaridou, L. I.
2017-11-01
The presence of materials with different magnetic properties, such as metal implants, causes local distortion of the magnetic field, resulting in signal voids and pile-ups, i.e. susceptibility artifacts, in MRI. Quantitative and unbiased measurement of the artifact is a prerequisite for optimization of acquisition parameters. In this study an image-gradient-based segmentation method is proposed for susceptibility artifact quantification. The method captures abrupt signal alterations by calculation of the image gradient. The artifact is then quantified in terms of its extent, expressed as an image area percentage, using an automated cross-entropy thresholding method. The proposed method for artifact quantification was tested in phantoms containing two orthopedic implants with significantly different magnetic permeabilities. The method was compared against a method proposed in the literature, considered as a reference, demonstrating moderate to good correlation (Spearman's rho = 0.62 and 0.802 in the case of titanium and stainless steel implants, respectively). The automated character of the proposed quantification method seems promising towards MRI acquisition parameter optimization.
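A rough sketch of the gradient-plus-threshold idea, assuming scikit-image's Li (minimum cross-entropy) threshold as a stand-in for the automated cross-entropy thresholding step; the synthetic phantom is purely illustrative.

```python
import numpy as np
from skimage.filters import threshold_li  # Li's minimum cross-entropy threshold

def artifact_area_percentage(image):
    """Quantify susceptibility-artifact extent as the percentage of image area
    whose gradient magnitude exceeds a cross-entropy-derived threshold."""
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    thresh = threshold_li(grad_mag)
    mask = grad_mag > thresh
    return 100.0 * mask.sum() / mask.size

# Illustrative use on a synthetic slice with a sharp signal void.
img = np.ones((128, 128))
img[48:80, 48:80] = 0.0   # crude stand-in for a signal void near an implant
print(f"artifact extent: {artifact_area_percentage(img):.1f}% of image area")
```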
Popoola, Segun I; Atayero, Aderemi A; Faruk, Nasir
2018-02-01
The behaviour of radio wave signals in a wireless channel depends on the local terrain profile of the propagation environment. In view of this, the Received Signal Strength (RSS) of transmitted signals is measured at different points in space for radio network planning and optimization. However, these important data are often not publicly available for wireless channel characterization and propagation model development. In this data article, RSS data of a commercial base station operating at 900 and 1800 MHz were measured along three different routes of the Lagos-Badagry Highway, Nigeria. In addition, local terrain profile data of the study area (terrain elevation, clutter height, altitude, and the distance of the mobile station from the base station) are extracted from a Digital Terrain Map (DTM) to account for the unique environmental features. Statistical analyses and probability distributions of the RSS data are presented in tables and graphs. Furthermore, the degrees of correlation (and the corresponding significance) between the RSS and the local terrain parameters were computed and analyzed for proper interpretation. The data provided in this article will help radio network engineers to: predict signal path loss; estimate radio coverage; efficiently reuse limited frequencies; avoid interference; optimize handover; and adjust transmitted power levels.
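As an illustration of the correlation analysis described (not the published data), a short sketch computing Spearman rank correlations between an RSS series and terrain parameters with scipy; the arrays are synthetic placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 200  # illustrative number of measurement points along a drive route

# Placeholder arrays standing in for the published measurements.
distance_km = rng.uniform(0.1, 10.0, n)
elevation_m = rng.uniform(5.0, 60.0, n)
clutter_m   = rng.uniform(0.0, 15.0, n)
rss_dbm     = -60.0 - 25.0 * np.log10(distance_km) + rng.normal(0.0, 4.0, n)

for name, terrain in [("distance", distance_km),
                      ("elevation", elevation_m),
                      ("clutter height", clutter_m)]:
    rho, p = spearmanr(rss_dbm, terrain)
    print(f"RSS vs {name}: Spearman rho = {rho:+.2f} (p = {p:.3g})")
```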
A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.
Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan
2017-06-22
Maximum likelihood estimation (MLE) has been researched for some acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived and operated based on the sampling data, which results in a large computational burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals which processes the coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency is used to derive an MLE discriminator function. The optimal value of the cost function is searched iteratively by an efficient Levenberg-Marquardt (LM) method. Its performance, including the Cramér-Rao bound (CRB), dynamic characteristics and computational burden, is analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations in both pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop which combines the proposed method and the conventional method is designed to achieve optimal performance in both weak and strong signal circumstances.
Application of multi response optimization with grey relational analysis and fuzzy logic method
NASA Astrophysics Data System (ADS)
Winarni, Sri; Wahyu Indratno, Sapto
2018-01-01
Multi-response optimization is an optimization process that considers multiple responses simultaneously. The purpose of this research is to find the optimum point in a multi-response optimization process using grey relational analysis and the fuzzy logic method. The optimum point is determined from the Fuzzy-GRG (Grey Relational Grade) variable, which is obtained by converting the signal-to-noise ratios of the responses involved. The case study used in this research is the optimization of electrical process parameters in electrical discharge machining. It was found that the treatment combination resulting in optimum MRR and SR was a gap voltage of 70 V, a peak current of 9 A and a duty factor of 0.8.
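A minimal sketch of how per-response signal-to-noise ratios can be folded into a grey relational grade; the fuzzy step of the paper is omitted, and the S/N values and distinguishing coefficient ζ = 0.5 are illustrative.

```python
import numpy as np

def grey_relational_grade(sn, zeta=0.5):
    """Grey relational grade from a (runs x responses) matrix of S/N ratios."""
    sn = np.asarray(sn, dtype=float)
    # Normalise each response column to [0, 1] (larger-the-better on S/N).
    norm = (sn - sn.min(axis=0)) / (sn.max(axis=0) - sn.min(axis=0))
    delta = 1.0 - norm                     # deviation from the ideal sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)              # equal-weight grade per run

# Illustrative S/N ratios for two responses (e.g. MRR and SR) over 9 runs.
sn = np.random.default_rng(3).uniform(-10.0, 10.0, size=(9, 2))
grg = grey_relational_grade(sn)
print("best run:", int(np.argmax(grg)), "with GRG =", round(float(grg.max()), 3))
```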
Using Differential Evolution to Optimize Learning from Signals and Enhance Network Security
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmer, Paul K; Temple, Michael A; Buckner, Mark A
2011-01-01
Computer and communication network attacks are commonly orchestrated through Wireless Access Points (WAPs). This paper summarizes proof-of-concept research activity aimed at developing a physical layer Radio Frequency (RF) air monitoring capability to limit unauthorized WAP access and improve network security. This is done using Differential Evolution (DE) to optimize the performance of a Learning from Signals (LFS) classifier implemented with RF Distinct Native Attribute (RF-DNA) fingerprints. Performance of the resultant DE-optimized LFS classifier is demonstrated using 802.11a WiFi devices under the most challenging conditions of intra-manufacturer classification, i.e., using emissions of like-model devices that only differ in serial number. Using identical classifier input features, performance of the DE-optimized LFS classifier is assessed relative to a Multiple Discriminant Analysis / Maximum Likelihood (MDA/ML) classifier that has been used for previous demonstrations. The comparative assessment is made using both Time Domain (TD) and Spectral Domain (SD) fingerprint features. For all combinations of classifier type, feature type, and signal-to-noise ratio considered, results show that the DE-optimized LFS classifier with TD features is superior and provides up to 20% improvement in classification accuracy with proper selection of DE parameters.
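A hedged sketch of DE-based classifier tuning using scipy's differential_evolution; an RBF-SVM on synthetic features stands in for the LFS classifier and RF-DNA fingerprints of the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic features standing in for RF fingerprints of like-model devices.
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

def neg_accuracy(params):
    """Objective for DE: negative cross-validated accuracy of an RBF-SVM."""
    log_c, log_gamma = params
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
    return -cross_val_score(clf, X, y, cv=3).mean()

result = differential_evolution(neg_accuracy,
                                bounds=[(-2, 3), (-4, 1)],   # log10(C), log10(gamma)
                                maxiter=20, popsize=10, seed=0)
print("best accuracy:", round(-result.fun, 3), "at params:", result.x)
```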
Exploiting Bounded Signal Flow for Graph Orientation Based on Cause-Effect Pairs
NASA Astrophysics Data System (ADS)
Dorn, Britta; Hüffner, Falk; Krüger, Dominikus; Niedermeier, Rolf; Uhlmann, Johannes
We consider the following problem: Given an undirected network and a set of sender-receiver pairs, direct all edges such that the maximum number of "signal flows" defined by the pairs can be routed respecting edge directions. This problem has applications in communication networks and in understanding protein interaction based cell regulation mechanisms. Since this problem is NP-hard, research so far concentrated on polynomial-time approximation algorithms and tractable special cases. We take the viewpoint of parameterized algorithmics and examine several parameters related to the maximum signal flow over vertices or edges. We provide several fixed-parameter tractability results, and in one case a sharp complexity dichotomy between a linear-time solvable case and a slightly more general NP-hard case. We examine the value of these parameters for several real-world network instances. For many relevant cases, the NP-hard problem can be solved to optimality. In this way, parameterized analysis yields both deeper insight into the computational complexity and practical solving strategies.
Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders.
Subasi, Abdulhamit
2013-06-01
Support vector machine (SVM) is an extensively used machine learning method with many biomedical signal classification applications. In this study, a novel PSO-SVM model is proposed that hybridizes particle swarm optimization (PSO) and the SVM to improve EMG signal classification accuracy. This optimization mechanism involves kernel parameter setting in the SVM training procedure, which significantly influences the classification accuracy. The experiments were conducted on EMG signals to be classified as normal, neurogenic or myopathic. In the proposed method the EMG signals were decomposed into frequency sub-bands using the discrete wavelet transform (DWT) and a set of statistical features was extracted from these sub-bands to represent the distribution of wavelet coefficients. The obtained results clearly validate the superiority of the SVM method compared to conventional machine learning methods, and suggest that further significant enhancements in terms of classification accuracy can be achieved by the proposed PSO-SVM classification system. The PSO-SVM yielded an overall accuracy of 97.41% on 1200 EMG signals selected from 27 subject records, against 96.75%, 95.17% and 94.08% for the SVM, the k-NN and the RBF classifiers, respectively. PSO-SVM is developed as an efficient tool so that various SVMs can be used conveniently as the core of PSO-SVM for diagnosis of neuromuscular disorders. Copyright © 2013 Elsevier Ltd. All rights reserved.
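A simplified sketch of the DWT-feature-plus-SVM pipeline, assuming the PyWavelets (pywt) package; a random search over (C, gamma) stands in for the PSO step, and the signals and labels are synthetic.

```python
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def dwt_features(signal, wavelet="db4", level=4):
    """Statistical features (mean abs, std, energy) of DWT sub-band coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats += [np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)]
    return np.array(feats)

rng = np.random.default_rng(4)
signals = rng.normal(size=(120, 1024))       # placeholder EMG epochs
labels = rng.integers(0, 3, size=120)        # 0=normal, 1=myopathic, 2=neurogenic (illustrative)
X = np.array([dwt_features(s) for s in signals])

# Random search over (C, gamma) standing in for the PSO step of the paper.
best = (-np.inf, None)
for _ in range(30):
    C, gamma = 10 ** rng.uniform(-1, 3), 10 ** rng.uniform(-5, 0)
    acc = cross_val_score(SVC(C=C, gamma=gamma), X, labels, cv=3).mean()
    best = max(best, (acc, (C, gamma)))
print("best CV accuracy:", round(best[0], 3), "with (C, gamma) =", best[1])
```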
Localization from near-source quasi-static electromagnetic fields
NASA Astrophysics Data System (ADS)
Mosher, J. C.
1993-09-01
A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from measurements of signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Characterization (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
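As a simplified analogue of the MUSIC adaptation mentioned above, a narrowband MUSIC pseudospectrum on a uniform linear array; the array geometry, source angles and noise level are illustrative and unrelated to the MEG/EEG configurations studied.

```python
import numpy as np
from scipy.signal import find_peaks

def music_spectrum(R, n_sources, steering):
    """MUSIC pseudospectrum: project candidate steering vectors onto the
    noise subspace of the measured spatial covariance R."""
    eigvals, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, :-n_sources]                # noise subspace (smallest eigenvalues)
    proj = np.einsum('ij,js->is', En.conj().T, steering)
    return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

# Illustrative: 8-sensor uniform linear array, half-wavelength spacing, 2 sources.
m, d = 8, 0.5
angles = np.radians([-20.0, 35.0])
a = lambda th: np.exp(2j * np.pi * d * np.arange(m)[:, None] * np.sin(th))
rng = np.random.default_rng(5)
S = rng.normal(size=(2, 500)) + 1j * rng.normal(size=(2, 500))
X = a(angles) @ S + 0.1 * (rng.normal(size=(m, 500)) + 1j * rng.normal(size=(m, 500)))
R = X @ X.conj().T / X.shape[1]

scan = np.radians(np.linspace(-90, 90, 361))
P = music_spectrum(R, n_sources=2, steering=a(scan))
peaks, _ = find_peaks(P)
top = peaks[np.argsort(P[peaks])[-2:]]
print("estimated DOAs (deg):", np.sort(np.degrees(scan[top])))
```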
Desikan, Radhika
2016-01-01
Cellular signal transduction usually involves activation cascades, the sequential activation of a series of proteins following the reception of an input signal. Here, we study the classic model of weakly activated cascades and obtain analytical solutions for a variety of inputs. We show that in the special but important case of optimal gain cascades (i.e. when the deactivation rates are identical) the downstream output of the cascade can be represented exactly as a lumped nonlinear module containing an incomplete gamma function with real parameters that depend on the rates and length of the cascade, as well as parameters of the input signal. The expressions obtained can be applied to the non-identical case when the deactivation rates are random to capture the variability in the cascade outputs. We also show that cascades can be rearranged so that blocks with similar rates can be lumped and represented through our nonlinear modules. Our results can be used both to represent cascades in computational models of differential equations and to fit data efficiently, by reducing the number of equations and parameters involved. In particular, the length of the cascade appears as a real-valued parameter and can thus be fitted in the same manner as Hill coefficients. Finally, we show how the obtained nonlinear modules can be used instead of delay differential equations to model delays in signal transduction. PMID:27581482
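As an illustration of the kind of lumped incomplete-gamma expression described (not the paper's exact formula), the step response of a chain of n first-order stages with identical deactivation rate λ takes a regularized incomplete gamma form:

```latex
% Step response of n identical first-order stages (deactivation rate \lambda):
% a standard result illustrating how an incomplete gamma function lumps a cascade.
\[
  x_n(t) \;\propto\; 1 - e^{-\lambda t}\sum_{k=0}^{n-1}\frac{(\lambda t)^k}{k!}
        \;=\; \frac{\gamma(n,\lambda t)}{\Gamma(n)} \;=\; P(n,\lambda t),
\]
% where \gamma is the lower incomplete gamma function and P its regularized form.
% Once P is used, the cascade length n enters as a real-valued parameter, which is
% the property exploited for fitting in the abstract above.
```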
Cellular Signaling Networks Function as Generalized Wiener-Kolmogorov Filters to Suppress Noise
NASA Astrophysics Data System (ADS)
Hinczewski, Michael; Thirumalai, D.
2014-10-01
Cellular signaling involves the transmission of environmental information through cascades of stochastic biochemical reactions, inevitably introducing noise that compromises signal fidelity. Each stage of the cascade often takes the form of a kinase-phosphatase push-pull network, a basic unit of signaling pathways whose malfunction is linked with a host of cancers. We show that this ubiquitous enzymatic network motif effectively behaves as a Wiener-Kolmogorov optimal noise filter. Using concepts from umbral calculus, we generalize the linear Wiener-Kolmogorov theory, originally introduced in the context of communication and control engineering, to take nonlinear signal transduction and discrete molecule populations into account. This allows us to derive rigorous constraints for efficient noise reduction in this biochemical system. Our mathematical formalism yields bounds on filter performance in cases important to cellular function—such as ultrasensitive response to stimuli. We highlight features of the system relevant for optimizing filter efficiency, encoded in a single, measurable, dimensionless parameter. Our theory, which describes noise control in a large class of signal transduction networks, is also useful both for the design of synthetic biochemical signaling pathways and the manipulation of pathways through experimental probes such as oscillatory input.
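For reference, the classic non-causal Wiener-Kolmogorov filter that this work generalizes, in its standard frequency-domain form for a stationary signal plus independent additive noise:

```latex
% Classic non-causal Wiener-Kolmogorov filter for y = s + n with independent,
% stationary signal and noise (power spectra S_s, S_n):
\[
  H(\omega) \;=\; \frac{S_s(\omega)}{S_s(\omega) + S_n(\omega)},
  \qquad
  \hat{s}(\omega) \;=\; H(\omega)\, y(\omega),
\]
% with minimum mean-square error
\[
  \mathrm{MMSE} \;=\; \frac{1}{2\pi}\int
     \frac{S_s(\omega)\,S_n(\omega)}{S_s(\omega)+S_n(\omega)}\;\mathrm{d}\omega .
\]
```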
NASA Astrophysics Data System (ADS)
Majumder, Himadri; Maity, Kalipada
2018-03-01
Shape memory alloy has a unique capability to return to its original shape after physical deformation by applying heat or a thermo-mechanical or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was utilized to find the optimum input parameter setting during wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse on time (TON), pulse off time (TOFF), wire feed (WF) and wire tension (WT), were taken as machining inputs for the experiments to optimize three interconnected responses: cutting speed, kerf width, and surface roughness. The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min and WT = 8 kg-F was found to produce the optimum results. The optimum process parameters for each desired response were also obtained using Taguchi's signal-to-noise ratio. A confirmation test was performed to validate the optimum machining parameter combination, affirming that DFA is a competent approach for selecting optimum input parameters for the desired response quality in WEDM of Ni-Ti shape memory alloy.
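A small sketch of Derringer-Suich-style desirability aggregation of the kind DFA uses; the response values, desirability forms and equal weighting are illustrative assumptions, not the paper's data.

```python
import numpy as np

def d_larger_is_better(y, lo, hi, weight=1.0):
    """Desirability for a response to maximize (e.g. cutting speed)."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0) ** weight

def d_smaller_is_better(y, lo, hi, weight=1.0):
    """Desirability for a response to minimize (e.g. kerf width, roughness)."""
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0) ** weight

# Illustrative responses for a set of WEDM runs: cutting speed, kerf, roughness.
speed = np.array([1.8, 2.4, 2.1, 2.9])
kerf = np.array([0.32, 0.35, 0.30, 0.38])
ra = np.array([2.6, 3.1, 2.4, 3.4])

d = np.vstack([
    d_larger_is_better(speed, speed.min(), speed.max()),
    d_smaller_is_better(kerf, kerf.min(), kerf.max()),
    d_smaller_is_better(ra, ra.min(), ra.max()),
])
composite = d.prod(axis=0) ** (1.0 / d.shape[0])   # geometric mean of desirabilities
print("composite desirability per run:", np.round(composite, 3),
      "-> best run:", int(np.argmax(composite)))
```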
Uncertainty Analysis in 3D Equilibrium Reconstruction
Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.
2018-02-21
Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. In this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole shot reconstruction results over a time interval are used to validate the propagated uncertainty from a single time slice.
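A Gauss-Newton-style propagation formula of the kind described, shown for orientation (this is an illustrative standard form, not necessarily V3FIT's exact expressions):

```latex
% With measured signals d, signal covariance \Sigma_d, and Jacobian
% J = \partial d / \partial p of the modeled signals with respect to the
% reconstructed parameters p, the parameter covariance and the uncertainty of
% any derived model quantity f(p) can be approximated as
\[
  C_p \;\approx\; \bigl(J^{\mathsf{T}}\,\Sigma_d^{-1}\,J\bigr)^{-1},
  \qquad
  \sigma_f^2 \;\approx\; \nabla_p f^{\mathsf{T}}\, C_p\, \nabla_p f .
\]
```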
Jain, S C; Miller, J R
1976-04-01
A method, using an optimization scheme, has been developed for the interpretation of spectral albedo (or spectral reflectance) curves obtained from remotely sensed water color data. This method uses a two-flow model of the radiation field and solves for the albedo. Optimization fitting of predicted to observed reflectance data is performed by a quadratic interpolation method over the variables chlorophyll concentration and scattering coefficient. The technique is applied to airborne water color data obtained from the Kawartha Lakes, the Sargasso Sea, and the Nova Scotia coast. The modeled spectral albedo curves are compared to those obtained experimentally, and the computed optimum water parameters are compared to ground truth values. It is shown that the backscattered spectral signal contains information that can be interpreted to give quantitative estimates of the chlorophyll concentration and turbidity in the waters studied.
Cumulant expansions for measuring water exchange using diffusion MRI
NASA Astrophysics Data System (ADS)
Ning, Lipeng; Nilsson, Markus; Lasič, Samo; Westin, Carl-Fredrik; Rathi, Yogesh
2018-02-01
The rate of water exchange across cell membranes is a parameter of biological interest and can be measured by diffusion magnetic resonance imaging (dMRI). In this work, we investigate a stochastic model for the diffusion-and-exchange of water molecules. This model provides a general solution for the temporal evolution of dMRI signal using any type of gradient waveform, thereby generalizing the signal expressions for the Kärger model. Moreover, we also derive a general nth order cumulant expansion of the dMRI signal accounting for water exchange, which has not been explored in earlier studies. Based on this analytical expression, we compute the cumulant expansion for dMRI signals for the special case of single diffusion encoding (SDE) and double diffusion encoding (DDE) sequences. Our results provide a theoretical guideline on optimizing experimental parameters for SDE and DDE sequences, respectively. Moreover, we show that DDE signals are more sensitive to water exchange at short-time scale but provide less attenuation at long-time scale than SDE signals. Our theoretical analysis is also validated using Monte Carlo simulations on synthetic structures.
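For orientation, the familiar second-order (kurtosis) cumulant form for a single diffusion encoding; the exchange-dependent generalization derived in the paper is not reproduced here:

```latex
% Second-order cumulant expansion of the SDE signal (diffusion kurtosis form),
% shown for orientation; the exchange-dependent generalization is in the paper.
\[
  \ln S(b) \;=\; \ln S_0 \;-\; b\,D \;+\; \tfrac{1}{6}\, b^2 D^2 K \;+\; \mathcal{O}(b^3),
\]
% where D is the apparent diffusivity and K the apparent excess kurtosis.
```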
Optimization of brain PET imaging for a multicentre trial: the French CATI experience.
Habert, Marie-Odile; Marie, Sullivan; Bertin, Hugo; Reynal, Moana; Martini, Jean-Baptiste; Diallo, Mamadou; Kas, Aurélie; Trébossen, Régine
2016-12-01
CATI is a French initiative launched in 2010 to handle the neuroimaging of a large cohort of subjects recruited for an Alzheimer's research program called MEMENTO. This paper presents our test protocol and results obtained for the 22 PET centres (overall 13 different scanners) involved in the MEMENTO cohort. We determined acquisition parameters using phantom experiments prior to patient studies, with the aim of optimizing PET quantitative values to the highest possible per site, while reducing, if possible, variability across centres. Jaszczak's and 3D-Hoffman's phantom measurements were used to assess image spatial resolution (ISR), recovery coefficients (RC) in hot and cold spheres, and signal-to-noise ratio (SNR). For each centre, the optimal reconstruction parameters were chosen as those maximizing ISR and RC without a noticeable decrease in SNR. Point-spread-function (PSF) modelling reconstructions were discarded. The three figures of merit extracted from the images reconstructed with optimized parameters and routine schemes were compared, as were volumes of interest ratios extracted from Hoffman acquisitions. The net effect of the 3D-OSEM reconstruction parameter optimization was investigated on a subset of 18 scanners without PSF modelling reconstruction. Compared to the routine parameters of the 22 PET centres, average RC in the two smallest hot and cold spheres and average ISR remained stable or were improved with the optimized reconstruction, at the expense of slight SNR degradation, while the dispersion of values was reduced. For the subset of scanners without PSF modelling, the mean RC of the smallest hot sphere obtained with the optimized reconstruction was significantly higher than with routine reconstruction. The putamen and caudate-to-white matter ratios measured on 3D-Hoffman acquisitions of all centres were also significantly improved by the optimization, while the variance was reduced. This study provides guidelines for optimizing quantitative results for multicentric PET neuroimaging trials.
Si, Lei; Wang, Zhongbin; Liu, Xinhua; Tan, Chao; Liu, Ze; Xu, Jing
2016-01-01
Shearers play an important role in fully mechanized coal mining face and accurately identifying their cutting pattern is very helpful for improving the automation level of shearers and ensuring the safety of coal mining. The least squares support vector machine (LSSVM) has been proven to offer strong potential in prediction and classification issues, particularly by employing an appropriate meta-heuristic algorithm to determine the values of its two parameters. However, these meta-heuristic algorithms have the drawbacks of being hard to understand and reaching the global optimal solution slowly. In this paper, an improved fly optimization algorithm (IFOA) to optimize the parameters of LSSVM was presented and the LSSVM coupled with IFOA (IFOA-LSSVM) was used to identify the shearer cutting pattern. The vibration acceleration signals of five cutting patterns were collected and the special state features were extracted based on the ensemble empirical mode decomposition (EEMD) and the kernel function. Some examples on the IFOA-LSSVM model were further presented and the results were compared with LSSVM, PSO-LSSVM, GA-LSSVM and FOA-LSSVM models in detail. The comparison results indicate that the proposed approach was feasible, efficient and outperformed the others. Finally, an industrial application example at the coal mining face was demonstrated to specify the effect of the proposed system. PMID:26771615
NASA Astrophysics Data System (ADS)
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindles detection methods.
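A sufficient condition of the kind referred to above, stated as an illustration under the assumption that the parameterized penalty's curvature is bounded below by -a (a minimax-concave-type penalty, for example); it is not claimed to be the thesis's exact statement:

```latex
% Illustrative sufficient condition for maintaining convexity with a parameterized
% non-convex penalty \phi(\,\cdot\,; a) whose second derivative is bounded below
% by -a. For the objective
\[
  F(x) \;=\; \tfrac{1}{2}\,\lVert y - Ax\rVert_2^2 \;+\; \lambda \sum_i \phi(x_i; a),
\]
% the Hessian satisfies \nabla^2 F \succeq A^{\mathsf{T}}A - \lambda a I, so F is
% convex whenever
\[
  0 \;\le\; a \;\le\; \frac{\lambda_{\min}\!\bigl(A^{\mathsf{T}}A\bigr)}{\lambda}.
\]
```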
Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors
Pan, Jin; Ma, Boyuan
2018-01-01
This paper essentially focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources into separated groups and separately estimate the DOA associated with each source. Motivated by joint parameter estimation, we propose to adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely, the expectation-step (E-step) and the maximization (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are iteratively and alternatively executed to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also realize an optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramer-Rao lower bound is derived for understanding the estimation accuracy and performance comparison. The verification of the proposed method is demonstrated with simulations. PMID:29617323
Boon, K H; Khalil-Hani, M; Malarvili, M B
2018-01-01
This paper presents a method that is able to predict paroxysmal atrial fibrillation (PAF). The method uses shorter heart rate variability (HRV) signals compared to existing methods, and achieves good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of the onset of PAF is clinically important because it increases the possibility of electrically stabilizing and preventing the onset of atrial arrhythmias with different pacing techniques. We propose a multi-objective optimization algorithm based on the non-dominated sorting genetic algorithm III for optimizing the baseline PAF prediction system, which consists of the stages of pre-processing, HRV feature extraction, and a support vector machine (SVM) model. The pre-processing stage comprises heart rate correction, interpolation, and signal detrending. After that, time-domain, frequency-domain and non-linear HRV features are extracted from the pre-processed data in the feature extraction stage. Then, these features are used as input to the SVM for predicting the PAF event. The proposed optimization algorithm is used to optimize the parameters and settings of the various HRV feature extraction algorithms, select the best feature subsets, and tune the SVM parameters simultaneously for maximum prediction performance. The proposed method achieves an accuracy rate of 87.7%, which significantly outperforms most of the previous works. This accuracy rate is achieved even with the HRV signal length being reduced from the typical 30 min to just 5 min (a reduction of 83%). Furthermore, another significant result is that the sensitivity rate, which is considered more important than the other performance metrics in this paper, can be improved at the cost of lower specificity. Copyright © 2017 Elsevier B.V. All rights reserved.
Qiao, Wei; Venayagamoorthy, Ganesh K; Harley, Ronald G
2008-01-01
Wide-area coordinating control is becoming an important issue and a challenging problem in the power industry. This paper proposes a novel optimal wide-area coordinating neurocontrol (WACNC), based on wide-area measurements, for a power system with power system stabilizers, a large wind farm and multiple flexible ac transmission system (FACTS) devices. An optimal wide-area monitor (OWAM), which is a radial basis function neural network (RBFNN), is designed to identify the input-output dynamics of the nonlinear power system. Its parameters are optimized through particle swarm optimization (PSO). Based on the OWAM, the WACNC is then designed by using the dual heuristic programming (DHP) method and RBFNNs, while considering the effect of signal transmission delays. The WACNC operates at a global level to coordinate the actions of local power system controllers. Each local controller communicates with the WACNC, receives remote control signals from the WACNC to enhance its dynamic performance and therefore helps improve system-wide dynamic and transient performance. The proposed control is verified by simulation studies on a multimachine power system.
Characterization of a Raman spectroscopy probe system for intraoperative brain tissue classification
Desroches, Joannie; Jermyn, Michael; Mok, Kelvin; Lemieux-Leduc, Cédric; Mercier, Jeanne; St-Arnaud, Karl; Urmey, Kirk; Guiot, Marie-Christine; Marple, Eric; Petrecca, Kevin; Leblond, Frédéric
2015-01-01
A detailed characterization study is presented of a Raman spectroscopy system designed to maximize the volume of resected cancer tissue in glioma surgery based on in vivo molecular tissue characterization. It consists of a hand-held probe system measuring spectrally resolved inelastically scattered light interacting with tissue, designed and optimized for in vivo measurements. Factors such as linearity of the signal with integration time and laser power, and their impact on signal to noise ratio, are studied leading to optimal data acquisition parameters. The impact of ambient light sources in the operating room is assessed and recommendations made for optimal operating conditions. In vivo Raman spectra of normal brain, cancer and necrotic tissue were measured in 10 patients, demonstrating that real-time inelastic scattering measurements can distinguish necrosis from vital tissue (including tumor and normal brain tissue) with an accuracy of 87%, a sensitivity of 84% and a specificity of 89%. PMID:26203368
NASA Astrophysics Data System (ADS)
Elshahaby, Fatma E. A.; Ghaly, Michael; Jha, Abhinav K.; Frey, Eric C.
2015-03-01
Model observers are widely used in medical imaging for the optimization and evaluation of instrumentation, acquisition parameters and image reconstruction and processing methods. The channelized Hotelling observer (CHO) is a commonly used model observer in nuclear medicine and has seen increasing use in other modalities. An anthropomorphic CHO consists of a set of channels that model some aspects of the human visual system and the Hotelling observer, which is the optimal linear discriminant. The optimality of the CHO is based on the assumption that the channel outputs for data with and without the signal present have a multivariate normal distribution with equal class covariance matrices. The channel outputs result from the dot product of channel templates with input images and are thus the sum of a large number of random variables. The central limit theorem is thus often used to justify the assumption that the channel outputs are normally distributed. In this work, we aim to examine this assumption for realistically simulated nuclear medicine images when various types of signal variability are present.
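A minimal sketch of a channelized Hotelling observer on simulated images; rotationally symmetric Gaussian channels stand in for a particular anthropomorphic channel set, and the signal, noise model and sample sizes are illustrative.

```python
import numpy as np

def gaussian_channels(size, widths):
    """Rotationally symmetric Gaussian channel templates (a simple stand-in
    for an anthropomorphic channel set), flattened to columns."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x ** 2 + y ** 2
    T = np.stack([np.exp(-r2 / (2.0 * w ** 2)).ravel() for w in widths], axis=1)
    return T / np.linalg.norm(T, axis=0)

rng = np.random.default_rng(6)
size, n = 32, 500
T = gaussian_channels(size, widths=[1.0, 2.0, 4.0, 8.0])

# Simulated signal-absent / signal-present images (white noise + Gaussian bump).
y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
signal = 0.4 * np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2)).ravel()
g0 = rng.normal(size=(n, size * size))
g1 = rng.normal(size=(n, size * size)) + signal

v0, v1 = g0 @ T, g1 @ T                          # channel outputs
S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))          # pooled channel covariance
w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))  # Hotelling template in channel space
t0, t1 = v0 @ w, v1 @ w                          # test statistics
snr = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t1.var() + t0.var()))
print(f"observer detectability (SNR) ≈ {snr:.2f}")
```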
USDA-ARS?s Scientific Manuscript database
The spatial frequency domain imaging technique has recently been developed for determination of the optical properties of food and biological materials. However, accurate estimation of the optical property parameters by the technique is challenging due to measurement errors associated with signal acquisition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ensslin, Torsten A.; Frommert, Mona
2011-05-15
The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power spectrum with five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in the case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loève and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.
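For reference, the Wiener filter in the operator notation common to information field theory, for data d = R s + n; the filters compared in the abstract reduce to this form with differing assumed spectra and coefficients:

```latex
% Wiener filter in operator form for data d = R s + n with signal covariance S,
% noise covariance N, and response R (illustrative of the filters compared above;
% the PURE coefficients differ):
\[
  m \;=\; D\,j, \qquad
  D \;=\; \bigl(S^{-1} + R^{\dagger} N^{-1} R\bigr)^{-1}, \qquad
  j \;=\; R^{\dagger} N^{-1} d .
\]
```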
Particle model for optical noisy image recovery via stochastic resonance
NASA Astrophysics Data System (ADS)
Zhang, Yongbin; Liu, Hongjun; Huang, Nan; Wang, Zhaolu; Han, Jing
2017-10-01
We propose a particle model for investigating the optical noisy image recovery via stochastic resonance. The light propagating in nonlinear media is regarded as moving particles, which are used for analyzing the nonlinear coupling of signal and noise. Owing to nonlinearity, a signal seeds a potential to reinforce itself at the expense of noise. The applied electric field, noise intensity, and correlation length are important parameters that influence the recovery effects. The noise-hidden image with the signal-to-noise intensity ratio of 1:30 is successfully restored and an optimal cross-correlation gain of 6.1 is theoretically obtained.
Processing oscillatory signals by incoherent feedforward loops
NASA Astrophysics Data System (ADS)
Zhang, Carolyn; Wu, Feilun; Tsoi, Ryan; Shats, Igor; You, Lingchong
From the timing of amoeba development to the maintenance of stem cell pluripotency, many biological signaling pathways exhibit the ability to differentiate between pulsatile and sustained signals in the regulation of downstream gene expression. While networks underlying this signal decoding are diverse, many are built around a common motif, the incoherent feedforward loop (IFFL), where an input simultaneously activates an output and an inhibitor of the output. With appropriate parameters, this motif can generate temporal adaptation, where the system is desensitized to a sustained input. This property serves as the foundation for distinguishing signals with varying temporal profiles. Here, we use quantitative modeling to examine another property of IFFLs, the ability to process oscillatory signals. Our results indicate that the system's ability to translate pulsatile dynamics is limited by two constraints. The kinetics of IFFL components dictate the input range for which the network can decode pulsatile dynamics. In addition, a match between the network parameters and signal characteristics is required for optimal "counting". We elucidate one potential mechanism by which information processing occurs in natural networks, with implications in the design of synthetic gene circuits for this purpose. This work was partially supported by the National Science Foundation Graduate Research Fellowship (CZ).
Quantum neural network-based EEG filtering for a brain-computer interface.
Gandhi, Vaibhav; Prasad, Girijesh; Coyle, Damien; Behera, Laxmidhar; McGinnity, Thomas Martin
2014-02-01
A novel neural information processing architecture inspired by quantum mechanics and incorporating the well-known Schrodinger wave equation is proposed in this paper. The proposed architecture referred to as recurrent quantum neural network (RQNN) can characterize a nonstationary stochastic signal as time-varying wave packets. A robust unsupervised learning algorithm enables the RQNN to effectively capture the statistical behavior of the input signal and facilitates the estimation of signal embedded in noise with unknown characteristics. The results from a number of benchmark tests show that simple signals such as dc, staircase dc, and sinusoidal signals embedded within high noise can be accurately filtered and particle swarm optimization can be employed to select model parameters. The RQNN filtering procedure is applied in a two-class motor imagery-based brain-computer interface where the objective was to filter electroencephalogram (EEG) signals before feature extraction and classification to increase signal separability. A two-step inner-outer fivefold cross-validation approach is utilized to select the algorithm parameters subject-specifically for nine subjects. It is shown that the subject-specific RQNN EEG filtering significantly improves brain-computer interface performance compared to using only the raw EEG or Savitzky-Golay filtered EEG across multiple sessions.
Use phase signals to promote lifetime extension for Windows PCs.
Hickey, Stewart; Fitzpatrick, Colin; O'Connell, Maurice; Johnson, Michael
2009-04-01
This paper proposes a signaling methodology for personal computers. Signaling may be viewed as an ecodesign strategy that can positively influence the consumer to consumer (C2C) market process. A number of parameters are identified that can provide the basis for signal implementation. These include operating time, operating temperature, operating voltage, power cycle counts, hard disk drive (HDD) self-monitoring, analysis, and reporting technology (SMART) attributes, and operating system (OS) event information. All these parameters are currently attainable or derivable via embedded technologies in modern desktop systems. A case study detailing a technical implementation of how the development of signals can be achieved in personal computers that incorporate Microsoft Windows operating systems is presented. Collation of lifetime temperature data from a system processor is demonstrated as a possible means of characterizing a usage profile for a desktop system. In addition, event log data is utilized for devising signals indicative of OS quality. The provision of lifetime usage data in the form of intuitive signals indicative of both hardware and software quality can, in conjunction with consumer education, facilitate an optimal remarketing strategy for used systems. This implementation requires no additional hardware.
Processing Oscillatory Signals by Incoherent Feedforward Loops
Zhang, Carolyn; You, Lingchong
2016-01-01
From the timing of amoeba development to the maintenance of stem cell pluripotency, many biological signaling pathways exhibit the ability to differentiate between pulsatile and sustained signals in the regulation of downstream gene expression. While the networks underlying this signal decoding are diverse, many are built around a common motif, the incoherent feedforward loop (IFFL), where an input simultaneously activates an output and an inhibitor of the output. With appropriate parameters, this motif can exhibit temporal adaptation, where the system is desensitized to a sustained input. This property serves as the foundation for distinguishing input signals with varying temporal profiles. Here, we use quantitative modeling to examine another property of IFFLs—the ability to process oscillatory signals. Our results indicate that the system’s ability to translate pulsatile dynamics is limited by two constraints. The kinetics of the IFFL components dictate the input range for which the network is able to decode pulsatile dynamics. In addition, a match between the network parameters and input signal characteristics is required for optimal “counting”. We elucidate one potential mechanism by which information processing occurs in natural networks, and our work has implications in the design of synthetic gene circuits for this purpose. PMID:27623175
De Benedictis, Lorenzo; Huck, Christian
2016-12-01
The optimization of near-infrared spectroscopic parameters was realized via design of experiments. With this new approach objectivity can be integrated into conventional, rather subjective approaches. The investigated factors are layer thickness, number of scans and temperature during measurement. Response variables in the full factorial design consisted of absorption intensity, signal-to-noise ratio and reproducibility of the spectra. Optimized factorial combinations have been found to be 0.5mm layer thickness, 64 scans and 25°C ambient temperature for liquid milk measurements. Qualitative analysis of milk indicated a strong correlation of environmental factors, as well as the feeding of cattle with respect to the change in milk composition. This was illustrated with the aid of near-infrared spectroscopy and the previously optimized parameters by detection of altered fatty acids in milk, especially by the fatty acid content (number of carboxylic functions) and the fatty acid length. Copyright © 2016 Elsevier Ltd. All rights reserved.
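The design-of-experiments step described above can be prototyped with a simple enumeration of the full factorial combinations of the three factors. The candidate levels and the scoring function below are hypothetical placeholders; in the actual workflow each run would be scored with the measured absorption intensity, signal-to-noise ratio, and reproducibility.

from itertools import product

layer_thickness_mm = [0.5, 1.0, 2.0]
num_scans = [16, 32, 64]
temperature_c = [25, 30, 35]

def score(thickness, scans, temp):
    # placeholder objective favouring thin layers, many scans, and ambient temperature
    return scans / 64.0 - abs(temp - 25) / 10.0 - (thickness - 0.5)

design = list(product(layer_thickness_mm, num_scans, temperature_c))
best = max(design, key=lambda combo: score(*combo))
print("evaluated", len(design), "runs; best combination:", best)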
Instrumentation to Record Evoked Potentials for Closed-Loop Control of Deep Brain Stimulation
Kent, Alexander R.; Grill, Warren M.
2012-01-01
Closed-loop deep brain stimulation (DBS) systems offer promise in relieving the clinical burden of stimulus parameter selection and improving treatment outcomes. In such a system, a feedback signal is used to adjust automatically stimulation parameters and optimize the efficacy of stimulation. We explored the feasibility of recording electrically evoked compound action potentials (ECAPs) during DBS for use as a feedback control signal. A novel instrumentation system was developed to suppress the stimulus artifact and amplify the small magnitude, short latency ECAP response during DBS with clinically relevant parameters. In vitro testing demonstrated the capabilities to increase the gain by a factor of 1,000x over a conventional amplifier without saturation, reduce distortion of mock ECAP signals, and make high fidelity recordings of mock ECAPs at latencies of only 0.5 ms following DBS pulses of 50 to 100 μs duration. Subsequently, the instrumentation was used to make in vivo recordings of ECAPs during thalamic DBS in cats, without contamination by the stimulus artifact. The signal characteristics were similar across three experiments, suggesting common neural activation patterns. The ECAP recordings enabled with this novel instrumentation may provide insight into the type and spatial extent of neural elements activated during DBS, and could serve as feedback control signals for closed-loop systems. PMID:22255894
The effects of parameter variation on MSET models of the Crystal River-3 feedwater flow system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miron, A.
1998-04-01
In this paper we develop further the results reported in Reference 1 to include a systematic study of the effects of varying MSET models and model parameters for the Crystal River-3 (CR) feedwater flow system. The study used archived CR process computer files from November 1-December 15, 1993 that were provided by Florida Power Corporation engineers Fairman Bockhorst and Brook Julias. The results support the conclusion that an optimal MSET model, properly trained and deriving its inputs in real-time from no more than 25 of the sensor signals normally provided to a PWR plant process computer, should be able to reliably detect anomalous variations in the feedwater flow venturis of less than 0.1% and in the absence of a venturi sensor signal should be able to generate a virtual signal that will be within 0.1% of the correct value of the missing signal.
Buchwald, Peter
2017-06-01
A generalized model of receptor function is proposed that relies on the essential assumptions of the minimal two-state receptor theory (i.e., ligand binding followed by receptor activation), but uses a different parametrization and allows nonlinear response (transduction) for possible signal amplification. For the most general case, three parameters are used: Kd, the classic equilibrium dissociation constant to characterize binding affinity; ε, an intrinsic efficacy to characterize the ability of the bound ligand to activate the receptor (ranging from 0 for an antagonist to 1 for a full agonist); and γ, a gain (amplification) parameter to characterize the nonlinearity of postactivation signal transduction (ranging from 1 for no amplification to infinity). The obtained equation, E/Emax = εγ·L/((εγ + 1 - ε)·L + Kd), resembles that of the operational (Black and Leff) or minimal two-state (del Castillo-Katz) models, E/Emax = τ·L/((τ + 1)·L + Kd), with εγ playing a role somewhat similar to that of the τ efficacy parameter of those models, but has several advantages. Its parameters are more intuitive as they are conceptually clearly related to the different steps of binding, activation, and signal transduction (amplification), and they are also better suited for optimization by nonlinear regression. It allows fitting of complex data where receptor binding and response are measured separately and the fractional occupancy and response are mismatched. Unlike the previous models, it is a true generalized model as simplified forms can be reproduced with special cases of its parameters. Such simplified forms can be used on their own to characterize partial agonism, competing partial and full agonists, or signal amplification.
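The quoted response equation can be transcribed directly into a small helper for plotting concentration-response curves or for nonlinear regression; parameter names follow the abstract, and the example values are arbitrary.

import numpy as np

def fractional_response(L, Kd, epsilon, gamma):
    # E/Emax = eps*gamma*L / ((eps*gamma + 1 - eps)*L + Kd)
    L = np.asarray(L, dtype=float)
    return epsilon * gamma * L / ((epsilon * gamma + 1.0 - epsilon) * L + Kd)

L = np.logspace(-3, 3, 200)                                              # ligand concentrations around Kd = 1
full_agonist = fractional_response(L, Kd=1.0, epsilon=1.0, gamma=10.0)   # amplified full agonist
partial_agonist = fractional_response(L, Kd=1.0, epsilon=0.3, gamma=1.0) # partial agonist, no gain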
Optimization and Analysis of Laser Beam Machining Parameters for Al7075-TiB2 In-situ Composite
NASA Astrophysics Data System (ADS)
Manjoth, S.; Keshavamurthy, R.; Pradeep Kumar, G. S.
2016-09-01
The paper focuses on laser beam machining (LBM) of In-situ synthesized Al7075-TiB2 metal matrix composite. Optimization and influence of laser machining process parameters on surface roughness, volumetric material removal rate (VMRR) and dimensional accuracy of composites were studied. Al7075-TiB2 metal matrix composite was synthesized by in-situ reaction technique using stir casting process. Taguchi's L9 orthogonal array was used to design experimental trials. Standoff distance (SOD) (0.3-0.5 mm), cutting speed (1000-1200 m/hr) and gas pressure (0.5-0.7 bar) were considered as variable input parameters at three different levels, while power and nozzle diameter were maintained constant with air as assisting gas. Optimized process parameters for surface roughness, volumetric material removal rate (VMRR) and dimensional accuracy were calculated by generating the main effects plot for signal-to-noise ratio (S/N ratio) for surface roughness, VMRR and dimensional error using Minitab software (version 16). The significance of standoff distance (SOD), cutting speed and gas pressure on surface roughness, volumetric material removal rate (VMRR) and dimensional error was calculated using the analysis of variance (ANOVA) method. Results indicate that, for surface roughness, cutting speed (56.38%) is the most significant parameter followed by standoff distance (41.03%) and gas pressure (2.6%). For volumetric material removal rate (VMRR), gas pressure (42.32%) is the most significant parameter followed by cutting speed (33.60%) and standoff distance (24.06%). For dimensional error, standoff distance (53.34%) is the most significant parameter followed by cutting speed (34.12%) and gas pressure (12.53%). Further, verification experiments were carried out to confirm performance of optimized process parameters.
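The smaller-is-better S/N ratio and main-effects summary used in such a Taguchi analysis are straightforward to compute; the sketch below uses the standard L9 level assignments for the three factors but made-up response values, not the measurements reported above.

import numpy as np

# L9 orthogonal array: standoff distance (mm), cutting speed (m/hr), gas pressure (bar)
l9 = np.array([[0.3, 1000, 0.5], [0.3, 1100, 0.6], [0.3, 1200, 0.7],
               [0.4, 1000, 0.6], [0.4, 1100, 0.7], [0.4, 1200, 0.5],
               [0.5, 1000, 0.7], [0.5, 1100, 0.5], [0.5, 1200, 0.6]])
roughness = np.array([2.1, 1.8, 1.6, 2.4, 2.0, 1.9, 2.8, 2.6, 2.3])  # placeholder Ra values

sn = -10.0 * np.log10(roughness ** 2)          # smaller-is-better S/N ratio per run

for col, name in enumerate(["standoff distance", "cutting speed", "gas pressure"]):
    effects = {lvl: round(sn[l9[:, col] == lvl].mean(), 2) for lvl in np.unique(l9[:, col])}
    print(name, effects)                        # the level with the highest mean S/N is preferred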
Longitudinal bunch monitoring at the Fermilab Tevatron and Main Injector synchrotrons
Thurman-Keup, R.; Bhat, C.; Blokland, W.; ...
2011-10-17
The measurement of the longitudinal behavior of the accelerated particle beams at Fermilab is crucial to the optimization and control of the beam and the maximizing of the integrated luminosity for the particle physics experiments. Longitudinal measurements in the Tevatron and Main Injector synchrotrons are based on the analysis of signals from resistive wall current monitors. This study describes the signal processing performed by a 2 GHz-bandwidth oscilloscope together with a computer running a LabVIEW program which calculates the longitudinal beam parameters.
Optimisation d'analyses de grenat almandin realisees au microscope electronique a balayage
NASA Astrophysics Data System (ADS)
Larose, Miguel
The electron microprobe (EMP) is considered the gold standard for the collection of precise and representative chemical composition of minerals in rocks, but data of similar quality should be obtainable with a scanning electron microscope (SEM). This thesis presents an analytical protocol aimed at optimizing operational parameters of an SEM paired with an EDS Si(Li) X-ray detector (JEOL JSM-840A) for the imaging, quantitative chemical analysis and compositional X-ray maps of almandine garnet found in pelitic schists from the Canadian Cordillera. Results are then compared to those obtained for the same samples on a JEOL JXA 8900 EMP. For imaging purposes, the secondary electron and backscattered electron signals have been used to obtain topographic and chemical contrast of the samples, respectively. The SEM allows the acquisition of images with higher resolution than the EMP when working at high magnifications. However, for millimetric size minerals requiring very low magnifications, the EMP can usually match the imaging capabilities of an SEM. When optimizing images for both signals, the optimal operational parameters to show similar contrasts are not restricted to a unique combination of values. Optimization of operational parameters for quantitative chemical analysis resulted in analytical data with a similar precision and showing good correlation to that obtained with an EMP. Optimization of operational parameters for compositional X-ray maps aimed at maximizing the collected intensity within a pixel as well as complying with the spatial resolution criterion in order to obtain a qualitative compositional map representative of the chemical variation within the grain. Even though various corrections were needed, such as for the shadow effect and background noise removal, and the spatial resolution criterion could not be met because of the limited pixel density available on the SEM, the compositional X-ray maps show a good correlation with those obtained with the EMP, even for concentrations as low as 0.5%. When paired with a rigorous analytical protocol, the use of an SEM equipped with an EDS Si(Li) X-ray detector allows the collection of qualitative and quantitative results similar to those obtained with an EMP for all three of the applications considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, K; Lu, Z; MacMahon, H
Purpose: To investigate the effect of varying system image processing parameters on lung nodule detectability in digital radiography. Methods: An anthropomorphic chest phantom was imaged in the posterior-anterior position using a GE Discovery XR656 digital radiography system. To simulate lung nodules, a polystyrene board with 6.35 mm diameter PMMA spheres was placed adjacent to the phantom (into the x-ray path). Due to magnification, the projected simulated nodules had a diameter in the radiographs of approximately 7.5 mm. The images were processed using one of GE's default chest settings (Factory3) and reprocessed by varying the "Edge" and "Tissue Contrast" processing parameters, which were the two user-configurable parameters for a single edge and contrast enhancement algorithm. For each parameter setting, the nodule signals were calculated by subtracting the chest-only image from the image with simulated nodules. Twenty nodule signals were averaged, Gaussian filtered, and radially averaged in order to generate an approximately noiseless signal. For each processing parameter setting, this noise-free signal and 180 background samples from across the lung were used to estimate ideal observer performance in a signal-known-exactly detection task. Performance was estimated using a channelized Hotelling observer with 10 Laguerre-Gauss channel functions. Results: The "Edge" and "Tissue Contrast" parameters each had an effect on the detectability as calculated by the model observer. The CHO-estimated signal detectability ranged from 2.36 to 2.93 and was highest for "Edge" = 4 and "Tissue Contrast" = -0.15. In general, detectability tended to decrease as "Edge" was increased and as "Tissue Contrast" was increased. A human observer study should be performed to validate the relation to human detection performance. Conclusion: Image processing parameters can affect lung nodule detection performance in radiography. While validation with a human observer study is needed, model observer detectability for common tasks could provide a means for optimizing image processing parameters.
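A channelized Hotelling observer of the kind used above reduces each image to a handful of channel outputs and computes detectability from the channelized signal and background covariance. The sketch below builds ten Laguerre-Gauss channels and applies them to a synthetic Gaussian "nodule" and random background patches; the channel width, image size, and data are illustrative stand-ins, not the study's processed radiographs.

import numpy as np
from scipy.special import eval_laguerre

def lg_channels(size, n_channels=10, a=15.0):
    # returns a (size*size, n_channels) matrix of 2-D Laguerre-Gauss channel functions
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    g = 2.0 * np.pi * (x ** 2 + y ** 2) / a ** 2
    chans = [np.sqrt(2.0) / a * np.exp(-g / 2.0) * eval_laguerre(j, g) for j in range(n_channels)]
    return np.stack([c.ravel() for c in chans], axis=1)

rng = np.random.default_rng(2)
size, n_bg = 64, 180
yy, xx = np.mgrid[:size, :size] - (size - 1) / 2.0
signal = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * 4.0 ** 2))        # noise-free nodule signal
backgrounds = rng.normal(size=(n_bg, size * size))              # background-only samples

T = lg_channels(size)
v_sig = T.T @ signal.ravel()                                    # channelized signal
K = np.cov(backgrounds @ T, rowvar=False)                       # channel covariance of backgrounds
d_prime = float(np.sqrt(v_sig @ np.linalg.solve(K, v_sig)))     # CHO detectability index
print("detectability d' =", round(d_prime, 2))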
SPS pilot signal design and power transponder analysis, volume 2, phase 3
NASA Technical Reports Server (NTRS)
Lindsey, W. C.; Scholtz, R. A.; Chie, C. M.
1980-01-01
The problem of pilot signal parameter optimization and the related problem of power transponder performance analysis for the Solar Power Satellite reference phase control system are addressed. Signal and interference models were established to enable specifications of the front end filters including both the notch filter and the antenna frequency response. A simulation program package was developed to be included in SOLARSIM to perform tradeoffs of system parameters based on minimizing the phase error for the pilot phase extraction. An analytical model that characterizes the overall power transponder operation was developed. From this model, the effects of different phase noise disturbance sources that contribute to phase variations at the output of the power transponders were studied and quantified. Results indicate that it is feasible to hold the antenna array phase error to less than one degree per power module for the type of disturbances modeled.
Experiments on Adaptive Self-Tuning of Seismic Signal Detector Parameters
NASA Astrophysics Data System (ADS)
Knox, H. A.; Draelos, T.; Young, C. J.; Chael, E. P.; Peterson, M. G.; Lawry, B.; Phillips-Alonge, K. E.; Balch, R. S.; Ziegler, A.
2016-12-01
Scientific applications, including underground nuclear test monitoring and microseismic monitoring, can benefit enormously from data-driven dynamic algorithms for tuning seismic and infrasound signal detection parameters, since continuous streams are producing waveform archives on the order of 1 TB per month. Tuning is a challenge because there are a large number of data processing parameters that interact in complex ways, and because the underlying population of true signal detections is generally unknown. The largely manual process of identifying effective parameters, often performed only over a subset of stations over a short time period, is painstaking and does not guarantee that the resulting controls are the optimal configuration settings. We present improvements to an Adaptive Self-Tuning algorithm for continuously adjusting detection parameters based on consistency with neighboring sensors. Results are shown for 1) data from a very dense network (120 stations, 10 km radius) deployed during 2008 on Erebus Volcano, Antarctica, and 2) data from a continuous downhole seismic array in the Farnsworth Field, an oil field in Northern Texas that hosts an ongoing carbon capture, utilization, and storage project. Performance is assessed in terms of missed detections and false detections relative to human analyst detections, simulated waveforms where ground-truth detections exist, and visual inspection.
Clark, Toshimasa J; Wilson, Gregory J; Maki, Jeffrey H
2017-07-01
Contrast-enhanced (CE)-MRA optimization involves interactions of sequence duration, bolus timing, contrast recirculation, and both R1 relaxivity and R2*-related reduction of signal. Prior data suggest superior image quality with slower gadolinium injection rates than typically used. A computer-based model of CE-MRA was developed, with contrast injection, physiologic, and image acquisition parameters varied over a wide gamut. Gadolinium concentration was derived using Verhoeven's model with recirculation, R1 and R2* calculated at each time point, and modulation transfer curves used to determine injection rates, resulting in optimal resolution and image contrast for renal and carotid artery CE-MRA. Validation was via a vessel stenosis phantom and example patients who underwent carotid CE-MRA with low effective injection rates. Optimal resolution for renal and carotid CE-MRA is achieved with injection rates of 0.5 to 0.9 mL/s and 0.2 to 0.3 mL/s, respectively, dependent on contrast volume. Optimal image contrast requires slightly faster injection rates. Expected signal-to-noise ratio varies with both contrast volume and cardiac output. Simulated vessel phantom and clinical carotid CE-MRA exams at an effective contrast injection rate of 0.4 to 0.5 mL/s demonstrate increased resolution. Optimal image resolution is achieved at intuitively low, effective injection rates (0.2-0.9 mL/s, dependent on imaging parameters and contrast injection volume). Magn Reson Med 78:357-369, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Robust stochastic resonance: Signal detection and adaptation in impulsive noise
NASA Astrophysics Data System (ADS)
Kosko, Bart; Mitaim, Sanya
2001-11-01
Stochastic resonance (SR) occurs when noise improves a system performance measure such as a spectral signal-to-noise ratio or a cross-correlation measure. All SR studies have assumed that the forcing noise has finite variance. Most have further assumed that the noise is Gaussian. We show that SR still occurs for the more general case of impulsive or infinite-variance noise. The SR effect fades as the noise grows more impulsive. We study this fading effect on the family of symmetric α-stable bell curves that includes the Gaussian bell curve as a special case. These bell curves have thicker tails as the parameter α falls from 2 (the Gaussian case) to 1 (the Cauchy case) to even lower values. Thicker tails create more frequent and more violent noise impulses. The main feedback and feedforward models in the SR literature show this fading SR effect for periodic forcing signals when we plot either the signal-to-noise ratio or a signal correlation measure against the dispersion of the α-stable noise. Linear regression shows that an exponential law γopt(α) = cA^α describes this relation between the impulsive index α and the SR-optimal noise dispersion γopt. The results show that SR is robust against noise "outliers." So SR may be more widespread in nature than previously believed. Such robustness also favors the use of SR in engineering systems. We further show that an adaptive system can learn the optimal noise dispersion for two standard SR models (the quartic bistable model and the FitzHugh-Nagumo neuron model) for the signal-to-noise ratio performance measure. This also favors practical applications of SR and suggests that evolution may have tuned the noise-sensitive parameters of biological systems.
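The quartic bistable system driven by symmetric alpha-stable noise can be simulated directly to reproduce the resonance curve qualitatively; the sketch below sweeps the noise dispersion and picks the value that maximizes a crude output SNR. The integration step, clipping safeguard, and all parameter values are illustrative choices made for this sketch, not the settings of the study.

import numpy as np
from scipy.stats import levy_stable

def bistable_snr(alpha, gamma, A=0.3, f0=0.01, dt=0.1, n=20000, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(n) * dt
    noise = levy_stable.rvs(alpha, 0.0, scale=gamma, size=n, random_state=rng)
    x = np.empty(n)
    x[0] = 1.0
    for i in range(1, n):   # Euler step of dx/dt = x - x^3 + A sin(2 pi f0 t) + alpha-stable noise
        drift = x[i - 1] - x[i - 1] ** 3 + A * np.sin(2 * np.pi * f0 * t[i - 1])
        step = x[i - 1] + dt * drift + dt ** (1.0 / alpha) * noise[i]
        x[i] = np.clip(step, -10.0, 10.0)   # clip guards the integration against rare huge impulses
    spec = np.abs(np.fft.rfft(np.sign(x))) ** 2          # spectrum of the hard-limited output
    freqs = np.fft.rfftfreq(n, dt)
    k = int(np.argmin(np.abs(freqs - f0)))
    background = np.median(spec[max(k - 20, 1):k + 20])
    return spec[k] / background                          # crude narrowband SNR proxy

dispersions = np.logspace(-2, 0.5, 12)
snrs = [bistable_snr(alpha=1.8, gamma=g) for g in dispersions]
print("SR-optimal dispersion (alpha = 1.8):", dispersions[int(np.argmax(snrs))])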
Test apparatus to monitor time-domain signals from semiconductor-detector pixel arrays
NASA Astrophysics Data System (ADS)
Haston, Kyle; Barber, H. Bradford; Furenlid, Lars R.; Salçin, Esen; Bora, Vaibhav
2011-10-01
Pixellated semiconductor detectors, such as CdZnTe, CdTe, or TlBr, are used for gamma-ray imaging in medicine and astronomy. Data analysis for these detectors typically estimates the position (x, y, z) and energy (E) of each interacting gamma ray from a set of detector signals {Si} corresponding to completed charge transport on the hit pixel and any of its neighbors that take part in charge sharing, plus the cathode. However, it is clear from an analysis of signal induction that there are transient signals on all pixel electrodes during the charge transport and, when there is charge trapping, small negative residual signals on all electrodes. If we wish to optimally obtain the event parameters, we should take all these signals into account. We wish to estimate x, y, z, and E from the set of all electrode signals, {Si(t)}, including time dependence, using maximum-likelihood techniques [1]. To do this, we need to determine the probability of the electrode signals, given the event parameters {x, y, z, E}, i.e. Pr( {Si(t)} | {x, y, z, E} ). Thus we need to map the detector response of all pixels, {Si(t)}, for a large number of events with known x, y, z, and E. In this paper we demonstrate the existence of the transient signals and residual signals and determine their magnitudes. They are typically 50-100 times smaller than the hit-pixel signals. We then describe development of an apparatus to measure the response of a 16-pixel semiconductor detector and show some preliminary results. We also discuss techniques for measuring the event parameters for individual gamma-ray interactions, a requirement for determining Pr( {Si(t)} | {x, y, z, E} ).
NASA Astrophysics Data System (ADS)
Paíga, Paula; Silva, Luís M. S.; Delerue-Matos, Cristina
2016-10-01
The flow rates of drying and nebulizing gas, heat block and desolvation line temperatures and interface voltage are potential electrospray ionization parameters as they may enhance sensitivity of the mass spectrometer. The conditions that give higher sensitivity of 13 pharmaceuticals were explored. First, a Plackett-Burman design was implemented to screen significant factors, and it was concluded that interface voltage and nebulizing gas flow were the only factors that influence the intensity signal for all pharmaceuticals. This fractionated factorial design was projected to set a full 2² factorial design with center points. The lack-of-fit test proved to be significant. Then, a central composite face-centered design was conducted. Finally, a stepwise multiple linear regression and subsequently an optimization problem solving were carried out. Two main drug clusters were found concerning the signal intensities of all runs of the augmented factorial design. p-Aminophenol, salicylic acid, and nimesulide constitute one cluster as a result of showing much higher sensitivity than the remaining drugs. The other cluster is more homogeneous, with some sub-clusters comprising one pharmaceutical and its respective metabolite. It was observed that instrumental signal increased when both significant factors increased, with maximum signal occurring when both codified factors are set at level +1. It was also found that, for most of the pharmaceuticals, interface voltage influences the intensity of the instrument more than the nebulizing gas flow rate. The only exceptions are nimesulide, where the relative importance of the factors is reversed, and salicylic acid, where both factors equally influence the instrumental signal.
Esfahani, Mohammad Shahrokh; Dougherty, Edward R
2015-01-01
Phenotype classification via genomic data is hampered by small sample sizes that negatively impact classifier design. Utilization of prior biological knowledge in conjunction with training data can improve both classifier design and error estimation via the construction of the optimal Bayesian classifier. In the genomic setting, gene/protein signaling pathways provide a key source of biological knowledge. Although these pathways are neither complete, nor regulatory, with no timing associated with them, they are capable of constraining the set of possible models representing the underlying interaction between molecules. The aim of this paper is to provide a framework and the mathematical tools to transform signaling pathways to prior probabilities governing uncertainty classes of feature-label distributions used in classifier design. Structural motifs extracted from the signaling pathways are mapped to a set of constraints on a prior probability on a Multinomial distribution. Since the Dirichlet distribution is the conjugate prior for the Multinomial distribution, we propose optimization paradigms to estimate the parameters of a Dirichlet distribution in the Bayesian setting. The performance of the proposed methods is tested on two widely studied pathways: the mammalian cell cycle and a p53 pathway model.
Cosmological information in Gaussianized weak lensing signals
NASA Astrophysics Data System (ADS)
Joachimi, B.; Taylor, A. N.; Kiessling, A.
2011-11-01
Gaussianizing the one-point distribution of the weak gravitational lensing convergence has recently been shown to increase the signal-to-noise ratio contained in two-point statistics. We investigate the information on cosmology that can be extracted from the transformed convergence fields. Employing Box-Cox transformations to determine optimal transformations to Gaussianity, we develop analytical models for the transformed power spectrum, including effects of noise and smoothing. We find that optimized Box-Cox transformations perform substantially better than an offset logarithmic transformation in Gaussianizing the convergence, but both yield very similar results for the signal-to-noise ratio. None of the transformations is capable of eliminating correlations of the power spectra between different angular frequencies, which we demonstrate to have a significant impact on the errors in cosmology. Analytic models of the Gaussianized power spectrum yield good fits to the simulations and produce unbiased parameter estimates in the majority of cases, where the exceptions can be traced back to the limitations in modelling the higher order correlations of the original convergence. In the ideal case, without galaxy shape noise, we find an increase in the cumulative signal-to-noise ratio by a factor of 2.6 for angular frequencies up to ℓ= 1500, and a decrease in the area of the confidence region in the Ωm-σ8 plane, measured in terms of q-values, by a factor of 4.4 for the best performing transformation. When adding a realistic level of shape noise, all transformations perform poorly with little decorrelation of angular frequencies, a maximum increase in signal-to-noise ratio of 34 per cent, and even slightly degraded errors on cosmological parameters. We argue that to find Gaussianizing transformations of practical use, it will be necessary to go beyond transformations of the one-point distribution of the convergence, extend the analysis deeper into the non-linear regime and resort to an exploration of parameter space via simulations.
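The Box-Cox step described above can be illustrated on a mock convergence field; the sketch uses a shifted lognormal stand-in for the convergence (not a ray-traced map), lets scipy pick the maximum-likelihood Box-Cox lambda, and compares the residual skewness against an offset-logarithmic transformation.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
kappa = np.exp(rng.normal(scale=0.5, size=100_000)) - 1.0     # lognormal mock convergence (heavy positive tail)

shift = 1.0 - kappa.min() + 1e-6                              # Box-Cox requires strictly positive input
gaussianized, lam = stats.boxcox(kappa + shift)               # lambda chosen by maximum likelihood
offset_log = np.log(kappa + shift)                            # offset-logarithmic alternative

for name, field in [("raw", kappa), ("Box-Cox", gaussianized), ("offset log", offset_log)]:
    print(f"{name:10s} skewness = {stats.skew(field):+.3f}")
print("optimal Box-Cox lambda:", round(lam, 3))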
Searches for millisecond pulsations in low-mass X-ray binaries
NASA Technical Reports Server (NTRS)
Wood, K. S.; Hertz, P.; Norris, J. P.; Vaughan, B. A.; Michelson, P. F.; Mitsuda, K.; Lewin, W. H. G.; Van Paradijs, J.; Penninx, W.; Van Der Klis, M.
1991-01-01
High-sensitivity search techniques for millisecond periods are presented and applied to data from the Japanese satellite Ginga and HEAO 1. The search is optimized for pulsed signals whose period, drift rate, and amplitude conform with what is expected for low-mass X-ray binary (LMXB) sources. Consideration is given to how the current understanding of LMXBs guides the search strategy and sets these parameter limits. An optimized one-parameter coherence recovery technique (CRT) developed for recovery of phase coherence is presented. This technique provides a large increase in sensitivity over the method of incoherent summation of Fourier power spectra. The range of spin periods expected from LMXB phenomenology is discussed, the necessary constraints on the application of CRT are described in terms of integration time and orbital parameters, and the residual power unrecovered by the quadratic approximation for realistic cases is estimated.
Statistics of optimal information flow in ensembles of regulatory motifs
NASA Astrophysics Data System (ADS)
Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan
2018-02-01
Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
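The "capacity" of a single noisy regulatory channel can be computed numerically once the input-output kernel is discretized; the sketch below uses the standard Blahut-Arimoto iteration on an illustrative Hill-type response with Gaussian output noise (the response shape, noise level, and grids are arbitrary choices, not the ensemble treated in the paper).

import numpy as np

def blahut_arimoto(p_y_given_x, iters=300):
    # capacity (in nats) of a discrete memoryless channel with rows p(y|x)
    n_x = p_y_given_x.shape[0]
    r = np.full(n_x, 1.0 / n_x)
    for _ in range(iters):
        q = r[:, None] * p_y_given_x
        q /= q.sum(axis=0, keepdims=True)                     # posterior q(x|y)
        log_c = np.sum(p_y_given_x * np.log(q + 1e-300), axis=1)
        r = np.exp(log_c)
        r /= r.sum()
    return float(np.sum(r * log_c) - np.sum(r * np.log(r + 1e-300)))

x = np.linspace(0.0, 4.0, 60)                                 # input grid (e.g. transcription-factor level)
y = np.linspace(-0.3, 1.3, 200)                               # output grid
mean = x[:, None] ** 2 / (1.0 + x[:, None] ** 2)              # Hill-type mean response
p = np.exp(-(y[None, :] - mean) ** 2 / (2 * 0.05 ** 2))       # Gaussian output noise, sigma = 0.05
p /= p.sum(axis=1, keepdims=True)

print("channel capacity ~", round(blahut_arimoto(p) / np.log(2), 2), "bits")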
Optimal directed searches for continuous gravitational waves
NASA Astrophysics Data System (ADS)
Ming, Jing; Krishnan, Badri; Papa, Maria Alessandra; Aulbert, Carsten; Fehrmann, Henning
2016-03-01
Wide parameter space searches for long-lived continuous gravitational wave signals are computationally limited. It is therefore critically important that the available computational resources are used rationally. In this paper we consider directed searches, i.e., targets for which the sky position is known accurately but the frequency and spin-down parameters are completely unknown. Given a list of such potential astrophysical targets, we therefore need to prioritize. On which target(s) should we spend scarce computing resources? What parameter space region in frequency and spin-down should we search through? Finally, what is the optimal search setup that we should use? In this paper we present a general framework that allows us to solve all three of these problems. This framework is based on maximizing the probability of making a detection subject to a constraint on the maximum available computational cost. We illustrate the method for a simplified problem.
Optimization of the coplanar interdigital capacitive sensor
NASA Astrophysics Data System (ADS)
Huang, Yunzhi; Zhan, Zheng; Bowler, Nicola
2017-02-01
Interdigital capacitive sensors are applied in nondestructive testing and material property characterization of low-conductivity materials. The sensor performance is typically described based on the penetration depth of the electric field into the sample material, the sensor signal strength and its sensitivity. These factors all depend on the geometry and material properties of the sensor and sample. In this paper, a detailed analysis is provided, through finite element simulations, of the ways in which the sensor's geometrical parameters affect its performance. The geometrical parameters include the number of digits forming the interdigital electrodes and the ratio of digit width to their separation. In addition, the influence of the presence or absence of a metal backplane on the sample is analyzed. Further, the effects of sensor substrate thickness and material on signal strength are studied. The results of the analysis show that it is necessary to take into account a trade-off between the desired sensitivity and penetration depth when designing the sensor. Parametric equations are presented to assist the sensor designer or nondestructive evaluation specialist in optimizing the design of a capacitive sensor.
Zhao, Yimeng; Sun, Liangliang; Zhu, Guijie; Dovichi, Norman J
2016-10-07
We used reversed-phase liquid chromatography to separate the yeast proteome into 23 fractions. These fractions were then analyzed using capillary zone electrophoresis (CZE) coupled to a Q-Exactive HF mass spectrometer using an electrokinetically pumped sheath flow interface. The parameters of the mass spectrometer were first optimized for top-down proteomics using a mixture of seven model proteins; we observed that intact protein mode with a trapping pressure of 0.2 and normalized collision energy of 20% produced the highest intact protein signals and most protein identifications. Then, we applied the optimized parameters for analysis of the fractionated yeast proteome. From this, 580 proteoforms and 180 protein groups were identified via database searching of the MS/MS spectra. This number of proteoform identifications is two times larger than that of previous CZE-MS/MS studies. An additional 3,243 protein species were detected based on the parent ion spectra. Post-translational modifications including N-terminal acetylation, signal peptide removal, and oxidation were identified.
NASA Astrophysics Data System (ADS)
Luu, Gia Thien; Boualem, Abdelbassit; Duy, Tran Trung; Ravier, Philippe; Butteli, Olivier
Muscle Fiber Conduction Velocity (MFCV) can be calculated from the time delay between the surface electromyographic (sEMG) signals recorded by electrodes aligned with the fiber direction. In order to take into account the non-stationarity of the data during dynamic contraction (the most common situation in daily life), the developed methods have to consider that the MFCV changes over time, which induces time-varying delays, and that the data are non-stationary (change of Power Spectral Density (PSD)). In this paper, the problem of TVD estimation is considered using a parametric method. First, a polynomial model of the TVD is proposed. Then, the TVD model parameters are estimated by using a maximum likelihood estimation (MLE) strategy solved by a deterministic optimization technique (Newton) and a stochastic optimization technique, called simulated annealing (SA). The performance of the two techniques is also compared. We also derive two appropriate Cramer-Rao Lower Bounds (CRLB) for the estimated TVD model parameters and for the TVD waveforms. Monte-Carlo simulation results show that the estimation of both the model parameters and the TVD function is unbiased and that the variance obtained is close to the derived CRLBs. A comparison with non-parametric approaches to TVD estimation is also presented and shows the superiority of the proposed method.
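A stripped-down version of the parametric approach can be prototyped quickly: model the delay between two channels as a polynomial in time and fit its coefficients with simulated annealing. The squared-error cost below stands in for the full maximum-likelihood criterion, and the synthetic signals, noise level, and bounds are placeholders.

import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(4)
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
source = np.convolve(rng.normal(size=t.size), np.ones(20) / 20.0, mode="same")  # band-limited stand-in for sEMG

def apply_tvd(signal, coeffs):
    delay = np.polyval(coeffs[::-1], t)            # polynomial TVD, constant term first
    return np.interp(t - delay, t, signal)

true_coeffs = (0.004, 0.003)                       # delay(t) = 4 ms + 3 ms/s * t
x1 = source + 0.05 * rng.normal(size=t.size)
x2 = apply_tvd(source, true_coeffs) + 0.05 * rng.normal(size=t.size)

def cost(coeffs):
    return float(np.sum((x2 - apply_tvd(x1, coeffs)) ** 2))

result = dual_annealing(cost, bounds=[(0.0, 0.01), (-0.01, 0.01)], seed=1, maxiter=200)
print("estimated TVD coefficients:", result.x)     # expected to be close to (0.004, 0.003)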
Online determination of biophysical parameters of mucous membranes of a human body
NASA Astrophysics Data System (ADS)
Lisenko, S. A.; Kugeiko, M. M.
2013-07-01
We have developed a method for online determination of biophysical parameters of mucous membranes (MMs) of a human body (transport scattering coefficient, scattering anisotropy factor, haemoglobin concentration, degrees of blood oxygenation, average diameter of capillaries with blood) from measurements of spectral and spatial characteristics of diffuse reflection. The method is based on regression relationships between linearly independent components of the measured light signals and the unknown parameters of MMs, obtained by simulation of the radiation transfer in the MM under conditions of its general variability. We have proposed and justified the calibration-free fibre-optic method for determining the concentration of haemoglobin in MMs by measuring the light signals diffusely reflected by the tissue in four spectral regions at two different distances from the illumination spot. We have selected the optimal wavelengths of optical probing for the implementation of the method.
Experimental validation of the tuneable diaphragm effect in modern acoustic stethoscopes.
Nowak, Karolina M; Nowak, Lukasz J
2017-09-01
The force with which the diaphragm chestpiece of a stethoscope is pressed against the body of a patient during an auscultation examination introduces the initial stress and deformation to the diaphragm and the underlying tissues, thus altering the acoustic parameters of the sound transmission path. If the examination is performed by an experienced physician, he will intuitively adjust the amount of the force in order to achieve the optimal sound quality. However, in the case of the increasingly popular auto-diagnosis and telemedicine auscultation devices with no such feedback mechanisms, the question arises regarding the influence of a possible force mismatch on the parameters of the recorded signal. The present study describes the results of experimental investigations on the relation between the pressure applied to the chestpiece of a stethoscope and the parameters of the transmitted bioacoustic signals. The experiments were carried out using various stethoscopes connected to a force measurement system, which allowed fixed pressure to be maintained during auscultation examinations. The signals were recorded during examinations of different volunteers, at various auscultation sites. The obtained results reveal strong individual and auscultation-site variability. It is concluded that the underlying tissue deformation is the primary factor that alters the parameters of the recorded signals. Published by the BMJ Publishing Group Limited.
Muscle Synergies May Improve Optimization Prediction of Knee Contact Forces During Walking
Walter, Jonathan P.; Kinney, Allison L.; Banks, Scott A.; D'Lima, Darryl D.; Besier, Thor F.; Lloyd, David G.; Fregly, Benjamin J.
2014-01-01
The ability to predict patient-specific joint contact and muscle forces accurately could improve the treatment of walking-related disorders. Muscle synergy analysis, which decomposes a large number of muscle electromyographic (EMG) signals into a small number of synergy control signals, could reduce the dimensionality and thus redundancy of the muscle and contact force prediction process. This study investigated whether use of subject-specific synergy controls can improve optimization prediction of knee contact forces during walking. To generate the predictions, we performed mixed dynamic muscle force optimizations (i.e., inverse skeletal dynamics with forward muscle activation and contraction dynamics) using data collected from a subject implanted with a force-measuring knee replacement. Twelve optimization problems (three cases with four subcases each) that minimized the sum of squares of muscle excitations were formulated to investigate how synergy controls affect knee contact force predictions. The three cases were: (1) Calibrate+Match where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously matched, (2) Precalibrate+Predict where experimental knee contact forces were predicted using precalibrated muscle model parameters values from the first case, and (3) Calibrate+Predict where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously predicted, all while matching inverse dynamic loads at the hip, knee, and ankle. The four subcases used either 44 independent controls or five synergy controls with and without EMG shape tracking. For the Calibrate+Match case, all four subcases closely reproduced the measured medial and lateral knee contact forces (R2 ≥ 0.94, root-mean-square (RMS) error < 66 N), indicating sufficient model fidelity for contact force prediction. For the Precalibrate+Predict and Calibrate+Predict cases, synergy controls yielded better contact force predictions (0.61 < R2 < 0.90, 83 N < RMS error < 161 N) than did independent controls (-0.15 < R2 < 0.79, 124 N < RMS error < 343 N) for corresponding subcases. For independent controls, contact force predictions improved when precalibrated model parameter values or EMG shape tracking was used. For synergy controls, contact force predictions were relatively insensitive to how model parameter values were calibrated, while EMG shape tracking made lateral (but not medial) contact force predictions worse. For the subject and optimization cost function analyzed in this study, use of subject-specific synergy controls improved the accuracy of knee contact force predictions, especially for lateral contact force when EMG shape tracking was omitted, and reduced prediction sensitivity to uncertainties in muscle model parameter values. PMID:24402438
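The synergy-extraction step itself is commonly done with non-negative matrix factorization of the EMG envelopes; the sketch below factors a placeholder 44-channel EMG matrix into five synergies and reports the variance accounted for. The data are random stand-ins, and NMF is one common choice rather than necessarily the decomposition used in this study.

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)
n_muscles, n_samples, n_synergies = 44, 2000, 5

true_W = rng.random((n_muscles, n_synergies))                       # synergy weightings
true_H = np.abs(np.sin(rng.random((n_synergies, 1)) * 6.0 +
                       np.linspace(0.0, 2.0 * np.pi, n_samples)))   # synergy activation waveforms
emg = true_W @ true_H + 0.01 * rng.random((n_muscles, n_samples))   # mock rectified, filtered EMG

model = NMF(n_components=n_synergies, init="nndsvda", max_iter=500)
W = model.fit_transform(emg)                                        # muscle weightings (44 x 5)
H = model.components_                                               # synergy control signals (5 x time)
vaf = 1.0 - np.sum((emg - W @ H) ** 2) / np.sum(emg ** 2)           # variance accounted for
print(f"VAF with {n_synergies} synergies: {vaf:.3f}")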
NASA Astrophysics Data System (ADS)
Clauss, Günther; Klein, Marco
2010-05-01
In the past years the existence of freak waves has been affirmed by observations, registrations, and severe accidents. One of the famous real-world registrations is the so-called 'New Year wave,' recorded in the North Sea at the Draupner jacket platform on January 1st, 1995. Since there is only a single point registration available, it is not possible to draw conclusions on the spatial development in front of and behind the point of registration, which is indispensable for a complete understanding of this phenomenon. This paper presents the temporal and spatial development of the New Year Wave generated in a model basin. To simulate the recorded New Year wave in the wave tank, an optimization approach for the experimental generation of wave sequences with predefined characteristics is used. The method is applied to generate scenarios with a single high wave superimposed on irregular seas. During the experimental optimization, special emphasis is laid on the exact reproduction of the wave height, crest height, and wave period, as well as the vertical and horizontal asymmetries of the New Year Wave. The fully automated optimization process is carried out in a small wave tank. At the beginning of the optimization process, the scaled real-sea measured sea state is transformed back to the position of the piston-type wave generator by means of linear wave theory and by multiplication with the electrical and hydrodynamic transfer functions in the frequency domain. As a result, a preliminary control signal for the wave generator is obtained. Due to nonlinear effects in the wave tank, the registration of the freak wave at the target position generated by this preliminary control signal deviates from the predefined target parameters. To improve the target wave in the tank, only a short section of the control signal in the time domain has to be adapted. For these temporally limited local changes in the control signal, the discrete wavelet transformation is introduced into the optimization process, which decomposes the signal into several decomposition levels where each resulting coefficient describes the control signal in a specific time range and frequency bandwidth. To improve the control signal, the experimental optimization routine iterates until the target parameters are satisfied by applying the subplex optimization method. The resulting control signal in the small wave tank is then transferred to a large wave tank, considering the electrical and hydrodynamic RAOs of the respective wave generator. The extreme sea state with the embedded New Year Wave obtained with this method is measured at different locations in the tank, over a range from 2163 m (full scale) ahead of the target position to 1470 m behind it (520 registrations altogether). The focus lies on the detailed description of a possible evolution of the New Year Wave over a large area and time interval. The analysis of the registrations reveals freak waves occurring at three different positions in the wave tank, and the observed freak waves develop from a wave group of three waves, which travels at constant speed along the wave tank up to the target position. The group velocity, wave propagation, and the energy flux of this wave group are analyzed within this paper.
Multiband phase-modulated radio over IsOWC link with balanced coherent homodyne detection
NASA Astrophysics Data System (ADS)
Zong, Kang; Zhu, Jiang
2017-11-01
In this paper, we present a multiband phase-modulated radio over intersatellite optical wireless communication (IsOWC) link with balanced coherent homodyne detection. The proposed system can provide high linearity for transparent transport of multiband radio frequency (RF) signals and better receiver sensitivity than an intensity-modulated direct-detection (IM/DD) system. The exact analytical expression of the signal-to-noise-and-distortion ratio (SNDR) is derived considering the third-order intermodulation product and amplified spontaneous emission (ASE) noise. Numerical results of SNDR with various numbers of subchannels and modulation indices are given. Results indicate that an optimal modulation index exists to maximize the SNDR. With the same system parameters, the value of the optimal modulation index will decrease with the increase of the number of subchannels.
Sun, Chao; Feng, Wenquan; Du, Songlin
2018-01-01
As multipath is one of the dominating error sources for high accuracy Global Navigation Satellite System (GNSS) applications, multipath mitigation approaches are employed to minimize this hazardous error in receivers. Binary offset carrier modulation (BOC), as a modernized signal structure, is adopted to achieve significant enhancement. However, because of its multi-peak autocorrelation function, conventional multipath mitigation techniques for binary phase shift keying (BPSK) signals would not be optimal. Currently, non-parametric and parametric approaches have been studied specifically aiming at multipath mitigation for BOC signals. Non-parametric techniques, such as Code Correlation Reference Waveforms (CCRW), usually have good feasibility with simple structures, but suffer from low universal applicability for different BOC signals. Parametric approaches can thoroughly eliminate multipath error by estimating multipath parameters. The problems with this category are the high computational complexity and vulnerability to noise. To tackle the problem, we present a practical parametric multipath estimation method in the frequency domain for BOC signals. The received signal is transformed to the frequency domain to separate out the multipath channel transfer function for multipath parameter estimation. During this process, we apply segmentation and averaging to reduce both the noise effect and the computational load. The performance of the proposed method is evaluated and compared with the previous work in three scenarios. Results indicate that the proposed averaging-Fast Fourier Transform (averaging-FFT) method achieves good robustness in severe multipath environments with lower computational load for both low-order and high-order BOC signals. PMID:29495589
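The segmentation-and-averaging idea can be sketched in a few lines: estimate the channel transfer function segment by segment from the FFT of the received samples and a local replica, average the segments to suppress noise, and read the multipath delay and amplitude from the averaged channel impulse response. The BOC-like replica, two-path channel, and regularized division below are simplifications introduced for this sketch, not the paper's exact processing chain.

import numpy as np

rng = np.random.default_rng(6)
n_seg, seg_len = 32, 4096

code = np.sign(rng.normal(size=seg_len))          # stand-in spreading code (one chip per sample)
subcarrier = (-1.0) ** np.arange(seg_len)         # crude square subcarrier, BOC-like sign flipping
replica = code * subcarrier

delay_samples, mp_amp = 3, 0.5                    # line of sight plus one delayed, attenuated path
h = np.zeros(seg_len)
h[0], h[delay_samples] = 1.0, mp_amp

R = np.fft.fft(replica)
H_acc = np.zeros(seg_len, dtype=complex)
for _ in range(n_seg):
    received = np.fft.ifft(np.fft.fft(replica) * np.fft.fft(h)).real
    received += 0.5 * rng.normal(size=seg_len)    # thermal noise
    H_acc += np.fft.fft(received) * np.conj(R) / (np.abs(R) ** 2 + 1e-6)   # per-segment channel estimate

impulse = np.fft.ifft(H_acc / n_seg).real         # averaged channel impulse response
peak = 1 + int(np.argmax(np.abs(impulse[1:50])))
print("estimated multipath delay (samples):", peak, " amplitude:", round(impulse[peak], 2))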
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jang, Junhwan; Hwang, Sungui; Park, Kyihwan, E-mail: khpark@gist.ac.kr
To utilize a time-of-flight-based laser scanner as a distance measurement sensor, the measurable distance and accuracy are the most important performance parameters to consider. For these purposes, the optical system and electronic signal processing of the laser scanner should be optimally designed in order to reduce the distance error caused by optical crosstalk and wide dynamic range input. An optical system design for removing the optical crosstalk problem is proposed in this work. Intensity control is also considered to solve the problem of phase-shift variation in the signal processing circuit caused by object reflectivity. The experimental results for the optical system and signal processing design are obtained using 3D measurements.
An optimal state estimation model of sensory integration in human postural balance
NASA Astrophysics Data System (ADS)
Kuo, Arthur D.
2005-09-01
We propose a model for human postural balance, combining state feedback control with optimal state estimation. State estimation uses an internal model of body and sensor dynamics to process sensor information and determine body orientation. Three sensory modalities are modeled: joint proprioception, vestibular organs in the inner ear, and vision. These are mated with a two degree-of-freedom model of body dynamics in the sagittal plane. Linear quadratic optimal control is used to design state feedback and estimation gains. Nine free parameters define the control objective and the signal-to-noise ratios of the sensors. The model predicts statistical properties of human sway in terms of covariance of ankle and hip motion. These predictions are compared with normal human responses to alterations in sensory conditions. With a single parameter set, the model successfully reproduces the general nature of postural motion as a function of sensory environment. Parameter variations reveal that the model is highly robust under normal sensory conditions, but not when two or more sensors are inaccurate. This behavior is similar to that of normal human subjects. We propose that age-related sensory changes may be modeled with decreased signal-to-noise ratios, and compare the model's behavior with degraded sensors against experimental measurements from older adults. We also examine removal of the model's vestibular sense, which leads to instability similar to that observed in bilateral vestibular loss subjects. The model may be useful for predicting which sensors are most critical for balance, and how much they can deteriorate before posture becomes unstable.
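The state-feedback-plus-estimator structure can be written down compactly for a single-link inverted-pendulum stand-in for the sagittal body: both the LQR gain and the steady-state Kalman gain come from algebraic Riccati equations. The dynamics, weights, and noise covariances below are illustrative, and the full model in the paper uses two degrees of freedom and three sensory modalities.

import numpy as np
from scipy.linalg import solve_continuous_are

g, length = 9.81, 1.0
A = np.array([[0.0, 1.0], [g / length, 0.0]])     # linearized inverted pendulum about upright
B = np.array([[0.0], [1.0]])                      # normalized ankle-torque input
C = np.array([[1.0, 0.0]])                        # measured sway angle (a single noisy sensor)

Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])    # control-objective weights
W, V = np.diag([0.0, 0.01]), np.array([[1e-4]])   # process and sensor noise covariances

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                   # LQR state-feedback gain
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)                    # steady-state Kalman estimator gain

# estimator: x_hat' = A x_hat + B u + L (y - C x_hat), with control u = -K x_hat
print("feedback gain K:", K.ravel(), "estimator gain L:", L.ravel())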
Non-parametric PCM to ADM conversion. [Pulse Code to Adaptive Delta Modulation
NASA Technical Reports Server (NTRS)
Locicero, J. L.; Schilling, D. L.
1977-01-01
An all-digital technique to convert pulse code modulated (PCM) signals into adaptive delta modulation (ADM) format is presented. The converter developed is shown to be independent of the statistical parameters of the encoded signal and can be constructed with only standard digital hardware. The structure of the converter is simple enough to be fabricated on a large scale integrated circuit where the advantages of reliability and cost can be optimized. A concise evaluation of this PCM to ADM translation technique is presented and several converters are simulated on a digital computer. A family of performance curves is given which displays the signal-to-noise ratio for sinusoidal test signals subjected to the conversion process, as a function of input signal power for several ratios of ADM rate to Nyquist rate.
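A minimal digital PCM-to-ADM converter tracks each PCM sample with a one-bit quantizer and an adaptive step size; the step-adaptation rule below (grow on agreeing bits, shrink otherwise) is a generic textbook choice rather than the specific converter of the paper, and the sinusoidal test signal mirrors the kind of evaluation described above.

import numpy as np

def pcm_to_adm(pcm, step_min=1.0, step_max=256.0, grow=1.5, shrink=0.66):
    bits = np.zeros(pcm.size, dtype=np.int8)
    recon = np.zeros(pcm.size)
    estimate, step, prev_bit = 0.0, step_min, 1
    for i, sample in enumerate(pcm):
        bit = 1 if sample >= estimate else -1          # one ADM bit per input sample
        step = float(np.clip(step * (grow if bit == prev_bit else shrink), step_min, step_max))
        estimate += bit * step                          # local decoder tracks the input
        bits[i], recon[i], prev_bit = bit, estimate, bit
    return bits, recon

fs, f = 64_000, 1_000                                   # ADM rate well above the Nyquist rate
t = np.arange(0.0, 0.01, 1.0 / fs)
pcm = 1000.0 * np.sin(2 * np.pi * f * t)                # sinusoidal PCM test signal
bits, recon = pcm_to_adm(pcm)
snr_db = 10.0 * np.log10(np.sum(pcm ** 2) / np.sum((pcm - recon) ** 2))
print(f"reconstruction SNR: {snr_db:.1f} dB")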
Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models
NASA Astrophysics Data System (ADS)
Rothenberger, Michael J.
This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. 
The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
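To make the Fisher-information idea concrete, the sketch below computes the information matrix for a first-order RC equivalent-circuit cell under two candidate current inputs, using finite-difference output sensitivities and an i.i.d. Gaussian voltage-noise assumption. The model structure, parameter values, noise level and test inputs are illustrative assumptions, not the dissertation's optimized trajectories.

```python
# Fisher information of an OCV + R0 + parallel-RC cell model under two inputs.
import numpy as np

def simulate_voltage(i, dt, R0, R1, C1, ocv=3.7):
    """Terminal voltage of the equivalent circuit driven by current i(t)."""
    v_rc, v = 0.0, np.empty_like(i)
    for n, cur in enumerate(i):
        v_rc += dt * (-v_rc / (R1 * C1) + cur / C1)
        v[n] = ocv - R0 * cur - v_rc
    return v

def fisher_information(i, dt, theta, sigma=1e-3, rel_step=1e-4):
    """FIM from finite-difference output sensitivities, i.i.d. Gaussian noise."""
    sens = []
    for j in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        h = rel_step * theta[j]
        tp[j] += h; tm[j] -= h
        sens.append((simulate_voltage(i, dt, *tp) - simulate_voltage(i, dt, *tm)) / (2 * h))
    S = np.stack(sens, axis=1)               # samples x parameters
    return S.T @ S / sigma**2

dt, theta = 1.0, np.array([0.01, 0.02, 2000.0])        # R0 [ohm], R1 [ohm], C1 [F]
t = np.arange(0, 600, dt)
constant = np.full_like(t, 1.0)                        # 1 A constant discharge
periodic = 1.0 + 0.5 * np.sign(np.sin(2 * np.pi * t / 60))  # richer excitation

for name, i in [("constant", constant), ("periodic", periodic)]:
    F = fisher_information(i, dt, theta)
    print(name, "log det FIM =", np.linalg.slogdet(F)[1])
```

A richer excitation typically yields a larger log-determinant of the Fisher information, i.e., tighter achievable confidence bounds on the parameters, which is the quantity the input-shaping optimization maximizes.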
Mesbah, Samineh; Angeli, Claudia A; Keynton, Robert S; El-Baz, Ayman; Harkema, Susan J
2017-01-01
Voluntary movements and the standing of spinal cord injured patients have been facilitated using lumbosacral spinal cord epidural stimulation (scES). Identifying the appropriate stimulation parameters (intensity, frequency and anode/cathode assignment) is an arduous task and requires extensive mapping of the spinal cord using evoked potentials. Effective visualization and detection of muscle evoked potentials induced by scES from the recorded electromyography (EMG) signals is critical to identify the optimal configurations and the effects of specific scES parameters on muscle activation. The purpose of this work was to develop a novel approach to automatically detect the occurrence of evoked potentials, quantify the attributes of the signal and visualize the effects across a high number of scES parameters. This new method is designed to automate the current process for performing this task, which has been accomplished manually by data analysts through observation of raw EMG signals, a process that is laborious and time-consuming as well as prone to human errors. The proposed method provides a fast and accurate five-step algorithms framework for activation detection and visualization of the results including: conversion of the EMG signal into its 2-D representation by overlaying the located signal building blocks; de-noising the 2-D image by applying the Generalized Gaussian Markov Random Field technique; detection of the occurrence of evoked potentials using a statistically optimal decision method through the comparison of the probability density functions of each segment to the background noise utilizing log-likelihood ratio; feature extraction of detected motor units such as peak-to-peak amplitude, latency, integrated EMG and Min-max time intervals; and finally visualization of the outputs as Colormap images. In comparing the automatic method vs. manual detection on 700 EMG signals from five individuals, the new approach decreased the processing time from several hours to less than 15 seconds for each set of data, and demonstrated an average accuracy of 98.28% based on the combined false positive and false negative error rates. The sensitivity of this method to the signal-to-noise ratio (SNR) was tested using simulated EMG signals and compared to two existing methods, where the novel technique showed much lower sensitivity to the SNR.
Mesbah, Samineh; Angeli, Claudia A.; Keynton, Robert S.; Harkema, Susan J.
2017-01-01
Voluntary movements and the standing of spinal cord injured patients have been facilitated using lumbosacral spinal cord epidural stimulation (scES). Identifying the appropriate stimulation parameters (intensity, frequency and anode/cathode assignment) is an arduous task and requires extensive mapping of the spinal cord using evoked potentials. Effective visualization and detection of muscle evoked potentials induced by scES from the recorded electromyography (EMG) signals is critical to identify the optimal configurations and the effects of specific scES parameters on muscle activation. The purpose of this work was to develop a novel approach to automatically detect the occurrence of evoked potentials, quantify the attributes of the signal and visualize the effects across a high number of scES parameters. This new method is designed to automate the current process for performing this task, which has been accomplished manually by data analysts through observation of raw EMG signals, a process that is laborious and time-consuming as well as prone to human errors. The proposed method provides a fast and accurate five-step algorithms framework for activation detection and visualization of the results including: conversion of the EMG signal into its 2-D representation by overlaying the located signal building blocks; de-noising the 2-D image by applying the Generalized Gaussian Markov Random Field technique; detection of the occurrence of evoked potentials using a statistically optimal decision method through the comparison of the probability density functions of each segment to the background noise utilizing log-likelihood ratio; feature extraction of detected motor units such as peak-to-peak amplitude, latency, integrated EMG and Min-max time intervals; and finally visualization of the outputs as Colormap images. In comparing the automatic method vs. manual detection on 700 EMG signals from five individuals, the new approach decreased the processing time from several hours to less than 15 seconds for each set of data, and demonstrated an average accuracy of 98.28% based on the combined false positive and false negative error rates. The sensitivity of this method to the signal-to-noise ratio (SNR) was tested using simulated EMG signals and compared to two existing methods, where the novel technique showed much lower sensitivity to the SNR. PMID:29020054
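The statistically optimal decision step described above compares each candidate segment's distribution against the background noise via a log-likelihood ratio. The sketch below shows that comparison in its simplest form, for zero-mean Gaussian segments with unknown variance; the 2-D representation, GGMRF de-noising and feature-extraction stages of the full pipeline are omitted, and the threshold and synthetic signals are assumptions.

```python
# Simplified log-likelihood-ratio detector for "evoked response" vs. noise-only.
import numpy as np

def llr_detect(segment, noise_var, threshold=10.0):
    """Return (decision, llr) comparing N(0, s2_hat) against N(0, noise_var)."""
    n = len(segment)
    s2 = max(np.mean(segment**2), 1e-12)
    # log-likelihood ratio of the ML Gaussian fit against the noise model
    llr = 0.5 * n * (np.log(noise_var / s2) + s2 / noise_var - 1.0)
    return llr > threshold, llr

rng = np.random.default_rng(0)
noise_var = 0.01
background = rng.normal(0.0, noise_var**0.5, 200)
response = background + 0.3 * np.hanning(200)          # synthetic evoked potential
print(llr_detect(background, noise_var))                # (False, small llr)
print(llr_detect(response, noise_var))                  # (True, large llr)
```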
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Näsholm, S. P.; Ruigrok, E.; Kværna, T.
2018-04-01
Seismic arrays enhance signal detection and parameter estimation by exploiting the time-delays between arriving signals on sensors at nearby locations. Parameter estimates can suffer due to both signal incoherence, with diminished waveform similarity between sensors, and aberration, with time-delays between coherent waveforms poorly represented by the wave-front model. Sensor-to-sensor correlation approaches to parameter estimation have an advantage over direct beamforming approaches in that individual sensor-pairs can be omitted without necessarily omitting entirely the data from each of the sensors involved. Specifically, we can omit correlations between sensors for which signal coherence in an optimal frequency band is anticipated to be poor or for which anomalous time-delays are anticipated. In practice, this usually means omitting correlations between more distant sensors. We present examples from International Monitoring System seismic arrays with poor parameter estimates resulting when classical f-k analysis is performed over the full array aperture. We demonstrate improved estimates and slowness grid displays using correlation beamforming restricted to correlations between sufficiently closely spaced sensors. This limited sensor-pair correlation (LSPC) approach has lower slowness resolution than would ideally be obtained by considering all sensor-pairs. However, this ideal estimate may be unattainable due to incoherence and/or aberration and the LSPC estimate can often exploit all channels, with the associated noise-suppression, while mitigating the complications arising from correlations between very distant sensors. The greatest need for the method is for short-period signals on large aperture arrays although we also demonstrate significant improvement for secondary regional phases on a small aperture array. LSPC can also provide a robust and flexible approach to parameter estimation on three-component seismic arrays.
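A schematic version of the limited sensor-pair correlation (LSPC) idea is sketched below: correlation beam power over a slowness grid is accumulated only over sensor pairs separated by less than a cutoff distance. The array geometry, synthetic plane wave, noise level and cutoff are assumptions chosen for illustration.

```python
# Schematic LSPC beamformer: only sufficiently close sensor pairs contribute.
import numpy as np

def lspc_beam_power(waveforms, coords, fs, s_grid, max_sep):
    """Correlation beam power over a 2-D slowness grid, using close pairs only."""
    nsens = len(waveforms)
    pairs = [(i, j) for i in range(nsens) for j in range(i + 1, nsens)
             if np.linalg.norm(coords[i] - coords[j]) <= max_sep]
    power = np.zeros((len(s_grid), len(s_grid)))
    for a, sx in enumerate(s_grid):
        for b, sy in enumerate(s_grid):
            s = np.array([sx, sy])
            # remove the plane-wave moveout t_i = s . r_i at each sensor
            aligned = [np.roll(w, -int(round(fs * np.dot(s, r))))
                       for w, r in zip(waveforms, coords)]
            power[a, b] = sum(np.dot(aligned[i], aligned[j]) for i, j in pairs)
    return power

rng = np.random.default_rng(1)
fs, nsamp = 40.0, 2000
coords = rng.uniform(-10.0, 10.0, (9, 2))             # km, synthetic 9-element array
s_true = np.array([0.05, -0.08])                      # s/km
wavelet = np.convolve(rng.normal(size=nsamp), np.hanning(25), "same")
waveforms = np.stack([np.roll(wavelet, int(round(fs * np.dot(s_true, r))))
                      + 0.5 * rng.normal(size=nsamp) for r in coords])
s_grid = np.linspace(-0.2, 0.2, 41)
P = lspc_beam_power(waveforms, coords, fs, s_grid, max_sep=10.0)
print("estimated slowness:", [s_grid[k] for k in np.unravel_index(P.argmax(), P.shape)])
```

Setting max_sep to the full aperture recovers the conventional all-pair correlation beamformer, so the cutoff directly expresses the trade-off between slowness resolution and robustness to incoherence and aberration discussed above.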
Enhanced Gravitational Wave Science with LISA and gLISA.
NASA Astrophysics Data System (ADS)
Tinto, Massimo
2017-05-01
The geosynchronous Laser Interferometer Space Antenna (gLISA) is a space-based gravitational wave (GW) mission that, for the past five years, has been under joint study at the Jet Propulsion Laboratory, Stanford University, the National Institute for Space Research (I.N.P.E., Brazil), and Space Systems Loral. With an arm length of 73,000 km, gLISA will display optimal sensitivity over a frequency region that is exactly in between those accessible by LISA and LIGO. Such a GW frequency band is characterized by the presence of a very large ensemble of coalescing black-hole binaries (BHBs) similar to those first observed by LIGO and with masses that are 10 to 100 times the mass of the Sun. gLISA will detect thousands of such signals with good signal-to-noise ratio (SNR) and enhance the LIGO science by measuring with high precision the parameters characterizing such signals (source direction, chirp parameter, time to coalescence, etc.) well before they enter the LIGO band. This valuable information will improve LIGO's ability to detect these signals and facilitate its study of the merger and ring-down phases not observable by space-based detectors. If flown at the same time as the LISA mission, the two arrays will deliver a joint sensitivity that accounts for the best performance of both missions in their respective parts of the milliHertz band. This simultaneous operation will result in an optimally combined sensitivity curve that is "white" from about 3 × 10^-3 Hz to 1 Hz, making the two antennas capable of detecting, with high signal-to-noise ratios (SNRs), BHBs with masses in the range (10 - 10^7) M⊙. Their ability to jointly track, with enhanced SNR, signals similar to that observed by the Advanced Laser Interferometer Gravitational Wave Observatory (aLIGO) on September 14, 2015 (the GW150914 event) will result in a larger number of observable small-mass binary black holes and an improved precision of the parameters characterizing these sources. Together, LISA, gLISA and aLIGO will cover, with good sensitivity, the (10^-4 - 10^3) Hz frequency band.
Parameter-induced stochastic resonance with a periodic signal
NASA Astrophysics Data System (ADS)
Li, Jian-Long; Xu, Bo-Hou
2006-12-01
In this paper, conventional stochastic resonance (CSR) is realized by increasing the noise intensity. It is then demonstrated that tuning the system parameters with fixed noise can make the noise play a constructive role and realize parameter-induced stochastic resonance (PSR). PSR can be interpreted as changing the intrinsic characteristics of the dynamical system to yield a cooperative effect between the noise-driven nonlinear system and the external periodic force. This can be realized at any noise intensity, which differs greatly from CSR, which is realized only when the initial noise intensity is not greater than the resonance level. Moreover, it is proved that PSR is different from a simple optimization of the system parameters.
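The sketch below reproduces the qualitative setting with a standard overdamped bistable system dx = (a x - b x^3 + A sin 2πf0 t) dt + sqrt(2D) dW: the noise intensity D is held fixed while the system parameter a is tuned and the output SNR at the forcing frequency is monitored. All coefficients are illustrative, not values from the paper.

```python
# Parameter-induced stochastic resonance toy experiment (Euler-Maruyama).
import numpy as np

def output_snr(a, b=1.0, A=0.3, f0=0.01, D=0.5, dt=0.05, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    x, t = 0.0, np.arange(n) * dt
    xs = np.empty(n)
    for k in range(n):
        drift = a * x - b * x**3 + A * np.sin(2 * np.pi * f0 * t[k])
        x += drift * dt + np.sqrt(2 * D * dt) * rng.normal()
        xs[k] = x
    spec = np.abs(np.fft.rfft(xs))**2
    freqs = np.fft.rfftfreq(n, dt)
    k0 = np.argmin(np.abs(freqs - f0))
    noise_floor = np.median(spec[max(k0 - 50, 1):k0 + 50])
    return spec[k0] / noise_floor              # spectral SNR at the forcing frequency

for a in (0.2, 0.5, 1.0, 2.0):                 # noise fixed, parameter tuned
    print(f"a = {a:4.1f}  SNR ~ {output_snr(a):.1f}")
```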
A new FPGA-driven P-HIFU system with harmonic cancellation technique
NASA Astrophysics Data System (ADS)
Wu, Hao; Shen, Guofeng; Su, Zhiqiang; Chen, Yazhu
2017-03-01
This paper introduces a high-intensity focused ultrasound system for ablation using switch-mode power amplifiers with a harmonic cancellation technique that eliminates the 3rd harmonic and all even harmonics. The efficiency of the amplifier is optimized by choosing different parameters of the harmonic cancellation technique. Because of the full-bridge topology, this technique requires two driving signals with a specific waveform. The new FPGA-driven P-HIFU system has 200 channels of phase signals that can form 100 output channels. An FPGA chip is used to generate these signals, and each channel has a phase resolution of 2 ns, corresponding to less than one degree. The output waveform of the amplifier, i.e., the voltage waveform across the transducer, shows fewer harmonic components.
FIBER OPTICS. ACOUSTOOPTICS: Compression of random pulses in fiber waveguides
NASA Astrophysics Data System (ADS)
Aleshkevich, Viktor A.; Kozhoridze, G. D.
1990-07-01
An investigation is made of the compression of randomly modulated signal + noise pulses during their propagation in a fiber waveguide. An allowance is made for a cubic nonlinearity and quadratic dispersion. The relationships governing the kinetics of transformation of the time envelope, and those which determine the duration and intensity of a random pulse are derived. The expressions for the optimal length of a fiber waveguide and for the maximum degree of compression are compared with the available data for regular pulses and the recommendations on selection of the optimal parameters are given.
Ecological and economical efficiency of monitoring systems for oil and gas production on the shelf
NASA Astrophysics Data System (ADS)
Kurakin, A. L.; Lobkovsky, L. I.
2014-02-01
Requirements for the reliability of monitoring-system signals (with respect to errors of the first and second kinds, i.e., false alarms and missed detections of danger) are deduced from the ratio of different kinds of expenditures (operating expenses and losses due to accidents). The expressions obtained may be used for the economic justification (and optimization) of monitoring-system specifications. For cases in which optimal parameters are not available, a sufficient condition for the economic efficiency of monitoring systems is presented.
Ozdemir, Utkan; Ozbay, Bilge; Ozbay, Ismail; Veli, Sevil
2014-09-01
In this work, Taguchi L32 experimental design was applied to optimize biosorption of Cu(2+) ions by an easily available biosorbent, Spaghnum moss. With this aim, batch biosorption tests were performed to achieve targeted experimental design with five factors (concentration, pH, biosorbent dosage, temperature and agitation time) at two different levels. Optimal experimental conditions were determined by calculated signal-to-noise ratios. "Higher is better" approach was followed to calculate signal-to-noise ratios as it was aimed to obtain high metal removal efficiencies. The impact ratios of factors were determined by the model. Within the study, Cu(2+) biosorption efficiencies were also predicted by using Taguchi method. Results of the model showed that experimental and predicted values were close to each other demonstrating the success of Taguchi approach. Furthermore, thermodynamic, isotherm and kinetic studies were performed to explain the biosorption mechanism. Calculated thermodynamic parameters were in good accordance with the results of Taguchi model. Copyright © 2014 Elsevier Inc. All rights reserved.
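For reference, the "higher is better" signal-to-noise ratio used to rank the factor levels has the standard Taguchi form shown in the short sketch below; the removal-efficiency replicates are made-up numbers, not data from the study.

```python
# Larger-is-better Taguchi S/N ratio per experimental run.
import numpy as np

def sn_larger_is_better(y):
    """S/N = -10 log10( mean(1 / y_i^2) ) for replicate responses y_i."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

replicates_per_run = [[78.2, 80.1], [91.5, 90.8], [65.3, 67.0]]   # % Cu(2+) removal
for k, y in enumerate(replicates_per_run, 1):
    print(f"run {k}: S/N = {sn_larger_is_better(y):.2f} dB")
```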
Maximally informative pairwise interactions in networks
Fitzgerald, Jeffrey D.; Sharpee, Tatyana O.
2010-01-01
Several types of biological networks have recently been shown to be accurately described by a maximum entropy model with pairwise interactions, also known as the Ising model. Here we present an approach for finding the optimal mappings between input signals and network states that allow the network to convey the maximal information about input signals drawn from a given distribution. This mapping also produces a set of linear equations for calculating the optimal Ising-model coupling constants, as well as geometric properties that indicate the applicability of the pairwise Ising model. We show that the optimal pairwise interactions are on average zero for Gaussian and uniformly distributed inputs, whereas they are nonzero for inputs approximating those in natural environments. These nonzero network interactions are predicted to increase in strength as the noise in the response functions of each network node increases. This approach also suggests ways for how interactions with unmeasured parts of the network can be inferred from the parameters of response functions for the measured network nodes. PMID:19905153
Environmental statistics and optimal regulation
NASA Astrophysics Data System (ADS)
Sivak, David; Thomson, Matt
2015-03-01
The precision with which an organism can detect its environment, and the timescale for and statistics of environmental change, will affect the suitability of different strategies for regulating protein levels in response to environmental inputs. We propose a general framework--here applied to the enzymatic regulation of metabolism in response to changing nutrient concentrations--to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, and the costs associated with enzyme production. We find: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
Aguirre, Erik; Lopez-Iturri, Peio; Azpilicueta, Leire; Astrain, José Javier; Villadangos, Jesús; Falcone, Francisco
2015-02-05
One of the main challenges in the implementation and design of context-aware scenarios is the adequate deployment strategy for Wireless Sensor Networks (WSNs), mainly due to the strong dependence of the radiofrequency physical layer with the surrounding media, which can lead to non-optimal network designs. In this work, radioplanning analysis for WSN deployment is proposed by employing a deterministic 3D ray launching technique in order to provide insight into complex wireless channel behavior in context-aware indoor scenarios. The proposed radioplanning procedure is validated with a testbed implemented with a Mobile Ad Hoc Network WSN following a chain configuration, enabling the analysis and assessment of a rich variety of parameters, such as received signal level, signal quality and estimation of power consumption. The adoption of deterministic radio channel techniques allows the design and further deployment of WSNs in heterogeneous wireless scenarios with optimized behavior in terms of coverage, capacity, quality of service and energy consumption.
NASA Astrophysics Data System (ADS)
Kosulya, A. V.; Verbitskii, V. G.
2017-11-01
A mathematical model of the response of a microchannel multiplier based on two microchannel plates in the chevron assembly has been considered. Analytical expressions relating the parameters of input and output signals have been obtained. The geometry of the chevron unit has been determined, and it has been optimized.
Optimal filter parameters for low SNR seismograms as a function of station and event location
NASA Astrophysics Data System (ADS)
Leach, Richard R.; Dowla, Farid U.; Schultz, Craig A.
1999-06-01
Global seismic monitoring requires deployment of seismic sensors worldwide, in many areas that have not been studied or have few useable recordings. Using events with lower signal-to-noise ratios (SNR) would increase the amount of data from these regions. Lower SNR events can add significant numbers to data sets, but recordings of these events must be carefully filtered. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. To reduce this laborious process, we have developed an automated method to provide optimal filters for low SNR regional or teleseismic events. As seismic signals are often localized in frequency and time with distinct time-frequency characteristics, our method is based on the decomposition of a time series into a set of subsignals, each representing a band with f/Δ f constant (constant Q). The SNR is calculated on the pre-event noise and signal window. The band pass signals with high SNR are used to indicate the cutoff filter limits for the optimized filter. Results indicate a significant improvement in SNR, particularly for low SNR events. The method provides an optimum filter which can be immediately applied to unknown regions. The filtered signals are used to map the seismic frequency response of a region and may provide improvements in travel-time picking, azimuth estimation, regional characterization, and event detection. For example, when an event is detected and a preliminary location is determined, the computer could automatically select optimal filter bands for data from non-reporting stations. Results are shown for a set of low SNR events as well as 379 regional and teleseismic events recorded at stations ABKT, KIV, and ANTO in the Middle East.
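A schematic version of this band-selection procedure is sketched below: the record is decomposed into constant-Q bands, the pre-event-noise versus signal-window SNR is computed per band, and the retained high-SNR bands define the optimized band-pass limits. The synthetic seismogram, Q value and SNR threshold are assumptions.

```python
# Constant-Q band SNR screening to pick band-pass limits for a low-SNR record.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_snr_filter(x, fs, noise_win, signal_win, fmin=0.5, q=6, nbands=20, snr_db=6.0):
    edges, flo = [], fmin
    while len(edges) < nbands and flo * (1 + 1 / q) < 0.45 * fs:
        fhi = flo * (1 + 1 / q)                      # f / delta_f held constant (constant Q)
        edges.append((flo, fhi))
        flo = fhi
    keep = []
    for flo, fhi in edges:
        sos = butter(4, [flo, fhi], btype="band", fs=fs, output="sos")
        y = sosfiltfilt(sos, x)
        snr = 10 * np.log10(np.var(y[slice(*signal_win)]) / np.var(y[slice(*noise_win)]))
        if snr > snr_db:
            keep.append((flo, fhi))
    # passband spanning the retained bands (None if no band passes the threshold)
    return (keep[0][0], keep[-1][1]) if keep else None

fs = 40.0
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(2)
x = rng.normal(size=t.size)                           # pre-event noise everywhere
x[2400:3200] += 5 * np.sin(2 * np.pi * 2.0 * t[2400:3200])   # 2 Hz "regional phase"
print(band_snr_filter(x, fs, noise_win=(0, 2000), signal_win=(2400, 3200)))
```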
Magnetoelectric force microscopy based on magnetic force microscopy with modulated electric field.
Geng, Yanan; Wu, Weida
2014-05-01
We present the realization of a mesoscopic imaging technique, namely Magnetoelectric Force Microscopy (MeFM), for visualization of the local magnetoelectric effect. The basic principle of MeFM is the lock-in detection of the local magnetoelectric response, i.e., the electric-field-induced magnetization, using magnetic force microscopy. We demonstrate the MeFM capability by visualizing magnetoelectric domains on single crystals of multiferroic hexagonal manganites. Results of several control experiments exclude artifacts or extrinsic origins of the MeFM signal. The parameters are tuned to optimize the signal-to-noise ratio.
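The lock-in principle underlying MeFM can be sketched in a few lines: the measured force signal is mixed with in-phase and quadrature references at the electric-field modulation frequency and averaged to recover the field-induced component buried in noise. The frequencies, amplitudes and noise level below are illustrative, not instrument values.

```python
# Minimal digital lock-in detection of a modulated response buried in noise.
import numpy as np

fs, f_mod, dur = 100_000.0, 137.0, 2.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(3)
me_response = 2e-3 * np.sin(2 * np.pi * f_mod * t + 0.4)   # field-induced signal
force = me_response + 0.05 * rng.normal(size=t.size)       # cantilever signal + noise

# mix with the reference and average (an ideal low-pass) to get the I/Q components
X = 2.0 * np.mean(force * np.sin(2 * np.pi * f_mod * t))
Y = 2.0 * np.mean(force * np.cos(2 * np.pi * f_mod * t))
print("recovered amplitude:", np.hypot(X, Y), " phase [rad]:", np.arctan2(Y, X))
```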
NASA Astrophysics Data System (ADS)
Bagherzadeh, Seyed Amin; Asadi, Davood
2017-05-01
In search of a precise method for analyzing nonlinear and non-stationary flight data of an aircraft in icing conditions, an Empirical Mode Decomposition (EMD) algorithm enhanced by multi-objective optimization is introduced. In the proposed method, dissimilar IMF definitions are considered by the Genetic Algorithm (GA) in order to find the best decision parameters of the signal trend. To resolve disadvantages of the classical algorithm caused by the envelope concept, the signal trend is estimated directly in the proposed method. Furthermore, in order to simplify the performance and understanding of the EMD algorithm, the proposed method obviates the need for a repeated sifting process. The proposed enhanced EMD algorithm is verified on several benchmark signals. Afterwards, the enhanced algorithm is applied to simulated flight data in icing conditions in order to detect ice accretion on the aircraft. The results demonstrate the effectiveness of the proposed EMD algorithm in aircraft ice detection by providing a figure of merit for the icing severity.
Simultaneous detection of resolved glutamate, glutamine, and γ-aminobutyric acid at 4 T
NASA Astrophysics Data System (ADS)
Hu, Jiani; Yang, Shaolin; Xuan, Yang; Jiang, Quan; Yang, Yihong; Haacke, E. Mark
2007-04-01
A new approach is introduced to simultaneously detect resolved glutamate (Glu), glutamine (Gln), and γ-aminobutyric acid (GABA) using a standard STEAM localization pulse sequence with the optimized sequence timing parameters. This approach exploits the dependence of the STEAM spectra of the strongly coupled spin systems of Glu, Gln, and GABA on the echo time TE and the mixing time TM at 4 T to find an optimized sequence parameter set, i.e., {TE, TM}, where the outer-wings of the Glu C4 multiplet resonances around 2.35 ppm, the Gln C4 multiplet resonances around 2.45 ppm, and the GABA C2 multiplet resonance around 2.28 ppm are significantly suppressed and the three resonances become virtual singlets simultaneously and thus resolved. Spectral simulation and optimization were conducted to find the optimized sequence parameters, and phantom and in vivo experiments (on normal human brains, one patient with traumatic brain injury, and one patient with brain tumor) were carried out for verification. The results have demonstrated that the Gln, Glu, and GABA signals at 2.2-2.5 ppm can be well resolved using a standard STEAM sequence with the optimized sequence timing parameters around {82 ms, 48 ms} at 4 T, while the other main metabolites, such as N-acetyl aspartate (NAA), choline (tCho), and creatine (tCr), are still preserved in the same spectrum. The technique can be easily implemented and should prove to be a useful tool for the basic and clinical studies associated with metabolism of Glu, Gln, and/or GABA.
Bignardi, Chiara; Cavazza, Antonella; Laganà, Carmen; Salvadeo, Paola; Corradini, Claudio
2018-01-01
Interest in "substances of emerging concern" related to objects intended to come into contact with food has recently been growing. Such substances can be found in traces in simulants and in food products put in contact with plastic materials. In this context, it is important to set up analytical systems characterized by high sensitivity and to improve detection parameters to enhance signals. This work aimed at optimizing a method based on UHPLC coupled to high-resolution mass spectrometry to quantify the most common plastic additives and to detect the presence of polymer degradation products and coloring agents migrating from re-usable plastic containers. The optimization of mass spectrometric parameter settings for quantitative analysis of additives was achieved by a chemometric approach, using full factorial and d-optimal experimental designs, which allowed possible interactions between the investigated parameters to be evaluated. Results showed that the optimized method offered improved sensitivity with respect to existing methods and was successfully applied to the analysis of a complex model food system, namely chocolate put in contact with 14 polycarbonate tableware samples. A new procedure for sample pre-treatment was carried out and validated, showing high reliability. The results reported, for the first time, the presence of several molecules migrating into chocolate, in particular plastic additives such as Cyasorb UV5411, Tinuvin 234, Uvitex OB, and oligomers, whose amounts were found to correlate with the age and degree of damage of the containers. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Park, Sang Chul
1989-09-01
We develop a mathematical analysis model to calculate the probability of intercept (POI) for the ground-based communication intercept (COMINT) system. The POI is a measure of the effectiveness of the intercept system. We define the POI as the product of the probability of detection and the probability of coincidence. The probability of detection is a measure of the receiver's capability to detect a signal in the presence of noise. The probability of coincidence is the probability that an intercept system is available, actively listening in the proper frequency band, in the right direction and at the same time that the signal is received. We investigate the behavior of the POI with respect to the observation time, the separation distance, antenna elevations, the frequency of the signal, and the receiver bandwidths. We observe that the coincidence characteristic between the receiver scanning parameters and the signal parameters is the key factor to determine the time to obtain a given POI. This model can be used to find the optimal parameter combination to maximize the POI in a given scenario. We expand this model to a multiple system. This analysis is conducted on a personal computer to provide the portability. The model is also flexible and can be easily implemented under different situations.
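A toy numerical illustration of the product structure POI = (probability of detection) × (probability of coincidence) is given below. The Gaussian detection model is standard; the coincidence model and every numerical value are assumptions chosen for illustration, not the report's full treatment of scanning, direction and bandwidth overlap.

```python
# Toy POI = Pd x Pc calculation (illustrative models and numbers).
import numpy as np
from scipy.stats import norm

def prob_detection(snr_db, pfa=1e-6):
    """Detection of a constant signal in Gaussian noise at false-alarm rate pfa."""
    snr = 10 ** (snr_db / 20)                        # amplitude signal-to-noise ratio
    return 1.0 - norm.cdf(norm.ppf(1.0 - pfa) - snr)

def prob_coincidence(dwell, revisit, signal_duration, band_overlap):
    """Chance the scanning receiver is on the right band while the signal is up
    (assumed simple time/frequency coincidence model)."""
    return band_overlap * min(1.0, (dwell + signal_duration) / revisit)

pd = prob_detection(snr_db=13.0)
pc = prob_coincidence(dwell=0.1, revisit=2.0, signal_duration=0.5, band_overlap=0.25)
print("POI =", pd * pc)
```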
Throughput and latency programmable optical transceiver by using DSP and FEC control.
Tanimura, Takahito; Hoshida, Takeshi; Kato, Tomoyuki; Watanabe, Shigeki; Suzuki, Makoto; Morikawa, Hiroyuki
2017-05-15
We propose and experimentally demonstrate a proof-of-concept programmable optical transceiver that enables simultaneous optimization of multiple programmable parameters (modulation format, symbol rate, power allocation, and FEC) to satisfy throughput, signal quality, and latency requirements. The proposed optical transceiver also accommodates multiple sub-channels that can transport different optical signals with different requirements. The many degrees of freedom of the parameters often make it difficult to find the optimum combination because of the explosion in the number of combinations. The proposed optical transceiver reduces the number of combinations and finds feasible sets of programmable parameters by using parameter constraints combined with a precise analytical model. For precise BER prediction with a specified set of parameters, we model the sub-channel BER as a function of OSNR, modulation format, symbol rate, and the power difference between sub-channels. Next, we formulate simple parameter constraints and combine them with the analytical model to seek feasible sets of programmable parameters. Finally, we experimentally demonstrate end-to-end operation of the proposed optical transceiver in an offline manner, including low-density parity-check (LDPC) FEC encoding and decoding, under a specific use case with a latency-sensitive application and 40-km transmission.
Dura-Bernal, S.; Neymotin, S. A.; Kerr, C. C.; Sivagnanam, S.; Majumdar, A.; Francis, J. T.; Lytton, W. W.
2017-01-01
Biomimetic simulation permits neuroscientists to better understand the complex neuronal dynamics of the brain. Embedding a biomimetic simulation in a closed-loop neuroprosthesis, which can read and write signals from the brain, will permit applications for amelioration of motor, psychiatric, and memory-related brain disorders. Biomimetic neuroprostheses require real-time adaptation to changes in the external environment, thus constituting an example of a dynamic data-driven application system. As model fidelity increases, so does the number of parameters and the complexity of finding appropriate parameter configurations. Instead of adapting synaptic weights via machine learning, we employed major biological learning methods: spike-timing dependent plasticity and reinforcement learning. We optimized the learning metaparameters using evolutionary algorithms, which were implemented in parallel and which used an island model approach to obtain sufficient speed. We employed these methods to train a cortical spiking model to utilize macaque brain activity, indicating a selected target, to drive a virtual musculoskeletal arm with realistic anatomical and biomechanical properties to reach to that target. The optimized system was able to reproduce macaque data from a comparable experimental motor task. These techniques can be used to efficiently tune the parameters of multiscale systems, linking realistic neuronal dynamics to behavior, and thus providing a useful tool for neuroscience and neuroprosthetics. PMID:29200477
Sobotta, Svantje; Raue, Andreas; Huang, Xiaoyun; Vanlier, Joep; Jünger, Anja; Bohl, Sebastian; Albrecht, Ute; Hahnel, Maximilian J.; Wolf, Stephanie; Mueller, Nikola S.; D'Alessandro, Lorenza A.; Mueller-Bohl, Stephanie; Boehm, Martin E.; Lucarelli, Philippe; Bonefas, Sandra; Damm, Georg; Seehofer, Daniel; Lehmann, Wolf D.; Rose-John, Stefan; van der Hoeven, Frank; Gretz, Norbert; Theis, Fabian J.; Ehlting, Christian; Bode, Johannes G.; Timmer, Jens; Schilling, Marcel; Klingmüller, Ursula
2017-01-01
IL-6 is a central mediator of the immediate induction of hepatic acute phase proteins (APP) in the liver during infection and after injury, but increased IL-6 activity has been associated with multiple pathological conditions. In hepatocytes, IL-6 activates JAK1-STAT3 signaling that induces the negative feedback regulator SOCS3 and expression of APPs. While different inhibitors of IL-6-induced JAK1-STAT3-signaling have been developed, understanding their precise impact on signaling dynamics requires a systems biology approach. Here we present a mathematical model of IL-6-induced JAK1-STAT3 signaling that quantitatively links physiological IL-6 concentrations to the dynamics of IL-6-induced signal transduction and expression of target genes in hepatocytes. The mathematical model consists of coupled ordinary differential equations (ODE) and the model parameters were estimated by a maximum likelihood approach, whereas identifiability of the dynamic model parameters was ensured by the Profile Likelihood. Using model simulations coupled with experimental validation we could optimize the long-term impact of the JAK-inhibitor Ruxolitinib, a therapeutic compound that is quickly metabolized. Model-predicted doses and timing of treatments helps to improve the reduction of inflammatory APP gene expression in primary mouse hepatocytes close to levels observed during regenerative conditions. The concept of improved efficacy of the inhibitor through multiple treatments at optimized time intervals was confirmed in primary human hepatocytes. Thus, combining quantitative data generation with mathematical modeling suggests that repetitive treatment with Ruxolitinib is required to effectively target excessive inflammatory responses without exceeding doses recommended by the clinical guidelines. PMID:29062282
Paíga, Paula; Silva, Luís M S; Delerue-Matos, Cristina
2016-10-01
The flow rates of the drying and nebulizing gases, the heat block and desolvation line temperatures, and the interface voltage are potential electrospray ionization parameters, as they may enhance the sensitivity of the mass spectrometer. The conditions that give higher sensitivity for 13 pharmaceuticals were explored. First, a Plackett-Burman design was implemented to screen significant factors, and it was concluded that interface voltage and nebulizing gas flow were the only factors that influence the signal intensity for all pharmaceuticals. This fractional factorial design was projected onto a full 2(2) factorial design with center points. The lack-of-fit test proved to be significant. Then, a central composite face-centered design was conducted. Finally, a stepwise multiple linear regression and a subsequent optimization problem were carried out. Two main drug clusters were found with respect to the signal intensities across all runs of the augmented factorial design. p-Aminophenol, salicylic acid, and nimesulide constitute one cluster as a result of showing much higher sensitivity than the remaining drugs. The other cluster is more homogeneous, with some sub-clusters comprising one pharmaceutical and its respective metabolite. It was observed that the instrumental signal increased when both significant factors increased, with the maximum signal occurring when both coded factors are set at level +1. It was also found that, for most of the pharmaceuticals, the interface voltage influences the instrumental signal more than the nebulizing gas flow rate. The only exceptions are nimesulide, where the relative importance of the factors is reversed, and salicylic acid, where both factors influence the instrumental signal equally.
Sobotta, Svantje; Raue, Andreas; Huang, Xiaoyun; Vanlier, Joep; Jünger, Anja; Bohl, Sebastian; Albrecht, Ute; Hahnel, Maximilian J; Wolf, Stephanie; Mueller, Nikola S; D'Alessandro, Lorenza A; Mueller-Bohl, Stephanie; Boehm, Martin E; Lucarelli, Philippe; Bonefas, Sandra; Damm, Georg; Seehofer, Daniel; Lehmann, Wolf D; Rose-John, Stefan; van der Hoeven, Frank; Gretz, Norbert; Theis, Fabian J; Ehlting, Christian; Bode, Johannes G; Timmer, Jens; Schilling, Marcel; Klingmüller, Ursula
2017-01-01
IL-6 is a central mediator of the immediate induction of hepatic acute phase proteins (APP) in the liver during infection and after injury, but increased IL-6 activity has been associated with multiple pathological conditions. In hepatocytes, IL-6 activates JAK1-STAT3 signaling that induces the negative feedback regulator SOCS3 and expression of APPs. While different inhibitors of IL-6-induced JAK1-STAT3-signaling have been developed, understanding their precise impact on signaling dynamics requires a systems biology approach. Here we present a mathematical model of IL-6-induced JAK1-STAT3 signaling that quantitatively links physiological IL-6 concentrations to the dynamics of IL-6-induced signal transduction and expression of target genes in hepatocytes. The mathematical model consists of coupled ordinary differential equations (ODE) and the model parameters were estimated by a maximum likelihood approach, whereas identifiability of the dynamic model parameters was ensured by the Profile Likelihood. Using model simulations coupled with experimental validation we could optimize the long-term impact of the JAK-inhibitor Ruxolitinib, a therapeutic compound that is quickly metabolized. Model-predicted doses and timing of treatments helps to improve the reduction of inflammatory APP gene expression in primary mouse hepatocytes close to levels observed during regenerative conditions. The concept of improved efficacy of the inhibitor through multiple treatments at optimized time intervals was confirmed in primary human hepatocytes. Thus, combining quantitative data generation with mathematical modeling suggests that repetitive treatment with Ruxolitinib is required to effectively target excessive inflammatory responses without exceeding doses recommended by the clinical guidelines.
Optimization of a chemical identification algorithm
NASA Astrophysics Data System (ADS)
Chyba, Thomas H.; Fisk, Brian; Gunning, Christin; Farley, Kevin; Polizzi, Amber; Baughman, David; Simpson, Steven; Slamani, Mohamed-Adel; Almassy, Robert; Da Re, Ryan; Li, Eunice; MacDonald, Steve; Slamani, Ahmed; Mitchell, Scott A.; Pendell-Jones, Jay; Reed, Timothy L.; Emge, Darren
2010-04-01
A procedure to evaluate and optimize the performance of a chemical identification algorithm is presented. The Joint Contaminated Surface Detector (JCSD) employs Raman spectroscopy to detect and identify surface chemical contamination. JCSD measurements of chemical warfare agents, simulants, toxic industrial chemicals, interferents and bare surface backgrounds were made in the laboratory and under realistic field conditions. A test data suite, developed from these measurements, is used to benchmark algorithm performance throughout the improvement process. In any one measurement, one of many possible targets can be present along with interferents and surfaces. The detection results are expressed as a 2-category classification problem so that Receiver Operating Characteristic (ROC) techniques can be applied. The limitations of applying this framework to chemical detection problems are discussed along with means to mitigate them. Algorithmic performance is optimized globally using robust Design of Experiments and Taguchi techniques. These methods require figures of merit to trade off between false alarms and detection probability. Several figures of merit, including the Matthews Correlation Coefficient and the Taguchi Signal-to-Noise Ratio are compared. Following the optimization of global parameters which govern the algorithm behavior across all target chemicals, ROC techniques are employed to optimize chemical-specific parameters to further improve performance.
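Two of the figures of merit mentioned above can be computed directly from a 2-category confusion matrix, as in the sketch below; the counts are invented, and the particular "larger is better" form used to combine detection and false-alarm rates is an assumption rather than the authors' exact definition.

```python
# Figures of merit for a 2-category detection problem.
import numpy as np

def matthews_cc(tp, fp, tn, fn):
    num = tp * tn - fp * fn
    den = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return num / den if den else 0.0

def taguchi_snr_pd_pfa(pd, pfa):
    """Taguchi-style 'larger is better' S/N applied to pd and (1 - pfa); an
    assumed form, used here only to illustrate trading detections vs. false alarms."""
    y = np.array([pd, 1.0 - pfa])
    return -10.0 * np.log10(np.mean(1.0 / y**2))

tp, fp, tn, fn = 180, 12, 760, 20                     # made-up confusion-matrix counts
pd, pfa = tp / (tp + fn), fp / (fp + tn)
print("MCC =", round(matthews_cc(tp, fp, tn, fn), 3))
print("Taguchi S/N =", round(taguchi_snr_pd_pfa(pd, pfa), 2), "dB")
```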
Signal intensity analysis and optimization for in vivo imaging of Cherenkov and excited luminescence
NASA Astrophysics Data System (ADS)
LaRochelle, Ethan P. M.; Shell, Jennifer R.; Gunn, Jason R.; Davis, Scott C.; Pogue, Brian W.
2018-04-01
During external beam radiotherapy (EBRT), in vivo Cherenkov optical emissions can be used as a dosimetry tool or to excite luminescence, termed Cherenkov-excited luminescence (CEL) with microsecond-level time-gated cameras. The goal of this work was to develop a complete theoretical foundation for the detectable signal strength, in order to provide guidance on optimization of the limits of detection and how to optimize near real time imaging. The key parameters affecting photon production, propagation and detection were considered and experimental validation with both tissue phantoms and a murine model are shown. Both the theoretical analysis and experimental data indicate that the detection level is near a single photon-per-pixel for the detection geometry and frame rates commonly used, with the strongest factor being the signal decrease with the square of distance from tissue to camera. Experimental data demonstrates how the SNR improves with increasing integration time, but only up to the point where the dominance of camera read noise is overcome by stray photon noise that cannot be suppressed. For the current camera in a fixed geometry, the signal to background ratio limits the detection of light signals, and the observed in vivo Cherenkov emission is on the order of 100× stronger than CEL signals. As a result, imaging signals from depths <15 mm is reasonable for Cherenkov light, and depths <3 mm is reasonable for CEL imaging. The current investigation modeled Cherenkov and CEL imaging of two oxygen sensing phosphorescent compounds, but the modularity of the code allows for easy comparison of different agents or alternative cameras, geometries or tissues.
Thakore, Vaibhav; Molnar, Peter; Hickman, James J.
2014-01-01
Extracellular neuroelectronic interfacing is an emerging field with important applications in the fields of neural prosthetics, biological computation and biosensors. Traditionally, neuron-electrode interfaces have been modeled as linear point or area contact equivalent circuits but it is now being increasingly realized that such models cannot explain the shapes and magnitudes of the observed extracellular signals. Here, results were compared and contrasted from an unprecedented optimization based study of the point contact models for an extracellular ‘on-cell’ neuron-patch electrode and a planar neuron-microelectrode interface. Concurrent electrophysiological recordings from a single neuron simultaneously interfaced to three distinct electrodes (intracellular, ‘on-cell’ patch and planar microelectrode) allowed novel insights into the mechanism of signal transduction at the neuron-electrode interface. After a systematic isolation of the nonlinear neuronal contribution to the extracellular signal, a consistent underestimation of the simulated supra-threshold extracellular signals compared to the experimentally recorded signals was observed. This conclusively demonstrated that the dynamics of the interfacial medium contribute nonlinearly to the process of signal transduction at the neuron-electrode interface. Further, an examination of the optimized model parameters for the experimental extracellular recordings from sub- and supra-threshold stimulations of the neuron-electrode junctions revealed that ionic transport at the ‘on-cell’ neuron-patch electrode is dominated by diffusion whereas at the neuron-microelectrode interface the electric double layer (EDL) effects dominate. Based on this study, the limitations of the equivalent circuit models in their failure to account for the nonlinear EDL and ionic electrodiffusion effects occurring during signal transduction at the neuron-electrode interfaces are discussed. PMID:22695342
LaRochelle, Ethan P M; Shell, Jennifer R; Gunn, Jason R; Davis, Scott C; Pogue, Brian W
2018-04-20
During external beam radiotherapy (EBRT), in vivo Cherenkov optical emissions can be used as a dosimetry tool or to excite luminescence, termed Cherenkov-excited luminescence (CEL) with microsecond-level time-gated cameras. The goal of this work was to develop a complete theoretical foundation for the detectable signal strength, in order to provide guidance on optimization of the limits of detection and how to optimize near real time imaging. The key parameters affecting photon production, propagation and detection were considered and experimental validation with both tissue phantoms and a murine model are shown. Both the theoretical analysis and experimental data indicate that the detection level is near a single photon-per-pixel for the detection geometry and frame rates commonly used, with the strongest factor being the signal decrease with the square of distance from tissue to camera. Experimental data demonstrates how the SNR improves with increasing integration time, but only up to the point where the dominance of camera read noise is overcome by stray photon noise that cannot be suppressed. For the current camera in a fixed geometry, the signal to background ratio limits the detection of light signals, and the observed in vivo Cherenkov emission is on the order of 100× stronger than CEL signals. As a result, imaging signals from depths <15 mm is reasonable for Cherenkov light, and depths <3 mm is reasonable for CEL imaging. The current investigation modeled Cherenkov and CEL imaging of two oxygen sensing phosphorescent compounds, but the modularity of the code allows for easy comparison of different agents or alternative cameras, geometries or tissues.
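The signal-to-noise reasoning above can be condensed into a simple counting model, sketched below: the signal scales with the inverse square of the tissue-to-camera distance, shot noise comes from signal plus stray background photons, and read noise is paid once per accumulated frame. All camera and source rates are illustrative assumptions, not the paper's measured values.

```python
# Counting-statistics SNR model for time-gated Cherenkov / CEL imaging.
import numpy as np

def snr(signal_rate, background_rate, read_noise, frame_time, n_frames, distance_m):
    """SNR for n_frames gated exposures of length frame_time (rates in e-/s)."""
    s = signal_rate * frame_time * n_frames / distance_m**2   # detected signal electrons
    b = background_rate * frame_time * n_frames               # stray-light electrons
    return s / np.sqrt(s + b + n_frames * read_noise**2)      # shot + read noise

# SNR versus per-frame integration time: read-noise limited at short gates,
# background (stray photon) limited at long gates
for frame_time in (1e-6, 1e-5, 1e-4, 1e-3):
    value = snr(5e5, 1e6, read_noise=2.0, frame_time=frame_time,
                n_frames=100, distance_m=2.0)
    print(f"{frame_time:.0e} s  SNR = {value:.2f}")
```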
Using diurnal temperature signals to infer vertical groundwater-surface water exchange
Irvine, Dylan J.; Briggs, Martin A.; Lautz, Laura K.; Gordon, Ryan P.; McKenzie, Jeffrey M.; Cartwright, Ian
2017-01-01
Heat is a powerful tracer to quantify fluid exchange between surface water and groundwater. Temperature time series can be used to estimate pore water fluid flux, and techniques can be employed to extend these estimates to produce detailed plan-view flux maps. Key advantages of heat tracing include cost-effective sensors and ease of data collection and interpretation, without the need for expensive and time-consuming laboratory analyses or induced tracers. While the collection of temperature data in saturated sediments is relatively straightforward, several factors influence the reliability of flux estimates that are based on time series analysis (diurnal signals) of recorded temperatures. Sensor resolution and deployment are particularly important in obtaining robust flux estimates in upwelling conditions. Also, processing temperature time series data involves a sequence of complex steps, including filtering temperature signals, selection of appropriate thermal parameters, and selection of the optimal analytical solution for modeling. This review provides a synthesis of heat tracing using diurnal temperature oscillations, including details on optimal sensor selection and deployment, data processing, model parameterization, and an overview of computing tools available. Recent advances in diurnal temperature methods also provide the opportunity to determine local saturated thermal diffusivity, which can improve the accuracy of fluid flux modeling and sensor spacing, which is related to streambed scour and deposition. These parameters can also be used to determine the reliability of flux estimates from the use of heat as a tracer.
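As a small illustration of the first processing step described above, the sketch below isolates the diurnal component at two depths by harmonic regression at one cycle per day and reports the amplitude ratio and phase lag that feed the analytical flux solutions (amplitude-ratio and phase-shift methods). The temperature series are synthetic.

```python
# Diurnal amplitude and phase extraction at two sensor depths.
import numpy as np

def diurnal_amp_phase(temp, t_days):
    """Least-squares fit T(t) ~ m + a cos(2 pi t) + b sin(2 pi t); returns the
    amplitude and phase of A cos(2 pi t - phase)."""
    X = np.column_stack([np.ones_like(t_days),
                         np.cos(2 * np.pi * t_days),
                         np.sin(2 * np.pi * t_days)])
    m, a, b = np.linalg.lstsq(X, temp, rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a)

t = np.arange(0, 5, 1 / 96)                           # 5 days, 15-min samples
rng = np.random.default_rng(4)
shallow = 15 + 4.0 * np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size)
deep = 15 + 1.5 * np.sin(2 * np.pi * (t - 3 / 24)) + 0.1 * rng.normal(size=t.size)

(a1, p1), (a2, p2) = diurnal_amp_phase(shallow, t), diurnal_amp_phase(deep, t)
print("amplitude ratio Ar =", round(a2 / a1, 3))
print("phase lag [hours]  =", round((p2 - p1) % (2 * np.pi) * 24 / (2 * np.pi), 2))
```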
NASA Astrophysics Data System (ADS)
Khalil, A. A. I.
2015-12-01
A double-pulse laser ablation (DPLA) technique was developed to generate a gold (Au) ion source and produce a high current when an electric potential is applied in an ambient argon gas environment. Two Q-switched Nd:YAG lasers operating at 1064 and 266 nm wavelengths are combined in an unconventional orthogonal (crossed-beam) double-pulse configuration at a 45° angle to focus on a gold target, along with a spectrometer for spectral analysis of the gold plasma. The properties of the gold plasma produced under double-pulse laser excitation were studied. The velocity distribution function (VDF) of the emitted plasma was studied using a dedicated Faraday-cup ion probe (FCIP) under an argon gas discharge. The experimental parameters were optimized to attain the best signal-to-noise (S/N) ratio. The results showed that the VDF and current signals depend on the applied discharge voltage, laser intensity, laser wavelength and ambient argon gas pressure. A seven-fold increase in the current signal was obtained by increasing the applied discharge voltage and the ion velocity under the applied double-pulse laser field. The plasma parameters (electron temperature and density) were also studied, and their dependence on the delay (the time between the excitation laser pulse and the opening of the camera shutter) was investigated as well. This study could provide significant reference data for the optimization and design of DPLA systems used in laser-induced plasma deposition of thin films and in plasma-facing component diagnostics.
Advanced Fire Detector for Space Applications
NASA Technical Reports Server (NTRS)
Kutzner, Joerg
2012-01-01
A document discusses an optical carbon monoxide sensor for early fire detection. During the sensor development, a concept was implemented to allow reliable carbon monoxide detection in the presence of interfering absorption signals. Methane interference is present in the operating wavelength range of the developed prototype sensor for carbon monoxide detection. The operating parameters of the prototype sensor have been optimized so that interference with methane is minimized. In addition, simultaneous measurement of methane is implemented, and the instrument automatically corrects the carbon monoxide signal at high methane concentrations. This is possible because VCSELs (vertical cavity surface emitting lasers) with extended current tuning capabilities are implemented in the optical device. The tuning capabilities of these new laser sources are sufficient to cover the wavelength range of several absorption lines. The delivered carbon monoxide sensor (COMA 1) reliably measures low carbon monoxide levels even in the presence of high methane signals. The signal bleed-over is determined during system calibration and is then accounted for in the system parameters. The sensor reports carbon monoxide concentrations reliably for (interfering) methane concentrations up to several thousand parts per million.
In Search of Determinism-Sensitive Region to Avoid Artefacts in Recurrence Plots
NASA Astrophysics Data System (ADS)
Wendi, Dadiyorto; Marwan, Norbert; Merz, Bruno
In an effort to reduce parameter uncertainties in constructing recurrence plots, and in particular to avoid potential artefacts, this paper presents a technique to derive an artefact-safe region of parameter sets. The technique exploits both the deterministic (including chaotic) and stochastic signal characteristics of recurrence quantification (i.e., diagonal structures). It is useful when the evaluated signal is known to be deterministic. This study focuses on recurrence plots generated from the reconstructed phase space in order to represent the many real application scenarios in which not all variables describing a system are available (data scarcity). The technique involves randomly shuffling the original signal to destroy its deterministic characteristics. Its purpose is to evaluate whether the determinism values of the original and the shuffled signal remain close together, which would suggest that the recurrence plot might comprise artefacts. The use of such a determinism-sensitive region should be accompanied by standard embedding optimization approaches, e.g., using indices such as the false nearest neighbor and mutual information, to yield a more reliable recurrence plot parameterization.
Kalman Orbit Optimized Loop Tracking
NASA Technical Reports Server (NTRS)
Young, Lawrence E.; Meehan, Thomas K.
2011-01-01
Under certain conditions of low signal power and/or high noise, there is insufficient signal-to-noise ratio (SNR) to close tracking loops with individual signals on orbiting Global Navigation Satellite System (GNSS) receivers. In addition, the processing power available from flight computers is not great enough to implement a conventional ultra-tight coupling tracking loop. This work provides a method to track GNSS signals at very low SNR without the penalty of requiring very high processor throughput to calculate the loop parameters. The Kalman Orbit-Optimized Loop (KOOL) tracking approach constitutes a filter with a dynamic model that uses the aggregate of information from all tracked GNSS signals to close the tracking loop for each signal. For applications where there is not a good dynamic model, such as very low orbits where atmospheric drag models may not be adequate to achieve the required accuracy, aiding from an IMU (inertial measurement unit) or other sensor will be added. The KOOL approach is based on research JPL has done to allow signal recovery from weak and scintillating signals observed during the use of GPS signals for limb sounding of the Earth's atmosphere. That approach uses the onboard PVT (position, velocity, time) solution to generate predictions for the range, range rate, and acceleration of the low-SNR signal. The low-SNR signal data are captured by a directed open loop. KOOL builds on the previous open-loop tracking by including feedback and observable generation from the weak-signal channels so that the MSR receiver will continue to track and provide PVT, range, and Doppler data, even when all channels have low SNR.
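The sketch below shows the generic idea of replacing a conventional loop filter with a Kalman filter over a [carrier phase, Doppler] state driven by noisy discriminator measurements. It is only a minimal illustration of model-based tracking at low SNR; JPL's KOOL additionally uses an orbit dynamic model and aggregates information across all tracked signals, which is not represented here, and all numerical values are assumptions.

```python
# Generic 2-state Kalman carrier-tracking sketch (phase and Doppler).
import numpy as np

dt = 0.02                                              # 20 ms update interval
F = np.array([[1.0, dt], [0.0, 1.0]])                  # phase/Doppler dynamics
H = np.array([[1.0, 0.0]])                             # discriminator observes phase
Q = np.diag([1e-6, 1e-4])                              # process noise (model mismatch)
R = np.array([[0.5]])                                  # noisy discriminator (low SNR)

rng = np.random.default_rng(5)
x_true = np.array([0.0, 4.0])                          # true phase [rad], Doppler [rad/s]
x_hat, P = np.zeros(2), np.eye(2)
for k in range(500):
    x_true = F @ x_true + rng.multivariate_normal([0, 0], Q)
    z = H @ x_true + rng.normal(0, np.sqrt(R[0, 0]), 1)
    # predict
    x_hat, P = F @ x_hat, F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + (K @ (z - H @ x_hat)).ravel()
    P = (np.eye(2) - K @ H) @ P
print("estimated Doppler:", x_hat[1], " true:", x_true[1])
```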
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yan; Mohanty, Soumya D.; Jenet, Fredrick A., E-mail: ywang12@hust.edu.cn
2015-12-20
Supermassive black hole binaries are one of the primary targets of gravitational wave (GW) searches using pulsar timing arrays (PTAs). GW signals from such systems are well represented by parameterized models, allowing the standard Generalized Likelihood Ratio Test (GLRT) to be used for their detection and estimation. However, there is a dichotomy in how the GLRT can be implemented for PTAs: there are two possible ways in which one can split the set of signal parameters for semi-analytical and numerical extremization. The straightforward extension of the method used for continuous signals in ground-based GW searches, where the so-called pulsar phase parameters are maximized numerically, was addressed in an earlier paper. In this paper, we report the first study of the performance of the second approach where the pulsar phases are maximized semi-analytically. This approach is scalable since the number of parameters left over for numerical optimization does not depend on the size of the PTA. Our results show that for the same array size (9 pulsars), the new method performs somewhat worse in parameter estimation, but not in detection, than the previous method where the pulsar phases were maximized numerically. The origin of the performance discrepancy is likely to be in the ill-posedness that is intrinsic to any network analysis method. However, the scalability of the new method allows the ill-posedness to be mitigated by simply adding more pulsars to the array. This is shown explicitly by taking a larger array of pulsars.
NASA Astrophysics Data System (ADS)
Zeng, Ziyi; Yang, Aiying; Guo, Peng; Feng, Lihui
2018-01-01
Time-domain CD equalization using a finite impulse response (FIR) filter is now a common approach for coherent optical fiber communication systems. The complex weights of the FIR taps are calculated from a truncated impulse response of the CD transfer function, and the modulus of the complex weights is constant. In our work, we take the limited bandwidth of a single channel signal into account and propose weighted FIRs to improve the performance of CD equalization. The key in weighted FIR filters is the selection and optimization of the weighting functions. In order to present the performance of different types of weighted FIR filters, a square-root raised cosine FIR (SRRC-FIR) and a Gaussian FIR (GS-FIR) are investigated. The optimization of the square-root raised cosine FIR and the Gaussian FIR is made in terms of the bit error rate (BER) of QPSK and 16QAM coherent detection signals. The results demonstrate that the optimized parameters of the weighted filters are independent of the modulation format, symbol rate and the length of the transmission fiber. With the optimized weighted FIRs, the BER of the CD-equalized signal is decreased significantly. Although this paper has investigated two types of weighted FIR filters, i.e. the SRRC-FIR filter and the GS-FIR filter, the principle of weighted FIRs can also be extended to other symmetric functions, such as the super-Gaussian function, the hyperbolic secant function, etc.
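A minimal sketch of the idea, not the authors' exact design: the constant-modulus taps obtained from the usual truncated CD impulse response are multiplied by a Gaussian weighting window, one of the two weighting-function families studied above. The fiber and sampling parameters and the window width `sigma` are illustrative assumptions.

```python
import numpy as np

c = 299792458.0            # speed of light [m/s]
lam = 1550e-9              # carrier wavelength [m]
D = 17e-6                  # dispersion [s/m^2]  (17 ps/nm/km)
L = 1000e3                 # fiber length [m]
T = 1.0 / 32e9             # sampling period [s] (32 GSa/s, illustrative)

# Tap count and indices from the usual truncation rule
N = int(np.floor(abs(D) * lam**2 * L / (2 * c * T**2)))
k = np.arange(-N, N + 1)

# Constant-modulus taps from the truncated CD impulse response
h = np.sqrt(1j * c * T**2 / (D * lam**2 * L)) \
    * np.exp(-1j * np.pi * c * T**2 * k**2 / (D * lam**2 * L))

# Gaussian weighting of the tap moduli (sigma is an assumed tuning parameter)
sigma = 0.6 * N
h_weighted = h * np.exp(-(k / sigma)**2 / 2)

def equalize(rx, taps):
    """Apply the equalizer to a received block via frequency-domain convolution."""
    H = np.fft.fft(taps, len(rx))
    return np.fft.ifft(np.fft.fft(rx) * H)
```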
Software for Acoustic Rendering
NASA Technical Reports Server (NTRS)
Miller, Joel D.
2003-01-01
SLAB is a software system that can be run on a personal computer to simulate an acoustic environment in real time. SLAB was developed to enable computational experimentation in which one can exert low-level control over a variety of signal-processing parameters, related to spatialization, for conducting psychoacoustic studies. Among the parameters that can be manipulated are the number and position of reflections, the fidelity (that is, the number of taps in finite-impulse-response filters), the system latency, and the update rate of the filters. Another goal in the development of SLAB was to provide an inexpensive means of dynamic synthesis of virtual audio over headphones, without need for special-purpose signal-processing hardware. SLAB has a modular, object-oriented design that affords the flexibility and extensibility needed to accommodate a variety of computational experiments and signal-flow structures. SLAB's spatial renderer has a fixed signal-flow architecture corresponding to a set of parallel signal paths from each source to a listener. This fixed architecture can be regarded as a compromise that optimizes efficiency at the expense of complete flexibility. Such a compromise is necessary, given the design goal of enabling computational psychoacoustic experimentation on inexpensive personal computers.
Comparing transformation methods for DNA microarray data
Thygesen, Helene H; Zwinderman, Aeilko H
2004-01-01
Background When DNA microarray data are used for gene clustering, genotype/phenotype correlation studies, or tissue classification the signal intensities are usually transformed and normalized in several steps in order to improve comparability and signal/noise ratio. These steps may include subtraction of an estimated background signal, subtracting the reference signal, smoothing (to account for nonlinear measurement effects), and more. Different authors use different approaches, and it is generally not clear to users which method they should prefer. Results We used the ratio between biological variance and measurement variance (which is an F-like statistic) as a quality measure for transformation methods, and we demonstrate a method for maximizing that variance ratio on real data. We explore a number of transformation issues, including Box-Cox transformation, baseline shift, partial subtraction of the log-reference signal and smoothing. It appears that the optimal choice of parameters for the transformation methods depends on the data. Further, the behavior of the variance ratio, under the null hypothesis of zero biological variance, appears to depend on the choice of parameters. Conclusions The use of replicates in microarray experiments is important. Adjustment for the null-hypothesis behavior of the variance ratio is critical to the selection of transformation method. PMID:15202953
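A minimal sketch of the quality measure described above, assuming a single gene measured with technical replicates: the "F-like" statistic is taken here as the one-way ANOVA ratio of between-sample (biological) variance to within-replicate (measurement) variance, evaluated after a candidate transformation such as a Box-Cox power. Function names and the simulated intensities are illustrative, not from the paper.

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox power transform (x must be positive)."""
    return np.log(x) if abs(lam) < 1e-12 else (x**lam - 1.0) / lam

def variance_ratio(intensities):
    """intensities: array of shape (n_samples, n_replicates)."""
    grand = intensities.mean()
    sample_means = intensities.mean(axis=1)
    n_rep = intensities.shape[1]
    between = n_rep * np.sum((sample_means - grand)**2) / (len(sample_means) - 1)
    within = np.mean(intensities.var(axis=1, ddof=1))       # measurement variance
    return between / within

rng = np.random.default_rng(1)
# 6 biological samples x 3 technical replicates of simulated raw intensities
raw = np.exp(rng.normal(8.0, 1.0, size=(6, 1)) + rng.normal(0.0, 0.2, size=(6, 3)))

# Pick the Box-Cox parameter that maximizes the variance ratio
lams = np.linspace(-1.0, 1.0, 41)
best = max(lams, key=lambda lam: variance_ratio(boxcox(raw, lam)))
print("best lambda:", best)
```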
Comparing transformation methods for DNA microarray data.
Thygesen, Helene H; Zwinderman, Aeilko H
2004-06-17
When DNA microarray data are used for gene clustering, genotype/phenotype correlation studies, or tissue classification the signal intensities are usually transformed and normalized in several steps in order to improve comparability and signal/noise ratio. These steps may include subtraction of an estimated background signal, subtracting the reference signal, smoothing (to account for nonlinear measurement effects), and more. Different authors use different approaches, and it is generally not clear to users which method they should prefer. We used the ratio between biological variance and measurement variance (which is an F-like statistic) as a quality measure for transformation methods, and we demonstrate a method for maximizing that variance ratio on real data. We explore a number of transformation issues, including Box-Cox transformation, baseline shift, partial subtraction of the log-reference signal and smoothing. It appears that the optimal choice of parameters for the transformation methods depends on the data. Further, the behavior of the variance ratio, under the null hypothesis of zero biological variance, appears to depend on the choice of parameters. The use of replicates in microarray experiments is important. Adjustment for the null-hypothesis behavior of the variance ratio is critical to the selection of transformation method.
On optimal infinite impulse response edge detection filters
NASA Technical Reports Server (NTRS)
Sarkar, Sudeep; Boyer, Kim L.
1991-01-01
The authors outline the design of an optimal, computationally efficient, infinite impulse response edge detection filter. The optimal filter is computed based on Canny's criteria of high signal-to-noise ratio and good localization, together with a criterion on the spurious response of the filter to noise. An expression for the width of the filter, which is appropriate for infinite-length filters, is incorporated directly in the expression for spurious responses. The three criteria are maximized using the variational method and nonlinear constrained optimization. The optimal filter parameters are tabulated for various values of the filter performance criteria. A complete methodology for implementing the optimal filter using approximating recursive digital filtering is presented. The approximating recursive digital filter is separable into two linear filters operating in two orthogonal directions. The implementation is very simple and computationally efficient, has a constant time of execution for different sizes of the operator, and is readily amenable to real-time hardware implementation.
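The following sketch is not the authors' optimal filter, but it illustrates the property exploited above: a first-order recursive (IIR) smoother run forward and backward along one image axis, combined with a derivative along the orthogonal axis, gives edge responses at a constant cost per pixel regardless of the effective filter width. The smoothing constant `alpha` is a placeholder for the tabulated optimal parameters.

```python
import numpy as np

def smooth_iir(x, alpha, axis):
    """Forward-backward first-order recursive smoothing along `axis`."""
    x = np.moveaxis(x, axis, 0).astype(float)
    y = np.empty_like(x)
    y[0] = x[0]
    for i in range(1, len(x)):                 # causal pass
        y[i] = alpha * x[i] + (1 - alpha) * y[i - 1]
    z = np.empty_like(y)
    z[-1] = y[-1]
    for i in range(len(y) - 2, -1, -1):        # anti-causal pass
        z[i] = alpha * y[i] + (1 - alpha) * z[i + 1]
    return np.moveaxis(z, 0, axis)

def gradient_edges(img, alpha=0.3):
    """Separable recursive smoothing + central differences in two orthogonal directions."""
    gx = np.gradient(smooth_iir(img, alpha, axis=0), axis=1)   # smooth columns, differentiate rows
    gy = np.gradient(smooth_iir(img, alpha, axis=1), axis=0)   # smooth rows, differentiate columns
    return np.hypot(gx, gy)
```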
Optimization of an organic memristor as an adaptive memory element
NASA Astrophysics Data System (ADS)
Berzina, Tatiana; Smerieri, Anteo; Bernabò, Marco; Pucci, Andrea; Ruggeri, Giacomo; Erokhin, Victor; Fontana, M. P.
2009-06-01
The combination of memory and signal handling characteristics of a memristor makes it a promising candidate for adaptive bioinspired information processing systems. This poses stringent requirements on the basic device, such as stability and reproducibility over a large number of training/learning cycles, and a large anisotropy in the fundamental control material parameter, in our case the electrical conductivity. In this work we report results on the improved performance of electrochemically controlled polymeric memristors, where optimization of a conducting polymer (polyaniline) in the active channel and better environmental control of fabrication methods led to a large increase both in the absolute values of the conductivity in the partially oxidized state of polyaniline and in the on-off conductivity ratio. These improvements are crucial for the application of the organic memristor to adaptive complex signal handling networks.
Taguchi experimental design to determine the taste quality characteristic of candied carrot
NASA Astrophysics Data System (ADS)
Ekawati, Y.; Hapsari, A. A.
2018-03-01
Robust parameter design is used to design a product that is robust to noise factors so that its performance fits the target and delivers better quality. In the process of designing and developing the innovative product of candied carrot, robust parameter design is carried out using the Taguchi method. The method is used to determine an optimal quality design, based on the process and the composition of product ingredients that are in accordance with consumer needs and requirements. According to the identification of consumer needs from the previous research, the quality dimensions that need to be assessed are the taste and texture of the product; the quality dimension assessed in this research is limited to the taste dimension. Organoleptic testing is used for this assessment, specifically hedonic testing, which makes the assessment based on consumer preferences. The data processing uses mean and signal-to-noise ratio calculations and optimal level setting to determine the optimal process/composition of product ingredients. The optimal values are analyzed using confirmation experiments to verify that the proposed product matches consumer needs and requirements. The result of this research is the identification of factors that affect the product's taste and the optimal quality of the product according to the Taguchi method.
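A minimal sketch of the Taguchi analysis step described above, assuming a "larger-the-better" taste score from the hedonic test. The L9 layout, factor coding, and scores below are illustrative placeholders, not the study's data.

```python
import numpy as np

# L9 orthogonal array: 9 runs x 4 factors, each factor at 3 levels (coded 0..2)
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])

scores = np.array([[6.1, 6.4], [7.2, 7.0], [5.8, 6.0],    # hedonic scores, 2 panels per run
                   [6.9, 7.1], [7.8, 7.5], [6.2, 6.5],
                   [7.0, 6.8], [6.6, 6.4], [7.4, 7.6]])

# Larger-the-better signal-to-noise ratio per run
sn = -10.0 * np.log10(np.mean(1.0 / scores**2, axis=1))

# Mean S/N per level for each factor; the optimal setting maximizes it
for f in range(L9.shape[1]):
    level_means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(f"factor {f}: level means {np.round(level_means, 2)}, "
          f"best level {int(np.argmax(level_means))}")
```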
A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces
NASA Astrophysics Data System (ADS)
Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.
2006-06-01
The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
A comparison of optimal MIMO linear and nonlinear models for brain-machine interfaces.
Kim, S-P; Sanchez, J C; Rao, Y N; Erdogmus, D; Carmena, J M; Lebedev, M A; Nicolelis, M A L; Principe, J C
2006-06-01
The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
Raman-spectroscopy-based chemical contaminant detection in milk powder
NASA Astrophysics Data System (ADS)
Dhakal, Sagar; Chao, Kuanglin; Qin, Jianwei; Kim, Moon S.
2015-05-01
Addition of edible and inedible chemical contaminants to food powders for purposes of economic benefit has become a recurring trend. In recent years, severe health issues have been reported due to consumption of food powders contaminated with chemical substances. This study examines the effect of the spatial resolution used during spectral collection in order to select the optimal spatial resolution for detecting melamine in milk powder. A sample depth of 2 mm, laser intensity of 200 mW, and exposure time of 0.1 s were previously determined as optimal experimental parameters for Raman imaging. A spatial resolution of 0.25 mm was determined as the optimal resolution for acquiring the spectral signal of melamine particles from a milk-melamine mixture sample. Using the optimal resolution of 0.25 mm, sample depth of 2 mm and laser intensity of 200 mW obtained from the previous study, spectral signals from five different concentrations of the milk-melamine mixture (1%, 0.5%, 0.1%, 0.05%, and 0.025%) were acquired to study the relationship between the number of detected melamine pixels and the corresponding sample concentration. The results show that melamine concentration has a linear relation with the number of detected melamine pixels, with a correlation coefficient of 0.99. It can be concluded that the quantitative analysis of a powder mixture is dependent on many factors, including the physical characteristics of the mixture, experimental parameters, and sample depth. The results obtained in this study are promising. We plan to apply the results obtained from this study to develop a quantitative detection model for rapid screening of melamine in milk powder. This methodology can also be used for detection of other chemical contaminants in milk powders.
Raman Amplification and Tunable Pulse Delays in Silicon Waveguides
NASA Astrophysics Data System (ADS)
Rukhlenko, Ivan D.; Garanovich, Ivan L.; Premaratne, Malin; Sukhorukov, Andrey A.; Agrawal, Govind P.
2010-10-01
The nonlinear process of stimulated Raman scattering is important for silicon photonics as it enables optical amplification and lasing. However, generally employed numerical approaches provide very little insight into the contribution of different silicon Raman amplifier (SRA) parameters. In this paper, we solve the coupled pump-signal equations analytically and derive an exact formula for the envelope of a signal pulse when picosecond optical pulses are amplified inside an SRA pumped by a continuous-wave laser beam. Our solution is valid for an arbitrary pulse shape and fully accounts for the Raman gain-dispersion effects, including temporal broadening and group-velocity reduction. Our results are useful for optimizing the performance of SRAs and for engineering controllable signal delays.
An adaptive DPCM algorithm for predicting contours in NTSC composite video signals
NASA Astrophysics Data System (ADS)
Cox, N. R.
An adaptive DPCM algorithm is proposed for encoding digitized National Television Systems Committee (NTSC) color video signals. This algorithm essentially predicts picture contours in the composite signal without resorting to component separation. The contour parameters (slope thresholds) are optimized using four 'typical' television frames that have been sampled at three times the color subcarrier frequency. Three variations of the basic predictor are simulated and compared quantitatively with three non-adaptive predictors of similar complexity. By incorporating a dual-word-length coder and buffer memory, high quality color pictures can be encoded at 4.0 bits/pel or 42.95 Mbit/s. The effect of channel error propagation is also investigated.
Recognition of digital characteristics based new improved genetic algorithm
NASA Astrophysics Data System (ADS)
Wang, Meng; Xu, Guoqiang; Lin, Zihao
2017-08-01
In the field of digital signal processing, estimating the characteristics of signal modulation parameters is a significant research direction. Based on an in-depth study of a new improved genetic algorithm, this paper determines a set of eigenvalues that can distinguish between digital signal modulations. First, these eigenvalues are taken as the initial gene pool; second, the gene pool is modified during genetic evolution through selection, crossover and mutual elimination; finally, a strategy of further enhanced competition and punishment is adopted to optimize the gene pool and ensure that each generation consists of high-quality genes. The simulation results show that this method exhibits global convergence, stability and a faster convergence speed.
Kim, Keonwook
2013-08-23
The generic properties of an acoustic signal provide numerous benefits for localization by applying energy-based methods over a deployed wireless sensor network (WSN). However, the signal generated by a stationary target utilizes a significant amount of bandwidth and power in the system without providing further position information. For vehicle localization, this paper proposes a novel proximity velocity vector estimator (PVVE) node architecture in order to capture the energy from a moving vehicle and reject the signal from motionless automobiles around the WSN node. A cascade structure between analog envelope detector and digital exponential smoothing filter presents the velocity vector-sensitive output with low analog circuit and digital computation complexity. The optimal parameters in the exponential smoothing filter are obtained by analytical and mathematical methods for maximum variation over the vehicle speed. For stationary targets, the derived simulation based on the acoustic field parameters demonstrates that the system significantly reduces the communication requirements with low complexity and can be expected to extend the operation time considerably.
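One plausible reading of the cascade described above (the exact signal path is not spelled out in the abstract, so this is an assumption) is sketched below: the sampled output of the analog envelope detector is passed through a digital exponential smoothing filter, and the node reacts to the change of the smoothed energy, so a stationary source settles toward zero response. The smoothing constant `alpha` stands in for the analytically optimized parameter in the paper.

```python
import numpy as np

def velocity_sensitive_output(envelope, alpha=0.05):
    """envelope: sampled output of the analog envelope detector."""
    s = np.empty_like(envelope, dtype=float)
    s[0] = envelope[0]
    for n in range(1, len(envelope)):
        s[n] = alpha * envelope[n] + (1 - alpha) * s[n - 1]   # exponential smoothing
    return np.diff(s, prepend=s[0])   # responds to approaching/receding sources only

# A passing vehicle (rising then falling energy) produces a signed response,
# while a constant-level stationary source decays toward zero after the transient.
t = np.arange(0, 10, 0.01)
passing = np.exp(-((t - 5.0) / 1.5)**2)
print(np.abs(velocity_sensitive_output(passing)).max())
```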
Lu, Huanhuan; Wang, Fuzhong; Zhang, Huichun
2016-04-01
Traditional speech detection methods regard the noise as a jamming signal to be filtered, but under a strong noise background these methods lose part of the original speech signal while eliminating the noise. Stochastic resonance can use noise energy to amplify a weak signal and suppress the noise. Based on stochastic resonance theory, a new method using adaptive stochastic resonance to extract weak speech signals is proposed. This method, combined with twice sampling, realizes the detection of weak speech signals in strong noise. The parameters of the system, a and b, are adjusted adaptively by evaluating the signal-to-noise ratio of the output signal, and the weak speech signal is then optimally detected. Experimental simulation analysis showed that, under a strong noise background, the output signal-to-noise ratio increased from an initial value of -7 dB to about 0.86 dB, a signal-to-noise ratio gain of 7.86 dB. This method clearly raises the signal-to-noise ratio of the output speech signals, which provides a new idea for detecting weak speech signals in strong noise environments.
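A minimal sketch, assuming the classic overdamped bistable model dx/dt = a·x − b·x³ + s(t) used in most stochastic-resonance work: the noisy input is integrated through the system for candidate (a, b) pairs, and the pair giving the highest output SNR at a known probe frequency is kept. The parameter grid, probe tone, and SNR estimator below are illustrative, not the paper's adaptive scheme.

```python
import numpy as np

def bistable_sr(inp, a, b, dt):
    """Euler integration of the overdamped bistable system driven by `inp`."""
    x = np.zeros(len(inp))
    for n in range(1, len(inp)):
        x[n] = x[n - 1] + dt * (a * x[n - 1] - b * x[n - 1]**3 + inp[n - 1])
    return x

def snr_at(x, fs, f0, bw=2.0):
    """Output SNR (dB) measured in a narrow band around the probe frequency f0."""
    spec = np.abs(np.fft.rfft(x))**2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    sig = spec[np.abs(freqs - f0) < bw].sum()
    return 10 * np.log10(sig / (spec.sum() - sig))

fs, f0 = 8000.0, 120.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(2)
weak_input = 0.05 * np.sin(2 * np.pi * f0 * t) + 0.8 * rng.standard_normal(t.size)

# Adapt (a, b) by picking the pair that maximizes the output SNR
best = max(((a, b) for a in (0.5, 1.0, 2.0) for b in (0.5, 1.0, 2.0)),
           key=lambda p: snr_at(bistable_sr(weak_input, *p, 1.0 / fs), fs, f0))
print("adapted (a, b):", best)
```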
Multidisciplinary design optimization for sonic boom mitigation
NASA Astrophysics Data System (ADS)
Ozcer, Isik A.
Automated, parallelized, time-efficient surface definition and grid generation and flow simulation methods are developed for sharp and accurate sonic boom signal computation in three dimensions in the near and mid-field of an aircraft using Euler/Full-Potential unstructured/structured computational fluid dynamics. The full-potential mid-field sonic boom prediction code is an accurate and efficient solver featuring automated grid generation, grid adaptation and shock fitting, and parallel processing. This program quickly marches the solution using a single nonlinear equation for large distances that cannot be covered with Euler solvers due to large memory and long computational time requirements. The solver takes into account variations in temperature and pressure with altitude. The far-field signal prediction is handled using the classical linear Thomas Waveform Parameter Method, where the switching altitude from the nonlinear to linear prediction is determined by convergence of the ground signal pressure impulse value. This altitude is determined as r/L ≈ 10 from the source for a simple lifting wing, and r/L ≈ 40 for a real complex aircraft. The unstructured grid adaptation and shock fitting methodology developed for the near-field analysis employs a Hessian-based anisotropic grid adaptation based on error equidistribution. A special field scalar is formulated to be used in the computation of the Hessian-based error metric, which significantly enhances the adaptation scheme for shocks. The entire cross-flow of a complex aircraft is resolved with high fidelity using only 500,000 grid nodes after only about 10 solution/adaptation cycles. Shock fitting is accomplished using Roe's flux-difference splitting scheme, which is an approximate Riemann-type solver, and by proper alignment of the cell faces with respect to shock surfaces. Simple to complex real aircraft geometries are handled with no user interference required, making the simulation methods suitable tools for product design. The simulation tools are used to optimize three geometries for sonic boom mitigation. The first is a simple axisymmetric shape to be used as a generic nose component, the second is a delta wing with lift, and the third is a real aircraft with nose and wing optimization. The objectives are to minimize the pressure impulse or the peak pressure in the sonic boom signal, while keeping the drag penalty under feasible limits. The design parameters for the meridian profile of the nose shape are the lengths and the half-cone angles of the linear segments that make up the profile. The design parameters for the lifting wing are the dihedral angle, angle of attack, non-linear span-wise twist and camber distribution. The test-bed aircraft is the modified F-5E aircraft built by Northrop Grumman, designated the Shaped Sonic Boom Demonstrator. This aircraft is fitted with an optimized axisymmetric nose, and the wings are optimized to demonstrate optimization for sonic boom mitigation for a real aircraft. The final results predict 42% reduction in bow shock strength, 17% reduction in peak Δp, 22% reduction in pressure impulse, 10% reduction in footprint size, 24% reduction in inviscid drag, and no loss in lift for the optimized aircraft. Optimization is carried out using response surface methodology, and the design matrices are determined using standard DoE techniques for quadratic response modeling.
Nana, Roger; Hu, Xiaoping
2010-01-01
k-space-based reconstruction in parallel imaging depends on the reconstruction kernel setting, including its support. An optimal choice of the kernel depends on the calibration data, coil geometry and signal-to-noise ratio, as well as the criterion used. In this work, data consistency, imposed by the shift invariance requirement of the kernel, is introduced as a goodness measure of k-space-based reconstruction in parallel imaging and demonstrated. Data consistency error (DCE) is calculated as the sum of squared difference between the acquired signals and their estimates obtained based on the interpolation of the estimated missing data. A resemblance between DCE and the mean square error in the reconstructed image was found, demonstrating DCE's potential as a metric for comparing or choosing reconstructions. When used for selecting the kernel support for generalized autocalibrating partially parallel acquisition (GRAPPA) reconstruction and the set of frames for calibration as well as the kernel support in temporal GRAPPA reconstruction, DCE led to improved images over existing methods. Data consistency error is efficient to evaluate, robust for selecting reconstruction parameters and suitable for characterizing and optimizing k-space-based reconstruction in parallel imaging.
The modeling of MMI structures for signal processing applications
NASA Astrophysics Data System (ADS)
Le, Thanh Trung; Cahill, Laurence W.
2008-02-01
Microring resonators are promising candidates for photonic signal processing applications. However, almost all resonators that have been reported so far use directional couplers or 2×2 multimode interference (MMI) couplers as the coupling element between the ring and the bus waveguides. In this paper, instead of using 2×2 couplers, novel structures for microring resonators based on 3×3 MMI couplers are proposed. The characteristics of the device are derived using the modal propagation method, and the device parameters are optimized using numerical methods. Optical switches and filters based on silicon-on-insulator (SOI) technology have then been designed and analyzed. This device can become a new basic component for further applications in optical signal processing. The paper concludes with some further examples of photonic signal processing circuits based on MMI couplers.
On estimating the phase of periodic waveform in additive Gaussian noise, part 2
NASA Astrophysics Data System (ADS)
Rauch, L. L.
1984-11-01
Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight to the estimation problem for the small noise and large noise cases.
On Estimating the Phase of Periodic Waveform in Additive Gaussian Noise, Part 2
NASA Technical Reports Server (NTRS)
Rauch, L. L.
1984-01-01
Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight to the estimation problem for the small noise and large noise cases.
Mehand, Massinissa Si; Srinivasan, Bala; De Crescenzo, Gregory
2015-01-01
Surface plasmon resonance-based biosensors have been successfully applied to the study of the interactions between macromolecules and small molecular weight compounds. In an effort to increase the throughput of these SPR-based experiments, we have already proposed to inject multiple compounds simultaneously over the same surface. When specifically applied to small molecular weight compounds, such a strategy would however require prior knowledge of the refractive index increment of each compound in order to correctly interpret the recorded signal. An additional experiment is typically required to obtain this information. In this manuscript, we show that through the introduction of an additional global parameter corresponding to the ratio of the saturating signals associated with each molecule, the kinetic parameters could be identified with similar confidence intervals without any other experimentation. PMID:26515024
Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B
2015-10-06
Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M.; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E.; Allen, Peter J.; Sempere, Lorenzo F.; Haab, Brian B.
2016-01-01
Certain experiments involve the high-throughput quantification of image data, thus requiring algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multi-color, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu’s method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978
NASA Astrophysics Data System (ADS)
Weng, Yi; Wang, Junyi; He, Xuan; Pan, Zhongqi
2018-02-01
The Nyquist spectral shaping techniques facilitate a promising solution to enhance spectral efficiency (SE) and further reduce the cost-per-bit in high-speed wavelength-division multiplexing (WDM) transmission systems. In principle, Nyquist WDM signals with arbitrary shapes can be generated by the use of digital signal processing (DSP) based electrical filters (E-filters). Nonetheless, in actual 100G/200G coherent systems, the performance as well as the DSP complexity are increasingly restricted by cost and power consumption. Hence it is indispensable to optimize the DSP to accomplish the preferred performance at the least complexity. In this paper, we systematically investigated the minimum requirements and challenges of Nyquist WDM signal generation, particularly for higher-order modulation formats, including 16 quadrature amplitude modulation (16QAM) and 64QAM. A variety of interrelated parameters, such as channel spacing and roll-off factor, have been evaluated to optimize the requirements on the digital-to-analog converter (DAC) resolution and transmitter E-filter bandwidth. The impact of spectral pre-emphasis has been notably enhanced via the proposed interleaved DAC architecture by at least 4%, hence reducing the required optical signal-to-noise ratio (OSNR) at a bit error rate (BER) of 10⁻³ by over 0.45 dB at a channel spacing of 1.05 times the symbol rate and an optimized roll-off factor of 0.1. Furthermore, the sampling rate requirements for different types of super-Gaussian E-filters are discussed for 64QAM Nyquist WDM transmission systems. Finally, the impact of the non-50% duty cycle error between sub-DACs upon the quality of the generated signals for the interleaved DAC structure has been analyzed.
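A minimal sketch of the shaping element discussed above: a frequency-domain root-raised-cosine (RRC) response with a small roll-off factor, sampled onto a DSP filter grid. The roll-off value 0.1 is taken from the text; the symbol rate and grid size are arbitrary examples, and in practice this response would be combined with a pre-emphasis term compensating the DAC roll-off.

```python
import numpy as np

def rrc_response(freqs, rs, beta):
    """Root-raised-cosine magnitude response; rs = symbol rate, beta = roll-off."""
    f = np.abs(freqs)
    h = np.zeros_like(f)
    flat = f <= (1 - beta) * rs / 2
    edge = (f > (1 - beta) * rs / 2) & (f <= (1 + beta) * rs / 2)
    h[flat] = 1.0
    h[edge] = np.sqrt(0.5 * (1 + np.cos(np.pi / (beta * rs)
                                        * (f[edge] - (1 - beta) * rs / 2))))
    return h

rs = 32e9                                          # 32 GBd, illustrative
freqs = np.fft.fftfreq(4096, d=1.0 / (2 * rs))     # 2 samples per symbol
H = rrc_response(freqs, rs, beta=0.1)              # roll-off factor 0.1 as in the text
```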
Monjure, C. J.; Tatum, C. D.; Panganiban, A. T.; Arainga, M.; Traina-Dorge, V.; Marx, P. A.; Didier, E. S.
2014-01-01
Introduction Quantification of plasma viral load (PVL) is used to monitor disease progression in SIV-infected macaques. This study was aimed at optimizing the performance characteristics of the quantitative PCR (qPCR) PVL assay. Methods The PVL quantification procedure was optimized by inclusion of an exogenous control Hepatitis C Virus armored RNA (aRNA), a plasma concentration step, extended digestion with proteinase K, and a second RNA elution step. Efficiency of viral RNA (vRNA) extraction was compared using several commercial vRNA extraction kits. Various parameters of qPCR targeting the gag region of SIVmac239, SIVsmE660 and the LTR region of SIVagmSAB were also optimized. Results Modifications of the SIV PVL qPCR procedure increased vRNA recovery, reduced inhibition and improved analytical sensitivity. The PVL values determined by this SIV PVL qPCR correlated with quantification results of SIV-RNA in the same samples using the “industry standard” method of branched-DNA (bDNA) signal amplification. Conclusions Quantification of SIV genomic RNA in plasma of rhesus macaques using this optimized SIV PVL qPCR is equivalent to the bDNA signal amplification method, less costly and more versatile. Use of heterologous aRNA as an internal control is useful for optimizing performance characteristics of PVL qPCRs. PMID:24266615
Monjure, C J; Tatum, C D; Panganiban, A T; Arainga, M; Traina-Dorge, V; Marx, P A; Didier, E S
2014-02-01
Quantification of plasma viral load (PVL) is used to monitor disease progression in SIV-infected macaques. This study was aimed at optimizing the performance characteristics of the quantitative PCR (qPCR) PVL assay. The PVL quantification procedure was optimized by inclusion of an exogenous control hepatitis C virus armored RNA (aRNA), a plasma concentration step, extended digestion with proteinase K, and a second RNA elution step. Efficiency of viral RNA (vRNA) extraction was compared using several commercial vRNA extraction kits. Various parameters of qPCR targeting the gag region of SIVmac239, SIVsmE660, and the LTR region of SIVagmSAB were also optimized. Modifications of the SIV PVL qPCR procedure increased vRNA recovery, reduced inhibition and improved analytical sensitivity. The PVL values determined by this SIV PVL qPCR correlated with quantification results of SIV RNA in the same samples using the 'industry standard' method of branched-DNA (bDNA) signal amplification. Quantification of SIV genomic RNA in plasma of rhesus macaques using this optimized SIV PVL qPCR is equivalent to the bDNA signal amplification method, less costly and more versatile. Use of heterologous aRNA as an internal control is useful for optimizing performance characteristics of PVL qPCRs. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Optimal wavelength selection for noncontact reflection photoplethysmography
NASA Astrophysics Data System (ADS)
Corral Martinez, Luis F.; Paez, Gonzalo; Strojnik, Marija
2011-08-01
In this work, we obtain backscattered signals from the human forehead for wavelengths from 380 to 980 nm. The results reveal bands with strong pulsatile signals that carry useful information. We describe those bands as the most suitable wavelengths in the visible and NIR regions from which heart and respiratory rate parameters can be derived using long-distance non-contact reflection photoplethysmography analysis. These results show the feasibility of a novel technique for remote detection of vital signs in humans. This technique, which may include morphological analysis or maps of tissue oxygenation, is a further step toward real non-invasive remote monitoring of patients.
Park, Hyun; Yoon, Sang-Wook; Sokolov, Amit
2015-12-01
Magnetic Resonance-guided Focused Ultrasound Surgery (MRgFUS) is a non-invasive method to treat uterine fibroids. To help determine patient suitability for MRgFUS, we propose a new objective measure: the scaled signal intensity (SSI) of uterine fibroids in T2-weighted MR images (T2WI). Forty-three uterine fibroids in 40 premenopausal women were included in this retrospective study. The SSI of each fibroid was measured from the screening T2WI by standardizing its mean signal intensity to a 0-100 scale, using reference intensities of the rectus abdominis muscle (0) and subcutaneous fat (100). The correlation between the SSI and the non-perfused volume (NPV) ratio (a measure of treatment success) was calculated. Pre-treatment SSI showed a significant inverse correlation with the post-treatment NPV ratio (p < 0.05). When dichotomizing the NPV ratio at 45%, the optimal cut-off value of the SSI was found to be 16.0. A fibroid with an SSI value of 16.0 or less can be expected to have an optimal response. The SSI of uterine fibroids in T2WI can be suggested as an objective parameter to help in patient selection for MRgFUS. • Signal intensity of fibroid in MR images predicts treatment response to MRgFUS. • Signal intensity is standardized into scaled form using adjacent tissues as references. • Fibroids with SSI less than 16.0 are expected to have optimal responses.
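A minimal sketch of the SSI computation as the linear 0-100 mapping implied by the abstract: the fibroid's mean T2 signal is scaled between the mean intensities of rectus abdominis muscle (0) and subcutaneous fat (100). The ROI means below are illustrative numbers, not patient data.

```python
def scaled_signal_intensity(si_fibroid, si_muscle, si_fat):
    """Map the fibroid's mean T2 signal onto the 0-100 muscle/fat scale."""
    return 100.0 * (si_fibroid - si_muscle) / (si_fat - si_muscle)

ssi = scaled_signal_intensity(si_fibroid=210.0, si_muscle=180.0, si_fat=430.0)
suitable = ssi <= 16.0          # cut-off suggested in the abstract
print(round(ssi, 1), suitable)  # 12.0 True
```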
NASA Astrophysics Data System (ADS)
Maleque, M. A.; Bello, K. A.; Adebisi, A. A.; Akma, N.
2017-03-01
The tungsten inert gas (TIG) torch is one of the most recently adopted heat sources for surface modification of engineering parts, giving results similar to the more expensive high-power laser technique. In this study, a ceramic-based embedded composite coating has been produced from precoated silicon carbide (SiC) powders on an AISI 4340 low alloy steel substrate using the TIG welding torch process. A design of experiments based on the Taguchi approach has been adopted to optimize the TIG cladding process parameters. The L9 orthogonal array and the signal-to-noise ratio were used to study the effect of TIG welding parameters such as arc current, travelling speed, welding voltage and argon flow rate on the tribological response behaviour (wear rate, surface roughness and wear track width). The objective of the study was to identify the optimal design parameters that significantly minimize each of the surface quality characteristics. The analysis of the experimental results revealed that the argon flow rate was the most influential factor contributing to the minimum wear and surface roughness of the modified coating surface. On the other hand, the key factor in reducing the wear scar is the welding voltage. Finally, the convenient and economical Taguchi approach used in this study was efficient in finding optimal factor settings for obtaining minimum wear rate, wear scar and surface roughness responses in TIG-coated surfaces.
NASA Astrophysics Data System (ADS)
Panda, Satyasen
2018-05-01
This paper proposes a modified artificial bee colony (ABC) optimization algorithm based on levy flight swarm intelligence, referred to as artificial bee colony levy flight stochastic walk (ABC-LFSW) optimization, for optical code division multiple access (OCDMA) networks. The ABC-LFSW algorithm is used to solve the asset assignment problem based on signal-to-noise ratio (SNR) optimization in OCDMA networks with quality of service constraints. The proposed optimization using the ABC-LFSW algorithm provides methods for minimizing various noises and interferences, regulating the transmitted power and optimizing the network design to improve the power efficiency of the optical code path (OCP) from source node to destination node. In this regard, an optical system model is proposed for improving the network performance with optimized input parameters. A detailed discussion and simulation results based on transmitted power allocation and the power efficiency of OCPs are included. The experimental results prove the superiority of the proposed network in terms of power efficiency and spectral efficiency in comparison to networks without any power allocation approach.
All-Optical Implementation of the Ant Colony Optimization Algorithm
Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I.; Soci, Cesare
2016-01-01
We report an all-optical implementation of the optimization algorithm for the famous “ant colony” problem. Ant colonies progressively optimize the pathway to food discovered by one of the ants through identifying the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of a graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems. PMID:27222098
A global design of high power Nd3+-Yb3+ co-doped fiber lasers
NASA Astrophysics Data System (ADS)
Fan, Zhang; Chuncan, Wang; Tigang, Ning
2008-09-01
A global optimization method, the niche hybrid genetic algorithm (NHGA) based on fitness sharing and elite replacement, is applied to optimize Nd3+-Yb3+ co-doped fiber lasers (NYDFLs) for maximum signal output power. With an objective function and different pumping powers, five critical parameters of the given NYDFLs (the fiber length, L; the proportion of pump power used to pump Nd3+, η; the Nd3+ and Yb3+ concentrations, NNd and NYb; and the output mirror reflectivity, Rout) are optimized by solving the rate and power propagation equations. Results show that dividing the input pump power equally between 808 nm (Nd3+) and 940 nm (Yb3+) is not an optimal choice, and that the pump power for the Nd3+ ions should be kept around 10-13.78% of the total pump power. Three optimal schemes are obtained by NHGA, and the highest slope efficiency of the laser can reach 80.1%.
Contrast research of CDMA and GSM network optimization
NASA Astrophysics Data System (ADS)
Wu, Yanwen; Liu, Zehong; Zhou, Guangyue
2004-03-01
With the development of mobile telecommunication networks, CDMA users have raised their expectations of network service quality, while operators have shifted their network management objective from signal coverage to performance improvement. Consequently, reasonable layout and optimization of the mobile telecommunication network, reasonable configuration of network resources, improvement of service quality, and strengthening of the enterprise's core competitiveness have all become concerns of the operating companies. This paper first examines the workflow of CDMA network optimization, and then discusses some key issues in CDMA network optimization, such as PN code assignment and soft handover calculation. Since GSM is a cellular mobile telecommunication system similar to CDMA, the paper also presents a detailed comparison of CDMA and GSM network optimization, covering both similarities and differences. In conclusion, network optimization is a long-term task that runs through the whole process of network construction. By adjusting the network hardware (e.g., BTS equipment, RF systems) and software (e.g., parameter, configuration, and capacity optimization), network optimization can improve the performance and service quality of the network.
NASA Astrophysics Data System (ADS)
Kamiński, K.; Dobrowolski, A. P.
2017-04-01
The paper presents the architecture and the results of optimization of selected elements of an Automatic Speaker Recognition (ASR) system that uses Gaussian Mixture Models (GMM) in the classification process. Optimization was performed on the process of selecting individual characteristics, using a genetic algorithm, and on the parameters of the Gaussian distributions used to describe individual voices. The developed system was tested in order to evaluate the impact of different compression methods used, among others, in landline, mobile, and VoIP telephony systems on the effectiveness of speaker identification. Results are also presented for the effectiveness of speaker identification at specific levels of noise added to the speech signal and in the presence of other disturbances that can appear during phone calls, which makes it possible to specify the spectrum of applications of the presented ASR system.
NASA Astrophysics Data System (ADS)
Saadat, S. A.; Safari, A.; Needell, D.
2016-06-01
The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered regarding unknown parameters that are sensitive to the data perturbations. Therefore, an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm aims to regularize the norm of the solution vector, while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies optimal sparsity-level of the problem is determined. The results show that the SOMP algorithm finds the stabilized solution for the ill-posed problem at the optimal sparsity-level, improving upon existing sparsity based approaches.
Kent, A R; Grill, W M
2012-01-01
Deep brain stimulation (DBS) is an effective treatment for movement disorders, but the selection of stimulus parameters is a clinical burden and often yields sub-optimal outcomes for patients. Measurement of electrically evoked compound action potentials (ECAPs) during DBS could offer insight into the type and spatial extent of neural element activation and provide a potential feedback signal for the rational selection of stimulus parameters and closed-loop DBS. However, recording ECAPs presents a significant technical challenge due to the large stimulus artefact, which can saturate recording amplifiers and distort short latency ECAP signals. We developed DBS-ECAP recording instrumentation combining commercial amplifiers and circuit elements in a serial configuration to reduce the stimulus artefact and enable high fidelity recording. We used an electrical circuit equivalent model of the instrumentation to understand better the sources of the stimulus artefact and the mechanisms of artefact reduction by the circuit elements. In vitro testing validated the capability of the instrumentation to suppress the stimulus artefact and increase gain by a factor of 1,000 to 5,000 compared to a conventional biopotential amplifier. The distortion of mock ECAP (mECAP) signals was measured across stimulation parameters, and the instrumentation enabled high fidelity recording of mECAPs with latencies of only 0.5 ms for DBS pulse widths of 50 to 100 μs/phase. Subsequently, the instrumentation was used to record in vivo ECAPs, without contamination by the stimulus artefact, during thalamic DBS in an anesthetized cat. The characteristics of the physiological ECAP were dependent on stimulation parameters. The novel instrumentation enables high fidelity ECAP recording and advances the potential use of the ECAP as a feedback signal for the tuning of DBS parameters. PMID:22510375
Combinatorial influence of environmental parameters on transcription factor activity.
Knijnenburg, T A; Wessels, L F A; Reinders, M J T
2008-07-01
Cells receive a wide variety of environmental signals, which are often processed combinatorially to generate specific genetic responses. Changes in transcript levels, as observed across different environmental conditions, can, to a large extent, be attributed to changes in the activity of transcription factors (TFs). However, in unraveling these transcription regulation networks, the actual environmental signals are often not incorporated into the model, simply because they have not been measured. The unquantified heterogeneity of the environmental parameters across microarray experiments frustrates regulatory network inference. We propose an inference algorithm that models the influence of environmental parameters on gene expression. The approach is based on a yeast microarray compendium of chemostat steady-state experiments. Chemostat cultivation enables the accurate control and measurement of many of the key cultivation parameters, such as nutrient concentrations, growth rate and temperature. The observed transcript levels are explained by inferring the activity of TFs in response to combinations of cultivation parameters. The interplay between activated enhancers and repressors that bind a gene promoter determine the possible up- or downregulation of the gene. The model is translated into a linear integer optimization problem. The resulting regulatory network identifies the combinatorial effects of environmental parameters on TF activity and gene expression. The Matlab code is available from the authors upon request. Supplementary data are available at Bioinformatics online.
A genetic algorithm approach to estimate glacier mass variations from GRACE data
NASA Astrophysics Data System (ADS)
Reimond, Stefan; Klinger, Beate; Krauss, Sandro; Mayer-Gürr, Torsten
2017-04-01
The application of a genetic algorithm (GA) to the inference of glacier mass variations with a point-mass modeling method is described. GRACE K-band ranging data (available since April 2002) processed at the Graz University of Technology serve as input for this study. The reformulation of the point-mass inversion method in terms of an optimization problem is motivated by two reasons: first, an improved choice of the positions of the modeled point-masses (with a particular focus on the depth parameter) is expected to increase the signal-to-noise ratio. Considering these coordinates as additional unknown parameters (besides from the mass change magnitudes) results in a highly non-linear optimization problem. The second reason is that the mass inversion from satellite tracking data is an ill-posed problem, and hence regularization becomes necessary. The main task in this context is the determination of the regularization parameter, which is typically done by means of heuristic selection rules like, e.g., the L-curve criterion. In this study, however, the challenge of selecting a suitable balancing parameter (or even a matrix) is tackled by introducing regularization to the overall optimization problem. Based on this novel approach, estimations of ice-mass changes in various alpine glacier systems (e.g. Svalbard) are presented and compared to existing results and alternative inversion methods.
Co-operation of digital nonlinear equalizers and soft-decision LDPC FEC in nonlinear transmission.
Tanimura, Takahito; Oda, Shoichiro; Hoshida, Takeshi; Aoki, Yasuhiko; Tao, Zhenning; Rasmussen, Jens C
2013-12-30
We experimentally and numerically investigated the characteristics of 128 Gb/s dual polarization - quadrature phase shift keying signals received with two types of nonlinear equalizers (NLEs) followed by soft-decision (SD) low-density parity-check (LDPC) forward error correction (FEC). Successful co-operation among SD-FEC and NLEs over various nonlinear transmissions were demonstrated by optimization of parameters for NLEs.
NASA Astrophysics Data System (ADS)
Natarajan, S.; Pitchandi, K.; Mahalakshmi, N. V.
2018-02-01
The performance and emission characteristics of a PPCCI engine fuelled with ethanol and diesel blends were investigated on a single cylinder, air-cooled CI engine. In order to achieve the optimal process response with a limited number of experimental cycles, multi-objective grey relational analysis was applied to solve the multiple-response optimization problem. Using the grey relational grade and the signal-to-noise ratio as a performance index, a combination of input parameters was determined so as to achieve optimum response characteristics. It was observed that a 20% premixed ratio of the blend was most suitable for use in a PPCCI engine without significantly affecting the engine performance and emission characteristics.
Wavelet-bounded empirical mode decomposition for measured time series analysis
NASA Astrophysics Data System (ADS)
Moore, Keegan J.; Kurt, Mehmet; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.
2018-01-01
Empirical mode decomposition (EMD) is a powerful technique for separating the transient responses of nonlinear and nonstationary systems into finite sets of nearly orthogonal components, called intrinsic mode functions (IMFs), which represent the dynamics on different characteristic time scales. However, a deficiency of EMD is the mixing of two or more components in a single IMF, which can drastically affect the physical meaning of the empirical decomposition results. In this paper, we present a new approach based on EMD, designated wavelet-bounded empirical mode decomposition (WBEMD), which is a closed-loop, optimization-based solution to the problem of mode mixing. The optimization routine relies on maximizing the isolation of an IMF around a characteristic frequency. This isolation is measured by fitting a bounding function around the IMF in the frequency domain and computing the area under this function. It follows that a large (small) area corresponds to a poorly (well) separated IMF. An optimization routine is developed based on this result with the objective of minimizing the bounding-function area and with the masking-signal parameters serving as free parameters, such that a well-separated IMF is extracted. As examples of the application of WBEMD we apply the proposed method first to a stationary, two-component signal, and then to the numerically simulated response of a cantilever beam with an essentially nonlinear end attachment. We find that WBEMD vastly improves upon EMD and that the extracted sets of IMFs provide insight into the underlying physics of the response of each system.
Olama, Mohammed M.; Ma, Xiao; Killough, Stephen M.; ...
2015-03-12
In recent years, there has been great interest in using hybrid spread-spectrum (HSS) techniques for commercial applications, particularly in the Smart Grid, in addition to their inherent uses in military communications. This is because HSS can accommodate high data rates with high link integrity, even in the presence of significant multipath effects and interfering signals. A highly useful form of this transmission technique for many types of command, control, and sensing applications is the specific code-related combination of standard direct sequence modulation with fast frequency hopping, denoted hybrid DS/FFH, wherein multiple frequency hops occur within a single data-bit time. In this paper, error-probability analyses are performed for a hybrid DS/FFH system over standard Gaussian and fading-type channels, progressively including the effects from wide- and partial-band jamming, multi-user interference, and varying degrees of Rayleigh and Rician fading. In addition, an optimization approach is formulated that minimizes the bit-error performance of a hybrid DS/FFH communication system and solves for the resulting system design parameters. The optimization objective function is non-convex and can be solved by applying the Karush-Kuhn-Tucker conditions. We also present our efforts toward exploring the design, implementation, and evaluation of a hybrid DS/FFH radio transceiver using a single FPGA. Numerical and experimental results are presented under widely varying design parameters to demonstrate the adaptability of the waveform for varied harsh smart grid RF signal environments.
Wang, Fei; Salous, Sana; Zhou, Jianjiang
2017-01-01
In this paper, we investigate a low probability of intercept (LPI)-based optimal power allocation strategy for a joint bistatic radar and communication system, which is composed of a dedicated transmitter, a radar receiver, and a communication receiver. The joint system is capable of fulfilling the requirements of both radar and communications simultaneously. First, assuming that the signal-to-noise ratio (SNR) corresponding to the target surveillance path is much weaker than that corresponding to the line-of-sight path at the radar receiver, an analytically closed-form expression for the probability of false alarm is calculated, whereas the closed-form expression for the probability of detection is not analytically tractable and is approximated, because the received signals are not zero-mean Gaussian under the target-present hypothesis. Then, an LPI-based optimal power allocation strategy is presented to minimize the total transmission power for the information signal and radar waveform, which is constrained by a specified information rate for the communication receiver and the desired probabilities of detection and false alarm for the radar receiver. The well-known bisection search method is employed to solve the resulting constrained optimization problem. Finally, numerical simulations are provided to reveal the effects of several system parameters on the power allocation results. It is also demonstrated that the LPI performance of the joint bistatic radar and communication system can be markedly improved by utilizing the proposed scheme. PMID:29186850
Shi, Chenguang; Wang, Fei; Salous, Sana; Zhou, Jianjiang
2017-11-25
In this paper, we investigate a low probability of intercept (LPI)-based optimal power allocation strategy for a joint bistatic radar and communication system, which is composed of a dedicated transmitter, a radar receiver, and a communication receiver. The joint system is capable of fulfilling the requirements of both radar and communications simultaneously. First, assuming that the signal-to-noise ratio (SNR) corresponding to the target surveillance path is much weaker than that corresponding to the line-of-sight path at the radar receiver, an analytically closed-form expression for the probability of false alarm is calculated, whereas the closed-form expression for the probability of detection is not analytically tractable and is approximated, because the received signals are not zero-mean Gaussian under the target-present hypothesis. Then, an LPI-based optimal power allocation strategy is presented to minimize the total transmission power for the information signal and radar waveform, which is constrained by a specified information rate for the communication receiver and the desired probabilities of detection and false alarm for the radar receiver. The well-known bisection search method is employed to solve the resulting constrained optimization problem. Finally, numerical simulations are provided to reveal the effects of several system parameters on the power allocation results. It is also demonstrated that the LPI performance of the joint bistatic radar and communication system can be markedly improved by utilizing the proposed scheme.
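As a rough illustration of the bisection step described above, the sketch below finds the minimum transmit power that satisfies a single monotone constraint; the toy rate constraint, the function names, and all numbers are illustrative assumptions rather than the authors' joint radar-communication formulation.

```python
import math

def bisect_min_power(constraint_met, p_lo=0.0, p_hi=100.0, tol=1e-6):
    """Find the minimum power p in [p_lo, p_hi] such that constraint_met(p) is True.

    Assumes constraint_met is monotone in p: False below some threshold, True above.
    """
    if not constraint_met(p_hi):
        raise ValueError("constraint cannot be met even at maximum power")
    while p_hi - p_lo > tol:
        p_mid = 0.5 * (p_lo + p_hi)
        if constraint_met(p_mid):
            p_hi = p_mid      # feasible: try lower power
        else:
            p_lo = p_mid      # infeasible: need more power
    return p_hi

# Toy example: require an information rate of at least 2 bit/s/Hz over an AWGN-like link.
rate_target = 2.0
channel_gain, noise_power = 0.8, 1.0
rate = lambda p: math.log2(1.0 + channel_gain * p / noise_power)

p_min = bisect_min_power(lambda p: rate(p) >= rate_target)
print(f"minimum power meeting the rate constraint: {p_min:.4f}")
```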
Maclean, Brendan; Tomazela, Daniela M; Abbatiello, Susan E; Zhang, Shucha; Whiteaker, Jeffrey R; Paulovich, Amanda G; Carr, Steven A; Maccoss, Michael J
2010-12-15
Proteomics experiments based on Selected Reaction Monitoring (SRM, also referred to as Multiple Reaction Monitoring or MRM) are being used to target large numbers of protein candidates in complex mixtures. At present, instrument parameters are often optimized for each peptide, a time- and resource-intensive process. Large SRM experiments are greatly facilitated by the ability to predict MS instrument parameters that work well with the broad diversity of peptides they target. For this reason, we investigated the impact of using simple linear equations to predict the collision energy (CE) on peptide signal intensity and compared it with the empirical optimization of the CE for each peptide and transition individually. Using optimized linear equations, empirical optimization was found to yield an average gain of only 7.8% in total peak area over the predicted CE values. We also found that existing commonly used linear equations fall short of their potential, and should be recalculated for each charge state and when introducing new instrument platforms. We provide a fully automated pipeline for calculating these equations and for individually optimizing the CE of each transition on SRM instruments from Agilent, Applied Biosystems, Thermo-Scientific and Waters in the open source Skyline software tool (http://proteome.gs.washington.edu/software/skyline).
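To make the idea of per-charge-state linear CE equations concrete, here is a minimal sketch; the slope and intercept values are hypothetical placeholders and should be re-derived per charge state and instrument platform, as the abstract recommends, rather than taken as Skyline's actual defaults.

```python
# Hypothetical per-charge-state coefficients (slope, intercept); real values are
# instrument-specific and should be re-derived empirically.
CE_COEFFICIENTS = {
    2: (0.034, 3.0),   # doubly charged precursors
    3: (0.044, 3.5),   # triply charged precursors
}

def predict_collision_energy(precursor_mz: float, charge: int) -> float:
    """Predict collision energy from a linear equation CE = slope * (m/z) + intercept."""
    slope, intercept = CE_COEFFICIENTS[charge]
    return slope * precursor_mz + intercept

print(predict_collision_energy(547.3, 2))   # ~21.6 with the hypothetical coefficients
```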
Optimal exposure techniques for iodinated contrast enhanced breast CT
NASA Astrophysics Data System (ADS)
Glick, Stephen J.; Makeev, Andrey
2016-03-01
Screening for breast cancer using mammography has been very successful in the effort to reduce breast cancer mortality, and its use has largely resulted in the 30% reduction in breast cancer mortality observed since 1990 [1]. However, diagnostic mammography remains an area of breast imaging that is in great need of improvement. One imaging modality proposed for improving the accuracy of diagnostic workup is iodinated contrast-enhanced breast CT [2]. In this study, a mathematical framework is used to evaluate optimal exposure techniques for contrast-enhanced breast CT. The ideal observer signal-to-noise ratio (i.e., d') figure-of-merit is used to provide a task-performance-based assessment of optimal acquisition parameters under the assumptions of a linear, shift-invariant imaging system. A parallel-cascade model was used to estimate signal and noise propagation through the detector, and a realistic lesion model with iodine uptake was embedded into a structured breast background. Ideal observer performance was investigated across kVp settings, filter materials, and filter thicknesses. Results indicated that many kVp spectrum/filter combinations can improve performance over currently used x-ray spectra.
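For reference, under the linear, shift-invariant and Gaussian assumptions mentioned above, the ideal-observer detectability for a known signal difference in noise with covariance K is conventionally written as

\[
d'^{2} \;=\; \Delta\mathbf{s}^{\mathsf T}\,\mathbf{K}^{-1}\,\Delta\mathbf{s},
\]

where Δs is the mean difference between lesion-present and lesion-absent images and K is the image noise covariance; this is the generic textbook form, not necessarily the exact expression used in the study.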
Neonatal non-contact respiratory monitoring based on real-time infrared thermography
2011-01-01
Background: Monitoring of vital parameters is an important topic in neonatal daily care. Progress in computational intelligence and medical sensors has facilitated the development of smart bedside monitors that can integrate multiple parameters into a single monitoring system. This paper describes non-contact monitoring of neonatal vital signals based on infrared thermography as a new biomedical engineering application. One signal of clinical interest is the spontaneous respiration rate of the neonate. It will be shown that the respiration rate of neonates can be monitored by analyzing the temperature profile of the anterior naris (nostrils) associated with the successive inspiration and expiration phases. Objective: The aim of this study is to develop and investigate a new non-contact respiration monitoring modality for the neonatal intensive care unit (NICU) using infrared thermography imaging. This development includes the subsequent image processing (region of interest (ROI) detection) and its optimization, as well as further optimization of the method so that it can serve as a physiological measurement inside NICU wards. Results: Continuous wavelet transformation based on the Daubechies wavelet was applied to detect the breathing signal within an image stream. Respiration was successfully monitored based on a 0.3°C to 0.5°C temperature difference between the inspiration and expiration phases. Conclusions: Although this method has been applied to adults before, this is the first time it has been used in a newborn infant population inside the NICU. The promising results suggest including this technology in advanced NICU monitors. PMID:22243660
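A minimal sketch of the signal-extraction step is given below; it substitutes a simple spectral-peak estimate for the paper's Daubechies-based continuous wavelet analysis, and the ROI trace, frame rate, and frequency band are illustrative assumptions.

```python
import numpy as np

def respiration_rate_bpm(roi_temperature, fps, f_lo=0.3, f_hi=2.0):
    """Estimate respiration rate (breaths/min) from a nostril-ROI temperature trace.

    roi_temperature: 1-D array of the mean ROI temperature per thermal frame.
    fps: frame rate of the thermal camera in Hz.
    The dominant spectral peak between f_lo and f_hi Hz (~18-120 breaths/min)
    is taken as the breathing frequency.
    """
    x = np.asarray(roi_temperature, dtype=float)
    x = x - x.mean()                              # remove the baseline temperature
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    f_breath = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f_breath

# Synthetic check: a +/-0.2 degC oscillation at 0.8 Hz (48 breaths/min) sampled at 30 fps.
t = np.arange(0, 60, 1 / 30.0)
trace = 36.0 + 0.2 * np.sin(2 * np.pi * 0.8 * t) + 0.05 * np.random.randn(t.size)
print(respiration_rate_bpm(trace, fps=30.0))      # ~48
```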
Using Diurnal Temperature Signals to Infer Vertical Groundwater-Surface Water Exchange.
Irvine, Dylan J; Briggs, Martin A; Lautz, Laura K; Gordon, Ryan P; McKenzie, Jeffrey M; Cartwright, Ian
2017-01-01
Heat is a powerful tracer to quantify fluid exchange between surface water and groundwater. Temperature time series can be used to estimate pore water fluid flux, and techniques can be employed to extend these estimates to produce detailed plan-view flux maps. Key advantages of heat tracing include cost-effective sensors and ease of data collection and interpretation, without the need for expensive and time-consuming laboratory analyses or induced tracers. While the collection of temperature data in saturated sediments is relatively straightforward, several factors influence the reliability of flux estimates that are based on time series analysis (diurnal signals) of recorded temperatures. Sensor resolution and deployment are particularly important in obtaining robust flux estimates in upwelling conditions. Also, processing temperature time series data involves a sequence of complex steps, including filtering temperature signals, selection of appropriate thermal parameters, and selection of the optimal analytical solution for modeling. This review provides a synthesis of heat tracing using diurnal temperature oscillations, including details on optimal sensor selection and deployment, data processing, model parameterization, and an overview of computing tools available. Recent advances in diurnal temperature methods also provide the opportunity to determine local saturated thermal diffusivity, which can improve the accuracy of fluid flux modeling and sensor spacing, which is related to streambed scour and deposition. These parameters can also be used to determine the reliability of flux estimates from the use of heat as a tracer. © 2016, National Ground Water Association.
Yang, Xin-an; Zhang, Wang-bing
2013-01-01
A simple and green flow injection chemiluminescence (FI-CL) method for the determination of the fungicide azoxystrobin was described for the first time. A CL signal was generated when azoxystrobin was injected into a mixed stream of luminol and KMnO4. The CL signal of azoxystrobin could be greatly improved when an off-line ultrasonic treatment was adopted, and the signal intensity increased proportionally with the analyte concentration. Several variables, such as the ultrasonic parameters, the flow rate of the reagents, and the concentrations of the sodium hydroxide solution and the CL reagents (potassium permanganate, luminol), were investigated, and the optimal CL conditions were obtained. Under optimal conditions, a linear range of 1-100 ng/mL for azoxystrobin was obtained and the detection limit (3σ) was determined to be 0.13 ng/mL. The relative standard deviation was 1.5% for 10 consecutive measurements of 20 ng/mL azoxystrobin. The method has been applied to the determination of azoxystrobin residues in water samples. Copyright © 2012 John Wiley & Sons, Ltd.
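The 3σ detection limit quoted above presumably follows the usual calibration-based definition,

\[
\mathrm{LOD} \;=\; \frac{3\,\sigma_{\text{blank}}}{m},
\]

where σ_blank is the standard deviation of the blank (background) CL signal and m is the slope of the calibration line.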
Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing
2016-01-01
A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it uses statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference (noise) signal and the original signal, and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006
Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing
2016-01-08
A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it uses statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference (noise) signal and the original signal, and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.
Selective Data Acquisition in NMR. The Quantification of Anti-phase Scalar Couplings
NASA Astrophysics Data System (ADS)
Hodgkinson, P.; Holmes, K. J.; Hore, P. J.
Almost all time-domain NMR experiments employ "linear sampling," in which the NMR response is digitized at equally spaced times, with uniform signal averaging. Here, the possibilities of nonlinear sampling are explored using anti-phase doublets in the indirectly detected dimensions of multidimensional COSY-type experiments as an example. The Cramér-Rao lower bounds are used to evaluate and optimize experiments in which the sampling points, or the extent of signal averaging at each point, or both, are varied. The optimal nonlinear sampling for the estimation of the coupling constant J, by model fitting, turns out to involve just a few key time points, for example, at the first node (t = 1/J) of the sin(πJt) modulation. Such sparse sampling patterns can be used to derive more practical strategies, in which the sampling or the signal averaging is distributed around the most significant time points. The improvements in the quantification of NMR parameters can be quite substantial, especially when, as is often the case for indirectly detected dimensions, the total number of samples is limited by the time available.
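As a reminder of the machinery invoked here, for a signal model s(t; J) observed at times t_i with independent Gaussian noise of variance σ², the single-parameter Cramér-Rao lower bound takes the generic form

\[
\operatorname{var}\!\left(\hat J\right) \;\ge\; \left[\frac{1}{\sigma^{2}} \sum_{i} \left(\frac{\partial s(t_i; J)}{\partial J}\right)^{2}\right]^{-1},
\]

so sampling schemes that concentrate points where the model is most sensitive to J tighten the bound. This is the standard textbook statement rather than the paper's exact expression.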
Determining the Ocean's Role on the Variable Gravity Field and Earth Rotation
NASA Technical Reports Server (NTRS)
Ponte, Rui M.; Frey, H. (Technical Monitor)
2000-01-01
A number of ocean models of different complexity have been used to study changes in the oceanic angular momentum (OAM) and mass fields and their relation to the variable Earth rotation and gravity field. Time scales examined range from seasonal to a few days. Results point to the importance of oceanic signals in driving polar motion, in particular the Chandler and annual wobbles. Results also show that oceanic signals have a measurable impact on length-of-day variations. Various circulation features and associated mass signals, including the North Pacific subtropical gyre, the equatorial currents, and the Antarctic Circumpolar Current, play a significant role in oceanic angular momentum variability. The impact on OAM values of an optimization procedure that uses available data to constrain ocean model results was also tested for the first time. The optimization procedure yielded substantial changes in OAM, related to adjustments in both motion and mass fields, as well as in the wind stress torques acting on the ocean. Constrained OAM values were found to yield noticeable improvements in the agreement with the observed Earth rotation parameters, particularly at the seasonal timescale.
Abrasive wear response of TIG-melted TiC composite coating: Taguchi approach
NASA Astrophysics Data System (ADS)
Maleque, M. A.; Bello, K. A.; Adebisi, A. A.; Dube, A.
2017-03-01
In this study, the Taguchi design-of-experiment approach has been applied to assess the wear behaviour of TiC composite coatings deposited on AISI 4340 steel substrates by novel powder preplacement and TIG torch melting processes. To study the abrasive wear behaviour of these coatings against an alumina ball at 600 °C, a Taguchi orthogonal array is used to acquire the wear test data for determining the optimal parameters that minimize the wear rate. Composite coatings are developed based on a Taguchi L-16 orthogonal array experiment with four process parameters (welding current, welding speed, welding voltage and shielding gas flow rate) at four levels. In this technique, the mean response and the signal-to-noise ratio are used to evaluate the influence of the TIG process parameters on the wear rate of the composite coated surfaces. The results reveal that welding voltage is the most significant control parameter for minimizing wear rate, while the current contributes least to the wear rate reduction. The study also shows that the optimal condition is A3 (90 A), B4 (2.5 mm/s), C3 (30 V) and D3 (20 L/min), which gives the minimum wear rate in TiC embedded coatings. Finally, a confirmatory experiment has been conducted to verify the optimized result and shows that the error between the predicted values and the experimental observation at the optimal condition lies within 4.7%. Thus, the validity of the optimum condition for the coatings is established.
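Since wear rate is a smaller-the-better response, the signal-to-noise ratio behind the reported factor rankings is presumably the standard Taguchi form

\[
S/N \;=\; -10\,\log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n} y_i^{2}\right),
\]

where the y_i are the n replicate wear-rate measurements of a trial; the level with the highest mean S/N is then chosen for each factor.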
Applications of wavelet-based compression to multidimensional Earth science data
NASA Technical Reports Server (NTRS)
Bradley, Jonathan N.; Brislawn, Christopher M.
1993-01-01
A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and a nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.
Performance of field-emitting resonating carbon nanotubes as radio-frequency demodulators
NASA Astrophysics Data System (ADS)
Vincent, P.; Poncharal, P.; Barois, T.; Perisanu, S.; Gouttenoire, V.; Frachon, H.; Lazarus, A.; de Langre, E.; Minoux, E.; Charles, M.; Ziaei, A.; Guillot, D.; Choueib, M.; Ayari, A.; Purcell, S. T.
2011-04-01
We report on a systematic study of the use of resonating nanotubes in a field emission (FE) configuration to demodulate radio frequency signals. We particularly concentrate on how the demodulation depends on the variation of the field amplification factor during resonance. Analytical formulas describing the demodulation are derived as functions of the system parameters. Experiments using AM and FM demodulations in a transmission electron microscope are also presented with a determination of all the pertinent experimental parameters. Finally we discuss the use of CNTs undergoing FE as nanoantennae and the different geometries that could be used for optimization and implementation.
Joint Transmit and Receive Filter Optimization for Sub-Nyquist Delay-Doppler Estimation
NASA Astrophysics Data System (ADS)
Lenz, Andreas; Stein, Manuel S.; Swindlehurst, A. Lee
2018-05-01
In this article, a framework is presented for the joint optimization of the analog transmit and receive filter with respect to a parameter estimation problem. At the receiver, conventional signal processing systems restrict the two-sided bandwidth of the analog pre-filter B to the rate of the analog-to-digital converter f_s to comply with the well-known Nyquist-Shannon sampling theorem. In contrast, here we consider a transceiver that by design violates the common paradigm B ≤ f_s. To this end, at the receiver, we allow for a higher pre-filter bandwidth B > f_s and study the achievable parameter estimation accuracy under a fixed sampling rate when the transmit and receive filter are jointly optimized with respect to the Bayesian Cramér-Rao lower bound. For the case of delay-Doppler estimation, we propose to approximate the required Fisher information matrix and solve the transceiver design problem by an alternating optimization algorithm. The presented approach allows us to explore the Pareto-optimal region spanned by transmit and receive filters which are favorable under a weighted mean squared error criterion. We also discuss the computational complexity of the obtained transceiver design by visualizing the resulting ambiguity function. Finally, we verify the performance of the optimized designs by Monte-Carlo simulations of a likelihood-based estimator.
Evaluation of optimal DNA staining for triggering by scanning fluorescence microscopy (SFM)
NASA Astrophysics Data System (ADS)
Mittag, Anja; Marecka, Monika; Pierzchalski, Arkadiusz; Malkusch, Wolf; Bocsi, József; Tárnok, Attila
2009-02-01
In imaging and flow cytometry, DNA staining is a common trigger signal for cell identification. Selection of the proper DNA dye is restricted by the hardware configuration of the instrument. Zeiss Imaging Solutions GmbH (München, Germany) introduced a new automated scanning fluorescence microscope, the SFM (Axio Imager.Z1), which combines fluorescence imaging with cytometric parameter measurement. The aim of the study was to select optimal DNA dyes as trigger signals for leukocyte detection and subsequent cytometric analysis of double-labeled leukocytes by SFM. Seven DNA dyes (DAPI, Hoechst 33258, Hoechst 33342, POPO-3, PI, 7-AAD, and TOPRO-3) were tested and found to be suitable for the implemented filtersets (fs) of the SFM (fs: 49, fs: 44, fs: 20). EDTA blood was stained after erythrocyte lysis with a DNA dye. Cells were transferred onto microscopic slides and embedded in fluorescent mounting medium. The quality of the DNA fluorescence signal as well as spillover signals were analyzed by SFM. CD45-APC and CD3-PE, as well as CD4-FITC and CD8-APC, were selected for immunophenotyping and used in combination with Hoechst. Among the tested DNA dyes, DAPI showed relatively low spillover and the best CV value. Due to the low spillover of the UV DNA dyes, a triple staining of Hoechst with APC and PE (or APC and FITC, respectively) could be analyzed without difficulty. These results were confirmed by FCM measurements. DNA fluorescence is applicable for identifying and triggering leukocytes in SFM analyses. Although some DNA dyes exhibit strong spillover into other fluorescence channels, it was possible to immunophenotype leukocytes. DAPI appears best suited for use in the SFM system and will be used in protocol setups as the primary parameter.
Basha, Tamer A; Tang, Maxine C; Tsao, Connie; Tschabrunn, Cory M; Anter, Elad; Manning, Warren J; Nezafat, Reza
2018-01-01
The aim was to develop a dark-blood late gadolinium enhancement (DB-LGE) sequence that improves scar-blood contrast and delineation of the scar region. The DB-LGE sequence uses an inversion pulse followed by T2 magnetization preparation to suppress blood and normal myocardium. Time delays inserted after the preparation pulses and the T2 magnetization-preparation duration are used to adjust tissue contrast. Selection of these parameters was optimized using numerical simulations and phantom experiments. We evaluated DB-LGE in 9 swine and 42 patients (56 ± 14 years, 33 male). Improvement in scar-blood contrast and overall image quality was subjectively evaluated by two independent readers (1 = poor, 4 = excellent). The signal ratios among scar, blood, and myocardium were compared. Simulations and phantom studies demonstrated that simultaneous nulling of myocardium and blood can be achieved by selecting appropriate timing parameters. The scar-blood contrast score was significantly higher for DB-LGE (P < 0.001), with no significant difference in overall image quality (P > 0.05). Scar-blood signal ratios for DB-LGE versus LGE were 5.0 ± 2.8 versus 1.5 ± 0.5 (P < 0.001) for patients, and 2.2 ± 0.7 versus 1.0 ± 0.4 (P = 0.0023) for animals. Scar-myocardium signal ratios were 5.7 ± 2.9 versus 6.3 ± 2.6 (P = 0.35) for patients, and 3.7 ± 1.1 versus 4.1 ± 2.0 (P = 0.60) for swine. The DB-LGE sequence simultaneously reduces normal myocardium and blood signal intensity, thereby enhancing scar-blood contrast while preserving scar-myocardium contrast. Magn Reson Med 79:351-360, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
BioPreDyn-bench: a suite of benchmark problems for dynamic modelling in systems biology.
Villaverde, Alejandro F; Henriques, David; Smallbone, Kieran; Bongard, Sophia; Schmid, Joachim; Cicin-Sain, Damjan; Crombach, Anton; Saez-Rodriguez, Julio; Mauch, Klaus; Balsa-Canto, Eva; Mendes, Pedro; Jaeger, Johannes; Banga, Julio R
2015-02-20
Dynamic modelling is one of the cornerstones of systems biology. Many research efforts are currently being invested in the development and exploitation of large-scale kinetic models. The associated problems of parameter estimation (model calibration) and optimal experimental design are particularly challenging. The community has already developed many methods and software packages which aim to facilitate these tasks. However, there is a lack of suitable benchmark problems which allow a fair and systematic evaluation and comparison of these contributions. Here we present BioPreDyn-bench, a set of challenging parameter estimation problems which aspire to serve as reference test cases in this area. This set comprises six problems including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The level of description includes metabolism, transcription, signal transduction, and development. For each problem we provide (i) a basic description and formulation, (ii) implementations ready-to-run in several formats, (iii) computational results obtained with specific solvers, (iv) a basic analysis and interpretation. This suite of benchmark problems can be readily used to evaluate and compare parameter estimation methods. Further, it can also be used to build test problems for sensitivity and identifiability analysis, model reduction and optimal experimental design methods. The suite, including codes and documentation, can be freely downloaded from the BioPreDyn-bench website, https://sites.google.com/site/biopredynbenchmarks/ .
Ramdani, Sofiane; Bonnet, Vincent; Tallon, Guillaume; Lagarde, Julien; Bernard, Pierre Louis; Blain, Hubert
2016-08-01
Entropy measures are often used to quantify the regularity of postural sway time series. Recent methodological developments have provided both multivariate and multiscale approaches allowing the extraction of complexity features from physiological signals; see "Dynamical complexity of human responses: A multivariate data-adaptive framework," Bulletin of the Polish Academy of Sciences: Technical Sciences, vol. 60, p. 433, 2012. The resulting entropy measures are good candidates for the analysis of bivariate postural sway signals exhibiting nonstationarity and multiscale properties. These methods depend on several input parameters, such as embedding parameters. Using two data sets collected from institutionalized frail older adults, we numerically investigate the behavior of a recent multivariate and multiscale entropy estimator; see "Multivariate multiscale entropy: A tool for complexity analysis of multichannel data," Physical Review E, vol. 84, p. 061918, 2011. We propose criteria for the selection of the input parameters. Using these optimal parameters, we statistically compare the multivariate and multiscale entropy values of postural sway data of non-faller subjects to those of fallers. These two groups are discriminated by the resulting measures over multiple time scales. We also demonstrate that the typical parameter settings proposed in the literature lead to entropy measures that do not distinguish the two groups. This last result confirms the importance of selecting appropriate input parameters.
A computer controlled signal preprocessor for laser fringe anemometer applications
NASA Technical Reports Server (NTRS)
Oberle, Lawrence G.
1987-01-01
The operation of most commercially available laser fringe anemometer (LFA) counter-processors assumes that adjustments are made to the signal processing independently of the computer used for reducing the acquired data. Not only does the researcher desire a record of these parameters attached to the acquired data, but changes in flow conditions generally require that these settings be changed to improve data quality. Because of this limitation, on-line modification of the data acquisition parameters can be difficult and time consuming. A computer-controlled signal preprocessor has been developed which makes this optimization of the photomultiplier signal possible as a normal part of the data acquisition process. It allows computer control of the filter selection, signal gain, and photomultiplier voltage. The raw signal from the photomultiplier tube is input to the preprocessor which, under the control of a digital computer, filters the signal and amplifies it to an acceptable level. The counter-processor used at Lewis Research Center generates the particle interarrival times, as well as the time-of-flight of the particle through the probe volume. The signal preprocessor allows computer control of the acquisition of these data. Through the preprocessor, the computer also can control the handshaking signals for the interface between itself and the counter-processor. Finally, the signal preprocessor splits the pedestal from the signal before filtering, monitors the photomultiplier dc current, sends a signal proportional to this current to the computer through an analog-to-digital converter, and provides an alarm if the current exceeds a predefined maximum. Complete drawings and explanations are provided in the text, as well as a sample interface program for use with the data acquisition software.
An improved CS-LSSVM algorithm-based fault pattern recognition of ship power equipments.
Yang, Yifei; Tan, Minjia; Dai, Yuewei
2017-01-01
In practice, the fault monitoring signals of ship power equipment usually provide few samples and the data features are nonlinear. This paper adopts the least squares support vector machine (LSSVM) to deal with the problem of fault pattern identification in the case of small sample data. Meanwhile, in order to avoid the local extrema and poor convergence precision induced by optimizing the kernel function parameter and penalty factor of the LSSVM, an improved Cuckoo Search (CS) algorithm is proposed for parameter optimization. Based on a dynamic adaptive strategy, the newly proposed algorithm improves the recognition probability and the searching step length, which can effectively solve the problems of slow searching speed and low calculation accuracy of the CS algorithm. A benchmark example demonstrates that the CS-LSSVM algorithm can accurately and effectively identify the fault pattern types of ship power equipment.
Investigations of calcium spectral lines in laser-induced breakdown spectroscopy
NASA Astrophysics Data System (ADS)
Ching, Sim Yit; Tariq, Usman; Haider, Zuhaib; Tufail, Kashif; Sabri, Salwanie; Imran, Muhammad; Ali, Jalil
2017-03-01
Laser-induced breakdown spectroscopy (LIBS) is a direct and versatile analytical technique that performs elemental composition analysis based on the optical emission produced by a laser-induced plasma, with little or no sample preparation. The performance of the LIBS technique relies on the choice of experimental conditions, which must be thoroughly explored and optimized for each application. The main parameters affecting LIBS performance are the laser energy, laser wavelength, pulse duration, gate delay, and the geometrical set-up of the focusing and collecting optics. In LIBS quantitative analysis, the gate delay and laser energy are very important parameters that have a pronounced impact on the accuracy of the elemental composition information. The determination of calcium in pelletized samples was investigated in order to optimize the gate delay and laser energy by analyzing the collected emission intensities and the signal-to-background ratio (S/B) at the specified wavelengths.
Automated Design of Complex Dynamic Systems
Hermans, Michiel; Schrauwen, Benjamin; Bienstman, Peter; Dambre, Joni
2014-01-01
Several fields of study are concerned with uniting the concept of computation with that of the design of physical systems. For example, a recent trend in robotics is to design robots in such a way that they require a minimal control effort. Another example is found in the domain of photonics, where recent efforts try to benefit directly from complex nonlinear dynamics to achieve more efficient signal processing. The underlying goal of these and similar research efforts is to internalize a large part of the necessary computations within the physical system itself by exploiting its inherent nonlinear dynamics. This, however, often requires the optimization of large numbers of system parameters, related to both the system's structure as well as its material properties. In addition, many of these parameters are subject to fabrication variability or to variations through time. In this paper we apply a machine learning algorithm to optimize physical dynamic systems. We show that such algorithms, which are normally applied to abstract computational entities, can be extended to the field of differential equations and used to optimize an associated set of parameters which determine their behavior. We show that machine learning training methodologies are highly useful in designing robust systems, and we provide a set of both simple and complex examples using models of physical dynamical systems. Interestingly, the derived optimization method is intimately related to direct collocation, a method known in the field of optimal control. Our work suggests that the application domains of both machine learning and optimal control have a largely unexplored overlapping area which envelopes a novel design methodology of smart and highly complex physical systems. PMID:24497969
Dana, Saswati; Nakakuki, Takashi; Hatakeyama, Mariko; Kimura, Shuhei; Raha, Soumyendu
2011-01-01
Mutation and/or dysfunction of signaling proteins in the mitogen-activated protein kinase (MAPK) signal transduction pathway are frequently observed in various kinds of human cancer. Consistent with this fact, in the present study, we experimentally observe that the epidermal growth factor (EGF) induced activation profile of MAP kinase signaling is not straightforwardly dose-dependent in PC3 prostate cancer cells. To find out which parameters and reactions in the pathway are involved in this departure from the normal dose-dependency, a model-based pathway analysis is performed. The pathway is mathematically modeled with 28 rate equations, yielding as many ordinary differential equations (ODEs) with kinetic rate constants that have been reported to take random values in the existing literature. This has led us to treat the ODE model of the pathway's kinetics as a random differential equation (RDE) system in which the parameters are random variables. We show that our RDE model captures the uncertainty in the kinetic rate constants as seen in the behavior of the experimental data and, more importantly, upon simulation, exhibits the abnormal EGF dose-dependency of the activation profile of MAP kinase signaling in PC3 prostate cancer cells. The most likely set of values of the kinetic rate constants obtained from fitting the RDE model to the experimental data is then used in a direct transcription based dynamic optimization method for computing the changes needed in these kinetic rate constant values for the restoration of the normal EGF dose response. The last computation identifies the parameters, i.e., the kinetic rate constants in the RDE model, that are the most sensitive to the change in the EGF dose response behavior in the PC3 prostate cancer cells. The reactions in which these most sensitive parameters participate emerge as candidate drug targets on the signaling pathway. 2011 Elsevier Ireland Ltd. All rights reserved.
Robust and fast nonlinear optimization of diffusion MRI microstructure models.
Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A
2017-07-15
Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to the comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole-brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies, each with a different acquisition protocol. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade that initializes or fixes parameter values in a later optimization step from simpler models in an earlier optimization step further improved run time, fit, accuracy and precision compared to a single-step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
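The sketch below shows the gradient-free Powell method applied to a deliberately simple mono-exponential diffusion model; the model, data, and starting values are toy assumptions intended only to illustrate the fitting step, not the multi-compartment models or GPU implementation used in the study.

```python
import numpy as np
from scipy.optimize import minimize

# Toy single-voxel example: fit S(b) = S0 * exp(-b * D) with the gradient-free
# Powell method, the algorithm the study found to perform best.
b_values = np.array([0.0, 500.0, 1000.0, 2000.0, 3000.0])          # s/mm^2
true_S0, true_D = 1.0, 1.2e-3                                       # mm^2/s
signal = true_S0 * np.exp(-b_values * true_D)
signal = signal + 0.01 * np.random.randn(b_values.size)             # measurement noise

def sum_of_squares(params):
    S0, D = params
    model = S0 * np.exp(-b_values * D)
    return np.sum((signal - model) ** 2)

fit = minimize(sum_of_squares, x0=[0.8, 0.7e-3], method="Powell")
print("estimated S0, D:", fit.x)
```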
Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim
2016-01-01
Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.
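For context, the Tikhonov-regularized minimum-norm estimate referred to here is usually written as

\[
\hat{\mathbf{S}} \;=\; \mathbf{L}^{\mathsf T}\left(\mathbf{L}\,\mathbf{L}^{\mathsf T} + \lambda\,\mathbf{C}\right)^{-1}\mathbf{M},
\]

where M is the sensor data, L the lead-field matrix, C the noise covariance (often the identity), and λ the regularization coefficient whose optimal value the study examines separately for power and coherence; conventions differ on whether the coefficient is written as λ or λ².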
Increasing spatial resolution and comparison of MR imaging sequences for the inner ear
NASA Astrophysics Data System (ADS)
Snyder, Carl J.; Bolinger, Lizann; Rubinstein, Jay T.; Wang, Ge
2002-04-01
The size and location of the cochlea and cochlear nerve are needed to assess the feasibility of cochlear implantation, provide information for surgical planning, and aid in the construction of cochlear models. Models of implant stimulation incorporating anatomical and physiological information are likely to provide a better understanding of the biophysics of information transfer with cochlear implants and to aid in electrode design and arrangement on cochlear implants. Until recently, MR did not provide the necessary image resolution and suffered from long acquisition times. The purpose of this study was to optimize both Fast Spin Echo (FSE) and Steady State Free Precession (FIESTA) imaging scan parameters for the inner ear and to compare the two for improved image quality and increased spatial resolution. Image quality was determined by two primary measurements: signal-to-noise ratio (SNR) and image sharpness. Optimized parameters for FSE were 120 ms, 3000 ms, 64, and 32.25 kHz for the TE, TR, echo train length, and bandwidth, respectively. FIESTA parameters were optimized to 2.7 ms, 5.5 ms, 70°, and 62.5 kHz for TE, TR, flip angle, and bandwidth, respectively. While both had the same in-plane spatial resolution of 0.625 mm, the FIESTA data show higher SNR per acquisition time and better edge sharpness.
Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim
2016-01-01
Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179
Van Steenkiste, Gwendolyn; Jeurissen, Ben; Veraart, Jelle; den Dekker, Arnold J; Parizel, Paul M; Poot, Dirk H J; Sijbers, Jan
2016-01-01
Diffusion MRI is hampered by long acquisition times, low spatial resolution, and a low signal-to-noise ratio. Recently, methods have been proposed to improve the trade-off between spatial resolution, signal-to-noise ratio, and acquisition time of diffusion-weighted images via super-resolution reconstruction (SRR) techniques. However, during the reconstruction, these SRR methods neglect the q-space relation between the different diffusion-weighted images. An SRR method that includes a diffusion model and directly reconstructs high resolution diffusion parameters from a set of low resolution diffusion-weighted images was proposed. Our method allows an arbitrary combination of diffusion gradient directions and slice orientations for the low resolution diffusion-weighted images, optimally samples the q- and k-space, and performs motion correction with b-matrix rotation. Experiments with synthetic data and in vivo human brain data show an increase of spatial resolution of the diffusion parameters, while preserving a high signal-to-noise ratio and low scan time. Moreover, the proposed SRR method outperforms the previous methods in terms of the root-mean-square error. The proposed SRR method substantially increases the spatial resolution of MRI that can be obtained in a clinically feasible scan time. © 2015 Wiley Periodicals, Inc.
Hoff, Michael N; Andre, Jalal B; Xiang, Qing-San
2017-02-01
Balanced steady state free precession (bSSFP) imaging suffers from off-resonance artifacts such as signal modulation and banding. Solutions for removal of bSSFP off-resonance dependence are described and compared, and an optimal solution is proposed. An Algebraic Solution (AS) that complements a previously described Geometric Solution (GS) is derived from four phase-cycled bSSFP datasets. A composite Geometric-Algebraic Solution (GAS) is formed from a noise-variance-weighted average of the AS and GS images. Two simulations test the solutions over a range of parameters, and phantom and in vivo experiments are implemented. Image quality and performance of the GS, AS, and GAS are compared with the complex sum and a numerical parameter estimation algorithm. The parameter estimation algorithm, GS, AS, and GAS remove most banding and signal modulation in bSSFP imaging. The variable performance of the GS and AS on noisy data justifies generation of the GAS, which consistently provides the highest performance. The GAS is a robust technique for bSSFP signal demodulation that balances the regional efficacy of the GS and AS to remove banding, a feat not possible with prevalent techniques. Magn Reson Med 77:644-654, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
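Assuming the usual inverse-variance weighting, the noise-variance-weighted combination described above would take the form

\[
I_{\mathrm{GAS}} \;=\; \frac{w_{\mathrm{GS}}\,I_{\mathrm{GS}} + w_{\mathrm{AS}}\,I_{\mathrm{AS}}}{w_{\mathrm{GS}} + w_{\mathrm{AS}}}, \qquad w_{\mathrm{GS}} = \sigma_{\mathrm{GS}}^{-2},\quad w_{\mathrm{AS}} = \sigma_{\mathrm{AS}}^{-2},
\]

where σ²_GS and σ²_AS denote the local noise variances of the two solutions; the paper's exact weighting may differ in detail.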
Koen, Joshua D; Barrett, Frederick S; Harlow, Iain M; Yonelinas, Andrew P
2017-08-01
Signal-detection theory, and the analysis of receiver-operating characteristics (ROCs), has played a critical role in the development of theories of episodic memory and perception. The purpose of the current paper is to present the ROC Toolbox. This toolbox is a set of functions written in the Matlab programming language that can be used to fit various common signal detection models to ROC data obtained from confidence rating experiments. The goals for developing the ROC Toolbox were to create a tool (1) that is easy to use and easy for researchers to implement with their own data, (2) that can flexibly define models based on varying study parameters, such as the number of response options (e.g., confidence ratings) and experimental conditions, and (3) that provides optimal routines (e.g., maximum likelihood estimation) to obtain parameter estimates and numerous goodness-of-fit measures. The ROC Toolbox allows for various different confidence scales and currently includes the models commonly used in recognition memory and perception: (1) the unequal variance signal detection (UVSD) model, (2) the dual process signal detection (DPSD) model, and (3) the mixture signal detection (MSD) model. For each model fit to a given data set, the ROC Toolbox plots summary information about the best fitting model parameters and various goodness-of-fit measures. Here, we present an overview of the ROC Toolbox, illustrate how it can be used to input and analyse real data, and finish with a brief discussion of features that can be added to the toolbox.
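The toolbox itself is written in Matlab; the Python sketch below fits the UVSD model to made-up confidence-rating counts by maximum likelihood, purely to make the model structure concrete. The counts, the parameterization of the criteria, and the optimizer settings are assumptions of this sketch, not the toolbox's implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Confidence-rating counts, ordered from "sure new" (1) to "sure old" (6).
# These numbers are made up for illustration.
new_counts = np.array([220, 120, 60, 40, 35, 25])
old_counts = np.array([40, 45, 55, 70, 120, 170])

def neg_log_likelihood(params):
    """UVSD model: new items ~ N(0, 1), old items ~ N(d, sigma); K-1 ordered criteria."""
    d, log_sigma = params[0], params[1]
    sigma = np.exp(log_sigma)                       # keep sigma positive
    # First criterion is free; the rest are built from positive increments.
    crit = params[2] + np.concatenate(([0.0], np.cumsum(np.exp(params[3:]))))
    edges = np.concatenate(([-np.inf], crit, [np.inf]))
    p_new = np.diff(norm.cdf(edges))                # response probabilities, new items
    p_old = np.diff(norm.cdf(edges, loc=d, scale=sigma))
    eps = 1e-12                                     # guard against log(0)
    return -(new_counts @ np.log(p_new + eps) + old_counts @ np.log(p_old + eps))

k = new_counts.size                                  # number of rating categories
x0 = np.concatenate(([1.0, 0.0, -1.0], np.log(0.5) * np.ones(k - 2)))
fit = minimize(neg_log_likelihood, x0, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
d_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"d = {d_hat:.3f}, sigma_old = {sigma_hat:.3f}")
```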
Scintillator tiles read out with silicon photomultipliers
NASA Astrophysics Data System (ADS)
Pooth, O.; Radermacher, T.; Weingarten, S.; Weinstock, L.
2015-10-01
A detector prototype based on a fast plastic scintillator read out with silicon photomultipliers is presented. All studies have been done with cosmic muons and focus on parameter optimization such as coupling the SiPM to the scintillator or wrapping the scintillator with reflective material. The prototype shows excellent results regarding the light-yield and offers a detection efficiency of 99.5% with a signal purity of 99.9% for cosmic muons.
Spread-Spectrum Communications.
1984-08-07
Articles: M. B. Pursley and H. F. A. Roefs, "Numerical evaluation of correlation parameters for optimal phases of binary shift-register sequences," IEEE Transactions on Communications, Vol. COM-27, pp. 1597-1604, October 1979. D. V. Sarwate and M. B. Pursley, "Crosscorrelation properties of pseudorandom and related ... Signal Processing, Vol. 128, pp. 104-109, April 1981. M. B. Pursley, D. V. Sarwate, and W. E. Stark, "Error probability for direct-sequence spread
Kim, Keonwook
2013-01-01
The generic properties of an acoustic signal provide numerous benefits for localization by applying energy-based methods over a deployed wireless sensor network (WSN). However, the signal generated by a stationary target utilizes a significant amount of bandwidth and power in the system without providing further position information. For vehicle localization, this paper proposes a novel proximity velocity vector estimator (PVVE) node architecture in order to capture the energy from a moving vehicle and reject the signal from motionless automobiles around the WSN node. A cascade structure between analog envelope detector and digital exponential smoothing filter presents the velocity vector-sensitive output with low analog circuit and digital computation complexity. The optimal parameters in the exponential smoothing filter are obtained by analytical and mathematical methods for maximum variation over the vehicle speed. For stationary targets, the derived simulation based on the acoustic field parameters demonstrates that the system significantly reduces the communication requirements with low complexity and can be expected to extend the operation time considerably. PMID:23979482
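The digital stage described above is a first-order exponential smoothing filter; a minimal sketch follows, with an illustrative smoothing constant (the paper derives its optimum analytically from the acoustic field parameters).

```python
def exponential_smoothing(samples, alpha):
    """First-order exponential smoothing: y[n] = alpha * x[n] + (1 - alpha) * y[n-1]."""
    smoothed = []
    y = samples[0]                      # initialize the filter state with the first sample
    for x in samples:
        y = alpha * x + (1.0 - alpha) * y
        smoothed.append(y)
    return smoothed

# Envelope-detector output rising as a vehicle approaches, then falling as it recedes;
# a mid-range alpha tracks the change while suppressing sample-to-sample noise.
envelope = [0.1, 0.1, 0.3, 0.7, 1.0, 0.8, 0.4, 0.2, 0.1]
print(exponential_smoothing(envelope, alpha=0.4))
```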
NASA Astrophysics Data System (ADS)
Rose, Sean D.; Roth, Jacob; Zimmerman, Cole; Reiser, Ingrid; Sidky, Emil Y.; Pan, Xiaochuan
2018-03-01
In this work we investigate an efficient implementation of a region-of-interest (ROI) based Hotelling observer (HO) in the context of parameter optimization for detection of a rod signal at two orientations in linear iterative image reconstruction for DBT. Our preliminary results suggest that ROI-HO performance trends may be efficiently estimated by modeling only the 2D plane perpendicular to the detector and containing the X-ray source trajectory. In addition, the ROI-HO is seen to exhibit orientation dependent trends in detectability as a function of the regularization strength employed in reconstruction. To further investigate the ROI-HO performance in larger 3D system models, we present and validate an iterative methodology for calculating the ROI-HO. Lastly, we present a real data study investigating the correspondence between ROI-HO performance trends and signal conspicuity. Conspicuity of signals in real data reconstructions is seen to track well with trends in ROI-HO detectability. In particular, we observe orientation dependent conspicuity matching the orientation dependent detectability of the ROI-HO.
Optimisation of wavelength modulated Raman spectroscopy: towards high throughput cell screening.
Praveen, Bavishna B; Mazilu, Michael; Marchington, Robert F; Herrington, C Simon; Riches, Andrew; Dholakia, Kishan
2013-01-01
In the field of biomedicine, Raman spectroscopy is a powerful technique to discriminate between normal and cancerous cells. However, the strong background signal from the sample and the instrumentation affects the efficiency of this discrimination technique. Wavelength Modulated Raman Spectroscopy (WMRS) may suppress the background in the Raman spectra. In this study we demonstrate a systematic approach for optimizing the various parameters of WMRS to achieve a reduction in acquisition time for potential applications such as higher-throughput cell screening. The signal-to-noise ratio (SNR) of the Raman bands depends on the modulation amplitude, time constant and total acquisition time. It was observed that the sampling rate does not influence the signal-to-noise ratio of the Raman bands if three or more wavelengths are sampled. With these optimised WMRS parameters, we increased the throughput in the binary classification of normal human urothelial cells and bladder cancer cells by reducing the total acquisition time to 6 s, which is significantly lower than the acquisition times previously required for discrimination between similar cell types.
Liang, Zhenwei; Li, Yaoming; Zhao, Zhan; Xu, Lizhang
2015-01-01
Grain separation loss is a key parameter for evaluating the performance of combine harvesters, and also a dominant factor in automatically adjusting their major working parameters. Traditional separation loss monitoring methods rely mainly on manual effort, which requires high labor intensity. With recent advancements in sensor technology, electronics and computational processing power, this paper presents an indirect method for monitoring grain separation losses in tangential-axial combine harvesters in real time. Firstly, we developed a mathematical monitoring model based on a detailed comparative analysis of data for different feeding quantities. Then, we developed a grain impact piezoelectric sensor utilizing a YT-5 piezoelectric ceramic as the sensing element, with a signal processing circuit designed according to differences in the voltage amplitude and rise time of collision signals. To improve the sensor performance, a theoretical analysis was performed from a structural vibration point of view, and the optimal sensor structure was selected. Grain collision experiments showed that the sensor performance was greatly improved. Finally, we installed the sensor on a tangential-longitudinal axial combine harvester, and grain separation loss monitoring experiments were carried out in North China; the results showed that the monitoring method is feasible, with a maximum relative measurement error of 4.63% when harvesting rice. PMID:25594592
Liang, Zhenwei; Li, Yaoming; Zhao, Zhan; Xu, Lizhang
2015-01-14
Grain separation loss is a key parameter for evaluating the performance of combine harvesters, and also a dominant factor in automatically adjusting their major working parameters. Traditional separation loss monitoring methods rely mainly on manual effort, which requires high labor intensity. With recent advancements in sensor technology, electronics and computational processing power, this paper presents an indirect method for monitoring grain separation losses in tangential-axial combine harvesters in real time. Firstly, we developed a mathematical monitoring model based on a detailed comparative analysis of data for different feeding quantities. Then, we developed a grain impact piezoelectric sensor utilizing a YT-5 piezoelectric ceramic as the sensing element, with a signal processing circuit designed according to differences in the voltage amplitude and rise time of collision signals. To improve the sensor performance, a theoretical analysis was performed from a structural vibration point of view, and the optimal sensor structure was selected. Grain collision experiments showed that the sensor performance was greatly improved. Finally, we installed the sensor on a tangential-longitudinal axial combine harvester, and grain separation loss monitoring experiments were carried out in North China; the results showed that the monitoring method is feasible, with a maximum relative measurement error of 4.63% when harvesting rice.
Dynamic optimization of open-loop input signals for ramp-up current profiles in tokamak plasmas
NASA Astrophysics Data System (ADS)
Ren, Zhigang; Xu, Chao; Lin, Qun; Loxton, Ryan; Teo, Kok Lay
2016-03-01
Establishing a good current spatial profile in tokamak fusion reactors is crucial to effective steady-state operation. The evolution of the current spatial profile is related to the evolution of the poloidal magnetic flux, which can be modeled in the normalized cylindrical coordinates using a parabolic partial differential equation (PDE) called the magnetic diffusion equation. In this paper, we consider the dynamic optimization problem of attaining the best possible current spatial profile during the ramp-up phase of the tokamak. We first use the Galerkin method to obtain a finite-dimensional ordinary differential equation (ODE) model based on the original magnetic diffusion PDE. Then, we combine the control parameterization method with a novel time-scaling transformation to obtain an approximate optimal parameter selection problem, which can be solved using gradient-based optimization techniques such as sequential quadratic programming (SQP). This control parameterization approach involves approximating the tokamak input signals by piecewise-linear functions whose slopes and break-points are decision variables to be optimized. We show that the gradient of the objective function with respect to the decision variables can be computed by solving an auxiliary dynamic system governing the state sensitivity matrix. Finally, we conclude the paper with simulation results for an example problem based on experimental data from the DIII-D tokamak in San Diego, California.
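As a hedged illustration of the control parameterization idea summarized above, the sketch below approximates a scalar input by piecewise-linear segments whose break-points and node values are decision variables, and optimizes them with an SQP solver. A toy first-order ODE stands in for the magnetic diffusion PDE; the horizon, segment count and target value are assumptions, not the paper's DIII-D setup.

```python
# Illustrative sketch only: control parameterization with piecewise-linear
# inputs optimized by SQP, applied to a toy scalar ODE (not the magnetic
# diffusion PDE from the paper). The surrogate plant and target are assumptions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T = 1.0                      # ramp-up horizon (arbitrary units)
n_seg = 4                    # number of piecewise-linear segments

def u_of_t(t, knots, values):
    """Piecewise-linear input defined by breakpoint times and node values."""
    return np.interp(t, knots, values)

def simulate(decision):
    # decision = interior breakpoints (n_seg-1) followed by node values (n_seg+1)
    tau = np.sort(decision[:n_seg - 1])
    knots = np.concatenate(([0.0], tau, [T]))
    values = decision[n_seg - 1:]
    rhs = lambda t, x: -x + u_of_t(t, knots, values)   # toy first-order plant
    return solve_ivp(rhs, (0.0, T), [0.0], rtol=1e-8)

def objective(decision):
    x_T = simulate(decision).y[0, -1]
    return (x_T - 0.8) ** 2            # drive the final state toward a target value

x0 = np.concatenate((np.linspace(0.1, 0.9, n_seg - 1), 0.5 * np.ones(n_seg + 1)))
bounds = [(0.0, T)] * (n_seg - 1) + [(0.0, 2.0)] * (n_seg + 1)
res = minimize(objective, x0, method="SLSQP", bounds=bounds)
print("optimal breakpoints and node values:", res.x)
```

In the paper's setting the same decision-variable layout would parameterize the tokamak input signals, with gradients supplied by the auxiliary sensitivity system rather than the finite differences used by this toy example.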
NASA Astrophysics Data System (ADS)
Golovanova, T. M.; Gryaznov, Yu M.; Dianov, Evgenii M.; Dobryakova, N. G.; Kiselev, A. V.; Prokhorov, A. M.; Shcherbakov, E. A.
1989-08-01
An investigation was made of the parameters of an integrated-optical spectrum analyzer consisting of a Ti:LiNbO3 crystal and a semiconductor laser with a built-in microobjective, spherical geodesic lenses, and an optimized system of interdigital (opposed-comb) transducers. The characteristics of this spectrum analyzer were as follows: the band of operating frequencies was 181 MHz (at the 3 dB level); the resolution was 2.8 MHz; the signal/noise ratio (under a control voltage of 4 V) was 20 dB.
Weak value amplification considered harmful
NASA Astrophysics Data System (ADS)
Ferrie, Christopher; Combes, Joshua
2014-03-01
We show using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of parameter estimation and signal detection. We show that using all data and considering the joint distribution of all measurement outcomes yields the optimal estimator. Moreover, we show that estimation using the maximum likelihood technique with weak values as small as possible produces better performance for quantum metrology. In doing so, we identify the optimal experimental arrangement to be the one which reveals the maximal eigenvalue of the square of system observables. We also show that these conclusions do not change in the presence of technical noise.
Arita, Chikashi; Foulaadvand, M Ebrahim; Santen, Ludger
2017-03-01
We consider the exclusion process on a ring with time-dependent defective bonds at which the hopping rate periodically switches between zero and one. This system models main roads in city traffic, intersecting with perpendicular streets. We explore basic properties of the system, in particular the dependence of the vehicular flow on the parameters of signalization as well as on the system size and the car density. We investigate various types of spatial distribution of the vehicular density, and show the existence of a shock profile. We also measure the waiting time behind traffic lights, and examine its relationship with the traffic flow.
Nakata, Toshihiko; Ninomiya, Takanori
2006-10-10
A general solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging is presented. Phase-modulated heterodyne interference light generated by a linear region of periodic displacement is captured by a charge-coupled device image sensor, in which the interference light is sampled at a sampling rate lower than the Nyquist frequency. The frequencies of the components of the light, such as the sideband and carrier (which include photodisplacement and topography information, respectively), are downconverted and sampled simultaneously based on the integration and sampling effects of the sensor. A general solution of frequency and amplitude in this downconversion is derived by Fourier analysis of the sampling procedure. The optimal frequency condition for the heterodyne beat signal, modulation signal, and sensor gate pulse is derived such that undesirable components are eliminated and each information component is converted into an orthogonal function, allowing each to be discretely reproduced from the Fourier coefficients. The optimal frequency parameters that maximize the sideband-to-carrier amplitude ratio are determined, theoretically demonstrating its high selectivity over 80 dB. Preliminary experiments demonstrate that this technique is capable of simultaneous imaging of reflectivity, topography, and photodisplacement for the detection of subsurface lattice defects at a speed corresponding to an acquisition time of only 0.26 s per 256 x 256 pixel area.
Experimental task-based optimization of a four-camera variable-pinhole small-animal SPECT system
NASA Astrophysics Data System (ADS)
Hesterman, Jacob Y.; Kupinski, Matthew A.; Furenlid, Lars R.; Wilson, Donald W.
2005-04-01
We have previously utilized lumpy object models and simulated imaging systems in conjunction with the ideal observer to compute figures of merit for hardware optimization. In this paper, we describe the development of methods and phantoms necessary to validate or experimentally carry out these optimizations. Our study was conducted on a four-camera small-animal SPECT system that employs interchangeable pinhole plates to operate under a variety of pinhole configurations and magnifications (representing optimizable system parameters). We developed a small-animal phantom capable of producing random backgrounds for each image sequence. The task chosen for the study was the detection of a 2 mm diameter sphere within the phantom-generated random background. A total of 138 projection images were used, half of which included the signal. As our observer, we employed the channelized Hotelling observer (CHO) with Laguerre-Gauss channels. The signal-to-noise ratio (SNR) of this observer was used to compare different system configurations. Results indicate agreement between experimental and simulated data, with higher detectability rates found for multiple-camera, multiple-pinhole, and high-magnification systems, although it was found that mixtures of magnifications often outperform systems employing a single magnification. This work will serve as a basis for future studies pertaining to system hardware optimization.
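The sketch below shows, under stated assumptions, how a channelized Hotelling observer SNR of the kind used in this study can be computed: Laguerre-Gauss channel functions reduce each image to a few channel outputs, and the Hotelling template is formed in channel space. The channel width, image size, Gaussian signal and white-noise backgrounds are illustrative stand-ins for the phantom data.

```python
# Hedged sketch: channelized Hotelling observer (CHO) with Laguerre-Gauss
# channels applied to synthetic images; all numerical choices are assumptions.
import numpy as np
from scipy.special import eval_laguerre

def lg_channels(npix, a, n_channels=5):
    y, x = np.mgrid[:npix, :npix] - (npix - 1) / 2.0
    r2 = x**2 + y**2
    chans = []
    for n in range(n_channels):
        c = np.sqrt(2.0) / a * np.exp(-np.pi * r2 / a**2) * eval_laguerre(n, 2.0 * np.pi * r2 / a**2)
        chans.append(c.ravel())
    return np.array(chans)             # shape (n_channels, npix*npix)

def cho_snr(signal_present, signal_absent, channels):
    vp = signal_present @ channels.T   # channel outputs, shape (n_images, n_channels)
    va = signal_absent @ channels.T
    dv = vp.mean(axis=0) - va.mean(axis=0)
    S = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
    w = np.linalg.solve(S, dv)         # Hotelling template in channel space
    return np.sqrt(dv @ w)

rng = np.random.default_rng(0)
npix, n_img = 64, 200
y, x = np.mgrid[:npix, :npix] - (npix - 1) / 2.0
signal = 5.0 * np.exp(-(x**2 + y**2) / (2 * 2.0**2))        # small Gaussian blob
backgrounds = rng.normal(0, 10, size=(2 * n_img, npix * npix))
g_absent = backgrounds[:n_img]
g_present = backgrounds[n_img:] + signal.ravel()
U = lg_channels(npix, a=15.0)
print("CHO SNR:", cho_snr(g_present, g_absent, U))
```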
Digital Signal Processing Methods for Ultrasonic Echoes.
Sinding, Kyle; Drapaca, Corina; Tittmann, Bernhard
2016-04-28
Digital signal processing has become an important component of data analysis needed in industrial applications. In particular, for ultrasonic thickness measurements the signal-to-noise ratio plays a major role in the accurate calculation of the arrival time. For this application a band-pass filter is not sufficient, since the noise level cannot be decreased enough for a reliable thickness measurement to be performed. This paper demonstrates the abilities of two regularization methods - total variation and Tikhonov - to filter acoustic and ultrasonic signals. Both of these methods are compared to frequency-based filtering for digitally produced signals as well as signals produced by ultrasonic transducers. This paper demonstrates the ability of the total variation and Tikhonov filters to recover signals from noisy acoustic data accurately and faster than a band-pass filter. Furthermore, the total variation filter has been shown to reduce the noise of a signal significantly for signals with clear ultrasonic echoes. Signal-to-noise ratios have been increased by over 400% by using a simple parameter optimization. While frequency-based filtering is efficient for specific applications, this paper shows that the reduction of noise in ultrasonic systems can be much more efficient with regularization methods.
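A minimal sketch of the two regularization ideas compared above is given below, applied to a synthetic echo-like signal. Tikhonov smoothing is solved in closed form; the total-variation filter is approximated by iteratively reweighted least squares (lagged diffusivity). The signal, noise level and regularization weights are assumptions, not the paper's transducer data.

```python
# Hedged sketch: Tikhonov vs. total-variation (TV) denoising of a 1-D signal.
# TV is approximated by iteratively reweighted least squares; weights are illustrative.
import numpy as np

def first_difference(n):
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    return D

def tikhonov(y, lam):
    n = len(y)
    D = first_difference(n)
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

def tv_irls(y, lam, n_iter=30, eps=1e-3):
    n = len(y)
    D = first_difference(n)
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ x) + eps)        # reweighting from the current gradient
        x = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), y)
    return x

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 400)
echo = np.exp(-((t - 0.5) / 0.02) ** 2) * np.sin(2 * np.pi * 60 * t)   # toy echo
noisy = echo + 0.3 * rng.standard_normal(t.size)
x_tik = tikhonov(noisy, lam=2.0)
x_tv = tv_irls(noisy, lam=0.5)
snr = lambda clean, est: 10 * np.log10(np.sum(clean**2) / np.sum((clean - est)**2))
print("SNR Tikhonov: %.1f dB, TV: %.1f dB" % (snr(echo, x_tik), snr(echo, x_tv)))
```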
Taguchi Optimization of Cutting Parameters in Turning AISI 1020 MS with M2 HSS Tool
NASA Astrophysics Data System (ADS)
Sonowal, Dharindom; Sarma, Dhrupad; Bakul Barua, Parimal; Nath, Thuleswar
2017-08-01
In this paper the effect of three cutting parameters, viz. spindle speed, feed and depth of cut, on the surface roughness of an AISI 1020 mild steel bar in turning was investigated and optimized to obtain minimum surface roughness. All the experiments were conducted on an HMT LB25 lathe machine using an M2 HSS cutting tool. Ranges of the parameters of interest were decided through preliminary experimentation (One Factor At a Time experiments). Finally a combined experiment was carried out using Taguchi’s L27 Orthogonal Array (OA) to study the main effects and interaction effects of all three parameters. The experimental results were analyzed with raw data ANOVA (Analysis of Variance) and S/N (Signal to Noise ratio) ANOVA. Results show that spindle speed, feed and depth of cut have significant effects on both the mean and the variation of surface roughness in turning AISI 1020 mild steel. Mild two-factor interactions are observed among the aforesaid factors, with significant effects only on the mean of the output variable. From the Taguchi parameter optimization the optimum factor combination is found to be 630 rpm spindle speed, 0.05 mm/rev feed and 1.25 mm depth of cut, with an estimated surface roughness of 2.358 ± 0.970 µm. A confirmatory experiment was conducted with the optimum factor combination to verify the results. In the confirmatory experiment the average value of surface roughness is found to be 2.408 µm, which is well within the range (0.418 µm to 4.299 µm) predicted for the confirmatory experiment.
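For reference, the smaller-the-better signal-to-noise ratio used in this kind of Taguchi analysis is S/N = -10 log10(mean(y^2)); a minimal sketch follows, with made-up roughness replicates rather than the paper's L27 measurements.

```python
# Sketch of the Taguchi smaller-the-better S/N ratio used to rank factor levels;
# the roughness values below are placeholders, not the paper's data.
import numpy as np

def sn_smaller_the_better(y):
    """S/N = -10 log10(mean(y^2)) for responses where lower is better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# surface roughness (um) replicates for two hypothetical factor combinations
run_a = [2.41, 2.36, 2.52]
run_b = [3.10, 2.95, 3.22]
print("S/N run A: %.2f dB" % sn_smaller_the_better(run_a))
print("S/N run B: %.2f dB" % sn_smaller_the_better(run_b))
# The level combination with the highest S/N across the orthogonal array is
# taken as optimal; main effects are the per-level means of these S/N values.
```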
Simultaneous-Fault Diagnosis of Gearboxes Using Probabilistic Committee Machine
Zhong, Jian-Hua; Wong, Pak Kin; Yang, Zhi-Xin
2016-01-01
This study combines signal de-noising, feature extraction, two pairwise-coupled relevance vector machines (PCRVMs) and particle swarm optimization (PSO) for parameter optimization to form an intelligent diagnostic framework for gearbox fault detection. Firstly, the sensor signals are de-noised by using the wavelet threshold method to lower the noise level. Then, the Hilbert-Huang transform (HHT) and energy pattern calculation are applied to extract the fault features from the de-noised signals. After that, an eleven-dimensional vector, which consists of the energies of nine intrinsic mode functions (IMFs), the maximum value of the HHT marginal spectrum and its corresponding frequency component, is obtained to represent the features of each gearbox fault. The two PCRVMs serve as two different fault detection committee members, and they are trained by using vibration and sound signals, respectively. The individual diagnostic result from each committee member is then combined by applying a new probabilistic ensemble method, which can improve the overall diagnostic accuracy and increase the number of detectable faults as compared to individual classifiers acting alone. The effectiveness of the proposed framework is experimentally verified by using test cases. The experimental results show the proposed framework is superior to existing single classifiers in terms of diagnostic accuracies for both single- and simultaneous-faults in the gearbox. PMID:26848665
Mechanistic and quantitative insight into cell surface targeted molecular imaging agent design.
Zhang, Liang; Bhatnagar, Sumit; Deschenes, Emily; Thurber, Greg M
2016-05-05
Molecular imaging agent design involves simultaneously optimizing multiple probe properties. While several desired characteristics are straightforward, including high affinity and low non-specific background signal, in practice there are quantitative trade-offs between these properties. These include plasma clearance, where fast clearance lowers background signal but can reduce target uptake, and binding, where high affinity compounds sometimes suffer from lower stability or increased non-specific interactions. Further complicating probe development, many of the optimal parameters vary depending on both target tissue and imaging agent properties, making empirical approaches or previous experience difficult to translate. Here, we focus on low molecular weight compounds targeting extracellular receptors, which have some of the highest contrast values for imaging agents. We use a mechanistic approach to provide a quantitative framework for weighing trade-offs between molecules. Our results show that specific target uptake is well-described by quantitative simulations for a variety of targeting agents, whereas non-specific background signal is more difficult to predict. Two in vitro experimental methods for estimating background signal in vivo are compared - non-specific cellular uptake and plasma protein binding. Together, these data provide a quantitative method to guide probe design and focus animal work for more cost-effective and time-efficient development of molecular imaging agents.
Rouseff, Daniel; Badiey, Mohsen; Song, Aijun
2009-11-01
The performance of a communications equalizer is quantified in terms of the number of acoustic paths that are treated as usable signal. The analysis uses acoustical and oceanographic data collected off the Hawaiian Island of Kauai. Communication signals were measured on an eight-element vertical array at two different ranges, 1 and 2 km, and processed using an equalizer based on passive time-reversal signal processing. By estimating the Rayleigh parameter, it is shown that all paths reflected by the sea surface at both ranges undergo incoherent scattering. It is demonstrated that some of these incoherently scattered paths are still useful for coherent communications. At a range of 1 km, optimal communications performance is achieved when six acoustic paths are retained and all paths with more than one reflection off the sea surface are rejected. Consistent with a model that ignores loss from near-surface bubbles, the performance improves by approximately 1.8 dB when increasing the number of retained paths from four to six. The four-path results, though, are more stable and require less frequent channel estimation. At a range of 2 km, ray refraction is observed and communications performance is optimal when some paths with two sea-surface reflections are retained.
NASA Astrophysics Data System (ADS)
Rajkumar, Goribidanur Rangappa; Krishna, Munishamaih; Narasimhamurthy, Hebbale Narayanrao; Keshavamurthy, Yalanabhalli Channegowda
2017-06-01
The objective of this work was to optimize sheet metal joining parameters (adhesive material, adhesive thickness, adhesive overlap length and surface roughness) for the shear strength of single lap joints of aluminium sheet using robust design. An orthogonal array, main effect plots, the signal-to-noise ratio and analysis of variance were employed to investigate the shear strength of the joints. The statistical results show that vinyl ester is the best candidate among the three polymers studied (epoxy, polyester and vinyl ester) owing to its low viscosity compared to the other two polymers. The experimental results show that an adhesive thickness of 0.6 mm, an overlap length of 50 mm and a surface roughness of 2.12 µm give the maximum shear strength of the Al sheet joints. The ANOVA results show that overlap length is one of the most significant factors affecting joint strength, in addition to adhesive thickness, adhesive material and surface roughness. A confirmation test was carried out because the optimal combination of parameters did not match any of the experiments in the orthogonal array.
NASA Astrophysics Data System (ADS)
Shan, Bonan; Wang, Jiang; Deng, Bin; Zhang, Zhen; Wei, Xile
2017-03-01
Assessment of the effective connectivity among different brain regions during seizure is a crucial problem in neuroscience today. As a consequence, a new model inversion framework for brain function imaging is introduced in this manuscript. This framework is based on approximating brain networks using a multi-coupled neural mass model (NMM). The NMM describes the excitatory and inhibitory neural interactions, capturing the mechanisms involved in seizure initiation, evolution and termination. A particle swarm optimization method is used to estimate the effective connectivity variation (the parameters of the NMM) and the epileptiform dynamics (the states of the NMM) that cannot be directly measured using electrophysiological measurement alone. The estimated effective connectivity includes both the local connectivity parameters within a single-region NMM and the remote connectivity parameters between multi-coupled NMMs. When the epileptiform activities are estimated, a proportional-integral controller outputs a control signal so that the epileptiform spikes can be inhibited immediately. Numerical simulations are carried out to illustrate the effectiveness of the proposed framework. The framework and the results have a profound impact on the way we detect and treat epilepsy.
Implementation of a numerical holding furnace model in foundry and construction of a reduced model
NASA Astrophysics Data System (ADS)
Loussouarn, Thomas; Maillet, Denis; Remy, Benjamin; Dan, Diane
2016-09-01
Vacuum holding induction furnaces are used for the manufacturing of turbine blades by the lost-wax foundry process. The control of solidification parameters is a key factor in manufacturing these parts in accordance with geometrical and structural expectations. The definition of a reduced heat transfer model, with experimental identification through an estimation of its parameters, is required here. In a further stage this model will be used to characterize heat exchanges using internal sensors through inverse techniques, to optimize the furnace control and its design. Here, an axisymmetric furnace and its load have been numerically modelled using FlexPDE, a finite element code. A detailed model allows the calculation of the internal induction heat source as well as of transient radiative transfer inside the furnace. A reduced lumped-body model has been defined to represent the numerical furnace. The model reduction and the estimation of the parameters of the lumped body have been made using a Levenberg-Marquardt least-squares minimization algorithm in Matlab, using two synthetic temperature signals with a further validation test.
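A hedged sketch of the final identification step is given below: a first-order lumped-body response is fitted to a synthetic temperature signal with a Levenberg-Marquardt least-squares fit (here scipy's 'lm' method rather than Matlab). The model form, time constant and noise level are assumptions standing in for the furnace model and its FlexPDE outputs.

```python
# Hedged sketch: Levenberg-Marquardt identification of a reduced lumped model
# from a synthetic temperature signal; the first-order model and numbers are assumptions.
import numpy as np
from scipy.optimize import least_squares

def lumped_response(t, tau, t_inf, t0=20.0):
    """First-order lumped body: T(t) = T_inf + (T0 - T_inf) exp(-t/tau)."""
    return t_inf + (t0 - t_inf) * np.exp(-t / tau)

rng = np.random.default_rng(2)
t = np.linspace(0, 3000, 200)                              # s
synthetic = lumped_response(t, tau=600.0, t_inf=1450.0)    # stand-in for the detailed model
measured = synthetic + 2.0 * rng.standard_normal(t.size)   # noisy temperature signal

residuals = lambda p: lumped_response(t, p[0], p[1]) - measured
fit = least_squares(residuals, x0=[300.0, 1000.0], method="lm")
print("identified tau = %.0f s, T_inf = %.0f degC" % tuple(fit.x))
```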
Kesner, Adam Leon; Kuntner, Claudia
2010-10-01
Respiratory gating in PET is an approach used to minimize the negative effects of respiratory motion on spatial resolution. It is based on an initial determination of a patient's respiratory movements during a scan, typically using hardware-based systems. In recent years, several fully automated data-based algorithms have been presented for extracting a respiratory signal directly from PET data, providing a very practical strategy for implementing gating in the clinic. In this work, a new method is presented for extracting a respiratory signal from raw PET sinogram data and compared to previously presented automated techniques. The acquisition of the respiratory signal from PET data in the newly proposed method is based on rebinning the sinogram data into smaller data structures and then analyzing the time-activity behavior in the elements of these structures. From this analysis, a 1D respiratory trace is produced, analogous to a hardware-derived respiratory trace. To assess the accuracy of this fully automated method, the respiratory signal was extracted from a collection of 22 clinical FDG-PET scans using this method, and compared to signal derived from several other software-based methods as well as a signal derived from a hardware system. The method presented required approximately 9 min of processing time for each 10 min scan (using a single 2.67 GHz processor), which in theory can be accomplished while the scan is being acquired and therefore allows real-time respiratory signal acquisition. Using the mean correlation between the software-based and hardware-based respiratory traces, the optimal parameters were determined for the presented algorithm. The mean/median/range of correlations for the set of scans when using the optimal parameters was found to be 0.58/0.68/0.07-0.86. The speed of this method was within the range of real time, while the accuracy surpassed that of the most accurate of the previously presented algorithms. PET data inherently contain information about patient motion; information that is not currently being utilized. We have shown that a respiratory signal can be extracted from raw PET data potentially in real time and in a fully automated manner. This signal correlates well with the hardware-based signal for a large percentage of scans, and avoids the efforts and complications associated with hardware. The proposed method to extract a respiratory signal can be implemented on existing scanners and, if properly integrated, can be applied without changes to routine clinical procedures.
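The sketch below illustrates one plausible form of the rebinning-and-analysis idea described above (it is not the authors' exact algorithm): dynamic sinogram frames are summed into coarse patches, each patch's time-activity curve is scored by its spectral power in a respiratory band, and the principal component of the best-scoring patches serves as the 1D trace. Frame rate, patch size and band limits are assumptions.

```python
# Illustrative sketch: derive a 1-D respiratory surrogate from rebinned
# dynamic sinogram frames; all numerical choices are assumptions.
import numpy as np

def respiratory_trace(sino, frame_rate, band=(0.1, 0.5), patch=8):
    """sino: array (n_frames, n_angles, n_bins) of short time frames."""
    n_frames, n_ang, n_bin = sino.shape
    coarse = sino[:, : n_ang // patch * patch, : n_bin // patch * patch]
    coarse = coarse.reshape(n_frames, n_ang // patch, patch, n_bin // patch, patch)
    tacs = coarse.sum(axis=(2, 4)).reshape(n_frames, -1)      # patch time-activity curves
    tacs = tacs - tacs.mean(axis=0)
    freqs = np.fft.rfftfreq(n_frames, d=1.0 / frame_rate)
    spectra = np.abs(np.fft.rfft(tacs, axis=0)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    score = spectra[in_band].sum(axis=0) / (spectra.sum(axis=0) + 1e-12)
    selected = tacs[:, score > np.percentile(score, 90)]      # most "respiratory" patches
    u, s, vt = np.linalg.svd(selected, full_matrices=False)
    return u[:, 0] * s[0]                                     # 1-D surrogate trace

# usage with fake data: 600 frames at 1 Hz, 96 angles x 96 radial bins
demo = np.random.default_rng(3).poisson(5.0, size=(600, 96, 96)).astype(float)
trace = respiratory_trace(demo, frame_rate=1.0)
print(trace.shape)
```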
Chen, Wentao; Zhang, Weidong
2009-10-01
In an optical disk drive servo system, to attenuate the external periodic disturbances induced by inevitable disk eccentricity, repetitive control has been used successfully. The performance of a repetitive controller greatly depends on the bandwidth of the low-pass filter included in the repetitive controller. However, owing to the plant uncertainty and system stability, it is difficult to maximize the bandwidth of the low-pass filter. In this paper, we propose an optimality based repetitive controller design method for the track-following servo system with norm-bounded uncertainties. By embedding a lead compensator in the repetitive controller, both the system gain at periodic signal's harmonics and the bandwidth of the low-pass filter are greatly increased. The optimal values of the repetitive controller's parameters are obtained by solving two optimization problems. Simulation and experimental results are provided to illustrate the effectiveness of the proposed method.
Das, Saptarshi; Pan, Indranil; Das, Shantanu
2015-09-01
An optimal trade-off design for a fractional order (FO)-PID controller is proposed with a Linear Quadratic Regulator (LQR) based technique using two conflicting time domain objectives. A class of delayed FO systems with a single non-integer order element, exhibiting both sluggish and oscillatory open loop responses, have been controlled here. The FO time delay processes are handled within a multi-objective optimization (MOO) formalism of LQR based FOPID design. A comparison is made between two contemporary approaches of stabilizing time-delay systems within LQR. The MOO control design methodology yields the Pareto optimal trade-off solutions between the tracking performance and the total variation (TV) of the control signal. Tuning rules are formed for the optimal LQR-FOPID controller parameters, using the median of the non-dominated Pareto solutions, to handle delayed FO processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
1974-01-01
Weight and cost optimized EOS communication links are determined for 2.25, 7.25, 14.5, 21, and 60 GHz systems and for a 10.6 micron homodyne detection laser system. EOS to ground links are examined for 556, 834, and 1112 km EOS orbits, with ground terminals at the Network Test and Tracking Facility and at Goldstone. Optimized 21 GHz and 10.6 micron links are also examined. For the EOS to Tracking and Data Relay Satellite to ground link, signal-to-noise ratios of the uplink and downlink are also optimized for minimum overall cost or spaceborne weight. Finally, the optimized 21 GHz EOS to ground link is determined for various precipitation rates. All system performance parameters and mission dependent constraints are presented, as are the system cost and weight functional dependencies. The features and capabilities of the computer program to perform the foregoing analyses are described.
Intelligent control for PMSM based on online PSO considering parameters change
NASA Astrophysics Data System (ADS)
Song, Zhengqiang; Yang, Huiling
2018-03-01
A novel online particle swarm optimization method is proposed to design speed and current controllers of vector controlled interior permanent magnet synchronous motor drives considering stator resistance variation. In the proposed drive system, the space vector modulation technique is employed to generate the switching signals for a two-level voltage-source inverter. The nonlinearity of the inverter is also taken into account due to the dead-time, threshold and voltage drop of the switching devices in order to simulate the system in the practical condition. Speed and PI current controller gains are optimized with PSO online, and the fitness function is changed according to the system dynamic and steady states. The proposed optimization algorithm is compared with conventional PI control method in the condition of step speed change and stator resistance variation, showing that the proposed online optimization method has better robustness and dynamic characteristics compared with conventional PI controller design.
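A minimal particle swarm optimization sketch in the spirit of the gain tuning described above is given below; the first-order plant surrogate, cost function and PSO coefficients are illustrative assumptions, not the paper's PMSM drive model or its online fitness switching.

```python
# Minimal PSO sketch for tuning two PI gains on a toy plant; all values are illustrative.
import numpy as np

def step_cost(kp, ki, dt=1e-3, t_end=1.0):
    """Integral of absolute error for a unit speed step on a toy first-order plant."""
    x, integ, cost = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - x
        integ += e * dt
        u = kp * e + ki * integ
        x += dt * (-x + u)            # first-order plant surrogate
        cost += abs(e) * dt
    return cost

rng = np.random.default_rng(4)
n_particles, n_iter = 20, 40
pos = rng.uniform(0.1, 20.0, size=(n_particles, 2))    # columns: Kp, Ki
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([step_cost(*p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.1, 50.0)
    cost = np.array([step_cost(*p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("PSO-tuned gains Kp=%.2f, Ki=%.2f" % tuple(gbest))
```

In an online scheme of the kind described above, the cost function would be recomputed from measured drive responses at each iteration rather than from a fixed simulation.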
Predictive optimized adaptive PSS in a single machine infinite bus.
Milla, Freddy; Duarte-Mermoud, Manuel A
2016-07-01
Power System Stabilizer (PSS) devices are responsible for providing a damping torque component to generators for reducing fluctuations in the system caused by small perturbations. A Predictive Optimized Adaptive PSS (POA-PSS) to improve the oscillations in a Single Machine Infinite Bus (SMIB) power system is discussed in this paper. POA-PSS provides the optimal design parameters for the classic PSS using an optimization predictive algorithm, which adapts to changes in the inputs of the system. This approach is part of small signal stability analysis, which uses equations in an incremental form around an operating point. Simulation studies on the SMIB power system illustrate that the proposed POA-PSS approach has better performance than the classical PSS. In addition, the effort in the control action of the POA-PSS is much less than that of other approaches considered for comparison. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
3D equilibrium reconstruction with islands
NASA Astrophysics Data System (ADS)
Cianciosa, M.; Hirshman, S. P.; Seal, S. K.; Shafer, M. W.
2018-04-01
This paper presents the development of a 3D equilibrium reconstruction tool and the results of the first-ever reconstruction of an island equilibrium. The SIESTA non-nested equilibrium solver has been coupled to the V3FIT 3D equilibrium reconstruction code. Computed from a coupled VMEC and SIESTA model, synthetic signals are matched to measured signals by finding an optimal set of equilibrium parameters. By using the normalized pressure in place of normalized flux, non-equilibrium quantities needed by diagnostic signals can be efficiently mapped to the equilibrium. The effectiveness of this tool is demonstrated by reconstructing an island equilibrium of a DIII-D inner wall limited L-mode case with an n = 1 error field applied. Flat spots in Thomson and ECE temperature diagnostics show the reconstructed islands have the correct size and phase.
Optimal networks of future gravitational-wave telescopes
NASA Astrophysics Data System (ADS)
Raffai, Péter; Gondán, László; Heng, Ik Siong; Kelecsényi, Nándor; Logue, Josh; Márka, Zsuzsa; Márka, Szabolcs
2013-08-01
We aim to find the optimal site locations for a hypothetical network of 1-3 triangular gravitational-wave telescopes. We define the following N-telescope figures of merit (FoMs) and construct three corresponding metrics: (a) capability of reconstructing the signal polarization; (b) accuracy in source localization; and (c) accuracy in reconstructing the parameters of a standard binary source. We also define a combined metric that takes into account the three FoMs with practically equal weight. After constructing a geomap of possible telescope sites, we give the optimal 2-telescope networks for the four FoMs separately in example cases where the location of the first telescope has been predetermined. We found that based on the combined metric, placing the first telescope to Australia provides the most options for optimal site selection when extending the network with a second instrument. We suggest geographical regions where a potential second and third telescope could be placed to get optimal network performance in terms of our FoMs. Additionally, we use a similar approach to find the optimal location and orientation for the proposed LIGO-India detector within a five-detector network with Advanced LIGO (Hanford), Advanced LIGO (Livingston), Advanced Virgo, and KAGRA. We found that the FoMs do not change greatly in sites within India, though the network can suffer a significant loss in reconstructing signal polarizations if the orientation angle of an L-shaped LIGO-India is not set to the optimal value of ˜58.2°( + k × 90°) (measured counterclockwise from East to the bisector of the arms).
ECG denoising with adaptive bionic wavelet transform.
Sayadi, Omid; Shamsollahi, Mohammad Bagher
2006-01-01
In this paper a new ECG denoising scheme is proposed using a novel adaptive wavelet transform, named the bionic wavelet transform (BWT), which was first developed based on a model of the active auditory system. The BWT has some outstanding features, such as nonlinearity, high sensitivity and frequency selectivity, a concentrated energy distribution and the ability to reconstruct the signal via the inverse transform, but its most distinguishing characteristic is that its resolution in the time-frequency domain can be adaptively adjusted not only by the signal frequency but also by the signal's instantaneous amplitude and its first-order differential. Moreover, by optimizing the BWT parameters in parallel with modifying a new threshold value, one can handle ECG denoising with results comparable to those of the wavelet transform (WT). Preliminary tests of the BWT applied to ECG denoising were carried out on signals from the MIT-BIH database and showed high noise-reduction performance.
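As a point of reference for the comparison mentioned above, the sketch below implements the standard wavelet-threshold denoising baseline (not the adaptive bionic transform itself) with PyWavelets; the wavelet choice, decomposition level and synthetic ECG-like signal are assumptions.

```python
# Hedged sketch: standard wavelet-threshold denoising (the WT baseline, not BWT).
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest scale
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))           # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

rng = np.random.default_rng(5)
t = np.linspace(0, 4, 2048)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.8 * np.exp(-((t % 0.833) / 0.02) ** 2)  # crude beats
noisy = ecg_like + 0.2 * rng.standard_normal(t.size)
clean = wavelet_denoise(noisy)
print("residual RMS: %.3f" % np.sqrt(np.mean((clean - ecg_like) ** 2)))
```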
NASA Astrophysics Data System (ADS)
El Mountassir, M.; Yaacoubi, S.; Dahmene, F.
2015-07-01
Intelligent feature extraction and advanced signal processing techniques are necessary for a better interpretation of ultrasonic guided wave signals, either in structural health monitoring (SHM) or in nondestructive testing (NDT). Such signals are characterized by, at least, multi-modal and dispersive components. In addition, in SHM these signals are closely vulnerable to environmental and operational conditions (EOCs) and can be severely affected by them. In this paper we investigate the use of an Artificial Neural Network (ANN) to overcome these effects and to provide a reliable damage detection method with a minimum of false indications. An experimental case study (a full-scale pipe) is presented. Damage sizes were increased and their shapes modified in successive steps. Various parameters, such as the number of inputs and the number of hidden neurons, were studied to find the optimal configuration of the neural network.
NASA Astrophysics Data System (ADS)
Böning, Guido; Todica, Andrei; Vai, Alessandro; Lehner, Sebastian; Xiong, Guoming; Mille, Erik; Ilhan, Harun; la Fougère, Christian; Bartenstein, Peter; Hacker, Marcus
2013-11-01
The assessment of left ventricular function, wall motion and myocardial viability using electrocardiogram (ECG)-gated [18F]-FDG positron emission tomography (PET) is widely accepted in human and in preclinical small animal studies. The nonterminal and noninvasive approach permits repeated in vivo evaluations of the same animal, facilitating the assessment of temporal changes in disease or therapy response. Although well established, gated small animal PET studies can contain erroneous gating information, which may yield blurred images and false estimation of functional parameters. In this work, we present quantitative and visual quality control (QC) methods to evaluate the accuracy of trigger events in PET list-mode and physiological data. Left ventricular functional analysis is performed to quantify the effect of gating errors on the end-systolic and end-diastolic volumes, and on the ejection fraction (EF). We aim to recover the cardiac functional parameters by the application of the commonly established heart rate filter approach using fixed ranges based on a standardized population. In addition, we propose a full reprocessing approach which retrospectively replaces the gating information of the PET list-mode file with appropriate list-mode decoding and encoding software. The signal of a simultaneously acquired ECG is processed using standard MATLAB vector functions, which can be individually adapted to reliably detect the R-peaks. Finally, the new trigger events are inserted into the PET list-mode file. A population of 30 mice with various health statuses was analyzed and standard cardiac parameters such as mean heart rate (119 ms ± 11.8 ms) and mean heart rate variability (1.7 ms ± 3.4 ms) were derived. These standard parameter ranges were taken into account in the QC methods to select a group of nine optimally gated and a group of eight sub-optimally gated [18F]-FDG PET scans of mice from our archive. From the list-mode files of the optimally gated group, we randomly deleted various fractions (5% to 60%) of the contained trigger events to generate a corrupted group. The filter approach was capable of correcting the corrupted group and yielded functional parameters with no significant difference to the optimally gated group. We successfully demonstrated the potential of the full reprocessing approach by applying it to the sub-optimally gated group, where the functional parameters were significantly improved after reprocessing (mean EF from 41% ± 16% to 60% ± 13%). When applied to the optimally gated group the full reprocessing approach did not alter the functional parameters significantly (mean EF from 64% ± 8% to 64% ± 7%). This work presents methods to determine and quantify erroneous gating in small animal gated [18F]-FDG PET scans. We demonstrate the importance of a quality check for cardiac triggering contained in PET list-mode data and the benefit of optionally reprocessing the fully recorded physiological information to retrospectively modify or fully replace the cardiac triggering in PET list-mode data. We aim to provide a preliminary guideline of how to proceed in the presence of errors and demonstrate that offline reprocessing by filtering erroneous trigger events and retrospective gating by ECG processing is feasible. Future work will focus on the extension by additional QC methods, which may exploit the amplitude of trigger events and the ECG signal by means of pattern recognition. Furthermore, we aim to transfer the proposed QC methods and the full reprocessing approach to human myocardial PET/CT.
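A minimal sketch of the kind of R-peak detection used to regenerate trigger events from a recorded ECG is shown below, using standard scipy routines rather than the authors' MATLAB vector functions; the synthetic ECG, sampling rate and peak-detection thresholds are assumptions.

```python
# Minimal sketch of R-peak detection for retrospective trigger regeneration;
# the synthetic ECG and all thresholds are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

fs = 1000.0                                        # Hz, assumed ECG sampling rate
t = np.arange(0, 10, 1 / fs)
rr = 0.12                                          # ~500 bpm mouse-like rhythm (illustrative)
ecg = np.exp(-((t % rr) / 0.004) ** 2)             # crude R-wave train
ecg += 0.05 * np.random.default_rng(6).standard_normal(t.size)

# enforce a refractory period and a minimum prominence to reject noise spikes
peaks, _ = find_peaks(ecg, distance=int(0.6 * rr * fs), prominence=0.4)
trigger_times = t[peaks]                           # would be re-inserted as list-mode triggers
print("detected %d R-peaks, mean cycle %.1f ms"
      % (len(peaks), 1000 * np.mean(np.diff(trigger_times))))
```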
Combinatorial influence of environmental parameters on transcription factor activity
Knijnenburg, T.A.; Wessels, L.F.A.; Reinders, M.J.T.
2008-01-01
Motivation: Cells receive a wide variety of environmental signals, which are often processed combinatorially to generate specific genetic responses. Changes in transcript levels, as observed across different environmental conditions, can, to a large extent, be attributed to changes in the activity of transcription factors (TFs). However, in unraveling these transcription regulation networks, the actual environmental signals are often not incorporated into the model, simply because they have not been measured. The unquantified heterogeneity of the environmental parameters across microarray experiments frustrates regulatory network inference. Results: We propose an inference algorithm that models the influence of environmental parameters on gene expression. The approach is based on a yeast microarray compendium of chemostat steady-state experiments. Chemostat cultivation enables the accurate control and measurement of many of the key cultivation parameters, such as nutrient concentrations, growth rate and temperature. The observed transcript levels are explained by inferring the activity of TFs in response to combinations of cultivation parameters. The interplay between activated enhancers and repressors that bind a gene promoter determine the possible up- or downregulation of the gene. The model is translated into a linear integer optimization problem. The resulting regulatory network identifies the combinatorial effects of environmental parameters on TF activity and gene expression. Availability: The Matlab code is available from the authors upon request. Contact: t.a.knijnenburg@tudelft.nl Supplementary information: Supplementary data are available at Bioinformatics online. PMID:18586711
Extracting harmonic signal from a chaotic background with local linear model
NASA Astrophysics Data System (ADS)
Li, Chenlong; Su, Liyun
2017-02-01
In this paper, the problems of blind detection and estimation of a harmonic signal in a strong chaotic background are analyzed, and new methods based on the local linear (LL) model are put forward. The LL model has been exhaustively researched and successfully applied to fitting and forecasting chaotic signals in many fields. We enlarge the modeling capacity substantially. Firstly, we can predict the short-term chaotic signal and obtain the fitting error based on the LL model. Then we detect the frequencies from the fitting error by periodogram; a property of the fitting error is proposed which has not been addressed before, and this property ensures that the detected frequencies are similar to those of the harmonic signal. Secondly, we establish a two-layer LL model to estimate the deterministic harmonic signal in the strong chaotic background. To estimate this simply and effectively, we develop an efficient backfitting algorithm to select and optimize the parameters that are difficult to search exhaustively. In the method, based on the sensitivity of chaotic motion to initial values, the minimum fitting error criterion is used as the objective function to obtain the estimates of the parameters of the two-layer LL model. Simulation shows that the two-layer LL model and its estimation technique have appreciable flexibility for modeling the deterministic harmonic signal in different chaotic backgrounds (Lorenz, Henon and Mackey-Glass (M-G) equations). Specifically, the harmonic signal can be extracted well at low SNR, and the developed backfitting algorithm satisfies the condition of convergence within 3-5 repetitions.
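The frequency-detection step described above can be sketched as follows: the fitting error (here replaced by a noisy sinusoid as a stand-in) is passed to a periodogram and the harmonic frequency is read off the dominant peak. The chaotic-background fitting itself is not reproduced.

```python
# Sketch of the periodogram-based frequency detection step; the "fitting error"
# below is a synthetic stand-in, not output of the local linear model.
import numpy as np
from scipy.signal import periodogram

fs = 100.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(7)
fit_error = 0.5 * np.sin(2 * np.pi * 3.7 * t) + rng.standard_normal(t.size)  # harmonic + residual

freqs, pxx = periodogram(fit_error, fs=fs)
print("detected harmonic at %.2f Hz" % freqs[np.argmax(pxx[1:]) + 1])  # skip the DC bin
```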
The structure and photocatalytic activity of TiO2 thin films deposited by dc magnetron sputtering
NASA Astrophysics Data System (ADS)
Yang, W. J.; Hsu, C. Y.; Liu, Y. W.; Hsu, R. Q.; Lu, T. W.; Hu, C. C.
2012-12-01
This paper seeks to determine the optimal settings of the deposition parameters for TiO2 thin films prepared on non-alkali glass substrates by direct current (dc) sputtering, using a ceramic TiO2 target in an argon gas environment. An orthogonal array, the signal-to-noise ratio and analysis of variance are used to analyze the effect of the deposition parameters. Using the Taguchi method for the design of a robust experiment, the interactions between factors are also investigated. The main deposition parameters, such as dc power (W), sputtering pressure (Pa), substrate temperature (°C) and deposition time (min), were optimized with reference to the structure and photocatalytic characteristics of TiO2. The results of this study show that substrate temperature and deposition time have the most significant effect on photocatalytic performance. For the optimal combination of deposition parameters, the (1 1 0) and (2 0 0) peaks of the rutile structure and the (2 0 0) peak of the anatase structure were observed, at 2θ ˜ 27.4°, 39.2° and 48°, respectively. The experimental results illustrate that the Taguchi method provided a suitable solution to the problem with the minimum number of trials, compared to a full factorial design. The adhesion of the coatings was also measured and evaluated via a scratch test. Superior wear behavior was observed for the TiO2 film because of the increased strength of the interface of the micro-blasted tools.
Dynamical modeling and multi-experiment fitting with PottersWheel
Maiwald, Thomas; Timmer, Jens
2008-01-01
Motivation: Modelers in Systems Biology need a flexible framework that allows them to easily create new dynamic models, investigate their properties and fit several experimental datasets simultaneously. Multi-experiment-fitting is a powerful approach to estimate parameter values, to check the validity of a given model, and to discriminate competing model hypotheses. It requires high-performance integration of ordinary differential equations and robust optimization. Results: We here present the comprehensive modeling framework Potters-Wheel (PW) including novel functionalities to satisfy these requirements with strong emphasis on the inverse problem, i.e. data-based modeling of partially observed and noisy systems like signal transduction pathways and metabolic networks. PW is designed as a MATLAB toolbox and includes numerous user interfaces. Deterministic and stochastic optimization routines are combined by fitting in logarithmic parameter space allowing for robust parameter calibration. Model investigation includes statistical tests for model-data-compliance, model discrimination, identifiability analysis and calculation of Hessian- and Monte-Carlo-based parameter confidence limits. A rich application programming interface is available for customization within own MATLAB code. Within an extensive performance analysis, we identified and significantly improved an integrator–optimizer pair which decreases the fitting duration for a realistic benchmark model by a factor over 3000 compared to MATLAB with optimization toolbox. Availability: PottersWheel is freely available for academic usage at http://www.PottersWheel.de/. The website contains a detailed documentation and introductory videos. The program has been intensively used since 2005 on Windows, Linux and Macintosh computers and does not require special MATLAB toolboxes. Contact: maiwald@fdm.uni-freiburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:18614583
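A hedged Python sketch of the multi-experiment-fitting idea follows (PottersWheel itself is a MATLAB toolbox): one ODE model is fitted to several datasets with different inputs by stacking their residuals, with parameters handled in logarithmic space for robust calibration. The toy two-state model and data are assumptions.

```python
# Hedged sketch of multi-experiment ODE fitting in log-parameter space;
# the toy model, inputs and noise level are assumptions, not a PottersWheel workflow.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def model(t_obs, log_p, u):
    k1, k2 = np.exp(log_p)                         # fit in log space, use in linear space
    rhs = lambda t, x: [u - k1 * x[0], k1 * x[0] - k2 * x[1]]
    sol = solve_ivp(rhs, (0, t_obs[-1]), [0.0, 0.0], t_eval=t_obs, rtol=1e-8)
    return sol.y[1]                                # observe only the second species

t_obs = np.linspace(0, 10, 25)
true_logp = np.log([0.8, 0.3])
rng = np.random.default_rng(8)
experiments = [(u, model(t_obs, true_logp, u) + 0.02 * rng.standard_normal(t_obs.size))
               for u in (0.5, 1.0, 2.0)]           # three stimulation levels

def residuals(log_p):
    # stack the residuals of all experiments so they are fitted simultaneously
    return np.concatenate([model(t_obs, log_p, u) - y for u, y in experiments])

fit = least_squares(residuals, x0=np.log([0.1, 0.1]))
print("estimated rate constants:", np.exp(fit.x))
```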
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
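The sketch below illustrates one simple way to apply the D-optimality criterion described above: the sensitivity (Jacobian) of a bi-exponential decay with respect to its parameters is evaluated on a 90-point grid, and time points are greedily added to maximize det(J^T J). The lifetimes, fractions and greedy strategy are assumptions, not the paper's exact procedure.

```python
# Hedged sketch: greedy D-optimal selection of time points for a bi-exponential
# fluorescence decay; all numerical values are illustrative assumptions.
import numpy as np

tau_q, tau_u = 0.6, 2.5                 # ns, assumed quenched/unquenched lifetimes
t_grid = np.linspace(0.1, 12.0, 90)     # full set of 90 candidate time points

def sensitivities(t, a_q=0.4):
    """Jacobian of y = a_q exp(-t/tau_q) + (1-a_q) exp(-t/tau_u) w.r.t. (a_q, tau_q, tau_u)."""
    eq, eu = np.exp(-t / tau_q), np.exp(-t / tau_u)
    d_aq = eq - eu
    d_tq = a_q * t / tau_q**2 * eq
    d_tu = (1 - a_q) * t / tau_u**2 * eu
    return np.column_stack([d_aq, d_tq, d_tu])

J_full = sensitivities(t_grid)

def greedy_d_optimal(J, k):
    chosen = []
    for _ in range(k):
        best, best_det = None, -np.inf
        for i in range(J.shape[0]):
            if i in chosen:
                continue
            Js = J[chosen + [i]]
            det = np.linalg.det(Js.T @ Js + 1e-12 * np.eye(J.shape[1]))
            if det > best_det:
                best, best_det = i, det
        chosen.append(best)
    return sorted(chosen)

subset = greedy_d_optimal(J_full, k=10)
print("D-optimal time points (ns):", np.round(t_grid[subset], 2))
```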
Research on Matching Method of Power Supply Parameters for Dual Energy Source Electric Vehicles
NASA Astrophysics Data System (ADS)
Jiang, Q.; Luo, M. J.; Zhang, S. K.; Liao, M. W.
2018-03-01
A new type of power source is proposed, based on the traffic-signal matching method for a dual energy source power supply composed of batteries and supercapacitors. First, the power characteristics required to achieve the excellent dynamic performance of the EV are analyzed, the energy characteristics required to meet the mileage requirements are studied, and the physical boundary characteristics required to satisfy the physical conditions of the power supply are examined. Secondly, a parameter matching design with the highest energy efficiency is adopted to select the optimal parameter group using the method of matching deviation. Finally, a simulation analysis of the vehicle is carried out in MATLAB/Simulink; the mileage and energy efficiency of the dual energy sources are analyzed for different parameter models, and the rationality of the matching method is verified.
Fuzzy logic control and optimization system
Lou, Xinsheng [West Hartford, CT
2012-04-17
A control system (300) for optimizing a power plant includes a chemical loop having an input for receiving an input signal (369) and an output for outputting an output signal (367), and a hierarchical fuzzy control system (400) operably connected to the chemical loop. The hierarchical fuzzy control system (400) includes a plurality of fuzzy controllers (330). The hierarchical fuzzy control system (400) receives the output signal (367), optimizes the input signal (369) based on the received output signal (367), and outputs an optimized input signal (369) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
Exploring silver as a contrast agent for contrast-enhanced dual-energy X-ray breast imaging
Tsourkas, A; Maidment, A D A
2014-01-01
Objective: Through prior monoenergetic modelling, we have identified silver as a potential alternative to iodine in dual-energy (DE) X-ray breast imaging. The purpose of this study was to compare the performance of silver and iodine contrast agents in a commercially available DE imaging system through a quantitative analysis of signal difference-to-noise ratio (SDNR). Methods: A polyenergetic simulation algorithm was developed to model the signal intensity and noise. The model identified the influence of various technique parameters on SDNR. The model was also used to identify the optimal imaging techniques for silver and iodine, so that the two contrast materials could be objectively compared. Results: The major influences on the SDNR were the low-energy dose fraction and breast thickness. An increase in the value of either of these parameters resulted in a decrease in SDNR. The SDNR for silver was on average 43% higher than that for iodine when imaged at their respective optimal conditions, and 40% higher when both were imaged at the optimal conditions for iodine. Conclusion: A silver contrast agent should provide benefit over iodine, even when translated to the clinic without modification of imaging system or protocol. If the system were slightly modified to reflect the lower k-edge of silver, the difference in SDNR between the two materials would be increased. Advances in knowledge: These data are the first to demonstrate the suitability of silver as a contrast material in a clinical contrast-enhanced DE image acquisition system. PMID:24998157
Yang, Anxiong; Berry, David A; Kaltenbacher, Manfred; Döllinger, Michael
2012-02-01
The human voice signal originates from the vibrations of the two vocal folds within the larynx. The interactions of several intrinsic laryngeal muscles adduct and shape the vocal folds to facilitate vibration in response to airflow. Three-dimensional vocal fold dynamics are extracted from in vitro hemilarynx experiments and fitted by a numerical three-dimensional-multi-mass-model (3DM) using an optimization procedure. In this work, the 3DM dynamics are optimized over 24 experimental data sets to estimate biomechanical vocal fold properties during phonation. Accuracy of the optimization is verified by low normalized error (0.13 ± 0.02), high correlation (83% ± 2%), and reproducible subglottal pressure values. The optimized, 3DM parameters yielded biomechanical variations in tissue properties along the vocal fold surface, including variations in both the local mass and stiffness of vocal folds. That is, both mass and stiffness increased along the superior-to-inferior direction. These variations were statistically analyzed under different experimental conditions (e.g., an increase in tension as a function of vocal fold elongation and an increase in stiffness and a decrease in mass as a function of glottal airflow). The study showed that physiologically relevant vocal fold tissue properties, which cannot be directly measured during in vivo human phonation, can be captured using this 3D-modeling technique. © 2012 Acoustical Society of America
NASA Technical Reports Server (NTRS)
Miller, Robert H. (Inventor); Ribbens, William B. (Inventor)
2003-01-01
A method and system for detecting a failure or performance degradation in a dynamic system having sensors for measuring state variables and providing corresponding output signals in response to one or more system input signals are provided. The method includes calculating estimated gains of a filter and selecting an appropriate linear model for processing the output signals based on the input signals. The step of calculating utilizes one or more models of the dynamic system to obtain estimated signals. The method further includes calculating output error residuals based on the output signals and the estimated signals. The method also includes detecting one or more hypothesized failures or performance degradations of a component or subsystem of the dynamic system based on the error residuals. The step of calculating the estimated values is performed optimally with respect to one or more of: noise, uncertainty of parameters of the models and un-modeled dynamics of the dynamic system which may be a flight vehicle or financial market or modeled financial system.
NASA Astrophysics Data System (ADS)
Zhao, Peng; Tao, Jun; Yu, Chang-rui; Li, Ye
2014-02-01
Based on the technology of tunable diode laser absorption spectroscopy, by modulating the center wavelength of a 2004 nm distributed feedback laser diode at room temperature, the second harmonic amplitude of CO2 at 2004 nm can be obtained. The CO2 concentration can then be calculated via the Beer-Lambert law. The sinusoidal modulation parameters are important factors affecting the sensitivity and accuracy of the system; through research on the relationship between the sinusoidal modulation signal frequency and amplitude and the second harmonic line shape, we finally achieve a detection limit of 10 ppm over a 12 m optical path.
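A simple sketch of the final step is given below: in the weak-absorption regime of the Beer-Lambert law the second-harmonic peak amplitude is proportional to concentration, so an unknown sample can be read off a linear calibration built from reference gas mixtures. All numbers are made up.

```python
# Sketch: relate measured 2f peak amplitude to CO2 concentration via a linear
# calibration, valid in the weak-absorption (linear Beer-Lambert) regime.
# The reference concentrations and amplitudes are invented placeholders.
import numpy as np

# calibration: known CO2 concentrations (ppm) vs. measured 2f amplitudes (a.u.)
c_ref = np.array([0.0, 100.0, 200.0, 400.0, 800.0])
a_ref = np.array([0.002, 0.101, 0.198, 0.405, 0.792])

slope, intercept = np.polyfit(c_ref, a_ref, 1)      # 2f amplitude ~ k * C + b
def concentration(a2f):
    return (a2f - intercept) / slope                # invert the linear calibration

print("unknown sample: %.0f ppm" % concentration(0.25))
# For a 12 m path the proportionality holds while the absorbance
# alpha(nu) * C * L stays well below 1 (weak-absorption approximation).
```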
Yao, L; Cairney, J M; Zhu, C; Ringer, S P
2011-05-01
This paper details the effects of systematic changes to the experimental parameters for atom probe microscopy of microalloyed steels. We have used assessments of the signal-to-noise ratio (SNR), compositional measurements and field desorption images to establish the optimal instrumental parameters. These corresponded to probing at the lowest possible temperature (down to 20K) with the highest possible pulse fraction (up to 30%). A steel containing a fine dispersion of solute atom clusters was used as an archetype to demonstrate the importance of running the atom probe at optimum conditions. Crown Copyright © 2010. Published by Elsevier B.V. All rights reserved.
Orlowska-Kowalska, Teresa; Kaminski, Marcin
2014-01-01
The paper deals with the implementation of optimized neural networks (NNs) for state variable estimation of the drive system with an elastic joint. The signals estimated by NNs are used in the control structure with a state-space controller and additional feedbacks from the shaft torque and the load speed. High estimation quality is very important for the correct operation of a closed-loop system. The precision of state variables estimation depends on the generalization properties of NNs. A short review of optimization methods of the NN is presented. Two techniques typical for regularization and pruning methods are described and tested in detail: the Bayesian regularization and the Optimal Brain Damage methods. Simulation results show good precision of both optimized neural estimators for a wide range of changes of the load speed and the load torque, not only for nominal but also changed parameters of the drive system. The simulation results are verified in a laboratory setup.
Ultrasound Current Source Density Imaging in live rabbit hearts using clinical intracardiac catheter
NASA Astrophysics Data System (ADS)
Li, Qian
Ultrasound Current Source Density Imaging (UCSDI) is a noninvasive modality for mapping electrical activity in the body (brain and heart) in four dimensions (space + time). Conventional cardiac mapping technologies for guiding radiofrequency ablation in the treatment of cardiac arrhythmias have certain limitations. UCSDI can potentially overcome these limitations and enhance electrophysiology mapping of the heart. UCSDI exploits the acoustoelectric (AE) effect, an interaction between ultrasound pressure and electrical resistivity. When an ultrasound beam intersects a current path in a material, the local resistivity of the material is modulated by the ultrasonic pressure, and a change in the voltage signal can be detected based on Ohm's law. The degree of modulation is determined by the AE interaction constant K, a fundamental property of any material that directly affects the amplitude of the AE signal detected in UCSDI. Because UCSDI requires detecting a small AE signal associated with the electrocardiogram, sensitivity is a major challenge for translating UCSDI to the clinic. This dissertation determines the limits of sensitivity and resolution for UCSDI, balances the tradeoff between them by finding the optimal parameters for electrical cardiac mapping, and finally tests the optimized system in a realistic setting. The work begins by describing a technique for measuring K, the AE interaction constant, in ionic solution and biological tissue, and reports the value of K in excised rabbit cardiac tissue for the first time. K was found to be strongly dependent on concentration for the divalent salt CuSO4, but not for the monovalent salt NaCl, consistent with their different chemical properties. In rabbit heart tissue, K was determined to be 0.041 +/- 0.012 %/MPa, similar to the measurement of K in physiologic saline: 0.034 +/- 0.003 %/MPa. Next, the dissertation investigates the sensitivity limit of UCSDI by quantifying the relation between recording electrode distance and measured AE signal amplitude in gel phantoms and excised porcine heart tissue using a clinical intracardiac catheter. The sensitivity of UCSDI with the catheter was 4.7 microV/mA (R2 = 0.999) in cylindrical gel (0.9% NaCl) and 3.2 microV/mA (R2 = 0.92) in porcine heart tissue. The AE signal was detectable more than 25 mm away from the source in the cylindrical gel (0.9% NaCl). The effect of transducer properties on UCSDI sensitivity is also investigated in simulation. The optimal ultrasound transducer parameters chosen for cardiac imaging are a center frequency of 0.5 MHz and an f-number of 1.4. Finally, the dissertation presents the results of implementing the optimized ultrasound parameters in a live rabbit heart preparation, a comparison of different recording electrode configurations, and multichannel UCSDI recording and reconstruction. The AE signal detected using the 0.5 MHz transducer was much stronger (2.99 microV/MPa) than with the 1.0 MHz transducer (0.42 microV/MPa). The clinical lasso catheter placed on the epicardium exhibited excellent sensitivity without being overly invasive. Three-dimensional cardiac activation maps of the live rabbit heart using only one pair of recording electrodes were also demonstrated for the first time. Cardiac conduction velocities for atrial (1.31 m/s) and apical (0.67 m/s) pacing were calculated from the activation maps.
The future outlook of this dissertation includes integrating UCSDI with 2-dimensional ultrasound transducer array for fast imaging, and developing a multi-modality catheter with 4-dimensional UCSDI, multi-electrode recording and echocardiography capacity.
Functional differentiation of human pluripotent stem cells on a chip.
Giobbe, Giovanni G; Michielin, Federica; Luni, Camilla; Giulitti, Stefano; Martewicz, Sebastian; Dupont, Sirio; Floreani, Annarosa; Elvassore, Nicola
2015-07-01
Microengineering human "organs-on-chips" remains an open challenge. Here, we describe a robust microfluidics-based approach for the differentiation of human pluripotent stem cells directly on a chip. Extrinsic signal modulation, achieved through optimal frequency of medium delivery, can be used as a parameter for improved germ layer specification and cell differentiation. Human cardiomyocytes and hepatocytes derived on chips showed functional phenotypes and responses to temporally defined drug treatments.
Automated analysis of biological oscillator models using mode decomposition.
Konopka, Tomasz
2011-04-01
Oscillating signals produced by biological systems have shapes, described by their Fourier spectra, that can potentially reveal the mechanisms that generate them. Extracting this information from measured signals is interesting for the validation of theoretical models, discovery and classification of interaction types, and for optimal experiment design. An automated workflow is described for the analysis of oscillating signals. A software package is developed to match signal shapes to hundreds of a priori viable model structures defined by a class of first-order differential equations. The package computes parameter values for each model by exploiting the mode decomposition of oscillating signals and formulating the matching problem in terms of systems of simultaneous polynomial equations. On the basis of the computed parameter values, the software returns a list of models consistent with the data. In validation tests with synthetic datasets, it not only shortlists those model structures used to generate the data but also shows that excellent fits can sometimes be achieved with alternative equations. The listing of all consistent equations is indicative of how further invalidation might be achieved with additional information. When applied to data from a microarray experiment on mice, the procedure finds several candidate model structures to describe interactions related to the circadian rhythm. This shows that experimental data on oscillators is indeed rich in information about gene regulation mechanisms. The software package is available at http://babylone.ulb.ac.be/autoosc/.
Sequential Bayesian geoacoustic inversion for mobile and compact source-receiver configuration.
Carrière, Olivier; Hermand, Jean-Pierre
2012-04-01
Geoacoustic characterization of wide areas through inversion requires easily deployable configurations, including free-drifting platforms, underwater gliders and autonomous vehicles, typically performing repeated transmissions during their course. In this paper, the inverse problem is formulated as sequential Bayesian filtering to take advantage of repeated transmission measurements. Nonlinear Kalman filters implement a random-walk model for the geometry and environment and an acoustic propagation code in the measurement model. Data from the MREA/BP07 sea trials are tested, consisting of multitone and frequency-modulated signals (bands: 0.25-0.8 and 0.8-1.6 kHz) received on a shallow vertical array of four hydrophones spaced 5 m apart, drifting over 0.7-1.6 km in range. Space- and time-coherent processing are applied to the respective signal types. Kalman filter outputs are compared to a sequence of global optimizations performed independently on each received signal. For both signal types, the sequential approach is not only more accurate but also more efficient. Owing to frequency diversity, the processing of modulated signals produces more stable tracking. Although an extended Kalman filter provides comparable estimates of the tracked parameters, the ensemble Kalman filter is necessary to properly assess uncertainty. In spite of mild range dependence and a simplified bottom model, all tracked geoacoustic parameters are consistent with high-resolution seismic profiling, core-logging P-wave velocity, and previous inversion results with fixed geometries.
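A minimal sketch of one ensemble Kalman filter cycle with a random-walk parameter model, in the spirit of the tracking described above; `forward_model` stands in for the acoustic propagation code and all noise levels are assumed inputs.

```python
import numpy as np

def enkf_step(ensemble, y_obs, forward_model, obs_noise_std, walk_std):
    """One ensemble Kalman filter cycle for random-walk geoacoustic tracking.
    ensemble: (N, n_params) current parameter ensemble; y_obs: (n_obs,) data."""
    # Prediction: random-walk model for geometry/environment parameters
    ens_f = ensemble + walk_std * np.random.randn(*ensemble.shape)
    # Map each member through the (placeholder) acoustic propagation model
    Y = np.array([forward_model(m) for m in ens_f])          # (N, n_obs)
    y_pert = y_obs + obs_noise_std * np.random.randn(*Y.shape)
    # Sample covariances and Kalman gain
    X = ens_f - ens_f.mean(axis=0)
    D = Y - Y.mean(axis=0)
    Pxy = X.T @ D / (len(ens_f) - 1)
    Pyy = D.T @ D / (len(ens_f) - 1) + obs_noise_std**2 * np.eye(Y.shape[1])
    K = Pxy @ np.linalg.inv(Pyy)
    return ens_f + (y_pert - Y) @ K.T                        # analysis ensemble
```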
NASA Astrophysics Data System (ADS)
Issiaka Traore, Oumar; Cristini, Paul; Favretto-Cristini, Nathalie; Pantera, Laurent; Viguier-Pla, Sylvie
2018-01-01
In the context of nuclear safety experiment monitoring with the non-destructive testing method of acoustic emission, we study the impact of the test device on the interpretation of the recorded physical signals by using spectral finite element modeling. The numerical results are validated by comparison with real acoustic emission data obtained from previous experiments. The results show that several parameters can have significant impacts on acoustic wave propagation and hence on the interpretation of the physical signals. The potential position of the source mechanism, the positions of the receivers and the nature of the coolant fluid have to be taken into account in the definition of a pre-processing strategy for the real acoustic emission signals. In order to show the relevance of such an approach, we use the results to propose an optimization of the positions of the acoustic emission sensors that reduces the time-delay estimation bias and thus improves the localization of the source mechanisms.
Dynamic positioning configuration and its first-order optimization
NASA Astrophysics Data System (ADS)
Xue, Shuqiang; Yang, Yuanxi; Dang, Yamin; Chen, Wu
2014-02-01
Traditional geodetic network optimization deals with static and discrete control points. The modern space geodetic network, on the other hand, is composed of moving control points in space (satellites) and on the Earth (ground stations). The network configuration composed of these facilities is essentially dynamic and continuous. Moreover, besides the position parameters that need to be estimated, other geophysical information or signals can also be extracted from the continuous observations. The dynamic (continuous) configuration of the space network determines whether a particular frequency of signals can be identified by the system. In this paper, we employ functional analysis and graph theory to study the dynamic configuration of space geodetic networks, focusing mainly on the optimal estimation of position and clock-offset parameters. The principle of D-optimization is introduced in the Hilbert space after the concept of the traditional discrete configuration is generalized from finite to infinite space. It is shown that the D-optimization developed for discrete optimization remains valid for dynamic configuration optimization, which is attributed to the natural generalization of least squares from Euclidean space to Hilbert space. We then introduce the principle of D-optimality invariance under combination and rotation operations, and propose some D-optimal simplex dynamic configurations: (1) the (semi-)circular configuration in 2-dimensional space; and (2) the D-optimal cone configuration and the D-optimal helical configuration, which is close to the GPS constellation, in 3-dimensional space. The initial design of the GPS constellation can be approximately treated as a combination of 24 D-optimal helixes obtained by properly adjusting the ascending nodes of the satellites to realize a so-called Walker constellation. In the case of estimating the receiver clock-offset parameter, we show that the circular configuration, the symmetrical cone configuration and the helical curve configuration are still D-optimal. It is shown that a given total observation time determines the optimal frequency (repeatability) of the moving known points and vice versa, and that one way to improve the repeatability is to increase the rotational speed. Under Newton's laws of motion, the frequency of satellite motion determines the orbital altitude. Furthermore, we study three kinds of complex dynamic configurations: the first is the combination of D-optimal cone configurations and a so-called Walker constellation composed of D-optimal helical configurations; the second is the nested cone configuration composed of n cones; and the third is the nested helical configuration composed of n orbital planes. It is shown that an effective way to achieve high coverage is to employ a configuration composed of a certain number of moving known points instead of a simplex configuration (such as the D-optimal helical configuration), and that the D-optimal simplex solutions or D-optimal complex configurations can be combined in any way to achieve powerful configurations with flexible coverage and flexible repeatability. Finally, how to optimally generate and assess discrete configurations sampled from the continuous one is discussed.
The proposed configuration optimization framework takes into account the well-known regular polygons (such as the equilateral triangle and the square) in two-dimensional space and the regular polyhedra (regular tetrahedron, cube, regular octahedron, regular icosahedron, and regular dodecahedron). It is shown that the conclusions drawn with the proposed technique are more general and no longer limited by particular sampling schemes. Using the conditional equation of the D-optimal nested helical configuration, the relevant issues of GNSS constellation optimization are solved, and examples based on the GPS constellation are presented to verify the validity of the newly proposed optimization technique. The proposed technique is potentially helpful in the maintenance and quadratic optimization of a single GNSS whose orbital inclination and altitude change under precession, as well as in optimally nesting GNSSs to achieve globally homogeneous coverage of the Earth.
NASA Astrophysics Data System (ADS)
Gaillot, P.; Bardaine, T.; Lyon-Caen, H.
2004-12-01
In recent years, various automatic phase pickers based on the wavelet transform have been developed. The main motivation for using wavelet transforms is that they are excellent at finding the characteristics of transient signals, they have good time resolution at all periods, and they are easy to program for fast execution. The time-scale properties and flexibility of wavelets thus allow detection of P and S phases in a broad frequency range, making their use possible in various contexts. However, directly applying an automatic picking program in a different context or network than the one for which it was initially developed quickly becomes tedious. In fact, independently of the strategy involved in automatic picking algorithms (window averaging, autoregressive modeling, beamforming, optimization filtering, neural networks), all of them use parameters that depend on the objective of the seismological study, the region and the seismological network. Classically, these parameters are defined manually by trial and error or calibrated in a learning stage. In order to facilitate this laborious process, we have developed an automated method that provides optimal parameters for the picking programs. The set of parameters can be explored using simulated annealing, a generic name for a family of optimization algorithms based on the principle of stochastic relaxation. The optimization process amounts to systematically modifying an initial realization so as to decrease the value of the objective function, bringing the realization acceptably close to the target statistics. Different formulations of the optimization problem (objective function) are discussed using (1) world seismicity data recorded by the French national seismic monitoring network (ReNass), (2) regional seismicity data recorded in the framework of the Corinth Rift Laboratory (CRL) experiment, (3) induced seismicity data from the gas field of Lacq (Western Pyrenees), and (4) micro-seismicity data from glacier monitoring. The developed method is discussed and tested using our wavelet version of the standard STA/LTA algorithm.
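A minimal sketch of how simulated annealing can explore picker parameters (for instance STA/LTA window lengths and a trigger threshold); the objective function comparing automatic picks to reference picks is left as a user-supplied placeholder, and the cooling schedule is generic rather than the one used in the cited work.

```python
import numpy as np

def anneal_picker_params(objective, bounds, n_iter=2000, t0=1.0, cooling=0.995, seed=0):
    """Simulated annealing over automatic-picker parameters, e.g.
    (sta_window_s, lta_window_s, trigger_threshold). `objective(params)`
    must return the misfit between automatic and reference picks (lower is better)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = lo + rng.random(len(bounds)) * (hi - lo)
    fx = objective(x)
    best, fbest, T = x.copy(), fx, t0
    for _ in range(n_iter):
        cand = np.clip(x + 0.05 * (hi - lo) * rng.standard_normal(len(x)), lo, hi)
        fc = objective(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / T):   # Metropolis acceptance
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        T *= cooling                                          # geometric cooling schedule
    return best, fbest
```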
Pfefer, T Joshua; Wang, Quanzeng; Drezek, Rebekah A
2011-11-01
Computational approaches for simulation of light-tissue interactions have provided extensive insight into biophotonic procedures for diagnosis and therapy. However, few studies have addressed simulation of time-resolved fluorescence (TRF) in tissue and none have combined Monte Carlo simulations with standard TRF processing algorithms to elucidate approaches for cancer detection in layered biological tissue. In this study, we investigate how illumination-collection parameters (e.g., collection angle and source-detector separation) influence the ability to measure fluorophore lifetime and tissue layer thickness. Decay curves are simulated with a Monte Carlo TRF light propagation model. Multi-exponential iterative deconvolution is used to determine lifetimes and fractional signal contributions. The ability to detect changes in mucosal thickness is optimized by probes that selectively interrogate regions superficial to the mucosal-submucosal boundary. Optimal accuracy in simultaneous determination of lifetimes in both layers is achieved when each layer contributes 40-60% of the signal. These results indicate that depth-selective approaches to TRF have the potential to enhance disease detection in layered biological tissue and that modeling can play an important role in probe design optimization. Published by Elsevier Ireland Ltd.
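For illustration, a hedged sketch of the multi-exponential lifetime fitting step: a bi-exponential decay is fitted to a synthetic noisy curve and fractional signal contributions are computed from the amplitude-lifetime products. The instrument-response deconvolution used in the cited study is omitted, and all numbers are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Two-component fluorescence decay model."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic decay (illustrative lifetimes in ns)
t = np.linspace(0, 20, 400)
y = biexp(t, 0.6, 1.5, 0.4, 6.0) + 0.01 * np.random.randn(t.size)

popt, _ = curve_fit(biexp, t, y, p0=[0.5, 1.0, 0.5, 5.0],
                    bounds=([0, 0.01, 0, 0.01], [np.inf, 50, np.inf, 50]))
a1, tau1, a2, tau2 = popt
frac1 = a1 * tau1 / (a1 * tau1 + a2 * tau2)   # fractional signal contribution of component 1
print(f"tau1={tau1:.2f} ns, tau2={tau2:.2f} ns, fraction_1={frac1:.2f}")
```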
Robust and transferable quantification of NMR spectral quality using IROC analysis
NASA Astrophysics Data System (ADS)
Zambrello, Matthew A.; Maciejewski, Mark W.; Schuyler, Adam D.; Weatherby, Gerard; Hoch, Jeffrey C.
2017-12-01
Non-Fourier methods are increasingly utilized in NMR spectroscopy because of their ability to handle nonuniformly-sampled data. However, non-Fourier methods present unique challenges due to their nonlinearity, which can produce nonrandom noise and render conventional metrics for spectral quality such as signal-to-noise ratio unreliable. The lack of robust and transferable metrics (i.e. applicable to methods exhibiting different nonlinearities) has hampered comparison of non-Fourier methods and nonuniform sampling schemes, preventing the identification of best practices. We describe a novel method, in situ receiver operating characteristic analysis (IROC), for characterizing spectral quality based on the Receiver Operating Characteristic curve. IROC utilizes synthetic signals added to empirical data as "ground truth", and provides several robust scalar-valued metrics for spectral quality. This approach avoids problems posed by nonlinear spectral estimates, and provides a versatile quantitative means of characterizing many aspects of spectral quality. We demonstrate applications to parameter optimization in Fourier and non-Fourier spectral estimation, critical comparison of different methods for spectrum analysis, and optimization of nonuniform sampling schemes. The approach will accelerate the discovery of optimal approaches to nonuniform sampling experiment design and non-Fourier spectrum analysis for multidimensional NMR.
Simulation of optical signaling among nano-bio-sensors: enhancing of bioimaging contrast.
SalmanOgli, A; Behzadi, S; Rostami, A
2014-09-01
In this article, nanoparticle-dye systems are designed and simulated to illustrate the possibility of enhancing optical imaging contrast. The firefly optimization technique is used as an optical signaling mechanism among agents (nanoparticle-dye), because fireflies attract one another through the flashing light and optical signaling produced by bioluminescence (it has also been suggested that other factors, such as neural response and brain function, play an essential role in attracting fireflies to each other). The first mechanism coincides with our work, because the nanoparticle-dye systems are able to augment the received light, and this amplification causes the designed complex system to act as a bright particle. This induced behavior of nanoparticles can be considered as optical communication and signaling. Indeed, by functionalizing the nanoparticles, and owing to the higher brightness of the tumor site under active targeting, other particles can be guided toward the target point, with the signaling among agents performed through an optical relation similar to that of fireflies in nature. Moreover, the foundation of this work is the use of surface plasmon resonance and plasmon hybridization, by which photonic signals can be manipulated on the nanoscale and used in biomedical applications such as electromagnetic field enhancement. Finally, by simultaneously using plasmon hybridization, near-field augmentation, and the firefly algorithm, the optical imaging contrast can be impressively improved.
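Since the paper uses the firefly analogy for optical signaling rather than as a numerical optimizer, the following is only a generic firefly-algorithm sketch for readers unfamiliar with the technique; the brightness function, bounds and coefficients are placeholders.

```python
import numpy as np

def firefly_optimize(brightness, dim, n_fireflies=20, n_iter=100,
                     beta0=1.0, gamma=1.0, alpha=0.2, bounds=(-5, 5)):
    """Basic firefly algorithm: dimmer fireflies move toward brighter ones
    with attractiveness beta0*exp(-gamma*r^2) plus a small random step."""
    rng = np.random.default_rng(1)
    lo, hi = bounds
    X = lo + rng.random((n_fireflies, dim)) * (hi - lo)
    I = np.array([brightness(x) for x in X])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if I[j] > I[i]:                       # j is brighter, so i moves toward j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                    X[i] = np.clip(X[i], lo, hi)
                    I[i] = brightness(X[i])
    return X[np.argmax(I)], I.max()

# Toy usage: maximize a placeholder brightness (peak at the origin)
best_x, best_I = firefly_optimize(lambda x: -np.sum(x**2), dim=3)
```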
Xu, Jing; Wang, Zhongbin; Tan, Chao; Liu, Xinhua
2018-01-01
Because a sound signal offers the advantages of non-contact measurement, compact structure, and low power consumption, it has attracted much attention in many fields. In this paper, the sound signal of a coal mining shearer is analyzed to realize accurate online cutting pattern identification and guarantee the safety and quality of the working face. The original acoustic signal is first collected through an industrial microphone and decomposed by adaptive ensemble empirical mode decomposition (EEMD). A 13-dimensional set composed of the normalized energy of each level is then extracted as the feature vector. Next, a swarm intelligence optimization algorithm inspired by bat foraging behavior is applied to determine the key parameters of the traditional variable translation wavelet neural network (VTWNN). Moreover, a disturbance coefficient is introduced into the basic bat algorithm (BA) to overcome its disadvantages of easily falling into local extrema and limited exploration ability. The VTWNN optimized by the modified BA (VTWNN-MBA) is used as the cutting pattern recognizer. Finally, a simulation example, with an accuracy of 95.25%, and a series of comparisons are conducted to prove the effectiveness and superiority of the proposed method. PMID:29382120
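A minimal sketch of the normalized-energy feature vector described above, assuming the EEMD decomposition into intrinsic mode functions has already been performed (e.g., with a third-party EMD package); the placeholder IMF array and the retention of 13 levels are illustrative, and the bat-optimized VTWNN classifier is not shown.

```python
import numpy as np

def normalized_energy_features(imfs):
    """Given EEMD intrinsic mode functions (n_imfs, n_samples), return the
    energy of each level normalized by the total energy. With 13 retained
    levels this yields a 13-dimensional feature vector."""
    energies = np.sum(imfs ** 2, axis=1)
    return energies / energies.sum()

# imfs = EEMD().eemd(signal)   # decomposition assumed to be done elsewhere
imfs = np.random.randn(13, 4096)             # placeholder IMFs for illustration
features = normalized_energy_features(imfs)
```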
Integrated model of multiple kernel learning and differential evolution for EUR/USD trading.
Deng, Shangkun; Sakurai, Akito
2014-01-01
Currency trading is an important area for individual investors, government policy decisions, and organization investments. In this study, we propose a hybrid approach referred to as MKL-DE, which combines multiple kernel learning (MKL) with differential evolution (DE) for trading a currency pair. MKL is used to learn a model that predicts changes in the target currency pair, whereas DE is used to generate the buy and sell signals for the target currency pair based on the relative strength index (RSI) combined with the MKL prediction as the trading signal. The new hybrid implementation is applied to EUR/USD trading, which is the most traded foreign exchange (FX) currency pair. MKL is essential for utilizing information from multiple information sources, and DE is essential for formulating a trading rule based on a mixture of discrete structures and continuous parameters. Initially, the prediction model optimized by MKL predicts the returns based on a technical indicator called the moving average convergence and divergence (MACD). Next, a combined trading signal is optimized by DE using the inputs from the prediction model and the technical indicator RSI obtained from multiple timeframes. The experimental results showed that trading using the prediction learned by MKL yielded consistent profits.
NASA Astrophysics Data System (ADS)
Perera, Dimuthu
Diffusion-weighted (DW) imaging is a non-invasive MR technique that provides information about tissue microstructure using the diffusion of water molecules. The diffusion is generally characterized by the apparent diffusion coefficient (ADC) parametric map. The purpose of this study is to investigate in silico how the calculation of the ADC is affected by image SNR, b-values, and the true tissue ADC; to provide optimal parameter combinations, in terms of percentage accuracy and precision, for prostate peripheral-zone cancer applications; and to suggest parameter choices for any type of tissue together with the expected accuracy and precision. In this research, DW images were generated assuming a mono-exponential signal model at two different b-values and for known true ADC values. Rician noise of different levels was added to the DW images to adjust the image SNR. Using the two DW images, the ADC was calculated with a mono-exponential model for each set of b-values, SNR, and true ADC. For each parameter setting, 40,000 ADC values were collected to determine the mean and standard deviation of the calculated ADC, as well as the percentage accuracy and precision with respect to the true ADC. The accuracy was calculated from the difference between the known and calculated ADC, and the precision from the standard deviation of the calculated ADC. The optimal parameters for a specific study were determined when both the percentage accuracy and precision were minimized. In our study, we simulated two true ADCs (0.00102 mm2/s for tumor and 0.00180 mm2/s for normal prostate peripheral-zone tissue). Image SNR was varied from 2 to 100 and b-values were varied from 0 to 2000 s/mm2. The results show that the percentage accuracy and percentage precision decreased with increasing image SNR. To increase SNR, 10 signal averages (NEX) were used, considering the limitation on total scan time. The optimal NEX combination for tumor and normal tissue in the prostate peripheral zone was 1:9. The minimum percentage accuracy and percentage precision were obtained when the low b-value was 0 and the high b-value was 800 s/mm2 for normal tissue and 1400 s/mm2 for tumor tissue. The results also showed that for tissues with 1 x 10-3 < ADC < 2.1 x 10-3 mm2/s, the parameter combination SNR = 20, b-value pair (0, 800 s/mm2) with NEX = 1:9 can determine the ADC with a percentage accuracy of less than 2% and a percentage precision of 6-8%. Similarly, for tissues with 0.6 x 10-3 < ADC < 1.25 x 10-3 mm2/s, the parameter combination SNR = 20, b-value pair (0, 1400 s/mm2) with NEX = 1:9 can determine the ADC with a percentage accuracy of less than 2% and a percentage precision of 6-8%.
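A minimal sketch of the two-point ADC Monte Carlo described above: mono-exponential signals at two b-values are corrupted with Rician noise and the percentage accuracy (bias) and precision (spread) are computed against the true ADC. The values in the example call mirror the tumor case discussed, but the function and seed are illustrative.

```python
import numpy as np

def adc_accuracy_precision(true_adc, b_low, b_high, snr, n_trials=40000, s0=1.0, seed=0):
    """Two-point ADC from a mono-exponential DWI model with Rician noise:
    ADC = ln(S_low / S_high) / (b_high - b_low). Returns (% accuracy, % precision)."""
    rng = np.random.default_rng(seed)
    sigma = s0 / snr
    s_true = s0 * np.exp(-np.array([b_low, b_high]) * true_adc)          # noise-free signals
    noisy = np.sqrt((s_true + sigma * rng.standard_normal((n_trials, 2))) ** 2
                    + (sigma * rng.standard_normal((n_trials, 2))) ** 2)  # Rician magnitude
    adc = np.log(noisy[:, 0] / noisy[:, 1]) / (b_high - b_low)
    accuracy = 100 * abs(adc.mean() - true_adc) / true_adc               # bias relative to truth
    precision = 100 * adc.std() / true_adc                               # spread relative to truth
    return accuracy, precision

# Illustrative call with a tumor-like ADC and b-value pair
print(adc_accuracy_precision(true_adc=1.02e-3, b_low=0, b_high=1400, snr=20))
```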
NASA Astrophysics Data System (ADS)
Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei
2018-03-01
The forecasting skill of complex weather and climate models has been improved by tuning, with more effective optimization methods, the sensitive parameters that exert the greatest impact on simulated results. However, whether the optimal parameter values still work when the model simulation conditions vary remains a scientific question deserving study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters in WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of the optimal parameters, indicating how they improve the precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations of summer precipitation in the Greater Beijing Area because they are not constrained by specific precipitation events, boundary conditions, or spatial resolutions. The optimal values of the nine parameters were determined from 127 parameter samples using the ASMO method, which shows that ASMO is highly efficient for optimizing WRF model parameters.
Kent, A R; Grill, W M
2012-06-01
The clinical efficacy of deep brain stimulation (DBS) for the treatment of movement disorders depends on the identification of appropriate stimulation parameters. Since the mechanisms of action of DBS remain unclear, programming sessions can be time consuming, costly and result in sub-optimal outcomes. Measurement of electrically evoked compound action potentials (ECAPs) during DBS, generated by activated neurons in the vicinity of the stimulating electrode, could offer insight into the type and spatial extent of neural element activation and provide a potential feedback signal for the rational selection of stimulation parameters and closed-loop DBS. However, recording ECAPs presents a significant technical challenge due to the large stimulus artefact, which can saturate recording amplifiers and distort short latency ECAP signals. We developed DBS-ECAP recording instrumentation combining commercial amplifiers and circuit elements in a serial configuration to reduce the stimulus artefact and enable high fidelity recording. We used an electrical circuit equivalent model of the instrumentation to understand better the sources of the stimulus artefact and the mechanisms of artefact reduction by the circuit elements. In vitro testing validated the capability of the instrumentation to suppress the stimulus artefact and increase gain by a factor of 1000 to 5000 compared to a conventional biopotential amplifier. The distortion of mock ECAP (mECAP) signals was measured across stimulation parameters, and the instrumentation enabled high fidelity recording of mECAPs with latencies of only 0.5 ms for DBS pulse widths of 50 to 100 µs/phase. Subsequently, the instrumentation was used to record in vivo ECAPs, without contamination by the stimulus artefact, during thalamic DBS in an anesthetized cat. The characteristics of the physiological ECAP were dependent on stimulation parameters. The novel instrumentation enables high fidelity ECAP recording and advances the potential use of the ECAP as a feedback signal for the tuning of DBS parameters.
Artificial blood circulation: stabilization, physiological control, and optimization.
Lerner, A Y
1990-04-01
The requirements for creating an efficient Artificial Blood Circulation System (ABCS) have been determined. A hierarchical three-level adaptive control system is suggested for ABCS to solve the following problems: stabilization of the circulation conditions, left and right pump coordination, physiological control for maintaining a proper relation between the cardiac output and the level of gas exchange required for metabolism, and optimization of the system behavior. The adaptations to varying load and body parameters will be accomplished using the signals which characterize the real-time computer-processed values of correlations between the changes in hydraulic resistance of blood vessels, or the changes in aortic pressure, and the oxygen (or carbon dioxide) concentration.
Theory and design of interferometric synthetic aperture radars
NASA Technical Reports Server (NTRS)
Rodriguez, E.; Martin, J. M.
1992-01-01
A derivation of the signal statistics, an optimal estimator of the interferometric phase, and the expression necessary to calculate the height-error budget are presented. These expressions are used to derive methods of optimizing the parameters of the interferometric synthetic aperture radar system (InSAR), and are then employed in a specific design example for a system to perform high-resolution global topographic mapping with a one-year mission lifetime, subject to current technological constraints. A Monte Carlo simulation of this InSAR system is performed to evaluate its performance for realistic topography. The results indicate that this system has the potential to satisfy the stringent accuracy and resolution requirements for geophysical use of global topographic data.
Communication theory of quantum systems. Ph.D. Thesis, 1970
NASA Technical Reports Server (NTRS)
Yuen, H. P. H.
1971-01-01
Communication theory problems incorporating quantum effects for optical-frequency applications are discussed. Under suitable conditions, a unique quantum channel model corresponding to a given classical space-time varying linear random channel is established. A procedure is described by which a proper density-operator representation applicable to any receiver configuration can be constructed directly from the channel output field. Some examples illustrating the application of our methods to the development of optical quantum channel representations are given. Optimizations of communication system performance under different criteria are considered. In particular, certain necessary and sufficient conditions on the optimal detector in M-ary quantum signal detection are derived. Some examples are presented. Parameter estimation and channel capacity are discussed briefly.
Ebrahimi Zarandi, Mohammad Javad; Sohrabi, Mahmoud Reza; Khosravi, Morteza; Mansouriieh, Nafiseh; Davallo, Mehran; Khosravan, Azita
2016-01-01
This study synthesized magnetic nanoparticles (Fe(3)O(4)) immobilized on activated carbon (AC) and used them as an effective adsorbent for Cu(II) removal from aqueous solution. The effects of three parameters, namely the Cu(II) concentration, the dosage of the Fe(3)O(4)/AC magnetic nanocomposite, and the pH, on the removal of Cu(II) using the Fe(3)O(4)/AC nanocomposite were studied. To examine and describe the optimum condition for each of these parameters, Taguchi's optimization method was used in a batch system, with an L9 orthogonal array for the experimental design. The removal percentage (R%) of Cu(II) and the uptake capacity (q) were transformed into a signal-to-noise ratio (S/N) for a 'larger-the-better' response. The Taguchi results, analyzed by choosing the best run according to the S/N ratio, were statistically tested using analysis of variance; the tests showed that the main effects of all parameters were significant at the 95% confidence level. The best conditions for removal of Cu(II) were determined to be a pH of 7, a nanocomposite dosage of 0.1 g L(-1) and an initial Cu(II) concentration of 20 mg L(-1) at a constant temperature of 25 °C. Overall, the results showed that the simple Taguchi method is suitable for optimizing the Cu(II) removal experiments.
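For reference, a sketch of the 'larger-the-better' signal-to-noise ratio used to rank Taguchi runs; the replicate responses in the example are invented for illustration.

```python
import numpy as np

def sn_larger_is_better(responses):
    """Taguchi larger-the-better signal-to-noise ratio:
    S/N = -10 * log10( mean(1 / y_i^2) ), for replicate responses y_i > 0."""
    y = np.asarray(responses, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Example: removal percentages from replicate runs of one L9 trial (illustrative values)
print(sn_larger_is_better([86.4, 88.1, 87.2]))
```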
NASA Astrophysics Data System (ADS)
Vaskovskaya, T. A.
2014-12-01
This paper offers a new approach to the analysis of price signals in the wholesale electricity and capacity market, based on analyzing the influence of the input data used in the optimization of power system operating conditions, namely the parameters of the power grid and power-receiving equipment that may vary under the action of control devices. It is shown that it would be possible to control nonregulated electricity prices in the wholesale electricity market by varying the parameters of control devices and energy-receiving equipment. Increased effectiveness of power transmission and cost-effective use of fuel and energy resources (energy saving) can be additional effects of controlling the nonregulated prices.
Reconstruction of pulse noisy images via stochastic resonance
Han, Jing; Liu, Hongjun; Sun, Qibing; Huang, Nan
2015-01-01
We investigate a practical technology for reconstructing nanosecond pulse noisy images via stochastic resonance, which is based on the modulation instability. A theoretical model of this method for optical pulse signal is built to effectively recover the pulse image. The nanosecond noise-hidden images grow at the expense of noise during the stochastic resonance process in a photorefractive medium. The properties of output images are mainly determined by the input signal-to-noise intensity ratio, the applied voltage across the medium, and the correlation length of noise background. A high cross-correlation gain is obtained by optimizing these parameters. This provides a potential method for detecting low-level or hidden pulse images in various imaging applications. PMID:26067911
NASA Technical Reports Server (NTRS)
Townes, C. H.
1979-01-01
Searches for extraterrestrial intelligence concentrate on attempts to receive signals in the microwave region, the argument being given that communication occurs there at minimum broadcasted power. Such a conclusion is shown to result only under a restricted set of assumptions. If generalized types of detection are considered, in particular photon detection rather than linear detection alone, and if advantage is taken of the directivity of telescopes at short wavelengths, then somewhat less power is required for communication at infrared wavelengths than in the microwave region. Furthermore, a variety of parameters other than power alone can be chosen for optimization by an extraterrestrial civilization.
NASA Astrophysics Data System (ADS)
Welsh, Byron M.; Reeves, Toby D.; Roggemann, Michael C.
1997-09-01
The ability to measure atmospheric turbulence characteristics such as Fried's coherence diameter, the outer scale of turbulence, and the turbulence power law is critical for the optimized operation of adaptive optical telescopes. One approach for sensing these turbulence parameters is to use a Hartmann wavefront sensor (H-WFS) array to measure the wavefront slope structure function (SSF). The SSF is defined as the second moment of the wavefront slope difference between any two subapertures separated in time and/or space. Accurate knowledge of the SSF allows turbulence parameters to be estimated. The H-WFS slope measurements, composed of a true slope signal corrupted by noise, are used to estimate the SSF by computing a mean-square difference of slope signals from different subapertures. This computation is typically performed over a large number of H-WFS measurement frames. The quality of the SSF estimate is quantified by the signal-to-noise ratio (SNR) of the estimator, and can in turn be related to the quality of the atmospheric turbulence parameter estimates. This research develops a theoretical SNR expression for the SSF estimator. This SNR is a function of the H-WFS geometry, the number of temporal measurement frames, the outer scale of turbulence, the turbulence spectrum power law, and the temporal properties of the turbulence. Results are presented for various H-WFS configurations and atmospheric turbulence properties.
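A minimal sketch of the SSF estimator described above: the mean-square difference of slope signals from two subapertures, averaged over measurement frames; the random data stand in for real H-WFS slopes and no temporal separation is included.

```python
import numpy as np

def slope_structure_function(slopes, i, j):
    """Estimate the SSF between subapertures i and j from H-WFS data.
    slopes: (n_frames, n_subaps) measured slope signals (signal + noise).
    Returns the mean-square slope difference averaged over frames."""
    d = slopes[:, i] - slopes[:, j]
    return np.mean(d ** 2)

# Illustrative data: 1000 frames, 64 subapertures of placeholder slopes
slopes = np.random.randn(1000, 64)
print(slope_structure_function(slopes, 3, 17))
```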
Maurya, Mano Ram; Subramaniam, Shankar
2007-01-01
Calcium (Ca2+) is an important second messenger and has been the subject of numerous experimental measurements and mechanistic studies in intracellular signaling. Calcium profile can also serve as a useful cellular phenotype. Kinetic models of calcium dynamics provide quantitative insights into the calcium signaling networks. We report here the development of a complex kinetic model for calcium dynamics in RAW 264.7 cells stimulated by the C5a ligand. The model is developed using the vast number of measurements of in vivo calcium dynamics carried out in the Alliance for Cellular Signaling (AfCS) Laboratories. Ligand binding, phospholipase C-β (PLC-β) activation, inositol 1,4,5-trisphosphate (IP3) receptor (IP3R) dynamics, and calcium exchange with mitochondria and extracellular matrix have all been incorporated into the model. The experimental data include data from both native and knockdown cell lines. Subpopulational variability in measurements is addressed by allowing nonkinetic parameters to vary across datasets. The model predicts temporal response of Ca2+ concentration for various doses of C5a under different initial conditions. The optimized parameters for IP3R dynamics are in agreement with the legacy data. Further, the half-maximal effect concentration of C5a and the predicted dose response are comparable to those seen in AfCS measurements. Sensitivity analysis shows that the model is robust to parametric perturbations. PMID:17483174
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yu; Cao, Ruifen; Pei, Xi
2015-06-15
Purpose: The flat-panel detector response characteristics are investigated to optimize the scanning parameters with respect to image quality and reduced radiation dose. A signal conversion model is also established to predict tumor shape and physical thickness changes. Methods: With the ELEKTA XVI system, planar images of a 10 cm water phantom were obtained under different image acquisition conditions, including tube voltage, current, exposure time and number of frames. The averaged responses of a square area at the center were analyzed using Origin 8.0. The response characteristics for each scanning parameter were described by different fitting types. The transmission measured for 10 cm of water was compared to a Monte Carlo simulation. Using the quadratic calibration method, a series of variable-thickness water phantom images was acquired to derive the signal conversion model. A 20 cm wedge water phantom with 2 cm step thickness was used to verify the model. Finally, the stability and reproducibility of the model were explored over a four-week period. Results: The gray values at the image center all decreased with increasing values of the image acquisition parameters. The fitting types adopted were linear, quadratic polynomial, Gaussian and logarithmic, with R-squared values of 0.992, 0.995, 0.997 and 0.996, respectively. For the 10 cm water phantom, the measured transmission showed better uniformity than the Monte Carlo simulation. The wedge phantom experiment showed that the prediction error for radiological thickness changes was in the range of (-4 mm, 5 mm). The signal conversion model remained consistent over a period of four weeks. Conclusion: The flat-panel response decreases as the scanning parameters increase. The preferred scanning parameter combination was 100 kV, 10 mA, 10 ms, 15 frames. It is suggested that the signal conversion model could effectively be used for predicting tumor shape changes and radiological thickness. Supported by National Natural Science Foundation of China (81101132, 11305203) and Natural Science Foundation of Anhui Province (11040606Q55, 1308085QH138).
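A hedged sketch of a quadratic signal-conversion calibration in the spirit described above: known water thicknesses are related to a (synthetic) log-transmission response and the fitted polynomial is inverted to predict radiological thickness. The coefficients and noise level are placeholders, not the calibration of the ELEKTA XVI panel.

```python
import numpy as np

# Calibration data: known water thicknesses (cm) vs measured log-transmission signals
thickness = np.array([0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20], dtype=float)
log_signal = -0.19 * thickness + 0.002 * thickness**2 + \
             0.005 * np.random.randn(thickness.size)      # synthetic panel response

# Fit the quadratic signal-conversion model: log_signal -> thickness
coeffs = np.polyfit(log_signal, thickness, deg=2)

def radiological_thickness(measured_log_signal):
    """Predict water-equivalent thickness from a pixel's log-transmission."""
    return np.polyval(coeffs, measured_log_signal)

print(radiological_thickness(-0.19 * 9 + 0.002 * 81))      # ~9 cm expected
```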
3D equilibrium reconstruction with islands
Cianciosa, M.; Hirshman, S. P.; Seal, S. K.; ...
2018-02-15
This study presents the development of a 3D equilibrium reconstruction tool and the results of the first-ever reconstruction of an island equilibrium. The SIESTA non-nested equilibrium solver has been coupled to the V3FIT 3D equilibrium reconstruction code. Computed from a coupled VMEC and SIESTA model, synthetic signals are matched to measured signals by finding an optimal set of equilibrium parameters. By using the normalized pressure in place of normalized flux, non-equilibrium quantities needed by diagnostic signals can be efficiently mapped to the equilibrium. The effectiveness of this tool is demonstrated by reconstructing an island equilibrium of a DIII-D inner-wall-limited L-mode case with an n = 1 error field applied. Finally, flat spots in Thomson and ECE temperature diagnostics show the reconstructed islands have the correct size and phase.
NASA Astrophysics Data System (ADS)
Kong, Yun; Wang, Tianyang; Li, Zheng; Chu, Fulei
2017-09-01
Planetary transmission plays a vital role in wind turbine drivetrains, and its fault diagnosis has been an important and challenging issue. Owing to the complicated and coupled vibration sources, time-variant vibration transfer path, and heavy background noise masking effect, the vibration signal of the planet gear in wind turbine gearboxes exhibits several unique characteristics: complex frequency components, low signal-to-noise ratio, and weak fault features. As a result, the periodic impulsive components induced by a localized defect are hard to extract, and fault detection of the planet gear in wind turbines remains a challenging research problem. Aiming to extract the fault feature of the planet gear effectively, in this paper we propose a novel feature extraction method based on spectral kurtosis and the time wavelet energy spectrum (SK-TWES). First, the spectral kurtosis (SK) and kurtogram of the raw vibration signals are computed and exploited to select the optimal filtering parameters for the subsequent band-pass filtering. Then, band-pass filtering is applied to extract the periodic transient impulses using the optimal frequency band, in which the corresponding SK value is maximal. Finally, time wavelet energy spectrum analysis is performed on the filtered signal, with the Morlet wavelet selected as the mother wavelet because of its high similarity to the impulsive components. Experimental signals collected from a wind turbine gearbox test rig demonstrate that the proposed method is effective for feature extraction and fault diagnosis of a planet gear with a localized defect.
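A simplified sketch of the band-selection idea: candidate bands are band-pass filtered and ranked by kurtosis as a stand-in for the full kurtogram search, and the envelope of the winning band is returned in place of the Morlet time wavelet energy spectrum. The sampling rate, bands and placeholder signal are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import kurtosis

def best_band_by_kurtosis(x, fs, bands):
    """Pick the band whose band-passed signal has maximum kurtosis
    (a simple stand-in for the kurtogram search), then return that band,
    the filtered signal and its envelope for further analysis."""
    best, best_k, best_xf = None, -np.inf, None
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        xf = filtfilt(b, a, x)
        k = kurtosis(xf)                      # impulsiveness indicator
        if k > best_k:
            best, best_k, best_xf = (lo, hi), k, xf
    envelope = np.abs(hilbert(best_xf))       # transient impulses stand out here
    return best, best_xf, envelope

fs = 20000.0
x = np.random.randn(2**14)                    # placeholder vibration signal
bands = [(500, 1500), (1500, 3000), (3000, 6000), (6000, 9000)]
print(best_band_by_kurtosis(x, fs, bands)[0])
```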
Ultra-long fiber Raman lasers: design considerations
NASA Astrophysics Data System (ADS)
Koltchanov, I.; Kroushkov, D. I.; Richter, A.
2015-03-01
In the frame of the European Marie Curie project GRIFFON [http://astonishgriffon.net/], a green approach in terms of reduced power consumption and maintenance costs is envisioned for long-span fiber networks. This shall be accomplished by coherent transmission in unrepeatered links (100 km - 350 km) utilizing ultra-long Raman fiber laser (URFL)-based distributed amplification, multi-level modulation formats, and adapted digital signal processing (DSP) algorithms. The URFL uses a cascaded second-order pumping scheme in which two (co- and counter-propagating) ~1365 nm pumps illuminate the fiber. The URFL oscillates at ~1450 nm, where amplification is provided by stimulated Raman scattering (SRS) of the ~1365 nm pumps and the optical feedback is realized by two fiber Bragg gratings (FBGs) at the fiber ends reflecting at 1450 nm. The light field at 1450 nm in turn provides amplification for signal waves in the 1550 nm range due to SRS. In this work we present URFL design studies intended to characterize and optimize the power and noise characteristics of the fiber links. We use a bidirectional fiber model describing propagation of the signal, pump and noise powers along the fiber length. From the numerical solution we evaluate the on/off Raman gain and its bandwidth, the signal excursion over the fiber length, OSNR spectra, and the accumulated nonlinearities. To achieve the best performance for these characteristics, the laser design is optimized with respect to the forward/backward pump powers and wavelengths, input/output signal powers, reflectivity profile of the FBGs and other parameters.
Parameter Optimization Of Natural Hydroxyapatite/SS316l Via Metal Injection Molding (MIM)
NASA Astrophysics Data System (ADS)
Mustafa, N.; Ibrahim, M. H. I.; Amin, A. M.; Asmawi, R.
2017-01-01
Metal injection molding (MIM) is well known as a worldwide application of powder injection molding (PIM): it applies the shaping concept and the benefits of plastic injection molding but extends the applications to various high-performance metals and alloys, as well as metal matrix composites and ceramics. This study investigates the green-part strength obtained using a stainless steel 316L/natural hydroxyapatite composite as the feedstock. Stainless steel 316L (SS316L) was mixed with natural hydroxyapatite (NHAP), adding 40 wt.% low-density polyethylene and 60 wt.% palm stearin as the binder system, at a 63 wt.% powder loading consisting of 90 wt.% SS316L and 10 wt.% NHAP, prepared through the critical powder volume percentage (CPVC). The Taguchi method was applied as a tool for determining the optimum green strength for the metal injection molding (MIM) parameters. The green strength was optimized with four significant injection parameters, injection temperature (A), mold temperature (B), pressure (C) and speed (D), selected through a screening process. An L9(3^4) orthogonal array was used. The optimum injection parameters for the highest green strength were established at A1, B2, C0 and D1 and were determined based on the signal-to-noise ratio.
While, Peter T; Teruel, Jose R; Vidić, Igor; Bathen, Tone F; Goa, Pål Erik
2018-06-01
To explore the relationship between relative enhanced diffusivity (RED) and intravoxel incoherent motion (IVIM), as well as the impact of noise and the choice of the intermediate diffusion weighting (b value) on the RED parameter. A mathematical derivation was performed to cast RED in terms of the IVIM parameters. Noise analysis and b value optimization were conducted by using Monte Carlo calculations to generate diffusion-weighted imaging data appropriate to breast and liver tissue at three different signal-to-noise ratios. RED was shown to be approximately linearly proportional to the IVIM parameter f, inversely proportional to D, and to follow an inverse exponential decay with respect to D*. The choice of intermediate b value was shown to be important in minimizing the impact of noise on RED and in maximizing its discriminatory power. RED was shown to be essentially a reparameterization of the IVIM estimates for f and D obtained with three b values. RED imaging in the breast and liver should be performed with intermediate b values of 100 and 50 s/mm2, respectively. Future clinical studies involving RED should also estimate the IVIM parameters f and D using three b values for comparison.
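For context, a sketch of the IVIM bi-exponential signal model from which RED is derived, together with simple three-b-value estimates of D and f; the exact RED formula is not reproduced here, and the parameter values are illustrative breast-like numbers.

```python
import numpy as np

def ivim_signal(b, s0, f, D, D_star):
    """IVIM model: S(b) = S0 * ( f*exp(-b*D*) + (1-f)*exp(-b*D) ),
    with b in s/mm^2, D and D* in mm^2/s, and f the perfusion fraction."""
    return s0 * (f * np.exp(-b * D_star) + (1 - f) * np.exp(-b * D))

# Three-b-value acquisition as discussed (0, intermediate, high), breast-like values
b = np.array([0, 100, 800])
s = ivim_signal(b, s0=1.0, f=0.08, D=1.2e-3, D_star=2.0e-2)

# Simple estimates from the three points (illustrative, not the RED formula itself)
D_est = np.log(s[1] / s[2]) / (b[2] - b[1])      # high-b pair ~ tissue diffusivity
f_est = 1 - s[1] / (s[0] * np.exp(-b[1] * D_est))
print(D_est, f_est)
```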
NASA Astrophysics Data System (ADS)
Knox, H. A.; Draelos, T.; Young, C. J.; Lawry, B.; Chael, E. P.; Faust, A.; Peterson, M. G.
2015-12-01
The quality of automatic detections from seismic sensor networks depends on a large number of data processing parameters that interact in complex ways. The largely manual process of identifying effective parameters is painstaking and does not guarantee that the resulting controls are the optimal configuration settings. Yet, achieving superior automatic detection of seismic events is closely related to these parameters. We present an automated sensor tuning (AST) system that learns near-optimal parameter settings for each event type using neuro-dynamic programming (reinforcement learning) trained with historic data. AST learns to test the raw signal against all event-settings and automatically self-tunes to an emerging event in real-time. The overall goal is to reduce the number of missed legitimate event detections and the number of false event detections. Reducing false alarms early in the seismic pipeline processing will have a significant impact on this goal. Applicable both for existing sensor performance boosting and new sensor deployment, this system provides an important new method to automatically tune complex remote sensing systems. Systems tuned in this way will achieve better performance than is currently possible by manual tuning, and with much less time and effort devoted to the tuning process. With ground truth on detections in seismic waveforms from a network of stations, we show that AST increases the probability of detection while decreasing false alarms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Na, Man Gyun; Oh, Seungrohk
A neuro-fuzzy inference system combined with wavelet denoising, principal component analysis (PCA), and sequential probability ratio test (SPRT) methods has been developed to monitor a relevant sensor using the information from other sensors. The parameters of the neuro-fuzzy inference system that estimates the relevant sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components from the input signals to the neuro-fuzzy system. By reducing the dimension of the input space without losing a significant amount of information, PCA was used to reduce the time necessary to train the neuro-fuzzy system, to simplify the structure of the neuro-fuzzy inference system, and to ease the selection of the input signals to the neuro-fuzzy system. Using the residual signals between the estimated and measured signals, the SPRT is applied to detect whether the sensors are degraded. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, pressurizer pressure, and hot-leg temperature sensors in pressurized water reactors.
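A minimal sketch of an SPRT on Gaussian residuals for a mean-shift (degradation) hypothesis, with Wald's threshold approximations; the residuals, noise level and shift size are illustrative and this is not the cited implementation.

```python
import numpy as np

def sprt_mean_shift(residuals, sigma, m1, alpha=0.01, beta=0.01):
    """Sequential probability ratio test on residuals r_k ~ N(0, sigma^2) under H0
    versus N(m1, sigma^2) under H1 (degraded sensor). Returns (decision index, verdict)."""
    A = np.log((1 - beta) / alpha)         # accept H1 (degradation) above this
    B = np.log(beta / (1 - alpha))         # accept H0 (healthy) below this
    llr = 0.0
    for k, r in enumerate(residuals):
        llr += (m1 * r - 0.5 * m1 ** 2) / sigma ** 2   # Gaussian log-likelihood ratio increment
        if llr >= A:
            return k, "degraded"
        if llr <= B:
            return k, "healthy"
    return -1, "undecided"

rng = np.random.default_rng(2)
print(sprt_mean_shift(rng.normal(0.5, 1.0, 200), sigma=1.0, m1=0.5))
```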
Nakayama, Tomohiro; Nishie, Akihiro; Yoshiura, Takashi; Asayama, Yoshiki; Ishigami, Kousei; Kakihara, Daisuke; Obara, Makoto; Honda, Hiroshi
2015-12-01
To show the feasibility of motion-sensitized driven-equilibrium balanced magnetic resonance cholangiopancreatography and to determine the optimal velocity encoding (VENC) value. Sixteen healthy volunteers underwent an MRI study using a 1.5-T clinical unit and a 32-channel body array coil. For each volunteer, images were obtained using the following seven respiratory-triggered sequences: (1) balanced magnetic resonance cholangiopancreatography without motion-sensitized driven-equilibrium, and (2)-(7) balanced magnetic resonance cholangiopancreatography with motion-sensitized driven-equilibrium, with VENC = 1, 3, 5, 7, 9 and ∞ cm/s for the x-, y-, and z-directions, respectively. Quantitative evaluation was performed by measuring the maximum signal intensity of the common hepatic duct, the portal vein, liver tissue including visible peripheral vessels, and liver tissue excluding visible peripheral vessels. We compared the contrast ratios of portal vein/common hepatic duct, liver tissue including visible peripheral vessels/common hepatic duct, and liver tissue excluding visible peripheral vessels/common hepatic duct among the five finite sequences (VENC = 1, 3, 5, 7, and 9 cm/s). Statistical comparisons were performed using the t-test for paired data with the Bonferroni correction. Suppression of blood vessel signals was achieved with the motion-sensitized driven-equilibrium sequences. We found the optimal VENC values to be either 3 or 5 cm/s, giving the best suppression of vessel signals relative to the bile ducts. At a lower VENC value (1 cm/s), the bile duct signal was reduced, presumably due to minimal biliary flow. The feasibility of motion-sensitized driven-equilibrium balanced magnetic resonance cholangiopancreatography was suggested, and the optimal VENC value was considered to be either 3 or 5 cm/s. The clinical usefulness of this new magnetic resonance cholangiopancreatography sequence needs to be verified by further studies. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Kesai; Gao, Jie; Ju, Xiaodong; Zhu, Jun; Xiong, Yanchun; Liu, Shuai
2018-05-01
This paper proposes a new tool design for ultra-deep azimuthal electromagnetic (EM) resistivity logging while drilling (LWD) for deeper geosteering and formation evaluation, which can benefit hydrocarbon exploration and development. First, a forward numerical simulation of azimuthal EM resistivity LWD is created based on the fast Hankel transform (FHT) method, and its accuracy is confirmed under classic formation conditions. Then, a reasonable range of tool parameters is designed by analyzing the logging response. However, modern technological limitations pose challenges to selecting appropriate tool parameters for ultra-deep azimuthal detection under detectable signal conditions. Therefore, this paper uses grey relational analysis (GRA) to quantify the influence of the tool parameters on voltage and azimuthal investigation depth. After analyzing thousands of simulation results under different environmental conditions, a random forest is used to fit the data and identify an optimal combination of tool parameters, owing to its high efficiency and accuracy. Finally, the structure of the ultra-deep azimuthal EM resistivity LWD tool is designed, with a theoretical azimuthal investigation depth of 27.42-29.89 m in classic isotropic and anisotropic formations. This design serves as a reliable theoretical foundation for efficient geosteering and formation evaluation in high-angle and horizontal (HA/HZ) wells in the future.
NASA Astrophysics Data System (ADS)
Schönert, Stefan; Lasserre, Thierry; Oberauer, Lothar
2003-03-01
In the forthcoming months, the KamLAND experiment will probe the parameter space of the solar large mixing angle MSW solution as the origin of the solar neutrino deficit with ν̄e's from distant nuclear reactors. If, however, the solution realized in nature is such that Δm²sol ≳ 2×10⁻⁴ eV² (hereafter named the HLMA region), KamLAND will only observe a rate suppression but no spectral distortion, and hence it will not have the optimal sensitivity to measure the mixing parameters. In this case, we propose a new medium-baseline reactor experiment located at Heilbronn (Germany) to pin down the precise value of the solar mixing parameters. In this paper, we present the Heilbronn detector site, and we calculate the ν̄e interaction rate and the positron spectrum expected from the surrounding nuclear power plants. We also discuss the sensitivity of such an experiment to |Ue3| in both normal and inverted neutrino mass hierarchy scenarios. We then outline the detector design, estimate background signals induced by natural radioactivity as well as by in situ cosmic ray muon interactions, and discuss a strategy to detect the anti-neutrino signal 'free of background'.
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowder, Jeff; Cornish, Neil J.; Reddinger, J. Lucas
This work presents the first application of the method of genetic algorithms (GAs) to data analysis for the Laser Interferometer Space Antenna (LISA). In the low frequency regime of the LISA band there are expected to be tens of thousands of galactic binary systems that will be emitting gravitational waves detectable by LISA. The challenge of parameter extraction of such a large number of sources in the LISA data stream requires a search method that can efficiently explore the large parameter spaces involved. As signals of many of these sources will overlap, a global search method is desired. GAs represent such a global search method for parameter extraction of multiple overlapping sources in the LISA data stream. We find that GAs are able to correctly extract source parameters for overlapping sources. Several optimizations of a basic GA are presented with results derived from applications of the GA searches to simulated LISA data.
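The global search strategy described above can be illustrated with a minimal genetic algorithm. The Python sketch below recovers the frequencies of two overlapping sinusoids from noisy data as a toy stand-in for the LISA source-extraction problem; the fitness function, population size and mutation scale are arbitrary choices, not the authors' implementation.

```python
# Toy GA: fit the frequencies of two overlapping sinusoidal "sources" in noise.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 2000)
true_f = np.array([1.3, 1.7])
data = sum(np.sin(2 * np.pi * f * t) for f in true_f) + 0.3 * rng.normal(size=t.size)

def fitness(freqs):
    model = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    return -np.mean((data - model) ** 2)        # higher is better

pop = rng.uniform(0.5, 3.0, size=(60, 2))       # population of [f1, f2] candidates
for generation in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]        # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(2) < 0.5, a, b)     # uniform crossover
        child += 0.02 * rng.normal(size=2)              # mutation
        children.append(np.clip(child, 0.5, 3.0))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("recovered frequencies:", np.sort(best), "true:", true_f)
```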
Resolution improvement in positron emission tomography using anatomical Magnetic Resonance Imaging.
Chu, Yong; Su, Min-Ying; Mandelkern, Mark; Nalcioglu, Orhan
2006-08-01
An ideal imaging system should provide information with high sensitivity and high spatial and temporal resolution. Unfortunately, it is not possible to satisfy all of these desired features in a single modality. In this paper, we discuss methods to improve the spatial resolution in positron emission tomography (PET) using a priori information from Magnetic Resonance Imaging (MRI). Our approach uses an image restoration algorithm based on the maximization of mutual information (MMI), which has found significant success for optimizing multimodal image registration. The MMI criterion is used to estimate the parameters in the Sharpness-Constrained Wiener filter. The generated filter is then applied to restore PET images of a realistic digital brain phantom. The resulting restored images show improved resolution and better signal-to-noise ratio compared to the interpolated PET images. We conclude that a Sharpness-Constrained Wiener filter having parameters optimized from an MMI criterion may be useful for restoring spatial resolution in PET based on a priori information from correlated MRI.
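The restoration step can be sketched with a parametric Wiener-type filter applied in the Fourier domain. In the Python example below, the MMI-based parameter estimation of the paper is replaced by a hand-picked regularization constant and the PSF is assumed Gaussian, so this is only a conceptual illustration, not the authors' filter.

```python
# Parametric Wiener-style restoration of a blurred image in the Fourier domain.
import numpy as np

def wiener_restore(blurred, psf, reg=1e-2):
    """Restore `blurred` given a centered point-spread function `psf` (same shape)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    B = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + reg)     # regularized inverse filter
    return np.real(np.fft.ifft2(W * B))

# toy example: blur a two-point pattern with a Gaussian PSF, then restore it
n = 128
y, x = np.mgrid[:n, :n]
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * 4.0 ** 2))
psf /= psf.sum()
truth = np.zeros((n, n)); truth[40, 40] = truth[80, 90] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_restore(blurred, psf, reg=1e-3)
print("peak of blurred image:", blurred.max(), "peak after restoration:", restored.max())
```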
Optimizing binary phase and amplitude filters for PCE, SNR, and discrimination
NASA Technical Reports Server (NTRS)
Downie, John D.
1992-01-01
Binary phase-only filters (BPOFs) have generated much study because of their implementation on currently available spatial light modulator devices. On polarization-rotating devices such as the magneto-optic spatial light modulator (SLM), it is also possible to encode binary amplitude information into two SLM transmission states, in addition to the binary phase information. This is done by varying the rotation angle of the polarization analyzer following the SLM in the optical train. Through this parameter, a continuum of filters may be designed that span the space of binary phase and amplitude filters (BPAFs) between BPOFs and binary amplitude filters. In this study, we investigate the design of optimal BPAFs for the key correlation characteristics of peak sharpness (through the peak-to-correlation energy (PCE) metric), signal-to-noise ratio (SNR), and discrimination between in-class and out-of-class images. We present simulation results illustrating improvements obtained over conventional BPOFs, and trade-offs between the different performance criteria in terms of the filter design parameter.
21SSD: a public data base of simulated 21-cm signals from the epoch of reionization
NASA Astrophysics Data System (ADS)
Semelin, B.; Eames, E.; Bolgar, F.; Caillat, M.
2017-12-01
The 21-cm signal from the epoch of reionization (EoR) is expected to be detected in the next few years, either with existing instruments or by the upcoming SKA and HERA projects. In this context, there is a pressing need for publicly available high-quality templates covering a wide range of possible signals. These are needed both for end-to-end simulations of the upcoming instruments and to develop signal analysis methods. We present such a set of templates, publicly available for download at 21ssd.obspm.fr. The data base contains 21-cm brightness temperature lightcones at high and low resolution, and several derived statistical quantities for 45 models spanning our choice of 3D parameter space. These data are the result of fully coupled radiative hydrodynamic high-resolution (1024³) simulations performed with the LICORICE code. Both X-ray and Lyman line transfer are performed to account for heating and Wouthuysen-Field coupling fluctuations. We also present a first exploitation of the data using the power spectrum and the pixel distribution function (PDF) computed from lightcone data. We analyse how these two quantities behave when varying the model parameters while taking into account the thermal noise expected of a typical SKA survey. Finally, we show that the noiseless power spectrum and PDF have different - and somewhat complementary - abilities to distinguish between different models. This preliminary result will have to be expanded to the case including thermal noise. This type of result opens the door to formulating an optimal sampling of the parameter space, dependent on the chosen diagnostics.
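As an illustration of one of the derived statistics mentioned above, the following Python snippet computes a spherically averaged power spectrum from a toy brightness-temperature cube; the cube contents, box size and binning are placeholders rather than LICORICE outputs.

```python
# Spherically averaged power spectrum of a toy 3D brightness-temperature cube.
import numpy as np

def spherical_power_spectrum(cube, box_size_mpc, n_bins=20):
    n = cube.shape[0]
    delta = cube - cube.mean()
    pk3d = np.abs(np.fft.fftn(delta)) ** 2 * (box_size_mpc / n) ** 3 / n ** 3
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size_mpc / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2).ravel()
    bins = np.linspace(kmag[kmag > 0].min(), kmag.max(), n_bins + 1)
    which = np.digitize(kmag, bins)
    flat = pk3d.ravel()
    pk = []
    for i in range(1, n_bins + 1):              # average |delta_k|^2 in each k shell
        vals = flat[which == i]
        pk.append(vals.mean() if vals.size else 0.0)
    return 0.5 * (bins[1:] + bins[:-1]), np.array(pk)

rng = np.random.default_rng(2)
cube = rng.normal(size=(64, 64, 64))            # stand-in for a lightcone slice
k_centers, pk = spherical_power_spectrum(cube, box_size_mpc=200.0)
print(pk[:5])
```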
Results and Error Estimates from GRACE Forward Modeling over Antarctica
NASA Astrophysics Data System (ADS)
Bonin, Jennifer; Chambers, Don
2013-04-01
Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Antarctica. However when tested previously, the least squares technique has required constraints in the form of added process noise in order to be reliable. Poor choice of local basin layout has also adversely affected results, as has the choice of spatial smoothing used with GRACE. To develop design parameters which will result in correct high-resolution mass detection and to estimate the systematic errors of the method over Antarctica, we use a "truth" simulation of the Antarctic signal. We apply the optimal parameters found from the simulation to RL05 GRACE data across Antarctica and the surrounding ocean. We particularly focus on separating the Antarctic peninsula's mass signal from that of the rest of western Antarctica. Additionally, we characterize how well the technique works for removing land leakage signal from the nearby ocean, particularly that near the Drake Passage.
Takino, Masahiko; Daishima, Shigeki; Nakahara, Taketoshi
2003-01-01
This paper describes a comparison between atmospheric pressure chemical ionization (APCI) and the recently introduced atmospheric pressure photoionization (APPI) technique for the liquid chromatography/mass spectrometric (LC/MS) determination of patulin in clear apple juice. A column switching technique for on-line extraction of clear apple juice was developed. The parameters investigated for the optimization of APPI were the ion source parameters fragmentor voltage, capillary voltage, and vaporizer temperature, and also mobile phase composition and flow rate. Furthermore, chemical noise and signal suppression of analyte signals due to sample matrix interference were investigated for both APCI and APPI. The results indicated that APPI provides lower chemical noise and signal suppression in comparison with APCI. The linear range for patulin in apple juice (correlation coefficient >0.999) was 0.2-100 ng mL(-1). Mean recoveries of patulin in three apple juices ranged from 94.5 to 103.2%, and the limit of detection (S/N = 3), repeatability and reproducibility were 1.03-1.50 ng mL(-1), 3.9-5.1% and 7.3-8.2%, respectively. The total analysis time was 10.0 min. Copyright 2003 John Wiley & Sons, Ltd.
Inhomogeneous ensembles of radical pairs in chemical compasses
Procopio, Maria; Ritz, Thorsten
2016-01-01
The biophysical basis for the ability of animals to detect the geomagnetic field and to use it for finding directions remains a mystery of sensory biology. One much debated hypothesis suggests that an ensemble of specialized light-induced radical pair reactions can provide the primary signal for a magnetic compass sensor. The question arises what features of such a radical pair ensemble could be optimized by evolution so as to improve the detection of the direction of weak magnetic fields. Here, we focus on the overlooked aspect of the noise arising from inhomogeneity of copies of biomolecules in a realistic biological environment. Such inhomogeneity leads to variations of the radical pair parameters, thereby deteriorating the signal arising from an ensemble and providing a source of noise. We investigate the effect of variations in hyperfine interactions between different copies of simple radical pairs on the directional response of a compass system. We find that the choice of radical pair parameters greatly influences how strongly the directional response of an ensemble is affected by inhomogeneity. PMID:27804956
Precision analysis of the photomultiplier response to ultra low signals
NASA Astrophysics Data System (ADS)
Degtiarenko, Pavel
2017-11-01
A new computational model for the description of the photon detector response functions measured in conditions of low light is presented, together with examples of the observed photomultiplier signal amplitude distributions, successfully described using the parameterized model equation. In extension to the previously known approximations, the new model describes the underlying discrete statistical behavior of the photoelectron cascade multiplication processes in photon detectors with complex non-uniform gain structure of the first dynode. Important features of the model include the ability to represent the true single-photoelectron spectra from different photomultipliers with a variety of parameterized shapes, reflecting the variability in the design and in the individual parameters of the detectors. The new software tool is available for evaluation of the detectors' performance, response, and efficiency parameters that may be used in various applications including the ultra low background experiments such as the searches for Dark Matter and rare decays, underground neutrino studies, optimizing operations of the Cherenkov light detectors, help in the detector selection procedures, and in the experiment simulations.
Inhomogeneous ensembles of radical pairs in chemical compasses
NASA Astrophysics Data System (ADS)
Procopio, Maria; Ritz, Thorsten
2016-11-01
The biophysical basis for the ability of animals to detect the geomagnetic field and to use it for finding directions remains a mystery of sensory biology. One much debated hypothesis suggests that an ensemble of specialized light-induced radical pair reactions can provide the primary signal for a magnetic compass sensor. The question arises what features of such a radical pair ensemble could be optimized by evolution so as to improve the detection of the direction of weak magnetic fields. Here, we focus on the overlooked aspect of the noise arising from inhomogeneity of copies of biomolecules in a realistic biological environment. Such inhomogeneity leads to variations of the radical pair parameters, thereby deteriorating the signal arising from an ensemble and providing a source of noise. We investigate the effect of variations in hyperfine interactions between different copies of simple radical pairs on the directional response of a compass system. We find that the choice of radical pair parameters greatly influences how strongly the directional response of an ensemble is affected by inhomogeneity.
Investigation of schedules for traffic signal timing optimization.
DOT National Transportation Integrated Search
2005-01-01
Traffic signal optimization is recognized as one of the most cost-effective ways to improve urban mobility; however the extent of the benefits realized could significantly depend on how often traffic signal re-optimization occurs. Using a case study ...
Feasibility of ASL spinal bone marrow perfusion imaging with optimized inversion time.
Xing, Dong; Zha, Yunfei; Yan, Liyong; Wang, Kejun; Gong, Wei; Lin, Hui
2015-11-01
To assess the correlation between flow-sensitive alternating inversion recovery (FAIR) and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in the measurement of spinal bone marrow (SBM) perfusion and, in addition, to assess for an optimized inversion time (TI) as well as the reproducibility of SBM FAIR perfusion. The TI optimization of the FAIR SBM perfusion experiment was carried out on 14 volunteers; two adjacent vertebral bodies were selected from each volunteer to measure the change of signal intensity (ΔM) and the signal-to-noise ratio (SNR) of FAIR perfusion MRI with five different TIs. Then, reproducibility of FAIR data from 10 volunteers was assessed by repositioned SBM FAIR experiments. Finally, FAIR and DCE-MRI were performed on 27 subjects. The correlation between the blood flow on FAIR (BFASL) and perfusion-related parameters on DCE-MRI was evaluated. The maximum values of ΔM and SNR were 36.39 ± 12.53 and 2.38 ± 0.97, respectively; both were obtained when TI was near 1200 msec. There was no significant difference between the two successive measurements of SBM BFASL perfusion (P = 0.879), and the within-subject coefficient of variation (wCV) of the measurements was 3.28%. The BFASL showed a close correlation with K(trans) (P < 0.001) and Kep (P = 0.004), and no correlation with Ve (P = 0.082) was found. 1200 msec was the optimal TI for the SBM ASL perfusion image, which led to the maximum ΔM and a good quality perfusion image. The SBM FAIR perfusion scan protocol has good reproducibility, and as blood flow measurement on FAIR is reliable and closely related with the parameters on DCE-MRI, FAIR is feasible for measuring SBM blood flow. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Bertke, Maik; Hamdana, Gerry; Wu, Wenze; Suryo Wasisto, Hutomo; Uhde, Erik; Peiner, Erwin
2017-06-01
In this paper, the asymmetric resonance frequency (f0) responses of thermally in-plane excited silicon cantilevers for a pocket-sized, cantilever-based airborne nanoparticle detector (Cantor) are analysed. By measuring the shift of f0 caused by the deposition of nanoparticles (NPs), the cantilevers are used as a microbalance. The cantilever sensors are manufactured at low cost from silicon by bulk-micromachining techniques and contain an integrated p-type heating actuator and a sensing piezoresistive Wheatstone bridge. f0 is tracked by a homemade phase-locked loop (PLL) for real-time measurements. To optimize the sensor performance, a new cantilever geometry was designed, fabricated and characterized by its frequency responses. The most significant characterisation parameters for our application are f0 and the quality factor (Q), which strongly influence the sensitivity and efficiency of the NP detector. Regarding the asymmetric resonance signal, a novel fitting function based on the Fano resonance, replacing the conventionally used function of the simple harmonic oscillator, and a method to calculate Q from its fitting parameters were developed for a quantitative evaluation. To obtain a better understanding of the resonance behaviours, we analysed the origin of the asymmetric line shapes. Therefore, we compared the frequency response of the on-chip thermal excitation with an external excitation using an in-plane piezo actuator. In correspondence with the Fano effect, we could reconstruct the measured resonance curves by coupling two signals with constant amplitude and the expected signal of the cantilever, respectively. Moreover, the phase of the measurement signal can be analysed by this method, which is important for understanding the locking process of the PLL circuit. Besides the frequency analysis, experimental results and calibration measurements with different particle types are presented. Using the described analysis method, results suitable for optimizing the next generation of Cantor are expected.
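The Fano-based evaluation can be illustrated with a minimal Python sketch that fits a Fano line shape to a synthetic asymmetric resonance and estimates Q from the fitted center frequency and width. The functional form, parameter values and the Q estimate used here are assumptions of the sketch, not the exact formulation of the paper.

```python
# Fit an asymmetric resonance with a Fano line shape and derive a rough Q value.
import numpy as np
from scipy.optimize import curve_fit

def fano(f, f0, gamma, q, amp, offset):
    eps = 2.0 * (f - f0) / gamma                # reduced detuning
    return amp * (q + eps) ** 2 / (1.0 + eps ** 2) + offset

rng = np.random.default_rng(3)
f = np.linspace(199e3, 201e3, 400)              # Hz, assumed frequency scan
true = dict(f0=200e3, gamma=150.0, q=3.0, amp=1.0, offset=0.1)
data = fano(f, **true) + 0.2 * rng.normal(size=f.size)

p0 = [199.9e3, 200.0, 2.0, 0.8, 0.0]
popt, _ = curve_fit(fano, f, data, p0=p0)
f0_fit, gamma_fit = popt[0], abs(popt[1])
print(f"f0 = {f0_fit:.1f} Hz, width = {gamma_fit:.1f} Hz, Q ~ {f0_fit / gamma_fit:.0f}")
```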
Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming
2016-10-17
Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posedness of the problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
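A rough sense of this kind of regularized reconstruction can be given by the sketch below, which replaces the TD-RTE forward model with a simple linear blur, writes the GGMRF term as a generalized-Gaussian penalty on neighbouring differences, and uses SciPy's SLSQP as a stand-in for the SQP solver; all model details are illustrative assumptions, not the authors' implementation.

```python
# Conceptual sketch: regularized parameter reconstruction with an SLSQP solver.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 40
truth = np.zeros(n); truth[10:20] = 1.0; truth[25:30] = 0.5    # "optical parameters"
A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)                              # toy forward operator
meas = A @ truth + 0.01 * rng.normal(size=n)

p, lam = 1.2, 0.05                                             # GGMRF-like exponent and weight
def objective(x):
    resid = A @ x - meas
    reg = np.sum(np.abs(np.diff(x)) ** p)                      # neighbour-difference penalty
    return np.sum(resid ** 2) + lam * reg

res = minimize(objective, x0=np.zeros(n), method="SLSQP",
               bounds=[(0.0, 2.0)] * n, options={"maxiter": 300})
print("relative reconstruction error:",
      np.linalg.norm(res.x - truth) / np.linalg.norm(truth))
```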
NASA Technical Reports Server (NTRS)
Drake, R. L.; Duvoisin, P. F.; Asthana, A.; Mather, T. W.
1971-01-01
High speed automated identification and design of dynamic systems, both linear and nonlinear, are discussed. Special emphasis is placed on developing hardware and techniques which are applicable to practical problems. The basic modeling experiment and new results are described. Using the improvements developed, successful identification of several systems, including a physical example as well as simulated systems, was obtained. The advantages of parameter signature analysis over signal signature analysis in go-no go testing of operational systems were demonstrated. The feasibility of using these ideas in failure mode prediction in operating systems was also investigated. An improved digitally controlled nonlinear function generator was developed, debugged, and completely documented.
Wang, Z C; Zhong, X Y; Jin, L; Chen, X F; Moritomo, Y; Mayer, J
2017-05-01
Electron energy-loss magnetic chiral dichroism (EMCD) spectroscopy, which is similar to the well-established X-ray magnetic circular dichroism spectroscopy (XMCD), can determine the quantitative magnetic parameters of materials with high spatial resolution. One of the major obstacles in quantitative analysis using the EMCD technique is the relatively poor signal-to-noise ratio (SNR), compared to XMCD. Here, in the example of a double perovskite Sr2FeMoO6, we predicted the optimal dynamical diffraction conditions such as sample thickness, crystallographic orientation and detection aperture position by theoretical simulations. By using the optimized conditions, we showed that the SNR of experimental EMCD spectra can be significantly improved and the error of quantitative magnetic parameter determined by EMCD technique can be remarkably lowered. Our results demonstrate that, with enhanced SNR, the EMCD technique can be a unique tool to understand the structure-property relationship of magnetic materials particularly in the high-density magnetic recording and spintronic devices by quantitatively determining magnetic structure and properties at the nanometer scale. Copyright © 2017 Elsevier B.V. All rights reserved.
Li, Ke; Chen, Peng
2011-01-01
Structural faults, such as unbalance, misalignment and looseness, etc., often occur in the shafts of rotating machinery. These faults may cause serious machine accidents and lead to great production losses. This paper proposes an intelligent method for diagnosing structural faults of rotating machinery using ant colony optimization (ACO) and relative ratio symptom parameters (RRSPs) in order to detect faults and distinguish fault types at an early stage. New symptom parameters called "relative ratio symptom parameters" are defined for reflecting the features of vibration signals measured in each state. Synthetic detection index (SDI) using statistical theory has also been defined to evaluate the applicability of the RRSPs. The SDI can be used to indicate the fitness of a RRSP for ACO. Lastly, this paper also compares the proposed method with the conventional neural networks (NN) method. Practical examples of fault diagnosis for a centrifugal fan are provided to verify the effectiveness of the proposed method. The verification results show that the structural faults often occurring in the centrifugal fan, such as unbalance, misalignment and looseness states are effectively identified by the proposed method, while these faults are difficult to detect using conventional neural networks.
Picking ChIP-seq peak detectors for analyzing chromatin modification experiments
Micsinai, Mariann; Parisi, Fabio; Strino, Francesco; Asp, Patrik; Dynlacht, Brian D.; Kluger, Yuval
2012-01-01
Numerous algorithms have been developed to analyze ChIP-Seq data. However, the complexity of analyzing diverse patterns of ChIP-Seq signals, especially for epigenetic marks, still calls for the development of new algorithms and objective comparisons of existing methods. We developed Qeseq, an algorithm to detect regions of increased ChIP read density relative to background. Qeseq employs critical novel elements, such as iterative recalibration and neighbor joining of reads to identify enriched regions of any length. To objectively assess its performance relative to other 14 ChIP-Seq peak finders, we designed a novel protocol based on Validation Discriminant Analysis (VDA) to optimally select validation sites and generated two validation datasets, which are the most comprehensive to date for algorithmic benchmarking of key epigenetic marks. In addition, we systematically explored a total of 315 diverse parameter configurations from these algorithms and found that typically optimal parameters in one dataset do not generalize to other datasets. Nevertheless, default parameters show the most stable performance, suggesting that they should be used. This study also provides a reproducible and generalizable methodology for unbiased comparative analysis of high-throughput sequencing tools that can facilitate future algorithmic development. PMID:22307239
Picking ChIP-seq peak detectors for analyzing chromatin modification experiments.
Micsinai, Mariann; Parisi, Fabio; Strino, Francesco; Asp, Patrik; Dynlacht, Brian D; Kluger, Yuval
2012-05-01
Numerous algorithms have been developed to analyze ChIP-Seq data. However, the complexity of analyzing diverse patterns of ChIP-Seq signals, especially for epigenetic marks, still calls for the development of new algorithms and objective comparisons of existing methods. We developed Qeseq, an algorithm to detect regions of increased ChIP read density relative to background. Qeseq employs critical novel elements, such as iterative recalibration and neighbor joining of reads to identify enriched regions of any length. To objectively assess its performance relative to other 14 ChIP-Seq peak finders, we designed a novel protocol based on Validation Discriminant Analysis (VDA) to optimally select validation sites and generated two validation datasets, which are the most comprehensive to date for algorithmic benchmarking of key epigenetic marks. In addition, we systematically explored a total of 315 diverse parameter configurations from these algorithms and found that typically optimal parameters in one dataset do not generalize to other datasets. Nevertheless, default parameters show the most stable performance, suggesting that they should be used. This study also provides a reproducible and generalizable methodology for unbiased comparative analysis of high-throughput sequencing tools that can facilitate future algorithmic development.
Lommen, Jonathan M; Flassbeck, Sebastian; Behl, Nicolas G R; Niesporek, Sebastian; Bachert, Peter; Ladd, Mark E; Nagel, Armin M
2018-08-01
To investigate and to reduce influences on the determination of the short and long apparent transverse relaxation times (T2,s*, T2,l*) of 23Na in vivo with respect to signal sampling. The accuracy of T2* determination was analyzed in simulations for five different sampling schemes. The influence of noise in the parameter fit was investigated for three different models. A dedicated sampling scheme was developed for brain parenchyma by numerically optimizing the parameter estimation. This scheme was compared in vivo to linear sampling at 7T. For the considered sampling schemes, T2,s*/T2,l* exhibit an average bias of 3%/4% with a variation of 25%/15% based on simulations with previously published T2* values. The accuracy could be improved with the optimized sampling scheme by strongly averaging the earliest sample. A fitting model with a constant noise floor can increase accuracy, while additional fitting of a noise term is only beneficial in the case of sampling until late echo times > 80 ms. T2* values in white matter were determined to be T2,s* = 5.1 ± 0.8 / 4.2 ± 0.4 ms and T2,l* = 35.7 ± 2.4 / 34.4 ± 1.5 ms using linear/optimized sampling. Voxel-wise T2* determination of 23Na is feasible in vivo. However, sampling and fitting methods have to be chosen carefully to retrieve accurate results. Magn Reson Med 80:571-584, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
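The fitting model with a constant noise floor can be illustrated with the short Python sketch below, which fits a simulated bi-exponential 23Na decay; the echo-time sampling, compartment fractions and noise level are placeholders, not the in vivo protocol values.

```python
# Bi-exponential T2* fit with a constant noise-floor term on simulated data.
import numpy as np
from scipy.optimize import curve_fit

def biexp_floor(te, a, frac_short, t2s, t2l, floor):
    return a * (frac_short * np.exp(-te / t2s)
                + (1.0 - frac_short) * np.exp(-te / t2l)) + floor

rng = np.random.default_rng(5)
te = np.linspace(0.3, 80.0, 32)                      # ms, toy sampling scheme
true = (100.0, 0.6, 5.0, 35.0, 2.0)
signal = biexp_floor(te, *true) + rng.normal(scale=1.5, size=te.size)

p0 = (80.0, 0.5, 3.0, 30.0, 1.0)
bounds = ([0, 0, 0.5, 10, 0], [1e3, 1, 10, 80, 20])
popt, _ = curve_fit(biexp_floor, te, signal, p0=p0, bounds=bounds)
print(f"T2*_short = {popt[2]:.1f} ms, T2*_long = {popt[3]:.1f} ms, floor = {popt[4]:.2f}")
```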
The requirements for low-temperature plasma ionization support miniaturization of the ion source.
Kiontke, Andreas; Holzer, Frank; Belder, Detlev; Birkemeyer, Claudia
2018-06-01
Ambient ionization mass spectrometry (AI-MS), the ionization of samples under ambient conditions, enables fast and simple analysis of samples without or with little sample preparation. Due to their simple construction and low resource consumption, plasma-based ionization methods in particular are considered ideal for use in mobile analytical devices. However, systematic investigations that have attempted to identify the optimal configuration of a plasma source to achieve the sensitive detection of target molecules are still rare. We therefore used a low-temperature plasma ionization (LTPI) source based on dielectric barrier discharge with helium employed as the process gas to identify the factors that most strongly influence the signal intensity in the mass spectrometry of species formed by plasma ionization. In this study, we investigated several construction-related parameters of the plasma source and found that a low wall thickness of the dielectric, a small outlet spacing, and a short distance between the plasma source and the MS inlet are needed to achieve optimal signal intensity with a process-gas flow rate of as little as 10 mL/min. In conclusion, this type of ion source is especially well suited for downscaling, which is usually required in mobile devices. Our results provide valuable insights into the LTPI mechanism; they reveal the potential to further improve its implementation and standardization for mobile mass spectrometry as well as our understanding of the requirements and selectivity of this technique. Graphical abstract Optimized parameters of a dielectric barrier discharge plasma for ionization in mass spectrometry. The electrode size, shape, and arrangement, the thickness of the dielectric, and distances between the plasma source, sample, and MS inlet are marked in red. The process gas (helium) flow is shown in black.
Signal timing on a shoestring.
DOT National Transportation Integrated Search
2005-03-01
The conventional approach to signal timing optimization and field deployment requires current traffic flow data, experience with optimization models, familiarity with the signal controller hardware, and knowledge of field operations including signal ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Hany F; El Hariri, Mohamad; Elsayed, Ahmed
Microgrids' adaptive protection techniques rely on communication signals from the point of common coupling to adjust the corresponding relays' settings for either grid-connected or islanded modes of operation. However, during communication outages or in the event of a cyberattack, relay settings are not changed. Thus adaptive protection schemes are rendered unsuccessful. Due to their fast response, supercapacitors, which are present in the microgrid to feed pulse loads, could also be utilized to enhance the resiliency of adaptive protection schemes to communication outages. Proper sizing of the supercapacitors is therefore important in order to maintain a stable system operation and also regulate the protection scheme's cost. This paper presents a two-level optimization scheme for minimizing the supercapacitor size along with optimizing its controllers' parameters. The latter will lead to a reduction of the supercapacitor fault current contribution and an increase in that of other AC resources in the microgrid in the extreme case of having a fault occurring simultaneously with a pulse load. It was also shown that the size of the supercapacitor can be reduced if the pulse load is temporarily disconnected during the transient fault period. Simulations showed that the resulting supercapacitor size and the optimized controller parameters from the proposed two-level optimization scheme were feeding enough fault currents for different types of faults and minimizing the cost of the protection scheme.
NASA Astrophysics Data System (ADS)
Steckiewicz, Adam; Butrylo, Boguslaw
2017-08-01
In this paper we discuss the results of a multi-criteria optimization scheme as well as numerical calculations of periodic conductive structures with selected geometry. Thin printed structures embedded on a flexible dielectric substrate may be applied as simple, cheap, passive low-pass filters with an adjustable cutoff frequency in the low (up to 1 MHz) radio frequency range. The analysis of electromagnetic phenomena in the presented structures was realized on the basis of a three-dimensional numerical model of three proposed geometries of periodic elements. The finite element method (FEM) was used to obtain a solution of the electromagnetic harmonic field. Equivalent lumped electrical parameters of printed cells obtained in this manner determine the shape of the amplitude transmission characteristic of a low-pass filter. The nonlinear influence of the printed cell geometry on the equivalent parameters of the cell's electric model makes it difficult to find the desired optimal solution. Therefore, the problem of estimating the optimal cell geometry, with regard to approximating the desired amplitude transmission characteristic with an adjusted cutoff frequency, was solved by the particle swarm optimization (PSO) algorithm. A dynamically adjusted inertia factor was also introduced into the algorithm to improve convergence to the global extremum of the multimodal objective function. Numerical results as well as PSO simulation results were characterized in terms of the approximation accuracy of predefined amplitude characteristics in the pass-band, stop-band and at the cutoff frequency. Three geometries of varying degrees of complexity were considered and their use in signal processing systems was evaluated.
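The optimization loop can be illustrated with a compact particle swarm using a linearly decreasing inertia weight. In the sketch below the FEM-derived cell model is replaced by a first-order RC low-pass stand-in, so the parameter ranges and target characteristic are assumptions for illustration only.

```python
# PSO with dynamic inertia weight fitting an RC low-pass to a target |H(f)| curve.
import numpy as np

rng = np.random.default_rng(6)
freqs = np.logspace(3, 7, 200)                             # 1 kHz .. 10 MHz
f_cut_target = 1.0e6
target = 1.0 / np.sqrt(1.0 + (freqs / f_cut_target) ** 2)  # desired amplitude characteristic

def cost(params):
    r, c = params                                          # ohms, farads
    fc = 1.0 / (2.0 * np.pi * r * c)
    model = 1.0 / np.sqrt(1.0 + (freqs / fc) ** 2)
    return np.mean((model - target) ** 2)

n_particles, n_iter = 30, 80
lo, hi = np.array([10.0, 1e-12]), np.array([1e4, 1e-6])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy(); pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for it in range(n_iter):
    w = 0.9 - 0.5 * it / n_iter                            # dynamic inertia factor
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best RC pair:", gbest,
      "-> cutoff %.3g Hz" % (1.0 / (2 * np.pi * gbest[0] * gbest[1])))
```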
NASA Astrophysics Data System (ADS)
Gao, Pu; Xiang, Changle; Liu, Hui; Zhou, Han
2018-07-01
Based on a multiple degrees of freedom dynamic model of a vehicle powertrain system, natural vibration analyses and sensitivity analyses of the eigenvalues are performed to determine the key inertia for each natural vibration of a powertrain system. Then, the results are used to optimize the installation position of each adaptive tuned vibration absorber. According to the relationship between the variable frequency torque excitation and the natural vibration of a powertrain system, the entire vibration frequency band is divided into segments, and the auxiliary vibration absorber and dominant vibration absorber are determined for each sensitive frequency band. The optimum parameters of the auxiliary vibration absorber are calculated based on the optimal frequency ratio and the optimal damping ratio of the passive vibration absorber. The instantaneous change state of the natural vibrations of a powertrain system with adaptive tuned vibration absorbers is studied, and the optimized start and stop tuning frequencies of the adaptive tuned vibration absorber are obtained. These frequencies can be translated into the optimum parameters of the dominant vibration absorber. Finally, the optimal tuning scheme for the adaptive tuned vibration absorber group, which can be used to reduce the variable frequency vibrations of a powertrain system, is proposed, and corresponding numerical simulations are performed. The simulation time history signals are transformed into three-dimensional information related to time, frequency and vibration energy via the Hilbert-Huang transform (HHT). A comprehensive time-frequency analysis is then conducted to verify that the optimal tuning scheme for the adaptive tuned vibration absorber group can significantly reduce the variable frequency vibrations of a powertrain system.
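The time-frequency step mentioned above can be illustrated in miniature: the Python sketch below extracts the envelope and instantaneous frequency of a chirp via the Hilbert transform. A full HHT would first decompose the signal into intrinsic mode functions, which is omitted here, so this is only a partial analogue of the analysis described.

```python
# Envelope and instantaneous frequency of a chirp via the analytic signal.
import numpy as np
from scipy.signal import hilbert, chirp

fs = 2000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
x = chirp(t, f0=20.0, f1=120.0, t1=2.0, method="linear")   # frequency-varying "vibration"

analytic = hilbert(x)
envelope = np.abs(analytic)                                # proxy for vibration energy
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2.0 * np.pi)
print("instantaneous frequency range: %.1f to %.1f Hz" % (inst_freq[10], inst_freq[-10]))
```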
Qazi, Abroon Jamal; de Silva, Clarence W.
2014-01-01
This paper uses a quarter model of an automobile having passive and semiactive suspension systems to develop a scheme for an optimal suspension controller. Semi-active suspension is preferred over passive and active suspensions with regard to optimum performance within the constraints of weight and operational cost. A fuzzy logic controller is incorporated into the semi-active suspension system. It is able to handle nonlinearities through the use of heuristic rules. Particle swarm optimization (PSO) is applied to determine the optimal gain parameters for the fuzzy logic controller, while maintaining within the normalized ranges of the controller inputs and output. The performance of resulting optimized system is compared with different systems that use various control algorithms, including a conventional passive system, choice options of feedback signals, and damping coefficient limits. Also, the optimized semi-active suspension system is evaluated for its performance in relation to variation in payload. Furthermore, the systems are compared with respect to the attributes of road handling and ride comfort. In all the simulation studies it is found that the optimized fuzzy logic controller surpasses the other types of control. PMID:24574868
Cellular traction force recovery: An optimal filtering approach in two-dimensional Fourier space.
Huang, Jianyong; Qin, Lei; Peng, Xiaoling; Zhu, Tao; Xiong, Chunyang; Zhang, Youyi; Fang, Jing
2009-08-21
Quantitative estimation of cellular traction has significant physiological and clinical implications. As an inverse problem, traction force recovery is essentially susceptible to noise in the measured displacement data. For the traditional procedure of Fourier transform traction cytometry (FTTC), noise amplification accompanies the force reconstruction, and small tractions cannot be recovered from a displacement field with a low signal-noise ratio (SNR). To improve the FTTC process, we develop an optimal filtering scheme to suppress the noise in the force reconstruction procedure. In the framework of the Wiener filtering theory, four filtering parameters are introduced in two-dimensional Fourier space and their analytical expressions are derived in terms of the minimum-mean-squared-error (MMSE) optimization criterion. The optimal filtering approach is validated with simulations and experimental data associated with the adhesion of a single cardiac myocyte to an elastic substrate. The results indicate that the proposed method can greatly enhance the SNR of the recovered forces to reveal tiny tractions in cell-substrate interaction.
Optimal nonlinear codes for the perception of natural colours.
von der Twer, T; MacLeod, D I
2001-08-01
We discuss how visual nonlinearity can be optimized for the precise representation of environmental inputs. Such optimization leads to neural signals with a compressively nonlinear input-output function, the gradient of which is matched to the cube root of the probability density function (PDF) of the environmental input values (and not to the PDF directly, as in histogram equalization). Comparisons between theory and psychophysical and electrophysiological data are roughly consistent with the idea that parvocellular (P) cells are optimized for the precise representation of colour: their contrast-response functions span a range appropriately matched to the environmental distribution of natural colours along each dimension of colour space. Thus P cell codes for colour may have been selected to minimize error in the perceptual estimation of stimulus parameters for natural colours. But magnocellular (M) cells have a much stronger than expected saturating nonlinearity; this supports the view that the function of M cells is mainly to detect boundaries rather than to specify contrast or lightness.
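The stated matching rule can be reproduced numerically: the optimal response is the normalized cumulative integral of p(x)^(1/3), which the short sketch below compares with histogram equalization for an assumed Gaussian input distribution.

```python
# Optimal compressive nonlinearity: gradient proportional to the cube root of the PDF.
import numpy as np

x = np.linspace(-4.0, 4.0, 2001)
pdf = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)            # assumed input distribution

resp_opt = np.cumsum(pdf ** (1.0 / 3.0)); resp_opt /= resp_opt[-1]      # cube-root rule
resp_histeq = np.cumsum(pdf); resp_histeq /= resp_histeq[-1]            # histogram equalization

# the cube-root rule compresses less aggressively than histogram equalization
print("slope at x=0 (cube-root rule):      ", np.gradient(resp_opt, x)[1000])
print("slope at x=0 (hist. equalization):  ", np.gradient(resp_histeq, x)[1000])
```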
Genç, Nevim; Doğan, Esra Can; Narcı, Ali Oğuzhan; Bican, Emine
2017-05-01
In this study, a multi-response optimization method using Taguchi's robust design approach is proposed for imidacloprid removal by reverse osmosis. Tests were conducted with different membrane type (BW30, LFC-3, CPA-3), transmembrane pressure (TMP = 20, 25, 30 bar), volume reduction factor (VRF = 2, 3, 4), and pH (3, 7, 11). Quality and quantity of permeate are optimized with the multi-response characteristics of the total dissolved solid (TDS), conductivity, imidacloprid, and total organic carbon (TOC) rejection ratios and flux of permeate. The optimized conditions were determined as membrane type of BW30, TMP 30 bar, VRF 3, and pH 11. Under these conditions, TDS, conductivity, imidacloprid, and TOC rejections and permeate flux were 97.50, 97.41, 97.80, 98.00% and 30.60 L/m2·h, respectively. Membrane type was obtained as the most effective factor; its contribution is 64%. The difference between the predicted and observed value of multi-response signal/noise (MRSN) is within the confidence interval.
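A Taguchi-style signal-to-noise calculation of the kind used here can be sketched as follows; the replicate values and the equal-weight combination into a single multi-response S/N (MRSN) figure are illustrative assumptions, not necessarily the authors' exact weighting.

```python
# Larger-the-better Taguchi S/N ratios combined into a simple multi-response value.
import numpy as np

def sn_larger_is_better(y):
    """Taguchi larger-the-better S/N ratio in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# replicate responses for one factor setting: rejection ratios (%) and permeate flux (L/m2 h)
responses = {
    "TDS rejection": [97.5, 97.3, 97.6],
    "conductivity rejection": [97.4, 97.2, 97.5],
    "imidacloprid rejection": [97.8, 97.7, 97.9],
    "TOC rejection": [98.0, 97.9, 98.1],
    "permeate flux": [30.6, 30.1, 31.0],
}
sn_values = {name: sn_larger_is_better(vals) for name, vals in responses.items()}
mrsn = np.mean(list(sn_values.values()))        # equal-weight multi-response S/N
for name, sn in sn_values.items():
    print(f"{name}: S/N = {sn:.2f} dB")
print(f"MRSN = {mrsn:.2f} dB")
```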
Determination of the optimal mesh parameters for Iguassu centrifuge flow and separation calculations
NASA Astrophysics Data System (ADS)
Romanihin, S. M.; Tronin, I. V.
2016-09-01
We present the method and the results of the determination of optimal computational mesh parameters for axisymmetric modeling of flow and separation in the Iguassu gas centrifuge. The aim of this work was to determine the mesh parameters which provide relatively low computational cost without loss of accuracy. We use a direct search optimization algorithm to calculate the optimal mesh parameters. The obtained parameters were tested by the calculation of the optimal working regime of the Iguassu GC. The separative power calculated using the optimal mesh parameters differs by less than 0.5% from the result obtained on the detailed mesh. The presented method can be used to determine optimal mesh parameters of the Iguassu GC with different rotor speeds.
Study of array plasma antenna parameters
NASA Astrophysics Data System (ADS)
Kumar, Rajneesh; Kumar, Prince
2018-04-01
This paper aims to investigate the array plasma antenna parameters to help in the optimization of an array plasma antenna. A single plasma antenna is transformed into an array plasma antenna by changing the operating parameters. The re-configurability arises in the form of striations, due to transverse bifurcation of the plasma column as the operating parameters are changed. Each striation can be treated as an antenna element, and the system performs like an array plasma antenna. To achieve the goal of this paper, three different configurations of array plasma antenna (namely Array 1, Array 2 and Array 3) are simulated. Observations are made on the variation in antenna parameters such as resonance frequency, radiation pattern, directivity and gain with variation in the length and number of antenna elements for each array plasma antenna. Moreover, experiments are also performed and the results are compared with the simulation. Furthermore, the array plasma antenna parameters are compared with those of a monopole plasma antenna. The present study indicates that the array plasma antenna can be applied for steering and controlling the strength of Wi-Fi signals as required.
Sensor-Based Optimized Control of the Full Load Instability in Large Hydraulic Turbines
Presas, Alexandre; Valero, Carme; Egusquiza, Eduard
2018-01-01
Hydropower plants are of paramount importance for the integration of intermittent renewable energy sources in the power grid. In order to match the energy generated and consumed, Large hydraulic turbines have to work under off-design conditions, which may lead to dangerous unstable operating points involving the hydraulic, mechanical and electrical system. Under these conditions, the stability of the grid and the safety of the power plant itself can be compromised. For many Francis Turbines one of these critical points, that usually limits the maximum output power, is the full load instability. Therefore, these machines usually work far away from this unstable point, reducing the effective operating range of the unit. In order to extend the operating range of the machine, working closer to this point with a reasonable safety margin, it is of paramount importance to monitor and to control relevant parameters of the unit, which have to be obtained with an accurate sensor acquisition strategy. Within the framework of a large EU project, field tests in a large Francis Turbine located in Canada (rated power of 444 MW) have been performed. Many different sensors were used to monitor several working parameters of the unit for all its operating range. Particularly for these tests, more than 80 signals, including ten type of different sensors and several operating signals that define the operating point of the unit, were simultaneously acquired. The present study, focuses on the optimization of the acquisition strategy, which includes type, number, location, acquisition frequency of the sensors and corresponding signal analysis to detect the full load instability and to prevent the unit from reaching this point. A systematic approach to determine this strategy has been followed. It has been found that some indicators obtained with different types of sensors are linearly correlated with the oscillating power. The optimized strategy has been determined based on the correlation characteristics (linearity, sensitivity and reactivity), the simplicity of the installation and the acquisition frequency necessary. Finally, an economic and easy implementable protection system based on the resulting optimized acquisition strategy is proposed. This system, which can be used in a generic Francis turbine with a similar full load instability, permits one to extend the operating range of the unit by working close to the instability with a reasonable safety margin. PMID:29601512
Sensor-Based Optimized Control of the Full Load Instability in Large Hydraulic Turbines.
Presas, Alexandre; Valentin, David; Egusquiza, Mònica; Valero, Carme; Egusquiza, Eduard
2018-03-30
Hydropower plants are of paramount importance for the integration of intermittent renewable energy sources in the power grid. In order to match the energy generated and consumed, Large hydraulic turbines have to work under off-design conditions, which may lead to dangerous unstable operating points involving the hydraulic, mechanical and electrical system. Under these conditions, the stability of the grid and the safety of the power plant itself can be compromised. For many Francis Turbines one of these critical points, that usually limits the maximum output power, is the full load instability. Therefore, these machines usually work far away from this unstable point, reducing the effective operating range of the unit. In order to extend the operating range of the machine, working closer to this point with a reasonable safety margin, it is of paramount importance to monitor and to control relevant parameters of the unit, which have to be obtained with an accurate sensor acquisition strategy. Within the framework of a large EU project, field tests in a large Francis Turbine located in Canada (rated power of 444 MW) have been performed. Many different sensors were used to monitor several working parameters of the unit for all its operating range. Particularly for these tests, more than 80 signals, including ten type of different sensors and several operating signals that define the operating point of the unit, were simultaneously acquired. The present study, focuses on the optimization of the acquisition strategy, which includes type, number, location, acquisition frequency of the sensors and corresponding signal analysis to detect the full load instability and to prevent the unit from reaching this point. A systematic approach to determine this strategy has been followed. It has been found that some indicators obtained with different types of sensors are linearly correlated with the oscillating power. The optimized strategy has been determined based on the correlation characteristics (linearity, sensitivity and reactivity), the simplicity of the installation and the acquisition frequency necessary. Finally, an economic and easy implementable protection system based on the resulting optimized acquisition strategy is proposed. This system, which can be used in a generic Francis turbine with a similar full load instability, permits one to extend the operating range of the unit by working close to the instability with a reasonable safety margin.
In-line mixing states monitoring of suspensions using ultrasonic reflection technique.
Zhan, Xiaobin; Yang, Yili; Liang, Jian; Zou, Dajun; Zhang, Jiaqi; Feng, Luyi; Shi, Tielin; Li, Xiwen
2016-02-01
Based on the measurement of echo signal changes caused by different concentration distributions in the mixing process, a simple ultrasonic reflection technique is proposed for in-line monitoring of the mixing states of suspensions in an agitated tank in this study. The relation between the echo signals and the concentration of suspensions is studied, and the mixing process of suspensions is tracked by in-line measurement of ultrasonic echo signals using two ultrasonic sensors. Through the analysis of echo signals over time, the mixing states of suspensions are obtained, and the homogeneity of suspensions is quantified. With the proposed technique, the effects of impeller diameter and agitation speed on the mixing process are studied, and the optimal agitation speed and the minimum mixing time to achieve the maximum homogeneity are acquired under different operating conditions and design parameters. The proposed technique is stable and feasible and shows great potential for in-line monitoring of mixing states of suspensions. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Yu, Wan-Ting; Yu, Hong-yi; Du, Jian-Ping; Wang, Ding
2018-04-01
The Direct Position Determination (DPD) algorithm has been demonstrated to achieve better accuracy with known signal waveforms. However, the signal waveform is difficult to know completely in the actual positioning process. To solve this problem, we propose a DPD method for digital modulation signals based on an improved particle swarm optimization algorithm. First, a DPD model is established for known modulation signals and a cost function based on symbol estimation is obtained. Second, as the optimization of the cost function is a nonlinear integer optimization problem, an improved Particle Swarm Optimization (PSO) algorithm is considered for the optimal symbol search. Simulations are carried out to show the higher positioning accuracy of the proposed DPD method and the convergence of the fitness function under different inertia weights and population sizes. On the one hand, the proposed algorithm can take full advantage of the signal features to improve the positioning accuracy. On the other hand, the improved PSO algorithm can improve the efficiency of the symbol search by nearly one hundred times to achieve a global optimal solution.
The Construction of a Vague Fuzzy Measure Through L1 Parameter Optimization
2012-08-26
NASA Astrophysics Data System (ADS)
Bhardwaj, Manish; McCaughan, Leon; Olkhovets, Anatoli; Korotky, Steven K.
2006-12-01
We formulate an analytic framework for the restoration performance of path-based restoration schemes in planar mesh networks. We analyze various switch architectures and signaling schemes and model their total restoration interval. We also evaluate the network global expectation value of the time to restore a demand as a function of network parameters. We analyze a wide range of nominally capacity-optimal planar mesh networks and find our analytic model to be in good agreement with numerical simulation data.
An optimal general type-2 fuzzy controller for Urban Traffic Network.
Khooban, Mohammad Hassan; Vafamand, Navid; Liaghat, Alireza; Dragicevic, Tomislav
2017-01-01
An urban traffic network model is illustrated by state charts and an object diagram. However, these have limitations in showing the behavioral perspective of the traffic information flow. Consequently, a state space model is used to calculate the half-value waiting time of vehicles. In this study, a combination of general type-2 fuzzy logic sets and the Modified Backtracking Search Algorithm (MBSA) technique is used to control the traffic signal scheduling and phase succession so as to guarantee a smooth flow of traffic with minimal waiting times and average queue length. The parameters of the input and output membership functions are optimized simultaneously by the novel heuristic MBSA algorithm. A comparison is made between the achieved results and those of optimal and conventional type-1 fuzzy logic controllers. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Optimal frequency range for medical radar measurements of human heartbeats using body-contact radar.
Brovoll, Sverre; Aardal, Øyvind; Paichard, Yoann; Berger, Tor; Lande, Tor Sverre; Hamran, Svein-Erik
2013-01-01
In this paper the optimal frequency range for heartbeat measurements using body-contact radar is experimentally evaluated. A body-contact radar senses electromagnetic waves that have penetrated the human body, but the range of frequencies that can be used is limited by the electric properties of human tissue. The optimal frequency range is an important property needed for the design of body-contact radar systems for heartbeat measurements. In this study heartbeats are measured using three different antennas at discrete frequencies from 0.1 - 10 GHz, and the strength of the received heartbeat signal is calculated. To characterize the antennas when in contact with the body, two-port S-parameters are measured for the antennas using a pork rib as a phantom for the human body. The results show that frequencies up to 2.5 GHz can be used for heartbeat measurements with body-contact radar.
Thoss, Franz; Bartsch, Bengt
2017-12-01
In experimental studies, we could show that the visual threshold of man is influenced by the geomagnetic field. One of the results was that the threshold shows periodic fluctuations when the vertical component of the field is reversed periodically. The maximum of these oscillations occurred at a period duration of 110 s. To explain this phenomenon, we chose the process that likely underlies the navigation of birds in the geomagnetic field: the light reaction of the FAD component of cryptochrome in the retina. The human retina, like the bird retina, contains cryptochrome. Based on the investigations of Müller and Ahmad (J Biol Chem 286:21033-21040, 2011) and Solov'yov and Schulten (J Phys Chem B 116:1089-1099, 2012), we designed a model of the light-induced reduction and subsequent reoxidation of FAD. This model contains a radical pair whose interconversion dynamics are affected by the geomagnetic field. The parameters of the model were partly calculated from the data of our experimental investigation and partly taken from the results of other authors. These parameters were then optimized by adjusting the model behaviour to the experimental results. The simulation of the finished model shows that the concentrations of all included substances indeed oscillate with the frequency of the modelled magnetic field. After optimization of the parameters, the oscillations of FAD and FADH* show maximal amplitude at a period duration of 110 s, as was observed in the experiment. This makes it most likely that the signal which influences the visual system originates from FADH* (signalling state).
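A deliberately simplified kinetic sketch of the idea is given below: a two-pool FAD/FADH* model whose reoxidation rate is weakly modulated by a square-wave field with a 110 s period. The rate constants and the form of the modulation are illustrative assumptions, not the fitted parameters of the cryptochrome model described above.

```python
# Toy two-pool model: light-driven reduction of FAD and field-modulated reoxidation.
import numpy as np
from scipy.integrate import solve_ivp

period = 110.0                      # s, drive period of the reversed field
k_red = 0.05                        # 1/s, assumed light-driven reduction FAD -> FADH*
k_ox0, dk = 0.04, 0.004             # 1/s, assumed baseline reoxidation and modulation

def field_sign(t):
    return 1.0 if (t % period) < period / 2 else -1.0

def rhs(t, y):
    fad, fadh = y
    k_ox = k_ox0 + dk * field_sign(t)
    return [-k_red * fad + k_ox * fadh, k_red * fad - k_ox * fadh]

sol = solve_ivp(rhs, (0.0, 10 * period), [1.0, 0.0], max_step=1.0)
fadh = sol.y[1]
settled = sol.t > 5 * period
print("FADH* oscillation amplitude after settling:",
      fadh[settled].max() - fadh[settled].min())
```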
NASA Astrophysics Data System (ADS)
Canora, C. P.; Moral, A. G.; Rull, F.; Maurice, S.; Hutchinson, I.; Ramos, G.; López-Reyes, G.; Belenguer, T.; Canchal, R.; Prieto, J. A. R.; Rodriguez, P.; Santamaria, P.; Berrocal, A.; Colombo, M.; Gallago, P.; Seoane, L.; Quintana, C.; Ibarmia, S.; Zafra, J.; Saiz, J.; Santiago, A.; Marin, A.; Gordillo, C.; Escribano, D.; Sanz-Palominoa, M.
2017-09-01
The Raman Laser Spectrometer (RLS) is one of the Pasteur Payload instruments within the ESA's Aurora Exploration Programme, ExoMars mission. Raman spectroscopy is based on the analysis of spectral fingerprints due to the inelastic scattering of light when interacting with matter. RLS is composed of the following units: the SPU (Spectrometer Unit), the iOH (Internal Optical Head) and the ICEU (Instrument Control and Excitation Unit), plus the harnesses (EH and OH). The iOH focuses the excitation laser on the samples and collects the Raman emission from the sample; the SPU (CCD) acquires the spectrum, and the analog video data are received, digitized and transmitted to the processor module (ICEU). The main sources of noise arise from the sample, the background, and the instrument (laser, CCD, focus, acquisition parameters, operation control). In this last case the sources are mainly perturbations from the optics, dark signal and readout noise. Flicker noise arising from laser emission fluctuations can also be considered as instrument noise. In order to evaluate the SNR of a Raman instrument in a practical manner, it is useful to perform end-to-end measurements on given standard samples. These measurements have to be compared with radiometric simulations using Raman efficiency values from the literature and taking into account the different instrumental contributions to the SNR. The performance and functionality of the RLS EQM instrument have been demonstrated in accordance with the science expectations. The SNR performance obtained with the RLS EQM will be compared, experimentally and via analysis, with the Instrument Radiometric Model tool. The characterization process for SNR optimization is still ongoing. The operational parameters and RLS algorithms (fluorescence removal and acquisition parameter estimation) will be improved in future models (EQM-2) until FM Model delivery.
Ultra-sensitive probe of spectral line structure and detection of isotopic oxygen
NASA Astrophysics Data System (ADS)
Garner, Richard M.; Dharamsi, A. N.; Khan, M. Amir
2018-01-01
We discuss a new method of investigating and obtaining quantitative behavior of higher-harmonic (>2f) wavelength modulation spectroscopy (WMS) based on the signal structure. It is shown that the spectral structure of higher-harmonic WMS signals, quantified by the number of zero crossings and turning points, can have increased sensitivity to ambient conditions or line-broadening effects from changes in temperature, pressure, or optical depth. The structure of WMS signals, characterized by combinations of signal magnitude and the spectral locations of turning points and zero crossings, provides a unique scale that quantifies lineshape parameters and is thus useful in optimizing measurements obtained from multi-harmonic WMS signals. We demonstrate this by detecting weaker rotational-vibrational transitions of isotopic atmospheric oxygen (16O18O) in the near-infrared region, where higher-harmonic WMS signals are more sensitive despite less favorable signal-to-noise ratio considerations. The proposed approach based on spectral structure provides the ability to investigate and quantify signals not only at the linecenter but also in the wing region of the absorption profile. This formulation is particularly useful in tunable diode laser spectroscopy and ultra-precise laser-based sensors, where the absorption signal profile carries information on quantities of interest, e.g., concentration, velocity, or gas collision dynamics.
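The dependence of higher-harmonic WMS structure on the lineshape can be illustrated with a short lock-in-style simulation. The sketch below is an illustrative model rather than the authors' formulation: it sweeps a wavelength-modulated laser across a Lorentzian line, extracts the nth Fourier harmonic of the absorbance, and counts the zero crossings of the resulting 4f lineshape; all line and modulation parameters are arbitrary assumptions.

```python
import numpy as np

def lorentzian(nu, nu0, gamma):
    """Area-normalized Lorentzian absorption profile."""
    return (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)

def wms_harmonic(n, nu_center, nu0=0.0, gamma=1.0, mod_amp=2.2, depth=0.05):
    """n-th harmonic WMS signal at a given scan position: the n-th Fourier
    cosine coefficient of the absorbance over one modulation cycle."""
    theta = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)
    nu_inst = nu_center + mod_amp * np.cos(theta)       # modulated laser frequency
    absorbance = depth * lorentzian(nu_inst, nu0, gamma)
    return 2.0 * np.mean(absorbance * np.cos(n * theta))

# Scan across the line and count zero crossings of the 4f lineshape
scan = np.linspace(-8.0, 8.0, 801)
sig_4f = np.array([wms_harmonic(4, nc) for nc in scan])
crossings = int(np.sum(np.diff(np.sign(sig_4f)) != 0))
print("4f zero crossings across the scan:", crossings)
```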
NASA Astrophysics Data System (ADS)
Zhang, J. Y.; Jiang, Y.
2017-10-01
To ensure satisfactory dynamic performance of controllers in time-delayed power systems, a WAMS-based control strategy is investigated in the presence of output feedback delay. An integrated approach based on Pade approximation and particle swarm optimization (PSO) is employed for the parameter configuration of the PSS. The coordinated configuration scheme of the power system controllers is obtained from a series of stability constraints, with the aim of maximizing the minimum damping ratio of the inter-area modes of the power system. The validity of the derived PSS is verified on a prototype power system. The findings demonstrate that the proposed control design can damp the inter-area oscillation and enhance small-signal stability.
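As a rough illustration of the optimization step, the sketch below runs a plain particle swarm that maximizes the minimum damping ratio of the closed-loop modes of a toy two-mode state-space plant under static output feedback. The plant matrices, the single feedback gain, and the PSO settings are placeholder assumptions; the paper's Pade-approximated delay model and PSS structure are not reproduced here.

```python
import numpy as np

def min_damping(gains, A0, B, C):
    """Minimum damping ratio of the closed-loop modes for a static
    output-feedback gain vector (toy stand-in for PSS parameters)."""
    K = gains.reshape(B.shape[1], C.shape[0])
    lam = np.linalg.eigvals(A0 - B @ K @ C)
    lam = lam[np.abs(lam) > 1e-9]
    return np.min(-lam.real / np.abs(lam))

def pso(obj, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm maximizing obj(x)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([obj(p) for p in x])
    g = pbest[np.argmax(pval)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([obj(p) for p in x])
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmax(pval)].copy()
    return g, np.max(pval)

# Toy lightly damped two-mode plant (placeholder for the power system model)
A0 = np.array([[0.0, 1.0, 0.0, 0.0], [-1.0, -0.02, 0.1, 0.0],
               [0.0, 0.0, 0.0, 1.0], [0.1, 0.0, -2.0, -0.05]])
B = np.array([[0.0], [1.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 1.0, 0.0]])
best, zeta = pso(lambda k: min_damping(k, A0, B, C), dim=1, bounds=(-2.0, 2.0))
print(f"best gain = {best}, minimum damping ratio = {zeta:.3f}")
```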
Raman-tailored photonic crystal fiber for telecom band photon-pair generation.
Cordier, M; Orieux, A; Gabet, R; Harlé, T; Dubreuil, N; Diamanti, E; Delaye, P; Zaquine, I
2017-07-01
We report on the experimental characterization of a novel nonlinear liquid-filled hollow-core photonic crystal fiber for the generation of photon pairs at a telecommunication wavelength through spontaneous four-wave mixing (SFWM). We show that the optimization procedure in view of this application links the choice of the nonlinear liquid to the design parameters of the fiber, and we give an example of such an optimization at telecom wavelengths. Combining the modeling of the fiber and classical characterization techniques at these wavelengths, we identify, for the chosen fiber and liquid combination, SFWM phase-matching frequency ranges with no Raman scattering noise contamination. This is a first step toward obtaining a fibered telecom-band photon-pair source with a high signal-to-noise ratio.
Optimizing laser crater enhanced Raman scattering spectroscopy
NASA Astrophysics Data System (ADS)
Lednev, V. N.; Sdvizhenskii, P. A.; Grishin, M. Ya.; Fedorov, A. N.; Khokhlova, O. V.; Oshurko, V. B.; Pershin, S. M.
2018-05-01
The laser crater enhanced Raman scattering (LCERS) spectroscopy technique has been systematically studied with respect to the sampling strategy and the influence of powder material properties on spectral intensity enhancement. The same nanosecond pulsed solid-state Nd:YAG laser (532 nm, 10 ns, 0.1-1.5 mJ/pulse) was used for laser crater production and for Raman scattering experiments on L-aspartic acid powder. The increased sampling area inside the crater cavity is the key factor for Raman signal improvement in the LCERS technique; accordingly, the Raman signal enhancement was studied as a function of numerous experimental parameters, including lens-to-sample distance, wavelength (532 and 1064 nm), and laser pulse energy used for crater production. Combining laser pulses of 1064 and 532 nm wavelengths for crater ablation was shown to be an effective way to further improve the LCERS signal. Powder material properties (particle size distribution, powder compactness) were demonstrated to affect LCERS measurements, with better results achieved for smaller particles and lower compactness.
Fault Diagnosis of Rolling Bearing Based on Fast Nonlocal Means and Envelop Spectrum
Lv, Yong; Zhu, Qinglin; Yuan, Rui
2015-01-01
The nonlocal means (NL-Means) method, which has been widely used in the field of image processing in recent years, effectively overcomes the limitations of the neighborhood filter and eliminates the artifact and edge problems caused by traditional image denoising methods. Although NL-Means is very popular in 2D image signal processing, it has not received enough attention in 1D signal processing. This paper proposes a novel approach that diagnoses rolling bearing faults based on fast NL-Means and the envelope spectrum. The parameters of the method are optimized for the rolling bearing signals, which is the key contribution of this paper. The approach is applied to the fault diagnosis of rolling bearings, and the results demonstrate its efficiency in detecting roller bearing failures. PMID:25585105
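A plain (non-accelerated) 1D non-local means filter followed by an envelope spectrum can be sketched as below. This illustrates the general technique rather than the authors' fast implementation; the synthetic impulse train and the patch size, search window, and bandwidth h (the kind of parameters such a method would optimize) are all assumed values.

```python
import numpy as np
from scipy.signal import hilbert

def nl_means_1d(x, patch=7, search=40, h=0.4):
    """Plain (non-accelerated) 1D non-local means: each sample is replaced by
    a weighted average of samples whose surrounding patches look similar."""
    half = patch // 2
    pad = np.pad(x, half, mode="reflect")
    patches = np.stack([pad[i:i + patch] for i in range(len(x))])   # (N, patch)
    out = np.empty_like(x)
    for i in range(len(x)):
        lo, hi = max(0, i - search), min(len(x), i + search + 1)
        d2 = np.mean((patches[lo:hi] - patches[i]) ** 2, axis=1)    # patch distances
        w = np.exp(-d2 / h ** 2)
        out[i] = np.sum(w * x[lo:hi]) / np.sum(w)
    return out

# Synthetic "bearing" record: ~100 Hz train of 3 kHz bursts buried in noise
rng = np.random.default_rng(1)
fs = 12_000
t = np.arange(4096) / fs
bursts = (np.sin(2 * np.pi * 100 * t) > 0.95).astype(float)
x = bursts * np.sin(2 * np.pi * 3000 * t) + 0.3 * rng.standard_normal(t.size)

den = nl_means_1d(x)
env = np.abs(hilbert(den))                          # envelope via analytic signal
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(env.size, d=1 / fs)
print("dominant envelope-spectrum frequency: %.1f Hz" % freqs[np.argmax(spec)])
```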
The extended Fourier transform for 2D spectral estimation.
Armstrong, G S; Mandelshtam, V A
2001-11-01
We present a linear algebraic method, named the eXtended Fourier Transform (XFT), for spectral estimation from truncated time signals. The method is a hybrid of the discrete Fourier transform (DFT) and the regularized resolvent transform (RRT) (J. Chen et al., J. Magn. Reson. 147, 129-137 (2000)). Namely, it estimates the remainder of a finite DFT by RRT. The RRT estimation corresponds to solution of an ill-conditioned problem, which requires regularization. The regularization depends on a parameter, q, that essentially controls the resolution. By varying q from 0 to infinity one can "tune" the spectrum between a high-resolution spectral estimate and the finite DFT. The optimal value of q is chosen according to how well the data fits the form of a sum of complex sinusoids and, in particular, the signal-to-noise ratio. Both 1D and 2D XFT are presented with applications to experimental NMR signals. Copyright 2001 Academic Press.
Non-Gaussian, non-dynamical stochastic resonance
NASA Astrophysics Data System (ADS)
Szczepaniec, Krzysztof; Dybiec, Bartłomiej
2013-11-01
The classical model revealing stochastic resonance is the motion of an overdamped particle in a double-well fourth-order potential, where the combined action of noise and external periodic driving results in the amplification of weak signals. Resonance behavior can also be observed in non-dynamical systems. The simplest example is a threshold-triggered device: it consists of a periodically modulated input and noise, and every time the output crosses the threshold the event is recorded. Such a digitally filtered signal is sensitive to the noise intensity, and there exists an optimal value of the noise intensity resulting in the "most" periodic output. Here, we explore properties of non-dynamical stochastic resonance in non-equilibrium situations, i.e. when the Gaussian noise is replaced by an α-stable noise. We demonstrate that non-equilibrium α-stable noises, depending on the noise parameters, can either weaken or enhance the non-dynamical stochastic resonance.
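The threshold (non-dynamical) setup with α-stable noise can be reproduced in a few lines. The sketch below is a minimal illustration, not the authors' code: a subthreshold sine plus symmetric α-stable noise drives a threshold detector, and the power of the binary output at the driving frequency serves as a simple resonance measure; all amplitudes, frequencies, and noise scales are assumptions.

```python
import numpy as np
from scipy.stats import levy_stable

def output_power_at_drive(alpha, scale, threshold=1.0, amp=0.8, f0=0.02,
                          n=20_000, seed=0):
    """Threshold (non-dynamical) detector: output is 1 whenever the
    subthreshold periodic input plus alpha-stable noise exceeds the threshold.
    Returns the power of the binary output at the driving frequency."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    drive = amp * np.sin(2 * np.pi * f0 * t)              # subthreshold signal
    noise = levy_stable.rvs(alpha, 0.0, scale=scale, size=n, random_state=rng)
    out = (drive + noise > threshold).astype(float)
    spec = np.abs(np.fft.rfft(out - out.mean())) ** 2 / n
    k = int(round(f0 * n))                                # FFT bin of the drive
    return spec[k]

# Sweep the noise scale for heavy-tailed (alpha < 2) noise
for scale in (0.05, 0.2, 0.5, 1.0, 2.0):
    p = output_power_at_drive(alpha=1.7, scale=scale)
    print(f"alpha = 1.7, noise scale = {scale:<4} -> P(f0) = {p:.2f}")
```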
NASA Astrophysics Data System (ADS)
Zhu, Hui; Shan, Xuekang; Sun, Xiaohan
2017-10-01
A method for reconstructing the vibration waveform from the optical time-domain backscattering pulses in a distributed optical fiber sensing system (DOFSS) is proposed, which extracts and recovers the external vibration signal from the tested pulses by analog signal processing, so that the vibration location and waveform can be obtained simultaneously. We establish the response model of the DOFSS to external vibration and analyze the effects of the system parameters on the operational performance. The main parts of the DOFSS, including the delay fiber length and the wavelength, are optimized to improve the sensitivity of the system. An experimental system is set up, and the measured vibration amplitudes and reconstructed waveforms fit the original driving signal well. The experimental results demonstrate that the vibration waveform reconstruction performs well, with an SNR of 15 dB, whenever external vibrations of different intensities and frequencies are exerted on the sensing fiber.
Dirscherl, Thomas; Rickhey, Mark; Bogner, Ludwig
2012-02-01
A biologically adaptive radiation treatment method to maximize the TCP is shown. Functional imaging is used to acquire a heterogeneous dose prescription in terms of Dose Painting by Numbers and to create a patient-specific IMRT plan. Adapted from a method for selective dose escalation under the guidance of the spatial biology distribution, a model was developed that translates heterogeneously distributed radiobiological parameters into voxelwise dose prescriptions. Using the example of a prostate case with (18)F-choline PET imaging, different sets of reported parameter values were examined with respect to the resulting range of dose values. Furthermore, the influence of each parameter of the linear-quadratic model was investigated. A correlation between the PET signal and proliferation, as well as cell density, was assumed. Using our in-house treatment planning software Direct Monte Carlo Optimization (DMCO), a treatment plan based on the obtained dose prescription was generated. Gafchromic EBT films were irradiated for evaluation. When a TCP of 95% was aimed at, the maximal dose in a voxel of the prescription exceeded 100 Gy for most of the considered parameter sets. One of the parameter sets resulted in a dose range of 87.1 Gy to 99.3 Gy, yielding a TCP of 94.7%, and was investigated more closely. The TCP of the plan decreased to 73.5% after optimization based on that prescription. The dose-difference histogram of the optimized and prescribed dose revealed a mean of -1.64 Gy and a standard deviation of 4.02 Gy. Film verification showed reasonable agreement between the planned and delivered dose. If the distribution of radiobiological parameters within a tumor is known, this model can be used to create a Dose Painting by Numbers plan that maximizes the TCP. It could be shown that such a heterogeneous dose distribution is technically feasible. Copyright © 2012. Published by Elsevier GmbH.
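A minimal sketch of translating a functional image into a voxelwise dose prescription is shown below. It assumes, purely for illustration, a linear mapping from normalized PET uptake to clonogen density, linear-quadratic survival at a fixed dose per fraction, and a Poisson TCP shared equally among voxels; the radiobiological parameter values are placeholders, not the reported sets examined in the paper.

```python
import numpy as np

def voxel_dose_prescription(pet, alpha=0.15, beta=0.05, d_frac=2.0,
                            rho_max=1e7, voxel_cc=0.064, tcp_target=0.95):
    """Translate a normalized PET uptake map into voxelwise dose (illustrative).
    Assumptions: clonogen density scales linearly with PET signal; LQ survival
    exp(-(alpha + beta*d_frac)*D) at fixed dose per fraction; Poisson TCP, with
    the allowed number of surviving clonogens shared equally by voxels."""
    n_clono = rho_max * voxel_cc * np.clip(pet, 0.0, 1.0)      # clonogens/voxel
    n_vox = np.count_nonzero(n_clono)
    surv_target = -np.log(tcp_target) / n_vox                  # allowed survivors/voxel
    dose = np.zeros_like(pet, dtype=float)
    mask = n_clono > 0
    dose[mask] = np.log(n_clono[mask] / surv_target) / (alpha + beta * d_frac)
    return dose

pet_uptake = np.array([0.2, 0.5, 0.8, 1.0])    # toy normalized uptake values
print(np.round(voxel_dose_prescription(pet_uptake), 1), "Gy")
```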
Shuttle/TDRSS Ku-band downlink study
NASA Technical Reports Server (NTRS)
Meyer, R.
1976-01-01
Assessing the adequacy of the baseline signal design approach, developing performance specifications for the return-link hardware, and performing detailed design and parameter optimization tasks were accomplished by completing five specific study tasks. The results of these tasks show that the basic signal structure design is sound and that the goals can be met. The constraints placed on the return-link hardware by this structure allow reasonable specifications to be written, so that no areas of extreme technical risk in equipment design are foreseen. A third channel can be added to the PM mode without seriously degrading the other services. The feasibility of using only a PM mode was demonstrated; however, this would require the use of some digital TV transmission techniques. Each task and its results are summarized.
NASA Astrophysics Data System (ADS)
Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod
2010-06-01
Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piecewise polynomial approximation of the phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. This work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, such as the Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF), within HIM-operator-based phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.
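For reference, single-tone frequency estimation by interpolation on Fourier coefficients can be sketched compactly. The block below is a generic illustration in the spirit of the IFEIF estimator (a coarse FFT peak followed by iterative interpolation at ±0.5 bin), not the authors' implementation; in the HIM-based method such an estimator would be applied to the HIM-operator output rather than to a raw tone.

```python
import numpy as np

def estimate_tone_frequency(x, n_iter=3):
    """Single-tone frequency estimate: coarse FFT peak followed by iterative
    interpolation on Fourier coefficients evaluated at +/- half a bin."""
    N = len(x)
    n = np.arange(N)
    k0 = np.argmax(np.abs(np.fft.fft(x)))        # coarse peak bin
    delta = 0.0
    for _ in range(n_iter):
        Xp = np.sum(x * np.exp(-2j * np.pi * (k0 + delta + 0.5) * n / N))
        Xm = np.sum(x * np.exp(-2j * np.pi * (k0 + delta - 0.5) * n / N))
        delta += 0.5 * np.real((Xp + Xm) / (Xp - Xm))   # interpolation update
    return (k0 + delta) / N                       # cycles per sample

# Noisy complex exponential with an off-grid frequency
rng = np.random.default_rng(0)
N, f_true = 256, 0.1037
x = (np.exp(2j * np.pi * f_true * np.arange(N))
     + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))
print(f"true f = {f_true}, estimated f = {estimate_tone_frequency(x):.5f}")
```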
Hidden Markov model analysis of force/torque information in telemanipulation
NASA Technical Reports Server (NTRS)
Hannaford, Blake; Lee, Paul
1991-01-01
A model for the prediction and analysis of sensor information recorded during robotic performance of telemanipulation tasks is presented. The model uses the hidden Markov model to describe the task structure, the operator's or intelligent controller's goal structure, and the sensor signals. A methodology for constructing the model parameters based on engineering knowledge of the task is described. It is concluded that the model and its optimal state estimation algorithm, the Viterbi algorithm, are very successful at segmenting the data record into phases corresponding to subgoals of the task. The model provides a rich modeling structure within a statistical framework, which enables it to represent complex systems and to be robust to real-world sensory signals.
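A minimal Viterbi decoder for a discrete-output HMM, of the kind used to segment such sensor records into task phases, is sketched below. The three states, the quantized force symbols, and all probabilities are invented placeholders rather than the paper's force/torque model.

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely state sequence for a discrete-output HMM.
    obs: observation indices; log_pi: (S,) initial log-probs;
    log_A: (S, S) transition log-probs; log_B: (S, V) emission log-probs."""
    S, T = log_A.shape[0], len(obs)
    delta = np.empty((T, S))
    psi = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A           # (from-state, to-state)
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(S)] + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = np.argmax(delta[-1])
    for t in range(T - 2, -1, -1):                       # backtrack
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Toy 3-phase task (approach / contact / retract) with quantized force levels
log = np.log
pi = np.array([0.98, 0.01, 0.01])
A  = np.array([[0.95, 0.05, 0.00],
               [0.00, 0.95, 0.05],
               [0.01, 0.00, 0.99]])
B  = np.array([[0.8, 0.2, 0.0],     # approach: mostly low force
               [0.1, 0.3, 0.6],     # contact: mostly high force
               [0.7, 0.3, 0.0]])    # retract: low/medium force
obs = [0, 0, 1, 2, 2, 2, 1, 0, 0]
print(viterbi(obs, log(pi), log(A + 1e-12), log(B + 1e-12)))
```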
Maximizing the Detection Probability of Kilonovae Associated with Gravitational Wave Observations
NASA Astrophysics Data System (ADS)
Chan, Man Leong; Hu, Yi-Ming; Messenger, Chris; Hendry, Martin; Heng, Ik Siong
2017-01-01
Estimates of the source sky location for gravitational wave signals are likely to span areas of up to hundreds of square degrees or more, making it very challenging for most telescopes to search for counterpart signals in the electromagnetic spectrum. To boost the chance of successfully observing such counterparts, we have developed an algorithm that optimizes the number of observing fields and their corresponding time allocations by maximizing the detection probability. As a proof-of-concept demonstration, we optimize follow-up observations targeting kilonovae using telescopes including the CTIO-Dark Energy Camera, Subaru-HyperSuprimeCam, Pan-STARRS, and the Palomar Transient Factory. We consider three simulated gravitational wave events with 90% credible error regions spanning areas from ~30 deg² to ~300 deg². Assuming a source at 200 Mpc, we demonstrate that to obtain a maximum detection probability, there is an optimized number of fields for any particular event that a telescope should observe. To inform future telescope design studies, we present the maximum detection probability and corresponding number of observing fields for a combination of limiting magnitudes and fields of view over a range of parameters. We show that for large gravitational wave error regions, telescope sensitivity rather than field of view is the dominating factor in maximizing the detection probability.
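The time-allocation idea can be illustrated with a simple greedy scheme. The sketch below assumes each candidate field carries a fixed share of the skymap probability and that the per-field detection probability saturates exponentially with exposure time; under that concave model, assigning each observing slot to the field with the largest marginal gain is optimal. The field probabilities, slot length, and saturation time are placeholder assumptions, and this is not the authors' algorithm.

```python
import numpy as np

def allocate_slots(field_prob, total_time, slot, tau):
    """Greedy exposure-time allocation: the detection probability for a field
    observed for time t is modeled as p_field * (1 - exp(-t / tau)); each slot
    goes to the field with the largest marginal gain (optimal here because the
    per-field gain is concave in t)."""
    t = np.zeros_like(field_prob)
    for _ in range(int(total_time / slot)):
        gain = field_prob * (np.exp(-t / tau) - np.exp(-(t + slot) / tau))
        t[np.argmax(gain)] += slot
    return t

# Toy example: 6 candidate fields covering different chunks of the GW skymap
field_prob = np.array([0.30, 0.22, 0.18, 0.12, 0.10, 0.08])
tau = 1800.0                                    # assumed saturation time (s)
exposure = allocate_slots(field_prob, total_time=6 * 3600, slot=300, tau=tau)
total_p = np.sum(field_prob * (1 - np.exp(-exposure / tau)))
print("seconds per field:", exposure)
print("total detection probability: %.2f" % total_p)
```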
Dynamic nuclear polarization using frequency modulation at 3.34 T.
Hovav, Y; Feintuch, A; Vega, S; Goldfarb, D
2014-01-01
During dynamic nuclear polarization (DNP) experiments, polarization is transferred from unpaired electrons to their neighboring nuclear spins, resulting in dramatic enhancement of the NMR signals. While in most cases this is achieved by continuous wave (cw) irradiation applied to samples in fixed external magnetic fields, here we show that the DNP enhancement of static samples can be improved by modulating the microwave (MW) frequency at a constant field of 3.34 T. The efficiency of triangular-shaped modulation is explored by monitoring the (1)H signal enhancement in frozen solutions containing different TEMPOL radical concentrations at different temperatures. The optimal modulation parameters are examined experimentally, and under the most favorable conditions a threefold enhancement is obtained with respect to constant-frequency DNP in samples with low radical concentrations. The results are interpreted using numerical simulations on small spin systems. In particular, it is shown experimentally and explained theoretically that: (i) the optimal modulation frequency is higher than the electron spin-lattice relaxation rate; (ii) the optimal modulation amplitude must be smaller than the nuclear Larmor frequency and the EPR line-width, as expected; (iii) the MW frequencies corresponding to the enhancement maxima and minima are shifted away from one another when using frequency modulation, relative to the constant-frequency experiments. Copyright © 2013 Elsevier Inc. All rights reserved.
Optimizing the learning rate for adaptive estimation of neural encoding models.
Hsieh, Han-Lin; Shanechi, Maryam M
2018-05-01
Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
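The trade-off that the calibration algorithm formalizes can be seen in a toy scalar simulation: a constant-learning-rate estimator tracking a static parameter from noisy observations converges faster but settles at a larger steady-state error variance as the rate grows. The sketch below only illustrates that trade-off; it is not the authors' point-process or Gaussian-process derivation, and all numbers are assumptions.

```python
import numpy as np

def track(rate, w_true=1.5, n=4000, noise_std=1.0, seed=0):
    """Constant-learning-rate estimator w <- w + rate * (y - w) for noisy
    observations y = w_true + noise. Returns the steady-state error variance
    and an empirical convergence time (first step within 10% of w_true)."""
    rng = np.random.default_rng(seed)
    w, traj = 0.0, np.empty(n)
    for k in range(n):
        y = w_true + noise_std * rng.standard_normal()
        w += rate * (y - w)                       # learning-rate update
        traj[k] = w
    err_var = np.var(traj[n // 2:] - w_true)      # steady-state error variance
    hits = np.nonzero(np.abs(traj - w_true) < 0.1 * abs(w_true))[0]
    return err_var, (hits[0] if hits.size else n)

for rate in (0.005, 0.02, 0.1, 0.4):
    v, t_conv = track(rate)
    print(f"rate = {rate:<6}  steady-state error var = {v:.4f}  "
          f"convergence step = {t_conv}")
```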
Optimization measurement of muscle oxygen saturation under isometric studies using FNIRS
NASA Astrophysics Data System (ADS)
Halim, A. A. A.; Laili, M. H.; Salikin, M. S.; Rusop, M.
2018-05-01
The development of functional near infrared spectroscopy (fNIRS) technologies has advanced signal quantification, using multiple wavelengths and detectors to investigate the hemodynamic response in human muscle. These non-invasive technologies have been widely used to address the propagation of light inside tissue, including the absorption and scattering coefficients, and to quantify the oxygenation level of haemoglobin and myoglobin in human muscle. The goal of this paper is to optimize the measurement of muscle oxygen saturation during isometric exercise using fNIRS. The experiment was carried out on 15 sedentary healthy male volunteers. All volunteers were required to perform an isometric exercise at three assessed levels of muscular fatigue on the flexor digitorum superficialis (FDS) muscle of the human forearm using fNIRS. The slopes of the signals were used to evaluate the muscle oxygen saturation associated with regional muscle fatigue. As a result, the oxygen saturation slope at the 10% exercise level was steeper than at the 30%-50% fatigue levels. The hemodynamic signal response measured by fNIRS was significant (p = 0.04, i.e., p < 0.05) across all three assessed levels of muscular fatigue. Thus, this parameter could be used to estimate human fatigue levels and could open other possibilities for studying muscle performance diagnosis.
Integrated Model of Multiple Kernel Learning and Differential Evolution for EUR/USD Trading
Deng, Shangkun; Sakurai, Akito
2014-01-01
Currency trading is an important area for individual investors, government policy decisions, and organization investments. In this study, we propose a hybrid approach referred to as MKL-DE, which combines multiple kernel learning (MKL) with differential evolution (DE) for trading a currency pair. MKL is used to learn a model that predicts changes in the target currency pair, whereas DE is used to generate the buy and sell signals for the target currency pair from the relative strength index (RSI) combined with the MKL prediction as a trading signal. The new hybrid implementation is applied to EUR/USD trading, which is the most traded foreign exchange (FX) currency pair. MKL is essential for utilizing information from multiple information sources, and DE is essential for formulating a trading rule based on a mixture of discrete structures and continuous parameters. Initially, the prediction model optimized by MKL predicts the returns based on a technical indicator called the moving average convergence and divergence. Next, a combined trading signal is optimized by DE using the inputs from the prediction model and the technical indicator RSI obtained from multiple timeframes. The experimental results showed that trading using the prediction learned by MKL yielded consistent profits. PMID:25097891
Kou, Weibin; Chen, Xumei; Yu, Lei; Gong, Huibo
2018-04-18
Most existing signal timing models aim to minimize the total delay and stops at intersections, without considering environmental factors. This paper analyzes the trade-off between vehicle emissions and traffic efficiency on the basis of field data. First, considering the different operating modes of cruising, acceleration, deceleration, and idling, field emission and Global Positioning System (GPS) data are collected to estimate emission rates for heavy-duty and light-duty vehicles. Second, a multiobjective signal timing optimization model based on a genetic algorithm is established to minimize delay, stops, and emissions. Finally, a case study is conducted in Beijing. Nine scenarios are designed considering different weights of emissions and traffic efficiency. Compared with results obtained using the Highway Capacity Manual (HCM) 2010, signal timing optimized by the proposed model decreases vehicle delays and emissions more significantly. The optimization model can be applied in different cities, which provides support for eco-signal design and development. Vehicle emissions are heavy at signalized intersections in urban areas, and the proposed multiobjective model explicitly addresses the trade-off between emissions and traffic efficiency on the basis of field data.
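A toy version of the optimization can be sketched with a small genetic search over the green split of a two-phase intersection, minimizing a weighted sum of a Webster-style uniform-delay term and an emission proxy tied to the estimated number of stops. The flows, saturation flow, lost time, emission weight, and GA settings below are placeholder assumptions and do not reproduce the field-calibrated emission rates or the HCM comparison in the paper.

```python
import numpy as np

def cost(green1, cycle=90.0, flows=(800.0, 600.0), sat=1800.0,
         w_delay=1.0, w_emis=0.5):
    """Weighted delay + emission proxy for a two-phase signal.
    Delay: Webster's uniform-delay term per phase; emissions: proportional to
    the estimated number of stops (3.0 is a placeholder grams-per-stop rate)."""
    greens = np.array([green1, cycle - green1 - 10.0])       # 10 s lost time
    if np.any(greens < 5.0):
        return np.inf
    g_over_c = greens / cycle
    x = np.array(flows) / (sat * g_over_c)                   # degree of saturation
    if np.any(x >= 1.0):
        return np.inf                                        # oversaturated: infeasible
    delay = 0.5 * cycle * (1 - g_over_c) ** 2 / (1 - g_over_c * x)   # s/veh
    stops = 0.9 * (1 - g_over_c) / (1 - g_over_c * x)                # stops/veh
    return w_delay * np.dot(delay, flows) + w_emis * 3.0 * np.dot(stops, flows)

def genetic_search(n_pop=40, n_gen=60, seed=0):
    """Tiny GA: truncation selection plus Gaussian mutation on the green split."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(10.0, 70.0, n_pop)                     # candidate green1 values
    for _ in range(n_gen):
        fit = np.array([cost(g) for g in pop])
        parents = pop[np.argsort(fit)[:n_pop // 2]]
        kids = rng.choice(parents, n_pop - parents.size)     # clone parents
        kids = np.clip(kids + rng.normal(0.0, 2.0, kids.size), 10.0, 70.0)
        pop = np.concatenate([parents, kids])
    best = pop[np.argmin([cost(g) for g in pop])]
    return best, cost(best)

g1, c = genetic_search()
print(f"optimized phase-1 green: {g1:.1f} s, weighted cost: {c:.0f}")
```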
Assessment of precursory information in seismo-electromagnetic phenomena
NASA Astrophysics Data System (ADS)
Han, P.; Hattori, K.; Zhuang, J.
2017-12-01
Previous statistical studies have shown correlations between seismo-electromagnetic phenomena and sizeable earthquakes in Japan. In this study, utilizing Molchan's error diagram, we evaluate whether these phenomena contain precursory information and discuss how they can be used in short-term forecasting of large earthquake events. In practice, for a given series of precursory signals and related earthquake events, each prediction strategy is characterized by the leading time of alarms, the length of the alarm window, and the alarm radius (area) and magnitude. The leading time is the interval between a detected anomaly and its following alarm, and the alarm window is the duration that an alarm lasts. The alarm radius and magnitude are the maximum predictable distance and the minimum predictable magnitude of earthquake events, respectively. We introduce the modified probability gain (PG') and the probability difference (D') to quantify forecasting performance and to explore the optimal prediction parameters for a given electromagnetic observation. The methodology is first applied to ULF magnetic data and GPS-TEC data. The results show that earthquake predictions based on electromagnetic anomalies are significantly better than random guesses, indicating that the data contain potentially useful precursory information. Meanwhile, we obtain the optimal prediction parameters for both observations. The methodology proposed in this study could also be applied to other pre-earthquake phenomena to find out whether they contain precursory information and, on that basis, to explore the optimal alarm parameters for practical short-term forecasting.
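A single alarm strategy of the kind described (a lead time plus an alarm window) can be scored as one point of a Molchan-type analysis, as sketched below. The probability gain here is simply (1 − miss rate) divided by the alarm-time fraction, a simplified stand-in for the modified PG' and D' measures of the paper; the day-indexed toy catalogue is invented.

```python
import numpy as np

def molchan_point(anomaly_days, quake_days, total_days, lead=3, window=7):
    """Single point on the Molchan error diagram for one alarm strategy:
    each anomaly opens an alarm starting (anomaly + lead) days and lasting
    `window` days. Returns (alarm-time fraction, miss rate, probability gain)."""
    alarm = np.zeros(total_days, dtype=bool)
    for a in anomaly_days:
        alarm[a + lead: a + lead + window] = True
    tau = alarm.mean()                                    # fraction of time under alarm
    hits = sum(alarm[q] for q in quake_days if q < total_days)
    nu = 1.0 - hits / len(quake_days)                     # miss rate
    gain = (1.0 - nu) / tau if tau > 0 else np.inf
    return tau, nu, gain

# Toy catalogue (day indices); a random-guess strategy would have gain ~ 1
anomalies = [10, 55, 120, 200, 260, 330]
quakes = [16, 61, 150, 265]
tau, nu, g = molchan_point(anomalies, quakes, total_days=365)
print(f"tau = {tau:.2f}, miss rate = {nu:.2f}, probability gain = {g:.1f}")
```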
Optimal information transfer in enzymatic networks: A field theoretic formulation
NASA Astrophysics Data System (ADS)
Samanta, Himadri S.; Hinczewski, Michael; Thirumalai, D.
2017-07-01
Signaling in enzymatic networks is typically triggered by environmental fluctuations, resulting in a series of stochastic chemical reactions, leading to corruption of the signal by noise. For example, information flow is initiated by binding of extracellular ligands to receptors, which is transmitted through a cascade involving kinase-phosphatase stochastic chemical reactions. For a class of such networks, we develop a general field-theoretic approach to calculate the error in signal transmission as a function of an appropriate control variable. Application of the theory to a simple push-pull network, a module in the kinase-phosphatase cascade, recovers the exact results for error in signal transmission previously obtained using umbral calculus [Hinczewski and Thirumalai, Phys. Rev. X 4, 041017 (2014), 10.1103/PhysRevX.4.041017]. We illustrate the generality of the theory by studying the minimal errors in noise reduction in a reaction cascade with two connected push-pull modules. Such a cascade behaves as an effective three-species network with a pseudointermediate. In this case, optimal information transfer, resulting in the smallest square of the error between the input and output, occurs with a time delay, which is given by the inverse of the decay rate of the pseudointermediate. Surprisingly, in these examples the minimum error computed using simulations that take nonlinearities and discrete nature of molecules into account coincides with the predictions of a linear theory. In contrast, there are substantial deviations between simulations and predictions of the linear theory in error in signal propagation in an enzymatic push-pull network for a certain range of parameters. Inclusion of second-order perturbative corrections shows that differences between simulations and theoretical predictions are minimized. Our study establishes that a field theoretic formulation of stochastic biological signaling offers a systematic way to understand error propagation in networks of arbitrary complexity.
Noninvasive extraction of fetal electrocardiogram based on Support Vector Machine
NASA Astrophysics Data System (ADS)
Fu, Yumei; Xiang, Shihan; Chen, Tianyi; Zhou, Ping; Huang, Weiyan
2015-10-01
The fetal electrocardiogram (FECG) signal has important clinical value for diagnosing fetal heart disease and for helping doctors choose suitable therapeutic schemes. The noninvasive extraction of the FECG from electrocardiogram (ECG) signals has therefore become an active research topic. A new method, based on the Support Vector Machine (SVM), is utilized for the extraction of the FECG with a limited amount of data. Firstly, the theory of the SVM and the principle of SVM-based extraction are studied. Secondly, the transformation of the maternal electrocardiogram (MECG) component in the abdominal composite signal is verified to be nonlinear and is fitted with the SVM. Then, the SVM is trained, and the training results are compared with the real data to verify the effectiveness of the training. Meanwhile, the parameters of the SVM are optimized to achieve the best performance, so that the learning machine can be used to fit unknown samples. Finally, the FECG is extracted by removing the optimal estimate of the MECG component from the abdominal composite signal. To evaluate the performance of FECG extraction based on the SVM, the signal-to-noise ratio (SNR) and a visual test are used. The experimental results show that an FECG of good quality can be extracted: its SNR increases to 9.2349 dB and the processing time decreases to 0.802 s. Compared with the traditional method, the noninvasive extraction method based on the SVM is simple to implement and offers a shorter processing time and better extraction quality under the same conditions.
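The core idea, fitting the nonlinear MECG transformation with a support vector machine and subtracting the estimate, can be sketched with scikit-learn's SVR on synthetic waveforms, as below. The Gaussian "QRS-like" pulse trains, the lagged-sample features, and the SVR hyperparameters are illustrative assumptions, not the clinical recordings or the exact model of the paper.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
fs, T = 500, 10                                            # Hz, seconds
t = np.arange(fs * T) / fs

def beats(rate_hz, width, amp):
    """Crude train of Gaussian 'QRS-like' pulses at a fixed heart rate."""
    phase = (t * rate_hz) % 1.0
    return amp * np.exp(-((phase - 0.5) / width) ** 2)

mecg_thorax = beats(1.2, 0.02, 1.0)                        # maternal reference channel
mecg_abdomen = 0.6 * np.tanh(1.5 * mecg_thorax)            # nonlinearly transformed MECG
fecg = beats(2.2, 0.015, 0.12)                             # weaker, faster fetal ECG
abdomen = mecg_abdomen + fecg + 0.02 * rng.standard_normal(t.size)

# Embed a few lagged samples of the thoracic reference as SVR features
lags = np.arange(-3, 4)
X = np.stack([np.roll(mecg_thorax, l) for l in lags], axis=1)
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01)
svr.fit(X[::5], abdomen[::5])                              # subsample to keep training fast
mecg_est = svr.predict(X)                                  # SVM estimate of the MECG part
fecg_est = abdomen - mecg_est                              # residual ~ FECG + noise

corr = np.corrcoef(fecg_est, fecg)[0, 1]
print(f"correlation between extracted and true FECG: {corr:.2f}")
```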
Prospective PET image quality gain calculation method by optimizing detector parameters.
Theodorakis, Lampros; Loudos, George; Prassopoulos, Vasilios; Kappas, Constantine; Tsougos, Ioannis; Georgoulias, Panagiotis
2015-12-01
Lutetium-based scintillators with high-performance electronics introduced time-of-flight (TOF) reconstruction in the clinical setting. Let G' be the total signal-to-noise ratio gain in a reconstructed image using the TOF kernel compared with conventional reconstruction modes. G' is then the product of the G1 gain arising from the reconstruction process itself and (n-1) other gain factors (G2, G3, …, Gn) arising from the inherent properties of the detector. We calculated the G2 and G3 gains resulting from the optimization of the coincidence and energy window widths for prompts and singles, respectively. Quantitatively and image-based validated Monte Carlo models of Lu2SiO5 (LSO) TOF-permitting and Bi4Ge3O12 (BGO) TOF-nonpermitting detectors were used for the calculations. The G2 and G3 values were 1.05 and 1.08 for the BGO detector, and G3 was 1.07 for the LSO. A G2 value of almost unity for the LSO detector indicated that altering the energy window setting yields no significant optimization. G' was found to be ∼1.4 times higher for the TOF-permitting detector after reconstruction and optimization of the coincidence and energy windows. The method described could potentially predict image-noise variations when detector acquisition parameters are altered. It could also contribute to the long-standing debate on cost-efficiency issues of TOF scanners versus non-TOF ones. Some vendors are nowadays returning to non-TOF product-line designs in an effort to reduce crystal costs. Therefore, exploring the limits of the image quality gain achievable by altering the parameters of these detectors remains a topical issue.
Task-driven imaging in cone-beam computed tomography.
Gang, G J; Stayman, J W; Ouadah, S; Ehtiati, T; Siewerdsen, J H
Conventional workflow in interventional imaging often ignores a wealth of prior information about the patient anatomy and the imaging task. This work introduces a task-driven imaging framework that utilizes such information to prospectively design acquisition and reconstruction techniques for cone-beam CT (CBCT) in a manner that maximizes task-based performance in subsequent imaging procedures. The framework is employed in jointly optimizing tube current modulation, orbital tilt, and reconstruction parameters in filtered backprojection reconstruction for interventional imaging. Theoretical predictors of noise and resolution relate acquisition and reconstruction parameters to task-based detectability. Given a patient-specific prior image and a specification of the imaging task, an optimization algorithm prospectively identifies the combination of imaging parameters that maximizes task-based detectability. Initial investigations were performed for a variety of imaging tasks in an elliptical phantom and an anthropomorphic head phantom. Optimization of tube current modulation and view-dependent reconstruction kernel was shown to have the greatest benefit for a directional task (e.g., identification of device or tissue orientation). The task-driven approach yielded techniques in which the dose and sharp kernels were concentrated in the views contributing most to the signal power associated with the imaging task. For example, detectability for a line-pair detection task was improved by at least threefold compared with conventional approaches. For radially symmetric tasks, the task-driven strategy yielded results similar to a minimum-variance strategy in the absence of kernel modulation. Optimization of the orbital tilt successfully avoided highly attenuating structures that can confound the imaging task by introducing noise correlations masquerading at spatial frequencies of interest. This work demonstrated the potential of a task-driven imaging framework to improve image quality and reduce dose beyond what is achievable with conventional imaging approaches.
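The link between acquisition and reconstruction parameters and task-based detectability can be illustrated with a compact non-prewhitening-observer calculation, as sketched below. The 1D radial task functions, transfer functions, and noise power spectra are simple parametric placeholders; the paper's predictors (view-dependent modulation, patient-specific priors) are far more detailed.

```python
import numpy as np

def detectability_npw(f, task, ttf, nps):
    """Non-prewhitening observer, 1D radial sketch:
    d'^2 = [ integral of W^2 T^2 2*pi*f df ]^2 / integral of W^2 T^2 NPS 2*pi*f df."""
    df = f[1] - f[0]
    core = (task * ttf) ** 2 * 2 * np.pi * f
    return np.sqrt((np.sum(core) * df) ** 2 / (np.sum(core * nps) * df))

f = np.linspace(1e-3, 1.5, 500)                    # spatial frequency (1/mm)
ttf_sharp, ttf_smooth = np.exp(-(f / 0.9) ** 2), np.exp(-(f / 0.4) ** 2)
nps_sharp = 2e-3 * f * ttf_sharp ** 2              # ramp-filtered, FBP-like NPS
nps_smooth = 2e-3 * f * ttf_smooth ** 2
task_linepair = np.exp(-((f - 0.6) / 0.15) ** 2)   # mid/high-frequency task
task_blob = np.exp(-(f / 0.15) ** 2)               # large low-contrast task

for name, task in (("line-pair task", task_linepair), ("blob task", task_blob)):
    d_sharp = detectability_npw(f, task, ttf_sharp, nps_sharp)
    d_smooth = detectability_npw(f, task, ttf_smooth, nps_smooth)
    print(f"{name}: d' (sharp kernel) = {d_sharp:.1f}, "
          f"d' (smooth kernel) = {d_smooth:.1f}")
```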
NASA Technical Reports Server (NTRS)
Martin, Russel A.; Ahumada, Albert J., Jr.; Larimer, James O.
1992-01-01
This paper describes the design and operation of a new simulation model for color matrix display development. It models the physical structure, the signal processing, and the visual perception of static displays, to allow optimization of display design parameters through image quality measures. The model is simple, implemented in the Mathematica computer language, and highly modular. Signal processing modules operate on the original image. The hardware modules describe the backlights and filters, the pixel shape, and the tiling of the pixels over the display. Small regions of the displayed image can be visualized on a CRT. The visual perception modules assume static foveal images. The image is converted into cone catches and then into luminance, red-green, and blue-yellow images. A Haar transform pyramid separates the three images into spatial-frequency and direction-specific channels. The channels are scaled by weights taken from human contrast sensitivity measurements of chromatic and luminance mechanisms at similar frequencies and orientations. Each channel provides a detectability measure. These measures allow the comparison of images displayed on prospective devices and, thereby, the optimization of display designs.
Liu, Yaxi; Gao, Zongjun; Wu, Ri; Wang, Zhenhua; Chen, Xiangfeng; Chan, T-W Dominic
2017-01-06
In this work, a magnetic porous carbon material derived from a bimetallic metal-organic framework was explored as an adsorbent for the magnetic solid-phase extraction of organochlorine pesticides (OCPs). The synthesized porous carbon possessed a high specific surface area and magnetization saturation. The OCPs in the samples were quantified using gas chromatography coupled with a triple quadrupole mass spectrometer. The experimental parameters, including the desorption solvent and conditions, amount of adsorbent, extraction time, extraction temperature, and ionic strength of the solution, were optimized. Under optimal conditions, the developed method displayed good linearity (r > 0.99) within the concentration range of 2-500 ng L⁻¹. Low limits of detection (0.39-0.70 ng L⁻¹, signal-to-noise ratio = 3:1) and limits of quantification (1.45-2.0 ng L⁻¹, signal-to-noise ratio = 10:1) as well as good precision (relative standard deviation < 10%) were also obtained. The developed method was applied to the analysis of OCPs in drinking and environmental water samples. Copyright © 2016 Elsevier B.V. All rights reserved.