Sample records for filtered-X LMS algorithm

  1. Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Nagchaudhuri, Abhijit

    1998-01-01

    This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrument pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
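
    The tandem scheme above can be sketched in a few lines. The following is a minimal NumPy illustration of the Filtered-X LMS update for a tonal disturbance, assuming a known two-tap secondary (servo) path; the paths, filter length, and step size are illustrative choices, not the paper's values.

```python
import numpy as np

n = 4000
x = np.sin(2 * np.pi * 0.05 * np.arange(n))   # periodic disturbance reference
P = np.array([0.8, 0.4, 0.2])                 # primary path to the error sensor (assumed)
S = np.array([0.6, 0.3])                      # secondary (servo) path (assumed known)
d = np.convolve(x, P)[:n]                     # disturbance response at the sensor

L, mu = 8, 0.01
w = np.zeros(L)                               # transversal filter weights
xbuf = np.zeros(L)                            # recent reference samples
fbuf = np.zeros(L)                            # filtered-x regressor
ybuf = np.zeros(len(S))                       # control outputs through the secondary path
e = np.zeros(n)
for k in range(n):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[k]
    ybuf = np.roll(ybuf, 1); ybuf[0] = w @ xbuf       # actuation signal
    e[k] = d[k] - S @ ybuf                            # residual pointing error
    fbuf = np.roll(fbuf, 1); fbuf[0] = S @ xbuf[:len(S)]  # reference filtered through S
    w += mu * e[k] * fbuf                             # FxLMS weight update
```

    The key distinction from plain LMS is that the regressor used in the weight update is the reference filtered through the secondary-path model, which keeps the gradient estimate phase-aligned with the error.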

  2. Simulation for noise cancellation using LMS adaptive filter

    NASA Astrophysics Data System (ADS)

    Lee, Jia-Haw; Ooi, Lu-Ean; Ko, Ying-Hao; Teoh, Choe-Yung

    2017-06-01

    In this paper, the fundamental algorithm of noise cancellation, the Least Mean Square (LMS) algorithm, is studied and implemented in an adaptive filter. A simulation of noise cancellation using the LMS adaptive filter algorithm is developed. The noise-corrupted speech signal and the engine noise signal are used as inputs for the LMS adaptive filter algorithm. The filtered signal is compared to the original noise-free speech signal in order to highlight the level of attenuation of the noise signal. The result shows that the noise signal is successfully canceled by the developed adaptive filter. The difference between the noise-free speech signal and the filtered signal is calculated, and the outcome implies that the filtered signal approaches the noise-free speech signal as the adaptive filtering proceeds. The frequency range of the noise successfully canceled by the LMS adaptive filter algorithm is determined by performing a Fast Fourier Transform (FFT) on the signals. The LMS adaptive filter algorithm shows significant noise cancellation in the lower frequency range.
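
    The setup described above — a noise reference filtered adaptively and subtracted from the corrupted signal so that the error output approximates the clean speech — can be sketched as follows. The "speech" tone, the noise path, and the step size are stand-in assumptions for illustration, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
s = np.sin(2 * np.pi * 0.01 * np.arange(n))      # stand-in for the clean speech signal
v = rng.standard_normal(n)                       # engine-noise reference input
h = np.array([0.7, -0.3, 0.1])                   # assumed noise path to the microphone
d = s + np.convolve(v, h)[:n]                    # noise-corrupted signal at the microphone

L, mu = 8, 0.01
w = np.zeros(L); buf = np.zeros(L); e = np.zeros(n)
for k in range(n):
    buf = np.roll(buf, 1); buf[0] = v[k]
    y = w @ buf                  # estimate of the noise component in d
    e[k] = d[k] - y              # canceller output: the cleaned signal
    w += mu * e[k] * buf         # LMS weight update
```

    After convergence the error signal e tracks the clean signal s, because the filter has learned the noise path h and removed the correlated noise.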

  3. Improving the Response of Accelerometers for Automotive Applications by Using LMS Adaptive Filters: Part II

    PubMed Central

    Hernandez, Wilmar; de Vicente, Jesús; Sergiyenko, Oleg Y.; Fernández, Eduardo

    2010-01-01

    In this paper, the fast least-mean-squares (LMS) algorithm was used to both eliminate noise corrupting the important information coming from a piezoresistive accelerometer for automotive applications, and improve the convergence rate of the filtering process based on the conventional LMS algorithm. The response of the accelerometer under test was corrupted by process and measurement noise, and the signal processing stage was carried out by using both conventional filtering, which was already shown in a previous paper, and optimal adaptive filtering. The adaptive filtering process relied on the LMS adaptive filtering family, which has been shown to have very good convergence and robustness properties, and here a comparative analysis between the results of applying the conventional LMS algorithm and the fast LMS algorithm to a real-life filtering problem was carried out. In short, in this paper the piezoresistive accelerometer was tested with a multi-frequency acceleration excitation. Due to the kind of test conducted, the use of conventional filtering was discarded, and the choice of one adaptive filter over the other was based on the signal-to-noise ratio improvement and the convergence rate. PMID:22315579

  4. VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal

    NASA Astrophysics Data System (ADS)

    Satheeskumaran, S.; Sabrigiriraj, M.

    2016-06-01

    Least mean square (LMS)-based adaptive filters are widely deployed for removing artefacts from the electrocardiogram (ECG) because they require few computations. However, they possess a high mean square error (MSE) in noisy environments. The transform-domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed LMS adaptive filter is used to remove the artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing, but they are not flexible. By using field programmable gate arrays, pipelined architectures can be used to enhance system performance. A pipelined architecture can improve the operating efficiency of the adaptive filter and save power. This technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.
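
    A delayed LMS update — the error and regressor used for adaptation lag the output computation, mimicking pipeline latency — combined with one common variable step-size rule can be sketched as below. The unknown system, the delay, and the Kwong-Johnston-style step-size recursion are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
n, L, D = 6000, 6, 4                       # D: adaptation delay (pipeline stages)
h = np.array([0.5, -0.4, 0.3, 0.2, -0.1, 0.05])   # unknown system (assumed)
x = rng.standard_normal(n)
d = np.convolve(x, h)[:n] + 0.01 * rng.standard_normal(n)

w = np.zeros(L)
mu, mu_max, mu_min = 0.01, 0.05, 1e-4
alpha, gamma = 0.97, 4e-4                  # variable step-size constants (assumed)
ebuf = np.zeros(D + 1)                     # delay line of past errors
xhist = np.zeros((D + 1, L))               # delay line of past regressors
for k in range(n):
    xvec = x[max(0, k - L + 1):k + 1][::-1]
    xvec = np.pad(xvec, (0, L - len(xvec)))
    e = d[k] - w @ xvec
    ebuf = np.roll(ebuf, 1); ebuf[0] = e
    xhist = np.roll(xhist, 1, axis=0); xhist[0] = xvec
    if k >= D:
        w += mu * ebuf[D] * xhist[D]       # delayed LMS: update uses D-old data
        mu = np.clip(alpha * mu + gamma * ebuf[D]**2, mu_min, mu_max)
```

    The step size grows while the error is large and decays toward the floor as the filter converges, trading fast initial convergence against low steady-state MSE.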

  5. Performance study of LMS based adaptive algorithms for unknown system identification

    NASA Astrophysics Data System (ADS)

    Javed, Shazia; Ahmad, Noor Atinah

    2014-07-01

    Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform-domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved versions of the LMS algorithm on their robustness and misalignment.
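
    Of the compared algorithms, the normalized LMS is the simplest improvement over plain LMS: the step is divided by the instantaneous input power, making convergence insensitive to input scaling. A minimal system-identification sketch (random input, additive output noise, illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
n, L = 3000, 8
h = rng.standard_normal(L)                 # unknown system to identify (assumed)
x = rng.standard_normal(n)                 # random input signal
d = np.convolve(x, h)[:n] + 0.001 * rng.standard_normal(n)  # noisy output

mu, eps = 0.5, 1e-6
w = np.zeros(L); buf = np.zeros(L)
for k in range(n):
    buf = np.roll(buf, 1); buf[0] = x[k]
    e = d[k] - w @ buf
    w += mu * e * buf / (eps + buf @ buf)  # NLMS: step normalized by input power

mis = np.linalg.norm(w - h) / np.linalg.norm(h)   # relative misalignment
```

    The misalignment metric used in the abstract is exactly this normalized weight-error norm.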

  6. Performance study of LMS based adaptive algorithms for unknown system identification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Javed, Shazia; Ahmad, Noor Atinah

    Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform-domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved versions of the LMS algorithm on their robustness and misalignment.

  7. Active impulsive noise control using maximum correntropy with adaptive kernel size

    NASA Astrophysics Data System (ADS)

    Lu, Lu; Zhao, Haiquan

    2017-03-01

    Active noise control (ANC) based on the principle of superposition is an attractive method to attenuate noise signals. However, impulsive noise in ANC systems degrades the performance of the controller. In this paper, a filtered-x recursive maximum correntropy (FxRMC) algorithm is proposed based on the maximum correntropy criterion (MCC) to reduce the effect of outliers. The proposed FxRMC algorithm does not require any a priori information about the noise characteristics and outperforms the filtered-x least mean square (FxLMS) algorithm for impulsive noise. Meanwhile, in order to adjust the kernel size of the FxRMC algorithm online, a recursive approach is proposed that takes into account past estimates of the error signal over a sliding window. Simulation and experimental results in the context of active impulsive noise control demonstrate that the proposed algorithms achieve much better performance than existing algorithms in various noise environments.
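
    The correntropy-induced update can be illustrated without the filtered-x machinery: a plain MCC-LMS identification loop in which a Gaussian kernel scales down updates triggered by impulsive outliers. The plant, the impulse model, and the fixed kernel size below are assumptions for illustration (the paper's FxRMC additionally adapts the kernel size recursively).

```python
import numpy as np

rng = np.random.default_rng(4)
n, L = 5000, 5
h = np.array([0.6, -0.3, 0.2, 0.1, -0.05])       # unknown system (assumed)
x = rng.standard_normal(n)
noise = 0.01 * rng.standard_normal(n)
spikes = rng.random(n) < 0.02                     # 2% impulsive outliers
noise[spikes] += 20 * rng.standard_normal(spikes.sum())
d = np.convolve(x, h)[:n] + noise

mu, sigma = 0.05, 1.0                             # kernel size sigma (fixed here)
w = np.zeros(L); buf = np.zeros(L)
for k in range(n):
    buf = np.roll(buf, 1); buf[0] = x[k]
    e = d[k] - w @ buf
    # Gaussian-kernel weighting: the update is suppressed when |e| is an outlier
    w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * buf
```

    With plain LMS the large spikes would inject proportional weight perturbations; the correntropy weighting drives those updates toward zero instead.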

  8. Adaptive filtering of GOCE-derived gravity gradients of the disturbing potential in the context of the space-wise approach

    NASA Astrophysics Data System (ADS)

    Piretzidis, Dimitrios; Sideris, Michael G.

    2017-09-01

    Filtering and signal processing techniques have been widely used in the processing of satellite gravity observations to reduce measurement noise and correlation errors. The parameters and types of filters used depend on the statistical and spectral properties of the signal under investigation. Filtering is usually applied in a non-real-time environment. The present work focuses on the implementation of an adaptive filtering technique to process satellite gravity gradiometry data for gravity field modeling. Adaptive filtering algorithms are commonly used in communication systems, noise and echo cancellation, and biomedical applications. Two independent studies have been performed to introduce adaptive signal processing techniques and test the performance of the least mean-squared (LMS) adaptive algorithm for filtering satellite measurements obtained by the gravity field and steady-state ocean circulation explorer (GOCE) mission. In the first study, a Monte Carlo simulation is performed in order to gain insights about the implementation of the LMS algorithm on data with spectral behavior close to that of real GOCE data. In the second study, the LMS algorithm is implemented on real GOCE data. Experiments are also performed to determine suitable filtering parameters. Only the four accurate components of the full GOCE gravity gradient tensor of the disturbing potential are used. The characteristics of the filtered gravity gradients are examined in the time and spectral domain. The obtained filtered GOCE gravity gradients show an agreement of 63-84 mEötvös (depending on the gravity gradient component), in terms of RMS error, when compared to the gravity gradients derived from the EGM2008 geopotential model. Spectral-domain analysis of the filtered gradients shows that the adaptive filters slightly suppress frequencies in the bandwidth of approximately 10-30 mHz. The limitations of the adaptive LMS algorithm are also discussed. 
The tested filtering algorithm can be connected to and employed in the first computational steps of the space-wise approach, where a time-wise Wiener filter is applied at the first stage of GOCE gravity gradient filtering. The results of this work can be extended to using other adaptive filtering algorithms, such as the recursive least-squares and recursive least-squares lattice filters.

  9. IIR filtering based adaptive active vibration control methodology with online secondary path modeling using PZT actuators

    NASA Astrophysics Data System (ADS)

    Boz, Utku; Basdogan, Ipek

    2015-12-01

    Structural vibration is a major cause of noise problems, discomfort and mechanical failures in aerospace, automotive and marine systems, which are mainly composed of plate-like structures. Active vibration control (AVC) is an effective approach for reducing structural vibrations in these structures. Adaptive filtering methodologies are preferred in AVC due to their ability to adjust themselves to the varying dynamics of the structure during operation. The filtered-X LMS (FXLMS) algorithm is a simple adaptive filtering algorithm widely implemented in active control applications. Proper implementation of FXLMS requires the availability of a reference signal to mimic the disturbance and a model of the dynamics between the control actuator and the error sensor, namely the secondary path. However, the controller output can interfere with the reference signal, and the secondary path dynamics may change during operation. The interference problem can be resolved by using an infinite impulse response (IIR) filter, which feeds one or more previous control signals back to the controller output, and the changing secondary path dynamics can be updated using an online modeling technique. In this paper, an IIR-filtering-based filtered-U LMS (FULMS) controller is combined with an online secondary path modeling algorithm to suppress the vibrations of a plate-like structure. The results are validated through numerical and experimental studies, which show that the FULMS approach with online secondary path modeling rejects vibration better and converges faster than its FXLMS counterpart.
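
    A bare-bones filtered-U LMS loop for a tonal disturbance is sketched below, with the secondary path assumed known and fixed (the paper models it online); paths and parameters are illustrative. The controller is IIR: its regressor contains past controller outputs as well as reference samples, and both weight vectors adapt on secondary-path-filtered versions of those regressors.

```python
import numpy as np

n = 8000
x = np.sin(2 * np.pi * 0.08 * np.arange(n))   # tonal disturbance reference
P = np.array([1.0, 0.5])                      # primary path (assumed)
S = np.array([0.5, 0.25])                     # secondary path (assumed known)
d = np.convolve(x, P)[:n]

La, Lb, mu = 4, 2, 0.005
a = np.zeros(La); b = np.zeros(Lb)            # feedforward / feedback weights
xb = np.zeros(La)                             # reference regressor
yb = np.zeros(Lb)                             # past controller outputs (IIR feedback)
fxb = np.zeros(La); fyb = np.zeros(Lb)        # regressors filtered through S
ys = np.zeros(4)                              # controller-output history
e = np.zeros(n)
for k in range(n):
    xb = np.roll(xb, 1); xb[0] = x[k]
    y = a @ xb + b @ yb                       # filtered-U (IIR) controller output
    ys = np.roll(ys, 1); ys[0] = y
    e[k] = d[k] - S @ ys[:len(S)]             # residual at the error sensor
    fxb = np.roll(fxb, 1); fxb[0] = S @ xb[:len(S)]
    fyb = np.roll(fyb, 1); fyb[0] = S @ ys[1:1 + len(S)]  # filtered past outputs
    a += mu * e[k] * fxb                      # FULMS weight updates
    b += mu * e[k] * fyb
    yb = np.roll(yb, 1); yb[0] = y
```

    For a single tone a feedforward FIR part alone could cancel the disturbance; the feedback taps matter when the controller must represent resonant (IIR) plant behavior compactly.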

  10. The Least Mean Squares Adaptive FIR Filter for Narrow-Band RFI Suppression in Radio Detection of Cosmic Rays

    NASA Astrophysics Data System (ADS)

    Szadkowski, Zbigniew; Głas, Dariusz

    2017-06-01

    Radio emission from extensive air showers (EASs) initiated by ultrahigh-energy cosmic rays was theoretically suggested over 50 years ago; however, due to technical limitations, successful collection of sufficient statistics can take several years. Nowadays, this detection technique is used in many experiments studying EASs. One of them is the Auger Engineering Radio Array (AERA), located within the Pierre Auger Observatory. AERA focuses on the radio emission generated by the electromagnetic part of the shower, mainly in the geomagnetic and charge-excess processes. The frequency band observed by AERA radio stations is 30-80 MHz, so the frequency range is contaminated by human-made, narrow-band radio frequency interference (RFI). Suppressing this contamination is very important to lower the rate of spurious triggers. Two kinds of digital filters are used in AERA radio stations to suppress it: the fast Fourier transform median filter and four narrow-band IIR-notch filters. Both filters have worked successfully in the field for many years. An adaptive filter based on the least mean squares (LMS) algorithm is a relatively simple finite impulse response (FIR) filter that can be an alternative to the currently used filters. Simulations in MATLAB are very promising and show that the LMS filter can be very efficient in suppressing RFI while only slightly distorting radio signals. The LMS algorithm was implemented in a Cyclone V field programmable gate array to test its stability, RFI suppression efficiency, and adaptation time to new conditions. First results show that the FIR filter based on the LMS algorithm can be successfully implemented and used in real AERA radio stations.
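
    One way an LMS FIR filter suppresses narrow-band RFI while sparing short broadband pulses is the adaptive-line-enhancer configuration: predicting the current sample from a delayed copy of the signal cancels the predictable sinusoid but not the unpredictable pulse. A sketch with an assumed tone frequency, delay, and step size (not AERA's actual parameters):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 8000
rfi = 0.8 * np.sin(2 * np.pi * 0.12 * np.arange(n))   # narrow-band interferer
pulse = np.zeros(n); pulse[6000:6016] = 1.0           # short broadband radio pulse
x = pulse + rfi + 0.01 * rng.standard_normal(n)

L, D, mu = 32, 5, 0.002
w = np.zeros(L); buf = np.zeros(L); e = np.zeros(n)
for k in range(n):
    xd = x[k - D] if k >= D else 0.0     # delayed reference: decorrelates the
    buf = np.roll(buf, 1); buf[0] = xd   # pulse but not the sinusoidal RFI
    y = w @ buf                          # prediction of the narrow-band part
    e[k] = x[k] - y                      # output: RFI suppressed, pulse retained
    w += mu * e[k] * buf                 # LMS update
```

    A tone is perfectly predictable from its past, so it is removed from e; the isolated pulse is not predictable across the delay and passes through nearly intact.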

  11. Comparison of Five System Identification Algorithms for Rotorcraft Higher Harmonic Control

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    1998-01-01

    This report presents an analysis and performance comparison of five system identification algorithms. The methods are presented in the context of identifying a frequency-domain transfer matrix for the higher harmonic control (HHC) of helicopter vibration. The five system identification algorithms include three previously proposed methods: (1) the weighted-least-squares-error approach (in moving-block format), (2) the Kalman filter method, and (3) the least-mean-squares (LMS) filter method. In addition there are two new ones: (4) a generalized Kalman filter method and (5) a generalized LMS filter method. The generalized Kalman filter method and the generalized LMS filter method were derived as extensions of the classic methods to permit identification using more than one measurement per identification cycle. Simulation results are presented for conditions ranging from the ideal case of a stationary transfer matrix and no measurement noise to the more complex cases involving both measurement noise and transfer-matrix variation. Both open-loop identification and closed-loop identification were simulated. Closed-loop identification was more challenging than open-loop identification because of the decreasing signal-to-noise ratio as the vibration became reduced. The closed-loop simulation considered both local-model identification, with measured vibration feedback, and global-model identification, with feedback of the identified uncontrolled vibration. The algorithms were evaluated in terms of their accuracy, stability, convergence properties, computation speeds, and relative ease of implementation.

  12. Active Control of Wind Tunnel Noise

    NASA Technical Reports Server (NTRS)

    Hollis, Patrick (Principal Investigator)

    1991-01-01

    The need for an adaptive active control system was realized, since a wind tunnel is subject to variations in air velocity, temperature, air turbulence, and other factors such as nonlinearity. Among many adaptive algorithms, the Least Mean Squares (LMS) algorithm, the simplest one, has been used in Active Noise Control (ANC) systems by some researchers. However, Eriksson's results (Eriksson, 1985) showed instability in the ANC system with an FIR filter for random noise input. The Recursive Least Squares (RLS) algorithm, although computationally more complex than the LMS algorithm, has better convergence and stability properties. The ANC system in the present work was simulated by using an FIR filter with an RLS algorithm for different inputs and for a number of plant models. Simulation results for the ANC system with acoustic feedback showed better robustness with the RLS algorithm than with the LMS algorithm for all types of inputs. Overall attenuation in the frequency domain was also better with the RLS adaptive algorithm. Simulation results with a more realistic plant model and an RLS adaptive algorithm showed a slower convergence rate than the case in which the acoustic plant was modeled as a pure delay; however, the attenuation properties of the simulated system with the modified plant were satisfactory. The effect of filter length on the rate of convergence and attenuation was studied: the rate of convergence decreases with increasing filter length, whereas attenuation increases with it. The final design of the ANC system was simulated and found to have a reasonable convergence rate and good attenuation for an input containing discrete frequencies and random noise.
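
    For contrast with LMS, the RLS recursion referred to above maintains an estimate of the inverse input correlation matrix and typically converges in far fewer samples. A minimal system-identification sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
n, L = 500, 4
h = np.array([0.5, -0.25, 0.1, 0.05])  # unknown system (assumed)
x = rng.standard_normal(n)
d = np.convolve(x, h)[:n] + 0.001 * rng.standard_normal(n)

lam = 0.999                  # forgetting factor
w = np.zeros(L)
P = 1e3 * np.eye(L)          # inverse-correlation estimate, large initial value
buf = np.zeros(L)
for k in range(n):
    buf = np.roll(buf, 1); buf[0] = x[k]
    Px = P @ buf
    g = Px / (lam + buf @ Px)          # gain vector
    e = d[k] - w @ buf                 # a priori error
    w += g * e                         # weight update
    P = (P - np.outer(g, Px)) / lam    # inverse-correlation update
```

    The price for the fast convergence is O(L^2) work per sample versus O(L) for LMS, matching the complexity trade-off discussed in the abstract.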

  13. Implementation and performance evaluation of acoustic denoising algorithms for UAV

    NASA Astrophysics Data System (ADS)

    Chowdhury, Ahmed Sony Kamal

    Unmanned Aerial Vehicles (UAVs) have become a popular alternative for wildlife monitoring and border surveillance applications. Eliminating the UAV's background noise and classifying the target audio signal effectively are still major challenges. The main goal of this thesis is to remove the UAV's background noise by means of acoustic denoising techniques. Existing denoising algorithms, such as adaptive Least Mean Square (LMS) filtering, Wavelet Denoising, Time-Frequency Block Thresholding, and Wiener Filtering, were implemented and their performance evaluated. The denoising algorithms were evaluated using the average Signal to Noise Ratio (SNR), Segmental SNR (SSNR), Log Likelihood Ratio (LLR), and Log Spectral Distance (LSD) metrics. To evaluate the effectiveness of the denoising algorithms on classification of the target audio, we implemented Support Vector Machine (SVM) and Naive Bayes classification algorithms. Simulation results demonstrate that the LMS and Discrete Wavelet Transform (DWT) denoising algorithms offered superior performance over the other algorithms. Finally, we implemented the LMS and DWT algorithms on a DSP board for hardware evaluation. Experimental results showed that the LMS algorithm's performance is robust compared to DWT across various noise types for classifying target audio signals.

  14. Efficient block processing of long duration biotelemetric brain data for health care monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soumya, I.; Zia Ur Rahman, M., E-mail: mdzr-5@ieee.org; Rama Koti Reddy, D. V.

    In a real-time clinical environment, the brain signals which doctors need to analyze are usually very long. Such a scenario can be simplified by partitioning the input signal into several blocks and applying signal conditioning. This paper presents various block-based adaptive filter structures for obtaining high-resolution electroencephalogram (EEG) signals, which estimate the deterministic components of the EEG signal by removing noise. To process these long-duration signals, we propose the Time Domain Block Least Mean Square (TDBLMS) algorithm for brain signal enhancement. In order to improve filtering capability, we introduce normalization in the weight update recursion of TDBLMS, which results in the TD-B-normalized-LMS algorithm. To increase accuracy and resolution in the proposed noise cancelers, we implement the time-domain cancelers in the frequency domain, which results in the frequency-domain TDBLMS and FD-B-Normalized-LMS. Finally, we have applied these algorithms to real EEG signals obtained from a human subject using an Emotiv EPOC EEG recorder and compared their performance with the conventional LMS algorithm. The results show that the performance of the block-based algorithms is superior to their LMS counterparts in terms of signal to noise ratio, convergence rate, excess mean square error, misadjustment, and coherence.
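
    The time-domain block LMS idea — accumulate the gradient over a block of samples and apply one weight update per block — can be sketched as follows, here on a synthetic identification task rather than EEG (the system, block size, and step size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(8)
n, L, B = 4000, 8, 16                # B: block length
h = rng.standard_normal(L) * 0.5     # unknown system (assumed)
x = rng.standard_normal(n)
d = np.convolve(x, h)[:n] + 0.005 * rng.standard_normal(n)

mu = 0.005
w = np.zeros(L)
for start in range(0, n - B, B):
    grad = np.zeros(L)
    for k in range(start, start + B):
        xvec = np.zeros(L)
        m = min(L, k + 1)
        xvec[:m] = x[k - m + 1:k + 1][::-1]
        e = d[k] - w @ xvec
        grad += e * xvec             # accumulate the gradient over the block
    w += mu * grad                   # one weight update per block (TDBLMS)
```

    Averaging the gradient over a block smooths the update; the same structure is what makes an efficient FFT-based frequency-domain implementation possible.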

  15. Space-Time Joint Interference Cancellation Using Fuzzy-Inference-Based Adaptive Filtering Techniques in Frequency-Selective Multipath Channels

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng

    2006-12-01

    An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and the squared-error variation into a forgetting factor. For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step size. This receiver is capable of providing both fast convergence/tracking capability and small steady-state misadjustment as compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform, respectively, other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB in bit-error-rate (BER) for multipath fading channels.

  16. Adaptive Identification and Control of Flow-Induced Cavity Oscillations

    NASA Technical Reports Server (NTRS)

    Kegerise, M. A.; Cattafesta, L. N.; Ha, C.

    2002-01-01

    Progress towards an adaptive self-tuning regulator (STR) for the cavity-tone problem is discussed in this paper. Adaptive system identification algorithms were applied to an experimental cavity-flow test bed as a prerequisite to control. In addition, a simple digital controller and a piezoelectric bimorph actuator were used to demonstrate multiple-tone suppression. The control tests at Mach numbers of 0.275, 0.40, and 0.60 indicated approximately 7 dB tone reductions at multiple frequencies. Several different adaptive system identification algorithms were applied at a single freestream Mach number of 0.275. Adaptive finite-impulse response (FIR) filters of orders up to N = 100 were found to be unsuitable for modeling the cavity flow dynamics. Adaptive infinite-impulse response (IIR) filters of comparable order better captured the system dynamics. Two recursive algorithms, least-mean-squares (LMS) and recursive-least-squares (RLS), were utilized to update the adaptive filter coefficients. Given the sample-time requirements imposed by the cavity flow dynamics, the computational simplicity of the LMS algorithm is advantageous for real-time control.

  17. Application of adaptive filters in denoising magnetocardiogram signals

    NASA Astrophysics Data System (ADS)

    Khan, Pathan Fayaz; Patel, Rajesh; Sengottuvel, S.; Saipriya, S.; Swain, Pragyna Parimita; Gireesan, K.

    2017-05-01

    Magnetocardiography (MCG) is the measurement of weak magnetic fields from the heart using Superconducting QUantum Interference Devices (SQUID). Though the measurements are performed inside magnetically shielded rooms (MSR) to reduce external electromagnetic disturbances, interferences which are caused by sources inside the shielded room could not be attenuated. The work presented here reports the application of adaptive filters to denoise MCG signals. Two adaptive noise cancellation approaches namely least mean squared (LMS) algorithm and recursive least squared (RLS) algorithm are applied to denoise MCG signals and the results are compared. It is found that both the algorithms effectively remove noisy wiggles from MCG traces; significantly improving the quality of the cardiac features in MCG traces. The calculated signal-to-noise ratio (SNR) for the denoised MCG traces is found to be slightly higher in the LMS algorithm as compared to the RLS algorithm. The results encourage the use of adaptive techniques to suppress noise due to power line frequency and its harmonics which occur frequently in biomedical measurements.

  18. Active control of impulsive noise with symmetric α-stable distribution based on an improved step-size normalized adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yali; Zhang, Qizhi; Yin, Yixin

    2015-05-01

    In this paper, active control of impulsive noise with a symmetric α-stable (SαS) distribution is studied. A general step-size-normalized filtered-x Least Mean Square (FxLMS) algorithm is developed based on an analysis of existing algorithms, and the Gaussian distribution function is used to normalize the step size. Compared with existing algorithms, the proposed algorithm needs neither parameter selection and threshold estimation nor cost-function selection and complex gradient computation. Computer simulations suggest that the proposed algorithm is effective in attenuating SαS impulsive noise, and the algorithm has also been implemented in an experimental ANC system. Experimental results show that the proposed scheme performs well for SαS impulsive noise attenuation.

  19. Full Gradient Solution to Adaptive Hybrid Control

    NASA Technical Reports Server (NTRS)

    Bean, Jacob; Schiller, Noah H.; Fuller, Chris

    2017-01-01

    This paper focuses on the adaptation mechanisms in adaptive hybrid controllers. Most adaptive hybrid controllers update two filters individually according to the filtered reference least mean squares (FxLMS) algorithm. Because this algorithm was derived for feedforward control, it does not take into account the presence of a feedback loop in the gradient calculation. This paper provides a derivation of the proper weight vector gradient for hybrid (or feedback) controllers that takes into account the presence of feedback. In this formulation, a single weight vector is updated rather than two individually. An internal model structure is assumed for the feedback part of the controller. The full gradient is equivalent to that used in the standard FxLMS algorithm with the addition of a recursive term that is a function of the modeling error. Some simulations are provided to highlight the advantages of using the full gradient in the weight vector update rather than the approximation.

  20. Full Gradient Solution to Adaptive Hybrid Control

    NASA Technical Reports Server (NTRS)

    Bean, Jacob; Schiller, Noah H.; Fuller, Chris

    2016-01-01

    This paper focuses on the adaptation mechanisms in adaptive hybrid controllers. Most adaptive hybrid controllers update two filters individually according to the filtered-reference least mean squares (FxLMS) algorithm. Because this algorithm was derived for feedforward control, it does not take into account the presence of a feedback loop in the gradient calculation. This paper provides a derivation of the proper weight vector gradient for hybrid (or feedback) controllers that takes into account the presence of feedback. In this formulation, a single weight vector is updated rather than two individually. An internal model structure is assumed for the feedback part of the controller. The full gradient is equivalent to that used in the standard FxLMS algorithm with the addition of a recursive term that is a function of the modeling error. Some simulations are provided to highlight the advantages of using the full gradient in the weight vector update rather than the approximation.

  1. Wireless rake-receiver using adaptive filter with a family of partial update algorithms in noise cancellation applications

    NASA Astrophysics Data System (ADS)

    Fayadh, Rashid A.; Malek, F.; Fadhil, Hilal A.; Aldhaibani, Jaafar A.; Salman, M. K.; Abdullah, Farah Salwani

    2015-05-01

    For high-data-rate propagation in wireless ultra-wideband (UWB) communication systems, inter-symbol interference (ISI), multiple-access interference (MAI), and multiple-user interference (MUI) influence the performance of the wireless system. In this paper, a rake receiver is presented with the signal spread by the direct sequence spread spectrum (DS-SS) technique. The adaptive rake-receiver structure adjusts the receiver tap weights using the least mean squares (LMS), normalized least mean squares (NLMS), and affine projection (APA) algorithms to support weak signals by noise cancellation and to mitigate the interferences. To improve convergence speed and reduce the computational complexity of these algorithms, the well-known approach of partial-update (PU) adaptive filtering was employed, with variants such as sequential-partial, periodic-partial, M-max-partial, and selective-partial updates (SPU), in the proposed system. Simulation results of bit error rate (BER) versus signal-to-noise ratio (SNR) illustrate that the partial-update algorithms have nearly comparable performance to the full-update adaptive filters. Furthermore, the SPU-partial performs close to the full NLMS and full APA, while the M-max-partial performs close to the full-update LMS algorithm.
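
    The M-max partial-update rule mentioned above updates, at each iteration, only the M taps whose regressor entries have the largest magnitude, cutting the update cost from L to M multiplies. A sketch on a synthetic identification problem (parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)
n, L, M = 6000, 8, 2                     # update only M of the L taps per step
h = rng.standard_normal(L) * 0.5         # unknown system (assumed)
x = rng.standard_normal(n)
d = np.convolve(x, h)[:n] + 0.005 * rng.standard_normal(n)

mu = 0.02
w = np.zeros(L); buf = np.zeros(L)
for k in range(n):
    buf = np.roll(buf, 1); buf[0] = x[k]
    e = d[k] - w @ buf
    idx = np.argsort(np.abs(buf))[-M:]   # M-max: taps with the largest |x| entries
    w[idx] += mu * e * buf[idx]          # partial update of the selected taps
```

    Selecting the largest-magnitude regressor entries keeps most of the gradient energy in each update, which is why the M-max variant tracks the full-update LMS so closely in the reported BER curves.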

  2. Improving the Response of Accelerometers for Automotive Applications by Using LMS Adaptive Filters

    PubMed Central

    Hernandez, Wilmar; de Vicente, Jesús; Sergiyenko, Oleg; Fernández, Eduardo

    2010-01-01

    In this paper, the least-mean-squares (LMS) algorithm was used to eliminate noise corrupting the important information coming from a piezoresistive accelerometer for automotive applications. This kind of accelerometer is designed to be easily mounted in hard-to-reach places on vehicles under test, and such accelerometers usually feature ranges from 50 to 2,000 g (where g is the gravitational acceleration, 9.81 m/s²) and frequency responses to 3,000 Hz or higher, with DC response, durable cables, reliable performance and relatively low cost. However, here we show that the response of the sensor under test had a lot of noise, and we carried out the signal processing stage by using both conventional and optimal adaptive filtering. Usually, designers have to build their own specific analog and digital signal processing circuits, which considerably increases the cost of the entire sensor system, and the results are not always satisfactory, because the relevant signal is sometimes buried in a broad-band noise background where the unwanted information and the relevant signal share a very similar frequency band. Thus, in order to deal with this problem, we used the LMS adaptive filtering algorithm and compared it with others based on the kinds of filters that are typically used for automotive applications. The experimental results are satisfactory. PMID:22315542

  3. A kernel adaptive algorithm for quaternion-valued inputs.

    PubMed

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefits in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations.
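
    A real-valued kernel LMS sketch conveys the flavor of the approach (the paper's Quat-KLMS extends this to quaternion inputs and a quaternion RKHS, which is not reproduced here); the Gaussian kernel, step size, and toy nonlinear system are illustrative.

```python
import numpy as np

def klms_predict(X, y, eta=0.2, sigma=1.0):
    """Real-valued kernel LMS sketch: grows one Gaussian-kernel centre
    per training sample; each prediction is made before that sample is
    absorbed, so preds is an online test error trace."""
    centres, alphas = [], []
    preds = np.zeros(len(y))
    for n in range(len(y)):
        if centres:
            k = np.exp(-np.sum((np.array(centres) - X[n])**2, axis=1)
                       / (2 * sigma**2))
            preds[n] = np.dot(alphas, k)
        err = y[n] - preds[n]
        centres.append(X[n])
        alphas.append(eta * err)       # KLMS coefficient for the new centre
    return preds

# toy nonlinear mapping standing in for a quaternion transform
rng = np.random.default_rng(2)
X = rng.standard_normal((600, 2))
y = np.tanh(X[:, 0] - 0.5 * X[:, 1])
p = klms_predict(X, y)
```

    The growing dictionary is the usual cost of KLMS; sparsification schemes (not shown) bound it in practice.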

  4. An Adaptive Pheromone Updation of the Ant-System using LMS Technique

    NASA Astrophysics Data System (ADS)

    Paul, Abhishek; Mukhopadhyay, Sumitra

    2010-10-01

    We propose a modified model of pheromone updation for the Ant System, called the Adaptive Ant System (AAS), using the properties of basic adaptive filters. Here, we have exploited the properties of the Least Mean Square (LMS) algorithm in the pheromone updation to find the minimum tour for the Travelling Salesman Problem (TSP). A TSP library has been used for the selection of benchmark problems, and the proposed AAS determines the minimum tour length for problems containing a large number of cities. Our algorithm shows effective results and gives the least tour length in most cases compared to other existing approaches.

  5. Adaptive Reception for Underwater Communications

    DTIC Science & Technology

    2011-06-01

    Experimental results prove the effectiveness of the receiver. Subject terms: underwater acoustic communications, adaptive algorithms, Kalman filter... the update algorithm design and the value of the spatial diversity are addressed. In this research, an adaptive multichannel equalizer made up of a... for the time-varying nature of the channel is to use an Adaptive Decision Feedback Equalizer based on either the RLS or LMS algorithm. Although this

  6. Distortion analysis of subband adaptive filtering methods for FMRI active noise control systems.

    PubMed

    Milani, Ali A; Panahi, Issa M; Briggs, Richard

    2007-01-01

    Delayless subband filtering, a high-performance frequency-domain technique, is used for canceling broadband fMRI noise (8 kHz bandwidth). In this method, adaptive filtering is done in subbands, and the coefficients of the main canceling filter are computed by stacking the subband weights together. There are two weight-stacking methods, called FFT and FFT-2. In this paper, we analyze the distortion introduced by these two stacking methods. The effect of the stacking distortion on the performance of different adaptive filters in the FXLMS algorithm with a non-minimum-phase secondary path is explored. The investigation covers different adaptive algorithms (NLMS, APA and RLS), different weight-stacking methods, and different numbers of subbands.

  7. Development of an Adaptive Filter to Estimate the Percentage of Body Fat Based on Anthropometric Measures

    NASA Astrophysics Data System (ADS)

    do Lago, Naydson Emmerson S. P.; Kardec Barros, Allan; Sousa, Nilviane Pires S.; Junior, Carlos Magno S.; Oliveira, Guilherme; Guimares Polisel, Camila; Eder Carvalho Santana, Ewaldo

    2018-01-01

    This study aims to develop an adaptive filter algorithm to determine the percentage of body fat based on anthropometric indicators in adolescents. Measurements such as body mass, height and waist circumference were collected for a better analysis. The development of this filter was based on the Wiener filter, used to produce an estimate of a random process; the Wiener filter minimizes the mean square error between the estimated random process and the desired process. The LMS algorithm was also studied for the development of the filter because of its simplicity and ease of computation. Excellent results were obtained with the developed filter, and these results were analyzed and compared with the collected data.
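
    For finite input vectors, the Wiener estimate the study builds on reduces to solving the normal equations w = R^{-1} p; the sketch below uses synthetic features standing in for the anthropometric measures, so all names and values are illustrative, not the study's data.

```python
import numpy as np

# Wiener-solution sketch: minimise mean-square error between a linear
# estimate X @ w and the desired variable y; w = R^{-1} p with R the
# input autocorrelation matrix and p the cross-correlation vector.
rng = np.random.default_rng(4)
N = 500
X = rng.standard_normal((N, 3))                 # stand-in features
true_w = np.array([0.7, 0.2, -0.4])             # illustrative ground truth
y = X @ true_w + 0.1 * rng.standard_normal(N)   # noisy target

R = X.T @ X / N                                 # autocorrelation estimate
p = X.T @ y / N                                 # cross-correlation estimate
w = np.linalg.solve(R, p)                       # Wiener weights
```

    The LMS algorithm mentioned in the abstract reaches the same solution iteratively without forming or inverting R.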

  8. Statistical efficiency of adaptive algorithms.

    PubMed

    Widrow, Bernard; Kamenetsky, Max

    2003-01-01

    The statistical efficiency of a learning algorithm applied to the adaptation of a given set of variable weights is defined as the ratio of the quality of the converged solution to the amount of data used in training the weights. Statistical efficiency is computed by averaging over an ensemble of learning experiences. A high quality solution is very close to optimal, while a low quality solution corresponds to noisy weights and less than optimal performance. In this work, two gradient descent adaptive algorithms are compared, the LMS algorithm and the LMS/Newton algorithm. LMS is simple and practical, and is used in many applications worldwide. LMS/Newton is based on Newton's method and the LMS algorithm. LMS/Newton is optimal in the least squares sense. It maximizes the quality of its adaptive solution while minimizing the use of training data. Many least squares adaptive algorithms have been devised over the years, but no other least squares algorithm can give better performance, on average, than LMS/Newton. LMS is easily implemented, but LMS/Newton, although of great mathematical interest, cannot be implemented in most practical applications. Because of its optimality, LMS/Newton serves as a benchmark for all least squares adaptive algorithms. The performances of LMS and LMS/Newton are compared, and it is found that under many circumstances, both algorithms provide equal performance. For example, when both algorithms are tested with statistically nonstationary input signals, their average performances are equal. When adapting with stationary input signals and with random initial conditions, their respective learning times are on average equal. However, under worst-case initial conditions, the learning time of LMS can be much greater than that of LMS/Newton, and this is the principal disadvantage of the LMS algorithm. But the strong points of LMS are ease of implementation and optimal performance under important practical conditions. 
For these reasons, the LMS algorithm has enjoyed very widespread application. It is used in almost every modem for channel equalization and echo cancelling. Furthermore, it is related to the famous backpropagation algorithm used for training neural networks.
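
    The LMS/Newton update differs from plain LMS only in premultiplying the gradient by the inverse input-correlation matrix, which must be known a priori (one reason the abstract calls it impractical in most applications). A minimal sketch on a colored-input system-identification task, with illustrative signals and step size:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4
# coloured input: x[n] = v[n] + 0.9 v[n-1]; this is the regime where
# plain LMS slows down and LMS/Newton shines
v = rng.standard_normal(6000)
x = np.convolve(v, [1.0, 0.9])[:6000]
h = np.array([0.5, -0.3, 0.2, 0.1])            # unknown system
d = np.convolve(x, h)[:6000]

# input autocorrelation matrix R, assumed known (the LMS/Newton premise):
# r(0) = 1 + 0.81 = 1.81, r(1) = 0.9, r(lag >= 2) = 0
R = np.array([[{0: 1.81, 1: 0.9}.get(abs(i - j), 0.0)
               for j in range(M)] for i in range(M)])
R_inv = np.linalg.inv(R)

w = np.zeros(M)
mu = 0.05
for n in range(M - 1, 6000):
    u = x[n - M + 1:n + 1][::-1]
    e = d[n] - w @ u
    w += mu * e * (R_inv @ u)                  # LMS/Newton update
```

    Dropping `R_inv` from the last line gives ordinary LMS, whose slowest mode here is set by the smallest eigenvalue of R; whitening by R^{-1} equalizes all modes, which is exactly the benchmark property the abstract describes.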

  9. [Increasing the effectiveness of identifying peaks and feet of the photoplethysmographic pulse for its reconstruction using adaptive filtering].

    PubMed

    Becerra-Luna, Brayans; Martínez-Memije, Raúl; Cartas-Rosado, Raúl; Infante-Vázquez, Oscar

    To improve the identification of peaks and feet in photoplethysmographic (PPG) pulses deformed by myokinetic noise, through the implementation of a modified fingertip and the application of adaptive filtering. PPG signals were recorded from 10 healthy volunteers using two photoplethysmography systems placed on the index finger of each hand. Recordings lasted three minutes and were done as follows: during the first minute, both hands were at rest, and for the remaining two minutes only the left hand was allowed to make quasi-periodic movements in order to add myokinetic noise. Two methodologies were employed to process the signals off-line. One consisted of using an adaptive filter based on the Least Mean Square (LMS) algorithm, and the other included a preprocessing stage in addition to the same LMS filter. Both filtering methods were compared, and the one with the lowest error was chosen to assess the improvement in the identification of peaks and feet from PPG pulses. Average percentage errors obtained were 22.94% with the first filtering methodology and 3.72% with the second one. On identifying peaks and feet from PPG pulses before filtering, the error percentages obtained were 24.26% and 48.39%, respectively; once filtered, the error percentages lowered to 2.02% for peaks and 3.77% for feet. The attenuation of myokinetic noise in PPG pulses through LMS filtering, plus a preprocessing stage, increases the effectiveness of the identification of peaks and feet from PPG pulses, which are of great importance for medical assessment. Copyright © 2016 Instituto Nacional de Cardiología Ignacio Chávez. Publicado por Masson Doyma México S.A. All rights reserved.

  10. Flight Test of ASAC Aircraft Interior Noise Control System

    NASA Technical Reports Server (NTRS)

    Palumbo, Dan; Cabell, Ran; Cline, John; Sullivan, Brenda

    1999-01-01

    A flight test is described in which an active structural/acoustic control system reduces turboprop induced interior noise on a Raytheon Aircraft Company 1900D airliner. Control inputs to 21 inertial force actuators were computed adaptively using a transform domain version of the multichannel filtered-X LMS algorithm to minimize the mean square response of 32 microphones. A combinatorial search algorithm was employed to optimize placement of the force actuators on the aircraft frame. Both single frequency and multi-frequency results are presented. Reductions of up to 15 dB were obtained at the blade passage frequency (BPF) during single frequency control tests. Simultaneous reductions of the BPF and next 2 harmonics of 10 dB, 2.5 dB and 3.0 dB, were obtained in a multi-frequency test.
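
    A single-channel time-domain sketch conveys the core of the filtered-X LMS update used in the flight test (which was a multichannel transform-domain variant with 21 actuators and 32 microphones). The reference is filtered through an estimate of the secondary (actuator-to-sensor) path before entering the weight update; here that estimate is assumed perfect, and the tone, paths, and step size are illustrative.

```python
import numpy as np

N, M = 4000, 16
x = np.sin(2 * np.pi * 0.05 * np.arange(N))   # tonal reference (BPF analogue)
p = np.array([0.0, 0.0, 0.9, -0.4])           # primary path to error sensor
s = np.array([0.0, 0.5, 0.3])                 # secondary path (assumed known)
d = np.convolve(x, p)[:N]                     # disturbance at the error sensor

w = np.zeros(M)
mu = 0.005
xf = np.convolve(x, s)[:N]                    # filtered reference
y = np.zeros(N)
e = np.zeros(N)
for n in range(M - 1, N):
    u = x[n - M + 1:n + 1][::-1]
    y[n] = w @ u                              # control-filter output
    ys = s @ y[n - len(s) + 1:n + 1][::-1]    # anti-noise after secondary path
    e[n] = d[n] + ys                          # residual at the error sensor
    uf = xf[n - M + 1:n + 1][::-1]
    w -= mu * e[n] * uf                       # filtered-X LMS update
```

    At convergence the control filter supplies anti-noise that cancels the tonal disturbance at the error sensor, the same mechanism that attenuates the blade-passage frequency in the aircraft test.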

  11. A Stochastic Total Least Squares Solution of Adaptive Filtering Problem

    PubMed Central

    Ahmad, Noor Atinah

    2014-01-01

    An efficient and computationally linear algorithm is derived for total least squares solution of adaptive filtering problem, when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm is designed by recursively computing an optimal solution of adaptive TLS problem by minimizing instantaneous value of weighted cost function. Convergence analysis of the algorithm is given to show the global convergence of the proposed algorithm, provided that the stepsize parameter is appropriately chosen. The TLMS algorithm is computationally simpler than the other TLS algorithms and demonstrates a better performance as compared with the least mean square (LMS) and normalized least mean square (NLMS) algorithms. It provides minimum mean square deviation by exhibiting better convergence in misalignment for unknown system identification under noisy inputs. PMID:24688412

  12. Superresolution restoration of an image sequence: adaptive filtering approach.

    PubMed

    Elad, M; Feuer, A

    1999-01-01

    This paper presents a new method based on adaptive filtering theory for superresolution restoration of continuous image sequences. The proposed methodology suggests least squares (LS) estimators which adapt in time, based on adaptive filters, least mean squares (LMS) or recursive least squares (RLS). The adaptation enables the treatment of linear space and time-variant blurring and arbitrary motion, both of them assumed known. The proposed new approach is shown to be of relatively low computational requirements. Simulations demonstrating the superresolution restoration algorithms are presented.

  13. Hybrid feedforward-feedback active noise reduction for hearing protection and communication.

    PubMed

    Ray, Laura R; Solbeck, Jason A; Streeter, Alexander D; Collier, Robert D

    2006-10-01

    A hybrid active noise reduction (ANR) architecture is presented and validated for a circumaural earcup and a communication earplug. The hybrid system combines source-independent feedback ANR with a Lyapunov-tuned leaky LMS filter (LyLMS) improving gain stability margins over feedforward ANR alone. In flat plate testing, the earcup demonstrates an overall C-weighted total noise reduction of 40 dB and 30-32 dB, respectively, for 50-800 Hz sum-of-tones noise and for aircraft or helicopter cockpit noise, improving low frequency (<100 Hz) performance by up to 15 dB over either control component acting individually. For the earplug, a filtered-X implementation of the LyLMS accommodates its nonconstant cancellation path gain. A fast time-domain identification method provides a high-fidelity, computationally efficient, infinite impulse response cancellation path model, which is used for both the filtered-X implementation and communication feedthrough. Insertion loss measurements made with a manikin show overall C-weighted total noise reduction provided by the ANR earplug of 46-48 dB for sum-of-tones 80-2000 Hz and 40-41 dB from 63 to 3000 Hz for UH-60 helicopter noise, with negligible degradation in attenuation during speech communication. For both hearing protectors, a stability metric improves by a factor of 2 to several orders of magnitude through hybrid ANR.

  14. Fusion of KLMS and blob based pre-screener for buried landmine detection using ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Baydar, Bora; Akar, Gözde Bozdaǧi.; Yüksel, Seniha E.; Öztürk, Serhat

    2016-05-01

    In this paper, a decision-level fusion of multiple pre-screener algorithms is proposed for the detection of buried landmines from Ground Penetrating Radar (GPR) data. The Kernel Least Mean Square (KLMS) and Blob Filter pre-screeners are fused to work in real time with fewer false alarms and higher true detection rates. The effect of the kernel variance is investigated for the KLMS algorithm. Also, the results of the KLMS and KLMS+Blob filter algorithms are compared to the LMS method in terms of processing time and false alarm rates. The proposed algorithm is tested on both simulated data and real data collected at the field of IPA Defence at METU, Ankara, Turkey.

  15. Volterra series based blind equalization for nonlinear distortions in short reach optical CAP system

    NASA Astrophysics Data System (ADS)

    Tao, Li; Tan, Hui; Fang, Chonghua; Chi, Nan

    2016-12-01

    In this paper, we propose a blind Volterra series based nonlinear equalization (VNLE) with low complexity for nonlinear distortion mitigation in a short reach optical carrierless amplitude and phase (CAP) modulation system. The principle of the blind VNLE is presented, and the performance of its blind adaptive algorithms, including the modified cascaded multi-mode algorithm (MCMMA) and direct-detection LMS (DD-LMS), is investigated experimentally. Compared to the conventional VNLE, which uses training symbols before demodulation, the blind VNLE is performed after matched filtering and downsampling, so a shorter memory length is required while a similar performance improvement is observed. About 1 dB improvement is observed at a BER of 3.8×10-3 for a 40 Gb/s CAP32 signal over 40 km of standard single mode fiber.

  16. Experimental evaluation of leaky least-mean-square algorithms for active noise reduction in communication headsets.

    PubMed

    Cartes, David A; Ray, Laura R; Collier, Robert D

    2002-04-01

    An adaptive leaky normalized least-mean-square (NLMS) algorithm has been developed to optimize stability and performance of active noise cancellation systems. The research addresses LMS filter performance issues related to insufficient excitation, nonstationary noise fields, and time-varying signal-to-noise ratio. The adaptive leaky NLMS algorithm is based on a Lyapunov tuning approach in which three candidate algorithms, each of which is a function of the instantaneous measured reference input, measurement noise variance, and filter length, are shown to provide varying degrees of tradeoff between stability and noise reduction performance. Each algorithm is evaluated experimentally for reduction of low frequency noise in communication headsets, and stability and noise reduction performance are compared with that of traditional NLMS and fixed-leakage NLMS algorithms. Acoustic measurements are made in a specially designed acoustic test cell which is based on the original work of Ryan et al. ["Enclosure for low frequency assessment of active noise reducing circumaural headsets and hearing protection," Can. Acoust. 21, 19-20 (1993)] and which provides a highly controlled and uniform acoustic environment. The stability and performance of the active noise reduction system, including a prototype communication headset, are investigated for a variety of noise sources ranging from stationary tonal noise to highly nonstationary measured F-16 aircraft noise over a 20 dB dynamic range. Results demonstrate significant improvements in stability of Lyapunov-tuned LMS algorithms over traditional leaky or nonleaky normalized algorithms, while providing noise reduction performance equivalent to that of the NLMS algorithm for idealized noise fields.
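
    A scalar-leaky NLMS sketch shows the structure of the algorithms compared in the study: the leakage term continuously shrinks the weights toward zero, trading a small bias for robustness under insufficient excitation. The fixed leakage factor below is illustrative; the paper's contribution is a Lyapunov-tuned, time-varying leakage, which is not reproduced here.

```python
import numpy as np

def leaky_nlms(x, d, M=8, mu=0.5, gamma=1e-3, eps=1e-6):
    """Leaky NLMS sketch: the (1 - mu*gamma) factor bleeds energy out
    of the weights every step, bounding them when excitation is poor."""
    w = np.zeros(M)
    e = np.zeros(len(d))
    for n in range(M - 1, len(d)):
        u = x[n - M + 1:n + 1][::-1]
        e[n] = d[n] - w @ u
        w = (1.0 - mu * gamma) * w + mu * e[n] * u / (eps + u @ u)
    return w, e

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.6, -0.3, 0.2, -0.1, 0.05, 0.0, 0.0, 0.0])  # toy system
d = np.convolve(x, h)[:len(x)]
w, e = leaky_nlms(x, d)
```

    With a small gamma the converged weights sit just inside the unleaked solution, so noise reduction is essentially preserved, which matches the reported equivalence to plain NLMS in well-excited noise fields.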

  17. Adaptive Channel Measurement Study

    DTIC Science & Technology

    1975-09-01

    of P3 as a Function of Step Size and Iteration Number With and Without Noise Using the LMS Algorithm and a Quadratic Model at a ...Fade... real, a1(t) will vanish, and the linear term 01(t) is a filtered version of the input signal with a filter identical to the lowpass equivalent of the...

  18. A Novel Modulation Classification Approach Using Gabor Filter Network

    PubMed Central

    Ghauri, Sajjad Ahmed; Qureshi, Ijaz Mansoor; Cheema, Tanveer Ahmed; Malik, Aqdas Naveed

    2014-01-01

    A Gabor filter network based approach is used for feature extraction and classification of digital modulated signals by adaptively tuning the parameters of Gabor filter network. Modulation classification of digitally modulated signals is done under the influence of additive white Gaussian noise (AWGN). The modulations considered for the classification purpose are PSK 2 to 64, FSK 2 to 64, and QAM 4 to 64. The Gabor filter network uses the network structure of two layers; the first layer which is input layer constitutes the adaptive feature extraction part and the second layer constitutes the signal classification part. The Gabor atom parameters are tuned using Delta rule and updating of weights of Gabor filter using least mean square (LMS) algorithm. The simulation results show that proposed novel modulation classification algorithm has high classification accuracy at low signal to noise ratio (SNR) on AWGN channel. PMID:25126603

  19. Dual Fine Tracking Control of a Satellite Laser Communication Uplink

    DTIC Science & Technology

    2006-09-14

    rejection results for LQG control compared with adaptive least mean squares (LMS) and gradient adaptive lattice (GAL) algorithms, however, both... period [7, page 256]. The steady-state Kalman filter, defined by the predictor/corrector form, is implemented for each beam respectively as [7, page...

  20. A comparative evaluation of adaptive noise cancellation algorithms for minimizing motion artifacts in a forehead-mounted wearable pulse oximeter.

    PubMed

    Comtois, Gary; Mendelson, Yitzhak; Ramuka, Piyush

    2007-01-01

    Wearable physiological monitoring using a pulse oximeter would enable field medics to monitor multiple injuries simultaneously, thereby prioritizing medical intervention when resources are limited. However, a primary factor limiting the accuracy of pulse oximetry is poor signal-to-noise ratio, since the photoplethysmographic (PPG) signals from which arterial oxygen saturation (SpO2) and heart rate (HR) measurements are derived are compromised by movement artifacts. This study was undertaken to quantify SpO2 and HR errors induced by certain motion artifacts utilizing accelerometry-based adaptive noise cancellation (ANC). Since the fingers are generally more vulnerable to motion artifacts, measurements were performed using a custom forehead-mounted wearable pulse oximeter developed for real-time remote physiological monitoring and triage applications. This study revealed that processing motion-corrupted PPG signals with least mean squares (LMS) and recursive least squares (RLS) algorithms can be effective in reducing SpO2 and HR errors during jogging, but the degree of improvement depends on filter order. Although both algorithms produced similar improvements, implementing the adaptive LMS algorithm is advantageous since it requires significantly fewer operations.
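
    For reference, the RLS recursion the study compares against LMS can be sketched on a toy system-identification task (the PPG and accelerometer signals are not reproduced; the channel, forgetting factor, and initialization are illustrative). Its per-sample cost is O(M^2) versus O(M) for LMS, which is the operations gap behind the abstract's conclusion.

```python
import numpy as np

def rls(x, d, M=8, lam=0.99, delta=100.0):
    """Exponentially weighted RLS sketch."""
    w = np.zeros(M)
    P = np.eye(M) * delta                   # inverse-correlation estimate
    e = np.zeros(len(d))
    for n in range(M - 1, len(d)):
        u = x[n - M + 1:n + 1][::-1]
        k = P @ u / (lam + u @ P @ u)       # gain vector
        e[n] = d[n] - w @ u                 # a-priori error
        w += k * e[n]
        P = (P - np.outer(k, u @ P)) / lam  # inverse-correlation update
    return w, e

rng = np.random.default_rng(0)
x = rng.standard_normal(3000)
h = np.array([0.4, -0.3, 0.25, -0.2, 0.15, -0.1, 0.05, -0.02])  # toy channel
d = np.convolve(x, h)[:len(x)]
w, e = rls(x, d)
```

    RLS converges in roughly 2M samples regardless of input coloring, at the price of the M-by-M matrix update per sample.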

  1. Adaptive control and noise suppression by a variable-gain gradient algorithm

    NASA Technical Reports Server (NTRS)

    Merhav, S. J.; Mehta, R. S.

    1987-01-01

    An adaptive control system based on normalized LMS filters is investigated. The finite impulse response of the nonparametric controller is adaptively estimated using a given reference model. Specifically, the following issues are addressed: The stability of the closed-loop system is analyzed and heuristically established. Next, the adaptation process is studied for piecewise constant plant parameters. It is shown that by introducing a variable gain in the gradient algorithm, a substantial reduction in the LMS adaptation time can be achieved. Finally, process noise at the plant output generally causes a biased estimate of the controller. By introducing a noise suppression scheme, this bias can be substantially reduced, and the response of the adapted system becomes very close to that of the reference model. Extensive computer simulations validate these assertions and demonstrate that the system can rapidly adapt to random jumps in plant parameters.

  2. Study of Interpolated Timing Recovery Phase-Locked Loop with Linearly Constrained Adaptive Prefilter for Higher-Density Optical Disc

    NASA Astrophysics Data System (ADS)

    Kajiwara, Yoshiyuki; Shiraishi, Junya; Kobayashi, Shoei; Yamagami, Tamotsu

    2009-03-01

    A digital phase-locked loop (PLL) with a linearly constrained adaptive filter (LCAF) has been studied for higher-linear-density optical discs. LCAF has been implemented before an interpolated timing recovery (ITR) PLL unit in order to improve the quality of phase error calculation by using an adaptively equalized partial response (PR) signal. Coefficient update of an asynchronous sampled adaptive FIR filter with a least-mean-square (LMS) algorithm has been constrained by a projection matrix in order to suppress the phase shift of the tap coefficients of the adaptive filter. We have developed projection matrices that are suitable for Blu-ray disc (BD) drive systems by numerical simulation. Results have shown the properties of the projection matrices. Then, we have designed the read channel system of the ITR PLL with an LCAF model on the FPGA board for experiments. Results have shown that the LCAF improves the tilt margins of 30 gigabytes (GB) recordable BD (BD-R) and 33 GB BD read-only memory (BD-ROM) with a sufficient LMS adaptation stability.

  3. A generalized leaky FxLMS algorithm for tuning the waterbed effect of feedback active noise control systems

    NASA Astrophysics Data System (ADS)

    Wu, Lifu; Qiu, Xiaojun; Guo, Yecai

    2018-06-01

    To tune the noise amplification in the feedback system caused by the waterbed effect effectively, an adaptive algorithm is proposed in this paper by replacing the scalar leaky factor of the leaky FxLMS algorithm with a real symmetric Toeplitz matrix. The elements in the matrix are calculated explicitly according to the noise amplification constraints, which are defined based on a simple but efficient method. Simulations in an ANC headphone application demonstrate that the proposed algorithm can adjust the frequency band of noise amplification more effectively than the FxLMS algorithm and the leaky FxLMS algorithm.
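
    Structurally, the proposed update replaces the scalar leakage of leaky FxLMS with a leakage matrix. The sketch below implements that generalized form with a diagonal matrix Gamma = leak * I (which recovers the scalar-leaky algorithm); the paper's real symmetric Toeplitz matrix computed from noise-amplification constraints is not reproduced, and all paths, signals, and step sizes are illustrative.

```python
import numpy as np

def leaky_fxlms(x, d, s, Gamma, M=8, mu=0.01):
    """FxLMS with a leakage matrix Gamma; Gamma = 0 gives plain FxLMS."""
    w = np.zeros(M)
    xf = np.convolve(x, s)[:len(x)]           # filtered reference
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    A = np.eye(M) - mu * Gamma                # per-step leakage map
    for n in range(M - 1, len(x)):
        u = x[n - M + 1:n + 1][::-1]
        y[n] = w @ u
        ys = s @ y[n - len(s) + 1:n + 1][::-1]  # anti-noise after sec. path
        e[n] = d[n] + ys                        # residual at the error mic
        uf = xf[n - M + 1:n + 1][::-1]
        w = A @ w - mu * e[n] * uf              # matrix-leaky FxLMS update
    return w, e

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
s = np.array([0.0, 0.6, 0.3])                   # secondary path (assumed known)
d = np.convolve(x, [0.0, 0.0, 0.9, -0.4])[:len(x)]  # primary disturbance
w0, e0 = leaky_fxlms(x, d, s, 0.0 * np.eye(8))  # no leakage
w2, e2 = leaky_fxlms(x, d, s, 2.0 * np.eye(8))  # strong leakage
```

    A larger leak shrinks the converged weight norm, which is the mechanism used to cap noise amplification, at the cost of a bias in the cancellation.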

  4. Students academic performance based on behavior

    NASA Astrophysics Data System (ADS)

    Maulida, Juwita Dien; Kariyam

    2017-12-01

    Data in an information system can be used for decision making: an existing data warehouse helps mine useful information so that decisions are made correctly and accurately. The Experience API (xAPI) is one of the enabling technologies for collecting such data, so xAPI can serve as a data warehouse for various needs. One software application whose data is collected through xAPI is the LMS (learning management system), software used in electronic learning that can handle all aspects of learning; by using an LMS, it is also possible to examine the learning process and the aspects that can affect learning achievement. One such aspect is the background of each student: a student with a good background is not necessarily an outstanding student, or vice versa. Therefore, action is needed to anticipate this problem. Prediction of student academic performance using the Naive Bayes algorithm obtained an accuracy of 67.7983% and an error of 32.2917%.

  5. An active structural acoustic control approach for the reduction of the structure-borne road noise

    NASA Astrophysics Data System (ADS)

    Douville, Hugo; Berry, Alain; Masson, Patrice

    2002-11-01

    The reduction of the structure-borne road noise generated inside the cabin of an automobile is investigated using an Active Structural Acoustic Control (ASAC) approach. First, a laboratory test bench consisting of a wheel/suspension/lower suspension A-arm assembly has been developed in order to identify the vibroacoustic transfer paths (up to 250 Hz) for realistic road-noise excitation of the wheel. Frequency Response Function (FRF) measurements between the excitation/control actuators and each suspension/chassis linkage are used to characterize the different transfer paths that transmit energy through the chassis of the car. Second, an FE/BE (Finite/Boundary Element) model was developed to simulate the acoustic field of an automobile cabin interior. This model is used to predict the acoustic field inside the cabin in response to the measured forces applied to the suspension/chassis linkages. Finally, an experimental implementation of ASAC is presented. The control approach relies on inertial actuators to modify the vibration behavior of the suspension and the automotive chassis such that their noise radiation efficiency is decreased. The implemented algorithm consists of a MIMO (multiple-input multiple-output) feedforward configuration with a filtered-X LMS algorithm using an advanced reference signal (with FIR filters), implemented in the Simulink/dSPACE environment for control prototyping.

  6. Performance Evaluation of Multichannel Adaptive Algorithms for Local Active Noise Control

    NASA Astrophysics Data System (ADS)

    DE DIEGO, M.; GONZALEZ, A.

    2001-07-01

    This paper deals with the development of a multichannel active noise control (ANC) system inside an enclosed space. The purpose is to design a practical system which works well in local ANC applications. Moreover, the algorithm implemented in the adaptive controller should be robust and of low computational complexity, and it should generate a uniform useful-size zone of quiet that allows head motion of a person seated inside a car. Experiments were carried out under semi-anechoic and listening-room conditions to verify the successful implementation of the multichannel system. The developed prototype consists of an array of up to four microphones used as error sensors mounted on the headrest of a seat placed inside the enclosure. One loudspeaker was used as the single primary source, and two secondary sources were placed facing the seat. The aim of this multichannel system is to reduce the sound pressure levels in an area around the error sensors, following a local control strategy. With this technique, the cancellation points are not only the error sensor positions but an area around them, which is measured using a monitoring microphone. Different multichannel adaptive algorithms for ANC have been analyzed and their performance verified. Multiple-error algorithms are used in order to cancel out different types of primary noise (engine noise and random noise) with several configurations (up to a four-channel system). As alternatives to the multiple error LMS algorithm (the multichannel version of the filtered-X LMS algorithm, MELMS), the least maximum mean squares (LMMS) and the scanning-error LMS algorithms have been developed in this work in order to reduce computational complexity and achieve a more uniform residual field. The ANC algorithms were programmed on a digital signal processing board equipped with a TMS320C40 floating point DSP processor. 
Measurements concerning real-time experiments on local noise reduction in two environments at frequencies below 230 Hz are presented. Better noise attenuation is obtained in the semi-anechoic chamber due to the simplicity of the acoustic field. The size of the zone of quiet makes the system useful at relatively low frequencies, and it is large enough to cover a listener's head movements. The spatial extent of the zones of quiet is generally observed to increase as the error sensors are moved away from the secondary sources, as they are placed closer together, or as their number increases. In summary, the performance of the different algorithms and the viability of the multichannel system for local active noise control in real listening conditions are evaluated, and some guidelines for designing such systems are proposed.

  7. Active vibration suppression of self-excited structures using an adaptive LMS algorithm

    NASA Astrophysics Data System (ADS)

    Danda Roy, Indranil

    The purpose of this investigation is to study the feasibility of an adaptive feedforward controller for active flutter suppression in representative linear wing models. The ability of the controller to suppress limit-cycle oscillations in wing models having root springs with freeplay nonlinearities has also been studied. For the purposes of numerical simulation, mathematical models of a rigid and a flexible wing structure have been developed. The rigid wing model is represented by a simple three-degree-of-freedom airfoil while the flexible wing is modelled by a multi-degree-of-freedom finite element representation with beam elements for bending and rod elements for torsion. Control action is provided by one or more flaps attached to the trailing edge and extending along the entire wing span for the rigid model and a fraction of the wing span for the flexible model. Both two-dimensional quasi-steady aerodynamics and time-domain unsteady aerodynamics have been used to generate the airforces in the wing models. An adaptive feedforward controller has been designed based on the filtered-X Least Mean Squares (LMS) algorithm. The control configuration for the rigid wing model is single-input single-output (SISO) while both SISO and multi-input multi-output (MIMO) configurations have been applied on the flexible wing model. The controller includes an on-line adaptive system identification scheme which provides the LMS controller with a reasonably accurate model of the plant. This enables the adaptive controller to track time-varying parameters in the plant and provide effective control. The wing models in closed-loop exhibit highly damped responses at airspeeds where the open-loop responses are destructive. Simulations with the rigid and the flexible wing models in a time-varying airstream show a 63% and 53% increase, respectively, over their corresponding open-loop flutter airspeeds. 
The ability of the LMS controller to suppress wing store flutter in the two models has also been investigated. With 10% measurement noise introduced in the flexible wing model, the controller demonstrated good robustness to the extraneous disturbances. In the examples studied it is found that adaptation is rapid enough to successfully control flutter at accelerations in the airstream of up to 15 ft/sec2 for the rigid wing model and 9 ft/sec2 for the flexible wing model.

  8. Design of adaptive control systems by means of self-adjusting transversal filters

    NASA Technical Reports Server (NTRS)

    Merhav, S. J.

    1986-01-01

    The design of closed-loop adaptive control systems based on nonparametric identification was addressed. Implementation is by self-adjusting Least Mean Square (LMS) transversal filters. The design concept is Model Reference Adaptive Control (MRAC). Major issues are to preserve the linearity of the error equations of each LMS filter and to prevent estimation bias due to process or measurement noise, thus providing necessary conditions for the convergence and stability of the control system. The controlled element is assumed to be asymptotically stable and minimum phase. Because of the nonparametric Finite Impulse Response (FIR) estimates provided by the LMS filters, a priori information on the plant model is needed only in broad terms. Following a survey of control system configurations and filter design considerations, system implementation is shown here in Single Input Single Output (SISO) format, which is readily extendable to multivariable forms. In extensive computer simulation studies the controlled element is represented by a second-order system with widely varying damping, natural frequency, and relative degree.

  9. Active vibration control of a full scale aircraft wing using a reconfigurable controller

    NASA Astrophysics Data System (ADS)

    Prakash, Shashikala; Renjith Kumar, T. G.; Raja, S.; Dwarakanathan, D.; Subramani, H.; Karthikeyan, C.

    2016-01-01

    This work highlights the design of a Reconfigurable Active Vibration Control (AVC) System for aircraft structures using adaptive techniques. The AVC system with a multichannel capability is realized using the Filtered-X Least Mean Square (FxLMS) algorithm on a Xilinx Virtex-4 Field Programmable Gate Array (FPGA) platform in Very High Speed Integrated Circuits Hardware Description Language (VHDL). The HDL design is based on a Finite State Machine (FSM) model with floating-point Intellectual Property (IP) cores for arithmetic operations. The use of an FPGA makes it possible to modify system parameters even at runtime, depending on changes in the user's requirements. The locations of the control actuators are optimized based on a dynamic modal strain approach using a genetic algorithm (GA). The developed system has been successfully deployed for AVC testing of the full-scale wing of an all-composite two-seater transport aircraft. Several closed-loop configurations, such as single-channel and multi-channel control, have been tested. The experimental results from the studies presented here are very encouraging. They demonstrate the usefulness of the system's reconfigurability for real-time applications.

  10. Application of least mean square algorithm to suppression of maglev track-induced self-excited vibration

    NASA Astrophysics Data System (ADS)

    Zhou, D. F.; Li, J.; Hansen, C. H.

    2011-11-01

    Track-induced self-excited vibration is commonly encountered in EMS (electromagnetic suspension) maglev systems, and a solution to this problem is important in enabling the commercial widespread implementation of maglev systems. Here, the coupled model of the steel track and the magnetic levitation system is developed, and its stability is investigated using the Nyquist criterion. The harmonic balance method is employed to investigate the stability and amplitude of the self-excited vibration, which provides an explanation of the phenomenon that track-induced self-excited vibration generally occurs at a specified amplitude and frequency. To eliminate the self-excited vibration, an improved LMS (Least Mean Square) cancellation algorithm with phase correction (C-LMS) is employed. The harmonic balance analysis shows that the C-LMS cancellation algorithm can completely suppress the self-excited vibration. To achieve adaptive cancellation, a frequency estimator similar to the tuner of a TV receiver is employed to provide the C-LMS algorithm with a roughly estimated reference frequency. Numerical simulation and experiments undertaken on the CMS-04 vehicle show that the proposed adaptive C-LMS algorithm can effectively eliminate the self-excited vibration over a wide frequency range, and that the robustness of the algorithm suggests excellent potential for application to EMS maglev systems.

  11. Artifact removal from EEG signals using adaptive filters in cascade

    NASA Astrophysics Data System (ADS)

    Garcés Correa, A.; Laciar, E.; Patiño, H. D.; Valentinuzzi, M. E.

    2007-11-01

    Artifacts in EEG (electroencephalogram) records are caused by various factors, such as line interference, EOG (electro-oculogram) and ECG (electrocardiogram). These noise sources increase the difficulty of analyzing the EEG and obtaining clinical information. For this reason, it is necessary to design specific filters to decrease such artifacts in EEG records. In this paper, a cascade of three adaptive filters based on the least mean squares (LMS) algorithm is proposed. The first one eliminates line interference, the second adaptive filter removes the ECG artifacts and the last one cancels EOG spikes. Each stage uses a finite impulse response (FIR) filter, which adjusts its coefficients to produce an output similar to the artifacts present in the EEG. The proposed cascade adaptive filter was tested on five real EEG records acquired in polysomnographic studies. In all cases, line-frequency, ECG and EOG artifacts were attenuated. It is concluded that the proposed filter reduces the common artifacts present in EEG signals without removing significant information embedded in these records.

  12. A Two-Stage Approach for Improving the Convergence of Least-Mean-Square Adaptive Decision-Feedback Equalizers in the Presence of Severe Narrowband Interference

    NASA Astrophysics Data System (ADS)

    Batra, Arun; Zeidler, James R.; Beex, A. A. Louis

    2007-12-01

    It has previously been shown that a least-mean-square (LMS) decision-feedback filter can mitigate the effect of narrowband interference (L.-M. Li and L. Milstein, 1983). An adaptive implementation of the filter was shown to converge relatively quickly for mild interference. It is shown here, however, that in the case of severe narrowband interference, the LMS decision-feedback equalizer (DFE) requires a very large number of training symbols for convergence, making it unsuitable for some types of communication systems. This paper investigates the introduction of an LMS prediction-error filter (PEF) as a prefilter to the equalizer and demonstrates that it reduces the convergence time of the two-stage system by as much as two orders of magnitude. It is also shown that the steady-state bit-error rate (BER) performance of the proposed system remains approximately equal to that attained by the LMS DFE alone. Finally, it is shown that the two-stage system can be implemented without the use of training symbols. This two-stage structure lowers the complexity of the overall system by reducing the number of filter taps that need to be adapted, while incurring a slight loss in the steady-state BER.
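
    The prediction-error-filter idea above can be sketched as follows; this is a minimal illustration, not the authors' implementation, and the tone frequency, tap count, and step size are assumed values. An LMS predictor learns the strong narrowband component from past samples, so its prediction error retains the wideband signal while the interference is whitened out.

```python
import math

def lms_pef(x, n_taps=8, mu=0.02):
    """LMS prediction-error filter: predict x[n] from past samples; the
    prediction error suppresses the predictable narrowband interference."""
    w = [0.0] * n_taps
    buf = [0.0] * n_taps                 # past samples x[n-1] ... x[n-n_taps]
    errs = []
    for xn in x:
        pred = sum(wi * xi for wi, xi in zip(w, buf))
        e = xn - pred                    # prediction error = whitened output
        w = [wi + mu * e * xi for wi, xi in zip(w, buf)]
        buf = [xn] + buf[:-1]
        errs.append(e)
    return errs

# Hypothetical received signal: weak wideband data buried under a strong tone.
seed = 1
wideband = []
for _ in range(4000):
    seed = (1103515245 * seed + 12345) % 2**31     # simple deterministic LCG
    wideband.append((seed / 2**31 - 0.5) * 0.2)    # roughly uniform in [-0.1, 0.1]
received = [wb + math.sin(0.9 * n) for n, wb in enumerate(wideband)]
whitened = lms_pef(received)
```

    After convergence the output power is dominated by the wideband component, which is why such a prefilter lets the downstream equalizer train far faster.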

  13. A Hybrid Positioning Strategy for Vehicles in a Tunnel Based on RFID and In-Vehicle Sensors

    PubMed Central

    Song, Xiang; Li, Xu; Tang, Wencheng; Zhang, Weigong; Li, Bin

    2014-01-01

    Many intelligent transportation system applications require accurate, reliable, and continuous vehicle positioning. How to achieve such positioning performance in extended GPS-denied environments such as tunnels is the main challenge for land vehicles. This paper proposes a hybrid multi-sensor fusion strategy for vehicle positioning in tunnels. First, the preliminary positioning algorithm is developed. The Radio Frequency Identification (RFID) technology is introduced to achieve preliminary positioning in the tunnel. The received signal strength (RSS) is used as an indicator to calculate the distances between the RFID tags and reader, and then a Least Mean Square (LMS) federated filter is designed to provide the preliminary position information for subsequent global fusion. Further, to improve the positioning performance in the tunnel, an interactive multiple model (IMM)-based global fusion algorithm is developed to fuse the data from preliminary positioning results and low-cost in-vehicle sensors, such as electronic compasses and wheel speed sensors. In the actual implementation of IMM, the strong tracking extended Kalman filter (STEKF) algorithm is designed to replace the conventional extended Kalman filter (EKF) to achieve model individual filtering. Finally, the proposed strategy is evaluated through experiments. The results validate the feasibility and effectiveness of the proposed strategy. PMID:25490581

  14. A hybrid positioning strategy for vehicles in a tunnel based on RFID and in-vehicle sensors.

    PubMed

    Song, Xiang; Li, Xu; Tang, Wencheng; Zhang, Weigong; Li, Bin

    2014-12-05

    Many intelligent transportation system applications require accurate, reliable, and continuous vehicle positioning. How to achieve such positioning performance in extended GPS-denied environments such as tunnels is the main challenge for land vehicles. This paper proposes a hybrid multi-sensor fusion strategy for vehicle positioning in tunnels. First, the preliminary positioning algorithm is developed. The Radio Frequency Identification (RFID) technology is introduced to achieve preliminary positioning in the tunnel. The received signal strength (RSS) is used as an indicator to calculate the distances between the RFID tags and reader, and then a Least Mean Square (LMS) federated filter is designed to provide the preliminary position information for subsequent global fusion. Further, to improve the positioning performance in the tunnel, an interactive multiple model (IMM)-based global fusion algorithm is developed to fuse the data from preliminary positioning results and low-cost in-vehicle sensors, such as electronic compasses and wheel speed sensors. In the actual implementation of IMM, the strong tracking extended Kalman filter (STEKF) algorithm is designed to replace the conventional extended Kalman filter (EKF) to achieve model individual filtering. Finally, the proposed strategy is evaluated through experiments. The results validate the feasibility and effectiveness of the proposed strategy.

  15. Integrating the ECG power-line interference removal methods with rule-based system.

    PubMed

    Kumaravel, N; Senthil, A; Sridhar, K S; Nithiyanandam, N

    1995-01-01

    The power-line frequency interference in electrocardiographic signals is eliminated to enhance the signal characteristics for diagnosis. The power-line frequency normally varies +/- 1.5 Hz from its standard value of 50 Hz. In the present work, the performances of the linear FIR filter, Wave digital filter (WDF) and adaptive filter are studied for power-line frequency variations from 48.5 to 51.5 Hz in steps of 0.5 Hz. The advantage of the LMS adaptive filter over other fixed-frequency filters is well demonstrated: it removes power-line interference even when the interference frequency deviates by +/- 1.5 Hz from its nominal value of 50 Hz. A novel method of integrating a rule-based system approach with the linear FIR filter, and also with the Wave digital filter, is proposed. The performances of the rule-based FIR filter and the rule-based Wave digital filter are compared with the LMS adaptive filter.
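
    A minimal sketch of LMS power-line cancellation in the spirit of the adaptive filter above (not the paper's implementation): a two-weight canceller with a sine/cosine reference at the interference frequency tracks the hum's amplitude and phase, so the canceller's error output is the cleaned signal. The sampling rate, frequencies, amplitudes, and step size below are assumed for illustration.

```python
import math

def powerline_canceller(primary, fs, f_ref, mu=0.01):
    """Two-weight adaptive canceller: reference is a sin/cos pair at f_ref."""
    w_s = w_c = 0.0
    out = []
    for n, d in enumerate(primary):
        rs = math.sin(2 * math.pi * f_ref * n / fs)
        rc = math.cos(2 * math.pi * f_ref * n / fs)
        y = w_s * rs + w_c * rc        # estimate of the interference
        e = d - y                      # cleaned signal = canceller error
        w_s += mu * e * rs             # LMS weight updates
        w_c += mu * e * rc
        out.append(e)
    return out

fs = 500.0
clean = [0.1 * math.sin(2 * math.pi * 1.0 * k / fs) for k in range(5000)]   # slow "ECG-like" wave
hum = [0.8 * math.sin(2 * math.pi * 50.5 * k / fs + 0.7) for k in range(5000)]
primary = [c + h for c, h in zip(clean, hum)]
cleaned = powerline_canceller(primary, fs, f_ref=50.5)
```

    Because the weights adapt continuously, the same structure tracks a mains frequency that drifts within +/- 1.5 Hz, provided the reference follows the actual interference.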

  16. Adaptive filtering with the self-organizing map: a performance comparison.

    PubMed

    Barreto, Guilherme A; Souza, Luís Gustavo M

    2006-01-01

    In this paper we provide an in-depth evaluation of the SOM as a feasible tool for nonlinear adaptive filtering. A comprehensive survey of existing SOM-based and related architectures for learning input-output mappings is carried out and the application of these architectures to nonlinear adaptive filtering is formulated. Then, we introduce two simple procedures for building RBF-based nonlinear filters using the Vector-Quantized Temporal Associative Memory (VQTAM), a recently proposed method for learning dynamical input-output mappings using the SOM. The aforementioned SOM-based adaptive filters are compared with standard FIR/LMS and FIR/LMS-Newton linear transversal filters, as well as with powerful MLP-based filters in nonlinear channel equalization and inverse modeling tasks. The obtained results in both tasks indicate that SOM-based filters can consistently outperform powerful MLP-based ones.

  17. A novel method of language modeling for automatic captioning in TC video teleconferencing.

    PubMed

    Zhang, Xiaojia; Zhao, Yunxin; Schopp, Laura

    2007-05-01

    We are developing an automatic captioning system for teleconsultation video teleconferencing (TC-VTC) in telemedicine, based on large vocabulary conversational speech recognition. In TC-VTC, doctors' speech contains a large number of infrequently used medical terms in spontaneous styles. Due to insufficiency of data, we adopted mixture language modeling, with models trained from several datasets of medical and nonmedical domains. This paper proposes novel modeling and estimation methods for the mixture language model (LM). Component LMs are trained from individual datasets, with class n-gram LMs trained from in-domain datasets and word n-gram LMs trained from out-of-domain datasets, and they are interpolated into a mixture LM. For class LMs, semantic categories are used for class definition on medical terms, names, and digits. The interpolation weights of a mixture LM are estimated by a greedy algorithm of forward weight adjustment (FWA). The proposed mixing of in-domain class LMs and out-of-domain word LMs, the semantic definitions of word classes, as well as the weight-estimation algorithm of FWA are effective on the TC-VTC task. As compared with using mixtures of word LMs with weights estimated by the conventional expectation-maximization algorithm, the proposed methods led to a 21% reduction of perplexity on test sets of five doctors, which translated into improvements of captioning accuracy.

  18. Active Structural Acoustic Control of Interior Noise on a Raytheon 1900D

    NASA Technical Reports Server (NTRS)

    Palumbo, Dan; Cabell, Ran; Sullivan, Brenda; Cline, John

    2000-01-01

    An active structural acoustic control system has been demonstrated on a Raytheon Aircraft Company 1900D turboprop airliner. Both single-frequency and multi-frequency control of the blade passage frequency and its harmonics was accomplished. The control algorithm was a variant of the popular filtered-x LMS implemented in the principal component domain. The control system consisted of 21 inertial actuators and 32 microphones. The actuators were mounted to the aircraft's ring frames. The microphones were distributed uniformly throughout the interior at head height, both seated and standing. Actuator locations were selected using a combinatorial search optimization algorithm. The control system achieved a 14 dB noise reduction of the blade passage frequency during single-frequency tests. Multi-frequency control of the 1st, 2nd and 3rd harmonics resulted in noise reductions of 10.2 dB, 3.3 dB and 1.6 dB, respectively. These results fall short of the predictions produced by the optimization algorithm (13.5 dB, 8.6 dB and 6.3 dB). The optimization was based on actuator transfer functions taken on the ground, and it is postulated that cabin pressurization at flight altitude was a factor in this discrepancy.

  19. Experimental and analytical study of secondary path variations in active engine mounts

    NASA Astrophysics Data System (ADS)

    Hausberg, Fabian; Scheiblegger, Christian; Pfeffer, Peter; Plöchl, Manfred; Hecker, Simon; Rupp, Markus

    2015-03-01

    Active engine mounts (AEMs) provide an effective solution to further improve the acoustic and vibrational comfort of passenger cars. Typically, adaptive feedforward control algorithms, e.g., the filtered-x-least-mean-squares (FxLMS) algorithm, are applied to cancel disturbing engine vibrations. These algorithms require an accurate estimate of the AEM active dynamic characteristics, also known as the secondary path, in order to guarantee control performance and stability. This paper focuses on the experimental and theoretical study of secondary path variations in AEMs. The impact of three major influences, namely nonlinearity, change of preload and component temperature, on the AEM active dynamic characteristics is experimentally analyzed. The obtained test results are theoretically investigated with a linear AEM model which incorporates an appropriate description for elastomeric components. A special experimental set-up extends the model validation of the active dynamic characteristics to higher frequencies up to 400 Hz. The theoretical and experimental results show that significant secondary path variations are merely observed in the frequency range of the AEM actuator's resonance frequency. These variations mainly result from the change of the component temperature. As the stability of the algorithm is primarily affected by the actuator's resonance frequency, the findings of this paper facilitate the design of AEMs with simpler adaptive feedforward algorithms. From a practical point of view it may further be concluded that algorithmic countermeasures against instability are only necessary in the frequency range of the AEM actuator's resonance frequency.
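
    A minimal filtered-x LMS sketch, with an assumed two-tap secondary path and a tonal disturbance (these values are illustrative, not from the study): the reference is filtered through the secondary-path model before it enters the weight update, which is what distinguishes FxLMS from plain LMS and why an accurate secondary-path estimate matters for stability.

```python
import math

def fxlms(ref, dist, s, n_taps=4, mu=0.02):
    """Filtered-x LMS: reference is filtered by the secondary-path model s
    before driving the weight update."""
    w = [0.0] * n_taps
    xbuf = [0.0] * n_taps       # reference history driving the control filter
    ybuf = [0.0] * len(s)       # control-output history through the secondary path
    sxbuf = [0.0] * len(s)      # reference history for the filtered-x signal
    fxbuf = [0.0] * n_taps      # filtered-reference history for the update
    errs = []
    for xn, dn in zip(ref, dist):
        xbuf = [xn] + xbuf[:-1]
        sxbuf = [xn] + sxbuf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, xbuf))          # actuator command
        ybuf = [y] + ybuf[:-1]
        anti = sum(sk * yk for sk, yk in zip(s, ybuf))       # after secondary path
        e = dn + anti                                        # residual at error sensor
        fx = sum(sk * xk for sk, xk in zip(s, sxbuf))        # filtered-x sample
        fxbuf = [fx] + fxbuf[:-1]
        w = [wi - mu * e * fi for wi, fi in zip(w, fxbuf)]   # FxLMS update
        errs.append(e)
    return errs

# Hypothetical tonal disturbance with a known two-tap secondary path.
s = [0.5, 0.3]
ref = [math.sin(0.3 * n) for n in range(4000)]
dist = [0.9 * math.sin(0.3 * n + 0.5) for n in range(4000)]
errs = fxlms(ref, dist, s)
```

    If the secondary-path model drifts, e.g. with actuator temperature as reported above, the filtered reference no longer matches the true path and the update can lose its gradient direction.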

  20. Improved neural network based scene-adaptive nonuniformity correction method for infrared focal plane arrays.

    PubMed

    Lai, Rui; Yang, Yin-tang; Zhou, Duan; Li, Yue-jin

    2008-08-20

    An improved scene-adaptive nonuniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPAs) is proposed. This method simultaneously estimates the infrared detectors' parameters and eliminates the nonuniformity that causes fixed pattern noise (FPN) by using a neural network (NN) approach. In the learning process of neuron parameter estimation, the traditional LMS algorithm is replaced with a newly presented variable step size (VSS) normalized least-mean-square (NLMS) adaptive filtering algorithm, which yields faster convergence, smaller misadjustment, and lower computational cost. In addition, a new NN structure is designed to estimate the desired target value, which considerably improves calibration precision. The proposed NUC method achieves high correction performance, which is validated by experimental results quantitatively tested with a simulated testing sequence and a real infrared image sequence.

  1. Scene-based nonuniformity correction with reduced ghosting using a gated LMS algorithm.

    PubMed

    Hardie, Russell C; Baxley, Frank; Brys, Brandon; Hytla, Patrick

    2009-08-17

    In this paper, we present a scene-based nonuniformity correction (NUC) method using a modified adaptive least mean square (LMS) algorithm with a novel gating operation on the updates. The gating is designed to significantly reduce the ghosting artifacts produced by many scene-based NUC algorithms by halting updates when temporal variation is lacking. We define the algorithm and present a number of experimental results to demonstrate the efficacy of the proposed method in comparison to several previously published methods, including other LMS and constant-statistics based methods. The experimental results include simulated imagery and a real infrared image sequence. We show that the proposed method significantly reduces ghosting artifacts, but has a slightly longer convergence time. (c) 2009 Optical Society of America
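
    The gating idea can be illustrated with a single-detector offset correction; this is a hypothetical one-pixel sketch, not the published algorithm: the LMS update runs only when the observed sample shows temporal variation, which is what suppresses ghosting during static scenes. The scene, offset, step size, and gate threshold are all invented for the example.

```python
import math

def gated_offset_lms(obs, ref, mu=0.1, gate=0.05):
    """Gated LMS offset correction for one detector: adapt the offset
    estimate only when the observed sample shows temporal variation."""
    c = 0.0                           # offset estimate
    prev = None
    out = []
    for o, r in zip(obs, ref):
        y = o - c                     # corrected pixel value
        e = y - r                     # deviation from the scene estimate
        if prev is not None and abs(o - prev) > gate:
            c += mu * e               # gated LMS update; frozen in static scenes
        prev = o
        out.append(y)
    return c, out

# Hypothetical moving scene and a detector with a fixed-pattern offset of 0.3.
scene = [math.sin(0.5 * k) for k in range(300)]
obs = [v + 0.3 for v in scene]
c, corrected = gated_offset_lms(obs, scene)
```

    In a full NUC implementation the reference would be a spatial estimate from neighboring pixels rather than the true scene, and each pixel would carry its own gain and offset.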

  2. Low complexity adaptive equalizers for underwater acoustic communications

    NASA Astrophysics Data System (ADS)

    Soflaei, Masoumeh; Azmi, Paeiz

    2014-08-01

    Interference due to scattering from the surface and reflection from the bottom is one of the most important obstacles to reliable communication in shallow-water channels. One of the best ways to address this problem is to use adaptive equalizers. Convergence rate and misadjustment error in adaptive algorithms play important roles in adaptive equalizer performance. In this paper, the affine projection algorithm (APA), selective regressor APA (SR-APA), the family of selective partial update (SPU) algorithms, the family of set-membership (SM) algorithms and the selective partial update selective regressor APA (SPU-SR-APA) are compared with conventional algorithms such as the least mean square (LMS) in underwater acoustic communications. We apply experimental data from the Strait of Hormuz to demonstrate the efficiency of the proposed methods over a shallow-water channel. We observe that the values of the steady-state mean square error (MSE) of the SR-APA, SPU-APA, SPU-normalized least mean square (SPU-NLMS), SPU-SR-APA, SM-APA and SM-NLMS algorithms decrease in comparison with the LMS algorithm. These algorithms also have better convergence rates than the LMS algorithm.

  3. A Phonocardiographic-Based Fiber-Optic Sensor and Adaptive Filtering System for Noninvasive Continuous Fetal Heart Rate Monitoring.

    PubMed

    Martinek, Radek; Nedoma, Jan; Fajkus, Marcel; Kahankova, Radana; Konecny, Jaromir; Janku, Petr; Kepak, Stanislav; Bilik, Petr; Nazeran, Homer

    2017-04-18

    This paper focuses on the design, realization, and verification of a novel phonocardiographic-based fiber-optic sensor and adaptive signal processing system for noninvasive continuous fetal heart rate (fHR) monitoring. Our proposed system utilizes two Mach-Zehnder interferometric sensors. Based on the analysis of real measurement data, we developed a simplified dynamic model for the generation and distribution of heart sounds throughout the human body. Building on this signal model, we then designed, implemented, and verified our adaptive signal processing system by implementing two stochastic gradient-based algorithms: the Least Mean Square (LMS) algorithm and the Normalized Least Mean Square (NLMS) algorithm. With this system we were able to extract the fHR information from high-quality fetal phonocardiograms (fPCGs), filtered from abdominal maternal phonocardiograms (mPCGs), by performing fPCG signal peak detection. Common signal processing methods such as linear filtering, signal subtraction, and others could not be used for this purpose as fPCG and mPCG signals share overlapping frequency spectra. The performance of the adaptive system was evaluated by using both qualitative (gynecological studies) and quantitative measures such as Signal-to-Noise Ratio (SNR), Root Mean Square Error (RMSE), Sensitivity (S+), and Positive Predictive Value (PPV).

  4. A Phonocardiographic-Based Fiber-Optic Sensor and Adaptive Filtering System for Noninvasive Continuous Fetal Heart Rate Monitoring

    PubMed Central

    Martinek, Radek; Nedoma, Jan; Fajkus, Marcel; Kahankova, Radana; Konecny, Jaromir; Janku, Petr; Kepak, Stanislav; Bilik, Petr; Nazeran, Homer

    2017-01-01

    This paper focuses on the design, realization, and verification of a novel phonocardiographic-based fiber-optic sensor and adaptive signal processing system for noninvasive continuous fetal heart rate (fHR) monitoring. Our proposed system utilizes two Mach-Zehnder interferometric sensors. Based on the analysis of real measurement data, we developed a simplified dynamic model for the generation and distribution of heart sounds throughout the human body. Building on this signal model, we then designed, implemented, and verified our adaptive signal processing system by implementing two stochastic gradient-based algorithms: the Least Mean Square (LMS) algorithm and the Normalized Least Mean Square (NLMS) algorithm. With this system we were able to extract the fHR information from high-quality fetal phonocardiograms (fPCGs), filtered from abdominal maternal phonocardiograms (mPCGs), by performing fPCG signal peak detection. Common signal processing methods such as linear filtering, signal subtraction, and others could not be used for this purpose as fPCG and mPCG signals share overlapping frequency spectra. The performance of the adaptive system was evaluated by using both qualitative (gynecological studies) and quantitative measures such as Signal-to-Noise Ratio (SNR), Root Mean Square Error (RMSE), Sensitivity (S+), and Positive Predictive Value (PPV). PMID:28420215
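
    A minimal sketch of the NLMS update mentioned above (the signals and parameters are invented, not the paper's fPCG data): normalizing the LMS step by the instantaneous input power in the tap-delay line makes the convergence rate insensitive to input amplitude, which is useful for physiological signals whose level varies over time.

```python
import math

def nlms(x, d, n_taps, mu=0.5, eps=1e-6):
    """NLMS: the LMS step size is normalized by the instantaneous
    input power in the tap-delay line."""
    w = [0.0] * n_taps
    buf = [0.0] * n_taps
    errs = []
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))
        e = dn - y
        p = sum(xi * xi for xi in buf) + eps             # normalization term
        w = [wi + (mu / p) * e * xi for wi, xi in zip(w, buf)]
        errs.append(e)
    return w, errs

# Hypothetical 2-tap system driven by an amplitude-varying input.
plant = [0.8, -0.4]
x = [(1 + 0.5 * math.sin(0.01 * n)) * (math.sin(0.3 * n) + 0.5 * math.cos(1.1 * n))
     for n in range(1500)]
d = [sum(plant[k] * x[n - k] for k in range(2) if n - k >= 0) for n in range(1500)]
w, errs = nlms(x, d, n_taps=2)
```
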

  5. Recursive least-squares learning algorithms for neural networks

    NASA Astrophysics Data System (ADS)

    Lewis, Paul S.; Hwang, Jenq N.

    1990-11-01

    This paper presents the development of a pair of recursive least squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second-order information about the training error surface in order to achieve faster learning rates than are possible using first-order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is O(N^2), where N is the number of network parameters. This is due to the estimation of the N x N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can be easily derived by using only block diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two-dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule.
BACKGROUND: Artificial neural networks (ANNs) offer an interesting and potentially useful paradigm for signal processing and pattern recognition. The majority of ANN applications employ the feedforward multilayer perceptron (MLP) network architecture, in which network parameters are "trained" by a supervised learning algorithm employing the generalized delta rule (GDR) [1, 2]. The GDR algorithm approximates a fixed-step steepest descent algorithm using derivatives computed by error backpropagation. The GDR algorithm is sometimes referred to as the backpropagation algorithm; however, in this paper we will use the term backpropagation to refer only to the process of computing error derivatives. While multilayer perceptrons provide a very powerful nonlinear modeling capability, GDR training can be very slow and inefficient. In linear adaptive filtering, the analog of the GDR algorithm is the least-mean-squares (LMS) algorithm. Steepest descent-based algorithms such as GDR or LMS are first order because they use only first-derivative, or gradient, information about the training error to be minimized. To speed up the training process, second-order algorithms may be employed that take advantage of second-derivative, or Hessian matrix, information. Second-order information can be incorporated into MLP training in different ways. In many applications, especially in the area of pattern recognition, the training set is finite. In these cases, block learning can be applied using standard nonlinear optimization techniques [3, 4, 5].

  6. Investigation of Back-off Based Interpolation Between Recurrent Neural Network and N-gram Language Models (Author’s Manuscript)

    DTIC Science & Technology

    2016-02-11

    INVESTIGATION OF BACK-OFF BASED INTERPOLATION BETWEEN RECURRENT NEURAL NETWORK AND N-GRAM LANGUAGE MODELS X. Chen, X. Liu, M. J. F. Gales, and P. C...As the generalization patterns of RNNLMs and n-gram LMs are inherently different, RNNLMs are usually combined with n-gram LMs via a fixed...RNNLMs and n-gram LMs as the n-gram level changes. In order to fully exploit the detailed n-gram level complementary attributes between the two LMs, a

  7. Accurate human limb angle measurement: sensor fusion through Kalman, least mean squares and recursive least-squares adaptive filtering

    NASA Astrophysics Data System (ADS)

    Olivares, A.; Górriz, J. M.; Ramírez, J.; Olivares, G.

    2011-02-01

    Inertial sensors are widely used in human body motion monitoring systems since they permit us to determine the position of the subject's limbs. Limb angle measurement is carried out through the integration of the angular velocity measured by a rate sensor and the decomposition of the components of static gravity acceleration measured by an accelerometer. Different factors derived from the sensors' nature, such as the angle random walk and dynamic bias, lead to erroneous measurements. Dynamic bias effects can be reduced through the use of adaptive filtering based on sensor fusion concepts. Most existing published works use a Kalman filtering sensor fusion approach. Our aim is to perform a comparative study among different adaptive filters. Several least mean squares (LMS), recursive least squares (RLS) and Kalman filtering variations are tested for the purpose of finding the best method leading to a more accurate and robust limb angle measurement. A new angle wander compensation sensor fusion approach based on LMS and RLS filters has been developed.

  8. Fast convergent frequency-domain MIMO equalizer for few-mode fiber communication systems

    NASA Astrophysics Data System (ADS)

    He, Xuan; Weng, Yi; Wang, Junyi; Pan, Z.

    2018-02-01

    Space division multiplexing using few-mode fibers has been extensively explored to sustain continuous traffic growth. In few-mode fiber optical systems, both spatial and polarization modes are exploited to transmit parallel channels, thus increasing the overall capacity. However, signals on spatial channels inevitably suffer from intrinsic inter-modal coupling and large accumulated differential mode group delay (DMGD), which makes the spatial modes even harder to demultiplex. Many research articles have demonstrated that a frequency-domain adaptive multi-input multi-output (MIMO) equalizer can effectively compensate the DMGD and demultiplex the spatial channels with digital signal processing (DSP). However, the large accumulated DMGD usually requires a large number of training blocks for the initial convergence of adaptive MIMO equalizers, which decreases the overall system efficiency and can even degrade equalizer performance in fast-changing optical channels. The least mean square (LMS) algorithm is commonly used in MIMO equalization to dynamically demultiplex the spatial signals. We have proposed a signal power spectral density (PSD) dependent method and a noise PSD directed method to improve the convergence speed of the adaptive frequency-domain LMS algorithm. We also proposed a frequency-domain recursive least square (RLS) algorithm to further increase the convergence speed of the MIMO equalizer at the cost of greater hardware complexity. In this paper, we compare the hardware complexity and convergence speed of the signal PSD dependent and noise power directed algorithms against the conventional frequency-domain LMS algorithm. In our numerical study of a three-mode 112 Gbit/s PDM-QPSK optical system with 3000 km transmission, the noise PSD directed and signal PSD dependent methods improved the convergence speed by 48.3% and 36.1% respectively, at the cost of 17.2% and 10.7% higher hardware complexity. 
We also compare the frequency-domain RLS algorithm against the conventional frequency-domain LMS algorithm. Our numerical study shows that, in a three-mode 224 Gbit/s PDM-16-QAM system with 3000 km transmission, the RLS algorithm improved the convergence speed by 53.7% over the conventional frequency-domain LMS algorithm.
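
    A minimal time-domain RLS sketch (the paper's equalizer operates in the frequency domain; the plant, signals, and forgetting factor here are invented for illustration): RLS maintains a running estimate of the inverse input correlation matrix, which is why it converges in far fewer samples than LMS at the cost of O(N^2) work per update.

```python
import math

def rls(x, d, n_taps, lam=0.99, delta=100.0):
    """Exponentially weighted RLS for an FIR filter; P approximates the
    inverse of the (forgetting-weighted) input autocorrelation matrix."""
    w = [0.0] * n_taps
    P = [[delta if i == j else 0.0 for j in range(n_taps)] for i in range(n_taps)]
    buf = [0.0] * n_taps
    errs = []
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]
        Pu = [sum(P[i][j] * buf[j] for j in range(n_taps)) for i in range(n_taps)]
        denom = lam + sum(buf[i] * Pu[i] for i in range(n_taps))
        k = [v / denom for v in Pu]                          # gain vector
        e = dn - sum(wi * ui for wi, ui in zip(w, buf))      # a priori error
        w = [wi + ki * e for wi, ki in zip(w, k)]
        uP = [sum(buf[i] * P[i][j] for i in range(n_taps)) for j in range(n_taps)]
        P = [[(P[i][j] - k[i] * uP[j]) / lam for j in range(n_taps)]
             for i in range(n_taps)]                         # Riccati update of P
        errs.append(e)
    return w, errs

# Hypothetical 2-tap channel; RLS typically locks on within tens of samples.
plant = [0.6, -0.2]
x = [math.sin(0.3 * n) + 0.5 * math.cos(1.1 * n) for n in range(300)]
d = [sum(plant[k] * x[n - k] for k in range(2) if n - k >= 0) for n in range(300)]
w, errs = rls(x, d, n_taps=2)
```
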

  9. Comparing model-based adaptive LMS filters and a model-free hysteresis loop analysis method for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Zhou, Cong; Chase, J. Geoffrey; Rodgers, Geoffrey W.; Xu, Chao

    2017-02-01

    The model-free hysteresis loop analysis (HLA) method for structural health monitoring (SHM) has significant advantages over traditional model-based SHM methods, which require a suitable baseline model to represent the actual system response. This paper provides a unique validation against both an experimental reinforced concrete (RC) building and a calibrated numerical model to delineate the capability of the model-free HLA method and the adaptive least mean squares (LMS) model-based method in detecting, localizing and quantifying damage that may not be visible or observable in the overall structural response. Results clearly show the model-free HLA method is capable of adapting to changes in how structures transfer load or demand across structural elements over time and across multiple events of different size. However, the adaptive LMS model-based method presents an image of greater spread of lesser damage over time and across stories when the baseline model is not well defined. Finally, the two algorithms are tested on a typical steel structure with simpler hysteretic behaviour to quantify the impact of mismatch between the baseline model used for identification and the actual response. The overall results highlight the need for model-based methods to have an appropriate model that can capture the observed response in order to yield accurate results, even in small events where the structure remains linear.

  10. Investigation of Diesel’s Residual Noise on Predictive Vehicles Noise Cancelling using LMS Adaptive Algorithm

    NASA Astrophysics Data System (ADS)

    Arttini Dwi Prasetyowati, Sri; Susanto, Adhi; Widihastuti, Ida

    2017-04-01

    Every noise problem requires a different solution. In this research, the noise that must be cancelled comes from the roadway. The adaptive Least Mean Square (LMS) algorithm is one algorithm that can be used to cancel such noise. Residual noise always appears and cannot be removed completely. This research aims to characterize the residual noise of the vehicle noise and analyze it so that it no longer appears as a problem. The LMS algorithm was used to predict the vehicle noise and minimize the error. The distribution of the residual noise was observed to determine its specific character. The statistics of the residual noise are close to a normal distribution with mean = 0.0435 and standard deviation = 1.13, and its autocorrelation forms an impulse. In conclusion, the residual noise is insignificant.
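
    The prediction-and-residual procedure described here can be sketched with a one-step-ahead LMS linear predictor. The filter order, step size, and the toy noisy sinusoid standing in for roadway noise are illustrative assumptions, not the study's data.

```python
import numpy as np

def lms_predict(x, order=8, mu=0.01):
    """One-step-ahead LMS linear predictor; returns predictions and residual."""
    w = np.zeros(order)
    y = np.zeros_like(x)
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]      # most recent samples first
        y[n] = w @ u
        e = x[n] - y[n]               # residual (prediction error)
        w += mu * e * u               # LMS weight update
    return y, x - y

# Toy periodic noise plus a small random component.
rng = np.random.default_rng(0)
n = np.arange(5000)
x = np.sin(2 * np.pi * 0.05 * n) + 0.05 * rng.standard_normal(n.size)
_, resid = lms_predict(x)
```

    Once the predictor converges, the residual is dominated by the unpredictable component, which is what the distributional analysis above examines.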

  11. Maximum-likelihood spectral estimation and adaptive filtering techniques with application to airborne Doppler weather radar. Thesis Technical Report No. 20

    NASA Technical Reports Server (NTRS)

    Lai, Jonathan Y.

    1994-01-01

    This dissertation focuses on the signal processing problems associated with the detection of hazardous windshears using airborne Doppler radar when weak weather returns are in the presence of strong clutter returns. In light of the frequent inadequacy of spectral-processing oriented clutter suppression methods, we model a clutter signal as multiple sinusoids plus Gaussian noise, and propose adaptive filtering approaches that better capture the temporal characteristics of the signal process. This idea leads to two research topics in signal processing: (1) signal modeling and parameter estimation, and (2) adaptive filtering in this particular signal environment. A high-resolution, low-SNR-threshold maximum likelihood (ML) frequency estimation and signal modeling algorithm is devised and proves capable of delineating both the spectral and temporal nature of the clutter return. Furthermore, the performance of the Least Mean Square (LMS) based adaptive filter for the proposed signal model is investigated, and promising simulation results have testified to its potential for clutter rejection, leading to more accurate estimation of wind speed and thus a better assessment of the windshear hazard.

  12. Learning algorithms for human-machine interfaces.

    PubMed

    Danziger, Zachary; Fishbach, Alon; Mussa-Ivaldi, Ferdinando A

    2009-05-01

    The goal of this study is to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user and the controlled device. To evaluate these algorithms, we have developed a simple experimental framework. Subjects wear an instrumented data glove that records finger motions. The high-dimensional glove signals remotely control the joint angles of a simulated planar two-link arm on a computer screen, which is used to acquire targets. A machine learning algorithm was applied to adaptively change the transformation between finger motion and the simulated robot arm. This algorithm was either LMS gradient descent or the Moore-Penrose (MP) pseudoinverse transformation. Both algorithms modified the glove-to-joint angle map so as to reduce the endpoint errors measured in past performance. The MP group performed worse than the control group (subjects not exposed to any machine learning), while the LMS group outperformed the control subjects. However, the LMS subjects failed to achieve better generalization than the control subjects, and after extensive training converged to the same level of performance as the control subjects. These results highlight the limitations of coadaptive learning using only endpoint error reduction.
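
    The two update rules compared in this study can be contrasted on a purely linear toy version of the glove-to-joint map. This is only a sketch: the dimensions, the target map A_true, and the step size are invented, and the real experiment minimized endpoint errors of a simulated two-link arm rather than joint-angle errors directly.

```python
import numpy as np

rng = np.random.default_rng(1)
d_glove, d_joint, n_trials = 10, 2, 2000
A_true = rng.standard_normal((d_joint, d_glove))   # hypothetical target map

G = rng.standard_normal((n_trials, d_glove))       # glove readings, one row per trial
Q = G @ A_true.T                                   # desired joint angles

# LMS gradient descent: adjust the map a little after every trial.
A_lms = np.zeros_like(A_true)
mu = 0.01
for g, q in zip(G, Q):
    e = q - A_lms @ g                              # per-trial error
    A_lms += mu * np.outer(e, g)                   # gradient-descent update

# Moore-Penrose pseudoinverse: one batch least-squares solve over past trials.
A_mp = (np.linalg.pinv(G) @ Q).T
```

    The pseudoinverse jumps straight to the batch least-squares map, while LMS approaches it gradually; the study's finding is that the gradual adaptation interacted better with the humans' own learning.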

  13. Learning Algorithms for Human–Machine Interfaces

    PubMed Central

    Fishbach, Alon; Mussa-Ivaldi, Ferdinando A.

    2012-01-01

    The goal of this study is to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user and the controlled device. To evaluate these algorithms, we have developed a simple experimental framework. Subjects wear an instrumented data glove that records finger motions. The high-dimensional glove signals remotely control the joint angles of a simulated planar two-link arm on a computer screen, which is used to acquire targets. A machine learning algorithm was applied to adaptively change the transformation between finger motion and the simulated robot arm. This algorithm was either LMS gradient descent or the Moore–Penrose (MP) pseudoinverse transformation. Both algorithms modified the glove-to-joint angle map so as to reduce the endpoint errors measured in past performance. The MP group performed worse than the control group (subjects not exposed to any machine learning), while the LMS group outperformed the control subjects. However, the LMS subjects failed to achieve better generalization than the control subjects, and after extensive training converged to the same level of performance as the control subjects. These results highlight the limitations of coadaptive learning using only endpoint error reduction. PMID:19203886

  14. Experimental demonstration of iterative post-equalization algorithm for 37.5-Gbaud PM-16QAM quad-carrier Terabit superchannel.

    PubMed

    Jia, Zhensheng; Chien, Hung-Chang; Cai, Yi; Yu, Jianjun; Zhang, Chengliang; Li, Junjie; Ma, Yiran; Shang, Dongdong; Zhang, Qi; Shi, Sheping; Wang, Huitao

    2015-02-09

    We experimentally demonstrate a quad-carrier 1-Tb/s solution with a 37.5-GBaud PM-16QAM signal over a 37.5-GHz optical grid at 6.7 b/s/Hz net spectral efficiency. Digital Nyquist pulse shaping at the transmitter and post-equalization at the receiver are employed to mitigate joint inter-symbol interference (ISI) and inter-channel interference (ICI) degradation. The post-equalization algorithms consist of a one-sample-per-symbol decision-directed least mean square (DD-LMS) adaptive filter, a digital post filter and maximum likelihood sequence estimation (MLSE), with an iterative process among them. By combining these algorithms, an improvement of as much as 4 dB OSNR (0.1 nm) at the SD-FEC limit (Q² = 6.25, corresponding to BER = 2.0e-2) is obtained compared to operation without such post-equalization, and transmission over an 820-km EDFA-only standard single-mode fiber (SSMF) link is achieved for two 1.2-Tb/s signals with an averaged Q² factor larger than 6.5 dB for all sub-channels. Additionally, 50-GBaud 16QAM operating at 1.28 samples/symbol in a DAC is also investigated, and successful transmission over a 410-km SSMF link is achieved at a 62.5-GHz optical grid.

  15. Investigation of adaptive filtering and MDL mitigation based on space-time block-coding for spatial division multiplexed coherent receivers

    NASA Astrophysics Data System (ADS)

    Weng, Yi; He, Xuan; Yao, Wang; Pacheco, Michelle C.; Wang, Junyi; Pan, Zhongqi

    2017-07-01

    In this paper, we explore the performance of a space-time block-coding (STBC) assisted multiple-input multiple-output (MIMO) scheme for modal dispersion and mode-dependent loss (MDL) mitigation in spatial-division multiplexed optical communication systems, in which the weight matrices of the frequency-domain equalization (FDE) are updated heuristically using a decision-directed recursive least squares (RLS) algorithm for convergence and channel estimation. The proposed STBC-RLS algorithm achieves a 43.6% enhancement in convergence rate over conventional least mean squares (LMS) for quadrature phase-shift keying (QPSK) signals with merely a 16.2% increase in hardware complexity. The overall optical signal-to-noise ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9 and 7.8 dB for QPSK, 16-quadrature amplitude modulation (QAM) and 64-QAM at their respective bit-error rates (BER) with minimum-mean-square-error (MMSE) equalization.

  16. Time-domain digital pre-equalization for band-limited signals based on receiver-side adaptive equalizers.

    PubMed

    Zhang, Junwen; Yu, Jianjun; Chi, Nan; Chien, Hung-Chang

    2014-08-25

    We theoretically and experimentally investigate a time-domain digital pre-equalization (DPEQ) scheme for bandwidth-limited optical coherent communication systems, based on feedback of channel characteristics from receiver-side blind adaptive equalizers such as the least-mean-squares (LMS) algorithm and the constant- and multi-modulus algorithms (CMA, MMA). Based on the proposed DPEQ scheme, we theoretically and experimentally study its performance under various channel conditions and channel-estimation resolutions, such as filtering bandwidth, tap length, and OSNR. Using a high-speed 64-GSa/s DAC together with the proposed DPEQ technique, we successfully synthesized band-limited 40-Gbaud signals in polarization-division multiplexed (PDM) quadrature phase shift keying (QPSK), 8-quadrature amplitude modulation (QAM) and 16-QAM formats, and significant improvements in both back-to-back and transmission BER performance are also demonstrated.

  17. Frequency-domain-independent vector analysis for mode-division multiplexed transmission

    NASA Astrophysics Data System (ADS)

    Liu, Yunhe; Hu, Guijun; Li, Jiao

    2018-04-01

    In this paper, we propose a demultiplexing method based on the frequency-domain independent vector analysis (FD-IVA) algorithm for mode-division multiplexing (MDM) systems. FD-IVA extends frequency-domain independent component analysis (FD-ICA) from univariate to multivariate variables, and provides an efficient way to eliminate the permutation ambiguity. To verify the performance of the FD-IVA algorithm, a 6×6 MDM system is simulated. The simulation results show that the FD-IVA algorithm has essentially the same bit-error-rate (BER) performance as the FD-ICA algorithm and the frequency-domain least mean squares (FD-LMS) algorithm. Meanwhile, the convergence speed of the FD-IVA algorithm is the same as that of FD-ICA. However, compared with FD-ICA and FD-LMS, FD-IVA has a markedly lower computational complexity.

  18. Performance analysis of structured gradient algorithm. [for adaptive beamforming linear arrays

    NASA Technical Reports Server (NTRS)

    Godara, Lal C.

    1990-01-01

    The structured gradient algorithm uses a structured estimate of the array correlation matrix (ACM) to estimate the gradient required for the constrained least-mean-square (LMS) algorithm. This structure reflects the structure of the exact array correlation matrix for an equispaced linear array and is obtained by spatial averaging of the elements of the noisy correlation matrix. In its standard form the LMS algorithm does not exploit the structure of the array correlation matrix. The gradient is estimated by multiplying the array output with the receiver outputs. An analysis of the two algorithms is presented to show that the covariance of the gradient estimated by the structured method is less sensitive to the look direction signal than that estimated by the standard method. The effect of the number of elements on the signal sensitivity of the two algorithms is studied.
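
    The spatial averaging described here is straightforward to sketch: average the sample correlation matrix along its diagonals, so the estimate takes on the Toeplitz structure that the exact correlation matrix of an equispaced linear array must have. The array size and snapshot count below are arbitrary.

```python
import numpy as np

def structured_acm(R):
    """Impose Toeplitz structure on a noisy correlation matrix by spatially
    averaging the elements along each diagonal."""
    n = R.shape[0]
    S = np.zeros_like(R)
    for k in range(n):
        c = np.mean(np.diagonal(R, offset=k))   # spatial average of the k-th diagonal
        for i in range(n - k):
            S[i, i + k] = c
            S[i + k, i] = np.conj(c)
    return S

# Noisy sample correlation matrix from a handful of array snapshots.
rng = np.random.default_rng(2)
n_el, n_snap = 6, 50
X = rng.standard_normal((n_el, n_snap)) + 1j * rng.standard_normal((n_el, n_snap))
R_hat = X @ X.conj().T / n_snap
R_s = structured_acm(R_hat)
```

    The structured gradient algorithm then uses this smoothed estimate in place of the raw instantaneous product when forming the constrained LMS gradient.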

  19. Underwater single beam circumferentially scanning detection system using range-gated receiver and adaptive filter

    NASA Astrophysics Data System (ADS)

    Tan, Yayun; Zhang, He; Zha, Bingting

    2017-09-01

    Underwater target detection and ranging in seawater are of interest for unmanned underwater vehicles. This study presents an underwater detection system that synchronously scans a collimated laser beam and a narrow field of view to circumferentially detect an underwater target. A hybrid method combining range gating and a variable step-size least mean square (VSS-LMS) adaptive filter is proposed to suppress water backscattering. The range-gated receiver eliminates the backscattering of near-field water, the VSS-LMS filter extracts the target echo from the remaining backscattering, and the constant fraction discriminator timing method is used to improve ranging accuracy. The optimal constant fraction is selected by analysing the jitter noise and slope of the target echo. A prototype of the underwater detection system was constructed and tested in coastal seawater, and the effectiveness of the backscattering suppression and the high ranging accuracy are verified through the experimental results and analysis discussed in this paper.

  20. Adaptive Inverse Control for Rotorcraft Vibration Reduction

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    1985-01-01

    This thesis extends the Least Mean Square (LMS) algorithm to solve the multiple-input, multiple-output problem of alleviating N/Rev (rotor rotation rate times number of blades) helicopter fuselage vibration by means of adaptive inverse control. A frequency domain locally linear model is used to represent the transfer matrix relating the higher harmonic pitch control inputs to the harmonic vibration outputs to be controlled. By using the inverse matrix as the controller gain matrix, an adaptive inverse regulator is formed to alleviate the N/Rev vibration. The stability and rate of convergence properties of the extended LMS algorithm are discussed. It is shown that the stability ranges for the elements of the stability gain matrix are directly related to the eigenvalues of the vibration signal information matrix for the learning phase, but not for the control phase. The overall conclusion is that the LMS adaptive inverse control method can form a robust vibration control system, but will require some tuning of the input sensor gains, the stability gain matrix, and the amount of control relaxation to be used. The learning curve of the controller during the learning phase is shown to be quantitatively close to that predicted by averaging the learning curves of the normal modes. For higher order transfer matrices, a rough estimate of the inverse is needed to start the algorithm efficiently. The simulation results indicate that the factor which most influences LMS adaptive inverse control is the product of the control relaxation and the stability gain matrix. A small stability gain matrix makes the controller less sensitive to the relaxation selection, and permits faster and more stable vibration reduction, than a large stability gain matrix paired with a small control relaxation term.
It is shown that the best selections of the stability gain matrix elements and the amount of control relaxation are basically a compromise between slow, stable convergence and fast convergence with an increased possibility of unstable identification. In the simulation studies, the LMS adaptive inverse control algorithm is shown to be capable of adapting the inverse (controller) matrix to track changes in the flight conditions. The algorithm converges quickly for moderate disturbances, while taking longer for larger disturbances. Perfect knowledge of the inverse matrix is not required for good control of the N/Rev vibration. However, it is shown that measurement noise will prevent the LMS adaptive inverse control technique from controlling the vibration unless the signal averaging method presented is incorporated into the algorithm.

  1. Demultiplexing based on frequency-domain joint decision MMA for MDM system

    NASA Astrophysics Data System (ADS)

    Caili, Gong; Li, Li; Guijun, Hu

    2016-06-01

    In this paper, we propose a demultiplexing method based on a frequency-domain joint decision multi-modulus algorithm (FD-JDMMA) for mode division multiplexing (MDM) systems. The performance of FD-JDMMA is compared with the frequency-domain multi-modulus algorithm (FD-MMA) and the frequency-domain least mean square (FD-LMS) algorithm. The simulation results show that FD-JDMMA outperforms FD-MMA in terms of BER and convergence speed for mQAM (m = 4, 16 and 64) formats. It is also demonstrated that FD-JDMMA achieves better BER performance and converges faster than FD-LMS for 16QAM and 64QAM. Furthermore, FD-JDMMA maintains computational complexity similar to both of these equalization algorithms.

  2. Filtered-x generalized mixed norm (FXGMN) algorithm for active noise control

    NASA Astrophysics Data System (ADS)

    Song, Pucha; Zhao, Haiquan

    2018-07-01

    A standard adaptive filtering algorithm with a single error norm exhibits a slow convergence rate and poor noise reduction performance in certain environments. To overcome this drawback, a filtered-x generalized mixed norm (FXGMN) algorithm for active noise control (ANC) systems is proposed. The FXGMN algorithm is developed by using a convex mixture of lp and lq norms as the cost function, so that it can be viewed as a generalized version of most existing adaptive filtering algorithms, reducing to a specific algorithm for particular parameter choices. In particular, it can be used for ANC under Gaussian and non-Gaussian noise environments (including impulsive noise with a symmetric α-stable (SαS) distribution). To further enhance performance, namely convergence speed and noise reduction, a convex combination of the FXGMN algorithm (C-FXGMN) is presented. Moreover, the computational complexity of the proposed algorithms is analyzed, and a stability condition for them is provided. Simulation results show that the proposed FXGMN and C-FXGMN algorithms achieve faster convergence and higher noise reduction than other existing algorithms under various noise input conditions, and that C-FXGMN outperforms FXGMN.
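
    A minimal sketch of a filtered-x update driven by a convex lp/lq error-norm mix conveys the idea (my reading of the cost described above, not the authors' code): the primary path, secondary path s, and all parameters are invented, and the secondary-path estimate is taken as exact. With lam=1 and p=2 the update reduces to ordinary FxLMS.

```python
import numpy as np

def fx_mixed_norm_anc(x, d, s, L=16, mu=5e-3, lam=0.5, p=2, q=1):
    """Filtered-x adaptive control minimizing lam*|e|^p + (1-lam)*|e|^q."""
    w = np.zeros(L)                      # controller taps
    y = np.zeros(len(x))                 # anti-noise before the secondary path
    e = np.zeros(len(x))
    xf = np.convolve(x, s)[:len(x)]      # reference filtered by the secondary path
    for n in range(L, len(x)):
        u = x[n - L:n][::-1]
        y[n] = w @ u
        ys = s @ y[n - len(s) + 1:n + 1][::-1]   # anti-noise after the secondary path
        e[n] = d[n] + ys                          # residual at the error sensor
        # gradient of the mixed-norm cost, applied via the filtered reference
        g = (lam * p * abs(e[n]) ** (p - 1)
             + (1 - lam) * q * abs(e[n]) ** (q - 1)) * np.sign(e[n])
        w -= mu * g * xf[n - L:n][::-1]
    return e

# Toy single-tone disturbance with invented primary/secondary paths.
n = np.arange(4000)
x = np.sin(2 * np.pi * 0.05 * n)               # reference noise
d = np.convolve(x, [0.0, 0.0, 0.8])[:len(x)]   # disturbance via primary path
e = fx_mixed_norm_anc(x, d, s=np.array([0.0, 0.6]))
```

    The lq term (here q = 1) bounds the gradient's growth for large errors, which is what gives the mixed-norm family its robustness to impulsive noise.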

  3. An improved AE detection method of rail defect based on multi-level ANC with VSS-LMS

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Cui, Yiming; Wang, Yan; Sun, Mingjian; Hu, Hengshan

    2018-01-01

    To ensure the safety and reliability of the railway system, the Acoustic Emission (AE) method is employed to investigate rail defect detection. However, little attention has been paid to defect detection at high speed, especially to noise interference suppression. Based on AE technology, this paper presents an improved rail defect detection method using multi-level ANC with VSS-LMS. Multi-level noise cancellation based on SANC and ANC is utilized to eliminate complex noises at high speed, and a tongue-shaped curve with an index adjustment factor is proposed to enhance the performance of the variable step-size algorithm. Defect signals and reference signals are acquired from a rail-wheel test rig. The features of noise signals and defect signals are analyzed for effective detection. The effectiveness of the proposed method is demonstrated by comparison with a previous study, and different filter lengths are investigated to obtain better noise suppression. Meanwhile, the detection ability of the proposed method is verified at the top speed of the test rig. The results clearly illustrate that the proposed method is effective in detecting rail defects at high speed, especially with respect to noise interference suppression.
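
    A generic variable step-size LMS sketch conveys the idea behind VSS-LMS. Note the paper's specific tongue-shaped schedule with an index adjustment factor is not reproduced here; the rule below, driven by smoothed error power, and all constants are illustrative assumptions.

```python
import numpy as np

def vss_lms(ref, noisy, L=16, mu_max=0.05, alpha=0.97, gamma=0.5):
    """LMS whose step size follows smoothed error power: large steps while
    misadjusted, small steps near convergence."""
    w = np.zeros(L)
    p = 0.0                                # smoothed error power
    out = np.copy(noisy)
    for n in range(L - 1, len(ref)):
        u = ref[n - L + 1:n + 1][::-1]
        e = noisy[n] - w @ u
        p = alpha * p + (1 - alpha) * e * e
        mu = mu_max * p / (gamma + p)      # step shrinks as the error power falls
        w += mu * e * u
        out[n] = e
    return out

# Toy cancellation: reference noise seen through a short assumed path.
rng = np.random.default_rng(7)
N = 5000
ref = rng.standard_normal(N)
noisy = np.convolve(ref, [0.5, -0.3, 0.2])[:N]   # pure noise, no defect signal
out = vss_lms(ref, noisy)
```

    Any schedule that grows the step with misadjustment and shrinks it near convergence trades off tracking speed against steady-state residual in this way; the paper's contribution is one particular, better-tuned curve.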

  4. Preprocessing of PHERMEX flash radiographic images with Haar and adaptive filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brolley, J.E.

    1978-11-01

    Work on image preparation has continued with the application of high-sequency boosting via Haar filtering. This is useful in developing line or edge structures. Widrow LMS adaptive filtering has also been shown to be useful in developing edge structure in special problems. Shadow effects can be obtained with the latter which may be useful for some problems. Combined Haar and adaptive filtering is illustrated for a PHERMEX image.

  5. Accurate motion parameter estimation for colonoscopy tracking using a regression method

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.

    2010-03-01

    Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical flow based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional Least Sum of Squares (LS) procedure, which can be unstable in the presence of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method, for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 in the ascending colon, and from 410 to 1316 in the transverse colon.
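
    The Least Median of Squares idea is easy to sketch for a simple line fit; the actual pipeline estimates egomotion parameters from flow vectors, which is more involved, and the sample counts, noise level, and outlier fraction below are invented for illustration.

```python
import numpy as np

def lmeds_line(x, y, n_samples=500, rng=None):
    """Least Median of Squares line fit: try random two-point candidate lines
    and keep the one minimizing the median squared residual, which tolerates
    up to ~50% outliers (unlike least sum of squares)."""
    rng = np.random.default_rng(0) if rng is None else rng
    best, best_med = (0.0, 0.0), np.inf
    for _ in range(n_samples):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        med = np.median((y - (a * x + b)) ** 2)
        if med < best_med:
            best, best_med = (a, b), med
    return best

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 200)
y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(x.size)
y[:60] += 30.0 * rng.standard_normal(60)     # 30% gross outliers
a, b = lmeds_line(x, y, rng=rng)
```

    Minimizing the median rather than the sum of squared residuals is what lets the estimator ignore the grossly wrong flow vectors instead of being dragged by them.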

  6. Adaptive elimination of optical fiber transmission noise in fiber ocean bottom seismic system

    NASA Astrophysics Data System (ADS)

    Zhong, Qiuwen; Hu, Zhengliang; Cao, Chunyan; Dong, Hongsheng

    2017-10-01

    In this paper, a pressure- and acceleration-insensitive reference interferometer is used to capture the laser noise and the common noise introduced by the transmission fiber and the laser. Using direct subtraction and adaptive filtering, this paper attempts to estimate and eliminate the transmission noise of the sensing probe. Four methods are compared for noise suppression: direct subtraction (DS), least mean square adaptive cancellation (LMS), normalized least mean square adaptive cancellation (NLMS), and recursive least squares (RLS) adaptive filtering. The experimental results show that the noise reduction of RLS and NLMS is almost the same, better than LMS and DS, reaching 8 dB (@100 Hz). However, given its computational load, RLS is poorly suited to a real-time operating system; for the same performance, NLMS is more practical than RLS. The noise reduction of LMS is slightly worse than that of RLS and NLMS, about 6 dB (@100 Hz), but its computational complexity is small, which benefits real-time implementation. The DS method has the smallest computational load of all, but its noise suppression is worse than that of the adaptive filters because of the noise amplitude difference between the reference interferometer (RI) and the sensing interferometer (SI); only 4 dB (@100 Hz) can be reached. The adaptive filters essentially eliminate the influence of the transmission noise while keeping the sensor's simulation signal intact.
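
    The NLMS variant found most practical here differs from plain LMS only in normalizing the step by the instantaneous input power. A minimal reference-channel canceller sketch (interferometer details omitted; the filter length, step size, and toy transmission path are assumptions):

```python
import numpy as np

def nlms_cancel(ref, noisy, L=32, mu=0.2, eps=1e-8):
    """Normalized LMS canceller: adaptively subtract whatever part of
    `noisy` is correlated with the reference channel `ref`."""
    w = np.zeros(L)
    out = np.copy(noisy)
    for n in range(L - 1, len(ref)):
        u = ref[n - L + 1:n + 1][::-1]
        e = noisy[n] - w @ u
        w += mu * e * u / (eps + u @ u)   # step normalized by input power
        out[n] = e                        # noise-reduced output
    return out

# Toy data: a small signal buried in reference noise seen through a short path.
rng = np.random.default_rng(5)
N = 6000
ref = rng.standard_normal(N)                           # reference-channel noise
path = np.array([0.5, -0.3, 0.2])                      # assumed transmission response
clean = 0.3 * np.sin(2 * np.pi * 0.01 * np.arange(N))  # sensor signal of interest
noisy = clean + np.convolve(ref, path)[:N]
out = nlms_cancel(ref, noisy)
```

    The power normalization makes the convergence rate insensitive to input scaling, which is what makes NLMS a safer default than plain LMS at similar cost.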

  7. New recursive-least-squares algorithms for nonlinear active control of sound and vibration using neural networks.

    PubMed

    Bouchard, M

    2001-01-01

    In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration were published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches were sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.
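
    The recursive-least-squares building block these algorithms extend can be sketched in its basic linear FIR form (the paper applies it to neural-network controllers via filtered-x/adjoint gradients; this toy system identification, the filter length, and the forgetting factor are illustrative only):

```python
import numpy as np

def rls_fir(x, d, L=8, lam=0.99, delta=100.0):
    """Textbook exponentially weighted RLS recursion for an FIR filter."""
    w = np.zeros(L)
    P = np.eye(L) * delta                 # inverse input-correlation estimate
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]      # current and L-1 past inputs
        k = P @ u / (lam + u @ P @ u)     # gain vector
        w += k * (d[n] - w @ u)           # correct by the a priori error
        P = (P - np.outer(k, u @ P)) / lam
    return w

rng = np.random.default_rng(6)
x = rng.standard_normal(1000)
h = np.array([0.4, -0.2, 0.1])            # unknown system to identify
d = np.convolve(x, h)[:len(x)]
w = rls_fir(x, d)
```

    RLS converges in far fewer iterations than steepest descent because each update uses the running inverse correlation estimate P, at the cost of O(L²) work per sample instead of O(L).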

  8. Frequency domain FIR and IIR adaptive filters

    NASA Technical Reports Server (NTRS)

    Lynn, D. W.

    1990-01-01

    A discussion of the LMS adaptive filter's convergence characteristics and the problems associated with disparate eigenvalues is presented. This is used to introduce the concept of proportional convergence. An approach is then used to analyze the convergence characteristics of block frequency-domain adaptive filters, leading to a development showing how the frequency-domain FIR adaptive filter is easily modified to provide proportional convergence. These ideas are extended to a block frequency-domain IIR adaptive filter, to which the idea of proportional convergence is applied. Experimental results illustrating proportional convergence in both FIR and IIR frequency-domain block adaptive filters are presented.

  9. GRIM-Filter: Fast seed location filtering in DNA read mapping using processing-in-memory technologies.

    PubMed

    Kim, Jeremie S; Senol Cali, Damla; Xin, Hongyi; Lee, Donghyuk; Ghose, Saugata; Alser, Mohammed; Hassan, Hasan; Ergin, Oguz; Alkan, Can; Mutlu, Onur

    2018-05-09

    Seed location filtering is critical in DNA read mapping, a process where billions of DNA fragments (reads) sampled from a donor are mapped onto a reference genome to identify genomic variants of the donor. State-of-the-art read mappers 1) quickly generate possible mapping locations for seeds (i.e., smaller segments) within each read, 2) extract reference sequences at each of the mapping locations, and 3) check similarity between each read and its associated reference sequences with a computationally-expensive algorithm (i.e., sequence alignment) to determine the origin of the read. A seed location filter comes into play before alignment, discarding seed locations that alignment would deem a poor match. The ideal seed location filter would discard all poor match locations prior to alignment such that there is no wasted computation on unnecessary alignments. We propose a novel seed location filtering algorithm, GRIM-Filter, optimized to exploit 3D-stacked memory systems that integrate computation within a logic layer stacked under memory layers, to perform processing-in-memory (PIM). GRIM-Filter quickly filters seed locations by 1) introducing a new representation of coarse-grained segments of the reference genome, and 2) using massively-parallel in-memory operations to identify read presence within each coarse-grained segment. Our evaluations show that for a sequence alignment error tolerance of 0.05, GRIM-Filter 1) reduces the false negative rate of filtering by 5.59x-6.41x, and 2) provides an end-to-end read mapper speedup of 1.81x-3.65x, compared to a state-of-the-art read mapper employing the best previous seed location filtering algorithm. GRIM-Filter exploits 3D-stacked memory, which enables the efficient use of processing-in-memory, to overcome the memory bandwidth bottleneck in seed location filtering. We show that GRIM-Filter significantly improves the performance of a state-of-the-art read mapper. 
GRIM-Filter is a universal seed location filter that can be applied to any read mapper. We hope that our results provide inspiration for new works to design other bioinformatics algorithms that take advantage of emerging technologies and new processing paradigms, such as processing-in-memory using 3D-stacked memory devices.
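
    The bin-presence idea can be sketched in a few lines of ordinary Python. GRIM-Filter itself stores per-bin bitvectors in 3D-stacked memory and tests them with massively parallel in-memory operations; the set-based version, bin size, q-gram length, and threshold below are illustrative stand-ins.

```python
def build_bins(reference, bin_size=64, q=5):
    """Record which q-grams occur in each coarse-grained bin of the
    reference (a set per bin stands in for GRIM-Filter's bitvectors)."""
    bins = []
    for start in range(0, len(reference), bin_size):
        chunk = reference[start:start + bin_size + q - 1]  # overlap bins so no q-gram is lost
        bins.append({chunk[i:i + q] for i in range(max(0, len(chunk) - q + 1))})
    return bins

def location_passes(read, bins, bin_index, q=5, threshold=0.8):
    """Keep a candidate seed location only if enough of the read's q-grams
    appear in that bin; otherwise skip the expensive alignment."""
    grams = [read[i:i + q] for i in range(len(read) - q + 1)]
    hits = sum(g in bins[bin_index] for g in grams)
    return hits >= threshold * len(grams)

reference = "ACGT" * 64                    # toy 256-base reference
bins = build_bins(reference)
```

    A read drawn from a bin shares all its q-grams with that bin and passes, while a read full of absent q-grams is rejected before any alignment work is spent on it.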

  10. Explicit Filtering Based Low-Dose Differential Phase Reconstruction Algorithm with the Grating Interferometry.

    PubMed

    Jiang, Xiaolei; Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang

    2015-01-01

    X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scan, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refraction indexes rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scan data without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm.

  11. Explicit Filtering Based Low-Dose Differential Phase Reconstruction Algorithm with the Grating Interferometry

    PubMed Central

    Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang

    2015-01-01

    X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scan, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refraction indexes rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scan data without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm. PMID:26089971

  12. The Development of Learning Management System Using Edmodo

    NASA Astrophysics Data System (ADS)

    Joko; Septia Wulandari, Gayuh

    2018-04-01

    The development of a Learning Management System (LMS) can serve as an online learning medium that helps the teacher deliver material and assign tasks. This study aims to: 1) determine the validity of learning devices using an LMS with Edmodo, 2) determine students' responses to LMS implementation using Edmodo, and 3) determine the difference in learning outcomes between students taught using an LMS with Edmodo and those taught with the Direct Learning Model (DLM). The research method is quasi-experimental, using a control-group pretest-posttest design. The study population was students at SMKN 1 Sidoarjo, with class X TITL 1 as the control group and class X TITL 2 as the experimental group. A rating scale was used to analyze the validity data and student responses, and a t-test at a significance level of 0.05 was used to examine differences in learning outcomes. The results show: 1) the average validity of the learning devices using Edmodo is 88.14% (lesson plan 92.45%, pretest-posttest 89.15%, learning material 84.64%, and affective and psychomotor-portfolio observation sheets 86.33%), which meets the very good criteria and is very suitable for research use; 2) the students' response questionnaire after instruction using the LMS with Edmodo scored 86.03%, a very good category, and students agreed that Edmodo can be used in learning; and 3) comparing learning outcomes of the LMS with Edmodo against the DLM: a) there is a significant difference in cognitive learning outcomes, with average outcomes of 81.69 for students taught using Edmodo versus 76.39 for students taught with the DLM; b) there is a difference in affective learning outcomes, with averages of 83.50 (Edmodo) versus 80.34 (DLM); and c) there is a difference in psychomotor learning outcomes, with averages of 85.60 (Edmodo) versus 82.31 (DLM).

  13. Generation of Higher Order Modes in a Rectangular Duct

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.; Cabell, Randolph H.; Brown, Donald E.

    2004-01-01

    Advanced noise control methodologies to reduce sound emission from aircraft engines take advantage of the modal structure of the noise in the duct. This noise is caused by the interaction of rotor wakes with downstream obstructions such as exit guide vanes. Mode synthesis has been accomplished in circular ducts and current active noise control work has made use of this capability to cancel fan noise. The goal of the current effort is to examine the fundamental process of higher order mode propagation through an acoustically treated, curved duct. The duct cross-section is rectangular to permit greater flexibility in representation of a range of duct curvatures. The work presented is the development of a feedforward control system to generate a user-specified modal pattern in the duct. The multiple-error, filtered-x LMS algorithm is used to determine the magnitude and phase of signal input to the loudspeakers to produce a desired modal pattern at a set of error microphones. Implementation issues, including loudspeaker placement and error microphone placement, are discussed. Preliminary results from a 9-3/8 inch by 21 inch duct, using 12 loudspeakers and 24 microphones, are presented. These results demonstrate the ability of the control system to generate a user-specified mode while suppressing undesired modes.
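
    The multiple-error filtered-x LMS scheme described above can be sketched in a single-channel form: the reference is filtered through a model of the secondary (actuator-to-sensor) path before it drives the weight update. The secondary-path coefficients, tap count, and step size below are illustrative assumptions, not values from the experiment.

```python
import numpy as np

def fxlms(x, d, s, n_taps=16, mu=0.005):
    """Single-channel filtered-x LMS sketch (parameters are illustrative).
    x : reference signal correlated with the disturbance
    d : disturbance measured at the error sensor
    s : FIR estimate of the secondary path (actuator -> error sensor)
    """
    w = np.zeros(n_taps)                   # adaptive control filter weights
    y = np.zeros_like(x)                   # control filter output
    e = np.zeros_like(x)                   # residual at the error sensor
    xf = np.convolve(x, s)[:len(x)]        # reference filtered by the secondary path
    for n in range(n_taps, len(x)):
        xb = x[n - n_taps + 1:n + 1][::-1]
        y[n] = w @ xb                      # control signal
        # the actuator output reaches the sensor through the secondary path
        ys = s @ y[n - len(s) + 1:n + 1][::-1]
        e[n] = d[n] - ys
        xfb = xf[n - n_taps + 1:n + 1][::-1]
        w += mu * e[n] * xfb               # LMS update on the filtered reference
    return w, e

# Tonal disturbance through a toy secondary path: the residual should shrink.
t = np.arange(8000)
x = np.sin(2 * np.pi * 0.05 * t)           # reference tone
s = np.array([0.9, 0.4, 0.1])              # assumed secondary-path model
d = np.convolve(1.5 * x, s)[:len(x)]       # disturbance correlated with x
w, e = fxlms(x, d, s)
```

    For a pure tone with an exact secondary-path model, the error converges toward zero; the multiple-error version in the paper stacks many such updates across loudspeaker/microphone pairs.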

  14. Control of the Low-energy X-rays by Using MCNP5 and Numerical Analysis for a New Concept Intra-oral X-ray Imaging System

    NASA Astrophysics Data System (ADS)

    Huh, Jangyong; Ji, Yunseo; Lee, Rena

    2018-05-01

    An X-ray control algorithm to modulate the X-ray intensity distribution over the FOV (field of view) has been developed using numerical analysis and MCNP5, a particle transport simulation code based on the Monte Carlo method. X-rays, which are widely used in medical diagnostic imaging, should be controlled in order to maximize the performance of the X-ray imaging system; unlike a liquid or a gas, however, they cannot be conveyed through a physical conduit such as a pipe. In the present study, an X-ray control algorithm and technique to uniformize the X-ray intensity projected on the image sensor were developed using a flattening filter and a collimator, in order to alleviate the anisotropy of the X-ray distribution caused by intrinsic features of the X-ray generator. The proposed method, which combines MCNP5 modeling with numerical analysis, optimizes the flattening filter and collimator for a uniform X-ray distribution; their size and shape were estimated from the method. The simulation and experimental results both showed that the method yielded an intensity distribution over an X-ray field of 6×4 cm2 at an SID (source-to-image-receptor distance) of 5 cm with a uniformity of more than 90% when the flattening filter and collimator were mounted on the system. The proposed algorithm and technique are not confined to flattening filter development and can also be applied to other X-ray related research and development efforts.

  15. Grid artifact reduction for direct digital radiography detectors based on rotated stationary grids with homomorphic filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Dong Sik; Lee, Sanggyun

    2013-06-15

    Purpose: Grid artifacts are caused when using the antiscatter grid in obtaining digital x-ray images. In this paper, research on grid artifact reduction techniques is conducted especially for the direct detectors, which are based on amorphous selenium. Methods: In order to analyze and reduce the grid artifacts, the authors consider a multiplicative grid image model and propose a homomorphic filtering technique. For minimal damage due to filters, which are used to suppress the grid artifacts, rotated grids with respect to the sampling direction are employed, and min-max optimization problems for searching optimal grid frequencies and angles for given sampling frequencies are established. The authors then propose algorithms for the grid artifact reduction based on the band-stop filters as well as low-pass filters. Results: The proposed algorithms are experimentally tested for digital x-ray images, which are obtained from direct detectors with the rotated grids, and are compared with other algorithms. It is shown that the proposed algorithms can successfully reduce the grid artifacts for direct detectors. Conclusions: By employing the homomorphic filtering technique, the authors can considerably suppress the strong grid artifacts with relatively narrow-bandwidth filters compared to the normal filtering case. Using rotated grids also significantly reduces the ringing artifact. Furthermore, for specific grid frequencies and angles, the authors can use simple homomorphic low-pass filters in the spatial domain, and thus alleviate the grid artifacts with very low implementation complexity.
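
    The multiplicative grid model motivates the homomorphic step: taking the logarithm turns the multiplicative grid pattern into an additive one that a band-stop (notch) filter can remove, after which exponentiation restores the image. A minimal column-wise sketch, with an assumed sinusoidal grid and illustrative parameter names (not the paper's optimized filters):

```python
import numpy as np

def homomorphic_grid_suppress(img, grid_freq, bw=0.02):
    """Homomorphic band-stop sketch for a multiplicative grid pattern.
    img: 2D image with a sinusoidal grid along axis 0 at normalized
    frequency grid_freq (cycles/pixel); bw is the notch half-width."""
    log_img = np.log(np.clip(img, 1e-6, None))     # multiplicative -> additive
    F = np.fft.fft(log_img, axis=0)
    freqs = np.fft.fftfreq(img.shape[0])
    notch = np.abs(np.abs(freqs) - grid_freq) < bw  # band-stop mask
    F[notch, :] = 0.0
    return np.exp(np.fft.ifft(F, axis=0).real)      # back to image domain

# Toy test: a flat image multiplied by a grid pattern at 0.25 cycles/pixel
rows = np.arange(256)
grid = 1.0 + 0.2 * np.cos(2 * np.pi * 0.25 * rows)
img = 100.0 * np.outer(grid, np.ones(256))
out = homomorphic_grid_suppress(img, 0.25)
```

    Because the grid is multiplicative, notching in the log domain suppresses it without the wide filters a purely additive model would need, which is the point the conclusions make.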

  16. Effects of TiO2 addition on microwave dielectric properties of Li2MgSiO4 ceramics

    NASA Astrophysics Data System (ADS)

    Rose, Aleena; Masin, B.; Sreemoolanadhan, H.; Ashok, K.; Vijayakumar, T.

    2018-03-01

    Silicates have been widely studied for substrate applications in microwave integrated circuits owing to their low dielectric constant and low loss tangent values. Li2MgSiO4 (LMS) ceramics are synthesized through a solid-state reaction route using TiO2 as an additive to the pure ceramics. Variations in the dielectric properties of LMS upon TiO2 addition in different weight percentages (0.5, 1.5, 2) are studied while keeping the sintering parameters constant. The crystalline structure, phase composition, and microstructure of LMS and LMS-TiO2 ceramics were studied using an X-ray diffractometer and a high-resolution scanning electron microscope. Density was measured by the Archimedes method, and the microwave dielectric properties were examined by the cavity perturbation technique. LMS achieved a relative permittivity (ε r) of 5.73 and a dielectric loss (tan δ) of 5.897 × 10⁻⁴ at 8 GHz. Among the LMS-TiO2 ceramics, the 0.5 wt% TiO2 added LMS showed comparatively better dielectric properties than the other weight percentages, with ε r = 5.67 and tan δ = 7.737 × 10⁻⁴ at 8 GHz.

  17. A CCD Monolithic LMS Adaptive Analog Signal Processor Integrated Circuit.

    DTIC Science & Technology

    1980-03-01

    adaptive filter with electrically-reprogrammable MOS analog conductance weights. … The analog and digital peripheral MOS on-chip circuits are provided with…electrically reprogrammable analog weights at tap positions along a CCD analog delay line in order to form a basic linear combiner for adaptive filtering…electrically reprogrammable analog conductance weights were introduced with the use of non-volatile MNOS memory transistors biased in their triode

  18. Active Engine Mount Technology for Automobiles

    NASA Technical Reports Server (NTRS)

    Rahman, Z.; Spanos, J.

    1996-01-01

    We present a narrow-band tracking controller based on a variant of the Least Mean Square (LMS) algorithm [1,2,3] for suppressing automobile engine/drive-train vibration disturbances. The algorithm presented here has a simple structure and may be implemented on a low-cost microcontroller.
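
    A plain LMS adaptive noise canceller, the building block these LMS variants start from, fits in a few lines: the filter predicts the correlated disturbance from a reference, and the prediction error is the cleaned output. The tap count, step size, and toy noise path below are illustrative assumptions.

```python
import numpy as np

def lms_cancel(ref, primary, n_taps=8, mu=0.01):
    """Plain LMS adaptive noise canceller sketch.
    ref: noise reference; primary: signal + noise correlated with ref.
    Returns the error signal, i.e. the cleaned output."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        xb = ref[n - n_taps + 1:n + 1][::-1]
        y = w @ xb                   # estimate of the correlated noise
        out[n] = primary[n] - y      # error = cleaned signal
        w += mu * out[n] * xb        # LMS weight update
    return out

# Toy setup: a tone buried in noise that reaches the primary sensor
# through an assumed 2-tap path.
rng = np.random.default_rng(1)
t = np.arange(5000)
signal = np.sin(2 * np.pi * 0.01 * t)                       # desired tone
noise = rng.standard_normal(5000)                           # reference noise
primary = signal + np.convolve(noise, [0.8, -0.3])[:5000]   # corrupted signal
cleaned = lms_cancel(noise, primary)
```

    Because the desired signal is uncorrelated with the reference, the weights converge to the noise path and the error converges to the signal.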

  19. Wave field synthesis, adaptive wave field synthesis and ambisonics using decentralized transformed control: Potential applications to sound field reproduction and active noise control

    NASA Astrophysics Data System (ADS)

    Gauthier, Philippe-Aubert; Berry, Alain; Woszczyk, Wieslaw

    2005-09-01

    Sound field reproduction finds applications in listening to prerecorded music or in synthesizing virtual acoustics. The objective is to recreate a sound field in a listening environment. Wave field synthesis (WFS) is a known open-loop technology which assumes that the reproduction environment is anechoic. Classical WFS therefore does not perform well in a real reproduction space such as a room. Previous work has suggested that it is physically possible to reproduce a progressive wave field in an in-room situation using active control approaches. In this paper, a formulation of adaptive wave field synthesis (AWFS) introduces practical possibilities for adaptive sound field reproduction combining WFS and active control (with WFS departure penalization) with a limited number of error sensors. AWFS includes WFS and closed-loop ``Ambisonics'' as limiting cases. This leads to the modification of the multichannel filtered-reference least-mean-square (FXLMS) and the filtered-error LMS (FELMS) adaptive algorithms for AWFS. Decentralization of AWFS for sound field reproduction is introduced on the basis of sources' and sensors' radiation modes. Such decoupling may lead to decentralized control of source strength distributions and may reduce the computational burden of the FXLMS and FELMS algorithms used for AWFS. [Work funded by NSERC, NATEQ, Université de Sherbrooke and VRQ.]

  20. A fast method to emulate an iterative POCS image reconstruction algorithm.

    PubMed

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection. This paper derives a new method to solve an optimization problem. The nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projections onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancement denoising. Each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient. It contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.
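
    The pattern described here, one direct reconstruction followed by segments of nonlinear edge-preserving filtering instead of repeated reproject/backproject cycles, can be loosely sketched on a 1-D signal. A median filter stands in for the paper's edge-enhancing noise-smoothing filter, and all names and parameters are illustrative, not the paper's implementation.

```python
import numpy as np

def median_filter1d(x, k=5):
    """Simple odd-window median filter (edge-preserving denoiser stand-in)."""
    pad = k // 2
    xp = np.pad(x, pad, mode='edge')
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

def fast_pocs_emulation(recon, n_segments=3, iters_per_segment=2):
    """After a single direct reconstruction (`recon`), run short segments
    of nonlinear filtering rather than full iterative reprojection."""
    x = recon.copy()
    for _ in range(n_segments):
        for _ in range(iters_per_segment):
            x = median_filter1d(x)      # each segment is nonlinear filtering
    return x

rng = np.random.default_rng(2)
truth = np.repeat([0.0, 1.0, 0.2, 0.8], 64)            # piecewise-constant "image"
recon = truth + 0.1 * rng.standard_normal(truth.size)  # noisy direct reconstruction
denoised = fast_pocs_emulation(recon)
```

    The median filter smooths noise within each flat region while leaving the step edges intact, which is the behavior the edge-preserving constraint is meant to enforce.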

  1. Multidimensional deconvolution of optical microscope and ultrasound imaging using adaptive least-mean-square (LMS) inverse filtering

    NASA Astrophysics Data System (ADS)

    Sapia, Mark Angelo

    2000-11-01

    Three-dimensional microscope images typically suffer from reduced resolution due to the effects of convolution, optical aberrations and out-of-focus blurring. Two-dimensional ultrasound images are also degraded by convolutional blurring and various sources of noise. Speckle noise is a major problem in ultrasound images. In microscopy and ultrasound, various methods of digital filtering have been used to improve image quality. Several methods of deconvolution filtering have been used to improve resolution by reversing the convolutional effects, many of which are based on regularization techniques and non-linear constraints. The technique discussed here is a unique linear filter for deconvolving 3D fluorescence microscopy or 2D ultrasound images. The process is to solve for the filter completely in the spatial domain using an adaptive algorithm to converge to an optimum solution for de-blurring and resolution improvement. There are two key advantages of using an adaptive solution: (1) it efficiently solves for the filter coefficients by taking into account all sources of noise and degraded resolution at the same time, and (2) it achieves near-perfect convergence to the ideal linear deconvolution filter. This linear adaptive technique has other advantages such as avoiding artifacts of frequency-domain transformations and concurrent adaptation to suppress noise. Ultimately, this approach results in better signal-to-noise characteristics with virtually no edge-ringing. Many researchers have not adopted linear techniques because of poor convergence, noise instability and negative valued data in the results. The methods presented here overcome many of these well-documented disadvantages and provide results that clearly out-perform other linear methods and may also out-perform regularization and constrained algorithms. In particular, the adaptive solution is most responsible for overcoming the poor performance associated with linear techniques. 
This linear adaptive approach to deconvolution is demonstrated with results of restoring blurred phantoms for both microscopy and ultrasound and restoring 3D microscope images of biological cells and 2D ultrasound images of human subjects (courtesy of General Electric and Diasonics, Inc.).

  2. A multi-reference filtered-x-Newton narrowband algorithm for active isolation of vibration and experimental investigations

    NASA Astrophysics Data System (ADS)

    Wang, Chun-yu; He, Lin; Li, Yan; Shuai, Chang-geng

    2018-01-01

    In engineering applications, ship machinery vibration may be induced by multiple rotational machines sharing a common vibration isolation platform and operating at the same time, and multiple sinusoidal components may be excited. These components may be located at frequencies with large differences or at very close frequencies. A multi-reference filtered-x Newton narrowband (MRFx-Newton) algorithm is proposed to control these multiple sinusoidal components in an MIMO (multiple input and multiple output) system, especially for those located at very close frequencies. The proposed MRFx-Newton algorithm can decouple and suppress multiple sinusoidal components located in the same narrow frequency band even though such components cannot be separated from each other by a narrowband-pass filter. Like the Fx-Newton algorithm, good real-time performance is also achieved by the faster convergence speed brought by the 2nd-order inverse secondary-path filter in the time domain. Experiments are also conducted to verify the feasibility and test the performance of the proposed algorithm installed in an active-passive vibration isolation system in suppressing the vibration excited by an artificial source and air compressors. The results show that the proposed algorithm not only has a convergence rate comparable to that of the Fx-Newton algorithm but also has better real-time performance and robustness than the Fx-Newton algorithm in active control of the vibration induced by multiple sound sources/rotational machines working on a shared platform.

  3. Development of adaptive noise reduction filter algorithm for pediatric body images in a multi-detector CT

    NASA Astrophysics Data System (ADS)

    Nishimaru, Eiji; Ichikawa, Katsuhiro; Okita, Izumi; Ninomiya, Yuuji; Tomoshige, Yukihiro; Kurokawa, Takehiro; Ono, Yutaka; Nakamura, Yuko; Suzuki, Masayuki

    2008-03-01

    Recently, several kinds of post-processing image filters that reduce the noise of computed tomography (CT) images have been proposed. However, these image filters are mostly designed for adults. Because they are not very effective in small (< 20 cm) display fields of view (FOV), we cannot use them for pediatric body images (e.g., premature babies and infant children). We have developed a new noise reduction filter algorithm for pediatric body CT images. This algorithm is based on 3D post-processing in which the output pixel values are calculated by nonlinear interpolation in the z-direction on the original volumetric data sets. The algorithm does not require in-plane (axial) processing, so the in-plane spatial resolution does not change. In phantom studies, our algorithm reduced the standard deviation (SD) by up to 40% without affecting the spatial resolution of the x-y plane and z-axis, and improved the CNR by up to 30%. This newly developed filter algorithm will be useful for diagnosis and radiation dose reduction in pediatric body CT.

  4. Experiments on active isolation using distributed PVDF error sensors

    NASA Technical Reports Server (NTRS)

    Lefebvre, S.; Guigou, C.; Fuller, C. R.

    1992-01-01

    A control system based on a two-channel narrow-band LMS algorithm is used to isolate periodic vibration at low frequencies on a structure composed of a rigid top plate mounted on a flexible receiving plate. The control performance of distributed PVDF error sensors and accelerometer point sensors is compared. For both sensors, high levels of global reduction, up to 32 dB, have been obtained. It is found that, by driving the PVDF strip output voltage to zero, the controller may force the structure to vibrate so that the integration of the strain under the length of the PVDF strip is zero. This ability of the PVDF sensors to act as spatial filters is especially relevant in active control of sound radiation. It is concluded that the PVDF sensors are flexible, nonfragile, and inexpensive and can be used as strain sensors for active control applications of vibration isolation and sound radiation.

  5. OH/H2O Detection Capability Evaluation on Chang'e-5 Lunar Mineralogical Spectrometer (LMS)

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Ren, Xin; Liu, Jianjun; Li, Chunlai; Mu, Lingli; Deng, Liyan

    2016-10-01

    The Chang'e-5 (CE-5) lunar sample return mission is scheduled to launch in 2017 to bring back lunar regolith and drill samples. The Chang'e-5 Lunar Mineralogical Spectrometer (LMS), one of the three sets of scientific payload installed on the lander, is used to collect in-situ spectra and analyze the mineralogical composition of the sampling site. It can also help to select the sampling site and to compare laboratory spectra of the returned samples with in-situ data. LMS employs acousto-optic tunable filters (AOTFs) and is composed of a VIS/NIR module (0.48μm-1.45μm) and an IR module (1.4μm-3.2μm). It has a spectral resolution ranging from 3 to 25 nm, with a field of view (FOV) of 4.24°×4.24°. Unlike the Chang'e-3 VIS/NIR Imaging Spectrometer (VNIS), the spectral coverage of LMS is extended from 2.4μm to 3.2μm, which provides the capability to identify H2O/OH absorption features around 2.7μm. An aluminum plate and an Infragold plate are fixed in the dust cover and used as calibration targets in the VIS/NIR and IR spectral ranges, respectively, when the dust cover is open. Before launch, a ground verification test of LMS needs to be conducted in order to: 1) test and verify the detection capability of LMS by evaluating the quality of image and spectral data collected for simulated lunar samples; and 2) evaluate the accuracy of data processing methods by simulating the instrument working on the Moon. The ground verification test will be conducted both in the lab and in the field. The spectra of simulated lunar regolith/mineral samples will be collected simultaneously by the LMS and two calibrated spectrometers: an FTIR spectrometer (Model 102F) and an ASD FieldSpec 4 Hi-Res spectrometer. In this study, the results of the LMS ground verification test will be reported, and the OH/H2O detection capability will be evaluated in particular.

  6. Semantic Web-Driven LMS Architecture towards a Holistic Learning Process Model Focused on Personalization

    ERIC Educational Resources Information Center

    Kerkiri, Tania

    2010-01-01

    A comprehensive presentation is made of the modular architecture of an e-learning platform, with a distinctive emphasis on content personalization, combining advantages of semantic web technology, collaborative filtering, and recommendation systems. Modules of this architecture handle information about both the domain-specific didactic…

  7. Light-driven liquid metal nanotransformers for biomedical theranostics

    NASA Astrophysics Data System (ADS)

    Chechetka, Svetlana A.; Yu, Yue; Zhen, Xu; Pramanik, Manojit; Pu, Kanyi; Miyako, Eijiro

    2017-05-01

    Room temperature liquid metals (LMs) represent a class of emerging multifunctional materials with attractive novel properties. Here, we show that photopolymerized LMs present a unique nanoscale capsule structure characterized by high water dispersibility and low toxicity. We also demonstrate that the LM nanocapsule generates heat and reactive oxygen species under biologically neutral near-infrared (NIR) laser irradiation. Concomitantly, NIR laser exposure induces a transformation in LM shape, destruction of the nanocapsules, contactless controlled release of the loaded drugs, optical manipulations of a microfluidic blood vessel model and spatiotemporal targeted marking for X-ray-enhanced imaging in biological organs and a living mouse. By exploiting the physicochemical properties of LMs, we achieve effective cancer cell elimination and control of intercellular calcium ion flux. In addition, LMs display a photoacoustic effect in living animals during NIR laser treatment, making this system a powerful tool for bioimaging.

  8. Light-driven liquid metal nanotransformers for biomedical theranostics

    PubMed Central

    Chechetka, Svetlana A.; Yu, Yue; Zhen, Xu; Pramanik, Manojit; Pu, Kanyi; Miyako, Eijiro

    2017-01-01

    Room temperature liquid metals (LMs) represent a class of emerging multifunctional materials with attractive novel properties. Here, we show that photopolymerized LMs present a unique nanoscale capsule structure characterized by high water dispersibility and low toxicity. We also demonstrate that the LM nanocapsule generates heat and reactive oxygen species under biologically neutral near-infrared (NIR) laser irradiation. Concomitantly, NIR laser exposure induces a transformation in LM shape, destruction of the nanocapsules, contactless controlled release of the loaded drugs, optical manipulations of a microfluidic blood vessel model and spatiotemporal targeted marking for X-ray-enhanced imaging in biological organs and a living mouse. By exploiting the physicochemical properties of LMs, we achieve effective cancer cell elimination and control of intercellular calcium ion flux. In addition, LMs display a photoacoustic effect in living animals during NIR laser treatment, making this system a powerful tool for bioimaging. PMID:28561016

  9. Real-time 3D adaptive filtering for portable imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but it is computationally very demanding and hence often cannot run with sufficient performance on a portable platform. In recent years, advanced multicore DSPs have been introduced that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms like 3D adaptive filtering, improving the image quality of portable medical imaging devices. In this study, the performance of a 3D adaptive filtering algorithm on a digital signal processor (DSP) is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels at a rate of 10 MVoxels/s.

  10. An FPGA-based DS-CDMA multiuser demodulator employing adaptive multistage parallel interference cancellation

    NASA Astrophysics Data System (ADS)

    Li, Xinhua; Song, Zhenyu; Zhan, Yongjie; Wu, Qiongzhi

    2009-12-01

    Since the system capacity is severely limited, reducing the multiple access interference (MAI) is necessary in the multiuser direct-sequence code division multiple access (DS-CDMA) system used in the telecommunication terminals' data-transfer link. In this paper, after reviewing various multiuser detection schemes, we adopt an adaptive multistage parallel interference cancellation (PIC) structure in the demodulator, based on the least mean square (LMS) algorithm, to eliminate the MAI. Neither a training sequence nor a pilot signal is needed in the proposed scheme, and its implementation complexity can be greatly reduced by an approximate LMS algorithm. The algorithm and its FPGA implementation are then derived. Simulation results show that the proposed adaptive PIC can outperform some of the existing interference cancellation methods in AWGN channels. The hardware setup of the multiuser demodulator is described, and experimental results demonstrate large performance gains over the conventional single-user demodulator.

  11. Ares I-X Best Estimated Trajectory Analysis and Results

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Beck, Roger E.; Starr, Brett R.; Derry, Stephen D.; Brandon, Jay; Olds, Aaron D.

    2011-01-01

    The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground based sensors that were available, including inertial measurement units, radar, air-data, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.
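
    The predict/update cycle underlying the Iterative Extended Kalman Filter can be illustrated with a plain linear constant-velocity filter tracking noisy position measurements; this is a minimal stand-in for illustration, not the Ares I-X reconstruction code, and all model parameters below are assumptions.

```python
import numpy as np

def kalman_track(z, dt=1.0, q=0.001, r=1.0):
    """Linear Kalman filter sketch with a constant-velocity state model.
    z: noisy position measurements; q, r: assumed process/measurement noise."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])              # we only measure position
    Q = q * np.eye(2)                       # process-noise covariance
    R = np.array([[r]])                     # measurement-noise covariance
    x = np.array([z[0], 0.0])               # initial state
    P = np.eye(2)                           # initial state covariance
    est = np.zeros(len(z))
    for k, zk in enumerate(z):
        x = F @ x                           # predict state
        P = F @ P @ F.T + Q                 # predict covariance
        y = zk - H @ x                      # innovation
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + (K @ y).ravel()             # update state
        P = (np.eye(2) - K @ H) @ P         # update covariance
        est[k] = x[0]
    return est

rng = np.random.default_rng(4)
t = np.arange(200.0)
true_pos = 0.5 * t                          # constant-velocity "trajectory"
z = true_pos + rng.standard_normal(200)     # noisy radar-like measurements
est = kalman_track(z)
```

    The extended and iterated variants used for flight reconstruction follow the same cycle, but relinearize the (nonlinear) dynamics and measurement models about the current estimate.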

  12. Ares I-X Best Estimated Trajectory and Comparison with Pre-Flight Predictions

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Beck, Roger E.; Derry, Stephen D.; Brandon, Jay M.; Starr, Brett R.; Tartabini, Paul V.; Olds, Aaron D.

    2011-01-01

    The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground based sensors that were available, including inertial measurement units, radar, air-data, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.

  13. Motion artifact removal algorithm by ICA for e-bra: a women ECG measurement system

    NASA Astrophysics Data System (ADS)

    Kwon, Hyeokjun; Oh, Sechang; Varadan, Vijay K.

    2013-04-01

    Wearable ECG (electrocardiogram) measurement systems are increasingly being developed for people who suffer from CVD (cardiovascular disease) and have very active lifestyles. In female CVD patients especially, several abnormal symptoms accompany CVDs, so monitoring women's ECG signals is a significant diagnostic method for preventing sudden heart attack. The E-Bra ECG measurement system from our previous work provides a more convenient option for women than a Holter monitor. The E-Bra system was developed with a motion-artifact removal algorithm using an adaptive filter with LMS (least mean square) and a wandering-noise baseline detection algorithm. In this paper, ICA (independent component analysis) algorithms are proposed to remove motion-artifact components in the E-Bra system. First, the ICA algorithms are developed with two statistical contrast measures, kurtosis and entropy, and evaluated in simulations with an ECG signal created by the sgolayfilt function of MATLAB, a noise signal containing 0.4 Hz, 1.1 Hz, and 1.9 Hz components, and a weighting vector W estimated by kurtosis or entropy. A correlation value indicates the degree of similarity between the created ECG signal and the estimated new ECG signal. In the real-time E-Bra system, two pseudo signals are extracted by multiplying a random weighting vector W with the measured ECG signal from the E-Bra system and the noise-component signal obtained by the noise-extraction algorithm from our previous work. The proposed ICA algorithm, based on kurtosis or entropy, is used to estimate the new ECG signal Y without the noise component.
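
    A kurtosis-based contrast of the kind mentioned here can be sketched as a one-unit, FastICA-style fixed-point iteration on whitened mixtures; the toy "ECG plus motion artifact" mixture and all parameters below are illustrative assumptions, not the E-Bra implementation.

```python
import numpy as np

def whiten(X):
    """Center and whiten mixed observations (rows = channels)."""
    X = X - X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)
    return E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ X

def ica_kurtosis(X, n_iter=100, seed=0):
    """One-unit kurtosis-based fixed-point iteration (FastICA-style sketch)."""
    Z = whiten(X)
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        y = w @ Z
        w = (Z * y**3).mean(axis=1) - 3 * w   # kurtosis contrast update
        w /= np.linalg.norm(w)
    return w @ Z

# Toy mixture: a spiky "ECG-like" source plus a sinusoidal motion artifact,
# observed through an assumed 2x2 mixing matrix.
rng = np.random.default_rng(3)
t = np.arange(4000)
ecg = (rng.random(4000) < 0.01) * 5.0       # sparse spikes (super-Gaussian)
motion = np.sin(2 * np.pi * 0.001 * t)      # low-frequency artifact
X = np.array([[0.9, 0.5], [0.4, 0.8]]) @ np.vstack([ecg, motion])
recovered = ica_kurtosis(X)
```

    The fixed point converges (up to sign) to a direction extremizing the kurtosis, so the recovered component should closely match one of the two sources.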

  14. An Ensemble Approach in Converging Contents of LMS and KMS

    ERIC Educational Resources Information Center

    Sabitha, A. Sai; Mehrotra, Deepti; Bansal, Abhay

    2017-01-01

    Currently the challenges in e-Learning are converging the learning content from various sources and managing them within e-learning practices. Data mining learning algorithms can be used and the contents can be converged based on the Metadata of the objects. Ensemble methods use multiple learning algorithms and it can be used to converge the…

  15. Protein Crystal Movements and Fluid Flows During Microgravity Growth

    NASA Technical Reports Server (NTRS)

    Boggon, Titus J.; Chayen, Naomi E.; Snell, Edward H.; Dong, Jun; Lautenschlager, Peter; Potthast, Lothar; Siddons, D. Peter; Stojanoff, Vivian; Gordon, Elspeth; Thompson, Andrew W.; hide

    1998-01-01

    The growth of protein crystals suitable for x-ray crystal structure analysis is an important topic. The quality (perfection) of protein crystals is now being evaluated by mosaicity analysis (rocking curves) and x-ray topographic images, as well as by the diffraction resolution limit and overall data quality. In another study, use of hanging drop vapour diffusion geometry on the IML-2 shuttle mission showed, via CCD video monitoring, a growing apocrustacyanin C(sub 1) protein crystal executing near-cyclic movement, reminiscent of Marangoni convection flow of fluid, the crystals serving as "markers" of the fluid flow. A review is given here of existing results and experience over several microgravity missions. Some comment is given on gel protein crystal growth in attempts to 'mimic' the benefits of microgravity on Earth. Finally, the recent new results from our experiments on the shuttle mission LMS are described. These results include CCD video as well as interferometry during the mission, followed, on return to Earth, by reciprocal space mapping at the NSLS, Brookhaven, and full x-ray data collection on LMS and Earth control lysozyme crystals. Diffraction data recorded from LMS and ground control apocrustacyanin C(sub 1) crystals are also described.

  16. Computationally efficient algorithm for high sampling-frequency operation of active noise control

    NASA Astrophysics Data System (ADS)

    Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati

    2015-05-01

    In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency domain block ANC algorithms have been proposed in the past. These full block frequency domain ANC algorithms suffer from disadvantages such as large block delay, quantization error due to the computation of large transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed in which the long filters in the ANC system are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency domain partitioned block FXLMS (FPBFXLMS) algorithm is considerably reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analyses for different filter orders and partition sizes are presented. Systematic computer simulations are carried out for both proposed partitioned block ANC algorithms to show their accuracy compared to the time domain FXLMS algorithm.
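    For reference, the conventional time domain FXLMS baseline that these block algorithms accelerate can be sketched as follows. All paths and signals here are toy stand-ins (short FIR paths, a single tone plus noise), and a perfect secondary-path estimate is assumed; this is not the paper's partitioned-block method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical path models for illustration only (not from the paper):
# P = primary path (noise source -> error mic),
# S = secondary path (control speaker -> error mic), S_hat its estimate.
P = np.array([0.6, -0.3, 0.1])
S = np.array([0.5, 0.2])
S_hat = S.copy()                  # assume a perfect secondary-path estimate

L, mu, n = 16, 0.005, 4000        # control filter length, step size, samples
w = np.zeros(L)                   # adaptive control filter weights

# Tonal disturbance reference with a little broadband noise
x = np.sin(2 * np.pi * 0.05 * np.arange(n)) + 0.1 * rng.standard_normal(n)

x_buf = np.zeros(max(L, P.size))  # recent reference samples
fx_buf = np.zeros(L)              # reference filtered through S_hat
y_buf = np.zeros(S.size)          # recent control outputs
err = np.zeros(n)

for i in range(n):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[i]

    d = P @ x_buf[:P.size]        # disturbance at the error sensor
    y = w @ x_buf[:L]             # anti-noise control output
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    e = d - S @ y_buf             # residual picked up by the error sensor
    err[i] = e

    # FXLMS step: filter the reference through the secondary-path
    # estimate, then apply the LMS update on that filtered reference.
    fx = S_hat @ x_buf[:S_hat.size]
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx
    w += mu * e * fx_buf

print(np.mean(err[:500]**2), np.mean(err[-500:]**2))
```

    After adaptation, the residual error power at the (simulated) error sensor should be well below its initial value, which is the basic behavior the frequency domain variants reproduce at lower cost.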

  17. Optical Absorption Spectra of Nuclear Filters Modified by Deposition of Silver Nano- and Microparticles

    NASA Astrophysics Data System (ADS)

    Smolyanskii, A. S.; Kozlova, N. V.; Zheltova, A. V.; Aksyutina, A. S.; Shvedov, A. S.; Lakeev, S. G.

    2015-07-01

    Light scattering and interference patterns are studied in the optical absorption spectra of nuclear filters based on polyethylene terephthalate films modified by dry aerosol deposition of silver nano- and microparticles. Surface plasmon polaritons and localized plasmons formed by the passage of light through porous silver films are found to have an effect on the diffraction and interference modes. The thickness of silver nano- and microparticle coatings on the surface of the nuclear filters was determined from the shift in the interference patterns in the optical absorption spectra of the modified nuclear filters relative to the original nuclear filters. A correlation was found between the estimated coating thickness and the average surface roughness of the nuclear filters modified by layers of silver nano- and microparticles.

  18. Effect of the passive recovery period on the lactate minimum speed in sprinters and endurance runners.

    PubMed

    Denadai, B S; Higino, W P

    2004-12-01

    The objective of this study was to verify the effect of the passive recovery time following a supramaximal sprint exercise and the incremental exercise test on the lactate minimum speed (LMS). Thirteen sprinters and 12 endurance runners performed the following tests: (1) a maximal 500 m sprint followed by a passive recovery to determine the time to reach the peak blood lactate concentration; (2) after the maximal 500 m sprint, the athletes rested for eight minutes and then performed a 6 x 800 m incremental test in order to determine the speed corresponding to the lowest blood lactate concentration (LMS1); and (3) the same procedure as LMS1, differing only in the passive rest time, which was set in accordance with the time to peak lactate (LMS2). The time (min) to reach the peak blood lactate concentration was significantly higher in the sprinters (12.76 +/- 2.83) than in the endurance runners (10.25 +/- 3.01). There was no significant difference between LMS1 and LMS2 for either the endurance runners (285.7 +/- 19.9 vs. 283.9 +/- 17.8 m/min; r = 0.96) or the sprint runners (238.0 +/- 14.1 vs. 239.4 +/- 13.9 m/min; r = 0.93). We can conclude that the LMS is not influenced by a passive recovery period longer than eight minutes (adjusted according to the time to peak blood lactate), although blood lactate concentration may differ at this speed. The predominant type of training (aerobic or anaerobic) of the athletes does not seem to influence the phenomenon described above.

  19. Active control of aircraft engine inlet noise using compact sound sources and distributed error sensors

    NASA Technical Reports Server (NTRS)

    Burdisso, Ricardo (Inventor); Fuller, Chris R. (Inventor); O'Brien, Walter F. (Inventor); Thomas, Russell H. (Inventor); Dungan, Mary E. (Inventor)

    1996-01-01

    An active noise control system using a compact sound source is effective in reducing aircraft engine duct noise. The fan noise from a turbofan engine is controlled using an adaptive filtered-x LMS algorithm. Single- and multi-channel control systems are used to control the fan blade passage frequency (BPF) tone, and the BPF tone plus its first harmonic, for a plane wave excitation. A multi-channel control system is used to control any spinning mode, and to control both fan tones and a high pressure compressor BPF tone simultaneously. In order to make active control of turbofan inlet noise a viable technology, a compact sound source is employed to generate the control field. This control field sound source consists of an array of identical thin, cylindrically curved panels with an inner radius of curvature corresponding to that of the engine inlet. These panels are flush mounted inside the inlet duct and sealed on all edges to prevent leakage around the panel and to minimize the aerodynamic losses created by the addition of the panels. Each panel is driven by one or more piezoelectric force transducers mounted on the surface of the panel. The response of the panel to excitation is maximized when it is driven at its resonance; therefore, the panel is designed such that its fundamental frequency is near the tone to be canceled, typically 2000-4000 Hz.

  20. Active control of aircraft engine inlet noise using compact sound sources and distributed error sensors

    NASA Technical Reports Server (NTRS)

    Burdisso, Ricardo (Inventor); Fuller, Chris R. (Inventor); O'Brien, Walter F. (Inventor); Thomas, Russell H. (Inventor); Dungan, Mary E. (Inventor)

    1994-01-01

    An active noise control system using a compact sound source is effective in reducing aircraft engine duct noise. The fan noise from a turbofan engine is controlled using an adaptive filtered-x LMS algorithm. Single- and multi-channel control systems are used to control the fan blade passage frequency (BPF) tone, and the BPF tone plus its first harmonic, for a plane wave excitation. A multi-channel control system is used to control any spinning mode, and to control both fan tones and a high pressure compressor BPF tone simultaneously. In order to make active control of turbofan inlet noise a viable technology, a compact sound source is employed to generate the control field. This control field sound source consists of an array of identical thin, cylindrically curved panels with an inner radius of curvature corresponding to that of the engine inlet. These panels are flush mounted inside the inlet duct and sealed on all edges to prevent leakage around the panel and to minimize the aerodynamic losses created by the addition of the panels. Each panel is driven by one or more piezoelectric force transducers mounted on the surface of the panel. The response of the panel to excitation is maximized when it is driven at its resonance; therefore, the panel is designed such that its fundamental frequency is near the tone to be canceled, typically 2000-4000 Hz.

  1. Satellite Angular Rate Estimation From Vector Measurements

    NASA Technical Reports Server (NTRS)

    Azor, Ruth; Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    1996-01-01

    This paper presents an algorithm for estimating the angular rate vector of a satellite based on the time derivatives of vector measurements expressed in reference and body coordinates. The computed derivatives are fed into a special Kalman filter which yields an estimate of the spacecraft angular velocity. The filter, named the Extended Interlaced Kalman Filter (EIKF), is an extension of the Kalman filter which, although linear, estimates the state of a nonlinear dynamic system. It consists of two or three parallel Kalman filters whose individual estimates are fed to one another and are considered as known inputs by the other parallel filter(s). The nonlinear dynamics stem from the nonlinear differential equation that describes the rotation of a three dimensional body. Initial results, using simulated data and real Rossi X-ray Timing Explorer (RXTE) data, indicate that the algorithm is efficient and robust.

  2. Applications of nonlocal means algorithm in low-dose X-ray CT image processing and reconstruction: a review

    PubMed Central

    Zhang, Hao; Zeng, Dong; Zhang, Hua; Wang, Jing; Liang, Zhengrong

    2017-01-01

    Low-dose X-ray computed tomography (LDCT) imaging is highly recommended for use in the clinic because of growing concerns over excessive radiation exposure. However, the CT images reconstructed by the conventional filtered back-projection (FBP) method from low-dose acquisitions may be severely degraded with noise and streak artifacts due to excessive X-ray quantum noise, or with view-aliasing artifacts due to insufficient angular sampling. In 2005, the nonlocal means (NLM) algorithm was introduced as a non-iterative edge-preserving filter to denoise natural images corrupted by additive Gaussian noise, and showed superior performance. It has since been adapted and applied to many other image types and various inverse problems. This paper specifically reviews the applications of the NLM algorithm in LDCT image processing and reconstruction, and explicitly demonstrates its improving effects on the reconstructed CT image quality from low-dose acquisitions. The effectiveness of these applications on LDCT and their relative performance are described in detail. PMID:28303644
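    The NLM principle reviewed above — replacing each pixel by a weighted average of pixels whose surrounding patches look similar — can be sketched as follows. This is an unoptimized toy implementation on a synthetic image; the patch size, search window, and filtering parameter h are illustrative choices, not values from the review.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.35):
    """Minimal nonlocal means for a 2D float image (illustrative, slow)."""
    pad, s = patch // 2, search // 2
    padded = np.pad(img, pad + s, mode="reflect")
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad + s, j + pad + s
            ref = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            weights, vals = [], []
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pad:ni + pad + 1, nj - pad:nj + pad + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch dissimilarity
                    weights.append(np.exp(-d2 / h**2))
                    vals.append(padded[ni, nj])
            weights = np.array(weights)
            out[i, j] = np.dot(weights, vals) / weights.sum()
    return out

rng = np.random.default_rng(1)
clean = np.zeros((24, 24)); clean[8:16, 8:16] = 1.0   # toy piecewise-constant image
noisy = clean + 0.15 * rng.standard_normal(clean.shape)
den = nlm_denoise(noisy)
print(np.mean((noisy - clean)**2), np.mean((den - clean)**2))
```

    Because dissimilar patches across an edge receive near-zero weight, the averaging suppresses the additive noise while largely preserving the edge, which is the edge-preserving property the review highlights.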

  3. A Laplacian based image filtering using switching noise detector.

    PubMed

    Ranjbaran, Ali; Hassan, Anwar Hasni Abu; Jafarpour, Mahboobe; Ranjbaran, Bahar

    2015-01-01

    This paper presents a Laplacian-based image filtering method. Using a local noise estimator function in an energy functional minimization scheme, we show that the Laplacian, well known as an edge detection function, can also be used for noise removal. The algorithm can be implemented on a 3x3 window and is easily tuned by the number of iterations. Image denoising reduces to decreasing each pixel's value by its Laplacian weighted by the local noise estimator. The only parameter controlling smoothness is the number of iterations. The noise reduction quality of the introduced method is evaluated and compared with classic algorithms such as Wiener and total-variation-based filters for Gaussian noise, and with the state-of-the-art method BM3D for some images. The algorithm appears to be easy, fast and comparable with many classic denoising algorithms for Gaussian noise.
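    The iterative update described above can be sketched as follows. Note the paper's switching local noise estimator is replaced here by a constant step weight lam, so this is only a plain Laplacian-diffusion sketch of the idea, not the paper's method.

```python
import numpy as np

def laplacian(u):
    # 5-point discrete Laplacian with reflected borders
    p = np.pad(u, 1, mode="reflect")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * u

def laplacian_denoise(img, iters=10, lam=0.15):
    """Iteratively move each pixel toward its neighborhood mean.
    lam stands in for the paper's local noise estimator weight; it must
    stay below 0.25 for stability of this explicit scheme."""
    u = img.astype(float).copy()
    for _ in range(iters):
        u += lam * laplacian(u)   # Laplacian acts as the smoothing operator
    return u

rng = np.random.default_rng(2)
noisy = rng.standard_normal((16, 16))   # pure noise: variance should shrink
out = laplacian_denoise(noisy)
print(noisy.var(), out.var())
```

    The number of iterations plays exactly the smoothness-control role described in the abstract: more iterations, more diffusion.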

  4. Ionospheric gravity wave measurements with the USU dynasonde

    NASA Technical Reports Server (NTRS)

    Berkey, Frank T.; Deng, Jun Yuan

    1992-01-01

    A method for the measurement of ionospheric gravity waves (GWs) using the USU Dynasonde is outlined. This method consists of a series of individual procedures, which include functions for data acquisition, adaptive scaling, polarization discrimination, interpolation and extrapolation, digital filtering, windowing, spectrum analysis, GW detection, and graphics display. Concepts of system theory are applied to treat the ionosphere as a system. An adaptive ionogram scaling method was developed for automatically extracting ionogram echo traces from noisy raw sounding data. The method uses the well known least mean square (LMS) algorithm to form a stochastic optimal estimate of the echo trace, which is then used to control a moving window. The window tracks the echo trace, simultaneously eliminating the noise and interference. Experimental results show that the proposed method functions as designed. Case studies which extract GWs from ionosonde measurements were carried out using the techniques described. Geophysically significant events were detected, and the processed results are illustrated graphically. The method was also developed with real-time implementation in mind.
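    The LMS estimator at the heart of this (and several other records here) can be illustrated with a generic system-identification sketch: the filter weights converge toward an unknown FIR response from input/output data alone. The coefficients and signals below are hypothetical, not Dynasonde data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unknown system to identify (illustrative stand-in for the trace dynamics)
h_true = np.array([0.8, -0.4, 0.2, 0.1])
L, mu, n = 4, 0.05, 2000          # filter length, step size, samples

x = rng.standard_normal(n)                                 # input
d = np.convolve(x, h_true)[:n] + 0.01 * rng.standard_normal(n)  # noisy output

w = np.zeros(L)
buf = np.zeros(L)
for i in range(n):
    buf = np.roll(buf, 1); buf[0] = x[i]
    e = d[i] - w @ buf            # a-priori estimation error
    w += mu * e * buf             # LMS weight update

print(w.round(2))
```

    With a small step size and persistently exciting input, w settles close to h_true; the same stochastic-gradient estimate is what tracks the echo trace in the scaling method above.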

  5. Mathematical filtering minimizes metallic halation of titanium implants in MicroCT images.

    PubMed

    Ha, Jee; Osher, Stanley J; Nishimura, Ichiro

    2013-01-01

    Microcomputed tomography (MicroCT) images containing titanium implants suffer from x-ray scattering artifacts, and the implant surface is critically affected by metallic halation. To reduce the metallic halation artifact, a nonlinear total variation denoising algorithm, the Split Bregman algorithm, was applied to the digital data set of the MicroCT images. This study demonstrated that the use of a mathematical filter can successfully reduce metallic halation, facilitating the osseointegration evaluation at the bone-implant interface in the reconstructed images.

  6. Neural network Hilbert transform based filtered backprojection for fast inline x-ray inspection

    NASA Astrophysics Data System (ADS)

    Janssens, Eline; De Beenhouwer, Jan; Van Dael, Mattias; De Schryver, Thomas; Van Hoorebeke, Luc; Verboven, Pieter; Nicolai, Bart; Sijbers, Jan

    2018-03-01

    X-ray imaging is an important tool for quality control since it allows inspection of the interior of products in a non-destructive way. Conventional x-ray imaging, however, is slow and expensive. Inline x-ray inspection, on the other hand, can pave the way towards fast and individual quality control, provided that a sufficiently high throughput can be achieved at minimal cost. To meet these criteria, an inline inspection acquisition geometry is proposed in which the object moves and rotates on a conveyor belt while it passes a fixed source and detector. Moreover, for this acquisition geometry, a new neural-network-based reconstruction algorithm is introduced: the neural network Hilbert transform based filtered backprojection. The proposed algorithm is evaluated on both simulated and real inline x-ray data and has been shown to generate high quality reconstructions of 400 × 400 pixels within 200 ms, thereby meeting the high-throughput criteria.

  7. Geometry correction Algorithm for UAV Remote Sensing Image Based on Improved Neural Network

    NASA Astrophysics Data System (ADS)

    Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao

    2018-03-01

    Aiming at the disadvantages of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed. An adaptive genetic algorithm (AGA) and an RBF neural network are introduced into this algorithm. Combined with the geometry correction principle for UAV remote sensing images, the algorithm and solving steps of AGA-RBF are presented in order to realize geometry correction for UAV remote sensing. The correction accuracy and operational efficiency are improved by optimizing the structure and connection weights of the RBF neural network with the AGA and the LMS algorithm, respectively. Finally, experiments show that the AGA-RBF algorithm has the advantages of high correction accuracy, a high running rate, and strong generalization ability.

  8. Regularized iterative integration combined with non-linear diffusion filtering for phase-contrast x-ray computed tomography.

    PubMed

    Burger, Karin; Koehler, Thomas; Chabior, Michael; Allner, Sebastian; Marschner, Mathias; Fehringer, Andreas; Willner, Marian; Pfeiffer, Franz; Noël, Peter

    2014-12-29

    Phase-contrast x-ray computed tomography has a high potential to become clinically implemented because of its complementarity to conventional absorption contrast. In this study, we investigate noise-reducing but resolution-preserving analytical reconstruction methods to improve differential phase-contrast imaging. First, we apply the non-linear Perona-Malik filter on phase-contrast data before or after filtered back-projection reconstruction. Second, the Hilbert kernel is replaced by regularized iterative integration followed by ramp-filtered backprojection, as used for absorption-contrast imaging. Combining the Perona-Malik filter with this integration algorithm successfully reveals relevant sample features, quantitatively confirmed by significantly increased structural similarity indices and contrast-to-noise ratios. With this concept, phase-contrast imaging can be performed at considerably lower dose.
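    A minimal sketch of the Perona-Malik filter used above: anisotropic diffusion that smooths where gradients are small and stalls across strong edges. The exponential conduction function is the standard choice; the parameter values and the toy step image are illustrative, not from the study.

```python
import numpy as np

def perona_malik(img, iters=20, kappa=0.15, lam=0.2):
    """Anisotropic diffusion: g(|grad|) -> 1 in flat regions (smooth),
    g -> 0 across strong edges (preserve)."""
    g = lambda d: np.exp(-(d / kappa) ** 2)   # conduction coefficient
    u = img.astype(float).copy()
    for _ in range(iters):
        p = np.pad(u, 1, mode="reflect")
        dn = p[:-2, 1:-1] - u                 # differences to the 4 neighbors
        ds = p[2:, 1:-1] - u
        de = p[1:-1, 2:] - u
        dw = p[1:-1, :-2] - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(4)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0       # vertical step edge
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
den = perona_malik(noisy)
print(np.mean((noisy - clean)**2), np.mean((den - clean)**2))
```

    On the toy image the noise in the flat regions is diffused away while the unit step survives, which is why the filter can be applied to differential phase projections without destroying resolution.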

  9. Method and apparatus for digitally based high speed x-ray spectrometer

    DOEpatents

    Warburton, W.K.; Hubbard, B.

    1997-11-04

    A high speed, digitally based, signal processing system which accepts input data from a detector-preamplifier and produces a spectral analysis of the x-rays illuminating the detector. The system achieves high throughputs at low cost by dividing the required digital processing steps between a ``hardwired`` processor implemented in combinatorial digital logic, which detects the presence of the x-ray signals in the digitized data stream and extracts filtered estimates of their amplitudes, and a programmable digital signal processing computer, which refines the filtered amplitude estimates and bins them to produce the desired spectral analysis. One set of algorithms allows this hybrid system to match the resolution of analog systems while operating at much higher data rates. A second set of algorithms implemented in the processor allows the system to be self calibrating as well. The same processor also handles the interface to an external control computer. 19 figs.

  10. Method and apparatus for digitally based high speed x-ray spectrometer

    DOEpatents

    Warburton, William K.; Hubbard, Bradley

    1997-01-01

    A high speed, digitally based, signal processing system which accepts input data from a detector-preamplifier and produces a spectral analysis of the x-rays illuminating the detector. The system achieves high throughputs at low cost by dividing the required digital processing steps between a "hardwired" processor implemented in combinatorial digital logic, which detects the presence of the x-ray signals in the digitized data stream and extracts filtered estimates of their amplitudes, and a programmable digital signal processing computer, which refines the filtered amplitude estimates and bins them to produce the desired spectral analysis. One set of algorithms allows this hybrid system to match the resolution of analog systems while operating at much higher data rates. A second set of algorithms implemented in the processor allows the system to be self calibrating as well. The same processor also handles the interface to an external control computer.

  11. Robust Battery Fuel Gauge Algorithm Development, Part 3: State of Charge Tracking

    DTIC Science & Technology

    2014-10-19

    X. Zhang, F. Sun, and J. Fan, “State-of-charge estimation of the lithium-ion battery using an adaptive extended Kalman filter based on an improved...framework with extended Kalman filter for lithium-ion battery SOC and capacity estimation,” Applied Energy, vol. 92, pp. 694–704, 2012. [16] X. Hu, F. Sun, and Y. Zou, “Estimation of state of charge of a lithium-ion battery pack for electric vehicles using an adaptive Luenberger observer,” Energies

  12. Uncertainty analysis technique for OMEGA Dante measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M. J.; Widmann, K.; Sorce, C.

    2010-10-15

    The Dante is an 18 channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums, etc.) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
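    The Monte Carlo parameter variation idea is simple to sketch: perturb the inputs within their one-sigma calibration errors, rerun the unfold, and take the spread of the results as the error bar. The unfold below is a deliberately trivial stand-in (a weighted channel sum with hypothetical numbers); the real Dante unfold is far more involved.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative stand-in for the unfold algorithm: flux as a weighted sum
# of channel voltages. Voltages and gains are hypothetical.
def unfold(voltages, gains):
    return np.sum(voltages * gains)

v_meas = np.array([1.2, 0.8, 0.5, 0.3])   # measured channel voltages
gains  = np.array([2.0, 1.5, 1.0, 0.7])   # photometric calibration factors
sigma_cal = 0.05                           # 1-sigma fractional calibration error

# One thousand test voltage sets drawn from the Gaussian error model,
# each processed by the unfold to produce a flux sample.
trials = np.array([
    unfold(v_meas * (1 + sigma_cal * rng.standard_normal(v_meas.size)), gains)
    for _ in range(1000)
])
flux, err = trials.mean(), trials.std()
print(f"flux = {flux:.3f} +/- {err:.3f}")
```

    The standard deviation of the trial set is the quoted error bar; with a realistic unfold the same loop also propagates the unfold's own uncertainties.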

  13. Uncertainty Analysis Technique for OMEGA Dante Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M J; Widmann, K; Sorce, C

    2010-05-07

    The Dante is an 18 channel X-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g. hohlraums, etc.) at X-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the X-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.

  14. Fetal Electrocardiogram Extraction and Analysis Using Adaptive Noise Cancellation and Wavelet Transformation Techniques.

    PubMed

    Sutha, P; Jayanthi, V E

    2017-12-08

    Birth defect-related demise is mainly due to congenital heart defects. In the earlier stages of pregnancy, fetal problems can be identified by gathering information about the fetus, helping to avoid stillbirths. The gold standard used to monitor the health status of the fetus, cardiotocography (CTG), cannot be used for long-duration or continuous monitoring. There is a need for continuous, long-duration monitoring of fetal ECG signals to study the progressive health status of the fetus using portable devices. The non-invasive method of electrocardiogram recording is one of the best ways to diagnose fetal cardiac problems, preferable to invasive methods. Monitoring the fECG requires the development of miniaturized hardware and efficient signal processing algorithms to extract the fECG embedded in the mother's ECG. The paper discusses a prototype hardware developed to monitor and record the raw maternal ECG signal containing the fECG, and signal processing algorithms to extract the fetal electrocardiogram signal. We have proposed two methods of signal processing: the first is based on the least mean square (LMS) adaptive noise cancellation technique, and the other is based on the wavelet transformation technique. A prototype hardware was designed and developed to acquire the raw ECG signal containing the maternal and fetal ECG, and the signal processing techniques were used to eliminate the noise, extract the fetal ECG, and study the fetal heart rate variability. Both methods were evaluated with signals acquired from a fetal ECG simulator, from the PhysioNet database, and from a subject. Both methods are evaluated by finding the heart rate and its variability, the amplitude spectrum, and the mean value of the extracted fetal ECG. The accuracy, sensitivity, and positive predictive value are also determined for the fetal QRS detection technique.
    In this paper the adaptive filtering technique uses the sign-sign LMS algorithm, and the wavelet technique uses the Daubechies wavelet, employed along with denoising techniques, for the extraction of the fetal electrocardiogram. Both methods have good sensitivity and accuracy: for the adaptive method the sensitivity is 96.83 and the accuracy 89.87; for the wavelet method the sensitivity is 95.97 and the accuracy 88.5. Additionally, time domain parameters from the plot of heart rate variability of mother and fetus are analyzed.
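    The sign-sign LMS update used in the adaptive method replaces both the error and the reference sample in the weight update with their signs, which is cheap on embedded hardware. The sketch below runs it as an adaptive noise canceller on synthetic stand-in signals (a small "fetal" tone buried under a "maternal" component correlated with the reference); none of the signals or parameters are from the paper.

```python
import numpy as np

n, L, mu = 5000, 8, 0.002
t = np.arange(n)

fetal = 0.2 * np.sin(2 * np.pi * t / 25)    # stand-in fetal component
ref = np.sin(2 * np.pi * t / 60)            # reference (maternal) signal
h = np.array([0.9, -0.2, 0.1])              # unknown maternal-interference path
primary = fetal + np.convolve(ref, h)[:n]   # electrode: fetal + maternal

w = np.zeros(L); buf = np.zeros(L)
out = np.zeros(n)
for i in range(n):
    buf = np.roll(buf, 1); buf[0] = ref[i]
    e = primary[i] - w @ buf                # cancellation residual
    out[i] = e                              # residual = fetal estimate
    w += mu * np.sign(e) * np.sign(buf)     # sign-sign LMS update

print(np.mean((out[-1000:] - fetal[-1000:])**2))
```

    After convergence the residual is dominated by the fetal component; the sign-sign variant trades some steady-state accuracy for a multiplier-free update.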

  15. Performance Assessment of Different Pulse Reconstruction Algorithms for the ATHENA X-Ray Integral Field Unit

    NASA Technical Reports Server (NTRS)

    Peille, Phillip; Ceballos, Maria Teresa; Cobo, Beatriz; Wilms, Joern; Bandler, Simon; Smith, Stephen J.; Dauser, Thomas; Brand, Thorsten; Den Haretog, Roland; de Plaa, Jelle; hide

    2016-01-01

    The X-ray Integral Field Unit (X-IFU) microcalorimeter, on-board Athena, with its focal plane comprising 3840 Transition Edge Sensors (TESs) operating at 90 mK, will provide unprecedented spectral-imaging capability in the 0.2-12 keV energy range. It will rely on on-board digital processing of the current pulses induced by the heat deposited in the TES absorbers, so as to recover the energy of each individual event. Assessing the capabilities of the pulse reconstruction is required to understand the overall scientific performance of the X-IFU, notably in terms of energy resolution degradation with both increasing energy and count rate. Using synthetic data streams generated by the X-IFU End-to-End simulator, we present here a comprehensive benchmark of various pulse reconstruction techniques, ranging from standard optimal filtering to more advanced algorithms based on noise covariance matrices. Besides deriving the spectral resolution achieved by the different algorithms, a first assessment of the computing power and ground calibration needs is presented. Overall, all methods show similar performance, with the reconstruction based on noise covariance matrices showing the best improvement over the standard optimal filtering technique. Due to prohibitive calibration needs, this method might, however, not be applicable to the X-IFU, and the best compromise currently appears to be the so-called resistance space analysis, which also features very promising high count rate capabilities.
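    The "standard optimal filtering" baseline amounts to a minimum-variance amplitude estimate of a known pulse template against the noise covariance: a_hat = sᵀC⁻¹y / (sᵀC⁻¹s). The sketch below uses a toy two-exponential pulse and white noise for C; the X-IFU algorithms use measured, non-trivial covariance matrices.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 200
k = np.arange(n)
s = np.exp(-k / 30.0) - np.exp(-k / 5.0)   # model pulse template (illustrative)
C = 0.01 * np.eye(n)                       # noise covariance (white, for simplicity)
Cinv = np.linalg.inv(C)

def estimate_amplitude(y):
    """Optimal (minimum-variance unbiased) pulse amplitude estimate."""
    return (s @ Cinv @ y) / (s @ Cinv @ s)

a_true = 3.7                               # hypothetical event amplitude
y = a_true * s + 0.1 * rng.standard_normal(n)   # noisy pulse record
print(estimate_amplitude(y))
```

    The estimator's variance is 1/(sᵀC⁻¹s), which is why a well-characterized noise covariance (the calibration burden discussed above) directly sets the achievable energy resolution.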

  16. Vacancy-mediated fcc/bcc phase separation in Fe1-xNix ultrathin films

    DOE PAGES

    Mentes, T. O.; Stojic, N.; Vescovo, E.; ...

    2016-08-01

    The phase separation occurring in Fe-Ni thin films near the Invar composition is studied using high resolution spectromicroscopy techniques and density functional theory calculations. Annealed at temperatures around 300 °C, Fe0.70Ni0.30 films on W(110) break into micron-sized bcc and fcc domains with compositions in agreement with the bulk Fe-Ni phase diagram. Ni is found to be the diffusing species in forming the chemical heterogeneity. The experimentally determined energy barrier of 1.59 ± 0.09 eV is identified as the vacancy formation energy via density functional theory calculations. Thus, the principal role of the surface in the phase separation process is attributed to vacancy creation without interstitials.

  17. Comparison of different numerical treatments for x-ray phase tomography of soft tissue from differential phase projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelliccia, Daniele; Vaz, Raquel; Svalbe, Imants

    X-ray imaging of soft tissue is made difficult by its low absorbance. The use of x-ray phase imaging and tomography can significantly enhance the detection of these tissues, and several approaches have been proposed to this end. Methods such as analyzer-based imaging or grating interferometry produce differential phase projections that can be used to reconstruct the 3D distribution of the sample refractive index. We report on the quantitative comparison of three different methods to obtain x-ray phase tomography with filtered back-projection from differential phase projections in the presence of noise. The three procedures represent different numerical approaches to solving the same mathematical problem, namely phase retrieval and filtered back-projection. It is found that obtaining individual phase projections and subsequently applying a conventional filtered back-projection algorithm produces the best results for noisy experimental data, when compared with other procedures based on the Hilbert transform. The algorithms are tested on simulated phantom data with added noise and the predictions are confirmed by experimental data acquired using a grating interferometer. The experiment is performed on unstained adult zebrafish, an important model organism for biomedical studies. The method optimization described here allows resolution of weak soft tissue features, such as muscle fibers.

  18. Statistical Analysis of the LMS and Modified Stochastic Gradient Algorithms

    DTIC Science & Technology

    1989-05-14

    …of the input data and incorporated directly into recursive descriptions and/or nonuniform weighted mov… the algorithm as a data-dependent time… are also used to model the weight transient behav… These results are a measure of how rapidly the algo…

  19. Advanced Nonlinear Latent Variable Modeling: Distribution Analytic LMS and QML Estimators of Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Kelava, Augustin; Werner, Christina S.; Schermelleh-Engel, Karin; Moosbrugger, Helfried; Zapf, Dieter; Ma, Yue; Cham, Heining; Aiken, Leona S.; West, Stephen G.

    2011-01-01

    Interaction and quadratic effects in latent variable models have to date only rarely been tested in practice. Traditional product indicator approaches need to create product indicators (e.g., x₁², x₁x₄) to serve as indicators of each nonlinear latent construct. These approaches require the use of…

  20. High performance 3D adaptive filtering for DSP based portable medical imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable medical imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSP) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at 10 MVoxels/s by a 3D ultrasound probe. Relative performance and power consumption are compared between a reference PC (Quad Core CPU) and a TMS320C6678 DSP from Texas Instruments.

  1. Harmonic regression based multi-temporal cloud filtering algorithm for Landsat 8

    NASA Astrophysics Data System (ADS)

    Joshi, P.

    2015-12-01

    The Landsat data archive, though rich, has missing dates and periods owing to weather irregularities and inconsistent coverage. The satellite images are further subject to cloud cover effects, resulting in erroneous analysis and observation of ground features. Earlier studies showed that a change detection algorithm using statistical control charts on harmonic residuals of multi-temporal Landsat 5 data can detect a few prominent remnant clouds [Brooks, Evan B., et al., 2014]. In this work we build on this harmonic regression approach to detect and filter clouds using a multi-temporal series of Landsat 8 images. First, we compute the harmonic coefficients using fitting models on annual training data. The resulting time series of residuals is then subjected to Shewhart X-bar control charts, which signal the deviations of cloud points from the fitted multi-temporal Fourier curve. For a process with standard deviation σ, we found second and third order harmonic regression with an X-bar chart control limit Lσ in the range 0.5σ < Lσ < σ most efficient in detecting clouds. By implementing second order harmonic regression with successive X-bar chart control limits of L and 0.5L on the NDVI, NDSI and haze optimized transformation (HOT), and utilizing the seasonal physical properties of these parameters, we have designed a novel multi-temporal algorithm for filtering clouds from Landsat 8 images. The method is applied to Virginia and Alabama, in Landsat 8 UTM zones 17 and 16 respectively. Our algorithm efficiently filters all types of cloud cover with an overall accuracy greater than 90%. Because it operates multi-temporally and can recreate the multi-temporal database of images using only the coefficients of the Fourier regression, the algorithm is largely storage and time efficient. The results show good potential for this multi-temporal approach to cloud detection as a timely and targeted solution for the Landsat 8 research community, catering to the need for innovative processing solutions in the early stage of the satellite's mission.
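
The control-chart screening described above can be sketched in a few lines: fit a harmonic (Fourier) regression to an annual series, then flag observations whose residuals fall outside an X-bar-style limit L·σ. This is a minimal illustration on synthetic data; the function names, the 8-day sampling, and the injected NDVI dips are hypothetical, and the paper's multi-index (NDVI/NDSI/HOT) logic is not reproduced.

```python
import numpy as np

def harmonic_design(doy, order=2, period=365.25):
    """Design matrix for a harmonic (Fourier) regression of the given order."""
    t = 2.0 * np.pi * np.asarray(doy, dtype=float) / period
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        cols.append(np.cos(k * t))
        cols.append(np.sin(k * t))
    return np.column_stack(cols)

def flag_outliers(doy, values, order=2, L=0.75):
    """Fit harmonics by least squares; flag residuals beyond L*sigma."""
    X = harmonic_design(doy, order)
    beta, *_ = np.linalg.lstsq(X, values, rcond=None)
    resid = values - X @ beta
    sigma = resid.std(ddof=X.shape[1])
    return np.abs(resid) > L * sigma, resid

# Synthetic NDVI-like annual series with two injected cloud-contaminated dips
rng = np.random.default_rng(0)
doy = np.arange(0, 365, 8)
ndvi = 0.5 + 0.2 * np.sin(2 * np.pi * doy / 365.25) + rng.normal(0, 0.01, doy.size)
ndvi[10] -= 0.30   # clouds depress NDVI sharply
ndvi[30] -= 0.25
flags, resid = flag_outliers(doy, ndvi, order=2, L=0.75)
```

With the limit set to 0.75σ, inside the paper's 0.5σ–σ range, only the injected dips fall outside the control limits.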

  2. Documentation for subroutine REDUC3, an algorithm for the linear filtering of gridded magnetic data

    USGS Publications Warehouse

    Blakely, Richard J.

    1977-01-01

    Subroutine REDUC3 transforms a total field anomaly h1(x,y) , measured on a horizontal and rectangular grid, into a new anomaly h2(x,y). This new anomaly is produced by the same source as h1(x,y) , but (1) is observed at a different elevation, (2) has a source with a different direction of magnetization, and/or (3) has a different direction of residual field. Case 1 is tantamount to upward or downward continuation. Cases 2 and 3 are 'reduction to the pole', if the new inclinations of both the magnetization and regional field are 90 degrees. REDUC3 is a filtering operation applied in the wave-number domain. It first Fourier transforms h1(x,y) , multiplies by the appropriate filter, and inverse Fourier transforms the result to obtain h2(x,y). No assumptions are required about the shape of the source or how the intensity of magnetization varies within it.
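
REDUC3's three-step pattern (forward Fourier transform, multiply by a wavenumber-domain filter, inverse transform) is easy to sketch for case 1, upward continuation, where the filter is exp(-Δz·|k|). The function below is a hypothetical numpy illustration of that pattern, not the subroutine itself:

```python
import numpy as np

def upward_continue(h1, dx, dy, dz):
    """Continue a gridded anomaly h1(x, y) upward by dz (same units as dx, dy).
    The filtering is applied in the wavenumber domain: FFT, multiply by
    exp(-dz*|k|), inverse FFT -- the same three-step pattern REDUC3 uses."""
    ny, nx = h1.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    H1 = np.fft.fft2(h1)
    return np.real(np.fft.ifft2(H1 * np.exp(-dz * k)))

# Smooth test field: continuation attenuates short wavelengths but
# preserves the mean (the k = 0 term is multiplied by exp(0) = 1)
y, x = np.mgrid[0:64, 0:64]
field = np.sin(2 * np.pi * x / 32.0) * np.cos(2 * np.pi * y / 32.0) + 5.0
up = upward_continue(field, dx=1.0, dy=1.0, dz=2.0)
```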

  3. Implementation Issues of Adaptive Energy Detection in Heterogeneous Wireless Networks

    PubMed Central

    Sobron, Iker; Eizmendi, Iñaki; Martins, Wallace A.; Diniz, Paulo S. R.; Ordiales, Juan Luis; Velez, Manuel

    2017-01-01

    Spectrum sensing (SS) enables the coexistence of non-coordinated heterogeneous wireless systems operating in the same band. Due to its computational simplicity, the energy detection (ED) technique has been widely employed in SS applications; nonetheless, conventional ED may be unreliable under environmental impairments, justifying the use of ED-based variants. Assessing ED algorithms from theoretical and simulation viewpoints relies on several assumptions and simplifications which, eventually, lead to conclusions that do not necessarily meet the requirements imposed by real propagation environments. This work addresses those problems by dealing with practical implementation issues of adaptive least mean square (LMS)-based ED algorithms. The paper proposes a new adaptive ED algorithm that uses a variable step-size guaranteeing the LMS convergence in time-varying environments. Several implementation guidelines are provided and, additionally, an empirical assessment and validation with software-defined-radio-based hardware is carried out. Experimental results show good performance in terms of probabilities of detection (Pd>0.9) and false alarm (Pf∼0.05) in a range of low signal-to-noise ratios around [-4,1] dB, in both single-node and cooperative modes. The proposed sensing methodology enables a seamless monitoring of the radio electromagnetic spectrum in order to provide band occupancy information for an efficient usage among several wireless communications systems. PMID:28441751
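
As a rough sketch of a variable step-size LMS of the kind this paper builds on, the following uses the textbook Kwong-Johnston recursion (the step size is raised by the squared error, geometrically forgotten, and clipped to [mu_min, mu_max]). The adaptation rule and all parameter values are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def vss_lms(x, d, taps=4, mu0=0.01, alpha=0.97, gamma=1e-3,
            mu_min=1e-3, mu_max=0.1):
    """Variable step-size LMS (Kwong-Johnston style): a large error raises mu
    for fast tracking; as the filter converges, mu decays toward mu_min."""
    w = np.zeros(taps)
    mu = mu0
    err = np.empty(len(x) - taps)
    for n in range(taps, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # newest sample first
        e = d[n] - w @ u
        w = w + mu * e * u
        mu = float(np.clip(alpha * mu + gamma * e * e, mu_min, mu_max))
        err[n - taps] = e
    return w, err

# System identification demo: recover an unknown 4-tap FIR response
rng = np.random.default_rng(1)
x = rng.normal(size=5000)
h = np.array([0.6, -0.3, 0.1, 0.05])
d = np.convolve(x, h)[:len(x)]
w, err = vss_lms(x, d)
```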

  4. A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces

    NASA Astrophysics Data System (ADS)

    Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.

  5. A comparison of optimal MIMO linear and nonlinear models for brain-machine interfaces.

    PubMed

    Kim, S-P; Sanchez, J C; Rao, Y N; Erdogmus, D; Carmena, J M; Lebedev, M A; Nicolelis, M A L; Principe, J C

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
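
The baseline comparison in these two studies (the closed-form Wiener filter versus iterative LMS) can be illustrated on a toy linear decoding problem: the Wiener weights come from the normal equations w = R⁻¹p, with R the input autocorrelation matrix and p the input/target cross-correlation vector, while LMS approaches the same solution one sample at a time. The data below are synthetic stand-ins, not the monkey datasets.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "decoding" problem: the target is a fixed linear map of the inputs,
# standing in for the spike-rate -> kinematics mapping in the paper.
n, p = 4000, 8
X = rng.normal(size=(n, p))
w_true = rng.normal(size=p)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Wiener solution: solve R w = p directly.
R = X.T @ X / n
pvec = X.T @ y / n
w_wiener = np.linalg.solve(R, pvec)

# LMS reaches (approximately) the same weights by stochastic descent.
w_lms = np.zeros(p)
mu = 0.01
for xi, yi in zip(X, y):
    w_lms += mu * (yi - w_lms @ xi) * xi
```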

  6. Looking for the Signal: A guide to iterative noise and artefact removal in X-ray tomographic reconstructions of porous geomaterials

    NASA Astrophysics Data System (ADS)

    Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.

    2017-07-01

    X-ray micro- and nanotomography has evolved into a quantitative analysis tool rather than a mere qualitative visualization technique for the study of porous natural materials. Tomographic reconstructions are subject to noise that has to be handled by image filters prior to quantitative analysis. Typically, denoising filters are designed to handle random noise, such as Gaussian or Poisson noise. In tomographic reconstructions, noise has been projected from Radon space to Euclidean space, i.e. post-reconstruction noise cannot be expected to be random but to be correlated. Reconstruction artefacts, such as streak or ring artefacts, aggravate the filtering process, so algorithms performing well with random noise are not guaranteed to provide satisfactory results for X-ray tomography reconstructions. With sufficient image resolution, the crystalline origin of most geomaterials results in tomography images of objects that are untextured. We developed a denoising framework for these kinds of samples that combines a noise level estimate with iterative nonlocal means denoising. This allows splitting the denoising task into several weak denoising subtasks where the later filtering steps provide a controlled level of texture removal. We provide a hands-on explanation of this iterative denoising approach, and the validity and quality of the image enhancement filter were evaluated in a benchmarking experiment with noise footprints of varying levels of correlation and residual artefacts, extracted from real tomography reconstructions. We found that our denoising solutions were superior to other denoising algorithms over a broad range of contrast-to-noise ratios on artificial piecewise constant signals.

  7. Bilateral filtering using the full noise covariance matrix applied to x-ray phase-contrast computed tomography.

    PubMed

    Allner, S; Koehler, T; Fehringer, A; Birnbacher, L; Willner, M; Pfeiffer, F; Noël, P B

    2016-05-21

    The purpose of this work is to develop an image-based de-noising algorithm that exploits complementary information and noise statistics from multi-modal images, as they emerge in x-ray tomography techniques, for instance grating-based phase-contrast CT and spectral CT. Among the noise reduction methods, image-based de-noising is one popular approach and the so-called bilateral filter is a well known algorithm for edge-preserving filtering. We developed a generalization of the bilateral filter for the case where the imaging system provides two or more perfectly aligned images. The proposed generalization is statistically motivated and takes the full second order noise statistics of these images into account. In particular, it includes a noise correlation between the images and spatial noise correlation within the same image. The novel generalized three-dimensional bilateral filter is applied to the attenuation and phase images created with filtered backprojection reconstructions from grating-based phase-contrast tomography. In comparison to established bilateral filters, we obtain improved noise reduction and at the same time a better preservation of edges in the images on the examples of a simulated soft-tissue phantom, a human cerebellum and a human artery sample. The applied full noise covariance is determined via cross-correlation of the image noise. The filter results yield an improved feature recovery based on enhanced noise suppression and edge preservation as shown here on the example of attenuation and phase images captured with grating-based phase-contrast computed tomography. This is supported by quantitative image analysis. Without being bound to phase-contrast imaging, this generalized filter is applicable to any kind of noise-afflicted image data with or without noise correlation. Therefore, it can be utilized in various imaging applications and fields.
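
For reference, a plain single-image bilateral filter, the algorithm this paper generalizes, combines a spatial-closeness weight with an intensity-similarity weight so that smoothing does not cross edges. The sketch below is the standard textbook filter; the covariance-aware multi-image extension of the paper is not reproduced.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Edge-preserving bilateral filter: each output pixel is a weighted
    average where weights fall off with both spatial distance and
    intensity difference from the center pixel."""
    pad = np.pad(img, radius, mode="reflect")
    dy, dx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_spatial = np.exp(-(dx ** 2 + dy ** 2) / (2 * sigma_s ** 2))
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w = w_spatial * np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            out[i, j] = (w * patch).sum() / w.sum()
    return out

# Noisy step edge: smoothing should reduce noise but keep the step sharp
rng = np.random.default_rng(4)
img = np.zeros((32, 32)); img[:, 16:] = 1.0
noisy = img + rng.normal(0, 0.05, img.shape)
den = bilateral_filter(noisy)
```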

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierre, John W.; Wies, Richard; Trudnowski, Daniel

    Time-synchronized measurements provide rich information for estimating a power-system's electromechanical modal properties via advanced signal processing. This information is becoming critical for the improved operational reliability of interconnected grids. A given mode's properties are described by its frequency, damping, and shape. Modal frequencies and damping are useful indicators of power-system stress, usually declining with increased load or reduced grid capacity. Mode shape provides critical information for operational control actions. This project investigated many advanced techniques for power system identification from measured data focusing on mode frequency and damping ratio estimation. Investigators from the three universities coordinated their effort with Pacific Northwest National Laboratory (PNNL). Significant progress was made on developing appropriate techniques for system identification with confidence intervals and testing those techniques on field measured data and through simulation. Experimental data from the western area power system was provided by PNNL and Bonneville Power Administration (BPA) for both ambient conditions and for signal injection tests. Three large-scale tests were conducted for the western area in 2005 and 2006. Measured field PMU (Phasor Measurement Unit) data was provided to the three universities. A 19-machine simulation model was enhanced for testing the system identification algorithms. Extensive simulations were run with this model to test the performance of the algorithms. University of Wyoming researchers participated in four primary activities: (1) Block and adaptive processing techniques for mode estimation from ambient signals and probing signals, (2) confidence interval estimation, (3) probing signal design and injection method analysis, and (4) performance assessment and validation from simulated and field measured data. 
Subspace-based methods have been used to improve previous results from block processing techniques. Bootstrap techniques have been developed to estimate confidence intervals for the electromechanical modes from field measured data. Results were obtained using injected signal data provided by BPA. A new probing signal was designed that puts more strength into the signal for a given maximum peak-to-peak swing. Further simulations were conducted on a model based on measured data and with the modifications of the 19-machine simulation model. Montana Tech researchers participated in two primary activities: (1) continued development of the 19-machine simulation test system to include a DC line; and (2) extensive simulation analysis of the various system identification algorithms and bootstrap techniques using the 19-machine model. Researchers at the University of Alaska-Fairbanks focused on the development and testing of adaptive filter algorithms for mode estimation using data generated from simulation models and on data provided in collaboration with BPA and PNNL. Their efforts consisted of pre-processing field data, and testing and refining adaptive filter techniques (specifically the Least Mean Squares (LMS), the Adaptive Step-size LMS (ASLMS), and Error Tracking (ET) algorithms). They also improved convergence of the adaptive algorithms by using an initial estimate from the block processing AR method to initialize the weight vector for LMS. Extensive testing was performed on simulated data from the 19-machine model. This project was also extensively involved in the WECC (Western Electricity Coordinating Council) system wide tests carried out in 2005 and 2006. These tests involved injecting known probing signals into the western power grid. One of the primary goals of these tests was the reliable estimation of electromechanical mode properties from measured PMU data. 
Applied to the system were three types of probing inputs: (1) activation of the Chief Joseph Dynamic Brake, (2) mid-level probing at the Pacific DC Intertie (PDCI), and (3) low-level probing on the PDCI. The Chief Joseph Dynamic Brake is a 1400 MW disturbance to the system and is injected for half a second. For the mid and low-level probing, the Celilo terminal of the PDCI is modulated with a known probing signal. Similar but less extensive tests were conducted in June of 2000. The low-level probing signals were designed at the University of Wyoming. A number of important design factors were considered. The designed low-level probing signal used in the tests is a multi-sine signal. Its frequency content is focused in the range of the inter-area electromechanical modes. The most frequently used of these low-level multi-sine signals had a period of over two minutes, a root-mean-square (rms) value of 14 MW, and a peak magnitude of 20 MW. Up to 15 cycles of this probing signal were injected into the system, resulting in a processing gain of 15. The resulting measured response at points throughout the system was not much larger than the ambient noise present in the measurements.

  9. Enhanced performance of visible light communication employing 512-QAM N-SC-FDE and DD-LMS.

    PubMed

    Wang, Yuanquan; Huang, Xingxing; Zhang, Junwen; Wang, Yiguang; Chi, Nan

    2014-06-30

    In this paper, a novel hybrid time-frequency adaptive equalization algorithm based on a combination of frequency domain equalization (FDE) and decision-directed least mean square (DD-LMS) is proposed and experimentally demonstrated in a Nyquist single carrier visible light communication (VLC) system. Adopting this scheme, together with 512-ary quadrature amplitude modulation (512-QAM) and wavelength division multiplexing (WDM), an aggregate data rate of 4.22 Gb/s is successfully achieved using a single commercially available red-green-blue (RGB) light emitting diode (LED) with low bandwidth. The measured Q-factors for the 3 wavelength channels are all above the Q-limit. To the best of our knowledge, this is the highest data rate ever achieved by employing a commercially available RGB-LED.

  10. Edge enhancement algorithm for low-dose X-ray fluoroscopic imaging.

    PubMed

    Lee, Min Seok; Park, Chul Hee; Kang, Moon Gi

    2017-12-01

    Low-dose X-ray fluoroscopy has continually evolved to reduce radiation risk to patients during clinical diagnosis and surgery. However, the reduction in dose exposure causes quality degradation of the acquired images. In general, an X-ray device has a time-average pre-processor to remove the generated quantum noise. However, this pre-processor causes blurring and artifacts within the moving edge regions, and noise remains in the image. During high-pass filtering (HPF) to enhance edge detail, this noise in the image is amplified. In this study, a 2D edge enhancement algorithm comprising region adaptive HPF with the transient improvement (TI) method, as well as artifacts and noise reduction (ANR), was developed for degraded X-ray fluoroscopic images. The proposed method was applied in a static scene pre-processed by a low-dose X-ray fluoroscopy device. First, the sharpness of the X-ray image was improved using region adaptive HPF with the TI method, which facilitates sharpening of edge details without overshoot problems. Then, an ANR filter that uses an edge directional kernel was developed to remove the artifacts and noise that can occur during sharpening, while preserving edge details. The quantitative and qualitative results obtained by applying the developed method to low-dose X-ray fluoroscopic images and visually and numerically comparing the final images with images improved using conventional edge enhancement techniques indicate that the proposed method outperforms existing edge enhancement methods in terms of objective criteria and subjective visual perception of the actual X-ray fluoroscopic image. The developed edge enhancement algorithm performed well when applied to actual low-dose X-ray fluoroscopic images, not only by improving the sharpness, but also by removing artifacts and noise, including overshoot. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Adaptive filtering in biological signal processing.

    PubMed

    Iyer, V K; Ploysongsang, Y; Ramamoorthy, P A

    1990-01-01

    The high dependence of conventional optimal filtering methods on the a priori knowledge of the signal and noise statistics render them ineffective in dealing with signals whose statistics cannot be predetermined accurately. Adaptive filtering methods offer a better alternative, since the a priori knowledge of statistics is less critical, real time processing is possible, and the computations are less expensive for this approach. Adaptive filtering methods compute the filter coefficients "on-line", converging to the optimal values in the least-mean square (LMS) error sense. Adaptive filtering is therefore apt for dealing with the "unknown" statistics situation and has been applied extensively in areas like communication, speech, radar, sonar, seismology, and biological signal processing and analysis for channel equalization, interference and echo canceling, line enhancement, signal detection, system identification, spectral analysis, beamforming, modeling, control, etc. In this review article adaptive filtering in the context of biological signals is reviewed. An intuitive approach to the underlying theory of adaptive filters and its applicability are presented. Applications of the principles in biological signal processing are discussed in a manner that brings out the key ideas involved. Current and potential future directions in adaptive biological signal processing are also discussed.
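
The interference-cancelling configuration reviewed here (a primary sensor carrying signal plus filtered noise, a reference sensor carrying the noise source alone, and an LMS filter whose error output is the cleaned signal) can be sketched as follows. The signal, the noise path, and all parameter values are illustrative assumptions, not any particular biomedical setup.

```python
import numpy as np

rng = np.random.default_rng(3)
N, taps, mu = 20000, 4, 0.005

t = np.arange(N)
signal = np.sin(2 * np.pi * t / 50)              # stand-in for the clean signal
noise = rng.normal(size=N)                       # reference noise source
path = np.array([0.8, -0.4, 0.2, -0.1])          # unknown source-to-sensor path
primary = signal + np.convolve(noise, path)[:N]  # contaminated measurement

# LMS identifies the noise path from the reference channel; the residual
# error e(n) = primary - estimated_noise is the cleaned signal.
w = np.zeros(taps)
out = np.zeros(N)
for n in range(taps - 1, N):
    u = noise[n - taps + 1:n + 1][::-1]          # recent reference samples
    e = primary[n] - w @ u
    w += mu * e * u
    out[n] = e
```

Because the signal is uncorrelated with the reference noise, the filter converges toward the noise path and the error output converges toward the signal, which is exactly the noise-cancellation behavior described in the head of this collection.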

  12. An LMS Programming Scheme and Floating-Gate Technology Enabled Trimmer-Less and Low Voltage Flame Detection Sensor.

    PubMed

    Iglesias-Rojas, Juan Carlos; Gomez-Castañeda, Felipe; Moreno-Cadenas, Jose Antonio

    2017-06-14

    In this paper, a Least Mean Square (LMS) programming scheme is used to set the offset voltage of two operational amplifiers that were built using floating-gate transistors, enabling a 0.95 V RMS trimmer-less flame detection sensor. The programming scheme is capable of setting the offset voltage over a wide range of values by means of electron injection. The flame detection sensor consists of two programmable offset operational amplifiers; the first amplifier serves as a 26 μV offset voltage follower, whereas the second amplifier acts as a programmable trimmer-less voltage comparator. Both amplifiers form the proposed sensor, whose principle of functionality is based on the detection of the electrical changes produced by the flame ionization. The experimental results show that it is possible to measure the presence of a flame accurately after programming the amplifiers with a maximum of 35 LMS-algorithm iterations. Current commercial flame detectors are mainly used in absorption refrigerators and large industrial gas heaters, where a high voltage AC source and several mechanical trimmings are used in order to accurately measure the presence of the flame.

  13. An LMS Programming Scheme and Floating-Gate Technology Enabled Trimmer-Less and Low Voltage Flame Detection Sensor

    PubMed Central

    Iglesias-Rojas, Juan Carlos; Gomez-Castañeda, Felipe; Moreno-Cadenas, Jose Antonio

    2017-01-01

    In this paper, a Least Mean Square (LMS) programming scheme is used to set the offset voltage of two operational amplifiers that were built using floating-gate transistors, enabling a 0.95 VRMS trimmer-less flame detection sensor. The programming scheme is capable of setting the offset voltage over a wide range of values by means of electron injection. The flame detection sensor consists of two programmable offset operational amplifiers; the first amplifier serves as a 26 μV offset voltage follower, whereas the second amplifier acts as a programmable trimmer-less voltage comparator. Both amplifiers form the proposed sensor, whose principle of functionality is based on the detection of the electrical changes produced by the flame ionization. The experimental results show that it is possible to measure the presence of a flame accurately after programming the amplifiers with a maximum of 35 LMS-algorithm iterations. Current commercial flame detectors are mainly used in absorption refrigerators and large industrial gas heaters, where a high voltage AC source and several mechanical trimmings are used in order to accurately measure the presence of the flame. PMID:28613250

  14. A new event detector designed for the Seismic Research Observatories

    USGS Publications Warehouse

    Murdock, James N.; Hutt, Charles R.

    1983-01-01

    A new short-period event detector has been implemented on the Seismic Research Observatories. For each signal detected, a printed output gives estimates of the time of onset of the signal, direction of the first break, quality of onset, period and maximum amplitude of the signal, and an estimate of the variability of the background noise. On the SRO system, the new algorithm runs ~2.5x faster than the former (power level) detector. This increase in speed is due to the design of the algorithm: all operations can be performed by simple shifts, additions, and comparisons (floating point operations are not required). Even though a narrow-band recursive filter is not used, the algorithm appears to detect events competitively with those algorithms that employ such filters. Tests at Albuquerque Seismological Laboratory on data supplied by Blandford suggest performance commensurate with the on-line detector of the Seismic Data Analysis Center, Alexandria, Virginia.

  15. Automated segmentation and feature extraction of product inspection items

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1997-03-01

    X-ray film and linescan images of pistachio nuts on conveyor trays for product inspection are considered. The final objective is the categorization of pistachios into good, blemished and infested nuts. A crucial step before classification is the separation of touching products and the extraction of features essential for classification. This paper addresses new detection and segmentation algorithms to isolate touching or overlapping items. These algorithms employ a new filter, a new watershed algorithm, and morphological processing to produce nutmeat-only images. Tests on a large database of x-ray film and real-time x-ray linescan images of around 2900 small, medium and large nuts showed excellent segmentation results. A new technique to detect and segment dark regions in nutmeat images is also presented and tested on approximately 300 x-ray film and approximately 300 real-time linescan x-ray images with 95-97 percent detection and correct segmentation. New algorithms are described that determine nutmeat fill ratio and locate splits in nutmeat. The techniques formulated in this paper are of general use in many different product inspection and computer vision problems.

  16. xEMD procedures as a data-assisted filtering method

    NASA Astrophysics Data System (ADS)

    Machrowska, Anna; Jonak, Józef

    2018-01-01

    The article presents the possibility of using Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD), Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Improved Complete Ensemble Empirical Mode Decomposition (ICEEMD) algorithms for mechanical system condition monitoring applications. Results are presented for the xEMD procedures applied to vibration signals of a system in different states of wear.

  17. Closed-form expression of one-tap normalized LMS carrier phase recovery in optical communication systems

    NASA Astrophysics Data System (ADS)

    Xu, Tianhua; Jacobsen, Gunnar; Popov, Sergei; Li, Jie; Liu, Tiegen; Zhang, Yimo

    2016-10-01

    The performance of long-haul high speed coherent optical fiber communication systems is significantly degraded by the laser phase noise and the equalization enhanced phase noise (EEPN). In this paper, the analysis of the one-tap normalized least-mean-square (LMS) carrier phase recovery (CPR) is carried out and the closed-form expression is investigated for quadrature phase shift keying (QPSK) coherent optical fiber communication systems, in compensating both laser phase noise and equalization enhanced phase noise. Numerical simulations have also been implemented to verify the theoretical analysis. It is found that the one-tap normalized least-mean-square algorithm gives the same analytical expression for predicting CPR bit-error-rate (BER) floors as the traditional differential carrier phase recovery, when both the laser phase noise and the equalization enhanced phase noise are taken into account.
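
One-tap normalized LMS carrier phase recovery amounts to a single complex weight updated in decision-directed fashion. The sketch below is a noiseless illustration with a Wiener (random-walk) laser-phase model; the step size and phase-noise variance are assumed values, and EEPN is not modeled.

```python
import numpy as np

def qpsk_decision(y):
    """Nearest unit-energy QPSK constellation point."""
    return (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)

rng = np.random.default_rng(5)
N = 4000
symbols = rng.integers(0, 4, N)
const = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
tx = const[symbols]

# Laser phase noise as a slow random walk
phase = np.cumsum(rng.normal(0, 0.01, N))
rx = tx * np.exp(1j * phase)

# One-tap normalized LMS: a single complex weight tracks exp(-j*phase);
# the "desired" signal is the decision on the current output symbol.
w = 1.0 + 0j
mu = 0.1
out = np.empty(N, dtype=complex)
for n in range(N):
    y = w * rx[n]
    e = qpsk_decision(y) - y
    w = w + mu * e * np.conj(rx[n]) / (np.abs(rx[n]) ** 2)
    out[n] = y
```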

  18. Topological Insulators and Superconductors for Innovative Devices

    DTIC Science & Technology

    2015-03-20

    …bulk-sensitive experiment with hard x-ray or low-energy photons.) This demonstrates that the bulk band gap can be enhanced by taking advantage of the… crystallinity in X-ray Laue analysis, and their detailed transport properties are described in the Supplementary Information. ARPES measurements were… high quality of our films grown at high temperatures, including ultrathin ones, is evident from the X-ray diffraction patterns shown in Figure 2d

  19. Genetic Algorithm for Initial Orbit Determination with Too Short Arc (Continued)

    NASA Astrophysics Data System (ADS)

    Li, Xin-ran; Wang, Xin

    2017-04-01

    When the genetic algorithm is used to solve the too-short-arc (TSA) orbit determination problem, the original method for outlier deletion is no longer applicable, because the computing process of the genetic algorithm differs from that of the classical method. In the genetic algorithm, robust estimation is realized by introducing different loss functions into the fitness function, which solves the outlier problem of TSA orbit determination. Compared with the classical method, the genetic algorithm is greatly simplified by the introduction of different loss functions. Comparing calculations across multiple loss functions, it is found that the least median square (LMS) estimation and least trimmed square (LTS) estimation greatly improve the robustness of TSA orbit determination and have a high breakdown point.
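
Note that LMS here means least *median* of squares, not least mean square: robustness comes from replacing the mean of squared residuals with their median, which tolerates up to roughly half the data being outliers. The sketch below fits a line to data with 30% gross outliers, using a crude random search as a stand-in for the paper's genetic algorithm; all data and parameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

# Line data y = 2x + 1 with small noise, plus 30% gross outliers.
n = 100
x = np.linspace(0, 10, n)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, n)
idx = rng.choice(n, 30, replace=False)
y[idx] += rng.uniform(5, 15, 30)

def lms_loss(params):
    """Least-median-of-squares criterion: median, not mean, of squares."""
    a, b = params
    return np.median((y - (a * x + b)) ** 2)

# Random search over (slope, intercept) standing in for the GA's
# population-based optimization of the same fitness function.
best, best_loss = None, np.inf
for _ in range(50000):
    cand = rng.uniform([-5.0, -10.0], [10.0, 10.0])
    loss = lms_loss(cand)
    if loss < best_loss:
        best, best_loss = cand, loss
```

Because the median ignores the largest residuals, the minimizer stays near the true line even though ordinary least squares would be pulled toward the outliers.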

  20. GPU Accelerated Vector Median Filter

    NASA Technical Reports Server (NTRS)

    Aras, Rifat; Shen, Yuzhong

    2011-01-01

    Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive: for a window size of n x n, each of the n(sup 2) vectors has to be compared with the other n(sup 2) - 1 vectors in terms of distance. General-purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which has, to the best of our knowledge, never been done before. The performance of the GPU-accelerated vector median filter is compared to that of the CPU and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimizations of the GPU algorithm.
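    The distance computation described above can be made concrete with a brute-force CPU reference. This is a sketch for intuition only; the paper's contribution is the CUDA port, which is not reproduced here:

```python
import numpy as np

def vector_median_filter(img, k=3):
    """Brute-force vector median filter for an H x W x 3 color image.
    The output pixel of each k x k window is the window vector whose
    summed Euclidean distance to all other window vectors is minimal."""
    h, w, _ = img.shape
    r = k // 2
    pad = np.pad(img, ((r, r), (r, r), (0, 0)), mode="edge").astype(float)
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + k, j:j + k].reshape(-1, 3)
            # n^2 x n^2 pairwise distance matrix, as in the abstract
            d = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=2)
            out[i, j] = win[np.argmin(d.sum(axis=1))]
    return out

# A single impulse-noise pixel in a flat image is removed entirely
img = np.full((8, 8, 3), 100, dtype=np.uint8)
img[4, 4] = (255, 0, 0)
res = vector_median_filter(img)
```

    Each output pixel is independent of the others, which is exactly what makes the per-pixel distance matrices a natural fit for one-thread-per-pixel GPU execution.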

  1. Segmentation of Coronary Angiograms Using Gabor Filters and Boltzmann Univariate Marginal Distribution Algorithm

    PubMed Central

    Cervantes-Sanchez, Fernando; Hernandez-Aguirre, Arturo; Solorio-Meza, Sergio; Ornelas-Rodriguez, Manuel; Torres-Cisneros, Miguel

    2016-01-01

    This paper presents a novel method for improving the training step of the single-scale Gabor filters by using the Boltzmann univariate marginal distribution algorithm (BUMDA) in X-ray angiograms. Since the single-scale Gabor filters (SSG) are governed by three parameters, the optimal selection of the SSG parameters is highly desirable in order to maximize the detection performance of coronary arteries while reducing the computational time. To obtain the best set of parameters for the SSG, the area (Az) under the receiver operating characteristic curve is used as the fitness function. Moreover, to classify vessel and nonvessel pixels from the Gabor filter response, the interclass variance thresholding method has been adopted. The experimental results using the proposed method obtained the highest detection rate, with Az = 0.9502 over a training set of 40 images and Az = 0.9583 with a test set of 40 images. In addition, the experimental results of vessel segmentation provided an accuracy of 0.944 with the test set of angiograms. PMID:27738422

  2. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation

    PubMed Central

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-01-01

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter in order to perform a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms. PMID:27999361
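    For readers unfamiliar with the baseline, the Kalman filter that both the adaptive and H-infinity variants build on reduces, in the scalar case, to a few lines. This is an illustrative constant-state example, not the paper's GPS/INS model or its H-infinity machinery:

```python
import numpy as np

# Scalar Kalman filter sketch: estimate a constant state x from
# noisy measurements z_k = x + v_k.
rng = np.random.default_rng(1)
x_true = 5.0
z = x_true + rng.normal(0.0, 0.5, 200)

x_hat, p = 0.0, 1.0      # state estimate and its error variance
q, r = 1e-6, 0.25        # process and measurement noise variances
for zk in z:
    p = p + q                             # predict: variance grows by q
    k_gain = p / (p + r)                  # Kalman gain
    x_hat = x_hat + k_gain * (zk - x_hat) # update with the innovation
    p = (1.0 - k_gain) * p                # posterior variance
```

    The adaptive variants discussed in the abstract tune quantities playing the role of q and r online; the H-infinity filter instead bounds the worst-case estimation error rather than assuming these variances are known.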

  3. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation.

    PubMed

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-12-19

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter in order to perform a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms.

  4. A multichannel nonlinear adaptive noise canceller based on generalized FLANN for fetal ECG extraction

    NASA Astrophysics Data System (ADS)

    Ma, Yaping; Xiao, Yegui; Wei, Guo; Sun, Jinwei

    2016-01-01

    In this paper, a multichannel nonlinear adaptive noise canceller (ANC) based on the generalized functional link artificial neural network (FLANN, GFLANN) is proposed for fetal electrocardiogram (FECG) extraction. An FIR filter and a GFLANN are equipped in parallel in each reference channel to respectively approximate the linearity and nonlinearity between the maternal ECG (MECG) and the composite abdominal ECG (AECG). A fast scheme is also introduced to reduce the computational cost of the FLANN and the GFLANN. Two (2) sets of ECG time sequences, one synthetic and one real, are utilized to demonstrate the improved effectiveness of the proposed nonlinear ANC. The real dataset is derived from the Physionet non-invasive FECG database (PNIFECGDB), including 55 multichannel recordings taken from a pregnant woman. It contains two subdatasets that consist of 14 and 8 recordings, respectively, with each recording being 90 s long. Simulation results based on these two datasets reveal, on the whole, that the proposed ANC does enjoy higher capability to deal with nonlinearity between MECG and AECG as compared with previous ANCs in terms of fetal QRS (FQRS)-related statistics and morphology of the extracted FECG waveforms. In particular, for the second real subdataset, the F1-measure results produced by the PCA-based template subtraction (TSpca) technique and six (6) single-reference channel ANCs using LMS- and RLS-based FIR filters, Volterra filter, FLANN, GFLANN, and adaptive echo state neural network (ESNa) are 92.47%, 93.70%, 94.07%, 94.22%, 94.90%, 94.90%, and 95.46%, respectively. The same F1-measure statistical results from five (5) multi-reference channel ANCs (LMS- and RLS-based FIR filters, Volterra filter, FLANN, and GFLANN) for the second real subdataset turn out to be 94.08%, 94.29%, 94.68%, 94.91%, and 94.96%, respectively. These results indicate that the ESNa and GFLANN perform best, with the ESNa being slightly better than the GFLANN but about four times more computationally expensive, which makes the GFLANN a good alternative for NI-FECG extraction.
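    The FLANN idea, expanding the reference input through fixed nonlinear basis functions and then adapting linear weights with LMS, can be sketched on synthetic signals. The trigonometric expansion, step size, and toy MECG/FECG stand-ins below are illustrative assumptions; the paper's GFLANN adds generalized terms not shown here:

```python
import numpy as np

def flann_expand(x):
    """Trigonometric functional-link expansion of a scalar input
    (a common FLANN basis; the paper's GFLANN generalizes this)."""
    return np.array([x, np.sin(np.pi * x), np.cos(np.pi * x),
                     np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)])

rng = np.random.default_rng(2)
n = 5000
mecg = rng.uniform(-1, 1, n)              # stand-in "maternal" reference
fecg = 0.1 * np.sin(0.1 * np.arange(n))   # stand-in "fetal" component
aecg = np.tanh(2 * mecg) + fecg           # abdominal mix: nonlinear MECG + FECG

mu = 0.05
w = np.zeros(5)
err = np.empty(n)
for k in range(n):
    phi = flann_expand(mecg[k])
    y = w @ phi              # FLANN estimate of the nonlinear MECG component
    err[k] = aecg[k] - y     # canceller output ~ extracted FECG
    w += mu * err[k] * phi   # LMS update of the expansion weights
```

    After convergence the canceller output is dominated by the component uncorrelated with the reference, which is the role the extracted FECG plays in the paper.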

  5. Distant Cluster Hunting. II; A Comparison of X-Ray and Optical Cluster Detection Techniques and Catalogs from the ROSAT Optical X-Ray Survey

    NASA Technical Reports Server (NTRS)

    Donahue, Megan; Scharf, Caleb A.; Mack, Jennifer; Lee, Y. Paul; Postman, Marc; Rosait, Piero; Dickinson, Mark; Voit, G. Mark; Stocke, John T.

    2002-01-01

    We present and analyze the optical and X-ray catalogs of moderate-redshift cluster candidates from the ROSAT Optical X-Ray Survey, or ROXS. The survey covers the sky area contained in the fields of view of 23 deep archival ROSAT PSPC pointings, 4.8 square degrees. The cross-correlated cluster catalogs were constructed by comparing two independent catalogs extracted from the optical and X-ray bandpasses, using a matched-filter technique for the optical data and a wavelet technique for the X-ray data. We cross-identified cluster candidates in each catalog. As reported in Paper I, the matched-filter technique found optical counterparts for at least 60% (26 out of 43) of the X-ray cluster candidates; the estimated redshifts from the matched-filter algorithm agree with at least 7 of 11 spectroscopic confirmations (Δz ≤ 0.10). The matched-filter technique, with an imaging sensitivity of mI ≈ 23, identified approximately 3 times the number of candidates (155 candidates, 142 with a detection confidence >3σ) found in the X-ray survey of nearly the same area. There are 57 X-ray candidates, 43 of which are unobscured by scattered light or bright stars in the optical images. Twenty-six of these have fairly secure optical counterparts. We find that the matched-filter algorithm, when applied to images with galaxy flux sensitivities of mI ≈ 23, is fairly well-matched to discovering z ≤ 1 clusters detected by wavelets in ROSAT PSPC exposures of 8000-60,000 s. The difference in the spurious fractions between the optical and X-ray catalogs (30% and 10%, respectively) cannot account for the difference in source number. In Paper I, we compared the optical and X-ray cluster luminosity functions and found that they are consistent if the relationship between X-ray and optical luminosities is steep. Here, in Paper II, we present the cluster catalogs and a numerical simulation of the ROXS. 
We also present color-magnitude plots for several of the cluster candidates, and examine the prominence of the red sequence in each. We find that the X-ray clusters in our survey do not all have a prominent red sequence. We conclude that while the red sequence may be a distinct feature in the color-magnitude plots for virialized massive clusters, it may be less distinct in lower mass clusters of galaxies at even moderate redshifts. Multiple, complementary methods of selecting and defining clusters may be essential, particularly at high redshift where all methods start to run into completeness limits, incomplete understanding of physical evolution, and projection effects.

  6. Coarse Alignment Technology on Moving base for SINS Based on the Improved Quaternion Filter Algorithm.

    PubMed

    Zhang, Tao; Zhu, Yongyun; Zhou, Feng; Yan, Yaxiong; Tong, Jinwu

    2017-06-17

    Initial alignment of the strapdown inertial navigation system (SINS) is intended to determine the initial attitude matrix in a short time with certain accuracy. The alignment accuracy of the quaternion filter algorithm is remarkable, but the convergence rate is slow. To solve this problem, this paper proposes an improved quaternion filter algorithm for faster initial alignment, based on the error model of the quaternion filter algorithm. The improved quaternion filter algorithm constructs the K matrix based on the principle of the optimal quaternion algorithm, and rebuilds the measurement model by including acceleration and velocity errors to make the convergence rate faster. A Doppler velocity log (DVL) provides the reference velocity for the improved quaternion filter alignment algorithm. In order to demonstrate the performance of the improved quaternion filter algorithm in the field, a turntable experiment and a vehicle test are carried out. The results of the experiments show that the convergence rate of the proposed improved quaternion filter is faster than that of the traditional quaternion filter algorithm. In addition, the improved quaternion filter algorithm also demonstrates advantages in terms of correctness, effectiveness, and practicability.

  7. A digital algorithm for spectral deconvolution with noise filtering and peak picking: NOFIPP-DECON

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.; Settle, G. L.; Knight, R. D.

    1975-01-01

    Noise-filtering, peak-picking deconvolution software incorporates multiple convoluted convolute integers and multiparameter optimization pattern search. The two theories are described and three aspects of the software package are discussed in detail. Noise-filtering deconvolution was applied to a number of experimental cases ranging from noisy, nondispersive X-ray analyzer data to very noisy photoelectric polarimeter data. Comparisons were made with published infrared data, and a man-machine interactive language has evolved for assisting in very difficult cases. A modified version of the program is being used for routine preprocessing of mass spectral and gas chromatographic data.

  8. Optimizing Algorithm Choice for Metaproteomics: Comparing X!Tandem and Proteome Discoverer for Soil Proteomes

    NASA Astrophysics Data System (ADS)

    Diaz, K. S.; Kim, E. H.; Jones, R. M.; de Leon, K. C.; Woodcroft, B. J.; Tyson, G. W.; Rich, V. I.

    2014-12-01

    The growing field of metaproteomics links microbial communities to their expressed functions by using mass spectrometry methods to characterize community proteins. Comparison of mass spectrometry protein search algorithms and their biases is crucial for maximizing the quality and amount of protein identifications in mass spectral data. Available algorithms employ different approaches when mapping mass spectra to peptides against a database. We compared mass spectra from four microbial proteomes derived from high-organic content soils searched with two search algorithms: 1) Sequest HT as packaged within Proteome Discoverer (v.1.4) and 2) X!Tandem as packaged in TransProteomicPipeline (v.4.7.1). Searches used matched metagenomes, and results were filtered to allow identification of high probability proteins. There was little overlap in proteins identified by both algorithms, on average just ~24% of the total. However, when adjusted for spectral abundance, the overlap improved to ~70%. Proteome Discoverer generally outperformed X!Tandem, identifying an average of 12.5% more proteins than X!Tandem, with X!Tandem identifying more proteins only in the first two proteomes. For spectrally-adjusted results, the algorithms were similar, with X!Tandem marginally outperforming Proteome Discoverer by an average of ~4%. We then assessed differences in heat shock protein (HSP) identification by the two algorithms by BLASTing identified proteins against the Heat Shock Protein Information Resource, because HSP hits typically account for the majority of the signal in proteomes, due to extraction protocols. Total HSP identifications for each of the 4 proteomes were ~15%, ~11%, ~17%, and ~19%, with ~14% for total HSPs with redundancies removed. Of the ~15% average of proteins from the 4 proteomes identified as HSPs, ~10% of proteins and spectra were identified by both algorithms. On average, Proteome Discoverer identified ~9% more HSPs than X!Tandem.

  9. Effects of locomotor skill program on minority preschoolers' physical activity levels.

    PubMed

    Alhassan, Sofiya; Nwaokelemeh, Ogechi; Ghazarian, Manneh; Roberts, Jasmin; Mendoza, Albert; Shitole, Sanyog

    2012-08-01

    This pilot study examined the effects of a teacher-taught, locomotor skill (LMS)-based physical activity (PA) program on the LMS and PA levels of minority preschool-aged children. Eight low-socioeconomic status preschool classrooms were randomized into LMS-PA (LMS-oriented lesson plans) or control group (supervised free playtime). Interventions were delivered for 30 min/day, five days/week for six months. Changes in PA (accelerometer) and LMS variables were assessed with MANCOVA. LMS-PA group exhibited a significant reduction in during-preschool (F (1,16) = 6.34, p = .02, d = 0.02) and total daily (F (1,16) = 9.78, p = .01, d = 0.30) percent time spent in sedentary activity. LMS-PA group also exhibited significant improvement in leaping skills, F (1, 51) = 7.18, p = .01, d = 0.80). No other significant changes were observed. The implementation of a teacher-taught, LMS-based PA program could potentially improve LMS and reduce sedentary time of minority preschoolers.

  10. Counter-propagation network with variable degree variable step size LMS for single switch typing recognition.

    PubMed

    Yang, Cheng-Huei; Luo, Ching-Hsing; Yang, Cheng-Hong; Chuang, Li-Yeh

    2004-01-01

    Morse code is now being harnessed for use in rehabilitation applications of augmentative-alternative communication and assistive technology, including mobility, environmental control and adapted worksite access. In this paper, Morse code is selected as a communication adaptive device for disabled persons who suffer from muscle atrophy, cerebral palsy or other severe handicaps. A stable typing rate is strictly required for Morse code to be effective as a communication tool. This restriction is a major hindrance. Therefore, a switch adaptive automatic recognition method with a high recognition rate is needed. The proposed system combines counter-propagation networks with a variable degree variable step size LMS algorithm. It is divided into five stages: space recognition, tone recognition, learning process, adaptive processing, and character recognition. Statistical analyses demonstrated that the proposed method elicited a better recognition rate in comparison to alternative methods in the literature.

  11. Reducing noise component on medical images

    NASA Astrophysics Data System (ADS)

    Semenishchev, Evgeny; Voronin, Viacheslav; Dub, Vladimir; Balabaeva, Oksana

    2018-04-01

    Medical visualization and analysis of medical data is an active research direction. Medical images are used in microbiology, genetics, roentgenology, oncology, surgery, ophthalmology, etc. Initial data processing is a major step towards obtaining a good diagnostic result. The paper considers an approach that allows image filtering while preserving object borders. The algorithm proposed in this paper is based on sequential data processing. At the first stage, local areas are determined; for this purpose the method of threshold processing, as well as the classical ICI algorithm, is applied. The second stage uses a method based on two criteria, namely, the L2 norm and the first-order square difference. To preserve the boundaries of objects, the transition boundary and local neighborhood are processed with a fixed-coefficient filtering algorithm. As examples, reconstructed images from CT, X-ray, and microbiological studies are shown. The test images show the effectiveness of the proposed algorithm, which demonstrates its applicability to many medical imaging applications.

  12. A Robust Approach For Acoustic Noise Suppression In Speech Using ANFIS

    NASA Astrophysics Data System (ADS)

    Martinek, Radek; Kelnar, Michal; Vanus, Jan; Bilik, Petr; Zidek, Jan

    2015-11-01

    The authors of this article deal with the implementation of a combination of fuzzy-system and artificial intelligence techniques in the application area of non-linear noise and interference suppression. The structure used is called an Adaptive Neuro-Fuzzy Inference System (ANFIS). This system finds practical use mainly in audio telephone (mobile) communication in noisy environments (transport, production halls, sports matches, etc.). Experimental methods based on the two-input adaptive noise cancellation concept are clearly outlined. Within the experiments carried out, the authors created, based on the ANFIS structure, a comprehensive system for adaptive suppression of the unwanted background interference that occurs in audio communication and degrades the audio signal. The system designed has been tested on real voice signals. This article presents the investigation and comparison of three distinct approaches to noise cancellation in speech: LMS (least mean squares) and RLS (recursive least squares) adaptive filtering, and ANFIS. A careful review of the literature indicated the importance of non-linear adaptive algorithms over linear ones in noise cancellation. It was concluded that the ANFIS approach had the best overall performance, as it efficiently cancelled noise even in highly noise-degraded speech. Results were drawn from the successful experimentation; subjective tests were used to analyse comparative performance, while objective tests were used to validate the results. Implementation of the algorithms was carried out in Matlab to justify the claims and determine their relative performances.

  13. Mocetinostat combined with gemcitabine for the treatment of leiomyosarcoma: Preclinical correlates

    PubMed Central

    Braggio, Danielle; Zewdu, Abeba; Casadei, Lucia; Batte, Kara; Bid, Hemant Kumar; Koller, David; Yu, Peter; Iwenofu, Obiajulu Hans; Strohecker, Anne; Choy, Edwin; Lev, Dina; Pollock, Raphael

    2017-01-01

    Leiomyosarcoma (LMS) is a malignant soft tissue sarcoma (STS) with a dismal prognosis following metastatic disease. Chemotherapeutic intervention has demonstrated only modest clinical efficacy, with no curative potential, in LMS patients. Previously, we demonstrated pan-HDAC inhibition to have a superior effect in various complex karyotypic sarcomas. In this study, our goal is to evaluate the therapeutic efficacy of mocetinostat alone and in combination with gemcitabine in LMS. Human LMS cell lines were used for in vitro and in vivo studies. Compounds tested included the class I HDAC inhibitor, mocetinostat, and the nucleoside analog, gemcitabine. MTS and clonogenic assays were used to evaluate the effect of mocetinostat on LMS cell growth. Cleaved caspase 3/7 analysis was used to determine the effects of mocetinostat on apoptosis. Compusyn software was used for in vitro synergy studies of the combination of mocetinostat plus gemcitabine. An LMS xenograft model in SCID mice was used to test the impact of mocetinostat alone, gemcitabine alone, and the combination of mocetinostat plus gemcitabine. Mocetinostat abrogated LMS cell growth and clonogenic potential, and enhanced apoptosis in LMS cell lines. The combination of mocetinostat plus gemcitabine exhibited a synergistic effect in LMS cells in vitro. Similarly, mocetinostat combined with gemcitabine resulted in superior anti-LMS effects in vivo. Mocetinostat reduced the expression of the gemcitabine-resistance markers RRM1 and RRM2, and increased the expression of the gemcitabine-sensitivity marker, hENT1, in LMS cells. LMS are aggressive, metastatic tumors with poor prognosis where effective therapeutic interventions are wanting. Our studies demonstrate the potential utility of mocetinostat combined with gemcitabine for the treatment of LMS. PMID:29186204

  14. Power optimization of digital baseband WCDMA receiver components on algorithmic and architectural level

    NASA Astrophysics Data System (ADS)

    Schämann, M.; Bücker, M.; Hessel, S.; Langmann, U.

    2008-05-01

    High data rates combined with high mobility represent a challenge for the design of cellular devices. Advanced algorithms are required, which result in higher complexity, more chip area and increased power consumption. However, this contrasts with the limited power supply of mobile devices. This presentation discusses an HSDPA receiver which has been optimized for power consumption, with a focus on the algorithmic and architectural levels. On the algorithmic level the Rake combiner, Prefilter-Rake equalizer and MMSE equalizer are compared regarding their BER performance. Both equalizer approaches provide a significant increase of performance for high data rates compared to the Rake combiner, which is commonly used for lower data rates. For both equalizer approaches several adaptive algorithms are available which differ in complexity and convergence properties. To identify the algorithm which achieves the required performance with the lowest power consumption, the algorithms have been investigated using SystemC models regarding their performance and arithmetic complexity. Additionally, for the Prefilter-Rake equalizer the power estimations of a modified Griffith (LMS) and a Levinson (RLS) algorithm have been compared using the tool ORINOCO supplied by ChipVision. The accuracy of this tool has been verified with a scalable architecture of the UMTS channel estimation described both in SystemC and VHDL targeting a 130 nm CMOS standard cell library. An architecture combining all three approaches with an adaptive control unit is presented. The control unit monitors the current condition of the propagation channel and adjusts parameters for the receiver, such as filter size and oversampling ratio, to minimize the power consumption while maintaining the required performance. 
The optimization strategies result in a reduction of the number of arithmetic operations up to 70% for single components which leads to an estimated power reduction of up to 40% while the BER performance is not affected. This work utilizes SystemC and ORINOCO for the first estimation of power consumption in an early step of the design flow. Thereby algorithms can be compared in different operating modes including the effects of control units. Here an algorithm having higher peak complexity and power consumption but providing more flexibility showed less consumption for normal operating modes compared to the algorithm which is optimized for peak performance.

  15. Active Narrow-Band Vibration Isolation of Large Engineering Structures

    NASA Technical Reports Server (NTRS)

    Rahman, Zahidul; Spanos, John

    1994-01-01

    We present a narrow-band tracking control method using a variant of the Least Mean Squares (LMS) algorithm to isolate slowly changing periodic disturbances from engineering structures. The advantage of the algorithm is that it has a simple architecture and is relatively easy to implement while it can isolate disturbances on the order of 40-50 dB over decades of frequency band. We also present the results of an experiment conducted on a flexible truss structure. The average disturbance rejection achieved is over 40 dB over the frequency band of 5 Hz to 50 Hz.
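    A minimal version of this kind of narrow-band LMS canceller is the classic two-weight adaptive notch, which adapts quadrature references at the (approximately known) disturbance frequency. The parameters below are illustrative, not the flight experiment's controller:

```python
import numpy as np

# Widrow-style adaptive notch: two weights on quadrature references at the
# disturbance frequency cancel a periodic tone buried in broadband response.
fs, f0, n = 1000.0, 50.0, 8000
t = np.arange(n) / fs
rng = np.random.default_rng(4)
broadband = 0.1 * rng.normal(size=n)              # broadband "structure response"
tone = 1.0 * np.sin(2 * np.pi * f0 * t + 0.3)     # periodic disturbance
d = broadband + tone

w = np.zeros(2)
mu = 0.01
out = np.empty(n)
for k in range(n):
    ref = np.array([np.sin(2 * np.pi * f0 * t[k]),
                    np.cos(2 * np.pi * f0 * t[k])])
    y = w @ ref              # adaptive estimate of the tone
    out[k] = d[k] - y        # residual after narrow-band cancellation
    w += 2 * mu * out[k] * ref

# Tone rejection in dB over the second half of the run
rej_db = 10 * np.log10(np.mean(tone[n // 2:] ** 2)
                       / np.mean((out[n // 2:] - broadband[n // 2:]) ** 2))
```

    Because the two weights set the amplitude and phase of a single sinusoid, the canceller acts as a very narrow notch at f0 and tracks slow drifts in the disturbance, which is the behavior exploited in the experiment above.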

  16. Electronics design of the airborne stabilized platform attitude acquisition module

    NASA Astrophysics Data System (ADS)

    Xu, Jiang; Wei, Guiling; Cheng, Yong; Li, Baolin; Bu, Hongyi; Wang, Hao; Zhang, Zhanwei; Li, Xingni

    2014-02-01

    We present the electronics design of an attitude acquisition module for an airborne stabilized platform. The design, which is based on the integrated MEMS sensor ADIS16405, develops the attitude information processing algorithms and the hardware circuit. The hardware circuit, with a small volume of only 44.9 x 43.6 x 24.6 mm3, is lightweight, modular, and digital. The PC software interface combines a plane chart with a track line to receive and display the attitude information. Attitude calculation uses the Kalman filtering algorithm to improve the measurement accuracy of the module in dynamic environments.

  17. Language Classification using N-grams Accelerated by FPGA-based Bloom Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob, A; Gokhale, M

    N-Gram (n-character sequences in text documents) counting is a well-established technique used in classifying the language of text in a document. In this paper, n-gram processing is accelerated through the use of reconfigurable hardware on the XtremeData XD1000 system. Our design employs parallelism at multiple levels, with parallel Bloom filters accessing on-chip RAM, parallel language classifiers, and parallel document processing. In contrast to another hardware implementation (the HAIL algorithm) that uses off-chip SRAM for lookup, our highly scalable implementation uses only on-chip memory blocks. Our implementation of end-to-end language classification runs 85x faster than comparable software and 1.45x faster than the competing hardware design.
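    The classification scheme, one Bloom filter of training n-grams per language with documents scored by membership hits, can be sketched in software. This is a pure-Python stand-in with assumed filter sizes, hash counts, and toy corpora; the paper's contribution is mapping these lookups onto on-chip FPGA RAM:

```python
import hashlib

class BloomFilter:
    """Software sketch of a per-language n-gram Bloom filter."""
    def __init__(self, m=8192, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _hashes(self, item):
        # k independent hash positions derived from salted SHA-256
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for h in self._hashes(item):
            self.bits[h] = 1

    def __contains__(self, item):
        return all(self.bits[h] for h in self._hashes(item))

def ngrams(text, n=3):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def train(corpus):
    bf = BloomFilter()
    for g in ngrams(corpus):
        bf.add(g)
    return bf

def classify(text, filters):
    # Score = number of the document's n-grams found in each language filter
    return max(filters, key=lambda lang: sum(g in filters[lang]
                                             for g in ngrams(text)))

filters = {
    "en": train("the quick brown fox jumps over the lazy dog and runs away"),
    "de": train("der schnelle braune fuchs springt ueber den faulen hund und rennt weg"),
}
pred_en = classify("the fox runs over the dog", filters)
pred_de = classify("der hund rennt ueber den fuchs", filters)
```

    Each membership query touches only k bit positions, and queries for different n-grams, languages, and documents are independent, which is what the hardware design exploits with parallel on-chip lookups.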

  18. Resistive edge mode instability in stellarator and tokamak geometries

    NASA Astrophysics Data System (ADS)

    Mahmood, M. Ansar; Rafiq, T.; Persson, M.; Weiland, J.

    2008-09-01

    Geometrical effects on the linear stability of electrostatic resistive edge modes are investigated in the three-dimensional Wendelstein 7-X stellarator [G. Grieger et al., Plasma Physics and Controlled Nuclear Fusion Research 1990 (International Atomic Energy Agency, Vienna, 1991), Vol. 3, p. 525] and the International Thermonuclear Experimental Reactor [Progress in the ITER Physics Basis, Nucl. Fusion 7, S1, S285 (2007)]-like equilibria. An advanced fluid model is used for the ions together with the reduced Braginskii equations for the electrons. Using the ballooning mode representation, the drift wave problem is set as an eigenvalue equation along a field line and is solved numerically using a standard shooting technique. A significantly larger magnetic shear and a less unfavorable normal curvature in the tokamak equilibrium are found to give a stronger finite-Larmor-radius stabilization and a narrower mode spectrum than in the stellarator. The effect of negative global magnetic shear in the tokamak is found to be stabilizing. The growth rate on a tokamak magnetic flux surface is found to be comparable to that on a stellarator surface with the same global magnetic shear, but the eigenfunction in the tokamak is broader than in the stellarator due to the presence of large negative local magnetic shear (LMS) on the tokamak surface. A large absolute value of the LMS in a region of unfavorable normal curvature is found to be stabilizing in the stellarator, while in the tokamak case, negative LMS is found to be stabilizing and positive LMS destabilizing.

  19. An improved three-dimension reconstruction method based on guided filter and Delaunay

    NASA Astrophysics Data System (ADS)

    Liu, Yilin; Su, Xiu; Liang, Haitao; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong

    2018-01-01

    Binocular stereo vision is becoming a research hotspot in the area of image processing. Based on the traditional adaptive-weight stereo matching algorithm, we improve the cost volume by averaging the AD (Absolute Difference) of the RGB color channels and adding the x-derivative of the grayscale image. We then use a guided filter in the cost aggregation step and a weighted median filter for post-processing to address the edge problem. In order to obtain locations in real space, we combine the depth information with the camera calibration to project each pixel in the 2D image to a 3D coordinate matrix. We add the concept of projection to the region-growing algorithm for surface reconstruction: all points are projected onto a 2D plane along the normals of the point cloud, and the results are mapped back to 3D space according to the connection relationships among the points in the 2D plane. For the triangulation in the 2D plane, we use the Delaunay algorithm because it yields optimal mesh quality. We configure OpenCV and PCL on Visual Studio for testing, and the experimental results show that the proposed algorithm has higher computational accuracy of disparity and can realize the details of the real mesh model.

  20. Comparison between retroperitoneal leiomyosarcoma and dedifferentiated liposarcoma.

    PubMed

    Ishii, Takeaki; Kohashi, Kenichi; Ootsuka, Hiroshi; Iura, Kunio; Maekawa, Akira; Yamada, Yuichi; Bekki, Hirofumi; Yoshimoto, Masato; Yamamoto, Hidetaka; Iwamoto, Yukihide; Oda, Yoshinao

    2017-06-01

    It is important to distinguish between leiomyosarcoma (LMS) and dedifferentiated liposarcoma (DDLS) in the retroperitoneum. The dedifferentiated component of DDLS shows an LMS-like morphology in some cases; thus, detailed evaluation is necessary to achieve an accurate diagnosis. Immunohistochemically, MDM2 and myogenic markers provide clues for the diagnoses. However, immunoreactivity for MDM2 and myogenic markers has not been well studied in retroperitoneal LMS and DDLS. Here, we compared the clinicopathological data of 20 retroperitoneal tumors initially diagnosed as LMS with that of 36 cases of retroperitoneal DDLS and conducted an immunohistochemical study. Four (20%) of the cases initially diagnosed as LMS were immunoreactive for MDM2. Fifteen cases (41.7%) of DDLS showed positive expression of two or more myogenic markers. The patients with LMS with MDM2 overexpression were older than the patients with LMS without MDM2 overexpression (P=0.0328). LMS with MDM2 overexpression showed a worse prognosis than DDLS (P=0.0408). No significant difference in prognosis was found between LMS without MDM2 overexpression and DDLS with myogenic differentiation. In conclusion, we recommend that systemic MDM2 expression analysis be performed in cases of retroperitoneal sarcoma. Overdependence on the expression of myogenic markers could lead to misdiagnosis in distinguishing LMS from DDLS. Copyright © 2017 Elsevier GmbH. All rights reserved.

  1. Teaching-learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

    NASA Astrophysics Data System (ADS)

    Singh, R.; Verma, H. K.

    2013-12-01

    This paper presents a teaching-learning-based optimization (TLBO) algorithm for solving parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, TLBO requires no algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and particle swarm optimization (PSO) are also applied to the filter design problem for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB is used to implement the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than BB-BC optimization and converges faster than PSO. TLBO is therefore suited to applications where accuracy matters more than convergence speed.
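    One TLBO generation, showing why the method needs no algorithm-specific parameters beyond population size and iteration count; this is a generic textbook sketch, not the authors' MATLAB code:

```python
import numpy as np

rng = np.random.default_rng(1)

def tlbo_step(pop, fitness):
    """One TLBO generation (teacher phase + learner phase) minimizing `fitness`.
    Candidates are accepted only if they improve, so the best score never worsens."""
    n, dim = pop.shape
    scores = np.array([fitness(x) for x in pop])
    teacher = pop[scores.argmin()]
    mean = pop.mean(axis=0)
    # Teacher phase: move each learner toward the teacher, away from the class mean.
    for i in range(n):
        tf = rng.integers(1, 3)                      # teaching factor, 1 or 2
        cand = pop[i] + rng.random(dim) * (teacher - tf * mean)
        if fitness(cand) < scores[i]:
            pop[i], scores[i] = cand, fitness(cand)
    # Learner phase: learn from a randomly chosen classmate.
    for i in range(n):
        j = rng.integers(n)
        if j == i:
            continue
        step = pop[i] - pop[j] if scores[i] < scores[j] else pop[j] - pop[i]
        cand = pop[i] + rng.random(dim) * step
        if fitness(cand) < scores[i]:
            pop[i], scores[i] = cand, fitness(cand)
    return pop, scores
```

    For IIR identification, `fitness` would be the error between the plant output and the candidate filter output; here any cost function works.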

  2. Parallel Vision Algorithm Design and Implementation 1988 End of Year Report

    DTIC Science & Technology

    1989-08-01

    as a local operation, the provided C code used raster order processing to speed up execution time. This made it impossible to implement the code using...Apply, which does not allow the programmer to take advantage of raster order processing. Therefore, the 5x5 median filter algorithm was a straight...possible to exploit raster-order processing in W2, giving greater efficiency. The first advantage is the reason that connected components and the Hough

  3. A wavelet transform algorithm for peak detection and application to powder x-ray diffraction data.

    PubMed

    Gregoire, John M; Dale, Darren; van Dover, R Bruce

    2011-01-01

    Peak detection is ubiquitous in the analysis of spectral data. While many noise-filtering algorithms and peak identification algorithms have been developed, recent work [P. Du, W. Kibbe, and S. Lin, Bioinformatics 22, 2059 (2006); A. Wee, D. Grayden, Y. Zhu, K. Petkovic-Duran, and D. Smith, Electrophoresis 29, 4215 (2008)] has demonstrated that both of these tasks are efficiently performed through analysis of the wavelet transform of the data. In this paper, we present a wavelet-based peak detection algorithm with user-defined parameters that can be readily applied to any spectral data. Particular attention is given to the algorithm's resolution of overlapping peaks. The algorithm is implemented for the analysis of powder diffraction data, and successful detection of Bragg peaks is demonstrated for both low signal-to-noise data from theta-theta diffraction of nanoparticles and combinatorial x-ray diffraction data from a composition spread thin film. These datasets have different types of background signals which are effectively removed in the wavelet-based method, and the results demonstrate that the algorithm provides a robust method for automated peak detection.
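    A toy numpy version of wavelet-based peak picking with a ricker ("Mexican hat") kernel; a full detector tracks ridge lines across scales and resolves overlapping peaks, which is omitted here, and the scale choices are assumptions:

```python
import numpy as np

def ricker(points, a):
    """Ricker wavelet of width parameter a, the kernel commonly used for
    wavelet-based peak detection."""
    t = np.arange(points) - (points - 1) / 2.0
    amp = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
    return amp * (1 - (t / a) ** 2) * np.exp(-(t ** 2) / (2 * a ** 2))

def cwt_peak(signal, scales):
    """Return the index where the summed CWT response is largest: a minimal,
    single-peak illustration of wavelet-based peak picking.  Because the ricker
    kernel has zero mean, slowly varying background is suppressed automatically."""
    response = np.zeros_like(signal, dtype=float)
    for a in scales:
        kernel = ricker(min(10 * a, len(signal)), a)
        response += np.convolve(signal, kernel, mode="same")
    return int(response.argmax())
```

    Summing the response over several scales makes the detection robust to both narrow noise spikes (which only excite small scales) and broad background (which excites none).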

  4. Advancements to the planogram frequency–distance rebinning algorithm

    PubMed Central

    Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. 
Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact reconstruction) and planogram filtered backprojection image reconstruction algorithms. We show that the PFDRX algorithm produces images that are nearly as accurate as images reconstructed with the planogram filtered backprojection algorithm and more accurate than images reconstructed with the PFDR+FBP algorithm. Both the PFDR+FBP and PFDRX algorithms provide a dramatic improvement in computation time over the planogram filtered backprojection algorithm. PMID:20436790
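    The ramp filtering step that FBP applies to each rebinned row can be sketched as a frequency-domain multiplication by |f|; this is the generic FBP ingredient mentioned above, not the PFDR rebinning or the modified exact filter:

```python
import numpy as np

def ramp_filter(projection):
    """Apply the ramp filter |f| to one projection row in the frequency domain.
    The DC component is zeroed, which is why FBP needs this step to undo the
    1/r blurring of plain backprojection."""
    n = projection.shape[-1]
    freqs = np.fft.rfftfreq(n)                      # normalized frequencies >= 0
    return np.fft.irfft(np.fft.rfft(projection) * np.abs(freqs), n=n)
```

    In PFDR+FBP this filtering is applied to each rebinned linogram row before backprojection; PFDRX replaces |f| with the modified filter derived in the paper.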

  5. Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space.

    PubMed

    Kalathil, Shaeen; Elias, Elizabeth

    2015-11-01

    This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFB offers an easy and efficient design approach: a non-uniform decomposition can be obtained simply by merging the appropriate filters of a uniform filter bank, and only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation, and weighted constrained least squares approximation. The coefficients are quantized into CSD using a look-up table. The finite-precision CSD rounding deteriorates the filter bank's performance, which is then restored using suitably modified meta-heuristic algorithms: the Artificial Bee Colony algorithm, the Gravitational Search algorithm, the Harmony Search algorithm and the Genetic algorithm. The resulting filter banks have lower implementation complexity, power consumption and area requirements than conventional continuous-coefficient non-uniform CMFB.
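    The CSD representation that the meta-heuristics re-optimize can be illustrated for integer coefficients; this is the standard conversion recurrence, not the paper's look-up-table implementation:

```python
def to_csd(x):
    """Convert an integer to canonic signed digit (CSD) form: digits in
    {-1, 0, +1}, least significant first, with no two adjacent non-zero
    digits.  Each non-zero digit is chosen so the remainder becomes a
    multiple of 4, which forces the next digit to be zero."""
    digits = []
    while x != 0:
        if x % 2:
            d = 2 - (x % 4)      # +1 or -1, whichever zeroes the next bit
            x -= d
        else:
            d = 0
        digits.append(d)
        x //= 2
    return digits

def from_csd(digits):
    """Reassemble the integer from its CSD digits (LSB first)."""
    return sum(d << i for i, d in enumerate(digits))
```

    Fewer non-zero digits means fewer adders per coefficient, which is the source of the complexity, power and area savings reported above.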

  6. Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space

    PubMed Central

    Kalathil, Shaeen; Elias, Elizabeth

    2014-01-01

    This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFB offers an easy and efficient design approach: a non-uniform decomposition can be obtained simply by merging the appropriate filters of a uniform filter bank, and only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation, and weighted constrained least squares approximation. The coefficients are quantized into CSD using a look-up table. The finite-precision CSD rounding deteriorates the filter bank's performance, which is then restored using suitably modified meta-heuristic algorithms: the Artificial Bee Colony algorithm, the Gravitational Search algorithm, the Harmony Search algorithm and the Genetic algorithm. The resulting filter banks have lower implementation complexity, power consumption and area requirements than conventional continuous-coefficient non-uniform CMFB. PMID:26644921

  7. Athena X-IFU event reconstruction software: SIRENA

    NASA Astrophysics Data System (ADS)

    Ceballos, Maria Teresa; Cobo, Beatriz; Peille, Philippe; Wilms, Joern; Brand, Thorsten; Dauser, Thomas; Bandler, Simon; Smith, Stephen

    2015-09-01

    This contribution describes the status and technical details of the SIRENA package, the software currently in development to perform on-board event energy reconstruction for the Athena calorimeter X-IFU. This on-board processing will be done in the X-IFU DRE unit and will consist of an initial triggering of event pulses followed by an analysis (with the SIRENA package) to determine the energy content of those events. The current algorithm used by SIRENA is the optimal filtering technique (also used by the ASTRO-H processor), although other algorithms are being tested as well. Here we present these studies and some preliminary results on the energy resolution of the instrument, based on simulations with the SIXTE simulator (http://www.sternwarte.uni-erlangen.de/research/sixte/), in which SIRENA is integrated.
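    The optimal filtering technique can be sketched for stationary noise: weight each frequency bin by the pulse template over the noise power, so bins with poor signal-to-noise contribute little to the amplitude (energy) estimate. The template, noise PSD and normalization below are illustrative, not SIRENA's:

```python
import numpy as np

def optimal_filter_energy(pulse, template, noise_psd):
    """Least-squares pulse amplitude under stationary noise:
    amplitude = <T, pulse>_w / <T, T>_w with weights w = conj(T)/PSD.
    If the pulse is a * template (noise-free), this returns exactly a."""
    P = np.fft.rfft(pulse)
    T = np.fft.rfft(template)
    w = np.conj(T) / noise_psd            # noise-whitened matched filter
    return float(np.real((w * P).sum() / (w * T).sum()))
```

    The estimated amplitude is then mapped to energy through a calibration curve; the triggering stage that isolates each pulse is not reproduced here.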

  8. Down-regulation of polycystin in lymphatic malformations: possible role in the proliferation of lymphatic endothelial cells.

    PubMed

    Ren, Jian-Gang; Xia, Hou-Fu; Yang, Jie-Gang; Zhu, Jun-Yi; Zhang, Wei; Chen, Gang; Zhao, Ji-Hong; Sun, Yan-Fang; Zhao, Yi-Fang

    2017-07-01

    Lymphatic malformations (LMs) are composed of aberrant lymphatic vessels and regarded as benign growths of the lymphatic system. Recent studies have demonstrated that the mutant embryos of PKD1 and PKD2, encoding polycystin-1 (PC-1) and polycystin-2 (PC-2), respectively, result in aberrant lymphatic vessels similar to those observed in LMs. In this study, for the first time, we investigated PC-1 and PC-2 expression and assessed their roles in the development of LMs. Our results demonstrated that PC-1 and PC-2 gene and protein expressions were obviously decreased in LMs compared with normal skin tissues. In addition, the expression of phosphorylated ERK but not total ERK was up-regulated in LMs and negatively correlated with the expression of PC-1 and PC-2. Moreover, up-regulation of Ki67 was detected in LMs and positively correlated with ERK phosphorylation levels. Furthermore, cluster analysis better reflected close correlation between these signals. All of the above results provided strong evidence suggesting that the hyperactivation of the ERK pathway may be caused by down-regulation of PC-1 and PC-2 in LMs, contributing to increased proliferation of lymphatic endothelial cells in LMs. Our present study sheds light on novel potential mechanisms involved in LMs and may help to explore novel treatments for LMs. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Monochromatic-beam-based dynamic X-ray microtomography based on OSEM-TV algorithm.

    PubMed

    Xu, Liang; Chen, Rongchang; Yang, Yiming; Deng, Biao; Du, Guohao; Xie, Honglan; Xiao, Tiqiao

    2017-01-01

    Monochromatic-beam-based dynamic X-ray computed microtomography (CT) was developed to observe the evolution of microstructure inside samples. However, the low flux density makes data collection inefficient. Reducing the number of projections is a practical way to increase efficiency, but it degrades image quality when the traditional filtered back projection (FBP) algorithm is used for reconstruction. In this study, an iterative reconstruction method using an ordered subset expectation maximization-total variation (OSEM-TV) algorithm was employed to solve this problem. Simulations demonstrated that the normalized mean square error of image slices reconstructed by the OSEM-TV algorithm was about 1/4 of that by FBP. Experimental results also demonstrated that the density resolution of OSEM-TV was high enough to resolve different materials with fewer than 100 projections. As a result, with the introduction of OSEM-TV, monochromatic-beam-based dynamic X-ray microtomography becomes practicable for quantitative, non-destructive analysis of microstructure evolution with acceptable data-collection efficiency and reconstructed image quality.
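    A 1D toy of the OSEM-TV idea, assuming a dense system matrix, a row-interleaved subset split and a quadratic surrogate for the TV step; none of these are claimed to match the authors' implementation:

```python
import numpy as np

def osem_tv(A, b, n_iter=20, n_subsets=4, tv_weight=0.0):
    """Ordered-subset EM with an optional smoothing step after each full pass.
    Each subset applies the multiplicative EM update
        x <- x * A_s^T (b_s / A_s x) / A_s^T 1,
    which keeps x non-negative; the TV-like step pushes neighbors together."""
    m, n = A.shape
    x = np.ones(n)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As, bs = A[rows], b[rows]
            ratio = bs / np.clip(As @ x, 1e-12, None)
            x *= (As.T @ ratio) / np.clip(As.T @ np.ones(len(rows)), 1e-12, None)
        if tv_weight:
            # Gradient step on a quadratic surrogate of the TV penalty.
            grad = np.zeros(n)
            d = np.diff(x)
            grad[:-1] -= d
            grad[1:] += d
            x = np.clip(x - tv_weight * grad, 0, None)
    return x
```

    With few projections (few rows in A), the EM data term alone is underdetermined; the TV term supplies the missing regularization, which is why OSEM-TV tolerates fewer than 100 projections.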

  10. Experimental evaluation of ALS point cloud ground extraction over different land cover in the Malopolska Province

    NASA Astrophysics Data System (ADS)

    Korzeniowska, Karolina; Mandlburger, Gottfried; Klimczyk, Agata

    2013-04-01

    The paper presents an evaluation of different terrain point extraction algorithms for Airborne Laser Scanning (ALS) point clouds. The research area covers eight test sites in the Małopolska Province (Poland) with point densities varying between 3-15 points/m² and diverse surface and land cover characteristics. Existing implementations of algorithms were considered: approaches based on mathematical morphology, progressive densification, robust surface interpolation and segmentation were compared. From the group of morphological filters, the Progressive Morphological Filter (PMF) proposed by Zhang K. et al. (2003), as implemented in LIS software, was evaluated. From the progressive densification methods developed by Axelsson P. (2000), Martin Isenburg's implementation in LAStools (LAStools, 2012) was chosen. The third group comprises surface-based filters; in this study we used the hierarchic robust interpolation approach by Kraus K., Pfeifer N. (1998) as implemented in SCOP++ (Trimble, 2012). The fourth group works on segmentation; from this filtering concept, the segmentation algorithm available in LIS was tested (Wichmann V., 2012). The automatic classification for ground extraction was executed in default mode or with the default parameters selected by the developers of the algorithms, on the assumption that the default settings yield the best achievable results. Where an algorithm could not be applied in default mode, a combination of the available parameters most crucial for ground extraction was selected. These analyses produced several output LAS files with different ground classifications. The results were described on the basis of qualitative and quantitative analyses, and the classification differences were verified on the point cloud data.
    Qualitative verification of ground extraction was based on visual inspection of the results (Sithole G., Vosselman G., 2004; Meng X. et al., 2010), summarized as a weighted graph. The quantitative analyses were evaluated on the basis of Type I, Type II and Total errors (Sithole G., Vosselman G., 2003). The results show that the analysed algorithms yield different classification accuracies depending on landscape and land cover. The simplest terrain for ground extraction was flat rural area with sparse vegetation; the most difficult were mountainous areas with very dense vegetation, where only a few ground points were available. Generally, the LAStools algorithm gives good results in every type of terrain, but the resulting ground surface is too smooth. The LIS Progressive Morphological Filter gives good results in forested flat and low-slope areas. The surface-based algorithm in SCOP++ gives good results in mountainous areas, both forested and built-up, because it better preserves steep slopes, sharp ridges and breaklines, but it sometimes fails to remove off-terrain objects from the ground class. The segmentation-based algorithm in LIS gives quite good results in built-up flat areas, but does not work well in forested areas. Bibliography: Axelsson, P., 2000. DEM generation from laser scanner data using adaptive TIN models. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XXXIII (Pt. B4/1), 110-117 Kraus, K., Pfeifer, N., 1998. Determination of terrain models in wooded areas with airborne laser scanner data. ISPRS Journal of Photogrammetry & Remote Sensing 53 (4), 193-203 LAStools website http://www.cs.unc.edu/~isenburg/lastools/ (verified in September 2012) Meng, X., Currit, N., Zhao, K., 2010. Ground Filtering Algorithms for Airborne LiDAR Data: A Review of Critical Issues. Remote Sensing 2, 833-860 Sithole, G., Vosselman, G., 2003. 
Report: ISPRS Comparison of Filters. Commission III, Working Group 3. Department of Geodesy, Faculty of Civil Engineering and Geosciences, Delft University of Technology, The Netherlands Sithole, G., Vosselman, G., 2004. Experimental comparison of filter algorithms for bare-Earth extraction from airborne laser scanning point clouds. ISPRS Journal of Photogrammetry & Remote Sensing 59, 85-101 Trimble, 2012 http://www.trimble.com/geospatial/aerial-software.aspx (verified in November 2012) Wichmann, V., 2012. LIS Command Reference, LASERDATA GmbH, 1-231 Zhang, K., Chen, S.-C., Whitman, D., Shyu, M.-L., Yan, J., Zhang, C., 2003. A progressive morphological filter for removing non-ground measurements from airborne LIDAR data. IEEE Transactions on Geoscience and Remote Sensing, 41(4), 872-882
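    A 1D sketch of the progressive morphological filter of Zhang et al. (2003): open the elevation profile with growing windows and drop points that rise above the opened surface by more than a height threshold. The window sizes and thresholds below are illustrative, not the defaults of any of the tested packages:

```python
import numpy as np

def grey_open(z, w):
    """Grey-scale opening (erosion then dilation) with a window of w cells (w odd)."""
    pad = w // 2
    zp = np.pad(z, pad, mode="edge")
    eroded = np.lib.stride_tricks.sliding_window_view(zp, w).min(axis=1)
    ep = np.pad(eroded, pad, mode="edge")
    return np.lib.stride_tricks.sliding_window_view(ep, w).max(axis=1)

def progressive_morphological_ground(z, windows=(3, 5, 9), thresholds=(0.5, 1.0, 2.0)):
    """Classify profile cells as ground.  Small windows remove small objects
    (cars, shrubs); larger windows with larger thresholds remove buildings
    while sparing gentle terrain relief."""
    ground = np.ones(len(z), dtype=bool)
    surface = z.astype(float).copy()
    for w, t in zip(windows, thresholds):
        opened = grey_open(surface, w)
        ground &= (surface - opened) <= t
        surface = opened
    return ground
```

    Real implementations work on a gridded 2D surface and tie the threshold progression to the terrain slope, which is what distinguishes the packages compared above.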

  11. Transfer of Materials from Water to Solid Surfaces Using Liquid Marbles.

    PubMed

    Kawashima, Hisato; Paven, Maxime; Mayama, Hiroyuki; Butt, Hans-Jürgen; Nakamura, Yoshinobu; Fujii, Syuji

    2017-09-27

    Remotely controlling the movement of small objects is desirable, especially for the transportation and selection of materials. Transfer of objects between liquid and solid surfaces and triggering their release would allow for development of novel material transportation technology. Here, we describe the remote transport of a material from a water film surface to a solid surface using quasispherical liquid marbles (LMs). A light-induced Marangoni flow or an air stream is used to propel the LMs on water. As the LMs approach the rim of the water film, gravity forces them to slide down the water rim and roll onto the solid surface. Through this method, LMs can be efficiently moved on water and placed on a solid surface. The materials encapsulated within LMs can be released at a specific time by an external stimulus. We analyzed the velocity, acceleration, and force of the LMs on the liquid and solid surfaces. On water, the sliding friction due to the drag force resists the movement of the LMs. On a solid surface, the rolling distance is affected by the surface roughness of the LMs.

  12. Establishment and proteomic characterization of NCC-LMS1-C1, a novel cell line of primary leiomyosarcoma of the bone.

    PubMed

    Sakumoto, Marimu; Takahashi, Mami; Oyama, Rieko; Takai, Yoko; Kito, Fusako; Shiozawa, Kumiko; Qiao, Zhiwei; Yoshida, Akihiko; Endo, Makoto; Kawai, Akira; Kondo, Tadashi

    2017-10-01

    Leiomyosarcoma (LMS) is one of the most aggressive mesenchymal malignancies that differentiate towards smooth muscle. The clinical outcome of LMS patients is poor; as such, there is an urgent need for novel therapeutic approaches. Experimental models such as patient-derived cell lines are invaluable tools for pre-clinical studies. In the present study, we established a stable cell line from the tumor tissue of a patient with a primary LMS of the bone. Despite the urgent need for novel therapeutic strategies in LMS, there are only a few LMS cell lines available in public cell banks, none of which are primary to the bone. Bone primary LMS tumor tissues were sampled to establish cell lines. Morphological and proteomic analyses were performed and sensitivity to pazopanib was evaluated. NCC-LMS1-C1 cells were maintained for over 100 passages. The cells exhibited a spindle shape and aggressive growth; they also expressed smooth muscle actin, reflecting the original LMS tissue (i.e. smooth muscle cells). The cells also showed tumor characteristics such as colony formation on soft agar and sensitivity to pazopanib, doxorubicin and cisplatin, with half-maximal inhibitory concentrations of 4.5, 0.11 and 20 μM, respectively. Proteomic analyses by mass spectrometry and antibody array revealed some differences in the protein expression profiles of these cells as compared to the original tumor tissue. Our results indicate that the NCC-LMS1-C1 cell line will be useful for LMS research. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  13. SU-E-I-37: Low-Dose Real-Time Region-Of-Interest X-Ray Fluoroscopic Imaging with a GPU-Accelerated Spatially Different Bilateral Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, H; Lee, J; Pua, R

    2014-06-01

    Purpose: The purpose of our study is to reduce imaging radiation dose while maintaining image quality of region of interest (ROI) in X-ray fluoroscopy. A low-dose real-time ROI fluoroscopic imaging technique which includes graphics-processing-unit- (GPU-) accelerated image processing for brightness compensation and noise filtering was developed in this study. Methods: In our ROI fluoroscopic imaging, a copper filter is placed in front of the X-ray tube. The filter contains a round aperture to reduce radiation dose to outside of the aperture. To equalize the brightness difference between inner and outer ROI regions, brightness compensation was performed by use of a simple weighting method that applies selectively to the inner ROI, the outer ROI, and the boundary zone. A bilateral filtering was applied to the images to reduce relatively high noise in the outer ROI images. To speed up the calculation of our technique for real-time application, the GPU-acceleration was applied to the image processing algorithm. We performed a dosimetric measurement using an ion-chamber dosimeter to evaluate the amount of radiation dose reduction. The reduction of calculation time compared to a CPU-only computation was also measured, and the assessment of image quality in terms of image noise and spatial resolution was conducted. Results: More than 80% of dose was reduced by use of the ROI filter. The reduction rate depended on the thickness of the filter and the size of ROI aperture. The image noise outside the ROI was remarkably reduced by the bilateral filtering technique. The computation time for processing each frame image was reduced from 3.43 seconds with single CPU to 9.85 milliseconds with GPU-acceleration. Conclusion: The proposed technique for X-ray fluoroscopy can substantially reduce imaging radiation dose to the patient while maintaining image quality particularly in the ROI region in real-time.
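    The bilateral filtering applied to the noisy outer-ROI images can be sketched as a brute-force numpy loop; the GPU kernel itself is not reproduced, and the radius and sigma values are assumptions:

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Bilateral filter: each pixel becomes a spatially weighted average
    restricted to neighbors of similar intensity, which suppresses noise
    while keeping edges.  sigma_s controls the spatial reach, sigma_r how
    different an intensity may be and still contribute."""
    h, w = img.shape
    pad = np.pad(img.astype(float), radius, mode="edge")
    out = np.zeros((h, w))
    norm = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            weight = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2)
                            - (shifted - img) ** 2 / (2 * sigma_r ** 2))
            out += weight * shifted
            norm += weight
    return out / norm
```

    Because every pixel's neighborhood is independent, the double loop maps directly onto one GPU thread per pixel, which is what makes the reported millisecond frame times plausible.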

  14. Tomographic Neutron Imaging using SIRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gregor, Jens; FINNEY, Charles E A; Toops, Todd J

    2013-01-01

    Neutron imaging is complementary to x-ray imaging in that materials such as water and plastic are highly attenuating while material such as metal is nearly transparent. We showcase tomographic imaging of a diesel particulate filter. Reconstruction is done using a modified version of SIRT called PSIRT. We expand on previous work and introduce Tikhonov regularization. We show that near-optimal relaxation can still be achieved. The algorithmic ideas apply to cone beam x-ray CT and other inverse problems.
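    A dense toy version of the SIRT update x ← x + λ C Aᵀ R (b − A x), with optional Tikhonov damping; the sparse, cone-beam details of PSIRT are not reproduced, and `tik` is an assumed parameter name:

```python
import numpy as np

def sirt(A, b, n_iter=500, relax=1.0, tik=0.0):
    """SIRT iteration with R and C holding the inverse row and column sums
    of |A| (the standard SIRT scaling), plus an optional Tikhonov term
    -relax*tik*x that damps noise amplification."""
    row = 1.0 / np.clip(np.abs(A).sum(axis=1), 1e-12, None)   # R diagonal
    col = 1.0 / np.clip(np.abs(A).sum(axis=0), 1e-12, None)   # C diagonal
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += relax * col * (A.T @ (row * (b - A @ x))) - relax * tik * x
    return x
```

    With this scaling the iteration converges for relaxation factors in (0, 2), which is the freedom the near-optimal relaxation in the abstract exploits.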

  15. LMS Use and Instructor Performance: The Role of Task-Technology Fit

    ERIC Educational Resources Information Center

    McGill, Tanya; Klobas, Jane; Renzi, Stefano

    2011-01-01

    The introduction of learning management systems (LMS) has changed the way in which instructors work. This paper uses Goodhue and Thompson's (1995) technology-to-performance chain (TPC) to explore the roles of task-technology fit (TTF) and level of LMS use in the performance impacts of LMS for instructors. A mixed method approach was used: an…

  16. 47 CFR 90.353 - LMS operations in the 902-928 MHz band.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... band. (b) LMS systems are authorized to transmit status and instructional messages, either voice or non-voice, so long as they are related to the location or monitoring functions of the system. (c) LMS... subparts B and C of this part. (d) Multilateration LMS systems will be authorized on a primary basis within...

  17. 47 CFR 90.353 - LMS operations in the 902-928 MHz band.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... band. (b) LMS systems are authorized to transmit status and instructional messages, either voice or non-voice, so long as they are related to the location or monitoring functions of the system. (c) LMS... subparts B and C of this part. (d) Multilateration LMS systems will be authorized on a primary basis within...

  18. 47 CFR 90.353 - LMS operations in the 902-928 MHz band.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... band. (b) LMS systems are authorized to transmit status and instructional messages, either voice or non-voice, so long as they are related to the location or monitoring functions of the system. (c) LMS... subparts B and C of this part. (d) Multilateration LMS systems will be authorized on a primary basis within...

  19. 47 CFR 90.353 - LMS operations in the 902-928 MHz band.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... band. (b) LMS systems are authorized to transmit status and instructional messages, either voice or non-voice, so long as they are related to the location or monitoring functions of the system. (c) LMS... subparts B and C of this part. (d) Multilateration LMS systems will be authorized on a primary basis within...

  20. 47 CFR 90.353 - LMS operations in the 902-928 MHz band.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... band. (b) LMS systems are authorized to transmit status and instructional messages, either voice or non-voice, so long as they are related to the location or monitoring functions of the system. (c) LMS... subparts B and C of this part. (d) Multilateration LMS systems will be authorized on a primary basis within...

  1. Novel X-ray Communication Based XNAV Augmentation Method Using X-ray Detectors

    PubMed Central

    Song, Shibin; Xu, Luping; Zhang, Hua; Bai, Yuanjie

    2015-01-01

    The further development of X-ray pulsar-based NAVigation (XNAV) is hindered by its lack of accuracy, so accuracy improvement has become a critical issue for XNAV. In this paper, an XNAV augmentation method which utilizes both pulsar observation and X-ray ranging observation for navigation filtering is proposed to deal with this issue. As a newly emerged concept, X-ray communication (XCOM) shows great potential in space exploration. X-ray ranging, derived from XCOM, could achieve high accuracy in range measurement, which could provide accurate information for XNAV. For the proposed method, the measurement models of pulsar observation and range measurement observation are established, and a Kalman filtering algorithm based on the observations and orbit dynamics is proposed to estimate the position and velocity of a spacecraft. A performance comparison of the proposed method with the traditional pulsar observation method is conducted through numerical experiments. In addition, the parameters that influence the performance of the proposed method, such as the pulsar observation time, the SNR of the ranging signal, etc., are analyzed and evaluated by numerical experiments. PMID:26404295
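    The navigation filter rests on the standard Kalman predict/update cycle; the dynamics and measurement matrices below are generic placeholders, not the paper's orbit-dynamics and pulsar/ranging models:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    F: state transition, H: measurement matrix, Q/R: process and
    measurement noise covariances."""
    # Predict through the dynamics.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

    In the augmented scheme above, pulsar timing and X-ray ranging simply contribute different rows of H and R, so both observation types are fused by the same update.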

  2. A GPU-Parallelized Eigen-Based Clutter Filter Framework for Ultrasound Color Flow Imaging.

    PubMed

    Chee, Adrian J Y; Yiu, Billy Y S; Yu, Alfred C H

    2017-01-01

    Eigen-filters with attenuation response adapted to clutter statistics in color flow imaging (CFI) have shown improved flow detection sensitivity in the presence of tissue motion. Nevertheless, their adoption in clinical practice is not straightforward due to the high computational cost of solving eigendecompositions. Here, we provide a pedagogical description of how a real-time computing framework for eigen-based clutter filtering can be developed through a single-instruction, multiple data (SIMD) computing approach that can be implemented on a graphical processing unit (GPU). Emphasis is placed on the single-ensemble-based eigen-filtering approach (Hankel singular value decomposition), since it is algorithmically compatible with GPU-based SIMD computing. The key algebraic principles and the corresponding SIMD algorithm are explained, and annotations on how such an algorithm can be rationally implemented on the GPU are presented. Real-time efficacy of our framework was experimentally investigated on a single GPU device (GTX Titan X), and the computing throughput for varying scan depths and slow-time ensemble lengths was studied. Using our eigen-processing framework, real-time video-range throughput (24 frames/s) can be attained for CFI frames with full view in azimuth direction (128 scanlines), up to a scan depth of 5 cm (λ pixel axial spacing) for slow-time ensemble length of 16 samples. The corresponding CFI image frames, with respect to the ones derived from non-adaptive polynomial regression clutter filtering, yielded enhanced flow detection sensitivity in vivo, as demonstrated in a carotid imaging case example. These findings indicate that the GPU-enabled eigen-based clutter filtering can improve CFI flow detection performance in real time.
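    The single-ensemble Hankel-SVD filter can be sketched in numpy: build a Hankel matrix from one pixel's slow-time samples, null the dominant singular components (which capture the strong, slowly varying tissue clutter), and average anti-diagonals back into a signal. The matrix shape and `n_clutter` are assumptions:

```python
import numpy as np

def hankel_svd_filter(ensemble, n_clutter=1):
    """Suppress clutter in one pixel's slow-time ensemble via Hankel SVD.
    The dominant singular components model tissue; zeroing them leaves the
    weaker, faster-varying blood flow signal."""
    n = len(ensemble)
    rows = n // 2 + 1
    H = np.array([ensemble[i:i + n - rows + 1] for i in range(rows)])
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    s[:n_clutter] = 0.0                      # null the clutter subspace
    Hf = (U * s) @ Vh
    # Average each anti-diagonal back into a 1D slow-time signal.
    out = np.zeros(n, dtype=Hf.dtype)
    counts = np.zeros(n)
    for i in range(Hf.shape[0]):
        for j in range(Hf.shape[1]):
            out[i + j] += Hf[i, j]
            counts[i + j] += 1
    return out / counts
```

    Because every pixel's ensemble is processed independently, this maps naturally onto one SIMD work-item per pixel, which is the property the GPU framework exploits.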

  3. Fast ultrasound-assisted synthesis of Li2MnSiO4 nanoparticles for a lithium-ion battery

    NASA Astrophysics Data System (ADS)

    Hwang, Chahwan; Kim, Taejin; Shim, Joongpyo; Kwak, Kyungwon; Ok, Kang Min; Lee, Kyung-Koo

    2015-10-01

    High-capacity Li2MnSiO4/C (LMS/C MBS) nanoparticles have been prepared using sonochemistry under a multibubble sonoluminescence (MBS) condition, and their physical and electrochemical properties were characterized. The results show that LMS/C MBS nanoparticles exhibit a nearly pure crystalline phase with orthorhombic structure and have a spherical shape and a uniform particle size distribution centered at a diameter of 22.5 nm. Galvanostatic charge-discharge measurements reveal that LMS/C MBS delivers an initial discharge capacity of about 260 mA h g-1 at a current rate of 16.5 mA g-1 in the voltage range of 1.5-4.8 V (vs. Li/Li+), while LMS MBS (LMS without a carbon source under MBS) and LMS/C SG (LMS with a carbon source using the conventional sol-gel method) possess lower capacities of 168 and 9 mA h g-1, respectively. The improved electrochemical performance of LMS/C MBS can be ascribed to the uniform nanoparticle size, mesoporous structure, and in-situ carbon coating, which can enhance the electronic conductivity as well as the lithium ion diffusion coefficient.

  4. Hyper-X Mach 10 Trajectory Reconstruction

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Martin, John G.; Tartabini, Paul V.; Thornblom, Mark N.

    2005-01-01

    This paper discusses the formulation and development of a trajectory reconstruction tool for the NASA X-43A/Hyper-X high speed research vehicle, and its implementation for the reconstruction and analysis of flight test data. Extended Kalman filtering techniques are employed to reconstruct the trajectory of the vehicle, based upon numerical integration of inertial measurement data along with redundant measurements of the vehicle state. The equations of motion are formulated in order to include the effects of several systematic error sources, whose values may also be estimated by the filtering routines. Additionally, smoothing algorithms have been implemented in which the final value of the state (or an augmented state that includes other systematic error parameters to be estimated) and covariance are propagated back to the initial time to generate the best-estimated trajectory, based upon all available data. The methods are applied to the problem of reconstructing the trajectory of the Hyper-X vehicle from data obtained during the Mach 10 test flight, which occurred on November 16th 2004.
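    The backward smoothing pass described above is commonly realized as a Rauch-Tung-Striebel smoother; the linear dynamics below are a placeholder for the paper's augmented-state model:

```python
import numpy as np

def rts_smoother(xs, Ps, F, Q):
    """Rauch-Tung-Striebel backward pass: starting from the final filtered
    state, propagate information back to earlier times so each estimate is
    conditioned on *all* the data.  xs/Ps are the forward-filter states and
    covariances; dynamics are the generic x_{k+1} = F x_k."""
    xs_s, Ps_s = [xs[-1]], [Ps[-1]]
    for k in range(len(xs) - 2, -1, -1):
        Pp = F @ Ps[k] @ F.T + Q                    # predicted covariance
        G = Ps[k] @ F.T @ np.linalg.inv(Pp)         # smoother gain
        xs_s.insert(0, xs[k] + G @ (xs_s[0] - F @ xs[k]))
        Ps_s.insert(0, Ps[k] + G @ (Ps_s[0] - Pp) @ G.T)
    return xs_s, Ps_s
```

    The smoothed trajectory is the "best-estimated trajectory" in the sense of the abstract: unlike the forward filter, every epoch benefits from measurements taken after it.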

  5. Deconvolution by Homomorphic and Wiener Filtering

    DTIC Science & Technology

    1988-09-01

The excerpt reproduces the report's phase-unwrapping development: the unwrapped phase arg X(e^jω) is obtained by integrating its derivative, and the algorithm adapts the step size Δω until the phase estimate at ω_(i+1) is consistent with the value at ω_i to within a threshold THLD1, under the assumption that both X(z) and its estimate are analytic in a region including the unit circle.

  6. A rare case of leiomyosarcoma originating from the left round ligament of the uterus.

    PubMed

    Kaba, Metin; Tokmak, Aytekin; Timur, Hakan; Özdal, Bülent; Şirvan, Levent; Güngör, Tayfun

    2016-07-01

Uterine leiomyosarcomas (LMS) are rare malignancies with a poor prognosis. The incidence is reported to be 3-7 per 100,000 per year. Preoperative and intraoperative differentiation between LMS and large leiomyoma is always challenging. Therefore, LMS are often diagnosed during postoperative histologic evaluation of hysterectomy or myomectomy specimens. LMS of the round ligament of the uterus, which can present as an inguinal or pelvic mass, is extremely rare. To our knowledge, only one case report of LMS arising from the round ligament is available in the literature. Herein, we present the second case of LMS originating from the left round ligament of the uterus in a premenopausal woman, initially misdiagnosed as an ovarian tumor. © 2016 Old City Publishing, Inc.

  7. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

Image reconstruction is a key step in medical imaging (MI), and the performance of its algorithm determines the quality and resolution of the reconstructed image. Although other algorithms exist, filtered back-projection (FBP) remains the classical and most commonly used algorithm in clinical MI. In FBP, filtering of the original projection data is a key step in suppressing artifacts in the reconstructed image. Simple use of classical filters such as the Shepp-Logan (SL) and Ram-Lak (RL) filters has drawbacks and limitations in practice, especially for projection data corrupted by non-stationary random noise. Therefore, in this paper an improved wavelet denoising method combined with the parallel-beam FBP algorithm is used to enhance the quality of the reconstructed image. In the experiments, the reconstruction results of the improved wavelet denoising method were compared with those of other methods (direct FBP, mean filtering combined with FBP, and median filtering combined with FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms under two evaluation criteria, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), the best reconstruction was obtained by the improved FBP based on the db2 wavelet and the Hanning filter at decomposition scale 2: its MSE was lower and its PSNR higher than the others. This improved FBP algorithm therefore has potential value in medical imaging.
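As a concrete illustration of the filtering step, a Ram-Lak (ramp) filter with an optional Hanning apodization can be applied to a single parallel-beam projection in the frequency domain. This is a minimal sketch of the classical filters the paper compares against, not its wavelet-denoising variant; the discretization and names are illustrative.

```python
import numpy as np

def filter_projection(proj, window="hann"):
    """Frequency-domain filtering of one parallel-beam projection:
    multiply by the Ram-Lak ramp |f|, optionally tapered by a
    Hanning window to suppress high-frequency noise."""
    freqs = np.fft.fftfreq(proj.size)           # cycles/sample
    response = np.abs(freqs)                    # Ram-Lak ramp
    if window == "hann":
        response *= 0.5 * (1 + np.cos(2 * np.pi * freqs))
    return np.real(np.fft.ifft(np.fft.fft(proj) * response))

flat = np.ones(64)
out = filter_projection(flat)                   # ramp zeroes the DC level
```

Since the ramp response is zero at DC, a flat projection filters to (numerically) zero; in a full FBP pipeline each filtered projection is then backprojected over all acquisition angles.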

  8. Optimization of image quality and acquisition time for lab-based X-ray microtomography using an iterative reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko

    2018-05-01

    Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is its much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application for dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing imaging quality, e.g. reducing exposure time or number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithm was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal to noise ratio.

  9. Three-dimensional anisotropic adaptive filtering of projection data for noise reduction in cone beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maier, Andreas; Wigstroem, Lars; Hofmann, Hannes G.

    2011-11-15

Purpose: The combination of a quickly rotating C-arm gantry with a digital flat panel has enabled the acquisition of three-dimensional (3D) data in the interventional suite. However, image quality is still somewhat limited since the hardware has not been optimized for CT imaging. Adaptive anisotropic filtering has the ability to improve image quality by reducing the noise level, and thereby the radiation dose, without introducing noticeable blurring. By applying the filtering prior to 3D reconstruction, noise-induced streak artifacts are reduced as compared to processing in the image domain. Methods: 3D anisotropic adaptive filtering was used to process an ensemble of 2D x-ray views acquired along a circular trajectory around an object. After arranging the input data into a 3D space (2D projections + angle), the orientation of structures was estimated using a set of differently oriented filters. The resulting tensor representation of local orientation was utilized to control the anisotropic filtering. Low-pass filtering is applied only along structures, to maintain high spatial frequency components perpendicular to them. The evaluation of the proposed algorithm includes numerical simulations, phantom experiments, and in-vivo data acquired using an AXIOM Artis dTA C-arm system (Siemens AG, Healthcare Sector, Forchheim, Germany). Spatial resolution and noise levels were compared with and without adaptive filtering. A human observer study was carried out to evaluate low-contrast detectability. Results: The adaptive anisotropic filtering algorithm was found to significantly improve low-contrast detectability by reducing the noise level by half (reduction of the standard deviation in certain areas from 74 to 30 HU). Virtually no degradation of high-contrast spatial resolution was observed in the modulation transfer function (MTF) analysis. Although the algorithm is computationally intensive, hardware acceleration using Nvidia's CUDA interface provided an 8.9-fold speed-up of the processing (from 1336 to 150 s). Conclusions: Adaptive anisotropic filtering has the potential to substantially improve image quality and/or reduce the radiation dose required for obtaining 3D image data using cone beam CT.

  10. Optimal Design of Passive Power Filters Based on Pseudo-parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Pei; Li, Hongbo; Gao, Nannan; Niu, Lin; Guo, Liangfeng; Pei, Ying; Zhang, Yanyan; Xu, Minmin; Chen, Kerui

    2017-05-01

Economic cost and filter efficiency are taken together as the objectives for optimizing the parameters of a passive power filter. The method combines a pseudo-parallel genetic algorithm with an adaptive genetic algorithm: in the early stages, the pseudo-parallel genetic algorithm is used to increase population diversity, and in the late stages the adaptive genetic algorithm is used to reduce the computational workload. In addition, the migration rate of the pseudo-parallel genetic algorithm is modified to change adaptively with population diversity. Simulation results show that a filter designed by the proposed method achieves a better filtering effect at lower economic cost and is suitable for engineering use.

  11. On the precision of automated activation time estimation

    NASA Technical Reports Server (NTRS)

    Kaplan, D. T.; Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.

    1988-01-01

We examined how the assignment of local activation times in epicardial and endocardial electrograms is affected by sampling rate, ambient signal-to-noise ratio, and sin(x)/x waveform interpolation. Algorithms used for the estimation of fiducial point locations included dV/dt_max and a matched-filter detection algorithm. Test signals included epicardial and endocardial electrograms overlying both normal and infarcted regions of dog myocardium. Signal-to-noise levels were adjusted by combining known data sets with white noise "colored" to match the spectral characteristics of experimentally recorded noise. For typical signal-to-noise ratios and sampling rates, the template-matching algorithm provided the greatest precision in reproducibly estimating fiducial point location, and sin(x)/x interpolation allowed an additional significant improvement. With few restrictions, combining these two techniques may allow the use of digitization rates below the Nyquist rate without significant loss of precision.
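The template-matching idea can be sketched as a matched filter that slides a known deflection shape over the electrogram and takes the lag of maximum cross-correlation as the fiducial point. The waveform, noise level, and sample positions below are illustrative, not the paper's data.

```python
import numpy as np

def matched_filter_fiducial(signal, template):
    """Locate a fiducial point by cross-correlating a matched filter
    (the known template) with the signal and taking the lag of
    maximum correlation."""
    corr = np.correlate(signal, template, mode="valid")
    return int(np.argmax(corr))

t = np.linspace(0, 1, 50)
template = np.exp(-((t - 0.5) ** 2) / 0.01)    # Gaussian deflection
signal = np.zeros(500)
signal[200:250] += template                    # embed deflection at sample 200
signal += 0.05 * np.random.default_rng(1).normal(size=signal.size)
loc = matched_filter_fiducial(signal, template)
```

At typical signal-to-noise ratios the correlation peak is far more reproducible than a simple derivative-maximum estimate, which is the point the abstract makes.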

  12. Detector sustainability improvements at LCLS

    NASA Astrophysics Data System (ADS)

    Browne, Michael C.; Carini, Gabriella; DePonte, Daniel P.; Galtier, Eric C.; Hart, Philip A.; Koralek, J. D.; Mitra, Ankush; Nakahara, Kazutaka

    2017-06-01

    The Linac Coherent Light Source (LCLS) poses a number of daunting and often unusual challenges to maintaining X-ray detectors, such as proximity to liquid-sample injectors, complex setups with moving components, intense X-ray and optical laser light, and Electromagnetic Pulse (EMP). The Detector and Sample Environment departments at LCLS are developing an array of engineering, monitoring, and administrative controls solutions to better address these issues. These include injector improvements and monitoring methods, fast online damage recognition algorithms, EMP mapping and protection, actively cooled filters, and more.

  13. Intelligent Control for Drag Reduction on the X-48B Vehicle

    NASA Technical Reports Server (NTRS)

    Griffin, Brian Joseph; Brown, Nelson Andrew; Yoo, Seung Yeun

    2011-01-01

This paper focuses on the development of an intelligent control technology for in-flight drag reduction. The system is integrated with and demonstrated on the full X-48B nonlinear simulation. The intelligent control system utilizes a peak-seeking control method implemented with a time-varying Kalman filter. Performance-function coordinate and magnitude measurements (the independent and dependent parameters, respectively) are used by the Kalman filter to provide the system with gradient estimates of the designed performance function, which are used to drive the system toward a local minimum in a steepest-descent approach. To ensure ease of integration and algorithm performance, a single-input single-output approach was chosen. The framework, specific implementation considerations, simulation results, and flight feasibility issues related to this platform are discussed.
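The peak-seeking loop can be caricatured in a few lines: perturb the commanded parameter, estimate the local gradient of the measured performance function, and step steepest-descent toward the minimum. Here a plain finite-difference estimate stands in for the paper's time-varying Kalman-filter gradient estimator, and the quadratic "drag" function is purely illustrative.

```python
def peak_seek(f, x0, step=0.2, eps=1e-3, iters=200):
    """Drive x toward a local minimum of a measured performance
    function f: estimate the gradient from small perturbations
    (a finite-difference stand-in for the Kalman-filter gradient
    estimate) and take a steepest-descent step."""
    x = x0
    for _ in range(iters):
        grad = (f(x + eps) - f(x - eps)) / (2 * eps)  # local slope
        x -= step * grad                              # descend
    return x

# toy drag-like performance function with its minimum at x = 1.5
drag = lambda x: (x - 1.5) ** 2 + 0.3
x_opt = peak_seek(drag, x0=0.0)
```

In the flight system the gradient estimate must be filtered because each "function evaluation" is a noisy measurement, which is why a Kalman filter is used in place of raw finite differences.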

  14. Fourier transform wavefront control with adaptive prediction of the atmosphere.

    PubMed

    Poyneer, Lisa A; Macintosh, Bruce A; Véran, Jean-Pierre

    2007-09-01

Predictive Fourier control is a temporal power-spectral-density-based adaptive method for adaptive optics that predicts the atmosphere under the assumption of frozen flow. The predictive controller is based on Kalman filtering and a Fourier decomposition of atmospheric turbulence using the Fourier transform reconstructor. It provides a stable way to compensate for arbitrary numbers of atmospheric layers. For each Fourier mode, efficient and accurate algorithms estimate the necessary atmospheric parameters from closed-loop telemetry and determine the predictive filter, adjusting as conditions change. This prediction improves atmospheric rejection, leading to significant improvements in system performance. For a 48×48 actuator system operating at 2 kHz, five-layer prediction for all modes is achievable in under 2×10⁹ floating-point operations/s.

  15. Transmission of broad W/Rh and W/Al (target/filter) x-ray beams operated at 25-49 kVp through common shielding materials.

    PubMed

    Li, Xinhua; Zhang, Da; Liu, Bob

    2012-07-01

To provide transmission data for broad 25-39 kVp (kilovolt peak) W/Rh and 25-49 kVp W/Al (target/filter; W, tungsten; Rh, rhodium; Al, aluminum) x-ray beams through common shielding materials, such as lead, concrete, gypsum wallboard, wood, steel, and plate glass. The unfiltered W-target x-ray spectra measured on a Selenia Dimensions system (Hologic Inc., Bedford, MA) set at 20-49 kVp were filtered using 50-μm Rh and 700-μm Al, respectively, and were subsequently used for Monte Carlo calculations. The transmission of broad x-ray beams through shielding materials was simulated using the Geant4 low-energy electromagnetic physics package with photon and electron processes above 250 eV, including the photoelectric effect, Compton scattering, and Rayleigh scattering. The calculated transmission data were fitted using the Archer equation with a robust fitting algorithm. The transmission of broad x-ray beams through the above-mentioned shielding materials was calculated down to about 10⁻⁵ for 25-39 kVp W/Rh and 25-49 kVp W/Al. The fitted values of α, β, and γ in the Archer equation are provided. The α values for kVp ≥ 40 were approximately consistent with those of NCRP Report No. 147. These data provide inputs for the shielding designs of x-ray imaging facilities with W-anode x-ray beams, such as the Selenia Dimensions.
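The Archer model referred to above has the closed form B(x) = [(1 + β/α) e^(αγx) − β/α]^(−1/γ), where B is the broad-beam transmission through shield thickness x and α, β, γ are fitted per beam quality and material. A minimal evaluator is sketched below; the parameter values in the example are illustrative only, not the paper's fitted α, β, γ.

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Archer broad-beam transmission model:
    B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]**(-1/gamma)
    with x the shield thickness and alpha, beta, gamma fit
    coefficients for a given beam quality and material."""
    r = beta / alpha
    return ((1 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

# illustrative parameters only (not the paper's fitted values)
b0 = archer_transmission(0.0, alpha=3.0, beta=20.0, gamma=0.8)  # no shield
b1 = archer_transmission(0.1, alpha=3.0, beta=20.0, gamma=0.8)
```

By construction B(0) = 1 (no attenuation through zero thickness) and B decreases monotonically with thickness, which is what makes the three-parameter form convenient for robust fitting of simulated transmission curves.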

  16. Atypical Uterine Smooth Muscle Tumors: A Retrospective Evaluation of Clinical and Pathologic Features.

    PubMed

    Maltese, Giuseppa; Fontanella, Caterina; Lepori, Stefano; Scaffa, Cono; Fucà, Giovanni; Bogani, Giorgio; Provenzano, Salvatore; Carcangiu, Maria Luisa; Raspagliesi, Francesco; Lorusso, Domenica

    2018-01-01

Clinical characteristics combined with new biomarkers help discriminate between atypical uterine smooth muscle tumors (AUSMT) and leiomyosarcomas (LMS). We retrospectively collected a series of leiomyomas (LM), AUSMT, and LMS. Estrogen receptor (ER), progesterone receptor (PR), p16, Ki-67, and p53 expression were assessed by immunohistochemistry. For AUSMT patients, immunohistochemistry evaluations were performed at the time of diagnosis and at recurrence. A total of 27 cases of AUSMT, 22 LM, and 31 LMS were identified. The expression of ER and PR decreased from LM to LMS (ER+: LM 95.5%, AUSMT 88.9%, LMS 41.9%, p < 0.001; PR+: LM 100%, AUSMT 88.9%, LMS 38.2%, p = 0.002). By contrast, p16 and p53 expression increased (p16+: LM 4.5%, AUSMT 40.7%, LMS 45.2%, p = 0.004; p53: LM 9.1%, AUSMT 33.3%, LMS 58.1%, p = 0.001). At a median follow-up of 33.47 months, 40.7% of patients with AUSMT experienced recurrent disease; 6 patients relapsed as AUSMT and 5 as LMS. Univariate analysis showed that ER status (p = 0.027) and p53 expression (p = 0.015) predicted risk of relapse. Treatment of AUSMT should be centralized in dedicated centers. International collaborations are needed to optimize the research strategy, which may lead to the identification of new useful biomarkers and to improvement in the clinical management of this rare disease. © 2017 S. Karger AG, Basel.

  17. A new optimized GA-RBF neural network algorithm.

    PubMed

    Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan

    2014-01-01

When confronting complex problems, a radial basis function (RBF) neural network has the advantages of adaptivity and self-learning, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to reduced learning ability and recognition precision. To address this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses a genetic algorithm to optimize the weights and structure of the RBF neural network through a new hybrid encoding with simultaneous optimization: binary encoding represents the number of hidden-layer neurons, real encoding represents the connection weights, and the two are optimized together. Because this leaves the connection-weight optimization incomplete, the least mean square (LMS) algorithm is then used for further learning, yielding the final algorithm model. Tests of the new algorithm on two UCI standard data sets show that it improves operating efficiency in dealing with complex problems and also improves recognition precision, demonstrating that the new algorithm is valid.
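The LMS further-learning step mentioned in the abstract is the same stochastic-gradient weight update that recurs throughout this collection: adjust the filter weights in proportion to the instantaneous error. A minimal system-identification sketch (the filter, step size, and signal lengths are illustrative, not the paper's GA-RBF setup) is:

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.05):
    """Least mean square (LMS) adaptive filter: adjust transversal
    weights w so that w . x_k tracks the desired signal d[k], using
    the stochastic-gradient update w += 2*mu*e*x_k."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for k in range(n_taps, len(x)):
        xk = x[k - n_taps + 1:k + 1][::-1]  # most recent sample first
        y[k] = w @ xk                       # filter output
        e = d[k] - y[k]                     # instantaneous error
        w += 2 * mu * e * xk                # gradient-descent step
    return w, y

rng = np.random.default_rng(2)
x = rng.normal(size=4000)                   # white excitation
h = np.array([0.6, -0.3, 0.1])              # unknown system to identify
d = np.convolve(x, h)[:len(x)]              # desired (system output)
w, y = lms_filter(x, d, n_taps=3, mu=0.02)  # w converges toward h
```

With noise-free desired data the weights converge to the unknown impulse response; in the GA-RBF setting the same update refines the hidden-to-output weights that the genetic search left only partially optimized.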

  18. Leiomyosarcoma: One disease or distinct biologic entities based on site of origin?

    PubMed

    Worhunsky, David J; Gupta, Mihir; Gholami, Sepideh; Tran, Thuy B; Ganjoo, Kristen N; van de Rijn, Matt; Visser, Brendan C; Norton, Jeffrey A; Poultsides, George A

    2015-06-01

    Leiomyosarcoma (LMS) can originate from the retroperitoneum, uterus, extremity, and trunk. It is unclear whether tumors of different origin represent discrete entities. We compared clinicopathologic features and outcomes following surgical resection of LMS stratified by site of origin. Patients with LMS undergoing resection at a single institution were retrospectively reviewed. Clinicopathologic variables were compared across sites. Survival was calculated using the Kaplan-Meier method and compared using log-rank and Cox regression analyses. From 1983 to 2011, 138 patients underwent surgical resection for LMS. Retroperitoneal and uterine LMS were larger, higher grade, and more commonly associated with synchronous metastases. However, disease-specific survival, recurrence-free survival, and recurrence patterns were not significantly different across the four sites. Synchronous metastases (HR 3.20, P < 0.001), but not site of origin, size, grade, or margin status, were independently associated with worse DSS. A significant number of recurrences and disease-related deaths were noted beyond 5 years. Although larger and higher grade, retroperitoneal and uterine LMS share similar survival and recurrence patterns with their trunk and extremity counterparts. LMS of various anatomic sites may not represent distinct disease processes based on clinical outcomes. The presence of metastatic disease remains the most important prognostic factor for LMS. © 2015 Wiley Periodicals, Inc.

  19. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform

    DTIC Science & Technology

    2018-01-01

ARL-TR-8270, January 2018, US Army Research Laboratory: An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform, by Kwok F Tom. Reporting period: 1 October 2016-30 September 2017.

  20. Estimation of glacier surface motion by robust phase correlation and point like features of SAR intensity images

    NASA Astrophysics Data System (ADS)

    Fang, Li; Xu, Yusheng; Yao, Wei; Stilla, Uwe

    2016-11-01

For monitoring glacier surface motion in polar and alpine areas, radar remote sensing is becoming a popular technology owing to its independence of weather conditions and sunlight. In this paper we propose a method for glacier surface motion monitoring using phase correlation (PC) based on point-like features (PLF). We carry out experiments using repeat-pass TerraSAR X-band (TSX) and Sentinel-1 C-band (S1C) intensity images of the Taku glacier in the Juneau icefield, located in southeast Alaska. The intensity imagery is first filtered by an improved adaptive refined Lee filter, while the effect of topographic relief is removed via the SRTM-X DEM. Then, a robust phase correlation algorithm based on singular value decomposition (SVD) and an improved random sample consensus (RANSAC) algorithm is applied to sequential PLF pairs generated by correlation with a 2D sinc-function template. The approach is validated with both simulated SAR data and real SAR data from the two satellites. The results from these three test datasets confirm the superiority of the proposed approach over standard correlation-like methods. With the proposed adaptive refined Lee filter, we achieve a good balance between noise suppression and the preservation of local image textures. The presented phase correlation algorithm achieves an accuracy of better than 0.25 pixels in matching tests on simulated SAR intensity images with strong noise. Quantitative 3D motions and velocities of the investigated Taku glacier during a repeat-pass period are obtained, which allows a comprehensive and reliable analysis of large-scale glacier surface dynamics.
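The core of phase correlation, independent of the paper's SVD and RANSAC refinements, is recovering a translation from the normalized cross-power spectrum, which inverse-transforms to a delta function at the shift. A minimal integer-shift sketch on synthetic data (the image and shift values are illustrative) is:

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer translation between two images by phase
    correlation: keep only the phase of the cross-power spectrum and
    locate the resulting correlation peak."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # normalize: phase only
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                # map wrap-around peaks
        dy -= a.shape[0]                    # to negative shifts
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(3)
img = rng.normal(size=(64, 64))
shifted = np.roll(np.roll(img, 5, axis=0), -3, axis=1)
dy, dx = phase_correlate(shifted, img)      # recovers (5, -3)
```

Sub-pixel accuracy such as the 0.25 pixels reported above comes from refining the peak location (e.g. by fitting around the maximum), which this integer-shift sketch omits.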

  1. Using the Leitz LMS 2000 for monitoring and improvement of an e-beam

    NASA Astrophysics Data System (ADS)

    Blaesing-Bangert, Carola; Roeth, Klaus-Dieter; Ogawa, Yoichi

    1994-11-01

Kaizen (continuous improvement) is a philosophy lived in Japan that is also becoming more and more important in Western companies. To implement this philosophy in the semiconductor industry, a high-performance metrology tool is essential for determining the status of production quality periodically. An important prerequisite for statistical process control is the high stability of the metrology tool over several months or years; the tool-induced shift should be as small as possible. The pattern placement metrology tool Leitz LMS 2000 has been used in a major European mask house for several years to qualify masks within the tightest specifications and to monitor the MEBES III and its cassettes. The mask shop's internal specification for the long-term repeatability of the pattern placement metrology tool is 19 nm, instead of the 42 nm specified by the supplier of the tool. The process capability of the LMS 2000 over 18 months is represented by an average cpk value of 2.8 for orthogonality, 5.2 for x-scaling, and 3.0 for y-scaling. The process capability of the MEBES III and its cassettes was improved over the past years; for instance, 100% of the masks produced with a process tolerance of +/- 200 nm are now within this limit.

  2. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    PubMed

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

To improve the accuracy of ultrasonic phased array focusing time delay, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and propose an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derive the general formula for an arbitrary-multiple interpolation CIC filter parallel algorithm and establish an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. The improved algorithmic structure reduces additions by 12.5% and multiplications by 29.2% while keeping computation fast. To address the known shortcomings of the CIC filter, we add a compensation filter: the compensated CIC filter's pass band is flatter, its transition band steeper, and its stop-band attenuation greater. Finally, we verified the feasibility of this algorithm on a Field-Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load, and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
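A single-channel CIC interpolator, without the paper's parallel decomposition or compensation filter, consists of N comb stages at the low rate, zero-stuffing by the interpolation factor R, and N integrator stages at the high rate. A minimal sketch (structure and names illustrative) is:

```python
import numpy as np

def cic_interpolate(x, R=8, N=1):
    """N-stage CIC interpolator sketch: comb stages at the low rate,
    zero-stuffing by R, then integrator stages at the high rate
    (no compensation filter here)."""
    y = np.asarray(x, dtype=float)
    for _ in range(N):                      # comb: y[n] = x[n] - x[n-1]
        y = y - np.concatenate(([0.0], y[:-1]))
    up = np.zeros(len(y) * R)
    up[::R] = y                             # zero-stuff by R
    for _ in range(N):                      # integrator: running sum
        up = np.cumsum(up)
    return up

impulse = np.zeros(16)
impulse[0] = 1.0
resp = cic_interpolate(impulse, R=8, N=1)   # rect of R ones: zero-order hold
```

For N = 1 the impulse response is a length-R rectangle (a zero-order hold); higher N sharpens the stop-band rejection at the cost of pass-band droop, which is what the compensation filter in the paper corrects.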

  3. Axial Cone-Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering.

    PubMed

    Tang, Shaojie; Tang, Xiangyang

    2016-09-01

    The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, are originally derived for exact helical reconstruction from cone-beam (CB) scan data and axial reconstruction from fan beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but induce severe artifacts in images located away from the central plane, determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. The solution is an integration of three-dimensional (3-D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer simulated Forbild head and thoracic phantoms that are rigorous in inspecting the reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate the performance of the proposed algorithm. Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts existing in the images reconstructed by the 3-D weighted axial CB-BPF/DBPF algorithm located at off-central planes. Integrated with orthogonal butterfly filtering, the 3-D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3-D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. The proposed 3-D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications.

  4. Study on Underwater Image Denoising Algorithm Based on Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Jian, Sun; Wen, Wang

    2017-02-01

This paper analyzes the application of MATLAB to underwater image processing. The transmission characteristics of the underwater laser light signal and the kinds of underwater noise are described, and the common noise suppression algorithms (Wiener filtering, median filtering, and average filtering) are reviewed. The advantages and disadvantages of each algorithm with respect to image sharpness and edge preservation are then compared. A hybrid filtering algorithm based on the wavelet transform is proposed, which can be used for color image denoising. Finally, the PSNR and NMSE of each algorithm are given to compare their denoising performance.
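The wavelet-denoising idea can be sketched with a one-level Haar transform and soft thresholding of the detail coefficients. This is a generic illustration (the signal, threshold, and noise level are made up), not the paper's hybrid MATLAB filter.

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet soft-threshold denoising: split into
    approximation/detail coefficients, shrink the details, and
    reconstruct."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # approximation coeffs
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # detail coeffs
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0)  # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)          # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(4)
clean = np.repeat(np.array([0.0, 4.0, 1.0, 3.0]), 64)  # piecewise-constant signal
noisy = clean + 0.5 * rng.normal(size=clean.size)
den = haar_denoise(noisy, thresh=0.7)
mse_noisy = np.mean((noisy - clean) ** 2)
mse_den = np.mean((den - clean) ** 2)       # lower after thresholding
```

Practical schemes use deeper decompositions and smoother wavelets (e.g. Daubechies), but the shrink-the-details principle is the same.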

  5. Demosaicking for full motion video 9-band SWIR sensor

    NASA Astrophysics Data System (ADS)

    Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.

    2014-05-01

Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their abilities to autonomously detect targets and classify materials. Typically, spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3×3 spectral filters on a staring focal plane array (FPA). In this paper we present imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main task of image processing in this case is demosaicking of the spectral bands, i.e., reconstructing full spectral images at the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either certain relationships between the visible colors, which are not valid for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest, which does not conform to our spectral filter pattern. We discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information, and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral spatially multiplexed images.

  6. Effects of magnetometer calibration and maneuvers on accuracies of magnetometer-only attitude-and-rate determination

    NASA Technical Reports Server (NTRS)

    Challa, M.; Natanson, G.

    1998-01-01

Two different algorithms, a deterministic magnetic-field-only algorithm and a Kalman filter for gyroless spacecraft, are used to estimate the attitude and rates of the Rossi X-Ray Timing Explorer (RXTE) using only measurements from a three-axis magnetometer. The performance of these algorithms is examined using in-flight data from various scenarios. In particular, significant enhancements in accuracy are observed when the telemetered magnetometer data are accurately calibrated using a recently developed calibration algorithm. Interesting features observed in these studies of the inertial-pointing RXTE include a remarkable sensitivity of the filter to the numerical values of the noise parameters and relatively long convergence time spans. By analogy, the accuracy of the deterministic scheme is noticeably lower as a result of reduced rates of change of the body-fixed geomagnetic field. Preliminary results show per-axis filter attitude accuracies between 0.1 and 0.5 deg and rate accuracies between 0.001 and 0.005 deg/sec, whereas the deterministic method needs more sophisticated techniques for smoothing the time derivatives of the measured geomagnetic field to clearly distinguish both attitude and rate solutions from the numerical noise. Also included is a new theoretical development in the deterministic algorithm: the transformation of a transcendental equation in the original theory into an 8th-order polynomial equation. It is shown that this 8th-order polynomial reduces to quadratic equations in the two limiting cases (infinitely high wheel momentum and constant rates) discussed in previous publications.

  7. A splice donor mutation in NAA10 results in the dysregulation of the retinoic acid signaling pathway and causes Lenz microphthalmia syndrome

    PubMed Central

    Esmailpour, Taraneh; Riazifar, Hamidreza; Liu, Linan; Donkervoort, Sandra; Huang, Vincent H; Madaan, Shreshtha; Shoucri, Bassem M; Busch, Anke; Wu, Jie; Towbin, Alexander; Chadwick, Robert B; Sequeira, Adolfo; Vawter, Marquis P; Sun, Guoli; Johnston, Jennifer J; Biesecker, Leslie G; Kawaguchi, Riki; Sun, Hui; Kimonis, Virginia; Huang, Taosheng

    2014-01-01

    Introduction Lenz microphthalmia syndrome (LMS) is a genetically heterogeneous X-linked disorder characterised by microphthalmia/anophthalmia, skeletal abnormalities, genitourinary malformations, and anomalies of the digits, ears, and teeth. Intellectual disability and seizure disorders are seen in about 60% of affected males. To date, no gene has been identified for LMS in the microphthalmia syndrome 1 locus (MCOPS1). In this study, we aim to find the disease-causing gene for this condition. Methods and results Using exome sequencing in a family with three affected brothers, we identified a mutation in the intron 7 splice donor site (c.471+2T→A) of the N-acetyltransferase NAA10 gene. NAA10 has been previously shown to be mutated in patients with Ogden syndrome, which is clinically distinct from LMS. Linkage studies for this family mapped the disease locus to Xq27-Xq28, which was consistent with the locus of NAA10. The mutation co-segregated with the phenotype and cDNA analysis showed aberrant transcripts. Patient fibroblasts lacked expression of full length NAA10 protein and displayed cell proliferation defects. Expression array studies showed significant dysregulation of genes associated with genetic forms of anophthalmia such as BMP4, STRA6, and downstream targets of BCOR and the canonical WNT pathway. In particular, STRA6 is a retinol binding protein receptor that mediates cellular uptake of retinol/vitamin A and plays a major role in regulating the retinoic acid signalling pathway. A retinol uptake assay showed that retinol uptake was decreased in patient cells. Conclusions We conclude that the NAA10 mutation is the cause of LMS in this family, likely through the dysregulation of the retinoic acid signalling pathway. PMID:24431331

  8. Embedding a learning management system into an undergraduate medical informatics course in Saudi Arabia: lessons learned.

    PubMed

    Zakaria, Nasriah; Jamal, Amr; Bisht, Shekhar; Koppel, Cristina

    2013-01-01

    Public universities in Saudi Arabia today are making substantial investments in e-learning as part of their educational system, especially in the implementation of learning management systems (LMS). To our knowledge, this is the first study conducted in Saudi Arabia exploring medical students' experience with an LMS, particularly as part of a medical informatics course. This study investigates students' use of various features of the LMS embedded in a recently implemented medical informatics course. A mixed methodology approach was employed. Survey questionnaires were distributed to all third year medical informatics students at the end of the course. In addition, two focus group sessions were conducted with twelve students. A thematic analysis of the focus group was performed. A total of 265 third year medical student surveys (167/265, 63% male and 98/265, 37% female) were completed and analyzed. Overall, 50.6% (134/265) of the students agreed that the course was well planned and up-to-date, had clearly stated objectives and clear evaluation methods, had appropriate course assignments, and that the LMS offered easy navigation. Most of the students rated the course as good/fair overall. In general, females were 10.4% more likely than males to prefer the LMS (odds ratio [OR] 1.104, 95% CI 0.86-1.42). Survey results showed that students' use of LMS tools increased after taking the course compared to before taking the course. The full model containing all items was statistically significant (χ²(25)=69.52, P<.001, n=243), indicating that the model was able to distinguish between students who had positive attitudes towards LMS and those who did not. The focus group, however, revealed that the students used social networking for general use rather than learning purposes, but they were using other Internet resources and mobile devices for learning. 
Male students showed a higher preference for using technology in general to enhance learning activities. Overall, medical student attitudes towards the LMS were generally positive. Students also wanted a reminder and notification tool to help them stay updated with course events. Interestingly, a subset of students had been running a parallel LMS of their own that has features worth exploring and could be integrated with an official LMS in the future. To our knowledge, this was the first time that an LMS was used in a medical informatics course. Students showed interest in adapting various LMS tools to enhance their learning and gained more knowledge through familiarity with the tool. Researching an official LMS also revealed the existence of a parallel student-created LMS. This could allow teacher-led and student-led platforms to be integrated in the future for an enhanced student-centered experience.

  9. An Efficient Conflict Detection Algorithm for Packet Filters

    NASA Astrophysics Data System (ADS)

    Lee, Chun-Liang; Lin, Guan-Yu; Chen, Yaw-Chung

    Packet classification is essential for supporting advanced network services such as firewalls, quality-of-service (QoS), virtual private networks (VPN), and policy-based routing. The rules that routers use to classify packets are called packet filters. If two or more filters overlap, a conflict occurs and leads to ambiguity in packet classification. This study proposes an algorithm that can efficiently detect and resolve filter conflicts using a tuple-based search. The time complexity of the proposed algorithm is O(nW+s), and the space complexity is O(nW), where n is the number of filters, W is the number of bits in a header field, and s is the number of conflicts. This study uses the synthetic filter databases generated by ClassBench to evaluate the proposed algorithm. Simulation results show that the proposed algorithm can achieve better performance than existing conflict detection algorithms both in time and space, particularly for databases with large numbers of conflicts.
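
    As a concrete illustration of the conflict condition the algorithm detects, the pairwise check below operates on filters whose fields are binary prefix strings. This is a minimal sketch of the conflict definition only, not the paper's tuple-based detection algorithm, and the two-field representation is an assumption made for brevity:

```python
def prefix_overlap(p, q):
    """Two binary prefixes match a common packet iff one is a prefix of the other."""
    return p.startswith(q) or q.startswith(p)

def conflicts(f, g):
    """f and g are tuples of per-field binary prefixes (e.g. source and
    destination address prefixes, '' matching everything). They conflict if
    their match sets intersect but neither filter contains the other, so the
    classification of packets in the intersection is ambiguous."""
    if not all(prefix_overlap(a, b) for a, b in zip(f, g)):
        return False  # disjoint in at least one field: no overlap at all
    f_in_g = all(a.startswith(b) for a, b in zip(f, g))  # f more specific everywhere
    g_in_f = all(b.startswith(a) for a, b in zip(f, g))  # g more specific everywhere
    return not (f_in_g or g_in_f)
```

    For example, ('0', '') and ('', '1') conflict: both match packets whose source starts with 0 and destination starts with 1, yet neither rule is uniformly more specific than the other.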

  10. Automatic x-ray image contrast enhancement based on parameter auto-optimization.

    PubMed

    Qiu, Jianfeng; Harold Li, H; Zhang, Tiezhi; Ma, Fangfang; Yang, Deshan

    2017-11-01

    Insufficient image contrast associated with radiation therapy daily setup x-ray images could negatively affect accurate patient treatment setup. We developed a method to perform automatic and user-independent contrast enhancement on 2D kilovoltage (kV) and megavoltage (MV) x-ray images. The goal was to provide tissue contrast optimized for each treatment site in order to support accurate patient daily treatment setup and the subsequent offline review. The proposed method processes the 2D x-ray images with an optimized image processing filter chain, which consists of a noise reduction filter and a high-pass filter followed by a contrast limited adaptive histogram equalization (CLAHE) filter. The most important innovation is to optimize the image processing parameters automatically to determine the required image contrast settings per disease site and imaging modality. Three major parameters controlling the image processing chain, i.e., the Gaussian smoothing weighting factor for the high-pass filter, the block size, and the clip limiting parameter for the CLAHE filter, were determined automatically using an interior-point constrained optimization algorithm. Fifty-two kV and MV x-ray images were included in this study. The results were manually evaluated and ranked with scores from 1 (worst, unacceptable) to 5 (significantly better than adequate and visually praiseworthy) by physicians and physicists. The average scores for the images processed by the proposed method, the CLAHE, and the best window-level adjustment were 3.92, 2.83, and 2.27, respectively. The percentages of images that received a score of 5 were 48%, 29%, and 18%, respectively. The proposed method is able to outperform the standard image contrast adjustment procedures that are currently used in the commercial clinical systems. 
When the proposed method is implemented in the clinical systems as an automatic image processing filter, it could be useful for allowing quicker and potentially more accurate treatment setup and facilitating the subsequent offline review and verification. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  11. Molecular analyses of 6 different types of uterine smooth muscle tumors: Emphasis in atypical leiomyoma.

    PubMed

    Zhang, Qing; Ubago, Julianne; Li, Li; Guo, Haiyang; Liu, Yugang; Qiang, Wenan; Kim, J Julie; Kong, Beihua; Wei, Jian-Jun

    2014-10-15

    Uterine smooth muscle tumors (USMTs) constitute a group of histologically, genetically, and clinically heterogeneous tumors that include at least 6 major histologically defined tumor types: leiomyoma (ULM), mitotically active leiomyoma (MALM), cellular leiomyoma (CLM), atypical leiomyoma (ALM), smooth muscle tumor of uncertain malignant potential (STUMP), and leiomyosarcoma (LMS). Apart from ULM and LMS, the nature of these variants is not well defined. A total of 167 cases of different USMT variants were collected, reviewed, and diagnostically confirmed based on the World Health Organization and Stanford schemes. These included 38 cases of LMS, 18 cases of STUMP, 42 cases of ALM, 22 cases of CLM, 7 cases of MALM, and 40 cases of ULM. Molecular analysis included selected microRNAs (miRNAs), oncogenes, and tumor suppressors that are highly relevant to USMT. Overall, 49% (17/35) of LMS cases and 7% (1/14) of STUMP cases died due to their USMT, but no deaths were attributed to ALM. miRNA profiling revealed that ALM and LMS shared similar miRNA signatures. P53 mutations and PTEN deletions were significantly more frequent in LMS, ALM, and STUMP compared with other USMT variants (P < .01). In contrast, MED12 mutations were extremely common in ULM and MALM (> 74%) but were significantly less common (< 15%) in CLM, ALM, STUMP, and LMS (P < .01). The six types of USMT have different gene mutation fingerprints. ALM shares many molecular alterations with LMS. Our findings suggest that ALM may be a precursor lesion of LMS or may undergo similar genetic changes during its early stage. © 2014 American Cancer Society.

  12. Segmenting texts from outdoor images taken by mobile phones using color features

    NASA Astrophysics Data System (ADS)

    Liu, Zongyi; Zhou, Hanning

    2011-01-01

    Recognizing text from images taken by mobile phones with low resolution has wide applications. It has been shown that good image binarization can substantially improve the performance of OCR engines. In this paper, we present a framework to segment text from outdoor images taken by mobile phones using color features. The framework consists of three steps: (i) initial processing, including image enhancement, binarization and noise filtering, where we binarize the input images in each RGB channel and apply component-level noise filtering; (ii) grouping components into blocks using color features, where we compute component similarities by dynamically adjusting the weights of the RGB channels and merge groups hierarchically; and (iii) block selection, where we use run-length features and choose the Support Vector Machine (SVM) as the classifier. We tested the algorithm using 13 outdoor images taken by an old-style LG-64693 mobile phone with 640x480 resolution. We compared the segmentation results with Tsar's algorithm, a state-of-the-art camera text detection algorithm, and show that our algorithm is more robust, particularly in terms of false alarm rates. In addition, we also evaluated the impact of our algorithm on Abbyy's FineReader, one of the most popular commercial OCR engines in the market.

  13. A comparison of two adaptive algorithms for the control of active engine mounts

    NASA Astrophysics Data System (ADS)

    Hillis, A. J.; Harrison, A. J. L.; Stoten, D. P.

    2005-08-01

    This paper describes work conducted in order to control automotive active engine mounts, consisting of a conventional passive mount and an internal electromagnetic actuator. Active engine mounts seek to cancel the oscillatory forces generated by the rotation of out-of-balance masses within the engine. The actuator generates a force dependent on a control signal from an algorithm implemented with a real-time DSP. The filtered-x least-mean-square (FXLMS) adaptive filter is used as a benchmark for comparison with a new implementation of the error-driven minimal controller synthesis (Er-MCSI) adaptive controller. Both algorithms are applied to an active mount fitted to a saloon car equipped with a four-cylinder turbo-diesel engine, and have no a priori knowledge of the system dynamics. The steady-state and transient performance of the two algorithms are compared and the relative merits of the two approaches are discussed. The Er-MCSI strategy offers significant computational advantages as it requires no cancellation path modelling. The Er-MCSI controller is found to perform in a fashion similar to the FXLMS filter—typically reducing chassis vibration by 50-90% under normal driving conditions.
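
    The FXLMS benchmark described above can be sketched as a single-channel simulation. This is an illustrative textbook form, not the paper's DSP implementation: the secondary (cancellation) path is assumed known exactly and modeled as a short FIR filter, and all names and parameter values are placeholders (n_taps must be at least the secondary-path length in this sketch):

```python
def fxlms(reference, disturbance, secondary_path, n_taps=8, mu=0.01):
    """Single-channel filtered-x LMS sketch.

    reference      -- reference-signal samples x[n] correlated with the disturbance
    disturbance    -- disturbance samples d[n] at the error sensor
    secondary_path -- FIR model s[k] of the actuator-to-sensor path (assumed known)
    Returns the error-signal history e[n]."""
    w = [0.0] * n_taps                    # adaptive controller weights
    x_buf = [0.0] * n_taps                # recent reference samples
    fx_buf = [0.0] * n_taps               # recent filtered-reference samples
    s_buf = [0.0] * len(secondary_path)   # controller output fed to secondary path
    errors = []
    for n in range(len(reference)):
        x_buf = [reference[n]] + x_buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, x_buf))       # controller output
        s_buf = [y] + s_buf[:-1]
        y_sensor = sum(si * yi for si, yi in zip(secondary_path, s_buf))
        e = disturbance[n] + y_sensor                      # residual at the sensor
        errors.append(e)
        # "filtered x": reference passed through the secondary-path model
        fx = sum(si * xi for si, xi in zip(secondary_path, x_buf))
        fx_buf = [fx] + fx_buf[:-1]
        w = [wi - mu * e * fxi for wi, fxi in zip(w, fx_buf)]  # LMS update
    return errors
```

    With a tonal disturbance and an accurate secondary-path model, the residual e[n] decays toward zero as the weights converge, which is the behavior the engine-mount comparison measures.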

  14. Evaluation of hybrids algorithms for mass detection in digitalized mammograms

    NASA Astrophysics Data System (ADS)

    Cordero, José; Garzón Reyes, Johnson

    2011-01-01

    Breast cancer remains a significant public health problem; early detection of lesions can increase the chances of successful medical treatment. Mammography is an imaging modality effective for early diagnosis of abnormalities, in which an image of the mammary gland is obtained with low-dose X-rays. It can detect a tumor or circumscribed mass two to three years before it becomes clinically palpable, and it is so far the only method shown to reduce mortality from breast cancer. In this paper, three hybrid algorithms for circumscribed mass detection in digitized mammograms are evaluated. The first stage corresponds to a review of the enhancement and segmentation techniques used in the processing of mammographic images. Shape filtering was then applied to the resulting regions. The surviving regions were processed by means of a Bayesian filter, and the feature vector for the classifier was constructed from a few measurements. The implemented algorithms were then evaluated using ROC curves on a test set of 40 images, 20 normal and 20 with circumscribed lesions. Finally, the advantages and disadvantages of each algorithm in correctly detecting a lesion are discussed.

  15. X-ray computed tomography of wood-adhesive bondlines: Attenuation and phase-contrast effects

    DOE PAGES

    Paris, Jesse L.; Kamke, Frederick A.; Xiao, Xianghui

    2015-07-29

    Microscale X-ray computed tomography (XCT) is discussed as a technique for identifying 3D adhesive distribution in wood-adhesive bondlines. Visualization and material segmentation of the adhesives from the surrounding cellular structures require sufficient gray-scale contrast in the reconstructed XCT data. Commercial wood-adhesive polymers have similar chemical characteristics and density to wood cell wall polymers and therefore do not provide good XCT attenuation contrast in their native form. Here, three different adhesive types, namely phenol formaldehyde, polymeric diphenylmethane diisocyanate, and a hybrid polyvinyl acetate, are tagged with iodine such that they yield sufficient X-ray attenuation contrast. However, phase-contrast effects at material edges complicate image quality and segmentation in XCT data reconstructed with conventional filtered backprojection absorption contrast algorithms. A quantitative phase retrieval algorithm, which isolates and removes the phase-contrast effect, was demonstrated. The paper discusses and illustrates the balance between material X-ray attenuation and phase-contrast effects in all quantitative XCT analyses of wood-adhesive bondlines.

  17. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    PubMed

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

    New x-ray phase contrast imaging techniques without using synchrotron radiation confront a common problem from the negative effects of finite source size and limited spatial resolution. These negative effects swamp the fine phase contrast fringes and make them almost undetectable. In order to alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to the simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate for phase contrast image restoration among the above-mentioned methods; it can effectively restore the lost phase contrast fringe information while reducing the noise amplified during Fourier regularization.
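
    Of the three techniques compared, Wiener filtering is compact enough to sketch. The 1-D version below implements the standard frequency-domain form Xhat(f) = Y(f)H*(f) / (|H(f)|^2 + K), using a naive DFT so it stays self-contained. The study itself works on 2-D images, so this is purely illustrative, and the signal, kernel, and K value are placeholders:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), fine for a sketch)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part for real-valued signals."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def wiener_deconvolve(blurred, kernel, K=1e-3):
    """1-D Wiener filtering: Xhat(f) = Y(f) H*(f) / (|H(f)|^2 + K).

    K acts as the noise-to-signal regularization constant; larger K
    suppresses noise amplification at frequencies where |H| is small."""
    n = len(blurred)
    h = list(kernel) + [0.0] * (n - len(kernel))  # zero-pad kernel to signal length
    Y, H = dft(blurred), dft(h)
    Xhat = [y * hh.conjugate() / (abs(hh) ** 2 + K) for y, hh in zip(Y, H)]
    return idft(Xhat)
```

    With a small K and a kernel without spectral nulls, a blurred impulse is recovered almost exactly; increasing K trades restoration sharpness for noise suppression, which is the balance the comparison above evaluates.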

  18. New algorithm for detecting smaller retinal blood vessels in fundus images

    NASA Astrophysics Data System (ADS)

    LeAnder, Robert; Bidari, Praveen I.; Mohammed, Tauseef A.; Das, Moumita; Umbaugh, Scott E.

    2010-03-01

    About 4.1 million Americans suffer from diabetic retinopathy. To help automatically diagnose various stages of the disease, a new blood-vessel-segmentation algorithm based on spatial high-pass filtering was developed to automatically segment blood vessels, including the smaller ones, with low noise. Methods: Image database: Forty 584 x 565-pixel images were collected from the DRIVE image database. Preprocessing: Green-band extraction was used to obtain better contrast, which facilitated better visualization of retinal blood vessels. A spatial high-pass filter of mask-size 11 was applied. A histogram stretch was performed to enhance contrast. A median filter was applied to mitigate noise. At this point, the gray-scale image was converted to a binary image using a binary thresholding operation. Then, a NOT operation was performed by gray-level value inversion between 0 and 255. Postprocessing: The resulting image was AND-ed with its corresponding ring mask to remove the outer-ring (lens-edge) artifact. At this point, the above algorithm steps had extracted most of the major and minor vessels, with some intersections and bifurcations missing. Vessel segments were reintegrated using the Hough transform. Results: After applying the Hough transform, both the average peak SNR and the RMS error improved by 10%. Pratt's Figure of Merit (PFM) was decreased by 6%. Those averages were better than [1] by 10-30%. Conclusions: The new algorithm successfully preserved the details of smaller blood vessels and should prove successful as a segmentation step for automatically identifying diseases that affect retinal blood vessels.
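
    The high-pass-and-threshold core of the pipeline can be sketched on a plain 2-D array. This reconstruction is illustrative only: it uses a 3x3 Laplacian-style mask instead of the paper's mask-size-11 filter, and it omits the green-band extraction, histogram stretch, median filtering, inversion, ring masking, and Hough steps:

```python
def highpass_threshold(image, thresh):
    """3x3 high-pass (Laplacian-style) filtering followed by binary
    thresholding; border pixels are left at 0 for simplicity."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = image[y][x]
            # sum of the 8 neighbours around (y, x)
            neighbours = sum(image[y + dy][x + dx]
                             for dy in (-1, 0, 1)
                             for dx in (-1, 0, 1)) - center
            response = 8 * center - neighbours  # high-pass response
            out[y][x] = 255 if response > thresh else 0
    return out
```

    A thin bright ridge (a vessel-like structure) produces a strong positive high-pass response along its centerline, which is what lets the thresholding step isolate vessels from the smoother background.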

  19. Efficient Scalable Median Filtering Using Histogram-Based Operations.

    PubMed

    Green, Oded

    2018-05-01

    Median filtering is a smoothing technique for noise removal in images. While there are various implementations of median filtering for a single-core CPU, there are few implementations for accelerators and multi-core systems. Many parallel implementations of median filtering use a sorting algorithm for rearranging the values within a filtering window and taking the median of the sorted values. While using sorting algorithms allows for simple parallel implementations, the cost of the sorting becomes prohibitive as the filtering windows grow. This makes such algorithms, sequential and parallel alike, inefficient. In this work, we introduce the first software parallel median filtering that is non-sorting-based. The new algorithm uses efficient histogram-based operations. These reduce the computational requirements of the new algorithm while also accessing the image fewer times. We show an implementation of our algorithm for both the CPU and NVIDIA's CUDA supported graphics processing unit (GPU). The new algorithm is compared with several other leading CPU and GPU implementations. The CPU implementation has near-perfect linear scaling on a quad-core system. The GPU implementation is several orders of magnitude faster than the other GPU implementations for mid-size median filters. For small kernels, comparison-based approaches are preferable as fewer operations are required. Lastly, the new algorithm is open-source and can be found in the OpenCV library.
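
    The histogram idea is easiest to see in one dimension. The sketch below is a minimal serial version (not the paper's parallel 2-D GPU algorithm): it keeps a 256-bin histogram of the current window of 8-bit values, updates the histogram in O(1) per step, and finds the median by walking the bins instead of sorting the window:

```python
def median_filter_1d(values, window):
    """Sliding-window median of 8-bit values via a running 256-bin histogram."""
    assert window % 2 == 1
    half = window // 2
    # pad by edge replication so output length == input length
    padded = [values[0]] * half + list(values) + [values[-1]] * half
    hist = [0] * 256
    for v in padded[:window]:
        hist[v] += 1
    target = half + 1  # rank of the median within the window
    out = []
    for i in range(len(values)):
        count = 0
        for m in range(256):   # walk the histogram to the median rank
            count += hist[m]
            if count >= target:
                break
        out.append(m)
        if i + 1 < len(values):
            hist[padded[i]] -= 1           # value leaving the window
            hist[padded[i + window]] += 1  # value entering the window
    return out
```

    Per output sample the cost is a constant 256-bin walk plus two histogram updates, independent of the window size, which is why histogram-based filtering wins as windows grow while sort-based filtering pays O(w log w) per sample.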

  20. Axial Cone Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering

    PubMed Central

    Tang, Shaojie; Tang, Xiangyang

    2016-01-01

    Goal The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, are originally derived for exact helical reconstruction from cone beam (CB) scan data and axial reconstruction from fan beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but induce severe artifacts in images located away from the central plane determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. Methods The solution is an integration of three-dimensional (3D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer simulated Forbild head and thoracic phantoms that are rigorous in inspecting reconstruction accuracy and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate performance of the proposed algorithm. Results Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts existing in the images reconstructed by the 3D weighted axial CB-BPF/DBPF algorithm located at off-central planes. Conclusion Integrated with orthogonal butterfly filtering, the 3D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. Significance The proposed 3D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications. PMID:26660512

  1. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    PubMed Central

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-01-01

    In order to improve the time delay accuracy of ultrasonic phased array focusing, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and proposed an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarized the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, we also compensated it: the compensated CIC filter's pass band is flatter, its transition band is steeper, and its stop band attenuation is increased. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load, and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection. PMID:29023385
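
    The building blocks of an interpolation CIC filter (comb stages at the low rate, zero-stuffing by the interpolation factor, integrator stages at the high rate) can be sketched serially. This is the textbook single-channel form for illustration, not the paper's parallel 8× FPGA structure:

```python
def cic_interpolate(samples, R, N=1):
    """N-stage CIC interpolator with differential delay M=1:
    comb (differentiator) stages at the input rate, zero-stuffing by R,
    then integrator stages at the output rate. Overall DC gain is R**(N-1)."""
    x = list(samples)
    for _ in range(N):            # comb stages at the low rate
        prev, combed = 0, []
        for v in x:
            combed.append(v - prev)
            prev = v
        x = combed
    up = []                       # zero-stuff: insert R-1 zeros after each sample
    for v in x:
        up.append(v)
        up.extend([0] * (R - 1))
    for _ in range(N):            # integrator stages at the high rate
        acc, integ = 0, []
        for v in up:
            acc += v
            integ.append(acc)
        up = integ
    return up
```

    With N=1 the structure reduces to a zero-order hold, and with N=2 a ramp input is linearly interpolated (scaled by the gain R), which shows why multiplier-free CIC stages are attractive for generating fine focusing delays in hardware.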

  2. Solving Assembly Sequence Planning using Angle Modulated Simulated Kalman Filter

    NASA Astrophysics Data System (ADS)

    Mustapa, Ainizar; Yusof, Zulkifli Md.; Adam, Asrul; Muhammad, Badaruddin; Ibrahim, Zuwairie

    2018-03-01

    This paper presents an implementation of the Simulated Kalman Filter (SKF) algorithm for optimizing an Assembly Sequence Planning (ASP) problem. The SKF search strategy consists of three simple steps: predict, measure, and estimate. The main objective of ASP is to determine the sequence of component installation that shortens assembly time or saves assembly costs. Initially, a permutation sequence is generated to represent each agent. Each agent is then subjected to a precedence matrix constraint to produce a feasible assembly sequence. Next, the Angle Modulated SKF (AMSKF) is proposed for solving the ASP problem. The main idea of the angle-modulated approach to solving combinatorial optimization problems is to use a function, g(x), to create a continuous signal. The performance of the proposed AMSKF is compared against previous work on ASP applying BGSA, BPSO, and MSPSO. Using a case study of ASP, the results show that AMSKF outperformed all the other algorithms in obtaining the best solution.
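
    The angle-modulation step can be made concrete with the four-parameter generating function used in the angle-modulated PSO literature; assuming AMSKF uses the same g(x) (the abstract names g(x) but not its exact form), each agent's four continuous values map to a bit string as follows:

```python
import math

def angle_modulate(params, n_bits):
    """Map four continuous parameters (a, b, c, d) to an n-bit string by
    sampling g(x) = sin(2*pi*(x-a)*b*cos(2*pi*(x-a)*c)) + d
    at x = 0, 1, ..., n_bits-1; bit i is 1 wherever g(i) > 0."""
    a, b, c, d = params
    bits = []
    for x in range(n_bits):
        g = math.sin(2 * math.pi * (x - a) * b
                     * math.cos(2 * math.pi * (x - a) * c)) + d
        bits.append(1 if g > 0 else 0)
    return bits
```

    The optimizer then searches the small continuous space (a, b, c, d) instead of the binary space directly; d shifts the whole waveform, so it controls the overall density of 1s in the generated string.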

  3. A new double digestion ligation mediated suppression PCR method for simultaneous bacteria DNA-typing and confirmation of species: an Acinetobacter sp. model.

    PubMed

    Stojowska, Karolina; Krawczyk, Beata

    2014-01-01

    We have designed a new ddLMS PCR (double digestion Ligation Mediated Suppression PCR) method based on restriction site polymorphism upstream from the specific target sequence for the simultaneous identification and differentiation of bacterial strains. The ddLMS PCR combines a simple PCR used for species or genus identification and the LM PCR strategy for strain differentiation. The bacterial identification is confirmed in the form of the PCR product(s), while the length of the PCR product makes it possible to differentiate between bacterial strains. If there is a single copy of the target sequence within genomic DNA, one specific PCR product is created (simplex ddLMS PCR), whereas for multiple copies of the gene, fingerprinting patterns can be obtained (multiplex ddLMS PCR). The described ddLMS PCR method is designed for rapid and specific strain differentiation in medical and microbiological studies. In comparison to other LM PCR methods, it has substantial advantages: it enables specific species DNA-typing without the need for pure bacterial culture selection, is not sensitive to contamination with other cells or genomic DNA, and gives unequivocal "band-based" results, which are easy to interpret. The utility of ddLMS PCR was shown for the Acinetobacter calcoaceticus-baumannii (Acb) complex, genetically closely related and phenotypically similar species and important nosocomial pathogens, for which there are currently no recommended methods for screening, typing and identification. In this article two models are proposed: 3' recA-ddLMS PCR-MaeII/RsaI for Acb complex interspecific typing and 5' rrn-ddLMS PCR-HindIII/ApaI for Acinetobacter baumannii intraspecific typing. ddLMS PCR allows not only for DNA-typing but also for confirmation of species in one reaction. Also, practical guidelines for designing a diagnostic test based on ddLMS PCR for genotyping different species of bacteria are provided.

  4. Facilitating job retention for chronically ill employees: perspectives of line managers and human resource managers

    PubMed Central

    2011-01-01

    Background Chronic diseases are a leading contributor to work disability and job loss in Europe. Recent EU policies aim to improve job retention among chronically ill employees. Disability and occupational health researchers argue that this requires a coordinated and pro-active approach at the workplace by occupational health professionals, line managers (LMs) and human resource managers (HRM). Little is known about the perspectives of LMs and HRM on what is needed to facilitate job retention among chronically ill employees. The aim of this qualitative study was to explore and compare the perspectives of Dutch LMs and HRM on this issue. Methods Concept mapping methodology was used to elicit and map statements (ideas) from 10 LMs and 17 HRM about what is needed to ensure continued employment for chronically ill employees. Study participants were recruited through a higher education and an occupational health services organization. Results Participants generated 35 statements. Each group (LMs and HRM) sorted these statements into six thematic clusters. LMs and HRM identified four similar clusters: LMs and HRM must be knowledgeable about the impact of chronic disease on the employee; employees must accept responsibility for work retention; work adaptations must be implemented; and clear company policy. Thematic clusters identified only by LMs were: good manager/employee cooperation and knowledge transfer within the company. Unique clusters identified by HRM were: company culture and organizational support. Conclusions There were both similarities and differences between the views of LMs and HRM on what may facilitate job retention for chronically ill employees. LMs perceived manager/employee cooperation as the most important mechanism for enabling continued employment for these employees. HRM perceived organizational policy and culture as the most important mechanism. 
The findings provide information about topics that occupational health researchers and planners should address in developing job retention programs for chronically ill workers. PMID:21586139

  5. Student use of a Learning Management System for group projects: A case study investigating interaction, collaboration, and knowledge construction

    NASA Astrophysics Data System (ADS)

    Lonn, Steven D.

    Web-based Learning Management Systems (LMS) allow instructors and students to share instructional materials, make class announcements, submit and return course assignments, and communicate with each other online. Previous LMS-related research has focused on how these systems deliver and manage instructional content, with little concern for how students' constructivist learning can be encouraged and facilitated. This study investigated how students use an LMS to interact, collaborate, and construct knowledge within the context of a group project but without mediation by the instructor. The setting for this case study was students' use of the local LMS in one upper-level biology course, within the context of a course-related group project: a mock National Institutes of Health grant proposal. Twenty-one groups (82 students) voluntarily elected to use the LMS, representing two-thirds of all students in the course. Students' peer-to-peer messages within the LMS, event logs, online surveys, focus group interviews, and instructor interviews were used in order to answer the study's overarching research question. The results indicate that students successfully used the LMS to interact and, to a significant extent, collaborate, but there was very little evidence of knowledge construction using the LMS technology. It is possible that the ease and availability of face-to-face meetings, as well as problems and limitations with the technology, were factors that influenced whether students' online basic interaction could be further distinguished as collaboration or knowledge construction. Despite these limitations, students found several tools and functions of the LMS useful for their online peer interaction and completion of their course project. Additionally, LMS designers and implementers are urged to consider previous literature on computer-supported collaborative learning environments in order to better facilitate independent group projects within these systems. 
Further research is needed to identify the best types of scaffolds and overall technological improvements in order to provide support for online collaboration and knowledge construction.

  6. Facilitating job retention for chronically ill employees: perspectives of line managers and human resource managers.

    PubMed

    Haafkens, Joke A; Kopnina, Helen; Meerman, Martha G M; van Dijk, Frank J H

    2011-05-17

    Chronic diseases are a leading contributor to work disability and job loss in Europe. Recent EU policies aim to improve job retention among chronically ill employees. Disability and occupational health researchers argue that this requires a coordinated and pro-active approach at the workplace by occupational health professionals, line managers (LMs), and human resource managers (HRM). Little is known about the perspectives of LMs and HRM on what is needed to facilitate job retention among chronically ill employees. The aim of this qualitative study was to explore and compare the perspectives of Dutch LMs and HRM on this issue. Concept mapping methodology was used to elicit and map statements (ideas) from 10 LMs and 17 HRM about what is needed to ensure continued employment for chronically ill employees. Study participants were recruited through a higher education and an occupational health services organization. Participants generated 35 statements. Each group (LMs and HRM) sorted these statements into six thematic clusters. LMs and HRM identified four similar clusters: LMs and HRM must be knowledgeable about the impact of chronic disease on the employee; employees must accept responsibility for work retention; work adaptations must be implemented; and a clear company policy must be in place. Thematic clusters identified only by LMs were good manager/employee cooperation and knowledge transfer within the company. Unique clusters identified by HRM were company culture and organizational support. There were both similarities and differences between the views of LMs and HRM on what may facilitate job retention for chronically ill employees. LMs perceived manager/employee cooperation as the most important mechanism for enabling continued employment for these employees; HRM perceived organizational policy and culture as the most important mechanism.
The findings provide information about topics that occupational health researchers and planners should address in developing job retention programs for chronically ill workers.

  7. A New Double Digestion Ligation Mediated Suppression PCR Method for Simultaneous Bacteria DNA-Typing and Confirmation of Species: An Acinetobacter sp. Model

    PubMed Central

    Stojowska, Karolina; Krawczyk, Beata

    2014-01-01

    We have designed a new ddLMS PCR (double digestion Ligation Mediated Suppression PCR) method based on restriction site polymorphism upstream from the specific target sequence for the simultaneous identification and differentiation of bacterial strains. The ddLMS PCR combines a simple PCR used for species or genus identification and the LM PCR strategy for strain differentiation. The bacterial identification is confirmed in the form of the PCR product(s), while the length of the PCR product makes it possible to differentiate between bacterial strains. If there is a single copy of the target sequence within genomic DNA, one specific PCR product is created (simplex ddLMS PCR), whereas for multiple copies of the gene, fingerprinting patterns can be obtained (multiplex ddLMS PCR). The described ddLMS PCR method is designed for rapid and specific strain differentiation in medical and microbiological studies. In comparison to other LM PCR methods it has substantial advantages: it enables species-specific DNA-typing without the need to select a pure bacterial culture, it is not sensitive to contamination with other cells or genomic DNA, and it gives unambiguous “band-based” results that are easy to interpret. The utility of ddLMS PCR was shown for the Acinetobacter calcoaceticus-baumannii (Acb) complex, genetically closely related and phenotypically similar species and important nosocomial pathogens, for which there are currently no recommended methods for screening, typing, and identification. In this article two models are proposed: 3′ recA-ddLMS PCR-MaeII/RsaI for Acb complex interspecific typing and 5′ rrn-ddLMS PCR-HindIII/ApaI for Acinetobacter baumannii intraspecific typing. ddLMS PCR allows not only for DNA-typing but also for confirmation of species in one reaction. Also, practical guidelines for designing a diagnostic test based on ddLMS PCR for genotyping different species of bacteria are provided. PMID:25522278

  8. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ming; Yu, Hengyong, E-mail: hengyong-yu@ieee.org

    2015-10-15

    Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended to horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.

  9. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation.

    PubMed

    Chen, Ming; Yu, Hengyong

    2015-10-01

    This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended to horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.

  10. An Attitude Filtering and Magnetometer Calibration Approach for Nanosatellites

    NASA Astrophysics Data System (ADS)

    Söken, Halil Ersin

    2018-04-01

    We propose an attitude filtering and magnetometer calibration approach for nanosatellites. Measurements from magnetometers, a Sun sensor, and gyros are used in the filtering algorithm to estimate the attitude of the satellite together with the bias terms for the gyros and magnetometers. In the traditional approach to attitude filtering, the attitude sensor measurements are used in the filter with a nonlinear vector measurement model. In the proposed algorithm, the TRIAD algorithm is used in conjunction with the unscented Kalman filter (UKF) to form a nontraditional attitude filter. First, the vector measurements from the magnetometer and Sun sensor are processed with the TRIAD algorithm to obtain a coarse attitude estimate for the spacecraft. In the second phase, the estimated coarse attitude is used as a quaternion measurement for the UKF, which estimates the fine attitude and the gyro and magnetometer biases. We evaluate the algorithm for a hypothetical nanosatellite by numerical simulations. The results show that the attitude of the satellite can be estimated with an accuracy better than 0.5° and that the computational load decreases by more than 25% compared to a traditional UKF algorithm. We also discuss the algorithm's performance when the magnetometer errors vary in time.
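
    The TRIAD step described here (two vector observations combined into a coarse attitude estimate) can be sketched in Python. The function below is a generic textbook TRIAD, not the paper's code; the frame conventions and names are illustrative.

```python
import numpy as np

def triad(b1, b2, r1, r2):
    """TRIAD attitude determination from two vector observations.

    b1, b2: unit vectors measured in the body frame (e.g. magnetometer
    and Sun sensor directions); r1, r2: the same directions modelled in
    the reference frame. Returns the rotation matrix A with b = A @ r.
    """
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)   # first axis: the primary vector
        t2 = np.cross(v1, v2)          # second axis: normal to the plane
        t2 /= np.linalg.norm(t2)
        return np.column_stack((t1, t2, np.cross(t1, t2)))

    S = frame(b1, b2)   # triad built from body-frame measurements
    T = frame(r1, r2)   # triad built from reference-frame models
    return S @ T.T
```

    With noise-free measurements the rotation is recovered exactly; with noise, the primary (first) vector dominates the estimate, which is why the more accurate sensor is usually supplied first.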

  11. NOTE: A BPF-type algorithm for CT with a curved PI detector

    NASA Astrophysics Data System (ADS)

    Tang, Jie; Zhang, Li; Chen, Zhiqiang; Xing, Yuxiang; Cheng, Jianping

    2006-08-01

    Helical cone-beam CT is used widely nowadays because of its rapid scan speed and efficient utilization of x-ray dose. Recently, an exact reconstruction algorithm for helical cone-beam CT was proposed (Zou and Pan 2004a Phys. Med. Biol. 49 941-59). The algorithm is referred to as a backprojection-filtering (BPF) algorithm. This BPF algorithm for a helical cone-beam CT with a flat-panel detector (FPD-HCBCT) requires minimum data within the Tam-Danielsson window and can naturally address the problem of ROI reconstruction from data truncated in both longitudinal and transversal directions. In practical CT systems, detectors are expensive and account for a large share of the total cost. Hence, we work on an exact reconstruction algorithm for a CT system with a detector of the smallest size, i.e., a curved PI detector fitting the Tam-Danielsson window. The reconstruction algorithm is derived following the framework of the BPF algorithm. Numerical simulations are done to validate our algorithm in this study.

  12. A BPF-type algorithm for CT with a curved PI detector.

    PubMed

    Tang, Jie; Zhang, Li; Chen, Zhiqiang; Xing, Yuxiang; Cheng, Jianping

    2006-08-21

    Helical cone-beam CT is used widely nowadays because of its rapid scan speed and efficient utilization of x-ray dose. Recently, an exact reconstruction algorithm for helical cone-beam CT was proposed (Zou and Pan 2004a Phys. Med. Biol. 49 941-59). The algorithm is referred to as a backprojection-filtering (BPF) algorithm. This BPF algorithm for a helical cone-beam CT with a flat-panel detector (FPD-HCBCT) requires minimum data within the Tam-Danielsson window and can naturally address the problem of ROI reconstruction from data truncated in both longitudinal and transversal directions. In practical CT systems, detectors are expensive and always take a very important position in the total cost. Hence, we work on an exact reconstruction algorithm for a CT system with a detector of the smallest size, i.e., a curved PI detector fitting the Tam-Danielsson window. The reconstruction algorithm is derived following the framework of the BPF algorithm. Numerical simulations are done to validate our algorithm in this study.

  13. Investigating the Use of the Intel Xeon Phi for Event Reconstruction

    NASA Astrophysics Data System (ADS)

    Sherman, Keegan; Gilfoyle, Gerard

    2014-09-01

    The physics goal of Jefferson Lab is to understand how quarks and gluons form nuclei and it is being upgraded to a higher, 12-GeV beam energy. The new CLAS12 detector in Hall B will collect 5-10 terabytes of data per day and will require considerable computing resources. We are investigating tools, such as the Intel Xeon Phi, to speed up the event reconstruction. The Kalman Filter is one of the methods being studied. It is a linear algebra algorithm that estimates the state of a system by combining existing data and predictions of those measurements. The tools required to apply this technique (i.e. matrix multiplication, matrix inversion) are being written using C++ intrinsics for Intel's Xeon Phi Coprocessor, which uses the Many Integrated Cores (MIC) architecture. The Intel MIC is a new high-performance chip that connects to a host machine through the PCIe bus and is built to run highly vectorized and parallelized code making it a well-suited device for applications such as the Kalman Filter. Our tests of the MIC optimized algorithms needed for the filter show significant increases in speed. For example, matrix multiplication of 5x5 matrices on the MIC was able to run up to 69 times faster than the host core. Work supported by the University of Richmond and the US Department of Energy.
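
    The Kalman filter cycle named above reduces to exactly the matrix operations being ported to the MIC: dense multiplications plus one inversion per update. A minimal linear predict/update step in plain NumPy (generic variable names, not the CLAS12 code) looks like:

```python
import numpy as np

def kf_step(x, P, F, Q, H, R, z):
    """One linear Kalman filter predict/update step.

    x, P: prior state and covariance; F, Q: transition model and process
    noise; H, R: measurement model and noise; z: the new measurement.
    """
    # predict: propagate state and covariance through the model
    x = F @ x
    P = F @ P @ F.T + Q
    # update: blend the prediction with the measurement
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain (the matrix inversion)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

    Every operation here is a small dense matrix product plus one small inversion, which is why vectorized 5x5 matrix kernels dominate the filter's cost.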

  14. Relationships between severity of chronic rhinosinusitis and nasal polyposis, asthma, and atopy

    PubMed Central

    Pearlman, Aaron N.; Chandra, Rakesh K.; Chang, Dennis; Conley, David B.; Peters, Anju Tripathi; Grammer, Leslie C.; Schleimer, Robert T.; Kern, Robert C.

    2013-01-01

    Background The effect of comorbid conditions such as asthma and atopy on the severity of chronic rhinosinusitis (CRS) and the presence of nasal polyps (NPs) remains an area of investigation. We sought to elucidate the relationship among these entities. Methods The study population included 106 consecutive patients with CRS who were referred to a multidisciplinary, university-based allergy and sinus clinic and who underwent computed tomography (CT) scan and skin-prick testing. Data were analyzed to determine Lund-MacKay score (LMS), presence of NPs, asthma status, and sensitivity to seven classes of aeroallergens. Results Skin tests were positive in 52 cases and negative in 54 cases. Although there was no statistical relationship between LMS and atopic status in the entire group, among the asthmatic subgroup mean LMS was greater in nonatopic asthmatic patients than in atopic asthmatic patients. Asthmatic patients had a higher LMS than nonasthmatic patients (p < 0.0001). Asthmatic patients were more likely than nonasthmatic patients to have NPs (57.6% versus 25%; p = 0.0015), regardless of atopic status. Mean LMS was higher in NP patients compared with nonpolyp patients (p < 0.0001), independent of atopic status. Mean LMS was not affected by sensitivity to any particular allergen, with the exception of cockroach-allergic patients, who were more likely to have an LMS of >10 (p = 0.0236) and had more severe maxillary sinus involvement (p = 0.0391). Conclusion These data indicate a strong relationship between CRS severity, as measured by LMS, and the chronic airway inflammatory diseases asthma and NPs. The association between LMS and atopic status appears weak. The present study suggests that CRS is an inflammatory disease that occurs independently of systemic IgE-mediated pathways. PMID:19401038

  15. Embedding a Learning Management System Into an Undergraduate Medical Informatics Course in Saudi Arabia: Lessons Learned

    PubMed Central

    2013-01-01

    Background Public universities in Saudi Arabia today are making substantial investments in e-learning as part of their educational system, especially in the implementation of learning management systems (LMS). To our knowledge, this is the first study conducted in Saudi Arabia exploring medical students’ experience with an LMS, particularly as part of a medical informatics course. Objective This study investigates students’ use of various features of the LMS embedded in a recently implemented medical informatics course. Methods A mixed methodology approach was employed. Survey questionnaires were distributed to all third year medical informatics students at the end of the course. In addition, two focus group sessions were conducted with twelve students. A thematic analysis of the focus group was performed. Results A total of 265 third year medical student surveys (167/265, 63% male and 98/265, 37% female) were completed and analyzed. Overall, 50.6% (134/265) of the students agreed that the course was well planned and up-to-date, had clearly stated objectives and clear evaluation methods, had appropriate course assignments, and that the LMS offered easy navigation. Most of the students rated the course as good/fair overall. In general, females were 10.4% more likely to prefer the LMS, as revealed by higher odds ratios (odds ratio [OR] 1.104, 95% CI 0.86-1.42) compared to males. Survey results showed that students’ use of LMS tools increased after taking the course compared to before taking the course. The full model containing all items was statistically significant (χ2 25=69.52, P<.001, n=243), indicating that the model was able to distinguish between students who had positive attitudes towards LMS and those who did not. The focus group, however, revealed that the students used social networking for general use rather than learning purposes, but they were using other Internet resources and mobile devices for learning.
Male students showed a higher preference for using technology in general to enhance learning activities. Overall, medical student attitudes towards the LMS were generally positive. Students also wanted a reminder and notification tool to help them stay updated with course events. Interestingly, a subset of students had been running a parallel LMS of their own that has features worth exploring and could be integrated with an official LMS in the future. Conclusions To our knowledge, this was the first time that an LMS was used in a medical informatics course. Students showed interest in adapting various LMS tools to enhance their learning and gained more knowledge through familiarity with the tool. Researching an official LMS also revealed the existence of a parallel student-created LMS. This could allow teacher-led and student-led platforms to be integrated in the future for an enhanced student-centered experience. PMID:25075236

  16. Recursive Algorithms for Real-Time Digital CR-RCn Pulse Shaping

    NASA Astrophysics Data System (ADS)

    Nakhostin, M.

    2011-10-01

    This paper reports on recursive algorithms for real-time implementation of CR-(RC)n filters in digital nuclear spectroscopy systems. The algorithms are derived by calculating the Z-transfer function of the filters for filter orders up to n = 4. The performance of the filters is compared with that of the conventional digital trapezoidal filter using a noise generator which separately generates pure series, 1/f, and parallel noise. The results of our study enable one to select the optimum digital filter for different noise and rate conditions.
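
    The paper derives exact Z-domain transfer functions up to n = 4; the sketch below instead uses a simple first-order difference-equation mapping with decay constant a = tau/(tau + dt), so its coefficients are illustrative rather than the authors' exact ones.

```python
def cr_rc_shaper(x, tau, dt, n=1):
    """Digital CR-(RC)^n pulse shaper: one CR (high-pass) stage followed
    by n identical RC (low-pass) stages, each realized recursively."""
    a = tau / (tau + dt)
    # CR stage: differentiates, removing the flat top of the pulse
    y = [0.0] * len(x)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = a * (y[i - 1] + x[i] - x[i - 1])
    # RC stages: integrate, turning the edge into a smooth peak
    for _ in range(n):
        z = [0.0] * len(x)
        z[0] = (1 - a) * y[0]
        for i in range(1, len(x)):
            z[i] = a * z[i - 1] + (1 - a) * y[i]
        y = z
    return y
```

    Fed a step, the cascade produces the familiar unipolar shaped pulse: a rise over a few tau followed by an exponential return to baseline.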

  17. Transmission of broad W/Rh and W/Al (target/filter) x-ray beams operated at 25-49 kVp through common shielding materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Xinhua; Zhang Da; Liu, Bob

    2012-07-15

    Purpose: To provide transmission data for broad 25-39 kVp (kilovolt peak) W/Rh and 25-49 kVp W/Al (target/filter; W, tungsten; Rh, rhodium; Al, aluminum) x-ray beams through common shielding materials, such as lead, concrete, gypsum wallboard, wood, steel, and plate glass. Methods: The unfiltered W-target x-ray spectra measured on a Selenia Dimensions system (Hologic Inc., Bedford, MA) set at 20-49 kVp were filtered using 50-µm Rh and 700-µm Al, respectively, and were subsequently used for Monte Carlo calculations. The transmission of broad x-ray beams through shielding materials was simulated using the Geant4 low-energy electromagnetic physics package with photon and electron processes above 250 eV, including the photoelectric effect, Compton scattering, and Rayleigh scattering. The calculated transmission data were fitted using the Archer equation with a robust fitting algorithm. Results: The transmission of broad x-ray beams through the above-mentioned shielding materials was calculated down to about 10^-5 for 25-39 kVp W/Rh and 25-49 kVp W/Al. The fitted values of α, β, and γ in the Archer equation are provided. The α values for kVp ≥ 40 were approximately consistent with those of NCRP Report No. 147. Conclusions: These data provide inputs for the shielding designs of x-ray imaging facilities with W-anode x-ray beams, such as the Selenia Dimensions.
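
    The Archer equation referred to above models broad-beam transmission through a shield of thickness x as B(x) = [(1 + β/α) exp(αγx) - β/α]^(-1/γ). A minimal sketch of the model (the parameter values in the test are illustrative, not the paper's fitted values):

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Archer broad-beam transmission through a shield of thickness x:
    B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha] ** (-1/gamma)
    """
    r = beta / alpha
    return ((1.0 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)
```

    B(0) = 1 by construction, and for positive alpha the transmission decreases monotonically with thickness, which is what makes the three-parameter fit convenient for shielding design tables.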

  18. Primary leiomyosarcoma in the colon

    PubMed Central

    Yang, Jing

    2018-01-01

    Abstract Rationale: Leiomyosarcoma (LMS) is a common type of soft tissue sarcoma. Primary colonic LMS is a very rare entity, accounting for 1% to 2% of gastrointestinal malignancies. Patient concerns: We report a case of a 55-year-old female who presented with a sudden onset of sharp right lower quadrant abdominal pain. Electronic colonoscopy showed a normal lumen. However, an abdominal computed tomography scan revealed a mass of soft tissue attenuation inseparable from the ascending colon which appeared as a gastrointestinal stromal tumor (GIST). Diagnoses: It is important to diagnose LMS definitively by immunohistochemical profiling of smooth muscle actin, desmin, and CD34. Interventions: She underwent laparotomy and right hemicolectomy, and histology confirmed a colonic LMS. The patient received no oncological treatment after surgery. Outcomes: No recurrence or metastasis was observed at 5 months postoperatively. It is crucial to identify colonic LMS precisely based on immunohistochemistry, and thereby distinguish it from GIST. Lessons: Further investigation of LMS cases is required to establish standard treatment strategies. PMID:29443772

  19. Method for hyperspectral imagery exploitation and pixel spectral unmixing

    NASA Technical Reports Server (NTRS)

    Lin, Ching-Fang (Inventor)

    2003-01-01

    An efficient hybrid approach to exploit hyperspectral imagery and unmix spectral pixels. This hybrid approach uses a genetic algorithm to solve for the abundance vector of the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust filter to derive the abundance estimate for the next pixel. By using a Kalman filter, the abundance estimate for a pixel can be obtained in a single iteration, which is much faster than the genetic algorithm. The output of the robust filter is fed to the genetic algorithm again to derive an accurate abundance estimate for the current pixel. Using the robust filter solution as the starting point of the genetic algorithm speeds up its evolution. After obtaining the accurate abundance estimate, the procedure moves to the next pixel, using the output of the genetic algorithm as the previous state estimate to derive the abundance estimate for this pixel with the robust filter, and again uses the genetic algorithm to refine that estimate efficiently. This iteration continues until all pixels in the hyperspectral image cube have been processed.

  20. An Improved Harmonic Current Detection Method Based on Parallel Active Power Filter

    NASA Astrophysics Data System (ADS)

    Zeng, Zhiwu; Xie, Yunxiang; Wang, Yingpin; Guan, Yuanpeng; Li, Lanfang; Zhang, Xiaoyu

    2017-05-01

    Harmonic detection technology plays an important role in applications of the active power filter. The accuracy and real-time performance of harmonic detection are preconditions for the compensation performance of the Active Power Filter (APF). This paper proposes an improved instantaneous reactive power harmonic current detection algorithm. The algorithm uses an improved ip-iq algorithm combined with a moving-average filter. The proposed ip-iq algorithm removes the αβ and dq coordinate transformations, reducing the computational cost, simplifying the extraction of the fundamental components of the load currents, and improving the detection speed. The traditional low-pass filter is replaced by the moving-average filter, which detects the harmonic currents more precisely and quickly. Compared with the traditional algorithm, the THD (Total Harmonic Distortion) of the grid currents is reduced from 4.41% to 3.89% in the simulations and from 8.50% to 4.37% in the experiments after the improvement. The results show the proposed algorithm is more accurate and efficient.
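
    The moving-average replacement for the low-pass filter works because a window spanning exactly one fundamental period averages every integer harmonic to zero, leaving only the DC (fundamental) component of the dq-frame current. A sketch, with an illustrative window length and test signal:

```python
from collections import deque

def moving_average_filter(samples, window):
    """Sliding-window mean over the last `window` samples; with the window
    equal to one fundamental period, all harmonics average to zero."""
    buf = deque(maxlen=window)
    acc = 0.0
    out = []
    for s in samples:
        if len(buf) == window:
            acc -= buf[0]          # drop the sample leaving the window
        buf.append(s)
        acc += s
        out.append(acc / len(buf))
    return out
```

    Unlike a conventional IIR low-pass filter, the output settles exactly one window length after a change in the fundamental, which is the source of the speed advantage claimed in the abstract.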

  1. Using the LMS method to calculate z-scores for the Fenton preterm infant growth chart.

    PubMed

    Fenton, T R; Sauve, R S

    2007-12-01

    The use of exact percentiles and z-scores permits optimal assessment of infants' growth. In addition, z-scores allow the precise description of size outside of the 3rd and 97th percentiles of a growth reference. To calculate percentiles and z-scores, health professionals require the LMS parameters (Lambda for the skew, Mu for the median, and Sigma for the generalized coefficient of variation; Cole, 1990). The objective of this study was to calculate the LMS parameters for the Fenton preterm growth chart (2003), through secondary analysis of the Fenton preterm growth chart data. The Cole methods were used to produce the LMS parameters and to smooth the L parameter. New percentiles were generated from the smoothed LMS parameters and compared with the original growth chart percentiles. The maximum differences between the original percentile curves and the curves generated from the LMS parameters were, for weight, 66 g (2.9%) at 32 weeks on the 90th percentile; for head circumference, differences of 0.3 cm (0.6-1.0%); and for length, 0.5 cm (1.6%) at 22 weeks on the 97th percentile. The percentile curves generated from the smoothed LMS parameters for the Fenton growth chart are thus similar to the original curves. These LMS parameters for the Fenton preterm growth chart facilitate the calculation of z-scores, which will permit more precise assessment of the growth of infants who are born preterm.
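
    Given the L, M, and S parameters at an age, Cole's method converts a measurement x to a z-score as z = ((x/M)^L - 1) / (L*S) when L != 0, and z = ln(x/M) / S when L = 0. A sketch (the parameter values in the test are illustrative, not actual Fenton chart values):

```python
import math

def lms_zscore(x, L, M, S):
    """Z-score of measurement x under Cole's LMS method.

    L (lambda): Box-Cox power accounting for skew; M (mu): median;
    S (sigma): generalized coefficient of variation.
    """
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)
```

    Inverting the same relation gives the percentile curves: x = M * (1 + L*S*z)^(1/L).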

  2. The research of radar target tracking observed information linear filter method

    NASA Astrophysics Data System (ADS)

    Chen, Zheng; Zhao, Xuanzhi; Zhang, Wen

    2018-05-01

    Aiming at the problems of low precision, or even divergence, caused by the nonlinear observation equation in radar target tracking, a new filtering algorithm is proposed in this paper. In this algorithm, local linearization is carried out on the observed distance and angle data separately. The Kalman filter is then applied to the linearized data. After the data are filtered, a mapping operation provides the posterior estimate of the target state. A large number of simulation results show that this algorithm solves the above problems effectively, and that its performance is better than that of the traditional filtering algorithm for nonlinear dynamic systems.
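
    The scheme (linearize each observed channel, run a linear Kalman filter, then map back to the target state) can be illustrated with a scalar random-walk filter plus the polar-to-Cartesian mapping. The model and tuning values are assumptions for illustration, not the paper's:

```python
import math

def kalman_1d(zs, q=1e-4, r=0.25):
    """Scalar random-walk Kalman filter applied to one linearized channel.

    zs: noisy measurements; q: process noise; r: measurement noise.
    Returns the filtered estimates.
    """
    x, p = zs[0], 1.0
    out = []
    for z in zs:
        p += q                 # predict under the random-walk model
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # correct with the measurement
        p *= 1.0 - k
        out.append(x)
    return out

def polar_to_cartesian(rng, bearing):
    """Mapping step: filtered (range, bearing) -> Cartesian position."""
    return rng * math.cos(bearing), rng * math.sin(bearing)
```

    Filtering range and angle as separate linear channels avoids linearizing the full measurement equation inside the filter, which is where the divergence noted in the abstract originates.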

  3. LHCb Kalman Filter cross architecture studies

    NASA Astrophysics Data System (ADS)

    Cámpora Pérez, Daniel Hugo

    2017-10-01

    The 2020 upgrade of the LHCb detector will vastly increase the rate of collisions the Online system needs to process in software in order to filter events in real time. 30 million collisions per second will pass through a selection chain, where each step is executed conditional on acceptance by the previous one. The Kalman Filter is a fit applied to all reconstructed tracks which, due to its time characteristics and early execution in the selection chain, consumes 40% of the whole reconstruction time in the current trigger software. This makes the Kalman Filter a time-critical component as the LHCb trigger evolves into a full software trigger in the Upgrade. I present a new Kalman Filter algorithm for LHCb that can efficiently make use of any kind of SIMD processor, and explain its design in depth. Performance benchmarks are compared across a variety of hardware architectures, including x86_64, Power8, and the Intel Xeon Phi accelerator, and the suitability of these architectures for efficiently performing the LHCb reconstruction process is determined.

  4. Driver drowsiness detection using multimodal sensor fusion

    NASA Astrophysics Data System (ADS)

    Andreeva, Elena O.; Aarabi, Parham; Philiastides, Marios G.; Mohajer, Keyvan; Emami, Majid

    2004-04-01

    This paper proposes a multi-modal sensor fusion algorithm for the estimation of driver drowsiness. Driver sleepiness is believed to be responsible for more than 30% of passenger car accidents and for 4% of all accident fatalities. In commercial vehicles, drowsiness is blamed for 58% of single truck accidents and 31% of commercial truck driver fatalities. This work proposes an innovative automatic sleep-onset detection system. Using multiple sensors, the driver's body is studied as a mechanical structure of springs and dampeners. The sleep-detection system consists of highly sensitive triple-axial accelerometers to monitor the driver's upper body in 3-D. The subject is modeled as a linear time-variant (LTV) system. An LMS adaptive filter estimation algorithm generates the transfer function (i.e. weight coefficients) for this LTV system. Separate coefficients are generated for the awake and asleep states of the subject. These coefficients are then used to train a neural network. Once trained, the neural network classifies the condition of the driver as either awake or asleep. The system has been tested on a total of 8 subjects. The tests were conducted on sleep-deprived individuals for the sleep state and on fully awake individuals for the awake state. When trained and tested on the same subject, the system detected sleep and awake states of the driver with a success rate of 95%. When the system was trained on three subjects and then retested on a fourth "unseen" subject, the classification rate dropped to 90%. Furthermore, it was attempted to correlate driver posture and sleepiness by observing how car vibrations propagate through a person's body. Eight additional subjects were studied for this purpose. The results obtained in this experiment proved inconclusive, which was attributed to significant differences in the individuals' habitual postures.
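
    The LMS estimation step described above (adapting transversal-filter weights so the filter output tracks the observed system response) has the standard form below; the filter length, step size, and test system are illustrative, not the paper's:

```python
import numpy as np

def lms_identify(x, d, n_taps=4, mu=0.05):
    """Identify an unknown system with the LMS rule: adapt transversal
    filter weights w so that the filter applied to x reproduces d.

    x: input signal; d: desired (observed system output) signal.
    Returns the adapted tap weights.
    """
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]  # tap-delay line, newest first
        e = d[n] - w @ u                   # error between desired and output
        w += 2.0 * mu * e * u              # steepest-descent weight update
    return w
```

    In the noiseless case the weights converge to the unknown impulse response; in the drowsiness system, separate weight sets learned for the awake and asleep states become the features fed to the classifier.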

  5. Interacting Multiple Model (IMM) Fifth-Degree Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking

    PubMed Central

    Liu, Hua; Wu, Wen

    2017-01-01

For improving the tracking accuracy and model switching speed of maneuvering target tracking in nonlinear systems, a new algorithm named the interacting multiple model fifth-degree spherical simplex-radial cubature Kalman filter (IMM5thSSRCKF) is proposed in this paper. The new algorithm is a combination of the interacting multiple model (IMM) filter and the fifth-degree spherical simplex-radial cubature Kalman filter (5thSSRCKF). The proposed algorithm uses a Markov process to describe the switching probability among the models, and uses the 5thSSRCKF for the state estimation of each model. The 5thSSRCKF is an improved filter algorithm, which utilizes the fifth-degree spherical simplex-radial rule to improve the filtering accuracy. Finally, the tracking performance of the IMM5thSSRCKF is evaluated by simulation in a typical maneuvering target tracking scenario. Simulation results show that the proposed algorithm has better tracking performance and faster model switching when handling maneuvering models compared with the interacting multiple model unscented Kalman filter (IMMUKF), the interacting multiple model cubature Kalman filter (IMMCKF) and the interacting multiple model fifth-degree cubature Kalman filter (IMM5thCKF). PMID:28608843
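The Markov-switching step that the IMM framework adds on top of the per-model filters can be illustrated on its own. The sketch below shows only the generic IMM interaction (mixing) step, not the IMM5thSSRCKF itself; the model probabilities and transition matrix are made-up values.

```python
import numpy as np

def imm_mix(mu, P):
    """One IMM interaction step.

    mu : current model probabilities, shape (M,)
    P  : Markov transition matrix, P[i, j] = Prob(model j at k | model i at k-1)

    Returns the predicted model probabilities c and the mixing weights
    omega[i, j] = P[i, j] * mu[i] / c[j], used to blend the per-model
    state estimates before each filter's own predict/update cycle.
    """
    c = mu @ P                              # predicted probability of each model
    omega = (P * mu[:, None]) / c[None, :]  # columns sum to 1
    return c, omega

# two hypothetical models (e.g. constant velocity vs. maneuver)
mu = np.array([0.7, 0.3])
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])
c, omega = imm_mix(mu, P)
```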

  6. Interacting Multiple Model (IMM) Fifth-Degree Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking.

    PubMed

    Liu, Hua; Wu, Wen

    2017-06-13

For improving the tracking accuracy and model switching speed of maneuvering target tracking in nonlinear systems, a new algorithm named the interacting multiple model fifth-degree spherical simplex-radial cubature Kalman filter (IMM5thSSRCKF) is proposed in this paper. The new algorithm is a combination of the interacting multiple model (IMM) filter and the fifth-degree spherical simplex-radial cubature Kalman filter (5thSSRCKF). The proposed algorithm uses a Markov process to describe the switching probability among the models, and uses the 5thSSRCKF for the state estimation of each model. The 5thSSRCKF is an improved filter algorithm, which utilizes the fifth-degree spherical simplex-radial rule to improve the filtering accuracy. Finally, the tracking performance of the IMM5thSSRCKF is evaluated by simulation in a typical maneuvering target tracking scenario. Simulation results show that the proposed algorithm has better tracking performance and faster model switching when handling maneuvering models compared with the interacting multiple model unscented Kalman filter (IMMUKF), the interacting multiple model cubature Kalman filter (IMMCKF) and the interacting multiple model fifth-degree cubature Kalman filter (IMM5thCKF).

  7. Flat panel detector-based cone beam computed tomography with a circle-plus-two-arcs data acquisition orbit: preliminary phantom study.

    PubMed

    Ning, Ruola; Tang, Xiangyang; Conover, David; Yu, Rongfeng

    2003-07-01

Cone beam computed tomography (CBCT) has been investigated in the past two decades due to its potential advantages over fan beam CT. These advantages include (a) great improvement in data acquisition efficiency, spatial resolution, and spatial resolution uniformity, (b) substantially better utilization of the x-ray photons generated by the x-ray tube compared to fan beam CT, and (c) significant advancement in clinical three-dimensional (3D) CT applications. However, most past studies of CBCT have focused on cone beam data acquisition theories and reconstruction algorithms. The recent development of x-ray flat panel detectors (FPD) has made CBCT imaging feasible and practical. This paper reports a newly built flat panel detector-based CBCT prototype scanner and presents the results of a preliminary evaluation of the prototype through a phantom study. The prototype consisted of an x-ray tube, a flat panel detector, a GE 8800 CT gantry, a patient table and a computer system. The prototype was constructed by modifying a GE 8800 CT gantry such that both a single-circle cone beam acquisition orbit and a circle-plus-two-arcs orbit can be achieved. With a circle-plus-two-arcs orbit, a complete set of cone beam projection data can be obtained, consisting of a set of circle projections and a set of arc projections. Using the prototype scanner, the set of circle projections was acquired by rotating the x-ray tube and the FPD together on the gantry, and the set of arc projections was obtained by tilting the gantry while the x-ray tube and detector were at the 12 and 6 o'clock positions, respectively. A filtered backprojection exact cone beam reconstruction algorithm based on the circle-plus-two-arcs orbit was used for cone beam reconstruction from both the circle and arc projections. The system was first characterized in terms of the linearity and dynamic range of the detector. 
Then the uniformity, spatial resolution and low contrast resolution were assessed using different phantoms, mainly in the central plane of the cone beam reconstruction. Finally, the reconstruction accuracy of the circle-plus-two-arcs orbit and its related filtered backprojection cone beam volume CT reconstruction algorithm was evaluated with a specially designed disk phantom. The results obtained using the new cone beam acquisition orbit and the related reconstruction algorithm were compared, in terms of reconstruction accuracy, to those obtained using a single-circle cone beam geometry and Feldkamp's algorithm. The results of the study demonstrate that the circle-plus-two-arcs cone beam orbit is achievable in practice. Also, the reconstruction accuracy of cone beam reconstruction is significantly improved with the circle-plus-two-arcs orbit and its related exact CB-FBP algorithm, as compared to using a single-circle cone beam orbit and Feldkamp's algorithm.

  8. HerMES: point source catalogues from Herschel-SPIRE observations II

    NASA Astrophysics Data System (ADS)

    Wang, L.; Viero, M.; Clarke, C.; Bock, J.; Buat, V.; Conley, A.; Farrah, D.; Guo, K.; Heinis, S.; Magdis, G.; Marchetti, L.; Marsden, G.; Norberg, P.; Oliver, S. J.; Page, M. J.; Roehlly, Y.; Roseboom, I. G.; Schulz, B.; Smith, A. J.; Vaccari, M.; Zemcov, M.

    2014-11-01

The Herschel Multi-tiered Extragalactic Survey (HerMES) is the largest Guaranteed Time Key Programme on the Herschel Space Observatory. With a wedding cake survey strategy, it consists of nested fields of varying depth and area totalling ~380 deg2. In this paper, we present deep point source catalogues extracted from Herschel-Spectral and Photometric Imaging Receiver (SPIRE) observations of all HerMES fields, except for the later addition of the 270 deg2 HerMES Large-Mode Survey (HeLMS) field. These catalogues constitute the second Data Release (DR2) made in 2013 October. A subset of these catalogues, consisting of bright sources extracted from Herschel-SPIRE observations completed by 2010 May 1 (covering ~74 deg2), was released earlier in the first extensive data release in 2012 March. Two different methods are used to generate the point source catalogues: the SUSSEXTRACTOR point source extractor used in two earlier data releases (EDR and EDR2), and a new source detection and photometry method. The latter combines an iterative source detection algorithm, STARFINDER, and a De-blended SPIRE Photometry algorithm. We use end-to-end Herschel-SPIRE simulations with realistic number counts and clustering properties to characterize basic properties of the point source catalogues, such as the completeness, reliability, photometric and positional accuracy. Over 500 000 catalogue entries in HerMES fields (except HeLMS) are released to the public through the HeDAM (Herschel Database in Marseille) website (http://hedam.lam.fr/HerMES).

  9. Research on Palmprint Identification Method Based on Quantum Algorithms

    PubMed Central

    Zhang, Zhanzhan

    2014-01-01

Quantum image recognition is a technology that uses quantum algorithms to process image information. It can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that the quantum filtering algorithm achieves a better result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation thanks to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of √N operations to find the target palmprint, whereas the traditional method needs on the order of N. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165

  10. LMS Transitioning to "Moodle": A Surprising Case of Successful, Emergent Change Management

    ERIC Educational Resources Information Center

    Lawler, Alan

    2011-01-01

    During 2009-10 the University of Ballarat implemented the open-source learning management system (LMS) "Moodle" alongside its existing legacy LMS, "Blackboard". While previous IT implementations have been troublesome at the university, notably the student information and finance management systems in 2008-09, the…

  11. EFL Learners' Perceptions of Using LMS

    ERIC Educational Resources Information Center

    Srichanyachon, Napaporn

    2014-01-01

    The purpose of this study is to present the views, attitudes, and perspectives of undergraduate students using Learning Management System (LMS) along with traditional face-to-face learning. It attempts to understand the factors that influence the adoption of LMS based on users' own experience. The samples were 198 undergraduate students enrolled…

  12. Adaptivity in ProPer: An Adaptive SCORM Compliant LMS

    ERIC Educational Resources Information Center

    Kazanidis, Ioannis; Satratzemi, Maya

    2009-01-01

    Adaptive Educational Hypermedia Systems provide personalized educational content to learners. However most of them do not support the functionality of Learning Management Systems (LMS) and the reusability of their courses is hard work. On the other hand some LMS support SCORM specifications but do not provide adaptive features. This article…

  13. 47 CFR 90.355 - LMS operations below 512 MHz.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... PRIVATE LAND MOBILE RADIO SERVICES Intelligent Transportation Systems Radio Service § 90.355 LMS... LMS station and the nearest co-channel base station of another licensee operating a voice system is 75... MHz, 150-170 MHz, and 450-512 MHz bands may use either base-mobile frequencies currently assigned the...

  14. Key Factors to Instructors' Satisfaction of Learning Management Systems in Blended Learning

    ERIC Educational Resources Information Center

    Al-Busaidi, Kamla Ali; Al-Shihi, Hafedh

    2012-01-01

    Learning Management System (LMS) enables institutions to administer their educational resources, and support their traditional classroom education and distance education. LMS survives through instructors' continuous use, which may be to a great extent associated with their satisfaction of the LMS. Consequently, this study examined the key factors…

  15. Quantile regression via vector generalized additive models.

    PubMed

    Yee, Thomas W

    2004-07-30

One of the most popular methods for quantile regression is the LMS method of Cole and Green. The method naturally falls within a penalized likelihood framework, and consequently allows for considerable flexibility because all three parameters may be modelled by cubic smoothing splines. The model is also very understandable: for a given value of the covariate, the LMS method applies a Box-Cox transformation to the response in order to transform it to standard normality; to obtain the quantiles, an inverse Box-Cox transformation is applied to the quantiles of the standard normal distribution. The purposes of this article are three-fold. Firstly, LMS quantile regression is presented within the framework of the class of vector generalized additive models. This confers a number of advantages, such as a unifying theory and estimation process. Secondly, a new LMS method based on the Yeo-Johnson transformation is proposed, which has the advantage that the response is not restricted to be positive. Lastly, this paper describes a software implementation of three LMS quantile regression methods in the S language. This includes the LMS-Yeo-Johnson method, which is estimated efficiently by a new numerical integration scheme. The LMS-Yeo-Johnson method is illustrated by way of a large cross-sectional data set from a New Zealand working population. Copyright 2004 John Wiley & Sons, Ltd.
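The inverse Box-Cox step of the LMS method can be written out directly: for L ≠ 0, Q(p) = M(1 + L·S·z_p)^(1/L), with the limiting form M·exp(S·z_p) at L = 0, where z_p is the standard normal quantile. A minimal sketch (the L, M, S values below are arbitrary illustrative numbers, not fitted parameters from the paper):

```python
from statistics import NormalDist

def lms_quantile(p, L, M, S):
    """Quantile from LMS parameters via the inverse Box-Cox transform.

    Q(p) = M * (1 + L*S*z_p)**(1/L)  for L != 0,
    Q(p) = M * exp(S*z_p)            for L == 0 (the limit as L -> 0).
    """
    from math import exp
    z = NormalDist().inv_cdf(p)        # standard normal quantile z_p
    if abs(L) < 1e-8:
        return M * exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

# the median (p = 0.5) recovers M regardless of L and S, since z_0.5 = 0
q50 = lms_quantile(0.5, L=0.7, M=16.5, S=0.11)
```

This also makes the interpretation in the abstract concrete: M is the median, S the coefficient of variation, and L the Box-Cox power governing skewness.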

  16. ZnS/diamond composite coatings for infrared transmission applications formed by the aerosol deposition method

    NASA Astrophysics Data System (ADS)

    Johnson, Scooter D.; Kub, Fritz J.; Eddy, Charles R.

    2013-06-01

The deposition of nano-crystalline ZnS/diamond composite protective coatings on silicon, sapphire, and ZnS substrates, as a preliminary step to coating infrared transparent ZnS substrates from powder mixtures by the aerosol deposition method, is presented. Advantages of the aerosol deposition method include the ability to form dense, nanocrystalline films up to hundreds of microns thick at room temperature and at a high deposition rate on a variety of substrates. Deposition is achieved by creating a pressure gradient that accelerates micrometer-scale particles in an aerosol to high velocity. Upon impact with the target substrate the particles fracture and embed. Continued deposition forms the thick compacted film. Deposition from an aerosolized mixture of ZnS and diamond powders onto all targets results in a linear trend from apparent sputter erosion of the substrate at 100% diamond to formation of a film with increasing fractions of ZnS. The crossover from abrasion to film formation on sapphire occurs above about 50% ZnS, and a mixture of 90% ZnS and 10% diamond forms a well-adhered film of about 0.7 μm thickness at a rate of 0.14 μm/min. The resulting films are characterized by scanning electron microscopy, profilometry, infrared transmission spectroscopy, and x-ray photoemission spectroscopy. These initial films mark progress toward the future goal of coating ZnS substrates for abrasion resistance.

  17. The optimal digital filters of sine and cosine transforms for geophysical transient electromagnetic method

    NASA Astrophysics Data System (ADS)

    Zhao, Yun-wei; Zhu, Zi-qiang; Lu, Guang-yin; Han, Bo

    2018-03-01

The sine and cosine transforms implemented with digital filters have been used in transient electromagnetic methods for a few decades. Kong (2007) proposed a method of obtaining filter coefficients, which are computed in the sample domain via a Hankel transform pair. However, the curve shape of the Hankel transform pair changes with a parameter, which is usually set to 1 or 3 in the process of obtaining the digital filter coefficients of the sine and cosine transforms. First, this study investigates the influence of this parameter on the digital filter algorithm of the sine and cosine transforms, based on the digital filter algorithm of the Hankel transform and the relationship between the sine and cosine functions and the ±1/2 order Bessel functions of the first kind. The results show that the selection of the parameter strongly influences the precision of the digital filter algorithm. Second, given the optimal selection of the parameter, it is found that an optimal sampling interval s also exists that achieves the best precision of the digital filter algorithm. Finally, this study proposes four groups of sine and cosine transform digital filter coefficients of different lengths, which may help to develop the digital filter algorithm of sine and cosine transforms and promote its application.

  18. Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation

    NASA Astrophysics Data System (ADS)

    Dwivedi, Shekhar

    2009-02-01

Low pass filters can affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing, which in turn leads to better quantification followed by correct diagnosis and accurate interpretation by the physician. This study aims at evaluating low pass filters on SPECT reconstruction algorithms. The criterion for evaluating the filters is the estimation of the SPECT reconstructed cardiac azimuth and elevation angles. The low pass filters studied are Butterworth, Gaussian, Hamming, Hanning and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization) and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only Butterworth is used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume, and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that the behavior of the Hamming, Hanning and Parzen filters (used with FBP) with varying cutoff is similar for all the datasets. The Butterworth filter (cutoff > 0.4) behaves in a similar fashion for all the datasets using all the algorithms, whereas with OSEM for a cutoff < 0.4 it fails to generate the cardiac orientation due to oversmoothing, and gives an unstable response with FBP and MLEM. This study on evaluating the effect of low pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into optimal selection of filter parameters.
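The cutoff and order parameters varied in this study control the Butterworth magnitude response, |H(f)| = 1/√(1 + (f/f_c)^(2n)). A small sketch of that response curve (frequencies expressed as fractions of the Nyquist frequency, an assumption matching common SPECT convention):

```python
import numpy as np

def butterworth_lowpass_gain(f, cutoff, order):
    """Magnitude response of an order-n Butterworth low-pass filter:
    |H(f)| = 1 / sqrt(1 + (f/fc)**(2n)).
    Higher order -> sharper roll-off; lower cutoff -> stronger smoothing."""
    f = np.asarray(f, dtype=float)
    return 1.0 / np.sqrt(1.0 + (f / cutoff) ** (2 * order))

# response sampled up to the Nyquist fraction 0.5
f = np.linspace(0.0, 0.5, 6)
gain = butterworth_lowpass_gain(f, cutoff=0.4, order=5)
```

At f = cutoff the gain is exactly 1/√2 for any order, which is why the study can compare reconstructions at matched cutoffs while varying order independently.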

  19. Adaptive Filtering in the Wavelet Transform Domain via Genetic Algorithms

    DTIC Science & Technology

    2004-08-06

wavelet transforms, whereas the term “evolved” pertains only to the altered wavelet coefficients used during the inverse transform process. ... In other words, the inverse transform produces the original signal x(t) from the wavelet and scaling coefficients, x(t) = Σ_k Σ_n d_{k,n} ψ_{k,n}(t). ... reconstruct the original signal as accurately as possible. The inverse transform reconstructs an approximation of the original signal (Burrus

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ushizima, Daniela; Perciano, Talita; Krishnan, Harinarayan

Fibers provide exceptional strength-to-weight ratio capabilities when woven into ceramic composites, transforming them into materials with exceptional resistance to high temperature and high strength combined with improved fracture toughness. Microcracks are inevitable when the material is under strain; they can be imaged using synchrotron X-ray computed micro-tomography (mu-CT) for assessment of material mechanical toughness variation. An important part of this analysis is to recognize fibrillar features. This paper presents algorithms for detecting and quantifying composite cracks and fiber breaks from high-resolution image stacks. First, we propose recognition algorithms to identify the different structures of the composite, including matrix cracks and fiber breaks. Second, we introduce our package F3D for fast filtering of large 3D imagery, implemented in OpenCL to take advantage of graphics cards. Results show that our algorithms automatically identify micro-damage and that the GPU-based implementation introduced here takes minutes, being 17x faster than similar tools on a typical image file.

  1. Event processing in X-IFU detector onboard Athena.

    NASA Astrophysics Data System (ADS)

    Ceballos, M. T.; Cobos, B.; van der Kuurs, J.; Fraga-Encinas, R.

    2015-05-01

The X-ray Observatory ATHENA was proposed in April 2014 as the mission to implement the science theme "The Hot and Energetic Universe" selected by ESA for L2 (the second Large-class mission in ESA's Cosmic Vision science programme). One of the two X-ray detectors designed to be onboard ATHENA is X-IFU, a cryogenic microcalorimeter based on Transition Edge Sensor (TES) technology that will provide spatially resolved high-resolution spectroscopy. X-IFU will be developed by a consortium of European research institutions, currently from France (leadership), Italy, The Netherlands, Belgium, the UK, Germany and Spain. From Spain, IFCA (CSIC-UC) is involved in the Digital Readout Electronics (DRE) unit of the X-IFU detector, in particular in the Event Processor Subsystem. We at IFCA are in charge of the development and implementation in the DRE unit of the Event Processing algorithms, designed to recognize, from a noisy signal, the intensity pulses generated by the absorption of X-ray photons, and then extract their main parameters (coordinates, energy, arrival time, grade, etc.). Here we present the design and performance of the algorithms developed for event recognition (adjusted derivative) and pulse grading/qualification, as well as the progress on the algorithms designed to extract the energy content of the pulses (pulse optimal filtering). IFCA will finally be responsible for the on-board implementation in the (TBD) FPGAs or micro-processors of the DRE unit, where this Event Processing part will take place, to fit into the limited telemetry of the instrument.
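A toy version of derivative-based pulse recognition conveys the idea behind the trigger: a photon absorption produces a fast rise, so the first derivative crossing a threshold flags a pulse onset. This is only an illustration with a synthetic signal; the X-IFU "adjusted derivative" algorithm is considerably more elaborate.

```python
import numpy as np

def detect_pulses(signal, threshold):
    """Flag pulse onsets where the first derivative crosses a threshold."""
    d = np.diff(signal)
    above = d > threshold
    # rising-edge onsets: first sample of each run above threshold
    onsets = np.flatnonzero(above & ~np.concatenate(([False], above[:-1])))
    return onsets

# synthetic record: two exponential pulses riding on a flat baseline
sig = np.zeros(500)
for p in (100, 300):
    sig[p:p + 20] += 5.0 * np.exp(-np.arange(20) / 5.0)
onsets = detect_pulses(sig, threshold=1.0)
```

In a real detector stream the threshold must sit above the derivative of the noise floor, which is where the "adjusted" part of the flight algorithm comes in.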

  2. getimages: Background derivation and image flattening method

    NASA Astrophysics Data System (ADS)

    Men'shchikov, Alexander

    2017-05-01

getimages performs background derivation and image flattening for high-resolution images obtained with space observatories. It is based on median filtering with sliding windows corresponding to a range of spatial scales from the observational beam size up to a maximum structure width X. The latter is the single free parameter of getimages and can be evaluated manually from the observed image. The median filtering algorithm provides a background image for structures of all widths below X. The same median filtering procedure applied to an image of standard deviations derived from a background-subtracted image results in a flattening image. Finally, a flattened image is computed by dividing the background-subtracted image by the flattening image. Standard deviations in the flattened image are now uniform outside sources and filaments. Detecting structures in such radically simplified images results in much cleaner extractions that are more complete and reliable. getimages also reduces various observational and map-making artifacts and equalizes noise levels between independent tiles of mosaicked images. The code (a Bash script) uses FORTRAN utilities from getsources (ascl:1507.014), which must be installed.
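The sliding-median idea can be illustrated in 1-D: median-filter to get a background, subtract, estimate the local scatter of the residual, and divide. This is a heavily simplified, single-scale sketch of the scheme, not the getimages code; the MAD-to-sigma factor 1.4826 is a standard robust-statistics assumption.

```python
import numpy as np

def sliding_median(x, w):
    """Running median with an odd window w; edges handled by reflection."""
    pad = w // 2
    xp = np.pad(x, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(xp, w)
    return np.median(windows, axis=1)

def flatten_1d(signal, w):
    """Single-scale sketch: median background, subtract, normalize by
    a median-based estimate of the local scatter."""
    background = sliding_median(signal, w)
    residual = signal - background
    scatter = sliding_median(np.abs(residual), w) * 1.4826  # MAD -> sigma
    return residual / np.maximum(scatter, 1e-12)

# noisy linear "background": the flattened residual has roughly unit scatter
rng = np.random.default_rng(0)
sig = np.linspace(0.0, 10.0, 201) + 0.1 * rng.standard_normal(201)
flat = flatten_1d(sig, 15)
```

The real code repeats this over a range of window sizes up to X and works in 2-D, but the normalization logic is the same: after division, noise levels are comparable everywhere, so a single detection threshold applies across the map.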

  3. An accelerated non-Gaussianity based multichannel predictive deconvolution method with the limited supporting region of filters

    NASA Astrophysics Data System (ADS)

    Li, Zhong-xiao; Li, Zhen-chun

    2016-09-01

The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve for the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples, at the cost of more computation time, compared with the 1D predictive filter. In this paper we first use a cross-correlation strategy to determine the limited supporting region of filters, i.e., the region of the filter coefficient space whose coefficients play a major role in multiple removal. To solve for the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve for the 2D predictive filter in the multichannel predictive deconvolution with the non-Gaussian maximization (L1 norm minimization) constraint on primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve for the filter coefficients in the limited supporting region of filters. Compared with FIST-based multichannel predictive deconvolution without the limited supporting region of filters, the proposed method can reduce the computation burden effectively while achieving similar accuracy. Additionally, the proposed method can better balance multiple removal and primary preservation than the traditional LS-based multichannel predictive deconvolution and FIST-based single channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
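The fast iterative shrinkage thresholding iteration named in the abstract can be sketched for a generic L1-regularized least-squares problem. This is illustrative only: a dense toy operator stands in for the paper's multichannel predictive deconvolution operator, and the problem sizes are arbitrary.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm, the shrinkage step in ISTA/FISTA."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=500):
    """Minimal FISTA sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of the smooth term at y
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x

# recover a sparse coefficient vector from a toy linear system
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.0, -0.5
b = A @ x_true
x_hat = fista(A, b, lam=0.01)
```

The momentum (Nesterov) step is what distinguishes FISTA from plain ISTA and gives the faster convergence the abstract exploits relative to IRLS.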

  4. A comparative analysis of signal processing methods for motion-based rate responsive pacing.

    PubMed

    Greenhut, S E; Shreve, E A; Lau, C P

    1996-08-01

    Pacemakers that augment heart rate (HR) by sensing body motion have been the most frequently prescribed rate responsive pacemakers. Many comparisons between motion-based rate responsive pacemaker models have been published. However, conclusions regarding specific signal processing methods used for rate response (e.g., filters and algorithms) can be affected by device-specific features. To objectively compare commonly used motion sensing filters and algorithms, acceleration and ECG signals were recorded from 16 normal subjects performing exercise and daily living activities. Acceleration signals were filtered (1-4 or 15-Hz band-pass), then processed using threshold crossing (TC) or integration (IN) algorithms creating four filter/algorithm combinations. Data were converted to an acceleration indicated rate and compared to intrinsic HR using root mean square difference (RMSd) and signed RMSd. Overall, the filters and algorithms performed similarly for most activities. The only differences between filters were for walking at an increasing grade (1-4 Hz superior to 15-Hz) and for rocking in a chair (15-Hz superior to 1-4 Hz). The only differences between algorithms were for bicycling (TC superior to IN), walking at an increasing grade (IN superior to TC), and holding a drill (IN superior to TC). Performance of the four filter/algorithm combinations was also similar over most activities. The 1-4/IN (filter [Hz]/algorithm) combination performed best for walking at a grade, while the 15/TC combination was best for bicycling. However, the 15/TC combination tended to be most sensitive to higher frequency artifact, such as automobile driving, downstairs walking, and hand drilling. Chair rocking artifact was highest for 1-4/IN. The RMSd for bicycling and upstairs walking were large for all combinations, reflecting the nonphysiological nature of the sensor. 
The 1-4/TC combination demonstrated the least intersubject variability, was the only filter/algorithm combination insensitive to changes in footwear, and gave similar RMSd over a large range of amplitude thresholds for most activities. In conclusion, based on overall error performance, the preferred filter/algorithm combination depended upon the type of activity.
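The RMSd metric used above to compare the acceleration-indicated rate with the intrinsic heart rate can be stated concretely. The signed variant's exact definition is an assumption here (the abstract does not define it); it is written so that over- and under-pacing can be distinguished.

```python
import numpy as np

def rmsd(indicated, intrinsic):
    """Root mean square difference between indicated rate and heart rate."""
    d = np.asarray(indicated, float) - np.asarray(intrinsic, float)
    return float(np.sqrt(np.mean(d ** 2)))

def signed_rmsd(indicated, intrinsic):
    """Hypothetical signed variant: RMSd carrying the sign of the mean
    difference, negative when the sensor under-indicates the rate."""
    d = np.asarray(indicated, float) - np.asarray(intrinsic, float)
    return float(np.sign(d.mean()) * np.sqrt(np.mean(d ** 2)))

# example: indicated rate vs. measured heart rate, beats per minute
err = rmsd([72, 75, 80], [70, 74, 78])
```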

  5. WE-G-18A-08: Axial Cone Beam DBPF Reconstruction with Three-Dimensional Weighting and Butterfly Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, S; Wang, W; Tang, X

    2014-06-15

Purpose: With its major benefit in dealing with data truncation for ROI reconstruction, the algorithm of differentiated backprojection followed by Hilbert filtering (DBPF) was originally derived for image reconstruction from parallel- or fan-beam data. To extend its application to axial CB scans, we proposed the integration of the DBPF algorithm with 3-D weighting. In this work, we further propose incorporating Butterfly filtering into the 3-D weighted axial CB-DBPF algorithm and conduct an evaluation to verify its performance. Methods: Given an axial scan, tomographic images are reconstructed by the DBPF algorithm with 3-D weighting, in which streak artifacts exist along the direction of Hilbert filtering. Recognizing this orientation-specific behavior, a pair of orthogonal Butterfly filters is applied on the images reconstructed with horizontal and vertical Hilbert filtering correspondingly. In addition, the Butterfly filtering can also be utilized for streak artifact suppression in scenarios wherein only partial scan data with an angular range as small as 270° are available. Results: Preliminary data show that, with the correspondingly applied Butterfly filtering, the streak artifacts existing in the images reconstructed by the 3-D weighted DBPF algorithm can be suppressed to an unnoticeable level. Moreover, the Butterfly filtering also works in the partial scan scenarios, though the 3-D weighting scheme may have to be dropped because sufficient projection data are not available. Conclusion: As an algorithmic step, the incorporation of Butterfly filtering enables the DBPF algorithm for CB image reconstruction from data acquired along either a full or partial axial scan.

  6. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    PubMed

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of the continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm: the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, the state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two sets of variables are estimated using a Kalman filter and a particle filter, respectively, which improves the computational efficiency compared with using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of the clock offset and skew, achieving time synchronization. The time synchronization performance of this algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.
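The linear substructure (clock offset and skew) that the RB particle filter hands to a Kalman filter can be sketched on its own. This is a plain KF on a simplified two-state clock model, not the paper's full RB particle filter with the DPM delay model; the noise levels and signal values are arbitrary assumptions.

```python
import numpy as np

def clock_kf(observations, dt, q=1e-6, r=1e-4):
    """Kalman filter for a linear clock model: offset theta and skew gamma,
    with theta_{k+1} = theta_k + gamma_k*dt + noise and noisy offset
    observations.  Returns [estimated offset, estimated skew]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # only the offset is observed
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.zeros(2)
    P = np.eye(2)
    for z in observations:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (np.array([z]) - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x

# synthetic clock: offset 0.5 s drifting at skew 0.01 s/s, noisy readings
rng = np.random.default_rng(2)
t = np.arange(200.0)
obs = 0.5 + 0.01 * t + 0.01 * rng.standard_normal(200)
est = clock_kf(obs, dt=1.0)
```

In the RB scheme each particle carries such a KF for the linear states while the particle filter handles the non-linear delay variables, which is where the computational saving over a pure particle filter comes from.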

  7. EFL Instructors' Perceptions of Blackboard Learning Management System (LMS) at University Level

    ERIC Educational Resources Information Center

    Tawalbeh, Thaer Issa

    2018-01-01

    The present paper aims to investigate EFL instructors' perceptions of Blackboard learning management system (LMS) at Taif University in Saudi Arabia. To achieve this purposes, the researcher attempted to answer two questions. The first question investigates EFL instructors' perceptions of Blackboard LMS. The second question aims to identify…

  8. LTSA Conformance Testing to Architectural Design of LMS Using Ontology

    ERIC Educational Resources Information Center

    Sengupta, Souvik; Dasgupta, Ranjan

    2017-01-01

    This paper proposes a new methodology for checking conformance of the software architectural design of Learning Management System (LMS) to Learning Technology System Architecture (LTSA). In our approach, the architectural designing of LMS follows the formal modeling style of Acme. An ontology is built to represent the LTSA rules and the software…

  9. Envisioning the Post-LMS Era: The Open Learning Network

    ERIC Educational Resources Information Center

    Mott, Jonathan

    2010-01-01

    Learning management systems (LMSs) have dominated the teaching and learning landscape in higher education for the past decade, with a recent Delta Initiative report indicating that more than 90 percent of colleges and universities have a standardized, institutional LMS implementation. While the LMS has become central to the business of colleges…

  10. LMS Lessons

    ERIC Educational Resources Information Center

    Freifeld, Lorri

    2010-01-01

    With technology changing every second of every day, it is no surprise a learning management system (LMS) quickly can become outdated. But it is no easy task to re-engineer a current LMS or find exactly the right new one to purchase. In this article, three 2010 Top Young Trainers share their experiences with implementing or re-engineering an…

  11. Charles Brady in Life and Microgravity Spacelab (LMS) Onboard STS-78

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Launched on June 20, 1996, the STS-78 mission's primary payload was the Life and Microgravity Spacelab (LMS), which was managed by the Marshall Space Flight Center (MSFC). During the 17 day space flight, the crew conducted a diverse slate of experiments divided into a mix of life science and microgravity investigations. In a manner very similar to future International Space Station operations, LMS researchers from the United States and their European counterparts shared resources such as crew time and equipment. Five space agencies (NASA/USA, European Space Agency/Europe (ESA), French Space Agency/France, Canadian Space Agency /Canada, and Italian Space Agency/Italy) along with research scientists from 10 countries worked together on the design, development and construction of the LMS. In this onboard photograph, mission specialist Charles Brady is working in the LMS.

  12. Around Marshall

    NASA Image and Video Library

    1996-06-20

    Launched on June 20, 1996, the STS-78 mission’s primary payload was the Life and Microgravity Spacelab (LMS), which was managed by the Marshall Space Flight Center (MSFC). During the 17 day space flight, the crew conducted a diverse slate of experiments divided into a mix of life science and microgravity investigations. In a manner very similar to future International Space Station operations, LMS researchers from the United States and their European counterparts shared resources such as crew time and equipment. Five space agencies (NASA/USA, European Space Agency/Europe (ESA), French Space Agency/France, Canadian Space Agency /Canada, and Italian Space Agency/Italy) along with research scientists from 10 countries worked together on the design, development and construction of the LMS. In this photo, LMS mission scientist Patton Downey and LMS mission manager Mark Boudreaux display the flag that was flown for the mission at MSFC.

  13. Collaborative filtering recommendation model based on fuzzy clustering algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Ye; Zhang, Yunhua

    2018-05-01

    As one of the most widely used algorithms in recommender systems, the collaborative filtering algorithm faces two serious problems: data sparsity and poor recommendation quality in big data environments. In traditional clustering analysis, each object is strictly assigned to one of several classes and the boundary of this division is very clear. However, most objects in real life have no strict definition of the form and attributes of their class. Concerning the problems above, this paper proposes to improve the traditional collaborative filtering model through a hybrid optimization that combines a latent semantic algorithm and a fuzzy clustering algorithm with collaborative filtering. The fuzzy clustering algorithm is applied to item attribute information, so that each item belongs to different item categories with different membership degrees; this increases the density of the data, effectively reduces its sparsity, and mitigates the low accuracy that results from inaccurate similarity calculation. Finally, the paper carries out an empirical analysis on the MovieLens dataset and compares the proposed algorithm with the traditional user-based collaborative filtering algorithm. The proposed algorithm greatly improves recommendation accuracy.
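
    The fuzzy-clustering idea the abstract builds on, soft membership in every cluster rather than a hard label, can be sketched with standard fuzzy c-means. This is a generic illustration under invented 2-D data, not the paper's item-attribute pipeline.

```python
import numpy as np

# Fuzzy c-means sketch: each point gets a membership degree in every
# cluster (rows of U sum to 1). Data and parameters are invented.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0.0, 0.3, (20, 2)),
                    rng.normal(3.0, 0.3, (20, 2))])
c, m = 2, 2.0                                  # clusters, fuzzifier m > 1
U = rng.random((len(X), c))
U /= U.sum(axis=1, keepdims=True)              # random initial memberships

for _ in range(50):
    # cluster centres: membership-weighted means
    W = U ** m
    centres = (W.T @ X) / W.sum(axis=0)[:, None]
    # membership update: u_ik proportional to d_ik^(-2/(m-1))
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    U = inv / inv.sum(axis=1, keepdims=True)
# U now holds graded memberships; the paper feeds such degrees into the
# similarity computation to densify the rating data.
```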

  14. A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

    The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
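
    The factorization at the heart of the Bierman-Thornton approach writes the covariance as P = U D Uᵀ with U unit upper triangular and D diagonal; propagating U and D instead of P is what preserves symmetry and positive definiteness in single precision. A minimal sketch of the factorization step (the example matrix is invented):

```python
import numpy as np

def ud_factor(P):
    """Return (U, D) with P = U @ diag(D) @ U.T, U unit upper triangular."""
    P = P.astype(float).copy()
    n = P.shape[0]
    U = np.eye(n)
    D = np.zeros(n)
    # Process columns from last to first, peeling off one rank-1 term
    # D[j] * u_j u_j^T at a time (Bierman's UD factorization).
    for j in range(n - 1, -1, -1):
        D[j] = P[j, j]
        for i in range(j):
            U[i, j] = P[i, j] / D[j]
            for k in range(i + 1):
                P[k, i] -= U[k, j] * D[j] * U[i, j]
    return U, D

P = np.array([[4.0, 2.0, 1.0],
              [2.0, 3.0, 0.5],
              [1.0, 0.5, 2.0]])
U, D = ud_factor(P)
ok = np.allclose(U @ np.diag(D) @ U.T, P)   # reconstruction check
```

    The full filter also needs the corresponding measurement-update and time-update recursions on (U, D), which are more involved; this block only shows the factorization itself.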

  15. Strong Tracking Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking.

    PubMed

    Liu, Hua; Wu, Wen

    2017-03-31

    The conventional spherical simplex-radial cubature Kalman filter (SSRCKF) for maneuvering target tracking may decline in accuracy and even diverge when a target makes abrupt state changes. To overcome this problem, a novel algorithm named the strong tracking spherical simplex-radial cubature Kalman filter (STSSRCKF) is proposed in this paper. The proposed algorithm uses the spherical simplex-radial (SSR) rule to obtain higher accuracy than the cubature Kalman filter (CKF) algorithm. Meanwhile, by introducing the strong tracking filter (STF) into the SSRCKF and modifying the predicted states' error covariance with a time-varying fading factor, the gain matrix is adjusted on-line, so that the robustness of the filter and its capability of dealing with uncertainty factors are improved. In this way, the proposed algorithm combines the STF's strong robustness with the SSRCKF's high accuracy. Finally, a maneuvering target tracking problem with abrupt state changes is used to test the performance of the proposed filter. Simulation results show that the STSSRCKF algorithm achieves better estimation accuracy and greater robustness for maneuvering target tracking.
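
    The strong-tracking mechanism the abstract describes can be seen in one line of algebra: a fading factor λ ≥ 1 inflates the predicted error covariance, which enlarges the Kalman gain so the filter re-weights fresh measurements after an abrupt maneuver. A minimal sketch with invented matrices (not the paper's cubature filter or its λ-selection rule):

```python
import numpy as np

# Constant-velocity toy model; all values are illustrative assumptions.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[1.0]])
P = np.eye(2)

def gain(P_pred):
    S = H @ P_pred @ H.T + R
    return P_pred @ H.T @ np.linalg.inv(S)

K_std = gain(F @ P @ F.T + Q)              # standard prediction
lam = 5.0                                   # fading factor > 1 (assumed)
K_stf = gain(lam * (F @ P @ F.T) + Q)       # strong-tracking prediction

# Inflating the predicted covariance enlarges the gain -> prints True
print(K_stf[0, 0] > K_std[0, 0])
```

    In the STF proper, λ is computed on-line from the innovation sequence rather than fixed as here.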

  16. Strong Tracking Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking

    PubMed Central

    Liu, Hua; Wu, Wen

    2017-01-01

    The conventional spherical simplex-radial cubature Kalman filter (SSRCKF) for maneuvering target tracking may decline in accuracy and even diverge when a target makes abrupt state changes. To overcome this problem, a novel algorithm named the strong tracking spherical simplex-radial cubature Kalman filter (STSSRCKF) is proposed in this paper. The proposed algorithm uses the spherical simplex-radial (SSR) rule to obtain higher accuracy than the cubature Kalman filter (CKF) algorithm. Meanwhile, by introducing the strong tracking filter (STF) into the SSRCKF and modifying the predicted states' error covariance with a time-varying fading factor, the gain matrix is adjusted on-line, so that the robustness of the filter and its capability of dealing with uncertainty factors are improved. In this way, the proposed algorithm combines the STF's strong robustness with the SSRCKF's high accuracy. Finally, a maneuvering target tracking problem with abrupt state changes is used to test the performance of the proposed filter. Simulation results show that the STSSRCKF algorithm achieves better estimation accuracy and greater robustness for maneuvering target tracking. PMID:28362347

  17. Superaligned carbon nanotube arrays, films, and yarns: a road to applications.

    PubMed

    Jiang, Kaili; Wang, Jiaping; Li, Qunqing; Liu, Liang; Li, Changhong; Fan, Shoushan

    2011-03-04

    A superaligned carbon nanotube (CNT) array is a special kind of vertically aligned CNT array with the capability of being converted into continuous films and yarns. The as-produced CNT films are transparent and highly conductive, with aligned CNTs parallel to the direction of drawing. After passing through volatile solutions or being twisted, CNT films can be further condensed into shrunk yarns. These shrunk yarns possess high tensile strengths and Young's moduli, and are good conductors. Many applications of CNT films and shrunk yarns have been demonstrated, such as TEM grids, loudspeakers, touch screens, etc.

  18. a Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    NASA Astrophysics Data System (ADS)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step in most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjustment, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is built on the assumption that a point cloud can be regarded as a mixture of Gaussian models, so that separating ground points from non-ground points can be recast as separating the components of a Gaussian mixture. Expectation-maximization (EM) is applied to calculate maximum likelihood estimates of the mixture parameters. Using the estimated parameters, the likelihood of each point belonging to ground or object can be computed, and after several iterations each point is labelled with the component of larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired with the EM method. The proposed algorithm was tested on two real-world datasets. Experimental results showed that the proposed method can filter non-ground points effectively. To evaluate the proposed method quantitatively, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a 4.48 % total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
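
    The core step the abstract describes, fit a two-component Gaussian mixture with EM and label each point by its larger posterior, can be sketched in a few lines. The synthetic 1-D heights below stand in for a real LiDAR tile; the paper additionally uses intensity to refine the labels.

```python
import numpy as np

# Synthetic heights: a tight ground component and a spread object
# component (values are invented stand-ins for a LiDAR tile).
rng = np.random.default_rng(2)
z = np.concatenate([rng.normal(0.0, 0.2, 300),    # ground returns
                    rng.normal(5.0, 1.0, 100)])   # vegetation/buildings

# Initial guesses for mixture weights, means, variances.
w = np.array([0.5, 0.5])
mu = np.array([z.min(), z.max()])
var = np.array([1.0, 1.0])

for _ in range(100):
    # E-step: posterior responsibility of each component for each point
    pdf = np.exp(-(z[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: maximum-likelihood update of the mixture parameters
    n_k = r.sum(axis=0)
    w = n_k / len(z)
    mu = (r * z[:, None]).sum(axis=0) / n_k
    var = (r * (z[:, None] - mu) ** 2).sum(axis=0) / n_k

ground = r[:, 0] > r[:, 1]    # label each point by larger responsibility
```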

  19. Adaptable Iterative and Recursive Kalman Filter Schemes

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato

    2014-01-01

    Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The iterated Kalman filter (IKF) and the recursive update filter (RUF) are two algorithms that reduce the consequences of the EKF's linearization assumption by performing N updates for each new measurement, where N, the number of recursions, is a tuning parameter. This paper introduces an adaptable RUF algorithm that calculates N on the fly; a similar technique can be used for the IKF as well.
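
    The N-updates-per-measurement idea has a clean linear baseline: N partial updates, each using inflated measurement noise N·R, accumulate exactly the same information (1/R) as one full update, so in the linear case the two agree. With a nonlinear measurement, re-linearizing between the N partial updates is what reduces the EKF's linearization error. A scalar sketch with invented numbers (not the paper's adaptive-N rule):

```python
import numpy as np

P0, R, N = 4.0, 1.0, 10       # prior variance, noise variance, recursions
x0, y = 0.0, 2.0              # prior mean and measurement (invented)

# Single full Kalman update with noise R.
K = P0 / (P0 + R)
x_full = x0 + K * (y - x0)
P_full = (1 - K) * P0

# N recursive partial updates, each with inflated noise N*R. Each step
# adds information 1/(N*R); after N steps the total added is 1/R.
x_r, P_r = x0, P0
for _ in range(N):
    K = P_r / (P_r + N * R)
    x_r = x_r + K * (y - x_r)
    P_r = (1 - K) * P_r

print(np.isclose(x_full, x_r), np.isclose(P_full, P_r))   # True True
```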

  20. An improved conscan algorithm based on a Kalman filter

    NASA Technical Reports Server (NTRS)

    Eldred, D. B.

    1994-01-01

    Conscan is commonly used by DSN antennas to allow adaptive tracking of a target whose position is not precisely known. This article describes an algorithm that is based on a Kalman filter and is proposed to replace the existing fast Fourier transform based (FFT-based) algorithm for conscan. Advantages of this algorithm include better pointing accuracy, continuous update information, and accommodation of missing data. Additionally, a strategy for adaptive selection of the conscan radius is proposed. The performance of the algorithm is illustrated through computer simulations and compared to the FFT algorithm. The results show that the Kalman filter algorithm is consistently superior.

  1. An Improved Interacting Multiple Model Filtering Algorithm Based on the Cubature Kalman Filter for Maneuvering Target Tracking.

    PubMed

    Zhu, Wei; Wang, Wei; Yuan, Gannan

    2016-06-01

    In order to improve the tracking accuracy, model estimation accuracy, and response speed of multiple model maneuvering target tracking, the interacting multiple models five-degree cubature Kalman filter (IMM5CKF) is proposed in this paper. In the proposed algorithm, the interacting multiple models (IMM) algorithm mixes all the models through a Markov chain to enhance the model tracking accuracy of target tracking. A five-degree cubature Kalman filter (5CKF) then evaluates the surface integral with a higher-degree but deterministic odd-ordered spherical cubature rule, improving the tracking accuracy and the model switch sensitivity of the IMM algorithm. Finally, the simulation results demonstrate that the proposed algorithm exhibits quick and smooth switching when handling different maneuver models, and that it also performs better than the interacting multiple models cubature Kalman filter (IMMCKF), the interacting multiple models unscented Kalman filter (IMMUKF), the 5CKF, and the optimal mode transition matrix IMM (OMTM-IMM).

  2. Axial 3D region of interest reconstruction using weighted cone beam BPF/DBPF algorithm cascaded with adequately oriented orthogonal butterfly filtering

    NASA Astrophysics Data System (ADS)

    Tang, Shaojie; Tang, Xiangyang

    2016-03-01

    Axial cone beam (CB) computed tomography (CT) reconstruction is still the most desirable in clinical applications. As the potential candidates with analytic form for the task, the back projection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, are originally derived for exact helical and axial reconstruction from CB and fan beam projection data, respectively. These two algorithms have been heuristically extended for axial CB reconstruction via adoption of virtual PI-line segments. Unfortunately, however, streak artifacts are induced along the Hilbert filtering direction, since these algorithms are no longer accurate on the virtual PI-line segments. We have proposed to cascade the extended BPF/DBPF algorithm with orthogonal butterfly filtering for image reconstruction (namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering), in which the orientation-specific artifacts caused by the post-BP Hilbert transform can be eliminated, at a possible expense of losing the BPF/DBPF's capability of dealing with projection data truncation. Our preliminary results have shown that this is not the case in practice. Hence, in this work, we carry out an algorithmic analysis and experimental study to investigate the performance of the axial CB-BPF/DBPF cascaded with adequately oriented orthogonal butterfly filtering for three-dimensional (3D) reconstruction in a region of interest (ROI).

  3. An Automated Energy Detection Algorithm Based on Consecutive Mean Excision

    DTIC Science & Technology

    2018-01-01

    [Indexing snippet] Subject terms: RF spectrum, detection threshold algorithm, consecutive mean excision, rank order filter, statistical processing. Contents include: Median; Rank Order Filter (ROF); Crest Factor (CF); Statistical Summary; Algorithm; Conclusion; References. Cited: an energy detection algorithm based on morphological filter processing with a semi-disk structure. Adelphi (MD): Army Research Laboratory (US); 2018 Jan.

  4. On-line estimation and detection of abnormal substrate concentrations in WWTPs using a software sensor: a benchmark study.

    PubMed

    Benazzi, F; Gernaey, K V; Jeppsson, U; Katebi, R

    2007-08-01

    In this paper, a new approach for on-line monitoring and detection of abnormal readily biodegradable substrate (S(s)) and slowly biodegradable substrate (X(s)) concentrations, for example due to input of toxic loads from the sewer, or due to influent substrate shock load, is proposed. Considering that measurements of S(s) and X(s) concentrations are not available in real wastewater treatment plants, the S(s) / X(s) software sensor can activate an alarm with a response time of about 60 and 90 minutes, respectively, based on the dissolved oxygen measurement. The software sensor implementation is based on an extended Kalman filter observer and disturbances are modelled using fast Fourier transform and spectrum analyses. Three case studies are described. The first one illustrates the fast and accurate convergence of the extended Kalman filter algorithm, which is achieved in less than 2 hours. Furthermore, the difficulties of estimating X(s) when off-line analysis is not available are depicted, and the S(s) / X(s) software sensor performances when no measurements of S(s) and X(s) are available are illustrated. Estimation problems related to the death-regeneration concept of the activated sludge model no.1 and possible application of the software sensor in wastewater monitoring are discussed.

  5. Knowledge Enriched Learning by Converging Knowledge Object & Learning Object

    ERIC Educational Resources Information Center

    Sabitha, Sai; Mehrotra, Deepti; Bansal, Abhay

    2015-01-01

    The most important dimension of learning is the content, and a Learning Management System (LMS) suffices this to a certain extent. The present day LMS are designed to primarily address issues like ease of use, search, content and performance. Many surveys had been conducted to identify the essential features required for the improvement of LMS,…

  6. 47 CFR 90.357 - Frequencies for LMS systems in the 902-928 MHz band.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Frequencies for LMS systems in the 902-928 MHz band. 90.357 Section 90.357 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES PRIVATE LAND MOBILE RADIO SERVICES Intelligent Transportation Systems Radio Service § 90.357 Frequencies for LMS systems in...

  7. Migrating Learning Management Systems: A Case of a Large Public University

    ERIC Educational Resources Information Center

    Such, Brenda L. R.; Ritzhaupt, Albert D.; Thompson, George S.

    2017-01-01

    In the past 20 years, institutions of higher education have made major investments in Learning Management Systems (LMSs). As institutions have integrated the LMS into campus culture, the potential of migrating to not only an upgraded version of the LMS, but also an entirely different LMS, has become a reality. This qualitative research study…

  8. Student Satisfaction with Learning Management Systems: A Lens of Critical Success Factors

    ERIC Educational Resources Information Center

    Naveh, Gali; Tubin, Dorit; Pliskin, Nava

    2012-01-01

    Institutions of higher education have invested heavily in learning management systems (LMS) for creating course websites. Yet, how to assess LMS effectiveness is not fully agreed upon. Based on institutional theory, this article considers student satisfaction as indicative of LMS success and proposes a lens of critical success factors (CSF) as a…

  9. The Role of Involvement in Learning Management System Success

    ERIC Educational Resources Information Center

    Klobas, Jane E.; McGill, Tanya J.

    2010-01-01

    Learning management systems (LMS) have been adopted by the majority of higher education institutions and research that explores the factors that influence the success of LMS is needed. This paper investigates the roles of student and instructor involvement in LMS success, using the DeLone and McLean (2003) model of information systems success as a…

  10. 47 CFR 90.359 - Field strength limits for EA-licensed LMS systems.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 5 2011-10-01 2011-10-01 false Field strength limits for EA-licensed LMS... § 90.359 Field strength limits for EA-licensed LMS systems. EA-licensed multilateration systems shall limit the field strength of signals transmitted from their base stations to 47 dBuV/m at their EA...

  11. Attitude determination and calibration using a recursive maximum likelihood-based adaptive Kalman filter

    NASA Technical Reports Server (NTRS)

    Kelly, D. A.; Fermelia, A.; Lee, G. K. F.

    1990-01-01

    An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.

  12. Optimization of internet content filtering-Combined with KNN and OCAT algorithms

    NASA Astrophysics Data System (ADS)

    Guo, Tianze; Wu, Lingjing; Liu, Jiaming

    2018-04-01

    Faced with rampant illegal content on the Internet, the traditional ways to filter information, keyword recognition and manual screening, produce increasingly poor results. Based on this, this paper nests the KNN classification algorithm within the OCAT algorithm to construct a training corpus that can dynamically learn and update, keeping the filter corpus current with the constantly changing illegal content of the network, including text and pictures, and thus better filtering and investigating illegal content and its source. Future work will focus on simplifying the updating of the recognition and comparison algorithms and optimizing the corpus's learning ability, in order to improve filtering efficiency and save time and resources.
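
    The KNN classification step nested inside the pipeline can be sketched as a majority vote among the k most similar training texts. The corpus, labels, and bag-of-words overlap similarity below are invented placeholders for the paper's training library, not its actual feature model.

```python
from collections import Counter

# Toy labelled corpus (invented placeholder for the training library).
train = [
    ("buy cheap pills now", "illegal"),
    ("limited offer cheap meds", "illegal"),
    ("weather forecast for tomorrow", "legal"),
    ("local news and weather report", "legal"),
]

def similarity(a, b):
    # crude similarity: count of shared words
    return len(set(a.split()) & set(b.split()))

def knn_classify(text, k=3):
    # rank training texts by similarity, majority-vote the top k labels
    ranked = sorted(train, key=lambda t: -similarity(text, t[0]))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

print(knn_classify("cheap pills offer"))   # -> illegal
```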

  13. Investigation of micrometre-sized fossil by laser mass spectrometer (LMS) designed for in situ space research

    NASA Astrophysics Data System (ADS)

    Tulej, Marek; Neubeck, Anna; Ivarsson, Magnus; Brigitte Neuland, Maike; Riedo, Andreas; Wurz, Peter

    2015-04-01

    The search for signatures of life on other planets is one of the most important goals of current planetary missions. Among the various possible biomarkers that can be investigated in situ on planetary surfaces, the detection of bio-relevant elements in planetary materials is of considerable interest, and isotope abundances can be important signatures of past and present bioactivity [1, 2]. We investigate the chemical composition of fossilised biological inclusions embedded in a carbonate host phase with a miniature laser ablation mass spectrometer (LMS) [3]. The LMS instrument combines a laser ablation ion source for ablation, atomisation and ionisation of surface material with a reflectron time-of-flight (TOF) mass spectrometer. LMS delivers mass spectra of almost all elements and their isotopes. In the current setup, a fs-laser ablation ion source is applied with high lateral (15 um) and vertical (sub-um) resolution [4, 7], and the mass analyser supports a mass resolution of 400-500 (at the 56Fe mass peak) and a dynamic range of eight orders of magnitude [5, 6]. From the 200 mass spectra recorded at 200 different locations on the carbonate sample surface, five mass spectra were identified that recorded the chemical composition of inclusions; from the other mass spectra the composition of the carbonate host matrix could be determined. Microscopic inspection of the sample surface and correlation with the coordinates of the laser ablation measurements confirmed the location of the inclusions [8]. For the carbonate host matrix, the mass spectrometric analysis yielded the major elements H, C, O, Na, Mg, K and Ca and the trace elements Li, B and Cl. The measurements at the inclusion locations additionally yielded the detection of F, Si, P, S, Mn, Fe, Ni, Co and Se. For most of the major elements the isotope ratios were found to conform to the terrestrial values within a few per mil, while for minor and trace elements the determination of isotope ratios was less accurate due to low signal-to-noise ratios (SNR). The abundances of the lightest isotopes of B and S were observed to be larger than terrestrial, which is consistent with isotope fractionation by bio-relevant processes and a salty ocean. The study demonstrates the current performance of the miniature LMS for in situ investigation of highly heterogeneous samples and its capability to identify fossilised biological matter. References: [1] Summons et al., Astrobiology, 11, 157, 2011. [2] Wurz et al., Sol. Sys. Res. 46, 408, 2012. [3] Rohner et al., Meas. Sci. Technol., 14, 2159, 2003. [4] Riedo et al., J. Anal. Atom. Spectrom. 28, 1256, 2013. [5] Riedo et al., J. Mass Spectrom. 48, 1, 2013. [6] Neuland et al., Planet. Space. Sci. 101, 196, 2014. [7] Grimaudo et al., Anal. Chem. 2014, submitted. [8] Tulej et al., Geostand. Geoanal. Res., 2014; DOI: 10.1111/j.1751-908X.2014.00302.x

  14. Differentiation of Uterine Leiomyosarcoma from Atypical Leiomyoma: Diagnostic Accuracy of Qualitative MR Imaging Features and Feasibility of Texture Analysis.

    PubMed

    Lakhman, Yulia; Veeraraghavan, Harini; Chaim, Joshua; Feier, Diana; Goldman, Debra A; Moskowitz, Chaya S; Nougaret, Stephanie; Sosa, Ramon E; Vargas, Hebert Alberto; Soslow, Robert A; Abu-Rustum, Nadeem R; Hricak, Hedvig; Sala, Evis

    2017-07-01

    To investigate whether qualitative magnetic resonance (MR) features can distinguish leiomyosarcoma (LMS) from atypical leiomyoma (ALM) and assess the feasibility of texture analysis (TA). This retrospective study included 41 women (ALM = 22, LMS = 19) imaged with MRI prior to surgery. Two readers (R1, R2) evaluated each lesion for qualitative MR features. Associations between MR features and LMS were evaluated with Fisher's exact test. Accuracy measures were calculated for the four most significant features. TA was performed for 24 patients (ALM = 14, LMS = 10) with uniform imaging following lesion segmentation on axial T2-weighted images. Texture features were pre-selected using Wilcoxon signed-rank test with Bonferroni correction and analyzed with unsupervised clustering to separate LMS from ALM. Four qualitative MR features most strongly associated with LMS were nodular borders, haemorrhage, "T2 dark" area(s), and central unenhanced area(s) (p ≤ 0.0001 each feature/reader). The highest sensitivity [1.00 (95%CI:0.82-1.00)/0.95 (95%CI: 0.74-1.00)] and specificity [0.95 (95%CI:0.77-1.00)/1.00 (95%CI:0.85-1.00)] were achieved for R1/R2, respectively, when a lesion had ≥3 of these four features. Sixteen texture features differed significantly between LMS and ALM (p-values: <0.001-0.036). Unsupervised clustering achieved accuracy of 0.75 (sensitivity: 0.70; specificity: 0.79). Combination of ≥3 qualitative MR features accurately distinguished LMS from ALM. TA was feasible. • Four qualitative MR features demonstrated the strongest statistical association with LMS. • Combination of ≥3 these features could accurately differentiate LMS from ALM. • Texture analysis was a feasible semi-automated approach for lesion categorization.

  15. Handover aspects for a Low Earth Orbit (LEO) CDMA Land Mobile Satellite (LMS) system

    NASA Technical Reports Server (NTRS)

    Carter, P.; Beach, M. A.

    1993-01-01

    This paper addresses the problem of handoff in a land mobile satellite (LMS) system between adjacent satellites in a low earth orbit (LEO) constellation. In particular, emphasis is placed on the application of soft handoff in a direct sequence code division multiple access (DS-CDMA) LMS system. Soft handoff is explained in terms of terrestrial macroscopic diversity, in which signals transmitted via several independent fading paths are combined to enhance the link quality. This concept is then reconsidered in the context of a LEO LMS system. A two-state Markov channel model is used to simulate the effects of shadowing on the communications path from the mobile to each satellite during handoff. The results of the channel simulation form a platform for discussion regarding soft handoff, highlighting the potential merits of the scheme when applied in a LEO LMS environment.
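
    The two-state Markov channel the abstract uses can be simulated directly: the link toggles between a clear (good) and a shadowed (bad) state with fixed transition probabilities, and with two satellites fading independently, soft handoff succeeds whenever at least one link is clear. The transition probabilities below are illustrative assumptions, not the paper's fitted values.

```python
import random

# Two-state (Gilbert-style) Markov shadowing model; probabilities are
# invented for illustration.
P_GOOD_TO_BAD = 0.1
P_BAD_TO_GOOD = 0.3

def channel(steps, rng):
    state, trace = "good", []
    for _ in range(steps):
        trace.append(state)
        p = P_GOOD_TO_BAD if state == "good" else P_BAD_TO_GOOD
        if rng.random() < p:
            state = "bad" if state == "good" else "good"
    return trace

rng = random.Random(42)
sat_a = channel(10_000, rng)
sat_b = channel(10_000, rng)      # independent path to a second satellite

# Soft handoff fails only when both links are shadowed simultaneously.
combined_outage = sum(a == "bad" and b == "bad"
                      for a, b in zip(sat_a, sat_b)) / 10_000
single_outage = sat_a.count("bad") / 10_000
print(combined_outage < single_outage)    # diversity lowers outage: True
```

    The steady-state shadowed fraction of a single link here is 0.1/(0.1+0.3) = 0.25, while the dual-link outage is roughly its square, which is the macroscopic-diversity gain the paper exploits.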

  16. Medaka Fish Embryo Developed for STS-78 Life and Microgravity Spacelab (LMS)

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Launched on June 20, 1996, the STS-78 mission's primary payload was the Life and Microgravity Spacelab (LMS), which was managed by the Marshall Space Flight Center (MSFC). During the 17 day space flight, the crew conducted a diverse slate of experiments divided into a mix of life science and microgravity investigations. In a manner very similar to future International Space Station operations, LMS researchers from the United States and their European counterparts shared resources such as crew time and equipment. Five space agencies (NASA/USA, European Space Agency/Europe (ESA), French Space Agency/France, Canadian Space Agency /Canada, and Italian Space Agency/Italy) along with research scientists from 10 countries worked together on the design, development and construction of the LMS. This photo represents the development of Medaka Fish Embryos, one of the many studies of the LMS mission.

  17. Wiener Chaos and Nonlinear Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lototsky, S.V.

    2006-11-15

    The paper discusses two algorithms for solving the Zakai equation in the time-homogeneous diffusion filtering model with possible correlation between the state process and the observation noise. Both algorithms rely on the Cameron-Martin version of the Wiener chaos expansion, so that the approximate filter is a finite linear combination of the chaos elements generated by the observation process. The coefficients in the expansion depend only on the deterministic dynamics of the state and observation processes. For real-time applications, computing the coefficients in advance improves the performance of the algorithms in comparison with most other existing methods of nonlinear filtering. The paper summarizes the main existing results about these Wiener chaos algorithms and resolves some open questions concerning the convergence of the algorithms in the noise-correlated setting. The presentation includes the necessary background on the Wiener chaos and optimal nonlinear filtering.

  18. Adaptive Filtering in the Wavelet Transform Domain Via Genetic Algorithms

    DTIC Science & Technology

    2004-08-01

    [Snippet] …inverse transform process. 2. BACKGROUND: The image processing research conducted at the AFRL/IFTA Reconfigurable Computing Laboratory has been… The inverse transform maps coefficients from the wavelet domain back into the original signal domain; in other words, the inverse transform produces the original signal x(t) from the… coefficients for an inverse wavelet transform, such that the MSE of images reconstructed by this inverse transform is significantly less than the mean squared…

  19. Terrain Aided Navigation for Remus Autonomous Underwater Vehicle

    DTIC Science & Technology

    2014-06-01

    Excerpt (list of figures, fragmented in the source record): several successive sonar pings displayed together in the LTP frame (Figure 11); the linear interpolation of the sonar pings from Figure 11 (Figure 12); the SIR particle filter algorithm, after [19] (Figure 13); correlation probability distributions for four different sonar images (Figure 26); and further particle filter figures.
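The SIR (sequential importance resampling) particle filter named in this record's figure list is a standard algorithm; a minimal scalar sketch follows. The random-walk model, noise levels, particle count, and seeds are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

def sir_particle_filter(observations, n_particles=500, process_std=1.0,
                        obs_std=1.0, seed=0):
    """Generic SIR filter for a 1D random-walk state observed in noise."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)   # initial prior samples
    estimates = []
    for z in observations:
        # Predict: propagate each particle through the motion model.
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Update: weight particles by the Gaussian observation likelihood.
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Resample: draw a new, equally weighted particle set.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

# Track a slowly drifting state from noisy measurements.
rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0, 0.5, 50))
obs = truth + rng.normal(0, 1.0, 50)
est = sir_particle_filter(obs)
```

Resampling after every update (rather than on an effective-sample-size trigger) keeps the sketch short at the cost of some extra variance.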

  20. Source geometric considerations for OMEGA Dante measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M. J.; Patterson, J. R.; Widmann, K.

    2012-10-15

    The Dante is a 15 channel filtered diode array installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The system yields the spectrally and temporally resolved radiation flux from 50 eV to 10 keV from various targets (e.g., hohlraums, gas pipes). The absolute flux is determined from the radiometric calibration of the x-ray diodes, filters, and mirrors and an unfold algorithm applied to the recorded voltages from each channel. The unfold algorithm assumes an emitting source that is spatially uniform and has a constant area as a function of photon energy. The emitting x-ray source is usually considered to be the laser entrance hole (LEH) of a given diameter for hohlraum-type targets or the effective wall area for high conversion efficiency K-shell-type targets. This assumption can be problematic for several reasons. High intensity regions or 'hot spots' in the x-ray emission are observed where the drive laser beams strike the target. The 'hot spots' create non-uniform emission as seen by the Dante. Additionally, thin-walled (50 μm) low-Z targets (C22H10N2O5) have an energy-dependent source size, since the target's walls will be fully opaque at low energies (E < 2-3 keV) yet fully transmissive at higher energies. Determining accurate yields can be challenging for these types of targets. Discussion and some analysis will be presented.
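The unfold step described above, recovering a spectrum from the filtered channel voltages, can be illustrated as a non-negative least-squares inversion of a channel response matrix. The toy response matrix, bin counts, and the use of SciPy's `nnls` are assumptions for this sketch; the actual Dante unfold folds in the measured calibrations and its own algorithm.

```python
import numpy as np
from scipy.optimize import nnls

# Toy response matrix: 15 filtered channels x 12 photon-energy bins.
# Each channel responds over a band of bins, loosely mimicking
# filtered-diode passbands (the real responses come from the radiometric
# calibration of the diodes, filters, and mirrors).
n_ch, n_bins = 15, 12
bins = np.arange(n_bins, dtype=float)
centers = np.linspace(0.5, n_bins - 1.5, n_ch)
R = np.exp(-0.5 * ((bins[None, :] - centers[:, None]) / 1.0) ** 2)

# Synthetic "true" spectrum and the channel voltages it would produce.
spectrum_true = np.exp(-0.5 * ((bins - 5.0) / 2.0) ** 2)
voltages = R @ spectrum_true

# Unfold: non-negative least squares recovers a spectrum consistent
# with the recorded voltages.
spectrum_est, resid = nnls(R, voltages)
```

The non-negativity constraint encodes the physical fact that a spectral flux cannot go below zero, which plain least squares would not guarantee.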

  1. Source geometric considerations for OMEGA Dante measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M. J.; Patterson, J. R.; Sorce, C.

    2012-10-01

    The Dante is a 15 channel filtered diode array which is installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The system yields the spectrally and temporally resolved radiation flux from 50 eV to 10 keV from various targets (e.g., hohlraums, gas pipes). The absolute flux is determined from the radiometric calibration of the x-ray diodes, filters, and mirrors and an unfold algorithm applied to the recorded voltages from each channel. The unfold algorithm assumes an emitting source that is spatially uniform and has a constant area as a function of photon energy. The emitting x-ray source is usually considered to be the laser entrance hole (LEH) of a given diameter for hohlraum-type targets or the effective wall area of high conversion efficiency K-shell-type targets. This assumption can be problematic for several reasons. High intensity regions or “hot spots” in the x-ray emission are observed where the drive laser beams strike the target. The “hot spots” create non-uniform emission seen by the Dante. Additionally, thin-walled (50 μm) low-Z targets (C22H10N2O5) have an energy-dependent source size since the target's walls will be fully opaque for low energies (E < 2–3 keV) yet fully transmissive at higher energies. Determining accurate yields can be challenging for these types of targets. Discussion and some analysis will be presented.

  2. Transitioning to the Learning Management System Moodle from Blackboard: Impacts to Faculty

    ERIC Educational Resources Information Center

    Varnell, Page

    2016-01-01

    What are the workload impacts to faculty during a Learning Management System (LMS) transition? What type of support is needed by faculty during an LMS transition? Transitioning to a new LMS may result in faculty problems with learning a new technology platform in addition to teaching. The purpose of this phenomenological study was to explore the…

  3. Tryout and Evaluation of Prototype LMS (Learning Mastery System) Training System Under Exclusive Use Agreement.

    ERIC Educational Resources Information Center

    McDonald, Cheryl; Hylton, John A.

    In 1970-1971 Learning Mastery System (LMS) materials were made available to schools within the state of California under an Exclusive Use Agreement. The LMS is a set of materials and procedures prepared by the Southwest Regional Laboratory (SWRL) as an objectives-based framework to assist in managing the learning activities of existing reading…

  4. Who Needs to Do What Where?: Using Learning Management Systems on Residential vs. Commuter Campuses

    ERIC Educational Resources Information Center

    Lonn, Steven; Teasley, Stephanie D.; Krumm, Andrew E.

    2011-01-01

    Learning Management Systems (LMS) are web-based systems allowing instructors and/or students to share materials and interact online. This study compared differences in LMS use between instructors and students at a large residential campus with students at a smaller commuter campus. Responses to an online survey about LMS activities and tools were…

  5. Student Perceptions of Social Presence and Attitudes toward Social Media: Results of a Cross-Sectional Study

    ERIC Educational Resources Information Center

    Leafman, Joan S.; Mathieson, Kathleen M.; Ewing, Helen

    2013-01-01

    Establishing and maintaining social presence in an online environment that depends on a learning management system (LMS) can be challenging. While students believe social presence to be important, LMS platforms have yet to discover a way to deliver this expectation. The growth of social media tools presents opportunities outside an LMS to foster…

  6. Understanding Faculty Use of the Learning Management System

    ERIC Educational Resources Information Center

    Rhode, Jason; Richter, Stephanie; Gowen, Peter; Miller, Tracy; Wills, Cameron

    2017-01-01

    The learning management system (LMS) has become a critical tool for nearly all institutions of higher education, and a driving force in online learning. According to a 2014 report by the Educause Center for Analysis and Research, 99% of higher education institutions have an LMS in place, and the LMS is used by 85% of faculty and 83% of students.…

  7. A global method for identifying dependences between helio-geophysical and biological series by filtering the precedents (outliers)

    NASA Astrophysics Data System (ADS)

    Ozheredov, V. A.; Breus, T. K.; Gurfinkel, Yu. I.; Matveeva, T. A.

    2014-12-01

    A new approach to finding the dependence between heliophysical and meteorological factors and physiological parameters is considered that is based on the preliminary filtering of precedents (outliers). The sought-after dependence is masked by extraneous influences which cannot be taken into account. Therefore, the typically calculated correlation between the external-influence (x) and physiology (y) parameters is extremely low and does not allow their interdependence to be conclusively proved. A robust method for removing precedents (outliers) from the database is proposed that is based on the intelligent sorting of the polynomial curves of possible dependences y(x), followed by filtering out the precedents which are far away from y(x) and optimizing the coefficient of nonlinear correlation between the regular, i.e., remaining, precedents. This optimization problem is shown to be a search for a maximum in the absence of the concept of a gradient, and it requires the use of a genetic algorithm based on the Gray code. The relationships between the various medical and biological parameters and the characteristics of space and terrestrial weather are obtained and verified using cross-validation. It is shown that, by filtering out no more than 20% of precedents, it is possible to obtain a nonlinear correlation coefficient of no less than 0.5. A comparison of the proposed precedent (outlier) filtering method against the least-squares method (LSM) for determining the optimal polynomial, using multiple independent tests (Monte Carlo method) of models as close as possible to real dependences, has shown that the LSM determination performs considerably worse than the proposed method.
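The core filtering idea (fit candidate polynomial dependences y(x), drop the precedents farthest from the curve, and re-score the correlation on what remains) can be sketched as below. The fixed polynomial degree, the 20% trim fraction, and the Pearson score on retained points are assumptions; the paper optimizes a nonlinear correlation coefficient with a Gray-code genetic algorithm.

```python
import numpy as np

def filter_precedents(x, y, degree=3, trim_frac=0.2):
    """Fit a polynomial y(x), discard the trim_frac of points with the
    largest residuals (the outlying "precedents"), and return the retained
    points with the correlation between y and the refitted curve."""
    coeffs = np.polyfit(x, y, degree)
    residuals = np.abs(y - np.polyval(coeffs, x))
    keep = residuals <= np.quantile(residuals, 1.0 - trim_frac)
    x_kept, y_kept = x[keep], y[keep]
    # Refit on the regular (retained) precedents and score the dependence.
    coeffs = np.polyfit(x_kept, y_kept, degree)
    corr = np.corrcoef(np.polyval(coeffs, x_kept), y_kept)[0, 1]
    return x_kept, y_kept, corr

# A cubic dependence masked by a few gross outliers.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 100)
y = x**3 - 0.5 * x + rng.normal(0, 0.05, 100)
y[::10] += rng.normal(0, 2.0, 10)          # inject outliers
x_k, y_k, corr = filter_precedents(x, y)
```

Before trimming, the injected outliers drag the plain correlation down; after removing 20% of the points the dependence scores strongly.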

  8. Recursive Implementations of the Consider Filter

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato; DSouza, Chris

    2012-01-01

    One method to account for parameter errors in the Kalman filter is to consider their effect in the so-called Schmidt-Kalman filter. This work addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favored implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU consider filter is proposed. The non-optimality of the recursive consider filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.
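A minimal dense-covariance measurement update for a Schmidt-Kalman (consider) filter is sketched below; the paper's UDU-factorized implementation is more involved. The dimensions and noise values are assumptions. The defining feature is that the consider-parameter estimate and covariance are left untouched while their effect enters the gain and the cross-covariance.

```python
import numpy as np

def schmidt_kalman_update(x, Pxx, Pxp, Ppp, z, Hx, Hp, R):
    """One Schmidt-Kalman (consider) measurement update.
    The consider parameters are not estimated: their covariance Ppp is
    returned unchanged; only x, Pxx, and the cross-covariance Pxp move."""
    # Innovation covariance includes the consider-parameter uncertainty.
    W = (Hx @ Pxx @ Hx.T + Hx @ Pxp @ Hp.T
         + Hp @ Pxp.T @ Hx.T + Hp @ Ppp @ Hp.T + R)
    Kx = (Pxx @ Hx.T + Pxp @ Hp.T) @ np.linalg.inv(W)
    x_new = x + Kx @ (z - Hx @ x)          # consider params have zero mean
    Pxx_new = Pxx - Kx @ (Hx @ Pxx + Hp @ Pxp.T)
    Pxp_new = Pxp - Kx @ (Hx @ Pxp + Hp @ Ppp)
    return x_new, Pxx_new, Pxp_new, Ppp

# Scalar state with one consider bias parameter: z = x + p + v.
x = np.array([0.0])
Pxx = np.array([[4.0]]); Pxp = np.zeros((1, 1)); Ppp = np.array([[1.0]])
Hx = np.array([[1.0]]); Hp = np.array([[1.0]]); R = np.array([[0.25]])
x_new, Pxx_new, Pxp_new, Ppp_new = schmidt_kalman_update(
    x, Pxx, Pxp, Ppp, np.array([1.0]), Hx, Hp, R)
```

Setting the parameter gain to zero (rather than using the full augmented-state gain) is exactly what makes this the Schmidt, or consider, variant.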

  9. Automated Handling of Garments for Pressing

    DTIC Science & Technology

    1991-09-30

    Excerpt (table of contents, fragmented in the source record): parallel algorithms for 2D Kalman filtering (D. J. Potter and M. P. Cline); hash table and sorted array: a case study of Kalman filtering on the Connection Machine (M. A. Palis and D. K. Krecker); parallel sorting of large arrays on the MasPar; and algorithms for seam sensing, including Karel algorithms and image filtering.

  10. World wide matching of registration metrology tools of various generations

    NASA Astrophysics Data System (ADS)

    Laske, F.; Pudnos, A.; Mackey, L.; Tran, P.; Higuchi, M.; Enkrich, C.; Roeth, K.-D.; Schmidt, K.-H.; Adam, D.; Bender, J.

    2008-10-01

    Turnaround time/cycle time is a key success criterion in the semiconductor photomask business. Therefore, global mask suppliers typically allocate workloads based on fab capability and utilization capacity. From a logistical point of view, the manufacturing location of a photomask should be transparent to the customer (mask user). Matching capability of production equipment, and especially of metrology tools, is considered a key enabler of cross-site manufacturing flexibility. Toppan, with manufacturing sites in eight countries worldwide, has an ongoing program to match the registration metrology systems of all its production sites. This allows for manufacturing flexibility and risk mitigation. In cooperation with Vistec Semiconductor Systems, Toppan has recently completed a program to match the Vistec LMS IPRO systems at all production sites worldwide. Vistec has developed a new software feature which allows for significantly improved matching of LMS IPRO(x) registration metrology tools of various generations. We will report on the results of the global matching campaign across several of the leading Toppan sites.

  11. A new coherent demodulation technique for land-mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Yoshida, Shousei; Tomita, Hideho

    1990-01-01

    An advanced coherent demodulation technique is described for land mobile satellite (LMS) communications. The proposed technique features a combined narrow/wide-band dual open-loop carrier phase estimator, which effectively compensates for the fast carrier phase fluctuations caused by fading, at the cost of a higher phase slip rate. The open-loop structure also enables quick carrier and clock reacquisition after shadowing. Its bit error rate (BER) performance is superior to that of existing detection schemes, showing a BER of 1 x 10(exp -2) at 6.3 dB E sub b/N sub o over the Rician channel with 10 dB C/M and 200 Hz (1/16 modulation rate) fading pitch f sub d for QPSK. The proposed scheme consists of a fast-response carrier recovery and a quick bit timing recovery with interpolation. An experimental terminal model was developed to evaluate its performance under fading conditions. The results are quite satisfactory, giving prospects for future LMS applications.

  12. Understanding reconstructed Dante spectra using high resolution spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M. J., E-mail: may13@llnl.gov; Widmann, K.; Kemp, G. E.

    2016-11-15

    The Dante is an 18 channel filtered diode array used at the National Ignition Facility (NIF) to measure the spectrally and temporally resolved radiation flux between 50 eV and 20 keV from various targets. The absolute flux is determined from the radiometric calibration of the x-ray diodes, filters, and mirrors and a reconstruction algorithm applied to the recorded voltages from each channel. The reconstructed spectra are very low resolution, with features consistent with the instrument response, and are not necessarily consistent with the spectral emission features from the plasma. Errors may exist between the reconstructed spectra and the actual emission features due to assumptions in the algorithm. Recently, a high resolution convex crystal spectrometer, VIRGIL, has been installed at NIF with the same line of sight as the Dante. Spectra from L-shell Ag and Xe have been recorded by both VIRGIL and Dante. Comparisons of these two spectroscopic measurements yield insights into the accuracy of the Dante reconstructions.

  13. An improved filtering algorithm for big read datasets and its application to single-cell assembly.

    PubMed

    Wedemeyer, Axel; Kliemann, Lasse; Srivastav, Anand; Schielke, Christian; Reusch, Thorsten B; Rosenstiel, Philip

    2017-07-03

    For single-cell or metagenomic sequencing projects, it is necessary to sequence with a very high mean coverage in order to make sure that all parts of the sample DNA get covered by the reads produced. This leads to huge datasets with lots of redundant data. Filtering this data prior to assembly is advisable. Brown et al. (2012) presented the algorithm Diginorm for this purpose, which filters reads based on the abundance of their k-mers. We present Bignorm, a faster and quality-conscious read filtering algorithm. An important new algorithmic feature is the use of phred quality scores together with a detailed analysis of the k-mer counts to decide which reads to keep. We qualify and recommend parameters for our new read filtering algorithm. Guided by these parameters, we remove a median of 97.15% of the reads while keeping the mean phred score of the filtered dataset high. Using the SPAdes assembler, we produce assemblies of high quality from these filtered datasets in a fraction of the time needed for an assembly from the datasets filtered with Diginorm. We conclude that read filtering is a practical and efficient method for reducing read data and for speeding up the assembly process. This applies not only to single-cell assembly, as shown in this paper, but also to other projects with high mean coverage datasets, such as metagenomic sequencing projects. Our Bignorm algorithm allows assemblies of competitive quality in comparison to Diginorm, while being much faster. Bignorm is available for download at https://git.informatik.uni-kiel.de/axw/Bignorm .
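Diginorm-style abundance filtering, which this record builds on, keeps a read only while the median count of its k-mers is still below a coverage cutoff. A pure-Python sketch with an exact k-mer counting dictionary follows; the k size and cutoff are illustrative, and Bignorm's use of phred quality scores is omitted here.

```python
from collections import defaultdict
from statistics import median

def filter_reads(reads, k=4, cutoff=3):
    """Keep a read only if the median abundance of its k-mers, counted
    over the reads kept so far, is still below the coverage cutoff."""
    counts = defaultdict(int)
    kept = []
    for read in reads:
        kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
        if median(counts[km] for km in kmers) < cutoff:
            kept.append(read)
            for km in kmers:          # only kept reads add to the counts
                counts[km] += 1
    return kept

# Ten copies of the same fragment collapse to a few representatives,
# while a distinct fragment is retained.
reads = ["ACGTACGTACGT"] * 10 + ["TTGGCCAATTGG"]
kept = filter_reads(reads)
```

Production tools replace the exact dictionary with a probabilistic count-min sketch so the counts fit in bounded memory, which this sketch does not attempt.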

  14. Effect of filters and reconstruction algorithms on I-124 PET in Siemens Inveon PET scanner

    NASA Astrophysics Data System (ADS)

    Ram Yu, A.; Kim, Jin Su

    2015-10-01

    Purpose: To assess the effects of filtering and reconstruction on Siemens Inveon I-124 PET data. Methods: A Siemens Inveon PET scanner was used. The spatial resolution of I-124 was measured out to a transverse offset of 50 mm from the center. FBP, 2D ordered subset expectation maximization (OSEM2D), the 3D re-projection algorithm (3DRP), and maximum a posteriori (MAP) methods were tested. Non-uniformity (NU), recovery coefficient (RC), and spillover ratio (SOR) parameterized image quality. Mini deluxe phantom data of I-124 were also assessed. Results: Volumetric resolution was 7.3 mm3 at the transverse FOV center when the FBP reconstruction algorithm with a ramp filter was used. MAP yielded minimal NU with β = 1.5. OSEM2D yielded maximal RC. SOR was below 4% for FBP with ramp, Hamming, Hanning, or Shepp-Logan filters. Based on the mini deluxe phantom results, FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible I-124 PET data. Conclusions: Reconstruction algorithms and filters were compared. FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible data for quantifying I-124 PET.
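The reconstruction filters compared above differ in how they window the ramp in frequency space; a minimal sketch of building ramp and Hanning-windowed ramp filters and applying one to a projection row follows. The filter length and test signal are illustrative, not the Inveon's implementation.

```python
import numpy as np

def fbp_filter(n, window="ramp"):
    """Frequency response of an FBP filter of length n: a ramp |f|,
    optionally apodized by a Hanning window to suppress the noisy high
    frequencies that the bare ramp amplifies."""
    freqs = np.fft.fftfreq(n)
    ramp = np.abs(freqs)
    if window == "hanning":
        ramp = ramp * 0.5 * (1.0 + np.cos(np.pi * freqs / freqs.max()))
    return ramp

def filter_projection(projection, window="ramp"):
    """Apply the chosen filter to one projection row in the Fourier domain."""
    H = fbp_filter(len(projection), window)
    return np.real(np.fft.ifft(np.fft.fft(projection) * H))

proj = np.zeros(64)
proj[28:36] = 1.0                      # a simple box-shaped projection
ramp_out = filter_projection(proj, "ramp")
hann_out = filter_projection(proj, "hanning")
```

The trade-off the record measures (NU, RC, SOR) follows directly from this windowing choice: the bare ramp preserves resolution but passes high-frequency noise, while Hanning-type windows smooth both.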

  15. Gravity anomalies and lithospheric flexure around the Longmen Shan deduced from combinations of in situ observations and EGM2008 data

    NASA Astrophysics Data System (ADS)

    She, Yawen; Fu, Guangyu; Wang, Zhuohua; Liu, Tai; Xu, Changyi; Jin, Honglin

    2016-10-01

    The current work describes the combined data of three field campaigns, spanning 2009-2013. Their joint gravity and GPS observations thoroughly cover the sites of lithospheric flexure between the Sichuan Basin and the Eastern Tibetan Plateau. The study area's free-air gravity anomalies (FGAs) are updated by using a remove-and-restore algorithm which merges EGM2008 data with in situ observations. These new FGAs show pairs of positive and negative anomalies along the eastern edges of the Tibetan Plateau. The FGAs are used to calculate the effective elastic thickness (T e) and load ratios (F) of the lithosphere. Admittance analysis indicates the T e of the Longmen Shan (LMS) to be 6 km, and profile analysis indicates that the T e of the Sichuan Basin exceeds 30 km. The load ratio (F 1 = 1) confirms that the lithospheric flexure of the LMS area can be attributed solely to the surface load of the crust. [Figure not available: see full text. The figure caption repeats the summary above, adding that the authors use the new FGA data to study the lithospheric strength of the area and give a combined model to illustrate its uplift mechanism.]

  16. Segmentation of the ovine lung in 3D CT Images

    NASA Astrophysics Data System (ADS)

    Shi, Lijun; Hoffman, Eric A.; Reinhardt, Joseph M.

    2004-04-01

    Pulmonary CT images can provide detailed information about the regional structure and function of the respiratory system. Prior to any of these analyses, however, the lungs must be identified in the CT data sets. A popular animal model for understanding lung physiology and pathophysiology is the sheep. In this paper we describe a lung segmentation algorithm for CT images of sheep. The algorithm has two main steps. The first step is lung extraction, which identifies the lung region using a technique based on optimal thresholding and connected-components analysis. The second step is lung separation, which separates the left lung from the right lung by identifying the central fissure using an anatomy-based method incorporating dynamic programming and a line filter algorithm. The lung segmentation algorithm has been validated by comparing our automatic method to manual analysis for five pulmonary CT datasets. The RMS error between the computer-defined and manually traced boundaries is 0.96 mm. The segmentation requires approximately 10 minutes for a 512x512x400 dataset on a PC workstation (2.40 GHz CPU, 2.0 GB RAM), while it takes a human observer approximately two hours to accomplish the same task.
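The lung-extraction step (thresholding followed by connected-components analysis) can be sketched on a toy 2D slice as below; the threshold value, toy Hounsfield numbers, and 4-connectivity are assumptions, and the fissure-based left/right separation is not reproduced.

```python
import numpy as np

def largest_components(mask, n=2):
    """Label 4-connected components of a binary mask via flood fill and
    return the n largest as separate masks, mimicking lung extraction by
    thresholding plus connected-component analysis."""
    labels = np.zeros(mask.shape, dtype=int)
    sizes = {}
    current = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue
        current += 1
        labels[i, j] = current
        stack, count = [(i, j)], 0
        while stack:
            a, b = stack.pop()
            count += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if (0 <= na < mask.shape[0] and 0 <= nb < mask.shape[1]
                        and mask[na, nb] and not labels[na, nb]):
                    labels[na, nb] = current
                    stack.append((na, nb))
        sizes[current] = count
    biggest = sorted(sizes, key=sizes.get, reverse=True)[:n]
    return [labels == lab for lab in biggest]

# Toy CT slice: air (low HU) thresholded to a mask with two "lungs"
# and a small noise blob; the two largest components survive.
img = np.full((12, 12), 60.0)                # soft tissue
img[2:10, 1:5] = -800.0                      # left lung
img[2:10, 7:11] = -790.0                     # right lung
img[0, 11] = -850.0                          # isolated noise voxel
mask = img < -400.0                          # stand-in for optimal threshold
lungs = largest_components(mask, n=2)
```

Keeping only the largest components is what discards airway and noise voxels that survive the threshold.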

  17. A fast rebinning algorithm for 3D positron emission tomography using John's equation

    NASA Astrophysics Data System (ADS)

    Defrise, Michel; Liu, Xuan

    1999-08-01

    Volume imaging in positron emission tomography (PET) requires the inversion of the three-dimensional (3D) x-ray transform. The usual solution to this problem is based on 3D filtered-backprojection (FBP), but is slow. Alternative methods have been proposed which factor the 3D data into independent 2D data sets corresponding to the 2D Radon transforms of a stack of parallel slices. Each slice is then reconstructed using 2D FBP. These so-called rebinning methods are numerically efficient but are approximate. In this paper a new exact rebinning method is derived by exploiting the fact that the 3D x-ray transform of a function is the solution to the second-order partial differential equation first studied by John. The method is proposed for two sampling schemes, one corresponding to a pair of infinite plane detectors and another one corresponding to a cylindrical multi-ring PET scanner. The new FORE-J algorithm has been implemented for this latter geometry and was compared with the approximate Fourier rebinning algorithm FORE and with another exact rebinning algorithm, FOREX. Results with simulated data demonstrate a significant improvement in accuracy compared to FORE, while the reconstruction time is doubled. Compared to FOREX, the FORE-J algorithm is slightly less accurate but more than three times faster.

  18. 3D algebraic iterative reconstruction for cone-beam x-ray differential phase-contrast computed tomography.

    PubMed

    Fu, Jian; Hu, Xinhua; Velroyen, Astrid; Bech, Martin; Jiang, Ming; Pfeiffer, Franz

    2015-01-01

    Due to the potential for compact imaging systems with magnified spatial resolution and contrast, cone-beam x-ray differential phase-contrast computed tomography (DPC-CT) has attracted significant interest. The currently proposed FDK reconstruction algorithm with the Hilbert imaginary filter induces severe cone-beam artifacts when the cone-beam angle becomes large. In this paper, we propose an algebraic iterative reconstruction (AIR) method for cone-beam DPC-CT and report its experimental results. This approach treats the reconstruction process as the optimization of a discrete representation of the object function to satisfy a system of equations that describes the cone-beam DPC-CT imaging modality. Unlike conventional iterative algorithms for absorption-based CT, it applies the derivative operation to the forward projections of the reconstructed intermediate image to take into account the differential nature of the DPC projections. The method is based on the algebraic reconstruction technique, reconstructs the image ray by ray, and is expected to provide better derivative estimates across iterations. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a mini-focus x-ray tube source. It is shown that the proposed method can reduce the cone-beam artifacts and performs better than FDK under large cone-beam angles. This algorithm is of interest for future cone-beam DPC-CT applications.
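The algebraic reconstruction technique that AIR builds on updates the image ray by ray; a generic Kaczmarz sweep over a small consistent system Ax = b illustrates the idea. The random system and sweep count are assumptions, and the derivative handling of the DPC projections is not reproduced.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=500, relax=1.0):
    """Algebraic reconstruction: for each ray (row a_i of A), project the
    current image estimate onto the hyperplane a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# A tiny consistent "imaging" system with a known object:
# 12 ray equations in 6 unknowns.
rng = np.random.default_rng(0)
A = rng.normal(size=(12, 6))
x_true = rng.random(6)
b = A @ x_true
x_rec = kaczmarz(A, b)
```

Because each update touches only one ray equation, the same loop structure carries over to huge, sparse CT system matrices where a direct solve is infeasible.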

  19. The Other Side of the LMS: Considering Implementation and Use in the Adoption of an LMS in Online and Blended Learning Environments

    ERIC Educational Resources Information Center

    Black, Erik W.; Beck, Dennis; Dawson, Kara; Jinks, Susan; DiPietro, Meredith

    2007-01-01

    There are more similarities than differences among learning management system (LMS) software products. In a recent study, Carriere, Challborn, and Moore compared a variety of LMSs and went as far as to suggest that the only real differences between systems lie in marketing approaches. In addition, because LMSs have interchangeable parts, those…

  20. Tunable output-frequency filter algorithm for imaging through scattering media under LED illumination

    NASA Astrophysics Data System (ADS)

    Zhou, Meiling; Singh, Alok Kumar; Pedrini, Giancarlo; Osten, Wolfgang; Min, Junwei; Yao, Baoli

    2018-03-01

    We present a tunable output-frequency filter (TOF) algorithm to reconstruct the object from noisy experimental data under low-power, partially coherent illumination, such as an LED, when imaging through scattering media. In the iterative algorithm, we employ Gaussian functions with different filter windows at different stages of the iteration process to reduce the corruption from experimental noise and to search for a global minimum in the reconstruction. In comparison with the conventional iterative phase retrieval algorithm, we demonstrate that the proposed TOF algorithm achieves consistent and reliable reconstruction in the presence of experimental noise. Moreover, the spatial resolution and distinctive features are retained in the reconstruction, since the filter is applied only to the region outside the object. The feasibility of the proposed method is demonstrated by experimental results.
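The tunable window idea, Gaussian frequency filters of different widths at different stages of the iteration, can be sketched as below; the window widths, image size, and noise level are assumptions for illustration, and the restriction of the filter to the region outside the object is omitted.

```python
import numpy as np

def gaussian_freq_filter(field, sigma):
    """Apply a Gaussian window of width sigma (in frequency-index units)
    to the 2D spectrum of field; smaller sigma means stronger smoothing."""
    ny, nx = field.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    window = np.exp(-(fx**2 + fy**2) / (2.0 * sigma**2))
    return np.real(np.fft.ifft2(np.fft.fft2(field) * window))

# Noisy object: early iterations would use a tight window (heavy
# smoothing to escape noise), later iterations a wider one that
# preserves resolution.
rng = np.random.default_rng(0)
obj = np.zeros((32, 32))
obj[12:20, 12:20] = 1.0
noisy = obj + rng.normal(0, 0.3, obj.shape)
early = gaussian_freq_filter(noisy, sigma=0.05)   # tight window
late = gaussian_freq_filter(noisy, sigma=0.5)     # wide window
```

Scheduling the window from tight to wide is what lets the iteration first suppress noise and then recover fine detail.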

  1. A collaborative filtering recommendation algorithm based on weighted SimRank and social trust

    NASA Astrophysics Data System (ADS)

    Su, Chang; Zhang, Butao

    2017-05-01

    Collaborative filtering is one of the most widely used recommendation technologies, but the data sparsity and cold-start problems of collaborative filtering algorithms are difficult to solve effectively. In order to alleviate the data sparsity problem, a weighted, improved SimRank algorithm is first proposed to compute the rating similarity between users on the rating data set. The improved SimRank can find more nearest neighbors for target users according to the transmissibility of rating similarity. Then, we build a trust network and introduce the calculation of trust degree on the trust relationship data set. Finally, we combine rating similarity and trust into a comprehensive similarity in order to find more appropriate nearest neighbors for the target user. Experimental results show that the proposed algorithm effectively improves the recommendation precision of collaborative filtering.
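The final blending step, combining rating similarity with trust into one comprehensive similarity, might look like the sketch below. The Pearson measure over co-rated items, the zero-means-unrated convention, and the weight alpha are illustrative assumptions; the paper computes the rating part with a weighted SimRank.

```python
import numpy as np

def pearson_sim(u, v):
    """Rating similarity over co-rated items (0 where undefined)."""
    mask = (u > 0) & (v > 0)                 # 0 marks "not rated"
    if mask.sum() < 2:
        return 0.0
    a, b = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def comprehensive_sim(u, v, trust_uv, alpha=0.6):
    """Blend rating similarity with trust; alpha weights the rating part,
    so a trusted neighbor can rank highly even with sparse co-ratings."""
    return alpha * pearson_sim(u, v) + (1 - alpha) * trust_uv

# Users rate 5 items (0 = unrated); trust comes from a social network.
u = np.array([5, 3, 0, 4, 1])
v = np.array([4, 2, 5, 5, 0])
sim = comprehensive_sim(u, v, trust_uv=0.8)
```

When the rating overlap is empty (the cold-start case), the rating term vanishes and the trust term alone still yields a usable neighbor ranking.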

  2. Improved collaborative filtering recommendation algorithm of similarity measure

    NASA Astrophysics Data System (ADS)

    Zhang, Baofu; Yuan, Baoping

    2017-05-01

    The collaborative filtering recommendation algorithm is one of the most widely used algorithms in personalized recommender systems. The key is to find the nearest-neighbor set of the active user by using a similarity measure. However, traditional similarity measures mainly focus on the similarity over the users' common rating items but ignore the relationship between the common rating items and all the items a user rates. Moreover, because the rating matrix is very sparse, the traditional collaborative filtering recommendation algorithm is not very efficient. In order to obtain better accuracy, this paper presents an improved similarity measure based on the common preference between users, the difference in rating scales, and the scores of common items; based on this measure, a collaborative filtering recommendation algorithm using the improved similarity is proposed. Experimental results show that the algorithm can effectively improve the quality of recommendations and thus alleviate the impact of data sparseness.
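An improved similarity along the lines the abstract describes, centering each user's co-ratings on that user's mean over all rated items (to absorb rating-scale differences) and damping by the ratio of co-rated items to all items either user rated, can be sketched as follows. The exact formula here is an assumption, not the paper's.

```python
import numpy as np

def improved_sim(u, v):
    """Cosine-style similarity on mean-centered co-ratings, damped by the
    Jaccard ratio of co-rated items to all items either user has rated,
    so a handful of shared items cannot dominate the measure."""
    rated_u, rated_v = u > 0, v > 0          # 0 marks "not rated"
    common = rated_u & rated_v
    if common.sum() < 2:
        return 0.0
    a = u[common] - u[rated_u].mean()        # center on the user's own scale
    b = v[common] - v[rated_v].mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    core = a @ b / denom if denom else 0.0
    jaccard = common.sum() / (rated_u | rated_v).sum()
    return float(core * jaccard)

u = np.array([5, 4, 0, 5, 3, 0])
v = np.array([3, 2, 4, 3, 0, 1])
s_uv = improved_sim(u, v)
```

The Jaccard damping is one concrete way to encode the abstract's point that common-item similarity should be weighed against everything else each user has rated.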

  3. Theatre Ballistic Missile Defense-Multisensor Fusion, Targeting and Tracking Techniques

    DTIC Science & Technology

    1998-03-01

    Excerpt (references and table of contents, fragmented in the source record): cites Brown, R., and Hwang, P., Introduction to Random Signals and Applied Kalman Filtering, Third Edition, John Wiley and Sons; contents include adding measurement noise, the discrete-time Kalman filter, the extended Kalman filter (EKF), and the EKF in target tracking algorithms.

  4. Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm

    PubMed Central

    2015-01-01

    This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of the particle swarm optimization (PSO) algorithm is utilized for solving this kind of filter design problem. It is here called the modified PSO (MPSO) algorithm: an additional adjusting factor is introduced into the velocity-updating formula of the algorithm in order to improve the searching ability. In the proposed method, all of the designed filter coefficients are first collected into a parameter vector, and this vector is regarded as a particle of the algorithm. The MPSO, with its modified velocity formula, forces all particles to move toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear phase response design examples are illustrated, and the general PSO algorithm is compared as well. The obtained results show that the MPSO is superior to the general PSO for the phase response design of digital recursive all-pass filters. PMID:26366168
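A modified PSO in the spirit described, with one extra adjusting term added to the velocity-updating formula, can be sketched as below. The swarm-mean attractor used as the extra term, the coefficient values, and the toy objective are assumptions; the paper's objective is the all-pass filter's phase-response error.

```python
import numpy as np

def mpso(objective, dim, n_particles=20, iters=200, w=0.7,
         c1=1.5, c2=1.5, c3=0.5, seed=0):
    """Particle swarm with an extra adjusting term (weight c3) added to
    the velocity update, pulling particles toward the mean of the
    personal bests in addition to the usual pbest/gbest attractors."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    mean_best = pbest.mean(axis=0)
    for _ in range(iters):
        r1, r2, r3 = rng.random((3, n_particles, dim))
        vel = (w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
               + c3 * r3 * (mean_best - pos))   # modified velocity formula
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
        mean_best = pbest.mean(axis=0)
    return gbest, pbest_val.min()

# Minimize a shifted sphere function as a stand-in for the phase-error
# objective of the all-pass design problem.
target = np.array([0.3, -0.2, 0.5])
best, best_val = mpso(lambda p: np.sum((p - target) ** 2), dim=3)
```

In the filter-design setting, each particle would hold the all-pass coefficient vector and the objective would measure the deviation from the desired phase response over a frequency grid.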

  5. Image quality enhancement for skin cancer optical diagnostics

    NASA Astrophysics Data System (ADS)

    Bliznuks, Dmitrijs; Kuzmina, Ilona; Bolocko, Katrina; Lihachev, Alexey

    2017-12-01

    The research presents an image quality analysis and enhancement proposals in the biophotonics area. The sources of image problems are reviewed and analyzed. The problems with the most impact are analyzed in terms of a specific biophotonic task: skin cancer diagnostics. The results point out that the main problem for skin cancer analysis is uneven skin illumination. Since it is often not possible to prevent illumination problems, the paper proposes an image post-processing algorithm: low-frequency filtering. Practical results show an improvement in diagnostic results after using the proposed filter. Moreover, the filter does not reduce diagnostic quality for images without illumination defects. The current filtering algorithm requires empirical tuning of the filter parameters. Further work is needed to test the algorithm in other biophotonic applications and to propose automatic filter parameter selection.
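Low-frequency filtering for illumination problems can be sketched as estimating the slowly varying illumination field with a broad Gaussian low-pass and dividing it out; the FFT implementation, the Gaussian width, and the synthetic gradient are assumptions, not the paper's exact filter.

```python
import numpy as np

def correct_illumination(img, sigma=8.0):
    """Estimate the low-frequency illumination field with a broad
    Gaussian low-pass (applied via FFT) and divide it out, flattening
    uneven lighting while leaving fine structure mostly untouched."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    # Frequency response of a spatial Gaussian of width sigma pixels.
    lowpass = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx**2 + fy**2))
    illum = np.real(np.fft.ifft2(np.fft.fft2(img) * lowpass))
    return img / np.maximum(illum, 1e-6)

# Flat skin patch under a strong left-to-right illumination gradient.
ny, nx = 64, 64
gradient = np.linspace(0.5, 1.5, nx)[None, :].repeat(ny, axis=0)
img = 1.0 * gradient                    # uniform "skin", uneven lighting
flat = correct_illumination(img)
```

Away from the image borders (where the FFT's circular wraparound distorts the estimate), the corrected patch is close to a constant, which is exactly the behavior one wants before running a diagnostic classifier.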

  6. Recent Advances in Liquid Metal Manipulation toward Soft Robotics and Biotechnologies.

    PubMed

    Yu, Yue; Miyako, Eijiro

    2018-04-06

    Interest has grown significantly in the field of soft robotics, which seeks to develop machinery capable of duplicating the elastic and rheological properties of typically polymeric or elastomeric biological tissues and organs. As a result of a number of unique properties, gallium-based liquid metals (LMs) are emerging as materials used in the forefront of soft robotics research. Finding methods to enable the sophisticated manipulation of LMs will be essential for further progress in the field. This review provides a critical discussion of the manipulation of LMs and on important biotechnological applications of LMs including microfluidics, healthcare devices, biomaterials, and nanomedicines. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Multitarget mixture reduction algorithm with incorporated target existence recursions

    NASA Astrophysics Data System (ADS)

    Ristic, Branko; Arulampalam, Sanjeev

    2000-07-01

    The paper derives a deferred-logic data association algorithm based on the mixture reduction (MR) approach originally due to Salmond [SPIE vol. 1305, 1990]. The novelty of the proposed algorithm is that it provides recursive formulae for both data association and target existence (confidence) estimation, thus allowing automatic track initiation and termination. The track initiation performance of the proposed filter is investigated by computer simulations. It is observed that at moderately high levels of clutter density the proposed filter initiates tracks more reliably than the corresponding PDA filter. An extension of the proposed filter to the multi-target case is also presented. In addition, the paper compares the track maintenance performance of the MR algorithm with an MHT implementation.
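Mixture reduction in Salmond's spirit repeatedly merges, with moment matching, the pair of hypothesis components whose merge costs least; a scalar sketch follows. The simplified merge cost and the toy mixture are assumptions for illustration.

```python
def merge(w1, m1, v1, w2, m2, v2):
    """Moment-preserving merge of two weighted scalar Gaussians."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
    return w, m, v

def reduce_mixture(components, target_n):
    """Greedy reduction: merge the cheapest pair until only target_n
    components remain.  The weighted squared mean distance below is a
    simplified stand-in for Salmond's covariance-normalized criterion."""
    comps = list(components)
    while len(comps) > target_n:
        best = None
        for i in range(len(comps)):
            for j in range(i + 1, len(comps)):
                (wi, mi, _), (wj, mj, _) = comps[i], comps[j]
                cost = wi * wj / (wi + wj) * (mi - mj) ** 2
                if best is None or cost < best[0]:
                    best = (cost, i, j)
        _, i, j = best
        merged = merge(*comps[i], *comps[j])
        comps = [c for k, c in enumerate(comps) if k not in (i, j)]
        comps.append(merged)
    return comps

# Four track hypotheses (weight, mean, variance); two nearly coincide.
mix = [(0.4, 0.0, 1.0), (0.1, 0.2, 1.0), (0.3, 5.0, 1.0), (0.2, -4.0, 1.0)]
reduced = reduce_mixture(mix, target_n=3)
```

Because merging preserves the first two moments, the reduced mixture keeps the overall mean and spread of the hypothesis set while bounding the number of components carried forward.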

  8. Investigating IT Faculty Resistance to Learning Management System Adoption Using Latent Variables in an Acceptance Technology Model.

    PubMed

    Bousbahi, Fatiha; Alrazgan, Muna Saleh

    2015-01-01

    To enhance instruction in higher education, many universities in the Middle East have chosen to introduce learning management systems (LMS) to their institutions. However, this new educational technology is not being used at its full potential and faces resistance from faculty members. To investigate this phenomenon, we conducted an empirical research study to uncover factors influencing faculty members' acceptance of LMS. Thus, in the Fall semester of 2014, Information Technology faculty members were surveyed to better understand their perceptions of the incorporation of LMS into their courses. The results showed that personal factors such as motivation, load anxiety, and organizational support play important roles in the perception of the usefulness of LMS among IT faculty members. These findings suggest adding these constructs in order to extend the Technology Acceptance Model (TAM) for LMS acceptance, which can help stakeholders of the university to implement the use of this system. This may assist in planning and evaluating the use of e-learning.

  9. Investigating IT Faculty Resistance to Learning Management System Adoption Using Latent Variables in an Acceptance Technology Model

    PubMed Central

    Bousbahi, Fatiha; Alrazgan, Muna Saleh

    2015-01-01

    To enhance instruction in higher education, many universities in the Middle East have chosen to introduce learning management systems (LMS) to their institutions. However, this new educational technology is not being used at its full potential and faces resistance from faculty members. To investigate this phenomenon, we conducted an empirical research study to uncover factors influencing faculty members' acceptance of LMS. Thus, in the Fall semester of 2014, Information Technology faculty members were surveyed to better understand their perceptions of the incorporation of LMS into their courses. The results showed that personal factors such as motivation, load anxiety, and organizational support play important roles in the perception of the usefulness of LMS among IT faculty members. These findings suggest adding these constructs in order to extend the Technology Acceptance Model (TAM) for LMS acceptance, which can help stakeholders of the university to implement the use of this system. This may assist in planning and evaluating the use of e-learning. PMID:26491712

  10. Formulation and implementation of nonstationary adaptive estimation algorithm with applications to air-data reconstruction

    NASA Technical Reports Server (NTRS)

    Whitmore, S. A.

    1985-01-01

    The dynamics model and data sources used to perform air-data reconstruction are discussed, as well as the Kalman filter. The need for adaptive determination of the noise statistics of the process is indicated. The filter innovations are presented as a means of developing the adaptive criterion, which is based on the true mean and covariance of the filter innovations. A method for the numerical approximation of the mean and covariance of the filter innovations is presented. The algorithm as developed is applied to air-data reconstruction for the space shuttle, and data obtained from the third landing are presented. To verify the performance of the adaptive algorithm, the reconstruction is also performed using a constant covariance Kalman filter. The results of the reconstructions are compared, and the adaptive algorithm exhibits better performance.
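
    The innovation-based adaptation described above can be sketched compactly. The following is a minimal illustration, not the flight implementation: it assumes a scalar random-walk state, and the window length, process-noise value, and variance floor are illustrative choices.

```python
import numpy as np

def adaptive_kalman(z, q=1e-4, r0=1.0, window=50):
    """Scalar random-walk Kalman filter that adapts the measurement-noise
    variance R from the sample covariance of recent innovations.
    All model choices and parameter values here are illustrative."""
    x, p, r = 0.0, 1.0, r0
    innovations, estimates = [], []
    for zk in z:
        p = p + q                       # predict (state transition F = 1)
        v = zk - x                      # innovation
        innovations.append(v)
        if len(innovations) >= window:
            # sample covariance of recent innovations approximates H P H' + R
            c = np.var(innovations[-window:])
            r = max(c - p, 1e-8)        # adapted R, floored to stay positive
        s = p + r                       # innovation covariance
        k = p / s                       # Kalman gain
        x = x + k * v                   # measurement update
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates), r
```

In this sketch the adapted `r` tracks the true measurement-noise variance once the innovation window fills, which is the essence of the adaptive criterion described in the abstract.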

  11. A method of measuring and correcting tilt of anti-vibration wind turbines based on a screening algorithm

    NASA Astrophysics Data System (ADS)

    Xiao, Zhongxiu

    2018-04-01

    A method of measuring and correcting the tilt of anti-vibration wind turbines based on a screening algorithm is proposed in this paper. First, we design a measurement device built around the ADXL203 acceleration sensor; inclination is measured by installing the device on the tower of the wind turbine as well as in the nacelle. Next, a Kalman filter is applied by establishing a state-space model for the signal and noise, and the scheme is simulated in MATLAB. To account for the impact of tower and nacelle vibration on the collected data, the raw data and the filtered data are classified and stored by the screening algorithm, and the filtered data are filtered again to make the output more accurate. Finally, installation errors are eliminated algorithmically to achieve the tilt correction. A device based on this method offers high precision, low cost, and good vibration immunity, and it has a wide range of application and promotion value.
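
    Two of the measurement steps described above, converting accelerometer readings to an inclination angle and removing a constant installation offset, can be sketched as follows. The axis convention and helper names are illustrative assumptions; the paper's device details are not reproduced here.

```python
import math

def tilt_deg(ax, az):
    """Inclination angle (degrees) from a two-axis accelerometer reading
    in units of g, ADXL203-style. Axis convention is an assumption."""
    return math.degrees(math.atan2(ax, az))

def correct_installation_offset(samples, ref_angle=0.0):
    """Estimate a constant mounting offset from angle samples taken at a
    known reference inclination, and return a corrector function."""
    offset = sum(samples) / len(samples) - ref_angle
    return lambda angle: angle - offset
```

A level fixture (known `ref_angle`) calibrates the corrector once; thereafter every measured angle is shifted by the estimated mounting offset.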

  12. Proceedings of the Conference on Moments and Signal

    NASA Astrophysics Data System (ADS)

    Purdue, P.; Solomon, H.

    1992-09-01

    The focus of this paper is (1) to describe systematic methodologies for selecting nonlinear transformations for blind equalization algorithms (and thus new types of cumulants), and (2) to give an overview of the existing blind equalization algorithms and point out their strengths as well as their weaknesses. It is shown that all blind equalization algorithms belong to one of the following three categories, depending on where the nonlinear transformation is applied to the data: (1) the Bussgang algorithms, where the nonlinearity is at the output of the adaptive equalization filter; (2) the polyspectra (or higher-order spectra) algorithms, where the nonlinearity is at the input of the adaptive equalization filter; and (3) the algorithms where the nonlinearity is inside the adaptive filter, i.e., the nonlinear filter or neural network. We describe methodologies for selecting nonlinear transformations based on various optimality criteria such as MSE or MAP. We illustrate that existing algorithms such as Sato, Benveniste-Goursat, Godard or CMA, Stop-and-Go, and Donoho are indeed special cases of the Bussgang family of techniques when the nonlinearity is memoryless. We present results demonstrating that the polyspectra-based algorithms exhibit a faster convergence rate than Bussgang algorithms; however, this improved performance comes at the expense of more computations per iteration. We also show that blind equalizers based on nonlinear filters or neural networks are better suited for channels that have nonlinear distortions.
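
    As a concrete member of the Bussgang family discussed above, the Godard/CMA equalizer applies a memoryless nonlinearity to the output of the adaptive filter. The sketch below uses an illustrative tap count, step size, and initialization; it is a generic CMA, not the paper's specific formulation.

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=1e-3, r2=1.0):
    """Constant Modulus Algorithm: a Bussgang-type blind equalizer where a
    memoryless nonlinearity acts on the equalizer output y.
    Tap count, step size, and dispersion constant r2 are illustrative."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                      # center-spike initialization
    y = np.zeros(len(x), dtype=complex)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]     # regressor, most recent first
        yn = w @ u
        e = yn * (np.abs(yn) ** 2 - r2)       # Godard p=2 (CMA) error term
        w = w - mu * e * np.conj(u)           # stochastic-gradient update
        y[n] = yn
    return y, w
```

No training sequence is needed: the update drives the output modulus toward the constant r2, which is exactly the "nonlinearity at the equalizer output" structure of the Bussgang class.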

  13. The Efficiency of the "Learning Management System (LMS)" in AOU, Kuwait, as a Communication Tool in an E-Learning System

    ERIC Educational Resources Information Center

    Alfadly, Ahmad Assaf

    2013-01-01

    Purpose: The integration of a Learning Management System (LMS) at the Arab Open University (AOU), Kuwait, opens new possibilities for online interaction between teachers and students. The purpose of this paper is to evaluate the efficiency of the LMS at AOU, Kuwait as a communication tool in the E-learning system and to find the best automated…

  14. EMMPRIN (CD147) Expression in Smooth Muscle Tumors of the Uterus.

    PubMed

    Kefeli, Mehmet; Yildiz, Levent; Gun, Seda; Ozen, Fatma Z; Karagoz, Filiz

    2016-01-01

    Smooth muscle tumors of the uterus are the most common mesenchymal tumors of the gynecologic tract. The vast majority of these are benign leiomyomas that present no diagnostic difficulty. Because some benign smooth muscle tumors may degenerate and uncommon variants exist, the diagnosis can be challenging in some cases. The goal of this research was to investigate EMMPRIN expression in leiomyomas, leiomyoma variants, and leiomyosarcomas (LMS) to determine whether it has a potential role in differential diagnosis. EMMPRIN expression was investigated with immunohistochemistry in 103 uterine smooth muscle tumors, which included 19 usual leiomyomas, 52 leiomyoma variants, and 32 LMS. They were evaluated on the basis of staining extent, intensity, and also their combined score, and the groups were compared. EMMPRIN expression was present in 3 of 19 (15.7%) usual leiomyomas, 23 of 52 (44.3%) leiomyoma variants, and 28 of 32 (87.5%) LMS. There were statistically significant differences in staining extent and intensity, and also for their combined scores, between the LMS and benign groups. Although uterine smooth muscle tumors are usually diagnosed easily with conventional diagnostic criteria, the differentiation of LMS from some variants of leiomyoma can be challenging based solely on morphology. EMMPRIN may be a valuable immunohistochemical marker for differentiating LMS from benign smooth muscle tumors in problematic cases.

  15. Computer-Based Algorithmic Determination of Muscle Movement Onset Using M-Mode Ultrasonography

    DTIC Science & Technology

    2017-05-01

    contraction images were analyzed visually and with three different classes of algorithms: pixel standard deviation (SD), high-pass filter and Teager Kaiser...Linear relationships and agreements between computed and visual muscle onset were calculated. The top algorithms were high-pass filtered with a 30 Hz...suggest that computer automated determination using high-pass filtering is a potential objective alternative to visual determination in human

  16. Image defog algorithm based on open close filter and gradient domain recursive bilateral filter

    NASA Astrophysics Data System (ADS)

    Liu, Daqian; Liu, Wanjun; Zhao, Qingguo; Fei, Bowen

    2017-11-01

    To address the fuzzy details, color distortion, and low brightness of images produced by the dark-channel-prior defogging algorithm, an image defogging algorithm based on an open-close filter and a gradient-domain recursive bilateral filter, referred to as OCRBF, is put forward. The algorithm first uses a weighted quadtree to obtain a more accurate global atmospheric value; it then applies a multiple-structure-element morphological open-close filter to the minimum channel map to obtain a rough scattering map from the dark channel prior, uses the variogram to correct the transmittance map, and applies a gradient-domain recursive bilateral filter for smoothing; finally, it recovers the image through the image degradation model and adjusts the contrast to obtain a bright, clear, fog-free image. A large number of experimental results show that the proposed method removes fog well and recovers the color and definition of foggy images containing close-range objects, wide perspectives, and bright areas. Compared with other image defogging algorithms, it obtains clearer and more natural fog-free images with more visible detail; moreover, the time complexity of the proposed algorithm is linear in the number of image pixels.

  17. Improvement of the fringe analysis algorithm for wavelength scanning interferometry based on filter parameter optimization.

    PubMed

    Zhang, Tao; Gao, Feng; Muhamedsalih, Hussam; Lou, Shan; Martin, Haydn; Jiang, Xiangqian

    2018-03-20

    The phase slope method, which estimates height from the fringe pattern frequency, and the algorithm which estimates height from the fringe phase are the fringe analysis algorithms widely used in interferometry. Generally both extract the phase information by filtering the signal in the frequency domain after a Fourier transform. Among the numerous papers in the literature about these algorithms, the design of the filter, which plays an important role, has never been discussed in detail. This paper focuses on the filter design in these algorithms for wavelength scanning interferometry (WSI), optimizing the parameters to acquire the best results. The spectral characteristics of the interference signal are analyzed first. The effective signal is found to be narrow-band (near single frequency), and the central frequency is calculated theoretically; the position of the filter pass-band is thereby determined. The width of the filter window is optimized in simulation to balance noise elimination against filter ringing. Experimental validation of the approach is provided, and the results agree very well with the simulation. The experiments show that accuracy can be improved by optimizing the filter design, especially when the signal quality, i.e., the signal-to-noise ratio (SNR), is low. The proposed method also shows the potential of improving immunity to environmental noise by designing an adaptive filter once the signal SNR can be estimated accurately.
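
    The narrow-band filtering step described above, locating the spectral peak and passing a window around it, can be sketched as follows. The Gaussian window shape and its relative width are illustrative stand-ins for the optimized filter design discussed in the paper.

```python
import numpy as np

def narrowband_filter(signal, rel_width=0.05):
    """Isolate the dominant positive-frequency component of a fringe signal
    with a Gaussian pass-band centered on the spectral peak.
    rel_width (as a fraction of the sampling rate) is the tuning knob the
    paper optimizes: too narrow rings, too wide admits noise."""
    n = len(signal)
    spec = np.fft.rfft(signal - np.mean(signal))
    freqs = np.fft.rfftfreq(n)                 # 0 .. 0.5 cycles/sample
    f0 = freqs[np.argmax(np.abs(spec))]        # estimated carrier frequency
    window = np.exp(-0.5 * ((freqs - f0) / rel_width) ** 2)
    return np.fft.irfft(spec * window, n), f0
```

The phase of the filtered analytic signal (or the slope of `f0` across wavelength scans) then yields the height estimate, per the two algorithms named in the abstract.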

  18. The use of herbs by california midwives.

    PubMed

    Dennehy, Cathi; Tsourounis, Candy; Bui, Lindsey; King, Tekoa L

    2010-01-01

    To characterize herbal product use (prevalence, types, indications) among Certified Nurse Midwives/Certified Midwives (CNMs/CMs) and Licensed Midwives (LMs) practicing in the state of California and to describe formal education related to herbal products received by midwives during midwifery education. Cross-sectional survey/California/Practicing midwives. A list of LMs and CNMs/CMs practicing in California was obtained through the California Medical Board (CMB) and the American College of Nurse Midwives (ACNM), respectively. The survey was mailed to 343 CNMs/CMs (one third of the ACNM mailing list) and 157 LMs (the complete CMB mailing list). Of the 500 surveys mailed, 40 were undeliverable, 146 were returned, and 7 were excluded (30% response rate). Of the 139 completed surveys, 58/102 (57%) of CNMs/CMs and 35/37 (95%) of LMs used herbs, and LMs were more comfortable than CNMs/CMs in recommending herbs to their patients. A majority of LMs had >20 hours of midwifery education on herbs whereas a majority of CNMs/CMs received 0 to 5 hours. Some CNMs/CMs indicated that their practice site limited their ability to use herbs. Common conditions in which LMs and CNMs/CMs used herbs were nausea/vomiting (86% vs. 83%), labor induction (89% vs. 58%), and lactation (86% vs. 65%). Specific herbs for all indications are described. Licensed midwives were more likely than CNMs/CMs to use herbs in clinical practice. This trend was likely a reflection of the amount of education devoted to herbs as well as herbal use limitations that may be encountered in institutional facilities. © 2010 AWHONN, the Association of Women's Health, Obstetric and Neonatal Nurses.

  19. Development of GPS Receiver Kalman Filter Algorithms for Stationary, Low-Dynamics, and High-Dynamics Applications

    DTIC Science & Technology

    2016-06-01

    Development of GPS Receiver Kalman Filter Algorithms for Stationary, Low-Dynamics, and High-Dynamics Applications. Peter W. Sarunic. ...determine instantaneous estimates of receiver position and then goes on to develop three Kalman filter based estimators, which use stationary receiver...used in actual GPS receivers, and cover a wide range of applications. While the standard form of the Kalman filter, of which the three filters just...

  20. Laser Doppler velocimeter system simulation for sensing aircraft wake vortices. Part 2: Processing and analysis of LDV data (for runs 1023 and 2023)

    NASA Technical Reports Server (NTRS)

    Meng, J. C. S.; Thomson, J. A. L.

    1975-01-01

    A data analysis program constructed to assess LDV system performance, to validate the simulation model, and to test various vortex location algorithms is presented. Real or simulated Doppler spectra versus range and elevation are used, and the spatial distributions of various spectral moments or other spectral characteristics are calculated and displayed. Each of the real or simulated scans can be processed by one of three different procedures: simple frequency or wavenumber filtering, matched filtering, and deconvolution filtering. The final output is displayed as contour plots in an x-y coordinate system, as well as in the form of vortex tracks deduced from the maxima of the processed data. A detailed analysis of run number 1023 and run number 2023 is presented to demonstrate the data analysis procedure. Vortex tracks and system range resolutions are compared with theoretical predictions.
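
    Of the three procedures listed, matched filtering is the easiest to illustrate: correlate the data with a time-reversed copy of the expected pulse and take the peak. This is a generic sketch, not the report's LDV-specific processing chain.

```python
import numpy as np

def matched_filter_locate(signal, template):
    """Locate a known pulse in noise by convolving with the time-reversed
    template (the matched filter) and returning the best-match start index.
    The pulse shape used in any real system is application-specific."""
    kernel = template[::-1]
    out = np.convolve(signal, kernel, mode="valid")
    return int(np.argmax(out)), out
```

The matched filter maximizes output SNR for a known pulse in white noise, which is why it is a standard alternative to plain frequency filtering.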

  1. The HEP.TrkX Project: deep neural networks for HL-LHC online and offline tracking

    DOE PAGES

    Farrell, Steven; Anderson, Dustin; Calafiura, Paolo; ...

    2017-08-08

    Particle track reconstruction in dense environments such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC) is a challenging pattern recognition problem. Traditional tracking algorithms such as the combinatorial Kalman Filter have been used with great success in LHC experiments for years. However, these state-of-the-art techniques are inherently sequential and scale poorly with the expected increases in detector occupancy in the HL-LHC conditions. The HEP.TrkX project is a pilot project with the aim to identify and develop cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms bring a lot of potential to this problem thanks to their capability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as GPUs. This contribution will describe our initial explorations into this relatively unexplored idea space. Furthermore, we will discuss the use of recurrent (LSTM) and convolutional neural networks to find and fit tracks in toy detector data.

  2. The HEP.TrkX Project: deep neural networks for HL-LHC online and offline tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, Steven; Anderson, Dustin; Calafiura, Paolo

    Particle track reconstruction in dense environments such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC) is a challenging pattern recognition problem. Traditional tracking algorithms such as the combinatorial Kalman Filter have been used with great success in LHC experiments for years. However, these state-of-the-art techniques are inherently sequential and scale poorly with the expected increases in detector occupancy in the HL-LHC conditions. The HEP.TrkX project is a pilot project with the aim to identify and develop cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms bring a lot of potential to this problem thanks to their capability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as GPUs. This contribution will describe our initial explorations into this relatively unexplored idea space. Furthermore, we will discuss the use of recurrent (LSTM) and convolutional neural networks to find and fit tracks in toy detector data.

  3. The HEP.TrkX Project: deep neural networks for HL-LHC online and offline tracking

    NASA Astrophysics Data System (ADS)

    Farrell, Steven; Anderson, Dustin; Calafiura, Paolo; Cerati, Giuseppe; Gray, Lindsey; Kowalkowski, Jim; Mudigonda, Mayur; Prabhat; Spentzouris, Panagiotis; Spiropoulou, Maria; Tsaris, Aristeidis; Vlimant, Jean-Roch; Zheng, Stephan

    2017-08-01

    Particle track reconstruction in dense environments such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC) is a challenging pattern recognition problem. Traditional tracking algorithms such as the combinatorial Kalman Filter have been used with great success in LHC experiments for years. However, these state-of-the-art techniques are inherently sequential and scale poorly with the expected increases in detector occupancy in the HL-LHC conditions. The HEP.TrkX project is a pilot project with the aim to identify and develop cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms bring a lot of potential to this problem thanks to their capability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as GPUs. This contribution will describe our initial explorations into this relatively unexplored idea space. We will discuss the use of recurrent (LSTM) and convolutional neural networks to find and fit tracks in toy detector data.

  4. Robotic fish tracking method based on suboptimal interval Kalman filter

    NASA Astrophysics Data System (ADS)

    Tong, Xiaohong; Tang, Chao

    2017-11-01

    Autonomous underwater vehicle (AUV) research has focused on tracking, positioning, precise guidance, return to dock, and related fields. Robotic fish, as a class of AUV, have become a popular application in intelligent education, civil, and military domains. In the nonlinear tracking analysis of a robotic fish, it was found that the interval Kalman filter algorithm contains all possible filter results, but the resulting interval is wide and relatively conservative, and the interval data vector is uncertain before implementation. This paper proposes an optimization of the interval Kalman filter: a suboptimal interval Kalman filter. The scheme replaces the interval inverse matrix with its worst-case inverse, approximates the nonlinear state and measurement equations more closely than the standard interval Kalman filter, increases the accuracy of the nominal dynamic system model, and improves the speed and precision of the tracking system. Monte Carlo simulation results show that the trajectory estimate of the suboptimal interval Kalman filter is better than those of the interval Kalman filter method and the standard Kalman filter.

  5. MR image reconstruction via guided filter.

    PubMed

    Huang, Heyan; Yang, Hang; Wang, Kang

    2018-04-01

    Magnetic resonance imaging (MRI) reconstruction from the smallest possible set of Fourier samples has been a difficult problem in the medical imaging field. In this paper, we present a new approach based on a guided filter for an efficient MRI recovery algorithm. The guided filter is an edge-preserving smoothing operator and behaves better near edges than the bilateral filter. Our reconstruction method consists of two steps. First, we propose two cost functions which can be computed efficiently, yielding two different images. Second, the guided filter is applied to these two images for efficient edge-preserving filtering: one image is used as the guidance image, and the other as the filtering input. By introducing the guided filter, our reconstruction algorithm recovers more detail. We compare our reconstruction algorithm with some competitive MRI reconstruction techniques in terms of PSNR and visual quality. Simulation results are given to show the performance of the new method.
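
    The gray-scale guided filter used above admits a compact implementation built from box (mean) filters, following its standard linear-model formulation; the radius and regularization eps below are illustrative settings, not the paper's.

```python
import numpy as np

def box(img, r):
    """Mean filter with window radius r, via an edge-padded integral image."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))           # zero row/col for window sums
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(guide, src, r=4, eps=1e-3):
    """Edge-preserving smoothing of src steered by guide: fit the local
    linear model src ~ a * guide + b in each window, then average a and b."""
    mI, mp = box(guide, r), box(src, r)
    a = (box(guide * src, r) - mI * mp) / (box(guide * guide, r) - mI ** 2 + eps)
    b = mp - a * mI
    return box(a, r) * guide + box(b, r)
```

Where the guidance image has strong local variance, a stays near 1 and edges pass through; in flat regions a collapses toward 0 and the output is the local mean, which is the edge-preserving behavior the abstract relies on.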

  6. Automatic voice recognition using traditional and artificial neural network approaches

    NASA Technical Reports Server (NTRS)

    Botros, Nazeih M.

    1989-01-01

    The main objective of this research is to develop an algorithm for isolated-word recognition. This research is focused on digital signal analysis rather than linguistic analysis of speech. Feature extraction is carried out by applying a Linear Predictive Coding (LPC) algorithm of order 10. Continuous-word and speaker-independent recognition will be considered in a future study after this isolated-word research is accomplished. To examine the similarity between the reference and the training sets, two approaches are explored. The first implements traditional pattern recognition techniques, in which a dynamic time warping algorithm is applied to align the two sets and the probability of matching is calculated by measuring the Euclidean distance between them. The second implements a three-layer backpropagation artificial neural network as the pattern classifier. The adaptation rule implemented in this network is the generalized least mean square (LMS) rule. The first approach has been accomplished: a vocabulary of 50 words was selected and tested, and the accuracy of the algorithm was found to be around 85 percent. The second approach is in progress at the present time.
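
    The least mean square rule cited above is, at its core, a stochastic-gradient weight update. A minimal FIR system-identification sketch follows; the filter order, step size, and test setup are illustrative, not from the study.

```python
import numpy as np

def lms_identify(x, d, n_taps=4, mu=0.05):
    """Classic LMS: adapt FIR weights w so that w @ u tracks the desired d.
    The update w <- w + mu * e * u is the least-mean-square rule."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # regressor [x[n], x[n-1], ...]
        e = d[n] - w @ u                    # instantaneous error
        w += mu * e * u                     # stochastic-gradient step
    return w
```

The same update, applied to the weights of a neural layer, is the "generalized LMS" adaptation the abstract refers to in the backpropagation classifier.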

  7. Mill Designed Bio bleaching Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Institute of Paper Science Technology

    2004-01-30

    A key finding of this research program was that Laccase Mediator System (LMS) treatments on high-kappa kraft pulp could be successfully accomplished, providing substantial delignification (i.e., > 50%) without detrimental impact on viscosity and with significantly improved yield properties. The efficiency of the LMS was evident since most of the lignin was removed from the pulp in less than one hour at 45 degrees C. Of the mediators investigated, violuric acid (VA) was the most effective with respect to delignification. A comparative study between oxygen delignification and violuric acid revealed that under relatively mild conditions, a single or a double LMS-VA treatment is comparable to a single or a double O stage. Of great notability was the retention of end viscosity of LMS-VA treated pulps with respect to the end viscosity of oxygen-treated pulps. These pulps could then be bleached to full brightness employing conventional ECF bleaching technologies, and the final pulp physical properties were equal to and/or better than those of pulps bleached in a conventional ECF manner employing an aggressive O or OO stage initially. Spectral analyses of residual lignins isolated from LMS-treated high-kappa kraft pulps revealed that, like HBT, VA and NHA preferentially attack phenolic lignin moieties. In addition, a substantial decrease in aliphatic hydroxyl groups was also noted, suggesting side-chain oxidation. In all cases, an increase in carboxylic acid was observed. Of notable importance was the different selectivity of NHA, VA, and HBT towards lignin functional groups, despite the common N-OH moiety. C-5 condensed phenolic lignin groups were overall resistant to LMS-NHA and LMS-HBT treatments, but less so to LMS-VA. The inertness of these condensed lignin moieties was not observed when low-kappa kraft pulps were biobleached, suggesting that the LMS chemistry is influenced by the extent of delignification.
We have also demonstrated that the current generation of laccase has a broad spectrum of operating parameters. Nonetheless, the development of future genetically engineered laccases with enhanced temperature, pH, and redox tolerances will dramatically improve the overall process. A second challenge for LMS bleaching technologies is the need to develop effective, catalytic mediators. From the literature we already know this is feasible, since ABTS and some inorganic mediators are catalytic. Unfortunately, the mediators that exhibit catalytic properties do not exhibit significant delignification properties, and this is a challenge for future research studies. Potential short-term mill application of laccase has been recently reported by Felby and by Chandra, who demonstrated that the physical properties of linerboard can be improved when exposed to laccase without a chemical mediator. In addition, xxx has shown that the addition of laccase to the whitewater of the paper machine has several benefits for the removal of colloidal materials. Finally, this research program has presented important features of the delignification chemistry of LMS-NHA and LMS-VA that, in the opinion of the author, are momentous contributions to the overall LMS chemistry/biochemistry knowledge base and will continue to have future benefits.

  8. SU-F-I-10: Spatially Local Statistics for Adaptive Image Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliopoulos, AS; Sun, X; Floros, D

    Purpose: To facilitate adaptive image filtering operations, addressing spatial variations in both noise and signal. Such issues are prevalent in cone-beam projections, where physical effects such as X-ray scattering result in spatially variant noise, violating common assumptions of homogeneous noise and challenging conventional filtering approaches to signal extraction and noise suppression. Methods: We present a computational mechanism for probing into and quantifying the spatial variance of noise throughout an image. The mechanism builds a pyramid of local statistics at multiple spatial scales; local statistical information at each scale includes (weighted) mean, median, standard deviation, median absolute deviation, as well as histogram or dynamic range after local mean/median shifting. Based on inter-scale differences of local statistics, the spatial scope of distinguishable noise variation is detected in a semi- or un-supervised manner. Additionally, we propose and demonstrate the incorporation of such information in globally parametrized (i.e., non-adaptive) filters, effectively transforming the latter into spatially adaptive filters. The multi-scale mechanism is materialized by efficient algorithms and implemented in parallel CPU/GPU architectures. Results: We demonstrate the impact of local statistics for adaptive image processing and analysis using cone-beam projections of a Catphan phantom, fitted within an annulus to increase X-ray scattering. The effective spatial scope of local statistics calculations is shown to vary throughout the image domain, necessitating multi-scale noise and signal structure analysis. Filtering results with and without spatial filter adaptation are compared visually, illustrating improvements in imaging signal extraction and noise suppression, and in preserving information in low-contrast regions.
Conclusion: Local image statistics can be incorporated in filtering operations to equip them with spatial adaptivity to spatial signal/noise variations. An efficient multi-scale computational mechanism is developed to curtail processing latency. Spatially adaptive filtering may impact subsequent processing tasks such as reconstruction and numerical gradient computations for deformable registration. NIH Grant No. R01-184173.

  9. UDU^T covariance factorization for Kalman filtering

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1980-01-01

    There has been strong motivation to produce numerically stable formulations of the Kalman filter algorithms because it has long been known that the original discrete-time Kalman formulas are numerically unreliable. Numerical instability can be avoided by propagating certain factors of the estimate error covariance matrix rather than the covariance matrix itself. This paper documents filter algorithms that correspond to the covariance factorization P = UDU^T, where U is a unit upper triangular matrix and D is diagonal. Emphasis is on computational efficiency and numerical stability, since these properties are of key importance in real-time filter applications. The history of square-root and U-D covariance filters is reviewed. Simple examples are given to illustrate the numerical inadequacy of the Kalman covariance filter algorithms; these examples show how factorization techniques can give improved computational reliability.
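
    The factorization P = UDU^T itself can be computed by a short backward recursion over columns; the sketch below follows the standard algorithm and is illustrative rather than a transcription of the paper's filter mechanization.

```python
import numpy as np

def udu_factor(P):
    """Factor a symmetric positive-definite P as U D U^T, with U unit upper
    triangular and D diagonal: the covariance form used by U-D filters.
    Processes columns from last to first, deflating a rank-one term each time."""
    n = P.shape[0]
    U = np.eye(n)
    D = np.zeros(n)
    P = P.astype(float).copy()                # work on a copy
    for j in range(n - 1, -1, -1):
        D[j] = P[j, j]
        for i in range(j):
            U[i, j] = P[i, j] / D[j]
        for i in range(j):                    # deflate the leading block:
            for k in range(i + 1):            # P <- P - d_j * u_j u_j^T
                P[k, i] -= U[k, j] * D[j] * U[i, j]
    return U, D
```

Propagating U and D instead of P avoids the loss of symmetry and positive definiteness that makes the plain covariance recursion numerically unreliable.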

  10. Acute total left main stem occlusion treated with emergency percutaneous coronary intervention

    PubMed Central

    Mozid, A M; Sritharan, K; Clesham, G J

    2010-01-01

    Acute total occlusion of the left main stem (LMS) is a rare cause of myocardial infarction but carries a high risk of morbidity and mortality including presentation as sudden death. We describe the case of a 68-year-old woman who presented acutely with chest pain and ST segment elevation in lead aVR on her ECG suggestive of possible LMS occlusion. Emergency coronary angiography confirmed acute total LMS occlusion as well as an anomalous dominant right coronary artery. The patient underwent emergency percutaneous coronary intervention of the LMS with a good angiographic result and resolution of her symptoms. The patient was treated for acute left ventricular failure but made a gradual recovery and was discharged home 7 days after admission.

  11. A nowcasting technique based on application of the particle filter blending algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai

    2017-10-01

    To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed by using the radar mosaic at an altitude of 2.5 km obtained from the radar images of 12 S-band radars in Guangdong Province, China. First, a bilateral filter was applied for quality control of the radar data; an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm were used to track radar echoes and retrieve the echo motion vectors; then, the motion vectors were blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motions; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation of the forecasts indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. Therefore, the particle filter blending method proves superior to traditional forecasting methods and can be used to enhance the ability of nowcasting in operational weather forecasts.
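
    The final extrapolation step can be illustrated in a few lines. A minimal backward semi-Lagrangian sketch, assuming a regular grid, motion vectors expressed in pixels per step, and nearest-neighbor sampling (an operational implementation would interpolate):

```python
import numpy as np

def semi_lagrangian(field, u, v, steps=1):
    """Backward semi-Lagrangian extrapolation: each output pixel takes the
    value found upstream along the motion vector (u, v), in pixels/step."""
    ny, nx = field.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing='ij')
    out = field
    for _ in range(steps):
        ys = np.clip(np.rint(yy - v).astype(int), 0, ny - 1)
        xs = np.clip(np.rint(xx - u).astype(int), 0, nx - 1)
        out = out[ys, xs]
    return out
```

    Applied repeatedly with the blended motion vector field, this advects the current echo pattern forward to the desired lead time.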

  12. Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ning; Huang, Zhenyu; Welch, Greg

    2012-05-24

    To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.
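
    A hedged sketch of how a generalized eigenvalue decomposition can rank measurement directions: directions in which the prior uncertainty H P Hᵀ is large relative to the noise covariance R carry the most information. The criterion and the names below are illustrative, not necessarily the paper's exact formulation:

```python
import numpy as np

def top_measurement_subspace(HPHt, R, k):
    """Solve the generalized eigenproblem HPHt @ d = lam * R @ d via a
    Cholesky whitening of R, and return the k largest eigenvalues with
    their generalized eigenvector directions."""
    L = np.linalg.cholesky(R)
    Linv = np.linalg.inv(L)
    M = Linv @ HPHt @ Linv.T          # symmetric standard eigenproblem
    w, V = np.linalg.eigh(M)          # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]
    return w[idx], Linv.T @ V[:, idx]
```

    Keeping only the top-k directions shrinks the effective measurement dimension, which is the computation-versus-accuracy tradeoff described above.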

  13. Angular-Rate Estimation Using Delayed Quaternion Measurements

    NASA Technical Reports Server (NTRS)

    Azor, R.; Bar-Itzhack, I. Y.; Harman, R. R.

    1999-01-01

    This paper presents algorithms for estimating the angular-rate vector of satellites using quaternion measurements. Two approaches are compared: in one, differentiated quaternion measurements yield coarse rate measurements, which are then fed into two different estimators; in the other, the raw quaternion measurements themselves are fed directly into the two estimators. The two estimators rely on the ability to decompose the non-linear part of the rotational dynamics equation of a body into a product of an angular-rate dependent matrix and the angular-rate vector itself. This non-unique decomposition enables the treatment of the nonlinear spacecraft (SC) dynamics model as a linear one and, thus, the application of a Pseudo-Linear Kalman Filter (PSELIKA). It also enables the application of a special Kalman filter which is based on the use of the solution of the State Dependent Algebraic Riccati Equation (SDARE) in order to compute the gain matrix and thus eliminates the need to compute recursively the filter covariance matrix. The replacement of the rotational dynamics by a simple Markov model is also examined. In this paper special consideration is given to the problem of delayed quaternion measurements. Two solutions to this problem are suggested and tested. Real Rossi X-Ray Timing Explorer (RXTE) data are used to test these algorithms, and results are presented.

  14. Per-point and per-field contextual classification of multipolarization and multiple incidence angle aircraft L-band radar data

    NASA Technical Reports Server (NTRS)

    Hoffer, Roger M.; Hussin, Yousif Ali

    1989-01-01

    Multipolarized aircraft L-band radar data are classified using two different image classification algorithms: (1) a per-point classifier, and (2) a contextual, or per-field, classifier. Due to the distinct variations in radar backscatter as a function of incidence angle, the data are stratified into three incidence-angle groupings, and training and test data are defined for each stratum. A low-pass digital mean filter with varied window size (i.e., 3x3, 5x5, and 7x7 pixels) is applied to the data prior to the classification. A predominantly forested area in northern Florida was the study site. The results obtained by using these image classifiers are then presented and discussed.
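
    The pre-classification smoothing step can be sketched directly. A minimal NumPy mean filter with edge padding; window=3, 5, or 7 corresponds to the 3x3, 5x5, and 7x7 windows above (the names are ours):

```python
import numpy as np

def mean_filter(img, window=3):
    """Low-pass mean filter with an odd square window; edge pixels are
    handled by replicating the border."""
    half = window // 2
    pad = np.pad(img.astype(float), half, mode='edge')
    shifts = [pad[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(window) for j in range(window)]
    return np.stack(shifts).mean(axis=0)
```

    Larger windows suppress more speckle at the cost of blurring field boundaries, which is the tradeoff behind testing several window sizes.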

  15. Research on the method of information system risk state estimation based on clustering particle filter

    NASA Astrophysics Data System (ADS)

    Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua

    2017-05-01

    With the purpose of reinforcing the correlation analysis of risk-assessment threat factors, a dynamic assessment method for safety risks based on particle filtering is proposed, with threat analysis at its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information-system risk levels by combining these weights with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced: all particles are clustered, and each cluster centroid serves as the representative in subsequent operations, reducing the amount of computation. Empirical results indicate that the method can reasonably capture the relations of mutual dependence and influence among risk elements. Under the circumstance of limited information, it provides a scientific basis for formulating a risk management control strategy.
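
    The cost-saving step (clustering the particles and operating on centroids as representatives) can be sketched as follows. A plain Lloyd-iteration k-means over a weighted particle set; the function names and the unweighted centroid update are our assumptions:

```python
import numpy as np

def compress_particles(particles, weights, k=10, iters=20, seed=0):
    """Reduce a particle set by k-means: cluster the particles and keep
    each centroid, carrying the cluster's total weight."""
    rng = np.random.default_rng(seed)
    centers = particles[rng.choice(len(particles), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(particles[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = particles[labels == j].mean(axis=0)
    new_w = np.array([weights[labels == j].sum() for j in range(k)])
    return centers, new_w
```

    Subsequent filter operations then run over k centroids instead of the full particle set, at the cost of within-cluster detail.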

  16. Improved Collaborative Filtering Algorithm via Information Transformation

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Wang, Bing-Hong; Guo, Qiang

    In this paper, we propose a spreading activation approach for collaborative filtering (SA-CF). By using the opinion spreading process, the similarity between any users can be obtained. The algorithm has remarkably higher accuracy than the standard collaborative filtering using the Pearson correlation. Furthermore, we introduce a free parameter β to regulate the contributions of objects to user-user correlations. The numerical results indicate that decreasing the influence of popular objects can further improve the algorithmic accuracy and personality. We argue that a better algorithm should simultaneously require less computation and generate higher accuracy. Accordingly, we further propose an algorithm involving only the top-N similar neighbors for each target user, which has both less computational complexity and higher algorithmic accuracy.
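
    The final top-N idea can be sketched with a standard user-based predictor. Note this sketch uses cosine similarity for brevity rather than the paper's opinion-spreading (SA-CF) similarity; all names are illustrative:

```python
import numpy as np

def predict_topn(ratings, user, item, n_neighbors=2):
    """Predict ratings[user, item] from the top-N most similar users who
    rated the item (ratings: users x items, 0 = unrated)."""
    sims = []
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue
        both = (ratings[user] > 0) & (ratings[other] > 0)
        if not both.any():
            continue
        a, b = ratings[user, both], ratings[other, both]
        sims.append((a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), other))
    top = sorted(sims, reverse=True)[:n_neighbors]
    if not top:
        return 0.0
    num = sum(s * ratings[o, item] for s, o in top)
    den = sum(abs(s) for s, _ in top)
    return num / den
```

    Restricting the sum to the top-N neighbors is what cuts both the computation and the noise from weakly correlated users.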

  17. Hydraulic Properties of Fractured Rock Samples at In-Situ Conditions - Insights from Lab Experiments Using X-Ray Tomography

    NASA Astrophysics Data System (ADS)

    Nehler, Mathias; Stöckhert, Ferdinand; Duda, Mandy; Renner, Jörg; Bracke, Rolf

    2017-04-01

    The hydraulic properties of low-porosity rock formations are controlled by the geometry of open fractures, joints and faults. Aperture, surface roughness, accessible length, and thus the fluid-accessible volume associated with such interfaces are strongly affected by the state of stress. Moreover, these properties may evolve with time, in particular due to processes involving chemically active fluids. Understanding the physico-chemical interactions of rocks with fluids at reservoir conditions will help to predict the long-term reservoir development and to increase the efficiency of geothermal power plants. We designed an x-ray transparent flow-through cell. Confining pressure can be up to 50 MPa and pore fluid can currently be circulated through the sample with pressures of up to 25 MPa. All wetted parts are made of PEEK to avoid corrosion when using highly saline fluids. Laboratory experiments were performed to investigate hydraulic properties of fractured low-porosity samples under reservoir conditions while x-rays are transmitted through the sample. The cell is placed inside a µCT scanner with a 225 kV multifocal x-ray tube for high resolution x-ray tomography. Samples measure 10 mm in diameter and 25 mm in length, resulting in a voxel resolution of approximately 10 µm. Samples with single natural as well as artificial fractures were subjected to various confining pressures ranging from 2.5 MPa to 25 MPa. At each pressure level, effective permeability was determined from steady-state flow relying on Darcy's law. In addition, a full 3D image was recorded by the µCT scanner to gain information on the fracture aperture and geometry. Subvolumes (400x400x400 voxels) of the images were analyzed to reduce computational cost. The subvolumes were filtered in 3D with an edge-preserving non-local means filter. Further quantification algorithms were implemented in Matlab. Segmentation into pore space and minerals was done automatically for all datasets by a peak finder algorithm.
For all samples, the threshold value was set as a fixed value between the two determined main peaks. A fracture is separated from pores using a connectivity filter. The overall porosity and the fracture volume are calculated. The mean aperture is used to calculate the in-situ fracture permeability according to the cubic law. First results indicate a strong dependency of the calculated permeability on pressure, especially for partly closed fractures, that is associated with an increasing contact area of the fracture.
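
    The cubic-law step mentioned above reduces to a one-line relation for a parallel-plate fracture of mean aperture b: the equivalent permeability is k = b²/12, and volumetric flow grows with the cube of the aperture. A sketch with illustrative parameter names (SI units assumed):

```python
def cubic_law_permeability(b):
    """Equivalent permeability (m^2) of a parallel-plate fracture with
    mean aperture b (m): k = b**2 / 12."""
    return b * b / 12.0

def fracture_flow(b, width, length, dp, mu=1e-3):
    """Volumetric flow (m^3/s) through the fracture for a pressure drop
    dp (Pa) over length (m), fluid viscosity mu (Pa*s):
    Q = width * b**3 / (12 * mu) * dp / length."""
    return width * b**3 / (12.0 * mu) * dp / length
```

    Halving the aperture as a fracture closes under pressure thus cuts the flow by a factor of eight, consistent with the strong pressure dependence of permeability reported above.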

  18. Paravertebral foramen screw fixation for posterior cervical spine fusion: biomechanical study and description of a novel technique.

    PubMed

    Maki, Satoshi; Aramomi, Masaaki; Matsuura, Yusuke; Furuya, Takeo; Ota, Mitsutoshi; Iijima, Yasushi; Saito, Junya; Suzuki, Takane; Mannoji, Chikato; Takahashi, Kazuhisa; Yamazaki, Masashi; Koda, Masao

    2017-10-01

    OBJECTIVE Fusion surgery with instrumentation is a widely accepted treatment for cervical spine pathologies. The authors propose a novel technique for subaxial cervical fusion surgery using paravertebral foramen screws (PVFS). The authors consider that PVFS provide biomechanical strength equal to or greater than that of lateral mass screws (LMS). The goals of this study were to conduct a biomechanical study of PVFS, to investigate the suitability of PVFS as salvage fixation for failed LMS, and to describe this novel technique. METHODS The authors harvested 24 human cervical spine vertebrae (C3-6) from 6 fresh-frozen cadaver specimens from donors whose mean age was 84.3 ± 10.4 years at death. For each vertebra, one side was chosen randomly for PVFS and the other for LMS. For PVFS, a 3.2-mm drill with a stopper was advanced under lateral fluoroscopic imaging. The drill stopper was set to 12 mm, which was considered sufficiently short not to breach the transverse foramen. The drill was directed from 20° to 25° medially so that the screw could purchase the relatively hard cancellous bone around the entry zone of the pedicle. The hole was tapped and a 4.5-mm-diameter × 12-mm screw was inserted. For LMS, 3.5-mm-diameter × 14-mm screws were inserted into the lateral mass of C3-6. The pullout strength of each screw was measured. After pullout testing of LMS, a drill was inserted into the screw hole and the superior cortex of the lateral mass was pried to cause a fracture through the screw hole, simulating intraoperative fracture of the lateral mass. After the procedure, PVFS for salvage (sPVFS) were inserted on the same side and pullout strength was measured. RESULTS The CT scans obtained after screw insertion revealed no sign of pedicle breaching, violation of the transverse foramen, or fracture of the lateral mass. A total of 69 screws were tested (23 PVFS, 23 LMS, and 23 sPVFS).
    One vertebra was not used because of a fracture that occurred while the specimen was prepared. The mean bone mineral density of the specimens was 0.29 ± 0.10 g/cm³. The mean pullout strength was 234 ± 114 N for PVFS, 158 ± 91 N for LMS, and 195 ± 125 N for sPVFS. The pullout strength for PVFS tended to be greater than that for LMS. However, the difference was not quite significant (p = 0.06). CONCLUSIONS The authors introduce a novel fixation technique for the subaxial cervical spine. This study suggests that PVFS tend to provide stronger fixation than LMS for initial applications and fixation equal to LMS for salvage applications. If placement of LMS fails, PVFS can serve as a salvage fixation technique.

  19. Reconstruction of three-dimensional ultrasound images based on cyclic Savitzky-Golay filters

    NASA Astrophysics Data System (ADS)

    Toonkum, Pollakrit; Suwanwela, Nijasri C.; Chinrungrueng, Chedsada

    2011-01-01

    We present a new algorithm for reconstructing a three-dimensional (3-D) ultrasound image from a series of two-dimensional B-scan ultrasound slices acquired in the mechanical linear scanning framework. Unlike most existing 3-D ultrasound reconstruction algorithms, which have been developed and evaluated in the freehand scanning framework, the new algorithm has been designed to capitalize on the regularity pattern of mechanical linear scanning, where all the B-scan slices are precisely parallel and evenly spaced. The new reconstruction algorithm, referred to as the cyclic Savitzky-Golay (CSG) reconstruction filter, is an improvement on the original Savitzky-Golay filter in two respects: First, it is extended to accept a 3-D array of data as the filter input instead of a one-dimensional data sequence. Second, it incorporates the cyclic indicator function in its least-squares objective function so that the CSG algorithm can simultaneously perform both smoothing and interpolating tasks. The performance of the CSG reconstruction filter, compared with that of most existing reconstruction algorithms in generating a 3-D synthetic test image and a clinical 3-D carotid artery bifurcation image in the mechanical linear scanning framework, is also reported.
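
    The building block the CSG filter extends is the local least-squares polynomial fit of the original Savitzky-Golay filter. A minimal 1-D sketch (the CSG filter generalizes this to 3-D arrays and adds the cyclic indicator term); names are ours:

```python
import numpy as np

def savgol_smooth(y, window=5, order=2):
    """1-D Savitzky-Golay smoothing: fit a polynomial by least squares in
    a sliding window and evaluate it at the window center. window must
    be odd; shortened windows are used near the edges."""
    half = window // 2
    out = np.empty(len(y), dtype=float)
    for i in range(len(y)):
        lo, hi = max(0, i - half), min(len(y), i + half + 1)
        x = np.arange(lo, hi) - i          # abscissa centered on sample i
        c = np.polyfit(x, y[lo:hi], min(order, hi - lo - 1))
        out[i] = np.polyval(c, 0.0)        # fitted value at the center
    return out
```

    Because the fit reproduces polynomials up to the chosen order exactly, smooth trends pass through unchanged while high-frequency noise is attenuated.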

  20. A Bayesian Approach to Period Searching in Solar Coronal Loops

    NASA Astrophysics Data System (ADS)

    Scherrer, Bryan; McKenzie, David

    2017-03-01

    We have applied a Bayesian generalized Lomb-Scargle period searching algorithm to movies of coronal loop images obtained with the Hinode X-ray Telescope (XRT) to search for evidence of periodicities that would indicate resonant heating of the loops. The algorithm makes as its only assumption that there is a single sinusoidal signal within each light curve of the data. Both the amplitudes and noise are taken as free parameters. It is argued that this procedure should be used alongside Fourier and wavelet analyses to more accurately extract periodic intensity modulations in coronal loops. The data analyzed are from XRT Observation Program #129C: “MHD Wave Heating (Thin Filters),” which occurred during 2006 November 13 and focused on active region 10293, which included coronal loops. The first data set spans approximately 10 min with an average cadence of 2 s, 2″ per pixel resolution, and used the Al-mesh analysis filter. The second data set spans approximately 4 min with a 3 s average cadence, 1″ per pixel resolution, and used the Al-poly analysis filter. The final data set spans approximately 22 min at a 6 s average cadence, and used the Al-poly analysis filter. In total, 55 periods of sinusoidal coronal loop oscillations between 5.5 and 59.6 s are discussed, supporting proposals in the literature that resonant absorption of magnetic waves is a viable mechanism for depositing energy in the corona.
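
    The core of a Lomb-Scargle-style search is a sinusoid fit at each trial frequency. A minimal least-squares periodogram sketch for (possibly unevenly sampled) light curves; this is a simple stand-in for the Bayesian generalized Lomb-Scargle actually used, and the names are ours:

```python
import numpy as np

def ls_power(t, y, freqs):
    """Least-squares periodogram: at each trial frequency f, fit
    y ~ a*cos(2*pi*f*t) + b*sin(2*pi*f*t) + c and record the fraction
    of variance explained."""
    y = y - y.mean()
    power = []
    for f in freqs:
        X = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ coef
        power.append(1.0 - (r @ r) / (y @ y))
    return np.array(power)
```

    The peak of the periodogram marks the best-fitting period; the Bayesian variant additionally treats amplitude and noise as free parameters with priors.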

  1. A Bayesian Approach to Period Searching in Solar Coronal Loops

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scherrer, Bryan; McKenzie, David

    2017-03-01

    We have applied a Bayesian generalized Lomb–Scargle period searching algorithm to movies of coronal loop images obtained with the Hinode X-ray Telescope (XRT) to search for evidence of periodicities that would indicate resonant heating of the loops. The algorithm makes as its only assumption that there is a single sinusoidal signal within each light curve of the data. Both the amplitudes and noise are taken as free parameters. It is argued that this procedure should be used alongside Fourier and wavelet analyses to more accurately extract periodic intensity modulations in coronal loops. The data analyzed are from XRT Observation Program #129C: “MHD Wave Heating (Thin Filters),” which occurred during 2006 November 13 and focused on active region 10293, which included coronal loops. The first data set spans approximately 10 min with an average cadence of 2 s, 2″ per pixel resolution, and used the Al-mesh analysis filter. The second data set spans approximately 4 min with a 3 s average cadence, 1″ per pixel resolution, and used the Al-poly analysis filter. The final data set spans approximately 22 min at a 6 s average cadence, and used the Al-poly analysis filter. In total, 55 periods of sinusoidal coronal loop oscillations between 5.5 and 59.6 s are discussed, supporting proposals in the literature that resonant absorption of magnetic waves is a viable mechanism for depositing energy in the corona.

  2. On-board attitude determination for the Explorer Platform satellite

    NASA Technical Reports Server (NTRS)

    Jayaraman, C.; Class, B.

    1992-01-01

    This paper describes the attitude determination algorithm for the Explorer Platform satellite. The algorithm, which is baselined on the Landsat code, is a six-element linear quadratic state estimation processor, in the form of a Kalman filter augmented by an adaptive filter process. Improvements to the original Landsat algorithm were required to meet mission pointing requirements. These consisted of a more efficient sensor processing algorithm and the addition of an adaptive filter which acts as a check on the Kalman filter during satellite slew maneuvers. A 1750A processor will be flown on board the satellite for the first time as a coprocessor (COP) in addition to the NASA Standard Spacecraft Computer. The attitude determination algorithm, which will be resident in the COP's memory, will make full use of its improved processing capabilities to meet mission requirements. Additional benefits were gained by writing the attitude determination code in Ada.

  3. A selective-update affine projection algorithm with selective input vectors

    NASA Astrophysics Data System (ADS)

    Kong, NamWoong; Shin, JaeWook; Park, PooGyeon

    2011-10-01

    This paper proposes an affine projection algorithm (APA) with selective input vectors, which is based on the concept of selective update in order to reduce estimation errors and computation. The algorithm consists of two procedures: input-vector-selection and state-decision. The input-vector-selection procedure determines the number of input vectors by checking with the mean square error (MSE) whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter by using the state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. On the other hand, as soon as the adaptive filter reaches the steady state, the update procedure is suspended. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.
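
    The underlying affine-projection update that the two selection procedures gate can be sketched as a single step, assuming the K most recent length-L input vectors are collected as the columns of a matrix X and delta is a small regularizer; names are illustrative:

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-6):
    """One affine-projection update. X: (L, K) matrix whose columns are
    the K most recent length-L input vectors; d: the K desired samples."""
    e = d - X.T @ w                                   # a-priori errors
    G = np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
    return w + mu * X @ G
```

    With K = 1 this reduces to the normalized LMS update; larger K speeds convergence for colored inputs at higher cost, which is exactly what makes skipping updates in the steady state attractive.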

  4. Kalman Filters for Time Delay of Arrival-Based Source Localization

    NASA Astrophysics Data System (ADS)

    Klee, Ulrich; Gehrig, Tobias; McDonough, John

    2006-12-01

    In this work, we propose an algorithm for acoustic source localization based on time delay of arrival (TDOA) estimation. In earlier work by other authors, an initial closed-form approximation was first used to estimate the true position of the speaker followed by a Kalman filtering stage to smooth the time series of estimates. In the proposed algorithm, this closed-form approximation is eliminated by employing a Kalman filter to directly update the speaker's position estimate based on the observed TDOAs. In particular, the TDOAs comprise the observation associated with an extended Kalman filter whose state corresponds to the speaker's position. We tested our algorithm on a data set consisting of seminars held by actual speakers. Our experiments revealed that the proposed algorithm provides source localization accuracy superior to the standard spherical and linear intersection techniques. Moreover, the proposed algorithm, although relying on an iterative optimization scheme, proved efficient enough for real-time operation.
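
    The paper's central idea, updating the position estimate directly from the observed TDOAs with an extended Kalman filter, can be sketched as one measurement update. The 2-D geometry, the sound-speed constant, and the names are illustrative assumptions:

```python
import numpy as np

C = 343.0  # assumed speed of sound in air, m/s

def ekf_tdoa_update(x, P, mics, ref, z, R):
    """One EKF measurement update: state x is the 2-D source position and
    z[i] is the observed TDOA between mic i and the reference mic."""
    di = np.linalg.norm(mics - x, axis=1)
    dr = np.linalg.norm(ref - x)
    h = (di - dr) / C                                    # predicted TDOAs
    H = ((x - mics) / di[:, None] - (x - ref) / dr) / C  # Jacobian dh/dx
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - h)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

    Because the observation model is used directly, no intermediate closed-form position fix is needed; re-running the update with a fresh prior behaves like a Gauss-Newton refinement.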

  5. Late Cenozoic thrusting of major faults along the central segment of Longmen Shan, eastern Tibet: Evidence from low-temperature thermochronology

    NASA Astrophysics Data System (ADS)

    Tan, Xi-Bin; Xu, Xi-Wei; Lee, Yuan-Hsi; Lu, Ren-Qi; Liu, Yiduo; Xu, Chong; Li, Kang; Yu, Gui-Hua; Kang, Wen-Jun

    2017-08-01

    The Cenozoic orogenic process of the Longmen Shan (LMS) and the kinematics of major faults along the LMS are crucial for understanding the growth history and mechanism of the eastern Tibetan Plateau. Three major faults, from west to east, are present in the central segment of the LMS: the Wenchuan-Maoxian Fault (WMF), the Beichuan-Yingxiu Fault (BYF), and the Jiangyou-Guanxian Fault (JGF). Previous research has focused mainly on the Pengguan Massif, between the WMF and BYF. However, limited low-temperature thermochronology data coverage in other areas prevents us from fully delineating the tectonic history of the LMS. In this study, we collect 22 samples from vertical profiles in the Xuelongbao Massif and the range-front area located on the hanging walls of the WMF and JGF, respectively, and conduct apatite and zircon fission track analyses. New fission track data reveal that the Xuelongbao Massif has been undergoing rapid exhumation at an average rate of 0.7-0.9 mm/yr since 11 Ma, and that the range-front area began rapid exhumation at 7.5 Ma with total exhumation of 2.5-4.5 km. The exhumation histories indicate that the three major faults (WMF, BYF and JGF) in the central LMS are all reverse faults, and show a basinward in-sequence propagation from the middle Miocene to the present day. Such a pattern further implies that upper crustal shortening has been the primary driver of the LMS' uplift during the Late Cenozoic. Nevertheless, middle-lower crustal deformation is difficult to constrain with the exhumation histories, and its contribution to the LMS' uplift cannot be ruled out.

  6. Radiotherapy-induced xerostomia, pre-clinical promise of LMS-611.

    PubMed

    Paterson, Claire; Caldwell, B; Porteous, S; McLean, A; Messow, C M; Thomson, M

    2016-02-01

    Radiotherapy-induced xerostomia (RIX) is the most common permanent side effect of radiotherapy (RT) to the head and neck (H&N). There is no effective topical treatment. LMS-611 is a mimetic of a natural lamellar body which prevents thick secretions like saliva from congesting organs. The primary objective of this study was to assess saliva properties before and during RT to the H&N. The secondary objectives were to re-assess saliva properties with the addition of LMS-611, measure inter-patient variability, correlate patient-reported symptoms with laboratory measurements and design subsequent first-in-human clinical trial of LMS-611. Patients with H&N cancer receiving RT as primary treatment were recruited. Patients completed the Groningen RIX (GRIX) questionnaire and provided saliva samples at baseline and weeks 2, 4 and 6 of RT. Saliva adhesiveness and viscosity were tested by measuring time taken to travel 5 cm down an inclined plane. Thirty patients were enrolled. The inclined plane test (IPT) results (s) were as follows: baseline 31.3, week 2 49.7, week 4 51.1 and week 6 55.7. Wide inter-patient variability was seen at baseline. GRIX scores increased as RT progressed. Spearman rank correlation coefficient of inclined plane tests with GRIX scores was -0.06 at baseline, 0.25 at week 2, 0.12 at week 4 and 0.08 at week 6. LMS-611 concentrations of 10 and 20 mg/ml significantly reduced IPT times on saliva samples. Saliva becomes more visco-adhesive and RIX worsens as RT progresses. There is little correlation between objective and subjective measures of RIX. The addition of LMS-611 to thick, sticky saliva restores its fluidity ex vivo. This warrants in vivo analysis of the effect of LMS-611 upon RIX.

  7. Nonlinear Motion Cueing Algorithm: Filtering at Pilot Station and Development of the Nonlinear Optimal Filters for Pitch and Roll

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

    Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees of freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm, which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for this modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, rather than only the offset of the cockpit centroid relative to the center of rotation. Results provided in this report suggest improved performance of the motion cueing algorithm.

  8. Computing Science and Statistics: Proceedings of the Symposium on Interface, 21-24 Apr 1991, Seattle, WA

    DTIC Science & Technology

    1991-01-01

    Indexed snippet (leading OCR fragment unrecoverable): “…reconstruction algorithms, usually of the filtered back-projection type, do not correct for nonuniform photon attenuation and depth…” (in the context of 99mTc-HMPAO and Thallium-201 imaging).

  9. Performance Comparison of Feature Extraction Algorithms for Target Detection and Classification

    DTIC Science & Technology

    2013-01-01

    Indexed snippet: “Detection and Classification” — Soheil Bahrampour, Asok Ray, Soumalya Sarkar, Thyagaraju Damarla, Nasser M. Nasrabadi. Keywords: feature extraction. A. Ray and S. Sarkar are with the Department of Mechanical Engineering, Pennsylvania State University, University Park, PA. Cited: G. Mallapragada, A. Ray, and X. Jin, “Symbolic dynamic filtering and language measure for behavior identification of mobile …”

  10. Structural Acoustic UXO Detection and Identification in Marine Environments

    DTIC Science & Technology

    2016-05-01

    Indexed snippet: acronyms BOSS (Buried Object Scanning Sonar), DVL (Doppler Velocity Log), EW (East/West), IMU (Inertial Measurement Unit), NRL (Naval Research Laboratory), NSWC-PCD (…). The system uses an Inertial Measurement Unit (IMU) to time-delay and coherently sum matched-filtered phase histories from subsurface focal points over a large number of (…). In the imaging algorithm, the 2D depth image of a target, i.e. one mapped over x and z or y and z, presents the (…).

  11. Impulsive noise removal from color video with morphological filtering

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly

    2017-09-01

    This paper deals with impulse noise removal from color video. The proposed noise removal algorithm employs switching filtering for denoising of color video; that is, detection of corrupted pixels by means of a novel morphological filtering, followed by replacement of the detected pixels based on estimates of uncorrupted pixels in previous frames. With the help of computer simulation we show that the proposed algorithm removes impulse noise from color video effectively. The performance of the proposed algorithm is compared in terms of image restoration metrics with that of common successful algorithms.
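
    The switching structure (detect first, then correct only the flagged pixels) can be sketched with a simplified detector that flags pixels deviating strongly from their 3x3 neighborhood median. The paper's morphological detector and its temporal estimation from previous frames are not reproduced here; names and the threshold are illustrative:

```python
import numpy as np

def switching_median(img, thresh=50):
    """Switching filter sketch: flag a pixel as an impulse when it differs
    from its 3x3 neighborhood median by more than thresh, and replace
    only the flagged pixels with that median."""
    pad = np.pad(img, 1, mode='edge')
    # stack the 9 shifted copies -> per-pixel neighborhood median
    stack = np.stack([pad[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3)])
    med = np.median(stack, axis=0)
    noisy = np.abs(img - med) > thresh
    out = img.copy()
    out[noisy] = med[noisy]
    return out
```

    Leaving uncorrupted pixels untouched is what distinguishes switching filters from a plain median filter, which blurs everything.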

  12. Picibanil (OK-432) in the treatment of head and neck lymphangiomas in children

    PubMed Central

    Rebuffini, Elena; Zuccarino, Luca; Grecchi, Emma; Carinci, Francesco; Merulla, Vittorio Emanuele

    2012-01-01

    Background: Picibanil (OK-432) is a lyophilized mixture of group A Streptococcus pyogenes with antineoplastic activity. Because of its capacity to produce a selective fibrosis of lymphangiomas (LMs), it was approved by the Japanese administration in 1995 for the treatment of LMs. Materials and Methods: We treated 15 children (age range: 6-60 months) affected by head and neck macrocystic LMs with intracystic injections (single dose of 0.2 mL) of Picibanil (1-3 injections). Results: Complete disappearance of the lesion was noticed in eight (53.33%) cases, a marked (>50%) reduction of LMs was found in five (33.33%) cases, while a moderate (<50%) response was recorded in two (13.33%) cases. Picibanil side effects included fever, local inflammation, and a transitory increase of blood platelets’ concentration; a single case of anemia was resolved with a concentrated red blood cell transfusion. Conclusions: Intracystic injection of Picibanil is an effective and safe treatment for macrocystic LMs in pediatric patients and may represent the treatment of choice in such cases, especially where surgical excision is associated with the risk of functional/cosmetic side effects. PMID:23814582

  13. Picibanil (OK-432) in the treatment of head and neck lymphangiomas in children.

    PubMed

    Rebuffini, Elena; Zuccarino, Luca; Grecchi, Emma; Carinci, Francesco; Merulla, Vittorio Emanuele

    2012-12-01

    Picibanil (OK-432) is a lyophilized mixture of group A Streptococcus pyogenes with antineoplastic activity. Because of its capacity to produce a selective fibrosis of lymphangiomas (LMs), it was approved by the Japanese administration in 1995 for the treatment of LMs. We treated 15 children (age range: 6-60 months) affected by head and neck macrocystic LMs with intracystic injections (single dose of 0.2 mL) of Picibanil (1-3 injections). Complete disappearance of the lesion was noticed in eight (53.33%) cases, a marked (>50%) reduction of LMs was found in five (33.33%) cases, while a moderate (<50%) response was recorded in two (13.33%) cases. Picibanil side effects included fever, local inflammation, and a transitory increase of blood platelets' concentration; a single case of anemia was resolved with a concentrated red blood cell transfusion. Intracystic injection of Picibanil is an effective and safe treatment for macrocystic LMs in pediatric patients and may represent the treatment of choice in such cases, especially where surgical excision is associated with the risk of functional/cosmetic side effects.

  14. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    NASA Astrophysics Data System (ADS)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.
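
    The paper's accelerated AM update is not given in the abstract; as a sketch of the convergent EM-family baseline it accelerates, the classical MLEM iteration can be written as follows (the system matrix `A`, data `y`, and image `x` are illustrative names, not from the paper):

```python
def mlem_update(x, A, y, eps=1e-12):
    """One MLEM iteration: x <- x / (A^T 1) * A^T (y / (A x)).

    A is an m x n system matrix (list of rows), y the measured counts,
    x the current nonnegative image estimate."""
    m, n = len(A), len(x)
    yhat = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]      # forward projection
    ratio = [y[i] / max(yhat[i], eps) for i in range(m)]                  # data/model ratio
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]             # sensitivity A^T 1
    back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]  # backprojection
    return [x[j] * back[j] / max(sens[j], eps) for j in range(n)]
```

    A data-consistent image is a fixed point of this multiplicative update, which is the convergence property the abstract's acceleration techniques must preserve.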

  15. One-dimensional error-diffusion technique adapted for binarization of rotationally symmetric pupil filters

    NASA Astrophysics Data System (ADS)

    Kowalczyk, Marek; Martínez-Corral, Manuel; Cichocki, Tomasz; Andrés, Pedro

    1995-02-01

    Two novel algorithms for the binarization of continuous rotationally symmetric real and positive pupil filters are presented. Both algorithms are based on the one-dimensional error-diffusion concept. In our numerical experiment an original gray-tone apodizer is substituted by a set of transparent and opaque concentric annular zones. Depending on the algorithm, the resulting binary mask consists of either equal-width or equal-area zones. The diffractive behavior of the binary filters is evaluated. It is shown that the filter with equal-width zones gives a Fraunhofer diffraction pattern more similar to that of the original gray-tone apodizer than the filter with equal-area zones, assuming in both cases the same resolution limit of the device used to print both filters.
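
    The one-dimensional error-diffusion concept both algorithms build on can be sketched as follows. This is a generic 1-D error-diffusion binarizer applied to a sampled radial transmittance profile, not the authors' exact zone-construction procedure (annulus-area weighting is omitted):

```python
def binarize_error_diffusion(profile):
    """1-D error diffusion: threshold each sample at 0.5 and push the
    quantization error onto the next sample, so the local average
    transmittance of the binary mask tracks the gray-tone profile."""
    out, err = [], 0.0
    for t in profile:
        v = t + err            # sample plus accumulated error
        b = 1.0 if v >= 0.5 else 0.0   # transparent (1) or opaque (0) zone
        err = v - b            # diffuse the residual forward
        out.append(b)
    return out
```

    For a constant 50% transmittance the output alternates transparent and opaque zones, preserving the average transmittance, which is the property that makes the binary mask diffract similarly to the gray-tone apodizer.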

  16. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

    Aiming at addressing the problem of the high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus, many redundant operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method, as a numerical approach, needs no precision-loss transformation/approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
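
    The abstract does not reproduce the offline-derived formulas; as a minimal illustration of the kind of saving it describes, the following sketch propagates a Kalman covariance while computing only the upper triangle of the symmetric result and mirroring it (function and variable names are assumptions, not the paper's):

```python
def propagate_cov_symmetric(A, P, Q):
    """Covariance propagation P' = A P A^T + Q, computing only the upper
    triangle and mirroring it, which roughly halves the multiply count
    for the (symmetric) covariance update."""
    n = len(P)
    AP = [[sum(A[i][k] * P[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    Pn = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):            # upper triangle only
            Pn[i][j] = sum(AP[i][k] * A[j][k] for k in range(n)) + Q[i][j]
            Pn[j][i] = Pn[i][j]          # mirror: the result is symmetric
    return Pn
```

    Exploiting sparsity works the same way: zero blocks of A identified offline simply drop out of the inner sums.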

  17. Two-microphone spatial filtering provides speech reception benefits for cochlear implant users in difficult acoustic environments

    PubMed Central

    Goldsworthy, Raymond L.; Delhorne, Lorraine A.; Desloge, Joseph G.; Braida, Louis D.

    2014-01-01

    This article introduces and provides an assessment of a spatial-filtering algorithm based on two closely-spaced (∼1 cm) microphones in a behind-the-ear shell. The evaluated spatial-filtering algorithm used fast (∼10 ms) temporal-spectral analysis to determine the location of incoming sounds and to enhance sounds arriving from straight ahead of the listener. Speech reception thresholds (SRTs) were measured for eight cochlear implant (CI) users using consonant and vowel materials under three processing conditions: An omni-directional response, a dipole-directional response, and the spatial-filtering algorithm. The background noise condition used three simultaneous time-reversed speech signals as interferers located at 90°, 180°, and 270°. Results indicated that the spatial-filtering algorithm can provide speech reception benefits of 5.8 to 10.7 dB SRT compared to an omni-directional response in a reverberant room with multiple noise sources. Given the observed SRT benefits, coupled with an efficient design, the proposed algorithm is promising as a CI noise-reduction solution. PMID:25096120

  18. Adaptive Estimation of Multiple Fading Factors for GPS/INS Integrated Navigation Systems.

    PubMed

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2017-06-01

    The Kalman filter has been widely applied in the field of dynamic navigation and positioning. However, its performance will be degraded in the presence of significant model errors and uncertain interferences. In the literature, the fading filter was proposed to control the influence of model errors, and the H-infinity filter can be adopted to address the uncertainties by minimizing the estimation error in the worst case. In this paper, a new multiple fading factor, suitable for the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation system, is proposed based on the optimization of the filter, and a comprehensive filtering algorithm is constructed by integrating the advantages of the H-infinity filter and the proposed multiple fading filter. Measurement data of the GPS/INS integrated navigation system are collected under actual conditions. The stability and robustness of the proposed filtering algorithm are tested with various experiments, and contrastive analyses are performed with the measurement data. Results demonstrate that both filter divergence and the influence of outliers are restrained effectively with the proposed filtering algorithm, and the precision of the filtering results is improved simultaneously.

  19. Filtered gradient reconstruction algorithm for compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Mejia, Yuri; Arguello, Henry

    2017-04-01

    Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure rarely follows a dense matrix distribution; such is the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm and yields improved image quality, is proposed. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φ^T y, where y is the compressive measurement vector. We show that the filter-based algorithm converges to better quality performance results than the unfiltered version. Simulation results highlight the relative performance gain over the existing iterative algorithms.
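
    A hedged sketch of the idea, under the assumption that the filtering step is simply a smoothing operator applied to each gradient iterate (the paper's actual filter is not specified in the abstract, and all names below are illustrative):

```python
def filtered_gradient_step(x, Phi, y, alpha, smooth):
    """One iteration of gradient descent on ||Phi x - y||^2, followed by
    a filtering step applied to the iterate (the extra stage the paper
    introduces into a conventional reconstruction loop)."""
    m, n = len(Phi), len(x)
    # residual r = Phi x - y
    r = [sum(Phi[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
    # gradient g = Phi^T r
    g = [sum(Phi[i][j] * r[i] for i in range(m)) for j in range(n)]
    z = [x[j] - alpha * g[j] for j in range(n)]   # gradient step
    return smooth(z)                              # filtering step
```

    With `smooth` set to the identity this reduces to plain gradient descent; a moving-average or edge-preserving filter in its place gives the filtered variant the abstract describes.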

  20. Development of an improved MATLAB GUI for the prediction of coefficients of restitution, and integration into LMS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baca, Renee Nicole; Congdon, Michael L.; Brake, Matthew Robert

    In 2012, a Matlab GUI for the prediction of the coefficient of restitution was developed in order to enable the formulation of more accurate Finite Element Analysis (FEA) models of components. This report details the development of a new Rebound Dynamics GUI, and how it differs from the previously developed program. The new GUI includes several new features, such as source and citation documentation for the material database, as well as a multiple-materials impact modeler for use with LMS Virtual.Lab Motion (LMS VLM), a rigid-body dynamics modeling software package. The Rebound Dynamics GUI has been designed to work with LMS VLM to enable straightforward incorporation of velocity-dependent coefficients of restitution in rigid-body dynamics simulations.

  1. The effect of pre-test carbohydrate ingestion on the anaerobic threshold, as determined by the lactate-minimum test.

    PubMed

    Rotstein, Arie; Dotan, Raffy; Zigel, Levana; Greenberg, Tally; Benyamini, Yael; Falk, Bareket

    2007-12-01

    The purpose of this study was to investigate the effect of pre-test carbohydrate (CHO) ingestion on anaerobic-threshold assessment using the lactate-minimum test (LMT). Fifteen competitive male distance runners capable of running 10 km in 33.5-43 min were used as subjects. LMT was performed following CHO (2x300 mL, 7% solution) or comparable placebo (Pl) ingestion, in a double-blind, randomized order. The LMT consisted of two high-intensity 1 min treadmill runs (17-21 km.h(-1)), followed by an 8 min recovery period. Subsequently, subjects performed 5 min running stages, incremented by 0.6 km.h(-1) and separated by 1 min blood-sampling intervals. Tests were terminated after 3 consecutive increases in blood-lactate concentration ([La]) had been observed. Finger-tip capillary blood was sampled for [La] and blood-glucose determination 30 min before the test's onset, during the recovery phase following the 2 high-intensity runs, and following each of the subsequent 5 min stages. Heart rate (HR) and rating of perceived exertion (RPE) were recorded after each stage. The lactate-minimum speed (LMS) was determined from the individual [La]-velocity plots and was considered reflective of the anaerobic threshold. Pre-test CHO ingestion had no effect on LMS (13.19+/-1.12 km.h(-1) vs. 13.17+/-1.08 km.h(-1) in CHO and Pl, respectively), nor on [La] and glucose concentration at that speed, or on HR and RPE responses. Pre-test CHO ingestion therefore does not affect LMS or the LMT-estimated anaerobic threshold.

  2. Space Object Maneuver Detection Algorithms Using TLE Data

    NASA Astrophysics Data System (ADS)

    Pittelkau, M.

    2016-09-01

    An important aspect of Space Situational Awareness (SSA) is detection of deliberate and accidental orbit changes of space objects. Although space surveillance systems detect orbit maneuvers within their tracking algorithms, maneuver data are not readily disseminated for general use. However, two-line element (TLE) data are available and can be used to detect maneuvers of space objects. This work is an attempt to improve upon existing TLE-based maneuver detection algorithms. Three adaptive maneuver detection algorithms are developed and evaluated: The first is a fading-memory Kalman filter, which is equivalent to the sliding-window least-squares polynomial fit, but computationally more efficient and adaptive to the noise in the TLE data. The second algorithm is based on a sample cumulative distribution function (CDF) computed from a histogram of the magnitude-squared |ΔV|² of change-in-velocity vectors (ΔV), which are computed from the TLE data. A maneuver detection threshold is computed from the median estimated from the CDF, or from the CDF and a specified probability of false alarm. The third algorithm is a median filter. The median filter is the simplest of a class of nonlinear filters called order statistics filters, which fall within the theory of robust statistics. The output of the median filter is practically insensitive to outliers, or large maneuvers. The median of the |ΔV|² data is proportional to the variance of the ΔV, so the variance is estimated from the output of the median filter. A maneuver is detected when the input data exceed a constant times the estimated variance.
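
    The third (median filter) algorithm can be illustrated with a minimal running-median detector on the |ΔV|² series; the window length and threshold factor below are illustrative choices, not the paper's values:

```python
def detect_maneuvers(dv2, window=5, k=10.0):
    """Flag samples of the |dV|^2 series that exceed k times the running
    median. The median is robust to the maneuver spikes themselves, so it
    serves as a scale (variance) proxy for the quiescent noise."""
    half = window // 2
    flags = []
    for i in range(len(dv2)):
        lo, hi = max(0, i - half), min(len(dv2), i + half + 1)
        med = sorted(dv2[lo:hi])[(hi - lo) // 2]   # running median
        flags.append(dv2[i] > k * med)             # threshold test
    return flags
```

    Because a single large maneuver barely moves the window median, the detector's threshold stays calibrated to the TLE noise floor rather than to the outliers it is trying to find.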

  3. De-Dopplerization of Acoustic Measurements

    DTIC Science & Technology

    2017-08-10

    ... band energy obtained from fractional octave band digital filters generates a de-Dopplerized spectrum without complex resampling algorithms. An equation ... fractional octave representation and smearing that occurs within the spectrum, digital filtering techniques were not considered by these earlier ...

  4. Prospective implementation of an algorithm for bedside intravascular ultrasound-guided filter placement in critically ill patients.

    PubMed

    Killingsworth, Christopher D; Taylor, Steven M; Patterson, Mark A; Weinberg, Jordan A; McGwin, Gerald; Melton, Sherry M; Reiff, Donald A; Kerby, Jeffrey D; Rue, Loring W; Jordan, William D; Passman, Marc A

    2010-05-01

    Although contrast venography is the standard imaging method for inferior vena cava (IVC) filter insertion, intravascular ultrasound (IVUS) imaging is a safe and effective option that allows for bedside filter placement and is especially advantageous for immobilized critically ill patients by limiting resource use, risk of transportation, and cost. This study reviewed the effectiveness of a prospectively implemented algorithm for IVUS-guided IVC filter placement in this high-risk population. Current evidence-based guidelines were used to create a clinical decision algorithm for IVUS-guided IVC filter placement in critically ill patients. After a defined lead-in phase to allow dissemination of techniques, the algorithm was prospectively implemented on January 1, 2008. Data were collected for 1 year using accepted reporting standards, and a quality assurance review was performed based on intent-to-treat at 6, 12, and 18 months. As defined in the prospectively implemented algorithm, 109 patients met criteria for IVUS-directed bedside IVC filter placement. Technical feasibility was 98.1%. Only 2 patients had inadequate IVUS visualization for bedside filter placement and required subsequent placement in the endovascular suite. Technical success, defined as proper deployment in an infrarenal position, was achieved in 104 of the remaining 107 patients (97.2%). The filter was permanent in 21 (19.6%) and retrievable in 86 (80.3%). The single-puncture technique was used in 101 (94.4%), with additional dual access required in 6 (5.6%). Periprocedural complications were rare but included malpositioning requiring retrieval and repositioning in three patients, filter tilt ≥15 degrees in two, and arteriovenous fistula in one. The 30-day mortality rate for the bedside group was 5.5%, with no filter-related deaths. 
Successful placement of IVC filters using IVUS-guided imaging at the bedside in critically ill patients can be established through an evidence-based prospectively implemented algorithm, thereby limiting the need for transport in this high-risk population. Copyright (c) 2010 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.

  5. A deblocking algorithm based on color psychology for display quality enhancement

    NASA Astrophysics Data System (ADS)

    Yeh, Chia-Hung; Tseng, Wen-Yu; Huang, Kai-Lin

    2012-12-01

    This article proposes a post-processing deblocking filter to reduce blocking effects. The proposed algorithm detects blocking effects by fusing the results of Sobel edge detector and wavelet-based edge detector. The filtering stage provides four filter modes to eliminate blocking effects at different color regions according to human color vision and color psychology analysis. Experimental results show that the proposed algorithm has better subjective and objective qualities for H.264/AVC reconstructed videos when compared to several existing methods.

  6. MR fingerprinting reconstruction with Kalman filter.

    PubMed

    Zhang, Xiaodi; Zhou, Zechen; Chen, Shiyang; Chen, Shuo; Li, Rui; Hu, Xiaoping

    2017-09-01

    Magnetic resonance fingerprinting (MR fingerprinting or MRF) is a newly introduced quantitative magnetic resonance imaging technique, which enables simultaneous multi-parameter mapping in a single acquisition with improved time efficiency. The current MRF reconstruction method is based on dictionary matching, which may be limited by the discrete and finite nature of the dictionary and the computational cost associated with dictionary construction, storage and matching. In this paper, we describe a reconstruction method based on the Kalman filter for MRF, which avoids the use of a dictionary to obtain continuous MR parameter measurements. Within this Kalman filter framework, the Bloch equation of the inversion-recovery balanced steady state free-precession (IR-bSSFP) MRF sequence was derived to predict signal evolution, and the acquired signal was used to update the prediction. The algorithm can gradually estimate the accurate MR parameters during the recursive calculation. Single-pixel and numerical brain phantom simulations were implemented with the Kalman filter, and the results were compared with those from the dictionary matching reconstruction algorithm to demonstrate the feasibility and assess the performance of the Kalman filter algorithm. The results demonstrated that the Kalman filter algorithm is applicable for MRF reconstruction, eliminating the need for a predefined dictionary and obtaining continuous MR parameters, in contrast to the dictionary matching algorithm. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Two-Microphone Spatial Filtering Improves Speech Reception for Cochlear-Implant Users in Reverberant Conditions With Multiple Noise Sources

    PubMed Central

    2014-01-01

    This study evaluates a spatial-filtering algorithm as a method to improve speech reception for cochlear-implant (CI) users in reverberant environments with multiple noise sources. The algorithm was designed to filter sounds using phase differences between two microphones situated 1 cm apart in a behind-the-ear hearing-aid capsule. Speech reception thresholds (SRTs) were measured using a Coordinate Response Measure for six CI users in 27 listening conditions including each combination of reverberation level (T60 = 0, 270, and 540 ms), number of noise sources (1, 4, and 11), and signal-processing algorithm (omnidirectional response, dipole-directional response, and spatial-filtering algorithm). Noise sources were time-reversed speech segments randomly drawn from the Institute of Electrical and Electronics Engineers sentence recordings. Target speech and noise sources were processed using a room simulation method allowing precise control over reverberation times and sound-source locations. The spatial-filtering algorithm was found to provide improvements in SRTs on the order of 6.5 to 11.0 dB across listening conditions compared with the omnidirectional response. This result indicates that such phase-based spatial filtering can improve speech reception for CI users even in highly reverberant conditions with multiple noise sources. PMID:25330772

  8. Flatness-based model inverse for feed-forward braking control

    NASA Astrophysics Data System (ADS)

    de Vries, Edwin; Fehn, Achim; Rixen, Daniel

    2010-12-01

    For modern cars an increasing number of driver assistance systems have been developed. Some of these systems interfere/assist with the braking of a car. Here, a brake actuation algorithm for each individual wheel that can respond to both driver inputs and artificial vehicle deceleration set points is developed. The algorithm consists of a feed-forward control that ensures, within the modelled system plant, the optimal behaviour of the vehicle. For the quarter-car model with the LuGre tyre behavioural model, an inverse model can be derived using v_x as the 'flat output', that is, the input for the inverse model. A number of time derivatives of the flat output are required to calculate the model input, brake torque. Polynomial trajectory planning provides the needed time derivatives of the deceleration request. The transition time of the planning can be adjusted to meet actuator constraints. It is shown that the output of the trajectory planning would ripple and introduce a time delay when a gradual continuous increase of deceleration is requested by the driver. Derivative filters are then considered: the Bessel filter provides the best symmetry in its step response. A filter of the same order and with negative real poles is also used, exhibiting neither overshoot nor ringing. For these reasons, the 'real-poles' filter would be preferred over the Bessel filter. The half-car model can be used to predict the change in normal load on the front and rear axles due to the pitching of the vehicle. The anticipated dynamic variation of the wheel load can be included in the inverse model, even though it is based on a quarter-car. Brake force distribution proportional to normal load is established. It provides more natural and simpler equations than a fixed force-ratio strategy.
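
    The polynomial trajectory planning step can be illustrated with a minimum-jerk quintic, which supplies the planned value and its first two time derivatives in closed form; this is a generic sketch, not necessarily the polynomial order or boundary conditions used in the paper:

```python
def min_jerk(t, T, a0, a1):
    """Quintic (minimum-jerk) trajectory from a0 to a1 over [0, T] with
    zero boundary velocity and acceleration. Returns the value and its
    first and second time derivatives, as a feed-forward controller needs."""
    tau = min(max(t / T, 0.0), 1.0)                    # normalized time
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5         # shape function
    ds = (30 * tau**2 - 60 * tau**3 + 30 * tau**4) / T
    dds = (60 * tau - 180 * tau**2 + 120 * tau**3) / T**2
    d = a1 - a0
    return a0 + d * s, d * ds, d * dds
```

    Because the derivatives come from the same polynomial, they are smooth and ripple-free, which is exactly the property the abstract seeks from trajectory planning before falling back on derivative filters.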

  9. ECG Denoising Using Marginalized Particle Extended Kalman Filter With an Automatic Particle Weighting Strategy.

    PubMed

    Hesar, Hamed Danandeh; Mohebbi, Maryam

    2017-05-01

    In this paper, a model-based Bayesian filtering framework called the "marginalized particle-extended Kalman filter (MP-EKF) algorithm" is proposed for electrocardiogram (ECG) denoising. This algorithm does not have the extended Kalman filter (EKF) shortcoming in handling non-Gaussian nonstationary situations because of its nonlinear framework. In addition, it has less computational complexity compared with the particle filter. This filter improves ECG denoising performance by implementing a marginalized particle filter framework while reducing its computational complexity using the EKF framework. An automatic particle weighting strategy is also proposed here that controls the reliance of our framework on the acquired measurements. We evaluated the proposed filter on several normal ECGs selected from the MIT-BIH normal sinus rhythm database. To do so, artificial white Gaussian and colored noises as well as nonstationary real muscle artifact (MA) noise over a range of low SNRs from 10 to -5 dB were added to these normal ECG segments. The benchmark methods were the EKF and extended Kalman smoother (EKS) algorithms, which are the first model-based Bayesian algorithms introduced in the field of ECG denoising. From an SNR viewpoint, the experiments showed that in the presence of Gaussian white noise, the proposed framework outperforms the EKF and EKS algorithms at lower input SNRs where the measurements and state model are not reliable. Owing to its nonlinear framework and particle weighting strategy, the proposed algorithm attained better results at all input SNRs in non-Gaussian nonstationary situations (such as the presence of pink noise, brown noise, and real MA). In addition, the impact of the proposed filtering method on the distortion of diagnostic features of the ECG was investigated and compared with the EKF/EKS methods using an ECG diagnostic distortion measure called the "Multi-Scale Entropy Based Weighted Distortion Measure" or MSEWPRD. 
    The results revealed that our proposed algorithm had the lowest MSEWPRD for all noise types at low input SNRs. Therefore, the morphology and diagnostic information of ECG signals were much better preserved compared with the EKF/EKS frameworks, especially in non-Gaussian nonstationary situations.

  10. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    PubMed

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties of atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum-variance filter for Mars entry navigation. This filter has been designed to solve this problem by estimating the state and the unknown measurement biases simultaneously in a derivative-free manner, leading to a high-precision algorithm for Mars entry navigation. IMU/radio-beacon integrated navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves an accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its suitability as a high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Band-pass filtering algorithms for adaptive control of compressor pre-stall modes in aircraft gas-turbine engine

    NASA Astrophysics Data System (ADS)

    Kuznetsova, T. A.

    2018-05-01

    The methods for increasing the adaptive properties of gas-turbine aircraft engines (GTE) against interference, based on enhancement of their automatic control systems (ACS), are analyzed. Flow pulsations in the suction and discharge lines of the compressor, which may cause stall, are considered as the interference. An algorithmic solution to the problem of controlling GTE pre-stall modes, adapted to the stability boundary, is proposed. The aim of the study is to develop band-pass filtering algorithms that provide the detection functions for compressor pre-stall modes in the GTE ACS. The characteristic feature of the pre-stall effect is an increase of the pressure pulsation amplitude over the impeller at multiples of the rotor frequencies. The method used is based on a band-pass filter combining low-pass and high-pass digital filters. The impulse response of the high-pass filter is determined from a known low-pass filter impulse response by spectral inversion. The resulting transfer function of the second-order band-pass filter (BPF) corresponds to a stable system. Two circuit implementations of the BPF are synthesized. The designed band-pass filtering algorithms were tested in the MATLAB environment. Comparative analysis of the amplitude-frequency responses of the proposed implementations allows choosing the BPF scheme providing the best quality of filtration. The BPF reaction to a periodic sinusoidal signal, simulating the experimentally obtained pressure pulsation function in the pre-stall mode, was considered. The results of the model experiment demonstrated the effectiveness of applying band-pass filtering algorithms as part of the ACS to identify the pre-stall mode of the compressor by detecting the pressure-fluctuation peaks that characterize the compressor's approach to the stability boundary.
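
    The described construction (a high-pass obtained by spectral inversion of a low-pass impulse response, then combined into a band-pass) can be sketched for FIR filters as follows; the windowed-sinc design, Hamming window, and tap counts are illustrative assumptions, not the paper's second-order design:

```python
import math

def lowpass_fir(num_taps, fc):
    """Windowed-sinc low-pass FIR (Hamming window); fc in cycles/sample.
    num_taps must be odd so a centre tap exists for spectral inversion."""
    M = num_taps - 1
    h = []
    for n in range(num_taps):
        x = n - M / 2
        s = 2 * fc if x == 0 else math.sin(2 * math.pi * fc * x) / (math.pi * x)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / M)
        h.append(s * w)
    g = sum(h)
    return [v / g for v in h]          # normalize to unit DC gain

def spectral_invert(h):
    """High-pass from low-pass: negate all taps, add 1 at the centre tap."""
    hp = [-v for v in h]
    hp[len(h) // 2] += 1.0
    return hp

def bandpass(num_taps, f_lo, f_hi):
    """Band-pass as a cascade: low-pass at f_hi convolved with the
    spectral inversion of a low-pass at f_lo."""
    lp = lowpass_fir(num_taps, f_hi)
    hp = spectral_invert(lowpass_fir(num_taps, f_lo))
    out = [0.0] * (len(lp) + len(hp) - 1)
    for i, a in enumerate(lp):         # direct convolution of the two responses
        for j, b in enumerate(hp):
            out[i + j] += a * b
    return out
```

    The cascade rejects DC by construction: the high-pass stage has zero gain at zero frequency, so the band-pass tap sum is (numerically) zero.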

  12. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to the family of affine projection adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics. Accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In VSS-SR-APA, an optimal selection of input regressors is performed during the adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
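
    The VSS step-size derivation is not reproduced in the abstract; as the baseline these algorithms extend, the order-one affine projection update (i.e., NLMS) looks like this, with a fixed scalar step size `mu` and illustrative names:

```python
def nlms_step(w, x, d, mu=0.5, delta=1e-6):
    """One NLMS update (affine projection of order one):
    w <- w + mu * e * x / (||x||^2 + delta), where e = d - w^T x
    is the a priori error and delta is a small regularizer."""
    y = sum(wi * xi for wi, xi in zip(w, x))   # filter output
    e = d - y                                  # a priori error
    norm = sum(xi * xi for xi in x) + delta
    return [wi + mu * e * xi / norm for wi, xi in zip(w, x)], e
```

    The paper's VSS variants replace the fixed `mu` with a step-size vector chosen to minimize the MSD at each iteration; the SPU variants update only a subset of the coefficients per step.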

  13. An improved algorithm of laser spot center detection in strong noise background

    NASA Astrophysics Data System (ADS)

    Zhang, Le; Wang, Qianqian; Cui, Xutai; Zhao, Yu; Peng, Zhong

    2018-01-01

    Laser spot center detection is demanded in many applications. Common algorithms for laser spot center detection, such as the centroid and Hough transform methods, have poor anti-interference ability and low detection accuracy under strong background noise. In this paper, firstly, median filtering was used to remove the noise while preserving the edge details of the image. Secondly, binarization of the laser facula image was carried out to extract the target image from the background. Then morphological filtering was performed to eliminate noise points inside and outside the spot. At last, the edge of the pretreated facula image was extracted and the laser spot center was obtained by using the circle fitting method. Building on the circle fitting algorithm, the improved algorithm adds median filtering, morphological filtering, and other processing steps. Theoretical analysis and experimental verification show that this method effectively filters background noise, which enhances the anti-interference ability of laser spot center detection and also improves the detection accuracy.
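
    The pre-processing pipeline (median filtering, binarization, center estimation) can be sketched in miniature; a centroid of the binarized region is substituted here for the paper's circle fit of the extracted edge, and the morphological step is omitted:

```python
def median3(img):
    """3x3 median filter with edge clamping: suppresses salt-and-pepper
    noise while preserving large bright regions."""
    H, W = len(img), len(img[0])
    out = [[0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            vals = [img[min(max(i + di, 0), H - 1)][min(max(j + dj, 0), W - 1)]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = sorted(vals)[4]        # median of 9
    return out

def spot_center(img, thresh=128):
    """Median-filter, binarize at `thresh`, then return the centroid
    (row, col) of the remaining bright pixels."""
    img = median3(img)
    pts = [(i, j) for i, row in enumerate(img)
           for j, v in enumerate(row) if v >= thresh]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
```

    Isolated noise pixels vanish under the median filter, so they no longer bias the center estimate, which is the anti-interference property the abstract claims for the improved algorithm.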

  14. CUDA-based acceleration of collateral filtering in brain MR images

    NASA Astrophysics Data System (ADS)

    Li, Cheng-Yuan; Chang, Herng-Hua

    2017-02-01

    Image denoising is one of the fundamental and essential tasks within image processing. In medical imaging, finding an effective algorithm that can remove random noise in MR images is important. This paper proposes an effective noise reduction method for brain magnetic resonance (MR) images. Our approach is based on the collateral filter which is a more powerful method than the bilateral filter in many cases. However, the computation of the collateral filter algorithm is quite time-consuming. To solve this problem, we improved the collateral filter algorithm with parallel computing using GPU. We adopted CUDA, an application programming interface for GPU by NVIDIA, to accelerate the computation. Our experimental evaluation on an Intel Xeon CPU E5-2620 v3 2.40GHz with a NVIDIA Tesla K40c GPU indicated that the proposed implementation runs dramatically faster than the traditional collateral filter. We believe that the proposed framework has established a general blueprint for achieving fast and robust filtering in a wide variety of medical image denoising applications.

  15. Optimization of CT image reconstruction algorithms for the lung tissue research consortium (LTRC)

    NASA Astrophysics Data System (ADS)

    McCollough, Cynthia; Zhang, Jie; Bruesewitz, Michael; Bartholmai, Brian

    2006-03-01

    To create a repository of clinical data, CT images, and tissue samples, and to more clearly understand the pathogenetic features of pulmonary fibrosis and emphysema, the National Heart, Lung, and Blood Institute (NHLBI) launched a cooperative effort known as the Lung Tissue Research Consortium (LTRC). The CT images for the LTRC effort must contain accurate CT numbers in order to characterize tissues, and must have high spatial resolution to show fine anatomic structures. This study was performed to optimize the CT image reconstruction algorithms to achieve these criteria. Quantitative analyses of phantom and clinical images were conducted. The ACR CT accreditation phantom, containing five regions of distinct CT attenuation (CT numbers of approximately -1000 HU, -80 HU, 0 HU, 130 HU and 900 HU) and a high-contrast spatial resolution test pattern, was scanned using CT systems from two manufacturers (General Electric (GE) Healthcare and Siemens Medical Solutions). Phantom images were reconstructed using all relevant reconstruction algorithms. Mean CT numbers and image noise (standard deviation) were measured and compared for the five materials. Clinical high-resolution chest CT images acquired on a GE CT system for a patient with diffuse lung disease were reconstructed using the BONE and STANDARD algorithms and evaluated by a thoracic radiologist in terms of image quality and disease extent. The clinical BONE images were processed with a 3×3×3 median filter to simulate a thicker slice reconstructed with a smoother kernel, which has traditionally been shown to provide an accurate estimation of emphysema extent in the lungs. Using a threshold technique, the extent of emphysema (defined as the percentage of lung voxels having a CT number lower than -950 HU) was computed for the STANDARD, BONE, and BONE-filtered images. The CT numbers measured in the ACR CT phantom images were accurate for all reconstruction kernels for both manufacturers. 
As expected, visual evaluation of the spatial resolution bar patterns demonstrated that the BONE (GE) and B46f (Siemens) kernels showed higher spatial resolution than the STANDARD (GE) and B30f (Siemens) reconstruction algorithms typically used for routine body CT imaging. Only the sharper images were deemed clinically acceptable for the evaluation of diffuse lung disease (e.g., emphysema). Quantitative analyses of the extent of emphysema in the patient data showed the percent lung volume below the -950 HU threshold as 9.4% for the BONE reconstruction, 5.9% for the STANDARD reconstruction, and 4.7% for the BONE-filtered images. Contrary to the practice of using standard-resolution CT images for the quantitation of diffuse lung disease, these data demonstrate that a single sharp reconstruction (BONE/B46f) should be used for both the qualitative and quantitative evaluation of diffuse lung disease. The sharper reconstruction images, which are required for diagnostic interpretation, provide accurate CT numbers over the range of -1000 to +900 HU and preserve the fidelity of small structures in the reconstructed images. A filtered version of the sharper images can be accurately substituted for images reconstructed with smoother kernels for comparison to previously published results.
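    The threshold technique described above reduces to counting lung voxels below -950 HU. A minimal NumPy sketch, where the `lung_mask` argument is a hypothetical precomputed lung segmentation (not described in the abstract):

    ```python
    import numpy as np

    def emphysema_percent(ct_volume, lung_mask, threshold=-950.0):
        """Percentage of lung voxels with a CT number below the threshold (HU).
        ct_volume: 3-D array of CT numbers; lung_mask: boolean array of the
        same shape marking lung voxels."""
        lung = ct_volume[lung_mask]
        return 100.0 * np.count_nonzero(lung < threshold) / lung.size
    ```

    Because the metric is a simple voxel count, it is sensitive to the noise texture of the reconstruction kernel, which is why the study compares STANDARD, BONE, and median-filtered BONE images.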

  16. A hand tracking algorithm with particle filter and improved GVF snake model

    NASA Astrophysics Data System (ADS)

    Sun, Yi-qi; Wu, Ai-guo; Dong, Na; Shao, Yi-zhe

    2017-07-01

    To solve the problem that accurate hand information cannot be obtained by a particle filter alone, a hand tracking algorithm based on a particle filter combined with a skin-color adaptive gradient vector flow (GVF) snake model is proposed. An adaptive GVF and a skin-color adaptive external guidance force are introduced into the traditional GVF snake model, guiding the curve to converge quickly to the deep concave regions of the hand contour and capturing the complex hand contour accurately. The algorithm corrects the particle filter parameters in real time, avoiding the particle drift phenomenon. Experimental results show that the proposed algorithm reduces the root mean square error of hand tracking by 53% and improves tracking accuracy against complex and moving backgrounds, even with a large range of occlusion.
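    The particle filter half of such a tracker can be sketched generically. This is a plain 1-D bootstrap particle filter with a random-walk motion model and a Gaussian likelihood; the paper's skin-color GVF snake observation model and parameter correction are not reproduced here:

    ```python
    import numpy as np

    def particle_filter(observations, n_particles=500, proc_std=1.0,
                        obs_std=2.0, seed=0):
        """Minimal bootstrap particle filter for a 1-D random-walk state
        observed with additive Gaussian noise."""
        rng = np.random.default_rng(seed)
        particles = rng.normal(observations[0], obs_std, n_particles)
        estimates = []
        for z in observations:
            # Predict: propagate particles through the random-walk motion model.
            particles += rng.normal(0.0, proc_std, n_particles)
            # Update: weight particles by the Gaussian observation likelihood.
            w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
            w /= w.sum()
            # Resample in proportion to weight (multinomial resampling) to
            # avoid weight degeneracy.
            idx = rng.choice(n_particles, n_particles, p=w)
            particles = particles[idx]
            estimates.append(particles.mean())
        return np.array(estimates)
    ```

    In the paper, the snake-derived contour effectively replaces the naive likelihood above, which is what keeps the particles from drifting off the hand.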

  17. Software Technology Readiness Assessment. Defense Acquisition Guidance with Space Examples

    DTIC Science & Technology

    2010-04-01

    are never Software CTE candidates 19 Algorithm Example: Filters • Definitions – Filters in Signal Processing • A filter is a mathematical algorithm...Segment Segment • SOA as a CTE? – Google produced 40 million (!) hits in 0.2 sec for “SOA”. Even if we discount hits on the Society of Actuaries and

  18. Filtering observations without the initial guess

    NASA Astrophysics Data System (ADS)

    Chin, T. M.; Abbondanza, C.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; Soja, B.; Wu, X.

    2017-12-01

    Noisy geophysical observations sampled irregularly over space and time are often numerically "analyzed" or "filtered" before scientific usage. The standard analysis and filtering techniques based on the Bayesian principle require an "a priori" joint distribution of all the geophysical parameters of interest. However, such prior distributions are seldom fully known in practice, and best-guess mean values (e.g., "climatology" or "background" data if available) accompanied by some arbitrarily set covariance values are often used in their place. It is therefore desirable to be able to exploit efficient (time-sequential) Bayesian algorithms like the Kalman filter without being forced to provide a prior distribution (i.e., an initial mean and covariance). An example is the estimation of the terrestrial reference frame (TRF), where the required numerical precision is such that any use of a priori constraints on the observation data needs to be minimized. We present the Information Filter algorithm, a variant of the Kalman filter that does not require an initial distribution, and apply the algorithm (and an accompanying smoothing algorithm) to the TRF estimation problem. We show that the information filter allows temporal propagation of partial information on the distribution (the marginal distribution of a transformed version of the state vector), instead of the full distribution (mean and covariance) required by the standard Kalman filter. The information filter appears to be a natural choice for filtering observational data in the general case where a prior assumption on the initial estimate is unavailable and/or undesirable. For application to data assimilation problems, reduced-order approximations of both the information filter and the square-root information filter (SRIF) have been published, and the former has previously been applied to a regional configuration of the HYCOM ocean general circulation model. 
Such approximation approaches are also briefed in the presentation.
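    The key property, that the information form tolerates a totally uninformative prior, can be shown with the measurement-update side of the filter for a static state. This sketch omits the dynamics (prediction) step and the smoother, so it is an illustration of the idea rather than the TRF algorithm:

    ```python
    import numpy as np

    def information_filter(zs, H, R):
        """Information-form measurement updates starting from zero
        information (Y = 0, y = 0), i.e. no initial mean or covariance.
        Y is the information matrix P^{-1}; y is the information vector
        P^{-1} x. Static state: the prediction step is omitted."""
        n = H.shape[1]
        Y = np.zeros((n, n))
        y = np.zeros(n)
        Rinv = np.linalg.inv(R)
        for z in zs:
            Y += H.T @ Rinv @ H   # accumulate information from each measurement
            y += H.T @ Rinv @ z
        # The state estimate x = Y^{-1} y exists once Y has become invertible,
        # i.e. once the measurements jointly determine the state.
        return np.linalg.solve(Y, y)
    ```

    A standard Kalman filter would need an (arbitrary) initial covariance here; the information form simply starts at Y = 0 and lets the data build up the distribution.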

  19. Detecting an atomic clock frequency anomaly using an adaptive Kalman filter algorithm

    NASA Astrophysics Data System (ADS)

    Song, Huijie; Dong, Shaowu; Wu, Wenjun; Jiang, Meng; Wang, Weixiong

    2018-06-01

    The abnormal frequencies of an atomic clock mainly comprise frequency jumps and frequency drift jumps. Atomic clock frequency anomaly detection is a key technique in time-keeping. The Kalman filter algorithm, as a linear optimal algorithm, has been widely used for real-time detection of abnormal frequencies. To obtain an optimal state estimate, the observation model and dynamic model of the Kalman filter algorithm should satisfy Gaussian white noise conditions; the detection performance is degraded if anomalies affect either model. The adaptive Kalman filter algorithm, applied here to clock frequency anomaly detection, uses the prediction residuals to build an adaptive factor, and the predicted state covariance matrix is corrected in real time by this factor. The results show that the model error is reduced and the detection performance is improved. The effectiveness of the algorithm is verified with a frequency jump simulation, a frequency drift jump simulation, and measured atomic clock data, using the chi-square test.

  20. Layered synthetic microstructures as Bragg diffractors for X rays and extreme ultraviolet - Theory and predicted performance

    NASA Technical Reports Server (NTRS)

    Underwood, J. H.; Barbee, T. W., Jr.

    1981-01-01

    The theory of X-ray diffraction by periodic structures is applied to the layered synthetic microstructures (LSMs) made possible by recent developments in thin film technology, and approximate formulas for estimating their performance are presented. A more complete computation scheme based on optical multilayer theory is also described, and it is shown that the diffracting properties may be tailored to specific applications by adjusting the refractive indices and thicknesses of the component layers. The theory may be modified to take account of imperfections in the LSM structure, and the properties of nonperiodic structures thereby computed. Structures with high integrated reflectivity constructed according to the methods defined have potential application in many areas of X-ray or EUV research and instrumentation.

  1. Automatic arrival time detection for earthquakes based on Modified Laplacian of Gaussian filter

    NASA Astrophysics Data System (ADS)

    Saad, Omar M.; Shalaby, Ahmed; Samy, Lotfy; Sayed, Mohammed S.

    2018-04-01

    Precise identification of an earthquake's onset time is imperative for correctly computing the earthquake's location and the other parameters used to build seismic catalogues. The P-wave arrival of weak events or micro-earthquakes cannot be precisely determined due to background noise. In this paper, we propose a novel approach based on a Modified Laplacian of Gaussian (MLoG) filter to detect the onset time even at very low signal-to-noise ratios (SNRs). The proposed algorithm utilizes a denoising filter to smooth the background noise, employing the MLoG mask to filter the seismic data. Afterward, we apply a dual-threshold comparator to detect the onset time of the event. The results show that the proposed algorithm can accurately detect the onset time of micro-earthquakes at an SNR of -12 dB. The proposed algorithm achieves an onset-time picking accuracy of 93% with a standard deviation error of 0.10 s for 407 field seismic waveforms. We also compare the results with the short-term/long-term average algorithm (STA/LTA) and the Akaike Information Criterion (AIC), and the proposed algorithm outperforms both.
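    The MLoG mask itself is not specified in the abstract, but the STA/LTA baseline it is compared against is standard and easy to sketch: the ratio of a short-term to a long-term average of signal energy spikes at an energy onset. A NumPy sketch (window lengths are illustrative):

    ```python
    import numpy as np

    def sta_lta(trace, n_sta, n_lta):
        """Short-term/long-term average ratio of signal energy, the classic
        onset detector used as a baseline for the MLoG picker. Returns the
        ratio aligned so that element k corresponds to sample n_lta - 1 + k
        (both trailing windows full)."""
        energy = trace.astype(float) ** 2
        csum = np.cumsum(np.concatenate(([0.0], energy)))
        sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta   # short trailing mean
        lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta   # long trailing mean
        m = min(len(sta), len(lta))
        return sta[-m:] / np.maximum(lta[-m:], 1e-12)  # guard against /0
    ```

    A pick is declared at the first sample where the ratio crosses a threshold; the MLoG approach replaces this energy ratio with a filtered-mask statistic and a dual-threshold comparator.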

  2. Both Baseline and Change in Lower Limb Muscle Strength in Younger Women Are Independent Predictors of Balance in Middle Age: A 12-Year Population-Based Prospective Study.

    PubMed

    Wu, Feitong; Callisaya, Michele; Wills, Karen; Laslett, Laura L; Jones, Graeme; Winzenberg, Tania

    2017-06-01

    Poor balance is a risk factor for falls and fracture in older adults, but little is known about modifiable factors affecting balance in younger women. This study aimed to examine whether lower limb muscle strength (LMS) in young women and changes in LMS are independent predictors of balance in middle age. This was an observational 10-year follow-up of 470 women aged 25 to 44 years at baseline who had previously participated in a 2-year population-based randomized controlled trial of osteoporosis education interventions. Linear regression was used to examine the association between baseline LMS (by dynamometer) and change in LMS over 12 years with balance at 12 years (timed up and go test [TUG], step test [ST], functional reach test [FRT], and lateral reach test [LRT]). LMS declined by a mean of 17.3 kg over 12 years. After adjustment for potential confounders, baseline and change in LMS were independently beneficially associated with TUG (β = -0.008 sec/kg, 95% confidence interval [CI] -0.01 to -0.006, and β = -0.006 sec/kg, 95% CI -0.009 to -0.003 for baseline and change, respectively), FRT (β = 0.057 cm/kg, 95% CI 0.030 to 0.084, and β = 0.071 cm/kg, 95% CI 0.042 to 0.101, respectively), and LRT (β = 0.030 cm/kg, 95% CI 0.012 to 0.049, and β = 0.022 cm/kg, 95% CI 0.002 to 0.043, respectively) 12 years later. There was an association between baseline LMS and ST (β = 0.044 steps/kg, 95% CI 0.022 to 0.067) but not between change in LMS and ST. Among young women, greater LMS at baseline and slower decline over time are both associated with better balance in midlife. Analogous to the contributions of peak bone mass and bone loss to fracture risk in older adults, this suggests that both improvement of muscle strength in younger age and prevention of age-related loss of muscle strength could be potentially useful strategies to improve balance and reduce falls in later life. © 2017 American Society for Bone and Mineral Research. 

  3. RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can improve the downlink bit error rate (BER) performance of DS-CDMA beyond that possible with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) scheme based on the MMSE criterion. Using MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and a good BER performance is achieved. In this paper, we propose a channel estimation scheme using a one-tap recursive least squares (RLS) algorithm, where the forgetting factor is adapted to the changing channel condition by the least mean square (LMS) algorithm, for DS-CDMA with FDE. We evaluate the BER performance of RLS-CE with an adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation.
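    A one-tap RLS channel estimator tracks a single complex gain per frequency bin from pilot symbols. The sketch below uses a fixed forgetting factor; the paper's LMS adaptation of that factor is omitted, and the parameter values are illustrative:

    ```python
    import numpy as np

    def rls_one_tap(pilots, received, lam=0.95, delta=100.0):
        """One-tap RLS estimate of a scalar complex channel gain h from
        pilot symbols p and received samples r = h*p + noise. lam is a
        fixed forgetting factor (the paper adapts it with an LMS rule);
        delta is a large initial inverse correlation, i.e. a weak prior."""
        h = 0.0 + 0.0j
        P = delta
        est = []
        for p, r in zip(pilots, received):
            k = P * np.conj(p) / (lam + P * abs(p) ** 2)  # RLS gain
            h = h + k * (r - h * p)                       # innovation update
            P = (P - k * p * P) / lam                     # inverse-corr. update
            est.append(h)
        return np.array(est)
    ```

    A small lam forgets old data quickly (good for fast fading, noisy estimates); a lam near 1 averages over many pilots (good for slow fading). Adapting lam on-line trades between the two, which is the point of the paper.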

  4. Performance Evaluation of Glottal Inverse Filtering Algorithms Using a Physiologically Based Articulatory Speech Synthesizer

    DTIC Science & Technology

    2017-01-05

    1 Performance Evaluation of Glottal Inverse Filtering Algorithms Using a Physiologically Based Articulatory Speech Synthesizer Yu-Ren Chien, Daryush...D. Mehta, Member, IEEE, Jón Guðnason, Matías Zañartu, Member, IEEE, and Thomas F. Quatieri, Fellow, IEEE Abstract—Glottal inverse filtering aims to...of inverse filtering performance has been challenging due to the practical difficulty in measuring the true glottal signals while speech signals are

  5. Computational segmentation of collagen fibers from second-harmonic generation images of breast cancer

    NASA Astrophysics Data System (ADS)

    Bredfeldt, Jeremy S.; Liu, Yuming; Pehlke, Carolyn A.; Conklin, Matthew W.; Szulczewski, Joseph M.; Inman, David R.; Keely, Patricia J.; Nowak, Robert D.; Mackie, Thomas R.; Eliceiri, Kevin W.

    2014-01-01

    Second-harmonic generation (SHG) imaging can help reveal interactions between collagen fibers and cancer cells. Quantitative analysis of SHG images of collagen fibers is challenged by the heterogeneity of collagen structures and low signal-to-noise ratio often found while imaging collagen in tissue. The role of collagen in breast cancer progression can be assessed post acquisition via enhanced computation. To facilitate this, we have implemented and evaluated four algorithms for extracting fiber information, such as number, length, and curvature, from a variety of SHG images of collagen in breast tissue. The image-processing algorithms included a Gaussian filter, SPIRAL-TV filter, Tubeness filter, and curvelet-denoising filter. Fibers are then extracted using an automated tracking algorithm called fiber extraction (FIRE). We evaluated the algorithm performance by comparing length, angle and position of the automatically extracted fibers with those of manually extracted fibers in twenty-five SHG images of breast cancer. We found that the curvelet-denoising filter followed by FIRE, a process we call CT-FIRE, outperforms the other algorithms under investigation. CT-FIRE was then successfully applied to track collagen fiber shape changes over time in an in vivo mouse model for breast cancer.

  6. Investigation of optical current transformer signal processing method based on an improved Kalman algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Yan; Ge, Jin-ming; Zhang, Guo-qing; Yu, Wen-bin; Liu, Rui-tong; Fan, Wei; Yang, Ying-xuan

    2018-01-01

    This paper explores the problem of signal processing in optical current transformers (OCTs). Based on the noise characteristics of OCTs, such as overlapping signals, noise frequency bands, low signal-to-noise ratios, and difficulties in acquiring statistical features of noise power, an improved standard Kalman filtering algorithm is proposed for direct current (DC) signal processing. The state-space model of the OCT DC measurement system is first established, and mixed noise is then handled by incorporating it into the measurement and state parameters. According to the minimum mean squared error criterion, the state prediction and update equations of the improved Kalman algorithm are deduced from the established model. An improved central difference Kalman filter is proposed for alternating current (AC) signal processing, which improves the sampling strategy and the processing of colored noise. Real-time estimation and correction of noise are achieved by designing AC and DC noise recursive filters. Experimental results show that the improved signal processing algorithms have a good filtering effect on AC and DC signals with the mixed noise of an OCT. Furthermore, the proposed algorithm achieves real-time correction of noise during the OCT filtering process.

  7. An exact algorithm for optimal MAE stack filter design.

    PubMed

    Dellamonica, Domingos; Silva, Paulo J S; Humes, Carlos; Hirata, Nina S T; Barrera, Junior

    2007-02-01

    We propose a new algorithm for optimal MAE stack filter design. It is based on three main ingredients. First, we show that the dual of the integer programming formulation of the filter design problem is a minimum cost network flow problem. Next, we present a decomposition principle that can be used to break this dual problem into smaller subproblems. Finally, we propose a specialization of the network Simplex algorithm based on column generation to solve these smaller subproblems. Using our method, we were able to efficiently solve instances of the filter problem with window size up to 25 pixels. To the best of our knowledge, this is the largest dimension for which this problem was ever solved exactly.

  8. Application of velocity filtering to optical-flow passive ranging

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1992-01-01

    The performance of the velocity filtering method as applied to optical-flow passive ranging under real-world conditions is evaluated. The theory of the 3-D Fourier transform as applied to constant-speed moving points is reviewed, and the space-domain shift-and-add algorithm is derived from the general 3-D matched filtering formulation. The constant-speed algorithm is then modified to fit the actual speed encountered in the optical flow application, and the passband of that filter is found in terms of depth (sensor/object distance) so as to cover any given range of depths. Two algorithmic solutions for the problems associated with pixel interpolation and object expansion are developed, and experimental results are presented.
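    The constant-speed shift-and-add idea can be sketched directly: shifting frame t back by v·t and accumulating makes a target moving at the matched velocity add coherently while noise averages down. A 1-D, integer-pixel NumPy illustration (the paper's interpolation and expansion corrections are omitted):

    ```python
    import numpy as np

    def shift_and_add(frames, velocity):
        """Space-domain matched filter for a constant-velocity point target:
        shift frame t back by velocity*t pixels (integer shifts only here)
        and average, so a matched-velocity target stacks coherently."""
        acc = np.zeros_like(frames[0], dtype=float)
        for t, frame in enumerate(frames):
            acc += np.roll(frame, -int(round(velocity * t)))
        return acc / len(frames)
    ```

    Sweeping `velocity` over a bank of candidate values and taking the strongest response is the discrete analogue of the 3-D matched filtering formulation; in the optical-flow application the candidate velocities map to candidate depths.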

  9. FIR filters for hardware-based real-time multi-band image blending

    NASA Astrophysics Data System (ADS)

    Popovic, Vladan; Leblebici, Yusuf

    2015-02-01

    Creating panoramic images has become a popular feature in modern smart phones, tablets, and digital cameras. A user can create a 360 degree field-of-view photograph from only a few images. Quality of the resulting image is related to the number of source images, their brightness, and the algorithm used for stitching and blending. One of the algorithms that provides excellent results in terms of background color uniformity and reduction of ghosting artifacts is multi-band blending. The algorithm relies on decomposing the image into multiple frequency bands using a dyadic filter bank; hence, the results are also highly dependent on the filter bank used. In this paper we analyze the performance of FIR filters used for multi-band blending. We present a set of five filters that showed the best results in both the literature and our experiments. The set includes a Gaussian filter, biorthogonal wavelets, and custom-designed maximally flat and equiripple FIR filters. The presented filter comparison is based on several no-reference image quality metrics. We conclude that the 5/3 biorthogonal wavelet produces the best results on average, especially considering its short length. Furthermore, we propose a real-time FPGA implementation of the blending algorithm using a 2D non-separable systolic filtering scheme. Its pipelined architecture does not require hardware multipliers and is able to achieve very high operating frequencies. The implemented system can process 91 fps at 1080p (1920×1080) image resolution.
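    The core band-splitting idea can be shown with just two bands in 1-D: low frequencies are mixed with a smoothed seam mask, high frequencies with the hard mask. This sketch uses the Gaussian filter from the compared set; the paper's full scheme repeats the split over a dyadic filter bank and works in 2-D:

    ```python
    import numpy as np

    def gaussian_kernel(sigma, radius):
        x = np.arange(-radius, radius + 1)
        k = np.exp(-0.5 * (x / sigma) ** 2)
        return k / k.sum()

    def two_band_blend(a, b, mask, sigma=4.0):
        """Simplified 1-D two-band blend of signals a and b along a seam
        given by mask (1 where a should win, 0 where b should win)."""
        k = gaussian_kernel(sigma, int(3 * sigma))
        low_a = np.convolve(a, k, mode="same")   # low band of each input
        low_b = np.convolve(b, k, mode="same")
        high_a, high_b = a - low_a, b - low_b    # residual high bands
        soft = np.convolve(mask.astype(float), k, mode="same")  # feathered mask
        low = soft * low_a + (1 - soft) * low_b    # wide transition: no seam
        high = mask * high_a + (1 - mask) * high_b  # sharp transition: no ghosts
        return low + high
    ```

    Blending low frequencies over a wide transition hides brightness mismatch, while keeping the high-frequency transition sharp avoids double edges (ghosting), the two failure modes the paper's metrics target.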

  10. Minimal-scan filtered backpropagation algorithms for diffraction tomography.

    PubMed

    Pan, X; Anastasio, M A

    1999-12-01

    The filtered backpropagation (FBPP) algorithm, originally developed by Devaney [Ultrason. Imaging 4, 336 (1982)], has been widely used for reconstructing images in diffraction tomography. It is generally known that the FBPP algorithm requires scattered data from a full angular range of 2π for exact reconstruction of a generally complex-valued object function. However, we reveal that one needs scattered data only over the angular range 0 ≤ φ ≤ 3π/2 for exact reconstruction of a generally complex-valued object function. Using this insight, we develop and analyze a family of minimal-scan filtered backpropagation (MS-FBPP) algorithms, which, unlike the FBPP algorithm, use scattered data acquired from view angles over the range 0 ≤ φ ≤ 3π/2. We show analytically that these MS-FBPP algorithms are mathematically identical to the FBPP algorithm. We also perform computer simulation studies for validation, demonstration, and comparison of these MS-FBPP algorithms. The numerical results in these simulation studies corroborate our theoretical assertions.

  11. Detection and segmentation of multiple touching product inspection items

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Talukder, Ashit; Cox, Westley; Chang, Hsuan-Ting; Weber, David

    1996-12-01

    X-ray images of pistachio nuts on conveyor trays for product inspection are considered. The first step in such a processor is to locate each individual item and place it in a separate file for input to a classifier that determines the quality of each nut. This paper considers new techniques to: detect each item (each nut can be in any orientation, so we employ new rotation-invariant filters to locate each item independent of its orientation); produce separate image files for each item (a new blob coloring algorithm provides this for isolated, non-touching items); segment touching or overlapping items into separate image files (using a morphological watershed transform); and apply morphological processing to remove the shell and produce an image of only the nutmeat. Each of these operations and algorithms is detailed, and quantitative data for each are presented for the x-ray nut inspection problem noted. These techniques are of general use in many different product inspection problems in agriculture and other areas.

  12. Parallel Fixed Point Implementation of a Radial Basis Function Network in an FPGA

    PubMed Central

    de Souza, Alisson C. D.; Fernandes, Marcelo A. C.

    2014-01-01

    This paper proposes a parallel fixed point radial basis function (RBF) artificial neural network (ANN), implemented in a field programmable gate array (FPGA) trained online with a least mean square (LMS) algorithm. The processing time and occupied area were analyzed for various fixed point formats. The problems of precision of the ANN response for nonlinear classification using the XOR gate and interpolation using the sine function were also analyzed in a hardware implementation. The entire project was developed using the System Generator platform (Xilinx), with a Virtex-6 xc6vcx240t-1ff1156 as the target FPGA. PMID:25268918
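    The training scheme, fixed Gaussian centers with only the linear output weights adapted online by the LMS rule, can be sketched in floating-point NumPy. The paper's fixed-point arithmetic and FPGA parallelism are not modelled, and the hyperparameters below are illustrative:

    ```python
    import numpy as np

    def train_rbf_lms(x, y, centers, width=0.5, mu=0.1, epochs=20):
        """Gaussian RBF network with fixed centers; the linear output
        weights are trained online, one sample at a time, with the LMS rule
        w <- w + mu * e * phi."""
        w = np.zeros(len(centers))
        for _ in range(epochs):
            for xi, yi in zip(x, y):
                phi = np.exp(-((xi - centers) ** 2) / (2 * width ** 2))
                e = yi - phi @ w      # instantaneous output error
                w += mu * e * phi     # LMS weight update
        return w

    def rbf_predict(x, w, centers, width=0.5):
        phi = np.exp(-((x[:, None] - centers[None, :]) ** 2)
                     / (2 * width ** 2))
        return phi @ w
    ```

    Keeping the centers fixed makes the adaptation linear in the weights, which is exactly what allows a cheap LMS update in hardware instead of full backpropagation.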

  13. A Novel Attitude Determination Algorithm for Spinning Spacecraft

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2007-01-01

    This paper presents a single-frame algorithm for spin-axis orientation determination of spinning spacecraft that encounters no ambiguity problems, as well as a simple Kalman filter for continuously estimating the full attitude of a spinning spacecraft. The latter algorithm comprises two low-order decoupled Kalman filters: one estimates the spin-axis orientation, and the other estimates the spin rate and the spin (phase) angle. The filters are ambiguity-free and do not rely on the spacecraft dynamics. They were successfully tested using data obtained from one of the ST5 satellites.

  14. Real-time digital filtering, event triggering, and tomographic reconstruction of JET soft x-ray data (abstract)

    NASA Astrophysics Data System (ADS)

    Edwards, A. W.; Blackler, K.; Gill, R. D.; van der Goot, E.; Holm, J.

    1990-10-01

    Based upon the experience gained with the present soft x-ray data acquisition system, new techniques are being developed which make extensive use of digital signal processors (DSPs). Digital filters make 13 further frequencies available in real time from the input sampling frequency of 200 kHz. In parallel, various algorithms running on further DSPs generate triggers in response to a range of events in the plasma. The sawtooth crash can be detected, for example, with a delay of only 50 μs from the onset of the collapse. The trigger processor interacts with the digital filter boards to ensure data of the appropriate frequency is recorded throughout a plasma discharge. An independent link is used to pass 780 and 24 Hz filtered data to a network of transputers. A full tomographic inversion and display of the 24 Hz data is carried out in real time using this 15 transputer array. The 780 Hz data are stored for immediate detailed playback following the pulse. Such a system could considerably improve the quality of present plasma diagnostic data which is, in general, sampled at one fixed frequency throughout a discharge. Further, it should provide valuable information towards designing diagnostic data acquisition systems for future long pulse operation machines when a high degree of real-time processing will be required, while retaining the ability to detect, record, and analyze events of interest within such long plasma discharges.

  15. Outcome of inferior vena cava and noncaval venous leiomyosarcomas.

    PubMed

    Illuminati, Giulio; Pizzardi, Giulia; Calio', Francesco; Pacilè, Maria Antonietta; Masci, Federica; Vietri, Francesco

    2016-02-01

    Leiomyosarcoma (LMS) is a rare tumor arising from the smooth muscle cells of arteries and veins. LMS may affect both the inferior vena cava (IVC) and non-IVC veins. Because of its rarity, experience with the outcome of disease originating from the IVC compared with that of non-IVC origin is limited. In this study, we compared the clinical features and outcomes after operative resection of IVC and non-IVC LMS to detect possible significant differences that could affect treatment and prognosis. Twenty-seven patients undergoing operative resection of a venous LMS at a single tertiary care center and one secondary care hospital were reviewed retrospectively and divided into 2 groups: IVC LMS (Group A, n = 18) and non-IVC LMS (Group B, n = 9). The primary end points were postoperative mortality and morbidity, disease-specific survival and, if applicable, patency of venous reconstruction. Bivariate differences were compared with the χ² test. Disease-specific survival was expressed by a life-table analysis and compared using the log-rank test. No postoperative mortality was observed in either group. Postoperative morbidity was 28% in group A and 11% in group B (P = .33). The mean duration of follow-up was 60 months (range, 13-140). Disease-specific survival was 60% in group A and 75% in group B at 3 years (P = .48), and 54% in group A and 62% in group B at 5 years (P = .63). Seven grafts were occluded in group A (39%) and 1 of 3 in group B (33%) (P = .85). IVC and non-IVC LMS exhibit similar outcomes in terms of postoperative course and survival. Operative resection associated with vascular reconstruction, if applicable, eventually followed by radiation and chemotherapy may be curative and is associated with good functional results. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Evaluation of Local Media Surveillance for Improved Disease Recognition and Monitoring in Global Hotspot Regions

    PubMed Central

    Schwind, Jessica S.; Wolking, David J.; Brownstein, John S.; Mazet, Jonna A. K.; Smith, Woutrina A.

    2014-01-01

    Digital disease detection tools are technologically sophisticated, but dependent on digital information, which for many areas suffering from high disease burdens is simply not an option. In areas where news is often reported in local media with no digital counterpart, integration of local news information with digital surveillance systems, such as HealthMap (Boston Children’s Hospital), is critical. Little research has been published in regards to the specific contribution of local health-related articles to digital surveillance systems. In response, the USAID PREDICT project implemented a local media surveillance (LMS) pilot study in partner countries to monitor disease events reported in print media. This research assessed the potential of LMS to enhance digital surveillance reach in five low- and middle-income countries. Over 16 weeks, select surveillance system attributes of LMS, such as simplicity, flexibility, acceptability, timeliness, and stability were evaluated to identify strengths and weaknesses in the surveillance method. Findings revealed that LMS filled gaps in digital surveillance network coverage by contributing valuable localized information on disease events to the global HealthMap database. A total of 87 health events were reported through the LMS pilot in the 16-week monitoring period, including 71 unique reports not found by the HealthMap digital detection tool. Furthermore, HealthMap identified an additional 236 health events outside of LMS. It was also observed that belief in the importance of the project and proper source selection from the participants was crucial to the success of this method. The timely identification of disease outbreaks near points of emergence and the recognition of risk factors associated with disease occurrence continue to be important components of any comprehensive surveillance system for monitoring disease activity across populations. 
The LMS method, with its minimal resource commitment, could be one tool used to address the information gaps seen in global ‘hot spot’ regions. PMID:25333618

  17. Crustal structure and deformation under the Longmenshan and its surroundings revealed by receiver function data

    NASA Astrophysics Data System (ADS)

    Sun, Ya; Liu, Jianxin; Zhou, Keping; Chen, Bo; Guo, Rongwen

    2015-07-01

    The convergence of India and Eurasia and the obstruction from the rigid Sichuan Basin cause the Longmenshan (LMS) to have the steepest topographic gradient at the eastern margin of the Tibetan Plateau. However, the mechanisms of surface uplift are still controversial. In this paper, we estimate the crustal structure and deformation under the LMS and its surroundings by analyzing a large amount of receiver function data recorded by regional seismic networks of the China Earthquake Administration. We apply a comprehensive splitting measurement technique to the Ps conversion phase at the Moho (Moho Ps splitting) to calculate crustal anisotropy from azimuthal variations of receiver functions. Our results show that most of the seismic stations beneath the LMS area exhibit significant seismic anisotropy, with splitting times of 0.22-0.94 s and a fast polarization direction of NW-SE, while little or no crustal anisotropy has been observed under the Sichuan Basin. Comparing the fast polarization directions of Moho Ps splitting with indicators of lithospheric deformation (such as shear wave splitting, absolute plate motion, and global positioning system measurements) implies a consistent tendency of deformation between the lower crust and upper mantle, but decoupled deformation within the crust beneath the LMS area. We further compare the Moho Ps splitting time to that estimated from previous SKS splitting, which indicates that crustal anisotropy is an important source of the SKS splitting time in this study area. In addition, a thick crust (>50 km) with high Vp/Vs values (1.74-1.86) is also observed using the H-κ stacking method. These seismic observations are consistent with the scenario that the LMS area has been built by lower crustal flow. 
Combined with the seismic reflection/refraction profile and geology studies, we further suggest that the lower crustal flow may extrude upward into the upper crust along the steeply dipping strike faults under the LMS area, resulting in the surface uplift of the LMS.
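
    The H-κ stacking mentioned above grid-searches crustal thickness H and Vp/Vs ratio κ, stacking receiver-function amplitudes at the predicted arrival times of the Moho Ps conversion and its crustal multiples. A minimal single-trace sketch (the weights, P velocity and ray parameter are assumed for illustration; this is not the authors' code):

    ```python
    import numpy as np

    def hk_stack(rf_times, rf_amps, p, vp, H_grid, k_grid, w=(0.6, 0.3, 0.1)):
        """Stack receiver-function amplitudes at the predicted Ps, PpPs and
        PpSs+PsPs arrival times for each (H, kappa) pair; the maximum of the
        stack picks crustal thickness H and Vp/Vs ratio kappa."""
        best = (None, None, -np.inf)
        for H in H_grid:
            for k in k_grid:
                vs = vp / k
                a = np.sqrt(vs ** -2 - p ** 2)   # S-wave vertical slowness
                b = np.sqrt(vp ** -2 - p ** 2)   # P-wave vertical slowness
                t_ps = H * (a - b)               # Moho Ps conversion
                t_ppps = H * (a + b)             # first crustal multiple
                t_psps = 2.0 * H * a             # second multiple (negative polarity)
                s = (w[0] * np.interp(t_ps, rf_times, rf_amps)
                     + w[1] * np.interp(t_ppps, rf_times, rf_amps)
                     - w[2] * np.interp(t_psps, rf_times, rf_amps))
                if s > best[2]:
                    best = (H, k, s)
        return best[0], best[1]
    ```

    On a synthetic trace with pulses placed at the theoretical arrival times, the grid search recovers the (H, κ) pair used to generate them.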

  18. Modification and fixed-point analysis of a Kalman filter for orientation estimation based on 9D inertial measurement unit data.

    PubMed

    Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger

    2013-01-01

    A common approach for high accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State-of-the-art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible. This work presents algorithmic modifications to reduce the computational demands of a two-step minimum order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured using an Xsens MTx inertial sensor are used. Changes in computational latency and orientation estimation accuracy due to the proposed algorithmic modifications and fixed-point number representation are evaluated in detail on a variety of processing platforms enabling on-board processing on wearable sensor platforms.
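
    The bit-width exploration described above can be prototyped offline by quantizing every intermediate value of a filter and sweeping the fractional bit-width until the deviation from the floating-point reference is tolerable. A generic sketch (a first-order low-pass stands in for the Kalman filter; helper names and thresholds are illustrative, not the authors' implementation):

    ```python
    def to_fixed(x, frac_bits):
        """Round a float to a fixed-point grid with the given number of
        fractional bits, returned as a float again."""
        scale = 1 << frac_bits
        return round(x * scale) / scale

    def lowpass(samples, alpha, quant=None):
        """First-order low-pass y += alpha * (x - y); when quant is given,
        every operand and intermediate result is pushed onto the fixed grid."""
        q = quant if quant is not None else (lambda v: v)
        y, out = 0.0, []
        a = q(alpha)
        for x in samples:
            y = q(y + a * (q(x) - y))
            out.append(y)
        return out

    def min_frac_bits(samples, alpha, tol):
        """Smallest fractional bit-width whose quantized output stays within
        tol of the float reference (exhaustive sweep over 4..31 bits)."""
        ref = lowpass(samples, alpha)
        for bits in range(4, 32):
            fix = lowpass(samples, alpha, quant=lambda v: to_fixed(v, bits))
            if max(abs(r - f) for r, f in zip(ref, fix)) < tol:
                return bits
        return None
    ```

    Tightening the tolerance can only increase the bit-width returned, which is the trade-off the fixed-point analysis quantifies.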

  19. Angular-Rate Estimation Using Star Tracker Measurements

    NASA Technical Reports Server (NTRS)

    Azor, R.; Bar-Itzhack, I.; Deutschmann, Julie K.; Harman, Richard R.

    1999-01-01

    This paper presents algorithms for estimating the angular-rate vector of satellites using quaternion measurements. Two approaches are compared, one that uses differentiated quaternion measurements to yield coarse rate measurements which are then fed into two different estimators. In the other approach the raw quaternion measurements themselves are fed directly into the two estimators. The two estimators rely on the ability to decompose the non-linear rate dependent part of the rotational dynamics equation of a rigid body into a product of an angular-rate dependent matrix and the angular-rate vector itself. This decomposition, which is not unique, enables the treatment of the nonlinear spacecraft dynamics model as a linear one and, consequently, the application of a Pseudo-Linear Kalman Filter (PSELIKA). It also enables the application of a special Kalman filter which is based on the use of the solution of the State Dependent Algebraic Riccati Equation (SDARE) in order to compute the Kalman gain matrix and thus eliminates the need to propagate and update the filter covariance matrix. The replacement of the elaborate rotational dynamics by a simple first order Markov model is also examined. In this paper a special consideration is given to the problem of delayed quaternion measurements. Two solutions to this problem are suggested and tested. Real Rossi X-Ray Timing Explorer (RXTE) data is used to test these algorithms, and results of these tests are presented.
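
    The pseudo-linear trick hinges on factoring the torque-free Euler equation J·dω/dt = −ω × (Jω) into the form dω/dt = A(ω)ω. One such (non-unique) choice is A(ω) = −J⁻¹[ω]×J, sketched below with assumed inertia values:

    ```python
    import numpy as np

    def skew(w):
        """Cross-product matrix [w]x so that skew(w) @ v == np.cross(w, v)."""
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])

    def pseudo_linear_A(w, J):
        """One (non-unique) factorization of torque-free Euler dynamics:
        wdot = -inv(J) @ (w x (J @ w)) == A(w) @ w."""
        return -np.linalg.inv(J) @ skew(w) @ J
    ```

    Because A depends on the current rate estimate, the model can be treated as linear at each step, which is what admits the pseudo-linear Kalman gain computation.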

  20. Angular-Rate Estimation using Star Tracker Measurements

    NASA Technical Reports Server (NTRS)

    Azor, R.; Bar-Itzhack, Itzhack Y.; Deutschmann, Julie K.; Harman, Richard R.

    1999-01-01

    This paper presents algorithms for estimating the angular-rate vector of satellites using quaternion measurements. Two approaches are compared, one that uses differentiated quaternion measurements to yield coarse rate measurements which are then fed into two different estimators. In the other approach the raw quaternion measurements themselves are fed directly into the two estimators. The two estimators rely on the ability to decompose the non-linear rate dependent part of the rotational dynamics equation of a rigid body into a product of an angular-rate dependent matrix and the angular-rate vector itself. This decomposition, which is not unique, enables the treatment of the nonlinear spacecraft dynamics model as a linear one and, consequently, the application of a Pseudo-Linear Kalman Filter (PSELIKA). It also enables the application of a special Kalman filter which is based on the use of the solution of the State Dependent Algebraic Riccati Equation (SDARE) in order to compute the Kalman gain matrix and thus eliminates the need to propagate and update the filter covariance matrix. The replacement of the elaborate rotational dynamics by a simple first order Markov model is also examined. In this paper a special consideration is given to the problem of delayed quaternion measurements. Two solutions to this problem are suggested and tested. Real Rossi X-Ray Timing Explorer (RXTE) data is used to test these algorithms, and results of these tests are presented.

  1. Significant radiative impact of volcanic aerosol in the lowermost stratosphere

    PubMed Central

    Andersson, Sandra M.; Martinsson, Bengt G.; Vernier, Jean-Paul; Friberg, Johan; Brenninkmeijer, Carl A. M.; Hermann, Markus; van Velthoven, Peter F. J.; Zahn, Andreas

    2015-01-01

    Despite their potential to slow global warming, until recently, the radiative forcing associated with volcanic aerosols in the lowermost stratosphere (LMS) had not been considered. Here we study volcanic aerosol changes in the stratosphere using lidar measurements from the NASA CALIPSO satellite and aircraft measurements from the IAGOS-CARIBIC observatory. Between 2008 and 2012 volcanism frequently affected the Northern Hemisphere stratosphere aerosol loadings, whereas the Southern Hemisphere generally had loadings close to background conditions. We show that half of the global stratospheric aerosol optical depth following the Kasatochi, Sarychev and Nabro eruptions is attributable to LMS aerosol. On average, 30% of the global stratospheric aerosol optical depth originated in the LMS during the period 2008–2011. On the basis of the two independent, high-resolution measurement methods, we show that the LMS makes an important contribution to the overall volcanic forcing. PMID:26158244

  2. Extending LMS to Support IRT-Based Assessment Test Calibration

    NASA Astrophysics Data System (ADS)

    Fotaris, Panagiotis; Mastoras, Theodoros; Mavridis, Ioannis; Manitsaris, Athanasios

    Developing unambiguous and challenging assessment material for measuring educational attainment is a time-consuming, labor-intensive process. As a result, Computer Aided Assessment (CAA) tools are becoming widely adopted in academic environments in an effort to improve assessment quality and deliver reliable results of examinee performance. This paper introduces a methodological and architectural framework which embeds a CAA tool in a Learning Management System (LMS) to assist test developers in refining the items that make up assessment tests. An Item Response Theory (IRT) based analysis is applied to a dynamic assessment profile provided by the LMS. Test developers define a set of validity rules for the statistical indices given by the IRT analysis. By applying those rules, the LMS can detect items with various discrepancies, which are then flagged for review of their content. Repeatedly executing this procedure can improve the overall efficiency of the testing process.
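
    A hypothetical illustration of such validity rules, applied here to calibrated 3PL item parameters (the thresholds and the rule set are invented for illustration, not taken from the paper):

    ```python
    def flag_items(items, a_min=0.4, b_range=(-3.0, 3.0), c_max=0.35):
        """Flag calibrated 3PL items whose discrimination (a), difficulty (b)
        or guessing (c) parameter falls outside the configured validity rules.
        Returns {item_id: [violated rules]} for the flagged items only."""
        flags = {}
        for item_id, (a, b, c) in items.items():
            problems = []
            if a < a_min:
                problems.append("low discrimination")
            if not (b_range[0] <= b <= b_range[1]):
                problems.append("extreme difficulty")
            if c > c_max:
                problems.append("high guessing")
            if problems:
                flags[item_id] = problems
        return flags
    ```

    Flagged items would then be routed back to test developers for content review, as in the workflow described above.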

  3. [Epithelioid leiomyosarcoma of the stomach. Clinical experiences with a rare stomach tumor].

    PubMed

    Hauser, H; Steindorfer, P; Mischinger, H J; Thalhammer, M; Kronberger, L; Rosanelli, G; Lax, S F

    1995-01-01

    Gastric epithelioid leiomyosarcoma (epLMS), which generally occurs in mid- or late adult life, is a rare smooth muscle tumor of the stomach. Out of 25 soft tissue tumors of the stomach operated on at the Department of Surgery, University of Graz, two epLMS were diagnosed. This paper presents the cases of a 67-year-old male with an epLMS in the corpus and of an 80-year-old female with an epLMS in the fundus of the stomach. The tumors were not diagnosed by gastroscopy; they were localized by sonography and CT scan. In both cases the tumor was completely removed surgically, using a TA 90 4.8 mm and a TA 55 4.8 mm stapler, respectively. Diagnosis was reached by histological and immunohistochemical examination of the tumor tissue. Surgical excision with wide tumor-free resection margins is the therapy of choice in this tumor group.

  4. [Comparison among various software for LMS growth curve fitting methods].

    PubMed

    Han, Lin; Wu, Wenhong; Wei, Qiuxia

    2015-03-01

    To explore methods for fitting skewness-median-coefficient of variation (LMS) growth curves using different software packages, and to optimize the growth curve statistical method for grass-roots child and adolescent health staff. Regular physical examination data of head circumference for normal infants aged 3, 6, 9 and 12 months in Baotou City were analyzed. The statistical software packages SAS, R, STATA and SPSS were used to fit the LMS growth curve, and the results were evaluated with respect to user convenience, learning curve, user interface, forms of result display, software updates and maintenance, and so on. All packages produced the same growth curve fitting results, and each had its own advantages and disadvantages. With all evaluation aspects in consideration, R excelled the others in LMS growth curve fitting and is recommended for grass-roots child and adolescent health staff.
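
    Whichever package performs the fit, the fitted L (skewness), M (median) and S (coefficient of variation) values at each age convert a measurement to a z-score via Cole's LMS formula z = ((X/M)^L − 1)/(L·S) for L ≠ 0; a small sketch with made-up coefficients:

    ```python
    import math

    def lms_zscore(x, L, M, S):
        """z-score of measurement x under LMS coefficients (Cole's formula);
        the L == 0 case is the lognormal limit."""
        if L == 0:
            return math.log(x / M) / S
        return ((x / M) ** L - 1.0) / (L * S)

    def lms_value(z, L, M, S):
        """Inverse: the measurement sitting at a given z-score."""
        if L == 0:
            return M * math.exp(S * z)
        return M * (1.0 + L * S * z) ** (1.0 / L)
    ```

    For example, with (L, M, S) = (1, 45 cm, 0.04), a 46.8 cm head circumference sits one standard deviation above the median.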

  5. Monte Carlo simulations for 20 MV X-ray spectrum reconstruction of a linear induction accelerator

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Li, Qin; Jiang, Xiao-Guo

    2012-09-01

    To study the spectrum reconstruction of the 20 MV X-ray generated by the Dragon-I linear induction accelerator, the Monte Carlo method is applied to simulate the attenuations of the X-ray in the attenuators of different thicknesses and thus provide the transmission data. As is known, the spectrum estimation from transmission data is an ill-conditioned problem. The method based on iterative perturbations is employed to derive the X-ray spectra, where initial guesses are used to start the process. This algorithm takes into account not only the minimization of the differences between the measured and the calculated transmissions but also the smoothness feature of the spectrum function. In this work, various filter materials are put to use as the attenuator, and the condition for an accurate and robust solution of the X-ray spectrum calculation is demonstrated. The influences of the scattering photons within different intervals of emergence angle on the X-ray spectrum reconstruction are also analyzed.
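
    The ill-conditioned inversion from transmission data can be illustrated with a toy discrete model T_i = Σ_j s_j·exp(−μ_j·d_i). A non-negative multiplicative update (Richardson-Lucy style, a stand-in for the paper's iterative perturbation scheme, with invented attenuation values) fits the measured transmissions:

    ```python
    import numpy as np

    def reconstruct_spectrum(thicknesses, transmissions, mu, n_iter=2000):
        """Recover a discrete spectrum s (weights per energy bin) from
        transmission measurements T_i = sum_j s_j * exp(-mu_j * d_i),
        using multiplicative updates that keep s >= 0 throughout."""
        A = np.exp(-np.outer(thicknesses, mu))   # attenuation matrix
        s = np.full(len(mu), 1.0 / len(mu))      # flat initial guess
        col_sums = A.sum(axis=0)
        for _ in range(n_iter):
            pred = A @ s                         # predicted transmissions
            s *= (A.T @ (transmissions / pred)) / col_sums
        return s
    ```

    With noiseless synthetic data the fitted spectrum reproduces the transmissions far better than the flat initial guess, though the inversion remains sensitive to noise, which is why the paper adds smoothness constraints.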

  6. A Convex Formulation for Magnetic Particle Imaging X-Space Reconstruction.

    PubMed

    Konkle, Justin J; Goodwill, Patrick W; Hensley, Daniel W; Orendorff, Ryan D; Lustig, Michael; Conolly, Steven M

    2015-01-01

    Magnetic Particle Imaging (MPI) is an emerging imaging modality with exceptional promise for clinical applications in rapid angiography, cell therapy tracking, cancer imaging, and inflammation imaging. Recent publications have demonstrated quantitative MPI across rat-sized fields of view with x-space reconstruction methods. Critical to any medical imaging technology is the reliability and accuracy of image reconstruction. Because the average value of the MPI signal is lost during direct-feedthrough signal filtering, MPI reconstruction algorithms must recover this zero-frequency value. Prior x-space MPI recovery techniques were limited to 1D approaches which could introduce artifacts when reconstructing a 3D image. In this paper, we formulate x-space reconstruction as a 3D convex optimization problem and apply robust a priori knowledge of image smoothness and non-negativity to reduce non-physical banding and haze artifacts. We conclude with a discussion of the powerful extensibility of the presented formulation for future applications.

  7. Layout Study and Application of Mobile App Recommendation Approach Based On Spark Streaming Framework

    NASA Astrophysics Data System (ADS)

    Wang, H. T.; Chen, T. T.; Yan, C.; Pan, H.

    2018-05-01

    For the mobile App recommendation domain, an item-based collaborative filtering approach combined with the weighted Slope One algorithm is applied to App recommendation, further improving on the traditional collaborative filtering algorithm's problems of cold start and data matrix sparseness. The recommendation algorithm is parallelized on the Spark platform, and the Spark Streaming real-time computing framework is introduced to improve the real-time performance of App recommendation.
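
    For reference, the weighted Slope One scheme predicts a user's rating of an item from the average pairwise rating deviations between items, weighted by how many users co-rated each pair; a minimal in-memory sketch (the paper's version distributes an analogous computation over Spark Streaming):

    ```python
    from collections import defaultdict

    def slope_one_predict(ratings, user, target):
        """Weighted Slope One. `ratings` is {user: {item: rating}}.
        Returns the predicted rating of `target` for `user`, or None
        when no co-rated item provides evidence."""
        dev_sum = defaultdict(float)   # sum of (r_target - r_i) over co-raters
        count = defaultdict(int)       # number of co-raters per item i
        for r in ratings.values():
            if target in r:
                for i, ri in r.items():
                    if i != target:
                        dev_sum[i] += r[target] - ri
                        count[i] += 1
        num = den = 0.0
        for i, ri in ratings[user].items():
            if i != target and count[i]:
                # average deviation plus the user's own rating, weighted
                # by the number of users supporting that deviation
                num += (dev_sum[i] / count[i] + ri) * count[i]
                den += count[i]
        return num / den if den else None
    ```

    The co-rating counts are exactly what makes the scheme "weighted": deviations backed by more users dominate the prediction.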

  8. A comparison study of the start-up of a MnOx filter for catalytic oxidative removal of ammonium from groundwater and surface water.

    PubMed

    Cheng, Ya; Li, Ye; Huang, Tinglin; Sun, Yuankui; Shi, Xinxin; Shao, Yuezong

    2018-03-01

    As an efficient method for ammonium (NH4+) removal, contact catalytic oxidation technology has drawn much attention recently, due to its good low temperature resistance and short start-up period. Two identical filters were employed to compare the ammonium removal process during the start-up period in groundwater (Filter-N) and surface water (Filter-S) treatment. Two types of source water (groundwater and surface water) were used as the feed waters for the filtration trials. Although the same initiating method was used, Filter-N exhibited much better ammonium removal performance than Filter-S. The differences in catalytic activity between the two filters were probed using X-ray diffraction (XRD), scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), and compositional analysis. XRD results indicated that different manganese oxide species were formed in Filter-N and Filter-S. Furthermore, the Mn3p XPS spectra taken on the surface of the filter films revealed that the average manganese valence of the inactive manganese oxide film collected from Filter-S (FS-MnOx) was higher than that of the film collected from Filter-N (FN-MnOx). Mn(IV) was identified as the predominant oxidation state in FS-MnOx and Mn(III) as the predominant oxidation state in FN-MnOx. The results of compositional analyses suggested that the polyaluminum ferric chloride (PAFC) used during the surface water treatment was an important factor in the mineralogy and reactivity of the MnOx. This study provides a theoretical basis for promoting the wide application of the technology and has great practical significance. Copyright © 2017. Published by Elsevier B.V.

  9. Evaluating Learning Management System (LMS)-facilitated Delivery of Universal Design for Learning (UDL)

    NASA Astrophysics Data System (ADS)

    Bryans Bongey, Sarah

    This quantitative study involved 157 students in two sections of an undergraduate class in general biology, as well as one instructor who taught both sections of the course. It used resources from the Center for Applied Special Technologies (CAST) to evaluate the viability of a Learning Management System (LMS) to provide Universal Design for Learning (UDL). It also measured and tracked the instructor's level of efficacy in sustaining UDL approaches throughout the semester. In an effort to identify the UDL's specific outcomes or benefits to students, this study used a pre- and post- test to identify the treatment's impact on student engagement. Findings indicated that the LMS could be designed to comply with UDL guidelines, and the instructor was able to establish a high level of efficacy in maintaining that UDL design. However, based on the statistical analysis of pre- and post-test responses from control vs. treatment groups of students, the treatment was seen to have no significant effect in the area of student engagement. Overall, the study added to the literature by suggesting (a) the viability of the LMS as a means of providing UDL approaches, (b) the promise of the LMS as a tool faculty can use to deliver UDL with a high level of efficacy, and (c) the design's lack of effect in the area of student engagement. The fact that this study was limited to a single brand of LMS (Blackboard), a single instructor, and a single group of students underscores the need for further research.

  10. Improved pulse laser ranging algorithm based on high speed sampling

    NASA Astrophysics Data System (ADS)

    Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang

    2016-10-01

    Narrow pulse laser ranging achieves long-range target detection using laser pulses with low divergence beams. Pulse laser ranging is widely used in the military, industrial, civil, engineering and transportation fields. In this paper, an improved narrow pulse laser ranging algorithm based on high speed sampling is studied. Firstly, theoretical simulation models, including the laser emission and pulse laser ranging algorithm, are built and analyzed. An improved pulse ranging algorithm is developed that combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system is set up to implement the improved algorithm. The laser ranging hardware system includes a laser diode, a laser detector and a high sample rate data logging circuit. Subsequently, using the Verilog HDL language, the improved algorithm fusing the matched filter algorithm and the CFD algorithm is implemented in an FPGA chip. Finally, a laser ranging experiment is carried out on the hardware system to compare the ranging performance of the improved algorithm with that of the matched filter algorithm and the CFD algorithm alone. The test results demonstrate that the laser ranging hardware system realizes high speed processing and high speed sampling data transmission. The improved algorithm achieves 0.3 m distance ranging precision, meeting the expected performance and consistent with the theoretical simulation.
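
    The fusion of the two timing methods can be prototyped offline: a matched filter (correlation with the known pulse shape) maximizes SNR for echo detection, and constant fraction discrimination then extracts a timing point that is independent of pulse amplitude. A sketch assuming a Gaussian pulse shape (not the authors' FPGA implementation):

    ```python
    import numpy as np

    def matched_filter(signal, template):
        """Correlate the received waveform with the known pulse shape
        (the matched filter for white noise); the peak marks the echo."""
        return np.correlate(signal, template, mode="same")

    def cfd_crossing(pulse, fraction=0.5, delay=5):
        """Constant fraction discrimination: build the bipolar waveform
        f[n] = fraction * pulse[n] - pulse[n - delay] and return its first
        downward zero crossing as a fractional sample index. The crossing
        does not move when the pulse is scaled in amplitude."""
        f = fraction * np.asarray(pulse[delay:]) - np.asarray(pulse[:-delay])
        for n in range(1, len(f)):
            if f[n - 1] > 0 >= f[n]:
                # linear interpolation between the bracketing samples
                return delay + n - 1 + f[n - 1] / (f[n - 1] - f[n])
        return None
    ```

    Running the CFD on the matched-filter output combines the SNR gain of the first stage with the amplitude-independent timing of the second, which is the combination the paper evaluates.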

  11. Importance of granulometry on phase evolution and phase-to-phase relationships of experimentally burned impure limestones intended for production of hydraulic lime and/or natural cement

    NASA Astrophysics Data System (ADS)

    Kozlovcev, Petr; Přikryl, Richard; Přikrylová, Jiřina

    2015-04-01

    In contrast to modern ordinary Portland cement production from finely ground raw material blends, ancient burning of hydraulic lime was conducted by burning larger pieces of natural raw material. Due to the natural variability of raw material composition, exploitation of different beds from even one formation can result in products with significantly different composition and/or properties. The Prague basin (Neoproterozoic to pre-Variscan Palaeozoic of the central part of the Bohemian Massif - the so-called Barrandian area, Czech Republic) represents a classical example of a limestone-rich region with a long-term history of limestone burning for quicklime and/or various types of hydraulic binders. Because burning of natural hydraulic lime was abandoned in this region at the turn of the 19th/20th century, there is a significant gap in knowledge of the behavior of the various limestone types and of the influence of minor variations in composition on the quality of the burned product. Moreover, the importance of burning larger pieces of raw material for the development of proper phase-to-phase relationships (i.e. development of hydraulic phases below sintering temperature at mutual contacts of minerals) has not been examined before. To fill this gap, representative specimens of the major limestone types from the Prague basin were selected for experimental study: Upper Silurian limestones (Přídolí and Kopanina Lms.) and Lower Devonian limestones (Radotín, Kotýs, Řeporyje, Dvorce-Prokop, and Zlíchov Lms.). The petrographic character of the experimental material was examined by polarizing microscopy, cathodoluminescence, scanning electron microscopy with an energy dispersive spectrometer (SEM-EDS), and X-ray diffraction (XRD) of the insoluble residue. Based on data from wet silicate analyses, the modal composition of the studied impure limestones was computed. 
The experimental raw material was burned in a laboratory electric furnace at 1000 and 1200°C for 3 and/or 6 hours. Burned samples were examined by XRD for phase composition and by SEM-EDS for phase-to-phase relationships resulting from the burning. Based on our data it is evident that larnite-belite (dicalcium silicate) is the dominant phase in burned silica-rich limestones (represented by e.g. the Dvorce-Prokop, Přídolí and/or Kopanina Lms.). In clay-rich limestones containing kaolinite and illite, gehlenite and other calcium aluminates and aluminosilicates were detected (represented by the Kosoř, Řeporyje, and/or a portion of the Dvorce-Prokop Lms.). Due to the higher proportion of Fe-oxyhydroxides in the Řeporyje Lms., brownmillerite (calcium aluminoferrite) forms as a typical minor phase during burning. Free lime (plus its hydrated form, portlandite) makes up the dominant phase in limestones with a low non-carbonate admixture (the Kotýs and/or a portion of the Kopanina Lms.). These results clearly demonstrate that the presence of certain non-carbonate minerals governs the formation of certain hydraulic phases in the burned product, whilst the mutual proportions of individual minerals in the raw materials influence the amount of newly formed phases.

  12. Feasibility of laser marking in Barrett's esophagus with volumetric laser endomicroscopy: first-in-man pilot study.

    PubMed

    Swager, Anne-Fré; de Groof, Albert J; Meijer, Sybren L; Weusten, Bas L; Curvers, Wouter L; Bergman, Jacques J

    2017-09-01

    Volumetric laser endomicroscopy (VLE) provides a circumferential scan of the esophageal wall layers and has potential to improve detection of neoplasia in Barrett's esophagus (BE). The novel VLE laser marking system enables direct in vivo marking of suspicious areas as identified on VLE. These laser-marked areas can subsequently be targeted for biopsies. The aim was to evaluate the visibility and positional accuracy of laser marks (LMs) in different esophageal tissue types on white light endoscopy (WLE) and VLE. Patients with BE with or without neoplasia underwent imaging with VLE. Protocol refinements were practiced in a learning phase. In the second phase, visibility of LMs was assessed by random marking in squamous, BE, and gastric tissue. In phase 3, positional accuracy of the LMs was tested by identifying and laser marking surrogate targets (endoscopically placed cautery marks). In the final phase, the most suspicious areas for neoplasia were identified in each patient using VLE and targeted by LMs, and biopsy samples were subsequently obtained. Sixteen patients with BE were included (14 men; median age, 68 years), 1 of whom was included twice in different study phases. Worst histologic diagnoses were 9 non-dysplastic Barrett's esophagus (NDBE), 3 low-grade dysplasia (LGD), 4 high-grade dysplasia (HGD), and 1 early adenocarcinoma (EAC). In total, 222 LMs were placed, 97% of which were visible on WLE. All LMs were visible on VLE directly after marking, and 86% could be confirmed during post hoc analysis. LM targeting was successful, with positional accuracy in 85% of cautery marks. Inaccurate targeting was caused by system errors or difficult cautery mark visualization on VLE. In the final phase (5 patients), 18 areas suspicious on VLE were identified, all of which were successfully targeted by LMs (3 EAC, 3 HGD, 1 LGD, and 11 NDBE). Mean VLE procedure time was 22 minutes (±6 minutes standard deviation); mean endoscopy time was 56 minutes (±17 minutes). 
No adverse events were reported. This first-in-human study of VLE-guided laser marking was found to be feasible and safe in 17 procedures. Most LMs were visible on WLE and VLE. Targeting VLE areas of interest proved to be highly successful. VLE-guided laser marking may improve the detection and delineation of Barrett's neoplasia in the future. Copyright © 2017 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.

  13. Directly-deposited blocking filters for high-performance silicon x-ray detectors

    NASA Astrophysics Data System (ADS)

    Bautz, M.; Kissel, S.; Masterson, R.; Ryu, K.; Suntharalingam, V.

    2016-07-01

    Silicon X-ray detectors often require blocking filters to mitigate noise and out-of-band signal from UV and visible backgrounds. Such filters must be thin to minimize X-ray absorption, so direct deposition of filter material on the detector entrance surface is an attractive approach to fabrication of robust filters. On the other hand, the soft (E < 1 keV) X-ray spectral resolution of the detector is sensitive to the charge collection efficiency in the immediate vicinity of its entrance surface, so it is important that any filter layer is deposited without disturbing the electric field distribution there. We have successfully deposited aluminum blocking filters, ranging in thickness from 70 to 220 nm, on back-illuminated CCD X-ray detectors passivated by means of molecular beam epitaxy. Here we report measurements showing that directly deposited filters have little or no effect on soft X-ray spectral resolution. We also find that in applications requiring very large optical density (> OD 6) care must be taken to prevent light from entering the sides and mounting surfaces of the detector. Our methods have been used to deposit filters on the detectors of the REXIS instrument scheduled to fly on OSIRIS-REx later this year.

  14. Nonlocal variational model and filter algorithm to remove multiplicative noise

    NASA Astrophysics Data System (ADS)

    Chen, Dai-Qiang; Zhang, Hui; Cheng, Li-Zhi

    2010-07-01

    The nonlocal (NL) means filter proposed by Buades, Coll, and Morel (SIAM Multiscale Model. Simul. 4(2), 490-530, 2005), which makes full use of the redundancy information in images, has been shown to be very efficient for denoising images corrupted by additive Gaussian noise. On the basis of the NL method, and striving to minimize the conditional mean-square error, we design a NL means filter to remove multiplicative noise; combining the NL filter with a regularization method, we propose a NL total variational (TV) model and present a fast iterative algorithm for it. Experiments demonstrate that our algorithm is better than the TV method; it is superior in preserving small structures and textures and obtains an improvement in peak signal-to-noise ratio.
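
    For orientation, the weights at the heart of any NL-means filter compare whole patches rather than single pixels; an illustrative 1-D additive-Gaussian-noise version is sketched below (the paper's multiplicative-noise filter modifies the patch distance and the estimator):

    ```python
    import numpy as np

    def nl_means_1d(x, patch=3, search=10, h=0.5):
        """Basic 1-D non-local means: each sample is replaced by a weighted
        average of nearby samples whose surrounding patches look similar,
        with weights exp(-patch_distance^2 / h^2)."""
        n = len(x)
        pad = np.pad(x, patch, mode="reflect")
        out = np.empty(n)
        for i in range(n):
            pi = pad[i:i + 2 * patch + 1]              # patch around sample i
            lo, hi = max(0, i - search), min(n, i + search + 1)
            w = np.empty(hi - lo)
            for k, j in enumerate(range(lo, hi)):
                pj = pad[j:j + 2 * patch + 1]          # candidate patch
                d2 = np.mean((pi - pj) ** 2)
                w[k] = np.exp(-d2 / (h * h))
            out[i] = np.dot(w, x[lo:hi]) / w.sum()
        return out
    ```

    Because cross-edge patches receive exponentially small weights, the averaging suppresses noise while largely preserving the step, which is the redundancy-exploiting behavior the abstract refers to.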

  15. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm.

    PubMed

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-09-19

    In order to reduce the computational complexity and improve the pitch/roll estimation accuracy of a low-cost attitude heading reference system (AHRS) under conditions of magnetic distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions.

  16. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm

    PubMed Central

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-01-01

    In order to reduce the computational complexity and improve the pitch/roll estimation accuracy of a low-cost attitude heading reference system (AHRS) under conditions of magnetic distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions. PMID:28925979

  17. Information theoretic methods for image processing algorithm optimization

    NASA Astrophysics Data System (ADS)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable through manual calibration; thus an automated approach is a must. We discuss an information-theory-based metric for evaluating an algorithm's adaptive characteristics (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it measures physical "information restoration" rather than perceived image quality, it helps reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve a better subjective image quality. With appropriate adjustments, the criterion can be used to assess the whole imaging system (sensor plus post-processing).

  18. Implementation of real-time digital signal processing systems

    NASA Technical Reports Server (NTRS)

    Narasimha, M.; Peterson, A.; Narayan, S.

    1978-01-01

    Special-purpose hardware implementation of DFT computers and digital filters is considered in the light of newly introduced algorithms and IC devices. Recent work by Winograd on high-speed convolution techniques for computing short-length DFTs has motivated the development of algorithms more efficient than the FFT for evaluating the transform of longer sequences. Among these, prime factor algorithms appear suitable for special-purpose hardware implementations. Architectural considerations in designing DFT computers based on these algorithms are discussed. With the availability of monolithic multiplier-accumulators, a direct implementation of IIR and FIR filters, using random access memories in place of shift registers, appears attractive. The memory addressing scheme involved in such implementations is discussed. A simple counter set-up to address the data memory in the realization of FIR filters is also described. The combination of a set of simple filters (a weighting network) and a DFT computer is shown to realize a bank of uniform bandpass filters. The usefulness of this concept in arriving at a modular design for a million-channel spectrum analyzer, based on microprocessors, is discussed.
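
    The counter-addressed FIR realization described above can be illustrated in software, with a circular buffer standing in for the random access memory and a wrapping index for the address counter. This is a minimal sketch of the idea, not the paper's hardware design; the function name and parameters are illustrative.

```python
import numpy as np

def fir_filter(x, taps):
    """Direct-form FIR filter using a circular buffer (RAM analogue)
    instead of a shift register: only the write pointer moves."""
    n_taps = len(taps)
    ram = np.zeros(n_taps)   # data memory
    ptr = 0                  # address counter
    y = np.zeros(len(x))
    for n, sample in enumerate(x):
        ram[ptr] = sample                        # overwrite the oldest sample
        acc = 0.0
        for k in range(n_taps):                  # multiply-accumulate loop
            acc += taps[k] * ram[(ptr - k) % n_taps]
        y[n] = acc
        ptr = (ptr + 1) % n_taps                 # advance the address counter
    return y
```

    Because only the write pointer advances, each new sample overwrites the oldest one and no data are shifted, which is exactly what replacing shift registers with RAM buys in hardware.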

  19. Systolic Signal Processor/High Frequency Direction Finding

    DTIC Science & Technology

    1990-10-01

    MUSIC) algorithm and the finite impulse response (FIR) filter onto the testbed hardware was supported by joint sponsorship of the block and major bid... computational throughput. The systolic implementations of a four-channel finite impulse response (FIR) filter and multiple signal classification (MUSIC)... (MUSIC) algorithm was mated to a bank of finite impulse response (FIR) filters and a four-channel data acquisition subsystem. A complete description

  20. Stable Kalman filters for processing clock measurement data

    NASA Technical Reports Server (NTRS)

    Clements, P. A.; Gibbs, B. P.; Vandergraft, J. S.

    1989-01-01

    Kalman filters have been used for some time to process clock measurement data. Due to instabilities in the standard Kalman filter algorithms, the results have been unreliable and difficult to obtain. During the past several years, stable forms of the Kalman filter have been developed, implemented, and used in many diverse applications. These algorithms, while algebraically equivalent to the standard Kalman filter, exhibit excellent numerical properties. Two of these stable algorithms, the Upper triangular-Diagonal (UD) filter and the Square Root Information Filter (SRIF), have been implemented to replace the standard Kalman filter used to process data from the Deep Space Network (DSN) hydrogen maser clocks. The data are time offsets between the clocks in the DSN, the timescale at the National Institute of Standards and Technology (NIST), and two geographically intermediate clocks. The measurements are made by using the GPS navigation satellites in mutual view between clocks. The filter programs allow the user to easily modify the clock models, the GPS satellite dependent biases, and the random noise levels in order to compare different modeling assumptions. The results of this study show the usefulness of such software for processing clock data. The UD filter is indeed a stable, efficient, and flexible method for obtaining optimal estimates of clock offsets, offset rates, and drift rates. A brief overview of the UD filter is also given.
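
    The UD filter mentioned above is built on factoring the covariance matrix as P = U·D·Uᵀ, with U unit upper triangular and D diagonal. Below is a minimal sketch of that factorization (Bierman's algorithm), assuming a symmetric positive-definite P; it covers only the decomposition step, not the full UD measurement update.

```python
import numpy as np

def ud_decompose(P):
    """Factor a symmetric positive-definite P as U @ diag(D) @ U.T,
    with U unit upper triangular. Propagating U and D instead of P
    itself is what gives the UD filter its numerical stability."""
    P = P.copy()
    n = P.shape[0]
    U = np.eye(n)
    D = np.zeros(n)
    for j in range(n - 1, -1, -1):
        D[j] = P[j, j]
        for i in range(j):
            U[i, j] = P[i, j] / D[j]
            # fold the rank-1 contribution of column j back into P
            for k in range(i + 1):
                P[k, i] -= U[k, j] * D[j] * U[i, j]
    return U, D
```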

  1. Atmospheric electricity/meteorology analysis

    NASA Technical Reports Server (NTRS)

    Goodman, Steven J.; Blakeslee, Richard; Buechler, Dennis

    1993-01-01

    This activity focuses on Lightning Imaging Sensor (LIS)/Lightning Mapper Sensor (LMS) algorithm development and applied research. Specifically we are exploring the relationships between (1) global and regional lightning activity and rainfall, and (2) storm electrical development, physics, and the role of the environment. U.S. composite radar-rainfall maps and ground strike lightning maps are used to understand lightning-rainfall relationships at the regional scale. These observations are then compared to SSM/I brightness temperatures to simulate LIS/TRMM multi-sensor algorithm data sets. These data sets are supplied to the WETNET project archive. WSR88-D (NEXRAD) data are also used as they become available. The results of this study allow us to examine the information content from lightning imaging sensors in low-earth and geostationary orbits. Analysis of tropical and U.S. data sets continues. A neural network/sensor fusion algorithm is being refined for objectively associating lightning and rainfall with their parent storm systems. Total lightning data from interferometers are being used in conjunction with data from the national lightning network. A 6-year lightning/rainfall climatology has been assembled for LIS sampling studies.

  2. Distributed Unmixing of Hyperspectral Data with Sparsity Constraint

    NASA Astrophysics Data System (ADS)

    Khoshsokhan, S.; Rajabi, R.; Zayyani, H.

    2017-09-01

    Spectral unmixing (SU) is a data processing problem in hyperspectral remote sensing. The significant challenge in the SU problem is how to identify endmembers and their weights accurately. For estimating the signature and fractional abundance matrices in a blind problem, nonnegative matrix factorization (NMF) and its developments are widely used in the SU problem. One constraint added to NMF is a sparsity constraint regularized by the L1/2 norm. In this paper, a new algorithm based on distributed optimization is used for spectral unmixing. In the proposed algorithm, a network of single-node clusters is employed; each pixel in the hyperspectral image is considered a node in this network. The distributed unmixing with sparsity constraint is optimized with the diffusion LMS strategy, and the update equations for the fractional abundance and signature matrices are then obtained. Simulation results based on defined performance metrics illustrate the advantage of the proposed algorithm in spectral unmixing of hyperspectral data compared with other methods. The results show that the AAD and SAD of the proposed approach are improved by about 6 and 27 percent, respectively, compared with distributed unmixing at SNR = 25 dB.
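
    The diffusion LMS strategy referred to above can be sketched in its plain adapt-then-combine form: each node takes a local LMS step, then averages the intermediate estimates of its neighbors. The sketch below omits the L1/2 sparsity term of the paper's algorithm, and all names are illustrative.

```python
import numpy as np

def diffusion_lms(X, d, A, mu=0.1, n_iter=2000):
    """Adapt-then-combine diffusion LMS over a network.
    X[k]: regressor matrix for node k (n_samples x n_features),
    d[k]: desired responses for node k,
    A: combination matrix whose columns sum to 1."""
    n_nodes = len(X)
    w = np.zeros((n_nodes, X[0].shape[1]))
    for it in range(n_iter):
        # adapt: local LMS step at each node on one sample
        psi = np.empty_like(w)
        for k in range(n_nodes):
            i = it % X[k].shape[0]
            err = d[k][i] - X[k][i] @ w[k]
            psi[k] = w[k] + mu * err * X[k][i]
        # combine: weighted average of neighbors' intermediate estimates
        for k in range(n_nodes):
            w[k] = A[:, k] @ psi
    return w
```

    With a uniform combination matrix, every node converges to the common weight vector even though each node only ever sees its own data.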

  3. An iterative sinogram gap-filling method with object- and scanner-dedicated discrete cosine transform (DCT)-domain filters for high resolution PET scanners.

    PubMed

    Kim, Kwangdon; Lee, Kisung; Lee, Hakjae; Joo, Sungkwan; Kang, Jungwon

    2018-01-01

    We aimed to develop a gap-filling algorithm, in particular the filter mask design method of the algorithm, which optimizes the filter to the imaging object by an adaptive and iterative process rather than by manual means. Two numerical phantoms (Shepp-Logan and Jaszczak) were used for sinogram generation. The algorithm works iteratively, not only in the gap-filling iteration but also in the mask generation, to identify the object-dedicated low-frequency area in the DCT domain that is to be preserved. We redefine the low-frequency-preserving region of the filter mask at every gap-filling iteration, and the region converges toward the properties of the original image in the DCT domain. The previous DCT2 mask for each phantom case had been well optimized manually, and the results show little difference from the reference image and sinogram. We observed little or no difference between the results of the manually optimized DCT2 algorithm and those of the proposed algorithm. The proposed algorithm works well for various types of scanning object and shows results comparable to those of the manually optimized DCT2 algorithm without complete prior information about the imaging object.

  4. Design of a composite filter realizable on practical spatial light modulators

    NASA Technical Reports Server (NTRS)

    Rajan, P. K.; Ramakrishnan, Ramachandran

    1994-01-01

    Hybrid optical correlator systems use two spatial light modulators (SLM's), one at the input plane and the other at the filter plane. Currently available SLM's such as the deformable mirror device (DMD) and liquid crystal television (LCTV) SLM's exhibit arbitrarily constrained operating characteristics. Pattern recognition filters designed under the assumption that the SLM's have ideal operating characteristics may not behave as expected when implemented on DMD or LCTV SLM's. Therefore it is necessary to incorporate the SLM constraints in the design of the filters. In this report, an iterative method is developed for the design of an unconstrained minimum average correlation energy (MACE) filter. Then, using this algorithm, a new approach for the design of an SLM-constrained distortion-invariant filter in the presence of an input SLM is developed. Two different optimization algorithms are used to maximize the objective function during filter synthesis, one based on the simplex method and the other based on the Hooke and Jeeves method. Also, the simulated annealing based filter design algorithm proposed by Khan and Rajan is refined and improved. The performance of the filter is evaluated in terms of its recognition/discrimination capabilities using computer simulations, and the results are compared with a simulated annealing optimization based MACE filter. The filters are designed for different LCTV SLM operating characteristics and the correlation responses are compared. The distortion tolerance and the false-class image discrimination qualities of the filter are comparable to those of the simulated annealing based filter, but the new filter design takes about 1/6 of the computer time taken by the simulated annealing filter design.

  5. Hybrid employment recommendation algorithm based on Spark

    NASA Astrophysics Data System (ADS)

    Li, Zuoquan; Lin, Yubei; Zhang, Xingming

    2017-08-01

    Aiming at the real-time application of the collaborative filtering (CF) employment recommendation algorithm, a clustering collaborative filtering recommendation algorithm (CCF) is developed, which applies hierarchical clustering to CF and narrows the query range of neighbour items. In addition, to solve the cold-start problem of the content-based recommendation algorithm (CB), a content-based algorithm with users’ information (CBUI) is introduced for job recommendation. Furthermore, a hybrid recommendation algorithm (HRA) which combines the CCF and CBUI algorithms is proposed and implemented on the Spark platform. The experimental results show that HRA can overcome the problems of cold start and data sparsity, and achieves good recommendation accuracy and scalability for employment recommendation.

  6. Optimizing of a high-order digital filter using PSO algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Fuchun

    2018-04-01

    A self-adaptive high-order digital filter, which offers the opportunity to simplify parameter tuning and further improve noise performance, is presented in this paper. The parameters of a traditional digital filter are mainly tuned by complex calculation, whereas this paper presents a 5th-order digital filter whose parameters are optimized by a swarm intelligence algorithm to obtain outstanding performance. In simulations of the proposed 5th-order digital filter, an SNR > 122 dB and a noise floor below -170 dB are obtained in the frequency range of 5-150 Hz. In further simulation, the robustness of the proposed 5th-order digital filter is analyzed.
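
    Swarm-intelligence parameter tuning of the kind described above can be sketched with a generic particle swarm optimizer. The cost function here (matching a small set of FIR taps to a target) is only a stand-in for the paper's actual filter objective, and all parameter names are illustrative.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-2.0, 2.0), seed=0):
    """Minimal particle swarm optimizer: inertia w, cognitive pull c1
    toward each particle's best, social pull c2 toward the swarm best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest

# example: tune 3 filter taps toward a known target response
target = np.array([0.5, 0.3, 0.2])
cost = lambda taps: np.sum((taps - target) ** 2)
best = pso_minimize(cost, dim=3)
```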

  7. Accurate mask-based spatially regularized correlation filter for visual tracking

    NASA Astrophysics Data System (ADS)

    Gu, Xiaodong; Xu, Xinping

    2017-01-01

    Recently, discriminative correlation filter (DCF)-based trackers have achieved extremely successful results in many competitions and benchmarks. These methods utilize a periodic assumption on the training samples to efficiently learn a classifier. However, this assumption produces unwanted boundary effects, which severely degrade tracking performance. Correlation filters with limited boundaries and spatially regularized DCFs were proposed to reduce boundary effects. However, these methods used a fixed mask or a predesigned weight function, respectively, which is unsuitable for large appearance variation. We propose an accurate mask-based spatially regularized correlation filter for visual tracking. Our augmented objective can reduce the boundary effect even under large appearance variation. In our algorithm, the masking matrix is converted into a regularization function that acts on the correlation filter in the frequency domain, which gives the algorithm fast convergence. Our online tracking algorithm performs favorably against state-of-the-art trackers on the OTB-2015 benchmark in terms of efficiency, accuracy, and robustness.

  8. New color-based tracking algorithm for joints of the upper extremities

    NASA Astrophysics Data System (ADS)

    Wu, Xiangping; Chow, Daniel H. K.; Zheng, Xiaoxiang

    2007-11-01

    To track the joints of the upper limb of stroke sufferers for rehabilitation assessment, a new tracking algorithm is proposed in this paper, which combines a color-based particle filter with a novel strategy for handling occlusions. Objects are represented by their color histogram models, and a particle filter is introduced to track the objects within a probabilistic framework. A Kalman filter, acting as a local optimizer, is integrated into the sampling stage of the particle filter; it steers samples toward a region of high likelihood, so fewer samples are required. A color clustering method and anatomic constraints are used to deal with the occlusion problem. Compared with the basic particle filtering method, the experimental results show that the new algorithm reduces the number of samples and hence the computational cost, and handles complete occlusion over a few frames more effectively.
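
    The particle-filtering machinery underlying such a tracker can be illustrated with a minimal 1-D bootstrap filter (predict, weight by likelihood, resample). The paper's color-histogram likelihood and Kalman-guided sampling are omitted here, and all names are illustrative.

```python
import numpy as np

def particle_filter(observations, n_particles=500, proc_std=1.0,
                    obs_std=1.0, seed=0):
    """Minimal bootstrap particle filter for a 1-D random-walk state:
    predict with process noise, weight by Gaussian likelihood, resample."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(observations[0], obs_std, n_particles)
    estimates = []
    for z in observations:
        particles += rng.normal(0.0, proc_std, n_particles)  # predict
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)  # likelihood
        w /= w.sum()
        estimates.append(w @ particles)                      # weighted mean
        idx = rng.choice(n_particles, n_particles, p=w)      # resample
        particles = particles[idx]
    return np.array(estimates)
```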

  9. Toward Directly-Deposited Optical Blocking Filters for High-performance, Back-illuminated Imaging X-ray Detectors

    NASA Astrophysics Data System (ADS)

    Bautz, Mark W.; Kissel, S. E.; Ryu, K.; Suntharalingam, V.

    2014-01-01

    Silicon X-ray detectors require optical blocking filters to prevent out-of-band (UV, visible and near-IR) radiation from corrupting the X-ray signal. Traditionally, blocking filters have been deposited on thin, free-standing membranes suspended over the detector. Free-standing filters are fragile, however, and in past instruments have required heavy and complex vacuum housings to protect them from acoustic loads during ground operations and launch. A directly-deposited blocking filter greatly simplifies the instrument and in principle permits better soft X-ray detection efficiency than a traditional free-standing filter. Directly-deposited filters have flown in previous generation instruments (e.g. the XMM/Newton Reflection Grating Spectrometer) but none has yet been demonstrated on a modern, high-performance back-illuminated X-ray CCD. We report here on the status of our NASA-funded Strategic Astrophysics Technology program to demonstrate such filters.

  10. Prior image constrained scatter correction in cone-beam computed tomography image-guided radiation therapy.

    PubMed

    Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong

    2011-02-21

    X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high-quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers and then reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
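
    The weight-fitting step described above, choosing weights so that a combination of basis images best matches the prior image, is a linear least-squares problem. Below is a minimal sketch of just that step; in the paper the basis images come from filtered backprojection of the powered projection data, whereas here they are simply given as inputs, and the function name is illustrative.

```python
import numpy as np

def fit_basis_weights(basis_images, prior_image):
    """Least-squares weights so that the weighted sum of basis images
    best matches the low-artifact prior image."""
    # each flattened basis image becomes one column of the design matrix
    A = np.stack([b.ravel() for b in basis_images], axis=1)
    w, *_ = np.linalg.lstsq(A, prior_image.ravel(), rcond=None)
    return w
```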

  11. Langerian mindfulness, quality of life and psychological symptoms in a sample of Italian students.

    PubMed

    Pagnini, Francesco; Bercovitz, Katherine E; Phillips, Deborah

    2018-02-06

    Noticing new things, accepting the continuously changing nature of circumstances, and flexibly shifting perspectives in concert with changing contexts constitute the essential features of Langerian mindfulness. This contrasts with a "mindless" approach in which one remains fixed in a singular mindset and is closed off to new possibilities. Despite potentially important clinical applications for this construct, few studies have explored them. The instrument developed to measure Langerian mindfulness is the Langer Mindfulness Scale (LMS), although this tool has been limited primarily to English-speaking populations. The study aimed to test LMS validity in the Italian language and to analyze the relationships between Langerian mindfulness and well-being. We translated the LMS into Italian, analyzed its factor structure, and investigated the correlation between mindfulness and quality of life and psychological well-being in a sample of 248 Italian students (88.7% females, mean age 20.05). A confirmatory factor analysis confirmed the tri-dimensional structure of the English LMS in the Italian version. The primary analysis found a significant negative correlation between mindfulness and psychological symptoms including obsessive-compulsive tendencies, depression, anxiety, and paranoid ideation. There was also a positive correlation between mindfulness and reports of quality of life. The Italian LMS appears reliable and it shows relevant correlations with well-being.

  12. Corporate knowledge repository: Adopting academic LMS into corporate environment

    NASA Astrophysics Data System (ADS)

    Bakar, Muhamad Shahbani Abu; Jalil, Dzulkafli

    2017-10-01

    The growth of the knowledge economy has made human capital the vital asset of business organizations in the 21st century. Arguably, due to its white-collar nature, knowledge-based industry is more favorable than traditional manufacturing business. However, over-dependency on human capital can also be a major challenge, as workers will inevitably leave the company or retire. This situation can create a knowledge gap that may impact the business continuity of the enterprise. Knowledge retention in the corporate environment has attracted much research interest. A Learning Management System (LMS) refers to a system that provides the delivery, assessment, and management tools for an organization to handle its knowledge repository. Drawing on a proven LMS implemented in an academic environment, this paper proposes an LMS model that enables peer-to-peer knowledge capture and sharing in a knowledge-based organization. Cloud Enterprise Resource Planning (ERP), referring to an ERP solution delivered in the internet cloud environment, was chosen as the domain knowledge. The complexity of the Cloud ERP business and its knowledge makes it very vulnerable to the knowledge retention problem. This paper discusses how the company's essential knowledge can be retained using an LMS derived from the academic environment and adapted into the corporate model.

  13. Feasibility study for application of the compressed-sensing framework to interior computed tomography (ICT) for low-dose, high-accurate dental x-ray imaging

    NASA Astrophysics Data System (ADS)

    Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.

    2016-02-01

    In this paper, we propose a new, next-generation type of CT examination, the so-called Interior Computed Tomography (ICT), which may reduce the dose to the patient outside the target region-of-interest (ROI) in dental x-ray imaging. Here an x-ray beam from each projection position covers only a relatively small ROI containing the diagnostic target within the examined structure, leading to imaging benefits such as decreased scatter and system cost as well as reduced imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate the imaging characteristics. Two ROI ratios (0.28 and 0.14) between the target and the whole phantom size and four projection numbers (360, 180, 90, and 45) were tested. We successfully reconstructed ICT images of substantially high quality by using the CS framework even with few-view projection data, while still preserving sharp edges in the images.

  14. Progress in navigation filter estimate fusion and its application to spacecraft rendezvous

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    1994-01-01

    A new derivation of an algorithm which fuses the outputs of two Kalman filters is presented within the context of previous research in this field. Unlike other works, this derivation clearly shows the combination of estimates to be optimal, minimizing the trace of the fused covariance matrix. The algorithm assumes that the filters use identical models, and are stable and operating optimally with respect to their own local measurements. Evidence is presented which indicates that the error ellipsoid derived from the covariance of the optimally fused estimate is contained within the intersections of the error ellipsoids of the two filters being fused. Modifications which reduce the algorithm's data transmission requirements are also presented, including a scalar gain approximation, a cross-covariance update formula which employs only the two contributing filters' autocovariances, and a form of the algorithm which can be used to reinitialize the two Kalman filters. A sufficient condition for using the optimally fused estimates to periodically reinitialize the Kalman filters in this fashion is presented and proved as a theorem. When these results are applied to an optimal spacecraft rendezvous problem, simulated performance results indicate that the use of optimally fused data leads to significantly improved robustness to initial target vehicle state errors. The following applications of estimate fusion methods to spacecraft rendezvous are also described: state vector differencing, and redundancy management.
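
    The core of fusing two filter outputs can be sketched with the information-weighted combination below, which minimizes the trace of the fused covariance when the two errors are uncorrelated. The paper's algorithm additionally accounts for the cross-covariance between the filters, which this sketch omits; names are illustrative.

```python
import numpy as np

def fuse_estimates(x1, P1, x2, P2):
    """Information-weighted fusion of two estimates (x1, P1) and (x2, P2),
    assuming uncorrelated estimation errors."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)           # fused covariance
    x = P @ (I1 @ x1 + I2 @ x2)          # fused state estimate
    return x, P
```

    The fused covariance is never larger than either input covariance, consistent with the error-ellipsoid containment property noted in the abstract.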

  15. Inertial sensor-based smoother for gait analysis.

    PubMed

    Suh, Young Soo

    2014-12-17

    An off-line smoother algorithm is proposed to estimate foot motion using an inertial sensor unit (three-axis gyroscopes and accelerometers) attached to a shoe. The smoother gives more accurate foot motion estimation than filter-based algorithms by using all of the sensor data instead of only the data up to the current time. The algorithm consists of two parts. In the first part, a Kalman filter is used to obtain an initial foot motion estimate. In the second part, the error in the initial estimate is compensated using a smoother, where the problem is formulated as a quadratic optimization problem. An efficient solution of the quadratic optimization problem is given by exploiting its sparse structure. Through experiments, it is shown that the proposed algorithm can estimate foot motion more accurately than a filter-based algorithm with reasonable computation time. In particular, there is significant improvement in the foot motion estimation when the foot is moving off the floor: the z-axis position squared error sum (total time: 3.47 s) when the foot is in the air is 0.0807 m² for the Kalman filter versus 0.0020 m² for the proposed smoother.

  16. A novel retinal vessel extraction algorithm based on matched filtering and gradient vector flow

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Xia, Mingliang; Xuan, Li

    2013-10-01

    The microvasculature network of the retina plays an important role in the study and diagnosis of retinal diseases (e.g., age-related macular degeneration and diabetic retinopathy). Although it is possible to noninvasively acquire high-resolution retinal images with modern retinal imaging technologies, non-uniform illumination, the low contrast of thin vessels, and background noise all make diagnosis difficult. In this paper, we introduce a novel retinal vessel extraction algorithm based on gradient vector flow and matched filtering to segment retinal vessels at different likelihood levels. Firstly, we use an isotropic Gaussian kernel and adaptive histogram equalization to smooth and enhance the retinal images, respectively. Secondly, a multi-scale matched filtering method is adopted to extract the retinal vessels. Then, the gradient vector flow algorithm is introduced to locate the edges of the retinal vessels. Finally, we combine the results of the matched filtering method and the gradient vector flow algorithm to extract the vessels at different likelihood levels. The experiments demonstrate that our algorithm is efficient and that the intensities of the vessel images accurately represent the likelihood of the vessels.
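
    Multi-scale matched filtering of the kind used above can be illustrated in one dimension: correlate the signal with zero-mean Gaussian templates at several scales and keep the strongest response. This is a 1-D stand-in for the paper's 2-D rotated vessel kernels; names and scales are illustrative.

```python
import numpy as np

def matched_filter_response(signal, scales):
    """Multi-scale matched filtering: correlate the signal with
    zero-mean Gaussian templates and keep the maximum response."""
    best = np.full(len(signal), -np.inf)
    for sigma in scales:
        half = int(3 * sigma)
        t = np.arange(-half, half + 1)
        kernel = np.exp(-t**2 / (2 * sigma**2))
        kernel -= kernel.mean()                # zero-mean template
        resp = np.correlate(signal, kernel, mode='same')
        best = np.maximum(best, resp)          # best scale per sample
    return best

# a Gaussian "vessel" bump at index 50 should give the peak response there
x = np.exp(-(np.arange(100) - 50) ** 2 / (2 * 4.0 ** 2))
resp = matched_filter_response(x, scales=[2, 4, 8])
```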

  17. Photon counting x-ray imaging with K-edge filtered x-rays: A simulation study.

    PubMed

    Atak, Haluk; Shikhaliev, Polad M

    2016-03-01

    In photon counting (PC) x-ray imaging and computed tomography (CT), the broad x-ray spectrum can be split into two parts using an x-ray filter with an appropriate K-edge energy, which can improve material decomposition. A recent experimental study demonstrated substantial improvement in material decomposition with PC CT when K-edge filtered x-rays were used. The purpose of the current work was to conduct further investigations of the K-edge filtration method using comprehensive simulation studies. The study addressed the following aspects: (1) optimization of the K-edge filter for a particular imaging configuration, (2) effects of the K-edge filter parameters on material decomposition, (3) the trade-off between energy bin separation, tube load, and beam quality with a K-edge filter, (4) image quality of general (unsubtracted) images when a K-edge filter is used to improve dual energy (DE) subtracted images, and (5) improvements with K-edge filtered x-rays when the PC detector has limited energy resolution. PC x-ray images of soft tissue phantoms with 15 and 30 cm thicknesses, including iodine, CaCO3, and soft tissue contrast materials, were simulated. The signal to noise ratio (SNR) of the contrast elements was determined in general and material-decomposed images using K-edge filters with different atomic numbers and thicknesses. The effects of the filter atomic number and filter thickness on the energy separation factor and SNR were determined. The boundary conditions for the tube load and half-value layer were determined when the K-edge filters are used. The material-decomposed images were also simulated using a PC detector with limited energy resolution, and the improvements with K-edge filtered x-rays were quantified. K-edge filters with atomic numbers from 56 to 71 and K-edge energies of 37.4-63.4 keV can be used for tube voltages from 60 to 150 kVp.
For a particular tube voltage of 120 kVp, Gd and Ho were the optimal filter materials to achieve the highest SNR. For a particular K-edge filter of Gd and a tube voltage of 120 kVp, a filter thickness of 0.6 mm provided maximum SNR for the considered imaging applications. While K-edge filtration improved the SNR of CaCO3 and iodine by 41% and 36%, respectively, in DE subtracted images, it did not deteriorate the SNR in general images. For x-ray imaging with a nonideal PC detector, the positive effect of the K-edge filter increased as the FWHM energy resolution was degraded, with maximum improvement at 60% FWHM. This study has shown that K-edge filtered x-rays can provide substantial improvements in material-selective PC x-ray and CT imaging for nearly all imaging applications using 60-150 kVp tube voltages. Potential limitations such as tube load, beam hardening, and availability of filter material were shown not to be critical.

  18. Performance measurement of PSF modeling reconstruction (True X) on Siemens Biograph TruePoint TrueV PET/CT.

    PubMed

    Lee, Young Sub; Kim, Jin Su; Kim, Kyeong Min; Kang, Joo Hyun; Lim, Sang Moo; Kim, Hee-Joung

    2014-05-01

    The Siemens Biograph TruePoint TrueV (B-TPTV) positron emission tomography (PET) scanner performs 3D PET reconstruction using a system matrix with point spread function (PSF) modeling (called True X reconstruction). PET resolution was dramatically improved with the True X method. In this study, we assessed the spatial resolution and image quality of the B-TPTV PET scanner. In addition, we assessed the feasibility of animal imaging with the B-TPTV PET and compared it with the microPET R4 scanner. Spatial resolution was measured at the center and at an 8 cm offset from the center in the transverse plane with warm background activity. The True X, ordered subset expectation maximization (OSEM) without PSF modeling, and filtered back-projection (FBP) reconstruction methods were used. Percent contrast (% contrast) and percent background variability (% BV) were assessed according to NEMA NU2-2007. The recovery coefficient (RC), non-uniformity, spill-over ratio (SOR), and PET imaging of the Micro Deluxe Phantom were assessed to compare the image quality of B-TPTV PET with that of the microPET R4. When True X reconstruction was used, spatial resolution was <3.65 mm with warm background activity. The % contrast and % BV with True X reconstruction were higher than those with the OSEM reconstruction algorithm without PSF modeling. In addition, the RC with True X reconstruction was higher than that with the FBP method and with OSEM without PSF modeling on the microPET R4. The non-uniformity with True X reconstruction was higher than that with FBP and OSEM without PSF modeling on the microPET R4. The SOR with True X reconstruction was better than that with FBP or OSEM without PSF modeling on the microPET R4. This study assessed the performance of True X reconstruction. Spatial resolution with True X reconstruction was improved by 45%, and its % contrast was significantly improved compared to that with the conventional OSEM reconstruction without PSF modeling. 
However, the noise level was higher than with the other reconstruction algorithms. Therefore, True X reconstruction should be used with caution when quantifying PET data.

  19. A real-time algorithm for integrating differential satellite and inertial navigation information during helicopter approach. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hoang, TY

    1994-01-01

    A real-time, high-rate precision navigation Kalman filter algorithm is developed and analyzed. This navigation algorithm blends various navigation data collected during terminal-area approach of an instrumented helicopter. The navigation data include helicopter position and velocity from a global positioning system in differential mode (DGPS) as well as helicopter velocity and attitude from an inertial navigation system (INS). The goal of the navigation algorithm is to increase the DGPS accuracy while producing navigational data at the 64 Hz INS update rate. It is important to note that while the data were post-flight processed, the navigation algorithm was designed for real-time operation. The design of the navigation algorithm resulted in a nine-state Kalman filter. The Kalman filter's state vector contains position, velocity, and velocity-bias components. The filter updates positional readings with DGPS position, INS velocity, and velocity-bias information. In addition, the filter incorporates a sporadic data rejection scheme. This relatively simple model met and exceeded the ten-meter absolute positional requirement. The navigation algorithm results were compared with truth data derived from a laser tracker. The helicopter flight profile included terminal glideslope angles of 3, 6, and 9 degrees. Two flight segments extracted during each terminal approach were used to evaluate the navigation algorithm. The first segment recorded a small dynamic maneuver in the lateral plane, while the second segment recorded motion in the vertical plane. The longitudinal, lateral, and vertical averaged positional accuracies for all three glideslope approaches are as follows (mean plus or minus two standard deviations, in meters): longitudinal (-0.03 plus or minus 1.41), lateral (-1.29 plus or minus 2.36), and vertical (-0.76 plus or minus 2.05).

  20. Breast cancer detection using time reversal

    NASA Astrophysics Data System (ADS)

    Sheikh Sajjadieh, Mohammad Hossein

    Breast cancer is the second leading cause of cancer death, after lung cancer, among women. Mammography and magnetic resonance imaging (MRI) have certain limitations in detecting breast cancer, especially during its early stage of development. A number of studies have shown that microwave breast cancer detection has the potential to become a successful clinical complement to conventional X-ray mammography. Microwave breast imaging is performed by illuminating the breast tissues with an electromagnetic waveform and recording its reflections (backscatters), emanating from variations in the normal breast tissues and tumour cells, if present, using an antenna array. These backscatters, referred to as the overall (tumour and clutter) response, are processed to estimate the tumour response, which is applied as input to array imaging algorithms used to estimate the location of the tumour. Because the breast profile changes over time, the commonly utilized background subtraction procedures used to estimate the target (tumour) response in array processing are impractical for breast cancer detection. The thesis proposes a new tumour estimation algorithm based on a combination of the data adaptive filter with the envelope detection filter (DAF/EDF), which collectively do not require a training step. After establishing the superiority of the DAF/EDF-based approach, the thesis shows that the time reversal (TR) array imaging algorithms outperform their conventional counterparts in detecting and localizing tumour cells in breast tissues at SNRs ranging from 15 to 30 dB.

  1. Optical and thermogravimetric analysis of Zn1-xCuxS/PVA nanocomposite films

    NASA Astrophysics Data System (ADS)

    Mohamed, Mohamed Bakr; Heiba, Zein K.; Imam, N. G.

    2018-07-01

    Cu-doped ZnS nanoparticles with a cubic zinc-blende structure were prepared successfully through a thermolysis route and then composited with polyvinyl alcohol (PVA) using a casting method. The Zn1-xCuxS/PVA nanocomposites were characterized using different characterization techniques. The quantum-dot nature of the ZnS:Cu phase was confirmed by transmission electron microscopy. Thermal stability was studied by thermogravimetric analysis. The ultraviolet measurements illustrated that adding Zn1-xCuxS nanoparticles to the PVA matrix increased the film absorbance. Furthermore, the energy gap and refractive index of the composites were obtained from ultraviolet and photoluminescence spectrophotometers. The photoluminescence spectra of the ZnS:Cu/PVA nanocomposite films demonstrated a quite broad emission peak at 435 nm, with the highest photoluminescence intensity in the nanocomposite doped with 1% Cu.

  2. Parallel exploitation of a spatial-spectral classification approach for hyperspectral images on RVC-CAL

    NASA Astrophysics Data System (ADS)

    Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.

    2017-10-01

    Hyperspectral Imaging (HI) assembles high-resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, thus generating 3D data cubes in which each pixel gathers the spectral reflectance information of its spatial location. As a result, each image contains a large volume of data, which turns its processing into a challenge, as performance requirements have been continuously tightened. For instance, new HI applications demand real-time responses. Hence, parallel processing becomes a necessity to achieve this requirement, so the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies the parallelization process by mapping the different blocks over different processing units. The spatial-spectral classification approach aims at refining the classification results previously obtained by using a K-Nearest Neighbors (KNN) filtering process, in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time, as mapping them over different cores yields a speedup of 2.69x when using 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.
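    The first and third stages of such a pipeline can be sketched in plain NumPy. This is a toy illustration under simplifying assumptions: PCA via SVD stands in for the paper's PCA block, the pixel-wise classifier's output is taken as given, and all function names and parameters are invented:

    ```python
    import numpy as np

    def pca_band(cube):
        """Stage 1: one-band guide image = first principal component of an
        (H, W, B) hyperspectral cube, computed via SVD."""
        H, W, B = cube.shape
        X = cube.reshape(-1, B)
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # Vt[0] = first PC
        return (Xc @ Vt[0]).reshape(H, W)

    def knn_filter(labels, guide, k=9, radius=2, alpha=50.0):
        """Stage 3: KNN-style spatial-spectral relabeling. Each pixel takes
        the majority label of its k nearest neighbours in (guide value, row,
        col) feature space, searched inside a (2*radius+1)^2 window."""
        H, W = labels.shape
        out = labels.copy()
        for i in range(H):
            for j in range(W):
                i0, i1 = max(0, i - radius), min(H, i + radius + 1)
                j0, j1 = max(0, j - radius), min(W, j + radius + 1)
                ii, jj = np.mgrid[i0:i1, j0:j1]
                d = (alpha * (guide[ii, jj] - guide[i, j]) ** 2
                     + (ii - i) ** 2 + (jj - j) ** 2)
                nearest = np.argsort(d.ravel())[:k]
                votes = labels[ii, jj].ravel()[nearest]
                out[i, j] = np.bincount(votes).argmax()
        return out
    ```

    On synthetic data with isolated label errors, the guided vote removes most of them; in the dataflow implementation each of these stages would map to its own functional unit.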

  3. The Power Plant Operating Data Based on Real-time Digital Filtration Technology

    NASA Astrophysics Data System (ADS)

    Zhao, Ning; Chen, Ya-mi; Wang, Hui-jie

    2018-03-01

    Real-time monitoring of thermal power plant data is the basis for accurately analyzing thermal economy and accurately reconstructing the operating state. Because noise interference is inevitable, the real-time monitoring data must be filtered to obtain accurate information on the operating data of the plant's units and equipment. A real-time filtering algorithm cannot use future data to correct current data, so it faces many constraints compared with traditional filtering algorithms. The first-order lag filtering method and the weighted recursive average filtering method can both be used for real-time filtering. This paper analyzes the characteristics of the two filtering methods and their application to real-time processing of the positive-spin simulation data and of the thermal power plant operating data. The analysis reveals that the weighted recursive average filtering method achieves very good results when applied to both the simulation data and the real-time plant data.
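    The two causal filters named above admit a compact sketch. The coefficients below are illustrative — the abstract does not give the paper's tuning:

    ```python
    from collections import deque

    def first_order_lag(samples, a=0.2):
        """First-order lag filter: y[n] = a*x[n] + (1-a)*y[n-1].
        Causal -- it never needs future data, so it suits real-time use."""
        y, out = samples[0], []
        for x in samples:
            y = a * x + (1 - a) * y
            out.append(y)
        return out

    def weighted_recursive_average(samples, weights=(0.4, 0.3, 0.2, 0.1)):
        """Weighted recursive average over the most recent samples;
        the newest sample receives the largest weight."""
        buf, out = deque(maxlen=len(weights)), []
        for x in samples:
            buf.appendleft(x)
            w = weights[:len(buf)]
            out.append(sum(wi * xi for wi, xi in zip(w, buf)) / sum(w))
        return out
    ```

    Both pass a constant signal through unchanged and reduce the variance of additive noise; the trade-off is lag, which the weighted window bounds to its length while the lag filter's memory decays geometrically.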

  4. The effect of amorphous selenium detector thickness on dual-energy digital breast imaging

    PubMed Central

    Hu, Yue-Houng; Zhao, Wei

    2014-01-01

    Purpose: Contrast enhanced (CE) imaging techniques for both planar digital mammography (DM) and three-dimensional (3D) digital breast tomosynthesis (DBT) applications require x-ray photon energies higher than the k-edge of iodine (33.2 keV). As a result, x-ray tube potentials much higher (>40 kVp) than those typical for screening mammography must be utilized. Amorphous selenium (a-Se) based direct conversion flat-panel imagers (FPI) have been widely used in DM and DBT imaging systems. The a-Se layer is typically 200 μm thick, with quantum detection efficiency (QDE) >87% for x-ray energies below 26 keV. However, QDE decreases substantially above this energy. To improve the object detectability of either CE-DM or CE-DBT, it may be advantageous to increase the thickness (dSe) of the a-Se layer. Increasing dSe will improve the detective quantum efficiency (DQE) at the higher energies used in CE imaging. However, because most DBT systems are designed with partially isocentric geometries, where the gantry moves about a stationary detector, the oblique entry of x-rays will introduce additional blur to the system. The present investigation quantifies the effect of a-Se thickness on imaging performance for both CE-DM and CE-DBT, discussing the competing effects of improved photon absorption and blurring from oblique entry of x-rays. Methods: In this paper, a cascaded linear system model (CLSM) was used to investigate the effect of dSe on the imaging performance (i.e., MTF, NPS, and DQE) of FPI in CE-DM and CE-DBT. The results from the model are used to calculate the ideal observer signal-to-noise ratio, d′, which is used as a figure-of-merit to determine the total effect of increasing dSe for CE-DM and CE-DBT. Results: The results of the CLSM show that increasing dSe causes a substantial increase in QDE at the high energies used in CE-DM. However, at the oblique projection angles used in DBT, the increased length of penetration through a-Se introduces additional image blur. The reduced MTF and DQE at high spatial frequencies lead to reduced two-dimensional d′. These losses in projection image resolution may subsequently result in a decrease in the 3D d′, but the degree of the decrease depends largely on the DBT reconstruction algorithm. For a filtered backprojection (FBP) algorithm with spectral apodization and slice-thickness filters, which dominate the blur for reconstructed images at oblique angles, the effect of oblique entry of x-rays on 3D d′ is minimal. Thus, increasing dSe results in an improvement in d′ for both CE-DM and CE-DBT with typical FBP reconstruction parameters. Conclusions: Increased dSe improves CE breast imaging performance by increasing the QDE of detectors at higher energies, e.g., 49 kVp. Although there is additional blur in the oblique-angled projections of a DBT scan, the overall 3D d′ for DBT is not degraded, because the dominant source of blur at these angles results from the reconstruction filters of the employed FBP algorithm. PMID:25370637

  5. The effect of amorphous selenium detector thickness on dual-energy digital breast imaging.

    PubMed

    Hu, Yue-Houng; Zhao, Wei

    2014-11-01

    Contrast enhanced (CE) imaging techniques for both planar digital mammography (DM) and three-dimensional (3D) digital breast tomosynthesis (DBT) applications require x-ray photon energies higher than the k-edge of iodine (33.2 keV). As a result, x-ray tube potentials much higher (>40 kVp) than those typical for screening mammography must be utilized. Amorphous selenium (a-Se) based direct conversion flat-panel imagers (FPI) have been widely used in DM and DBT imaging systems. The a-Se layer is typically 200 μm thick, with quantum detection efficiency (QDE) >87% for x-ray energies below 26 keV. However, QDE decreases substantially above this energy. To improve the object detectability of either CE-DM or CE-DBT, it may be advantageous to increase the thickness (dSe) of the a-Se layer. Increasing dSe will improve the detective quantum efficiency (DQE) at the higher energies used in CE imaging. However, because most DBT systems are designed with partially isocentric geometries, where the gantry moves about a stationary detector, the oblique entry of x-rays will introduce additional blur to the system. The present investigation quantifies the effect of a-Se thickness on imaging performance for both CE-DM and CE-DBT, discussing the competing effects of improved photon absorption and blurring from oblique entry of x-rays. In this paper, a cascaded linear system model (CLSM) was used to investigate the effect of dSe on the imaging performance (i.e., MTF, NPS, and DQE) of FPI in CE-DM and CE-DBT. The results from the model are used to calculate the ideal observer signal-to-noise ratio, d', which is used as a figure-of-merit to determine the total effect of increasing dSe for CE-DM and CE-DBT. The results of the CLSM show that increasing dSe causes a substantial increase in QDE at the high energies used in CE-DM. However, at the oblique projection angles used in DBT, the increased length of penetration through a-Se introduces additional image blur. The reduced MTF and DQE at high spatial frequencies lead to reduced two-dimensional d'. These losses in projection image resolution may subsequently result in a decrease in the 3D d', but the degree of the decrease depends largely on the DBT reconstruction algorithm. For a filtered backprojection (FBP) algorithm with spectral apodization and slice-thickness filters, which dominate the blur for reconstructed images at oblique angles, the effect of oblique entry of x-rays on 3D d' is minimal. Thus, increasing dSe results in an improvement in d' for both CE-DM and CE-DBT with typical FBP reconstruction parameters. Increased dSe improves CE breast imaging performance by increasing the QDE of detectors at higher energies, e.g., 49 kVp. Although there is additional blur in the oblique-angled projections of a DBT scan, the overall 3D d' for DBT is not degraded, because the dominant source of blur at these angles results from the reconstruction filters of the employed FBP algorithm.
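    As a rough illustration of the figure-of-merit used in these two records, the Fourier-domain ideal-observer SNR can be sketched as a discrete sum over spatial frequency. This is a simplified generic form, not the paper's CLSM; the task spectrum and the exact prefactors are invented for the example:

    ```python
    import numpy as np

    def dprime(task_spectrum, mtf, nps, df):
        """Fourier-domain ideal-observer detectability:
        d'^2 = sum over f of |W(f)|^2 * MTF(f)^2 / NPS(f) * df,
        where W is the task (signal) spectrum."""
        return np.sqrt(np.sum(task_spectrum ** 2 * mtf ** 2 / nps) * df)
    ```

    The sketch makes the trade-off in the abstract concrete: raising QDE lowers the relative noise power spectrum and raises d', while oblique-entry blur lowers the MTF at high frequencies and pushes d' back down.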

  6. Restoration of Static JPEG Images and RGB Video Frames by Means of Nonlinear Filtering in Conditions of Gaussian and Non-Gaussian Noise

    NASA Astrophysics Data System (ADS)

    Sokolov, R. I.; Abdullin, R. R.

    2017-11-01

    The use of nonlinear Markov process filtering makes it possible to restore both video-stream frames and static photos at the preprocessing stage. The present paper reports a comparison of filtering quality for these two image types using a special algorithm under Gaussian and non-Gaussian noise. Examples of filter operation at different values of signal-to-noise ratio are presented. A comparative analysis has been performed, and the kind of noise that is filtered best has been identified. It is shown that the quality of the developed algorithm is much better than that of an adaptive RGB-signal filter given the same a priori information about the signal. The algorithm also has an advantage over the median filter when filtering both fluctuation and impulse noise.

  7. Parallel Processing of Broad-Band PPM Signals

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement

    2010-01-01

    A parallel-processing algorithm and a hardware architecture to implement the algorithm have been devised for timeslot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in the cases of some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively low-speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters, the most critical one, with regard to processing very broadband signals with CMOS hardware, being the number of parallel streams (equivalently, the rate-reduction parameter).
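    The sub-sampling step that creates the parallel narrower-band streams can be illustrated in a few lines. This is a bare sketch of the de-interleave/interleave bookkeeping only — the time-varying filter banks described in the record are not modeled, and the function names are invented:

    ```python
    import numpy as np

    def polyphase_split(x, m):
        """Split a broadband sample stream into m parallel sub-streams by
        sub-sampling: stream k holds x[k], x[k+m], x[k+2m], ...
        Each stream then runs at 1/m of the input sample rate."""
        n = (len(x) // m) * m          # drop any ragged tail
        return np.asarray(x[:n]).reshape(-1, m).T

    def polyphase_merge(streams):
        """Inverse operation: interleave the m sub-streams back into the
        original sample order."""
        return streams.T.ravel()
    ```

    Choosing m (the rate-reduction parameter) large enough is exactly what lets each sub-stream be handled by slower CMOS logic.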

  8. Accuracy of the Estimated Core Temperature (ECTemp) Algorithm in Estimating Circadian Rhythm Indicators

    DTIC Science & Technology

    2017-04-12

    …measurement of CT outside of stringent laboratory environments. This study evaluated ECTemp™, a heart-rate-based extended Kalman filter CT… based CT-estimation algorithms [7, 13, 14]. One notable example is ECTemp™, which utilizes an extended Kalman filter to estimate CT from… 3. The extended Kalman filter mapping-function variance coefficient (Ct) was computed using the following equation: = −9.1428 × …

  9. A Stabilized Sparse-Matrix U-D Square-Root Implementation of a Large-State Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Boggs, D.; Ghil, M.; Keppenne, C.

    1995-01-01

    The full nonlinear Kalman filter sequential algorithm is, in theory, well-suited to the four-dimensional data assimilation problem in large-scale atmospheric and oceanic models. However, it was later discovered that this algorithm can be very sensitive to computer roundoff, so that results may cease to be meaningful as time advances. Implementations of a modified Kalman filter are given.
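    A standard remedy for the roundoff sensitivity mentioned here, used alongside U-D square-root factorization, is the Joseph stabilized form of the covariance update. A minimal sketch (generic, not the paper's large-state implementation):

    ```python
    import numpy as np

    def joseph_update(x, P, z, H, R):
        """Kalman measurement update in Joseph stabilized form:
        P+ = (I - K H) P (I - K H)' + K R K'.
        Unlike the plain (I - K H) P form, this stays symmetric positive
        semidefinite under roundoff for any gain K."""
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        I_KH = np.eye(len(x)) - K @ H
        x_new = x + K @ (z - H @ x)
        P_new = I_KH @ P @ I_KH.T + K @ R @ K.T
        return x_new, P_new
    ```

    The extra matrix products cost more flops than the short form, which is why factorized (U-D, square-root) filters are preferred when the state is large — but the Joseph form is the simplest fix to show.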

  10. The Improved Locating Algorithm of Particle Filter Based on ROS Robot

    NASA Astrophysics Data System (ADS)

    Fang, Xun; Fu, Xiaoyang; Sun, Ming

    2018-03-01

    This paper analyzes the basic theory and primary algorithms of a real-time locating system and SLAM technology based on a ROS robot. It proposes an improved particle filter locating algorithm that effectively reduces the time needed to match laser radar scans against the map; in addition, ultra-wideband technology directly accelerates the global efficiency of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, re-sampling has been reduced by about 5/6, which largely eliminates the corresponding matching work in the robot's algorithm.
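    Resampling is the usual bottleneck that particle-filter variants like this attack. A common low-variance scheme — illustrative only, not the paper's exact method — is systematic resampling:

    ```python
    import numpy as np

    def systematic_resample(weights, rng):
        """Systematic resampling: one uniform draw, then a comb of N evenly
        spaced pointers swept across the weight CDF. O(N) and lower-variance
        than drawing N independent uniforms."""
        n = len(weights)
        positions = (rng.random() + np.arange(n)) / n
        cdf = np.cumsum(weights)
        cdf[-1] = 1.0                         # guard against roundoff
        # side='right': a pointer at p selects the first particle whose
        # cumulative weight strictly exceeds p (zero-weight particles skipped)
        return np.searchsorted(cdf, positions, side='right')
    ```

    The returned indices say which particles to duplicate; reducing how often this step runs (e.g., only when the effective sample size drops) is one way a filter can cut resampling work by a large fraction.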

  11. Floral volatile alleles can contribute to pollinator-mediated reproductive isolation in monkeyflowers (Mimulus)

    PubMed Central

    Byers, Kelsey J.R.P.; Vela, James P.; Peng, Foen; Riffell, Jeffrey A.; Bradshaw, H.D.

    2014-01-01

    Summary Pollinator-mediated reproductive isolation is a major factor in driving the diversification of flowering plants. Studies of floral traits involved in reproductive isolation have focused nearly exclusively on visual signals, such as flower color. The role of less obvious signals, such as floral scent, has been studied only recently. In particular, the genetics of floral volatiles involved in mediating differential pollinator visitation remains unknown. The bumblebee-pollinated Mimulus lewisii and hummingbird-pollinated M. cardinalis are a model system for studying reproductive isolation via pollinator preference. We have shown that these two species differ in three floral terpenoid volatiles - D-limonene, β-myrcene, and E-β-ocimene - that are attractive to bumblebee pollinators. By genetic mapping and in vitro enzyme activity analysis we demonstrate that these interspecific differences are consistent with allelic variation at two loci – LIMONENE-MYRCENE SYNTHASE (LMS) and OCIMENE SYNTHASE (OS). M. lewisii LMS (MlLMS) and OS (MlOS) are expressed most strongly in floral tissue in the last stages of floral development. M. cardinalis LMS (McLMS) is weakly expressed and has a nonsense mutation in exon 3. M. cardinalis OS (McOS) is expressed similarly to MlOS, but the encoded McOS enzyme produces no E-β-ocimene. Recapitulating the M. cardinalis phenotype by reducing the expression of MlLMS by RNAi in transgenic M. lewisii produces no behavioral difference in pollinating bumblebees; however, reducing MlOS expression produces a 6% decrease in visitation. Allelic variation at the OCIMENE SYNTHASE locus likely contributes to differential pollinator visitation, and thus promotes reproductive isolation between M. lewisii and M. cardinalis. OCIMENE SYNTHASE joins a growing list of “speciation genes” (“barrier genes”) in flowering plants. PMID:25319242

  12. Gender Difference in Academic Planning Activity among Medical Students

    PubMed Central

    Nguyen, Huy Van; Giang, Thao Thach

    2013-01-01

    Background In Vietnam, as doctor of medicine is socially considered a special career, both men and women who are enrolled in medical universities often study topics of medicine seriously. However, as culturally expected, women often perform better than men. Because of this, teaching leadership and management skill (LMS) to develop academic planning activity (APA) for female medical students would also be expected to be more effective than male counterparts. This research aimed to compare by gender the effect of teaching LMS on increasing APA, using propensity score matching (PSM). Methods In a cross-sectional survey utilizing a self-reported structured questionnaire on a systematic random sample of 421 male and female medical students in Hanoi Medical University, this study adopted first regression techniques to construct a fit model, then PSM to create a matched control group in order to allow for evaluating the effect of LMS education. Results There were several interesting gender differences. First, while for females LMS education had both direct and indirect effects on APA, it had only direct effect on males’ APA. Second, after PSM to adjust for the possible confounders to balance statistically two groups – with and without LMS education, there is statistically a significant difference in APA between male and female students, making a net difference of 11% (p<.01), equivalent to 173 students. The difference in APA between exposed and matched control group in males and females was 9% and 20%, respectively. These estimates of 9.0 and 20.0 percentage point increase can be translated into the practice of APA by 142 males and 315 females, respectively, in the population. These numbers of APA among male and female students can be explained by LMS education. Conclusions Gender appears to be a factor explaining in part academic planning activity. PMID:23418467

  13. Multimodal ultrasonographic assessment of leiomyosarcoma of the femoral vein in a patient misdiagnosed as having deep vein thrombosis: A case report.

    PubMed

    Zhang, Mei; Yan, Feng; Huang, Bin; Wu, Zhoupeng; Wen, Xiaorong

    2017-11-01

    Primary leiomyosarcoma (LMS) of the vein is a rare tumor that arises from the smooth muscle cells of the vessel wall and has an extremely poor prognosis. This tumor can occur in vessels such as the inferior vena cava, great saphenous vein, femoral vein, iliac vein, popliteal vein, and renal vein; the inferior vena cava is the most common site. LMS of the femoral vein can result in edema and pain in the lower extremity; therefore, it is not easily differentiated from deep vein thrombosis (DVT). Moreover, virtually no studies have described the ultrasonographic features of LMS of the vein in detail. We present the case of a 55-year-old woman with LMS of the left femoral vein who was misdiagnosed as having DVT on initial ultrasonographic examination. The patient began to experience edema and pain in her left leg seven months previously. She was diagnosed as having DVT on initial ultrasonographic examination, but the DVT treatment that she had received for 7 months failed to improve the status of her left lower limb. She subsequently underwent re-examination by means of a multimodal ultrasonographic imaging approach (regular B-mode imaging, color Doppler imaging, pulsed-wave Doppler imaging, and contrast-enhanced ultrasonography), which confirmed a diagnosis of LMS. The patient was treated successfully with surgery. This case demonstrates that the use of multiple ultrasonographic imaging techniques can be helpful in diagnosing LMS accurately. Detection of vasculature in a dilated vein filled with a heterogeneous hypoechoic substance on ultrasonography is a sign of a tumor. The pitfall of misdiagnosing this tumor as DVT is a useful reminder.

  14. Multimodal ultrasonographic assessment of leiomyosarcoma of the femoral vein in a patient misdiagnosed as having deep vein thrombosis

    PubMed Central

    Zhang, Mei; Yan, Feng; Huang, Bin; Wu, Zhoupeng; Wen, Xiaorong

    2017-01-01

    Abstract Rationale: Primary leiomyosarcoma (LMS) of the vein is a rare tumor that arises from the smooth muscle cells of the vessel wall and has an extremely poor prognosis. This tumor can occur in vessels such as the inferior vena cava, great saphenous vein, femoral vein, iliac vein, popliteal vein, and renal vein; the inferior vena cava is the most common site. LMS of the femoral vein can result in edema and pain in the lower extremity; therefore, it is not easily differentiated from deep vein thrombosis (DVT). Moreover, virtually no studies have described the ultrasonographic features of LMS of the vein in detail. Patient concerns: We present the case of a 55-year-old woman with LMS of the left femoral vein who was misdiagnosed as having DVT on initial ultrasonographic examination. The patient began to experience edema and pain in her left leg seven months previously. She was diagnosed as having DVT on initial ultrasonographic examination, but the DVT treatment that she had received for 7 months failed to improve the status of her left lower limb. Diagnoses: She subsequently underwent re-examination by means of a multimodal ultrasonographic imaging approach (regular B-mode imaging, color Doppler imaging, pulsed-wave Doppler imaging, and contrast-enhanced ultrasonography), which confirmed a diagnosis of LMS. Interventions: The patient was treated successfully with surgery. Outcomes: This case demonstrates that the use of multiple ultrasonographic imaging techniques can be helpful in diagnosing LMS accurately. Detection of vasculature in a dilated vein filled with a heterogeneous hypoechoic substance on ultrasonography is a sign of a tumor. Lessons: The pitfall of misdiagnosing this tumor as DVT is a useful reminder. PMID:29145269

  15. Impact of Starting Point and Bicortical Purchase of C1 Lateral Mass Screws on Atlantoaxial Fusion: Meta-Analysis and Review of the Literature.

    PubMed

    Elliott, Robert E; Tanweer, Omar; Smith, Michael L; Frempong-Boadu, Anthony

    2015-08-01

    Structured review of literature and application of meta-analysis statistical techniques. Review published series describing clinical and radiographic outcomes of patients treated with C1 lateral mass screws (C1LMS), specifically analyzing the impact of starting point and bicortical purchase on successful atlantoaxial arthrodesis. Biomechanical studies suggest posterior arch screws and C1LMS with bicortical purchase are stronger than screws placed within the center of the lateral mass or those with unicortical purchase. Online databases were searched for English-language articles between 1994 and 2012 describing posterior atlantal instrumentation with C1LMS. Thirty-four studies describing 1247 patients having posterior atlantoaxial fusion with C1LMS met inclusion criteria. All studies provided class III evidence. Arthrodesis was quite successful regardless of technique (99.0% overall). Meta-analysis and multivariate regression analyses showed that neither posterior arch starting point nor bicortical screw purchase translated into a higher rate of successful arthrodesis. There were no complications from bicortical screw purchase. The Goel-Harms technique is a very safe and successful technique for achieving atlantoaxial fusion, regardless of minor variations in C1LMS technique. Although biomechanical studies suggest markedly increased rigidity of bicortical and posterior arch C1LMS, the significance of these findings may be minimal in the clinical setting of atlantoaxial fixation and fusion with modern techniques. The decision to use either technique must be made after careful review of the preoperative multiplanar computed tomography imaging, assessment of the unique anatomy of each patient, and the demands of the clinical scenario such as bone quality.

  16. Fuzzy-Estimation Control for Improvement Microwave Connection for Iraq Electrical Grid

    NASA Astrophysics Data System (ADS)

    Hoomod, Haider K.; Radi, Mohammed

    2018-05-01

    The demand for broadband wireless services (Internet, radio broadcast, TV, etc.) is increasing day by day, and the communication channels that carry them are imperfect, so it is necessary to exploit the usable part of the available bandwidth well. In this paper, we propose an estimation technique that estimates channel availability at the current moment and the next one, so that the error in the bandwidth channel is known and the transfer of data through the channel can be controlled. The proposed estimator is based on a combination of the least mean square (LMS) algorithm, a standard Kalman filter, and a modified Kalman filter. The estimated channel error is used as a control parameter in fuzzy rules to adjust the rate and size of data sent through the network channel, and to rearrange the priorities of the buffered data (workstation control parameters, texts, phone calls, images, and camera video) in the worst cases of channel error. The proposed system is designed to manage data communications over the channels connecting the Iraqi electrical grid stations. The results show that the modified Kalman filter gives the best results in time and noise estimation (0.1109 for 5% noise estimation to 0.3211 for 90% noise estimation), and the packet loss rate is reduced by ratios from 35% to 385%.
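    The LMS component of such an estimator can be sketched as a generic adaptive FIR identifier whose running error would serve as the fuzzy controller's input. The filter length, step size, and function name below are invented for the example:

    ```python
    import numpy as np

    def lms_estimate(x, d, n_taps=4, mu=0.05):
        """Plain (non-normalized) LMS: adapt FIR weights w so that w . x
        tracks the desired signal d; returns the running error sequence
        and the final weights."""
        w = np.zeros(n_taps)
        buf = np.zeros(n_taps)                # most recent input samples
        errs = []
        for xn, dn in zip(x, d):
            buf = np.roll(buf, 1)
            buf[0] = xn
            e = dn - w @ buf                  # estimation error
            w += mu * e * buf                 # LMS weight update
            errs.append(e)
        return np.array(errs), w
    ```

    When the channel is well modeled by a short FIR response, the error decays toward zero as the weights converge; a persistently large error is precisely the signal a rule-based controller can use to throttle rate or reprioritize buffered traffic.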

  17. Generic Kalman Filter Software

    NASA Technical Reports Server (NTRS)

    Lisano, Michael E., II; Crues, Edwin Z.

    2005-01-01

    The Generic Kalman Filter (GKF) software provides a standard basis for the development of application-specific Kalman-filter programs. Historically, Kalman filters have been implemented by customized programs that must be written, coded, and debugged anew for each unique application, then tested and tuned with simulated or actual measurement data. Total development times for typical Kalman-filter application programs have ranged from weeks to months. The GKF software can simplify the development process and reduce the development time by eliminating the need to re-create the fundamental implementation of the Kalman filter for each new application. The GKF software is written in the ANSI C programming language. It contains a generic Kalman-filter-development directory that, in turn, contains a code for a generic Kalman filter function; more specifically, it contains a generically designed and generically coded implementation of linear, linearized, and extended Kalman filtering algorithms, including algorithms for state- and covariance-update and -propagation functions. The mathematical theory that underlies the algorithms is well known and has been reported extensively in the open technical literature. Also contained in the directory are a header file that defines generic Kalman-filter data structures and prototype functions and template versions of application-specific subfunction and calling navigation/estimation routine code and headers. Once the user has provided a calling routine and the required application-specific subfunctions, the application-specific Kalman-filter software can be compiled and executed immediately. During execution, the generic Kalman-filter function is called from a higher-level navigation or estimation routine that preprocesses measurement data and post-processes output data. 
The generic Kalman-filter function uses the aforementioned data structures and five implementation- specific subfunctions, which have been developed by the user on the basis of the aforementioned templates. The GKF software can be used to develop many different types of unfactorized Kalman filters. A developer can choose to implement either a linearized or an extended Kalman filter algorithm, without having to modify the GKF software. Control dynamics can be taken into account or neglected in the filter-dynamics model. Filter programs developed by use of the GKF software can be made to propagate equations of motion for linear or nonlinear dynamical systems that are deterministic or stochastic. In addition, filter programs can be made to operate in user-selectable "covariance analysis" and "propagation-only" modes that are useful in design and development stages.
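    The user-selectable "propagation-only" mode described above can be sketched for a linear filter. This is an illustration of the mode switch only, written in Python rather than the GKF's ANSI C, with all names invented:

    ```python
    import numpy as np

    def run_filter(x, P, F, Q, H, R, zs, mode="filter"):
        """Generic linear Kalman filter driver with a GKF-style mode switch:
        'filter' does predict + measurement update each step, while
        'propagation-only' skips the updates -- useful for covariance
        analysis during design, before real measurements exist."""
        for z in zs:
            x, P = F @ x, F @ P @ F.T + Q              # propagate
            if mode == "filter" and z is not None:     # measurement update
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)
                x = x + K @ (z - H @ x)
                P = (np.eye(len(x)) - K @ H) @ P
        return x, P
    ```

    Running the same model in both modes shows why the switch is useful: the propagation-only covariance grows without bound under process noise, while the filtered covariance settles, quantifying what the measurements buy.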

  18. Multimodal medical image fusion by combining gradient minimization smoothing filter and non-subsampled directional filter bank

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Mei, Wenbo; Du, Huiqian; Wang, Zexian

    2018-04-01

    A new algorithm for medical image fusion is proposed in this paper, which combines the gradient minimization smoothing filter (GMSF) with the non-subsampled directional filter bank (NSDFB). To preserve more detail information, a multi-scale edge-preserving decomposition framework (MEDF) is used to decompose an image into a base image and a series of detail images. For the fusion of base images, a local Gaussian membership function is applied to construct the fusion weighting factor. For the fusion of detail images, the NSDFB is applied to decompose each detail image into multiple directional sub-images, which are then fused by a pulse coupled neural network (PCNN). The experimental results demonstrate that the proposed algorithm is superior to the compared algorithms in both visual effect and objective assessment.
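The abstract does not specify how the Gaussian membership values become fusion weights; one plausible reading is that each pixel's membership (relative to local image statistics) is normalized across the two source images to form a convex weighting. The sketch below illustrates that reading in Python with NumPy; for brevity it uses global rather than local statistics, and all names are hypothetical, not the paper's actual method.

```python
import numpy as np

def gaussian_membership(img, eps=1e-6):
    """Gaussian membership of each pixel relative to the image mean;
    pixels near the mean get membership close to 1."""
    mu, sigma = img.mean(), img.std() + eps
    return np.exp(-((img - mu) ** 2) / (2.0 * sigma ** 2))

def fuse_base(base_a, base_b):
    """Fuse two base images using membership-derived convex weights."""
    m_a = gaussian_membership(base_a)
    m_b = gaussian_membership(base_b)
    w = m_a / (m_a + m_b + 1e-6)      # per-pixel weight in (0, 1)
    return w * base_a + (1.0 - w) * base_b
```

Because the weights lie in (0, 1), each fused pixel is a convex combination of the corresponding source pixels, which keeps the fused base image within the intensity range of its inputs.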

  19. X-ray metal film filters at cryogenic temperatures

    NASA Technical Reports Server (NTRS)

    Keski-Kuha, Ritva A. M.

    1989-01-01

    Thin aluminum foil filters have been evaluated at cryogenic temperatures. The results of the test program, including cold cycling and vibration testing, indicate that these filters perform reliably at cryogenic temperatures. They can provide the high X-ray transmittance and high background rejection required for the blocking filters being developed for the X-Ray Spectrometer, one of the focal-plane instruments on the Advanced X-Ray Astrophysics Facility.

  20. The influence of digital filter type, amplitude normalisation method, and co-contraction algorithm on clinically relevant surface electromyography data during clinical movement assessments.

    PubMed

    Devaprakash, Daniel; Weir, Gillian J; Dunne, James J; Alderson, Jacqueline A; Donnelly, Cyril J

    2016-12-01

    There is a large and growing body of surface electromyography (sEMG) research that uses laboratory-specific signal processing procedures (i.e., digital filter types and amplitude normalisation protocols) and data analysis methods (i.e., co-contraction algorithms) to acquire practically meaningful information from these data. As a result, the ability to compare sEMG results between studies is, and continues to be, challenging. The aim of this study was to determine whether digital filter type, amplitude normalisation method, and co-contraction algorithm could influence the practical or clinical interpretation of processed sEMG data. Sixteen elite female athletes were recruited. During data collection, sEMG data were recorded from nine lower-limb muscles while the athletes completed a series of calibration and clinical movement assessment trials (running and sidestepping). Three analyses were conducted: (1) signal processing with two different digital filter types (Butterworth or critically damped), (2) three amplitude normalisation methods, and (3) three co-contraction ratio algorithms. Results showed that the choice of digital filter did not influence the clinical interpretation of sEMG; however, the choice of amplitude normalisation method and co-contraction algorithm did influence the clinical interpretation of the running and sidestepping tasks. Care is recommended when choosing an amplitude normalisation method and co-contraction algorithm if researchers/clinicians are interested in comparing sEMG data between studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
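The abstract does not name the three co-contraction ratio algorithms it compares. For illustration, the sketch below implements one widely used formulation, the sample-wise ratio of the lower to the higher muscle-activation envelope averaged over a trial; the function name is hypothetical and this is only one of several definitions in the literature, which is exactly why the choice matters for between-study comparison.

```python
import numpy as np

def cocontraction_ratio(agonist, antagonist):
    """Mean sample-wise co-contraction ratio of two processed sEMG
    envelopes: lower envelope over higher envelope, averaged over
    the trial. Returns a value in [0, 1]; 1 means equal activation."""
    agonist = np.asarray(agonist, dtype=float)
    antagonist = np.asarray(antagonist, dtype=float)
    lower = np.minimum(agonist, antagonist)
    higher = np.maximum(agonist, antagonist) + 1e-12  # avoid divide-by-zero
    return float(np.mean(lower / higher))
```

Because the ratio is computed on processed envelopes, its value also depends on the upstream filtering and amplitude normalisation choices, which is the interaction the study examines.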
