NASA Astrophysics Data System (ADS)
Bindiya T., S.; Elias, Elizabeth
2015-01-01
In this paper, multiplier-less near-perfect reconstruction tree-structured filter banks are proposed. Filters with sharp transition width are preferred in filter banks in order to reduce the aliasing between adjacent channels. When sharp transition width filters are designed as conventional finite impulse response filters, the order of the filters will become very high leading to increased complexity. The frequency response masking (FRM) method is known to result in linear-phase sharp transition width filters with low complexity. It is found that the proposed design method, which is based on FRM, gives better results compared to the earlier reported results, in terms of the number of multipliers when sharp transition width filter banks are needed. To further reduce the complexity and power consumption, the tree-structured filter bank is made totally multiplier-less by converting the continuous filter bank coefficients to finite precision coefficients in the signed power of two space. This may lead to performance degradation and calls for the use of a suitable optimisation technique. In this paper, gravitational search algorithm is proposed to be used in the design of the multiplier-less tree-structured uniform as well as non-uniform filter banks. This design method results in uniform and non-uniform filter banks which are simple, alias-free, linear phase and multiplier-less and have sharp transition width.
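A minimal sketch (not the authors' code) of the coefficient-conversion step described above: each continuous filter coefficient is greedily approximated by a small number of signed power-of-two terms, so that every multiplication can be replaced by shifts and adds. The term count, minimum exponent, and example coefficients are assumed values for illustration only; the paper additionally optimises the quantised coefficients with the gravitational search algorithm, which is not shown here.

```python
import numpy as np

def to_signed_powers_of_two(x, n_terms=3, min_exp=-8):
    """Greedily approximate x by a sum of at most n_terms signed powers of two."""
    approx, residual = 0.0, float(x)
    for _ in range(n_terms):
        if residual == 0.0:
            break
        exp = int(np.clip(np.round(np.log2(abs(residual))), min_exp, 0))
        term = np.sign(residual) * 2.0 ** exp
        approx += term
        residual -= term
    return approx

h = np.array([0.0213, -0.1040, 0.2811, 0.6020])        # example continuous coefficients
h_spt = np.array([to_signed_powers_of_two(c) for c in h])
print(h_spt)    # each entry is now realisable with shifts and adds only
```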
Design Techniques for Uniform-DFT, Linear Phase Filter Banks
NASA Technical Reports Server (NTRS)
Sun, Honglin; DeLeon, Phillip
1999-01-01
Uniform-DFT filter banks are an important class of filter banks and their theory is well known. One notable characteristic is their very efficient implementation when using polyphase filters and the FFT. Separately, linear phase filter banks, i.e. filter banks in which the analysis filters have a linear phase, are also an important class of filter banks and are desired in many applications. Unfortunately, it has been proved that one cannot design critically-sampled, uniform-DFT, linear phase filter banks and achieve perfect reconstruction. In this paper, we present a least-squares solution to this problem and in addition prove that oversampled, uniform-DFT, linear phase filter banks (which are also useful in many applications) can be constructed for perfect reconstruction. Design examples are included to illustrate the methods.
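A short sketch, under assumed parameters, of what a uniform-DFT analysis bank computes: channel k filters the input with the prototype modulated to centre frequency 2*pi*k/M and then decimates by M. The polyphase-plus-FFT structure mentioned in the abstract produces the same outputs far more efficiently; this direct form is only meant to make the definition concrete.

```python
import numpy as np

def uniform_dft_analysis(x, h, M):
    """Channel k filters x with prototype h modulated to 2*pi*k/M, then decimates by M."""
    n = np.arange(len(h))
    subbands = []
    for k in range(M):
        hk = h * np.exp(2j * np.pi * k * n / M)       # modulated analysis filter
        subbands.append(np.convolve(x, hk)[::M])      # filter and decimate by M
    return np.array(subbands)                         # one row per channel

# usage: an 8-channel bank with a crude length-64 lowpass prototype
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
h = np.sinc((np.arange(64) - 31.5) / 8) / 8
y = uniform_dft_analysis(x, h, 8)
```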
Extracting tissue deformation using Gabor filter banks
NASA Astrophysics Data System (ADS)
Montillo, Albert; Metaxas, Dimitris; Axel, Leon
2004-04-01
This paper presents a new approach for accurate extraction of tissue deformation imaged with tagged MR. Our method, based on banks of Gabor filters, adjusts (1) the aspect and (2) orientation of the filter's envelope and adjusts (3) the radial frequency and (4) angle of the filter's sinusoidal grating to extract information about the deformation of tissue. The method accurately extracts tag line spacing, orientation, displacement and effective contrast. Existing, non-adaptive methods often fail to recover useful displacement information in the proximity of tissue boundaries while our method works in the proximity of the boundaries. We also present an interpolation method to recover all tag information at a finer resolution than the filter bank parameters. Results are shown on simulated images of translating and contracting tissue.
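A hedged sketch of a single 2-D Gabor filter parameterised the way the abstract describes: envelope aspect and orientation, plus radial frequency and angle of the sinusoidal grating. Parameter names and values are mine, not the authors'; a bank is obtained by sweeping these four parameters.

```python
import numpy as np

def gabor_kernel(size, sigma, aspect, env_theta, radial_freq, grating_theta):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates into the envelope frame
    xr = x * np.cos(env_theta) + y * np.sin(env_theta)
    yr = -x * np.sin(env_theta) + y * np.cos(env_theta)
    envelope = np.exp(-(xr**2 + (aspect * yr) ** 2) / (2 * sigma**2))
    # complex sinusoidal grating at the given radial frequency and angle
    u = radial_freq * np.cos(grating_theta)
    v = radial_freq * np.sin(grating_theta)
    grating = np.exp(2j * np.pi * (u * x + v * y))
    return envelope * grating

g = gabor_kernel(size=31, sigma=5.0, aspect=1.5, env_theta=0.3,
                 radial_freq=0.12, grating_theta=0.3)
```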
Zhang, Lijia; Liu, Bo; Xin, Xiangjun
2015-06-15
A secure optical generalized filter bank multi-carrier (GFBMC) system with carrier-less amplitude-phase (CAP) modulation is proposed in this Letter. The security is realized through a cubic constellation-masking method. A large key space and more flexible masking can be obtained by aligning the cubic constellation masking with the filter bank. An experiment on an 18 Gb/s encrypted GFBMC/CAP system with 25-km single-mode fiber transmission is performed to demonstrate the feasibility of the proposed method.
Nagare, Mukund B; Patil, Bhushan D; Holambe, Raghunath S
2017-02-01
B-Mode ultrasound images are degraded by inherent noise called speckle, which has a considerable impact on image quality. This noise reduces the accuracy of image analysis and interpretation. Therefore, reduction of speckle noise is an essential task which improves the accuracy of clinical diagnostics. In this paper, a multi-directional perfect-reconstruction (PR) filter bank is proposed based on a 2-D eigenfilter approach. The proposed method is used for the design of two-dimensional (2-D) two-channel linear-phase FIR perfect-reconstruction filter banks. In this method, fan-shaped, diamond-shaped and checkerboard-shaped filters are designed. The quadratic measure of the error function between the passband and stopband of the filter is used as the objective function. First, the low-pass analysis filter is designed, and then the PR condition is expressed as a set of linear constraints on the corresponding synthesis low-pass filter. Subsequently, the corresponding synthesis filter is designed using the eigenfilter design method with linear constraints. The newly designed 2-D filters are used in a translation-invariant pyramidal directional filter bank (TIPDFB) for reduction of speckle noise in ultrasound images. The proposed 2-D filters give better symmetry, regularity and frequency selectivity in comparison to existing design methods. The proposed method is validated on synthetic and real ultrasound data, which ensures improvement in the quality of ultrasound images and efficiently suppresses the speckle noise compared to existing methods.
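A compact 1-D illustration, not the authors' 2-D formulation, of the eigenfilter idea behind the design: the quadratic pass/stopband error b'Qb is minimised over unit-norm coefficient vectors, so the optimum is the eigenvector of Q associated with the smallest eigenvalue. Filter order, band edges and the weighting are assumed example values.

```python
import numpy as np

M, wp, ws, alpha = 10, 0.4 * np.pi, 0.6 * np.pi, 0.5    # type-I filter of length 2M+1
n = np.arange(M + 1)
w_pass = np.linspace(0, wp, 400);  dwp = w_pass[1] - w_pass[0]
w_stop = np.linspace(ws, np.pi, 400);  dws = w_stop[1] - w_stop[0]

Dp = 1.0 - np.cos(np.outer(w_pass, n))      # passband deviation from the response at w = 0
Cs = np.cos(np.outer(w_stop, n))            # stopband response terms
Q = alpha * (Dp.T @ Dp) * dwp + (1 - alpha) * (Cs.T @ Cs) * dws

vals, vecs = np.linalg.eigh(Q)              # symmetric matrix -> eigh
b = vecs[:, 0]                              # eigenvector of the smallest eigenvalue
h = np.concatenate([b[:0:-1] / 2, [b[0]], b[1:] / 2])   # symmetric (linear-phase) impulse response
```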
Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space.
Kalathil, Shaeen; Elias, Elizabeth
2015-11-01
This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFBs have an easy and efficient design approach. Non-uniform decomposition can be easily obtained by merging the appropriate filters of a uniform filter bank. Only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least square approximation. The coefficients are quantized into CSD using a look-up table. The finite-precision CSD rounding deteriorates the filter bank performance. The performance of the filter bank is improved using suitably modified meta-heuristic algorithms. The meta-heuristic algorithms which are modified and used in this paper are the Artificial Bee Colony algorithm, Gravitational Search algorithm, Harmony Search algorithm and Genetic algorithm, and they result in filter banks with lower implementation complexity, power consumption and area requirements when compared with those of the conventional continuous-coefficient non-uniform CMFB.
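A sketch, with assumed prototype and band grouping, of the merging step mentioned above: in a cosine-modulated filter bank every analysis filter is a cosine-modulated copy of the prototype, and a wider non-uniform band can be formed by summing a group of adjacent analysis filters.

```python
import numpy as np

def cmfb_analysis_filters(p, M):
    """Standard cosine modulation of a prototype p into M analysis filters."""
    N = len(p) - 1
    n = np.arange(N + 1)
    return np.array([2 * p * np.cos(np.pi / M * (k + 0.5) * (n - N / 2)
                                    + (-1) ** k * np.pi / 4) for k in range(M)])

M = 8
p = np.sinc((np.arange(12 * M) - (12 * M - 1) / 2) / (2 * M)) / (2 * M)  # crude prototype stand-in
H = cmfb_analysis_filters(p, M)
wide_band = H[0] + H[1] + H[2]          # merge channels 0-2 into one non-uniform band
```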
Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space
Kalathil, Shaeen; Elias, Elizabeth
2014-01-01
This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFBs have an easy and efficient design approach. Non-uniform decomposition can be easily obtained by merging the appropriate filters of a uniform filter bank. Only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least square approximation. The coefficients are quantized into CSD using a look-up table. The finite-precision CSD rounding deteriorates the filter bank performance. The performance of the filter bank is improved using suitably modified meta-heuristic algorithms. The meta-heuristic algorithms which are modified and used in this paper are the Artificial Bee Colony algorithm, Gravitational Search algorithm, Harmony Search algorithm and Genetic algorithm, and they result in filter banks with lower implementation complexity, power consumption and area requirements when compared with those of the conventional continuous-coefficient non-uniform CMFB. PMID:26644921
Optimal design of FIR triplet halfband filter bank and application in image coding.
Kha, H H; Tuan, H D; Nguyen, T Q
2011-02-01
This correspondence proposes an efficient semidefinite programming (SDP) method for the design of a class of linear phase finite impulse response triplet halfband filter banks whose filters have optimal frequency selectivity for a prescribed regularity order. The design problem is formulated as the minimization of the least square error subject to peak error constraints and regularity constraints. By using the linear matrix inequality characterization of the trigonometric semi-infinite constraints, it can then be exactly cast as an SDP problem with a small number of variables and, hence, can be solved efficiently. Several design examples of the triplet halfband filter bank are provided for illustration and comparison with previous works. Finally, the image coding performance of the filter bank is presented.
Morphology filter bank for extracting nodular and linear patterns in medical images.
Hashimoto, Ryutaro; Uchiyama, Yoshikazu; Uchimura, Keiichi; Koutaki, Gou; Inoue, Tomoki
2017-04-01
Using image processing to extract nodular or linear shadows is a key technique of computer-aided diagnosis schemes. This study proposes a new method for extracting nodular and linear patterns of various sizes in medical images. We have developed a morphology filter bank that creates multiresolution representations of an image. The analysis bank of this filter bank produces nodular and linear patterns at each resolution level. The synthesis bank can then be used to perfectly reconstruct the original image from these decomposed patterns. Our proposed method shows better performance in a quantitative evaluation using a synthesized image than a conventional method based on a Hessian matrix, often used to enhance nodular and linear patterns. In addition, experiments show that our method can be applied to the following: (1) microcalcifications of various sizes in mammograms can be extracted, (2) blood vessels of various sizes in retinal fundus images can be extracted, and (3) thoracic CT images can be reconstructed while removing normal vessels. Our proposed method is useful for extracting nodular and linear shadows or removing normal structures in medical images.
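A rough sketch, with assumed structuring-element sizes, of the basic morphological idea underlying such a filter bank: grey-scale opening with a small disk preserves nodular components, while openings with long thin line elements at several angles preserve linear components; repeating this at several scales gives the multiresolution decomposition. This is my own simplification, not the authors' analysis/synthesis bank.

```python
import numpy as np
from scipy.ndimage import grey_opening

def disk_footprint(r):
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    return (x**2 + y**2) <= r**2

def line_footprint(length, angle_deg):
    fp = np.zeros((length, length), dtype=bool)
    c = length // 2
    t = np.linspace(-c, c, 4 * length)
    rr = np.clip(np.round(c + t * np.sin(np.radians(angle_deg))).astype(int), 0, length - 1)
    cc = np.clip(np.round(c + t * np.cos(np.radians(angle_deg))).astype(int), 0, length - 1)
    fp[rr, cc] = True
    return fp

img = np.random.rand(128, 128)                              # stand-in for a medical image
nodular = grey_opening(img, footprint=disk_footprint(3))    # keeps blob-like structures
linear = np.max([grey_opening(img, footprint=line_footprint(15, a))
                 for a in range(0, 180, 30)], axis=0)       # keeps elongated structures
```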
The use of multiwavelets for uncertainty estimation in seismic surface wave dispersion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poppeliers, Christian
This report describes a new single-station analysis method to estimate the dispersion and uncertainty of seismic surface waves using the multiwavelet transform. Typically, when estimating the dispersion of a surface wave using only a single seismic station, the seismogram is decomposed into a series of narrow-band realizations using a bank of narrow-band filters. By then enveloping and normalizing the filtered seismograms and identifying the maximum power as a function of frequency, the group velocity can be estimated if the source-receiver distance is known. However, using the filter bank method, there is no robust way to estimate uncertainty. In this report, I introduce a new method of estimating the group velocity that includes an estimate of uncertainty. The method is similar to the conventional filter bank method, but uses a class of functions, called Slepian wavelets, to compute a series of wavelet transforms of the data. Each wavelet transform is mathematically similar to a filter bank; however, the time-frequency tradeoff is optimized. By taking multiple wavelet transforms, I form a population of dispersion estimates from which standard statistical methods can be used to estimate uncertainty. I demonstrate the utility of this new method by applying it to synthetic data as well as ambient-noise surface-wave cross-correlograms recorded by the University of Nevada Seismic Network.
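A hedged sketch of the conventional single-station filter-bank measurement that the report takes as its starting point: narrow-band filter the seismogram, envelope each band with the analytic signal, and convert the time of the envelope peak into a group velocity. Centre frequencies, bandwidth and distance are assumed example parameters; the report's multiwavelet extension is not reproduced here.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def group_velocity(seis, fs, distance_km, centres_hz, rel_bw=0.1):
    """Group velocity (km/s) at each centre frequency from a single seismogram."""
    velocities = []
    t = np.arange(len(seis)) / fs
    for fc in centres_hz:
        b, a = butter(4, [fc * (1 - rel_bw), fc * (1 + rel_bw)], btype="band", fs=fs)
        env = np.abs(hilbert(filtfilt(b, a, seis)))     # narrow-band envelope
        velocities.append(distance_km / t[np.argmax(env)])  # peak arrival time -> velocity
    return np.array(velocities)
```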
Channel Estimation for Filter Bank Multicarrier Systems in Low SNR Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Driggs, Jonathan; Sibbett, Taylor; Moradi, Hussein
Channel estimation techniques are crucial for reliable communications. This paper is concerned with channel estimation in a filter bank multicarrier spread spectrum (FBMCSS) system. We explore two channel estimator options: (i) a method that makes use of a periodic preamble and mimics the channel estimation techniques that are widely used in OFDM-based systems; and (ii) a method that stays within the traditional realm of filter bank signal processing. For the case where the channel noise is white, both methods are analyzed in detail and their performance is compared against their respective Cramer-Rao Lower Bounds (CRLB). Advantages and disadvantages of the two methods under different channel conditions are given to provide insight to the reader as to when one will outperform the other.
Methods and apparatuses using filter banks for multi-carrier spread-spectrum signals
Moradi, Hussein; Farhang, Behrouz; Kutsche, Carl A
2014-10-14
A transmitter includes a synthesis filter bank to spread a data symbol to a plurality of frequencies by encoding the data symbol on each frequency, apply a common pulse-shaping filter, and apply gains to the frequencies such that a power level of each frequency is less than a noise level of other communication signals within the spectrum. Each frequency is modulated onto a different evenly spaced subcarrier. A demodulator in a receiver converts a radio frequency input to a spread-spectrum signal in a baseband. A matched filter filters the spread-spectrum signal with a common filter having characteristics matched to the synthesis filter bank in the transmitter by filtering each frequency to generate a sequence of narrow pulses. A carrier recovery unit generates control signals responsive to the sequence of narrow pulses suitable for generating a phase-locked loop between the demodulator, the matched filter, and the carrier recovery unit.
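A very small sketch, not the patented implementation, of the spreading idea in this abstract: one data symbol is copied onto N evenly spaced subcarriers, each shaped by a common pulse-shaping filter and scaled by a small gain so that the power on every carrier stays below the noise floor of co-channel signals. The gain, carrier count and pulse are assumed placeholders.

```python
import numpy as np
from scipy.signal import firwin

def spread_symbol(symbol, n_carriers=16, gain=1e-3, pulse_len=65):
    pulse = firwin(pulse_len, 1.0 / n_carriers)        # common pulse-shaping filter
    n = np.arange(pulse_len)
    tx = np.zeros(pulse_len, dtype=complex)
    for k in range(n_carriers):                        # encode the same symbol on carrier k
        carrier = np.exp(2j * np.pi * k * n / n_carriers)
        tx += gain * symbol * pulse * carrier          # small gain keeps each carrier under the noise floor
    return tx

waveform = spread_symbol(1 + 1j)
```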
Methods and apparatuses using filter banks for multi-carrier spread-spectrum signals
Moradi, Hussein; Farhang, Behrouz; Kutsche, Carl A
2014-05-20
A transmitter includes a synthesis filter bank to spread a data symbol to a plurality of frequencies by encoding the data symbol on each frequency, apply a common pulse-shaping filter, and apply gains to the frequencies such that a power level of each frequency is less than a noise level of other communication signals within the spectrum. Each frequency is modulated onto a different evenly spaced subcarrier. A demodulator in a receiver converts a radio frequency input to a spread-spectrum signal in a baseband. A matched filter filters the spread-spectrum signal with a common filter having characteristics matched to the synthesis filter bank in the transmitter by filtering each frequency to generate a sequence of narrow pulses. A carrier recovery unit generates control signals responsive to the sequence of narrow pulses suitable for generating a phase-locked loop between the demodulator, the matched filter, and the carrier recovery unit.
Annunziata, Roberto; Trucco, Emanuele
2016-11-01
Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the number of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore it is different from recent methods improving the optimisation itself. Our warm-start strategy is based on carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method significantly reduces the time taken to learn convolutional filter banks (i.e., up to -82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation, when used as input to a random forest classifier.
On the application of under-decimated filter banks
NASA Technical Reports Server (NTRS)
Lin, Y.-P.; Vaidyanathan, P. P.
1994-01-01
Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than is direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate. Furthermore, for both systems, the implementation cost of the analysis or synthesis bank is comparable to that of one prototype filter plus some low-complexity modulation matrices. The individual analysis and synthesis filters have complex coefficients in the DFT filter banks but have real coefficients in the cosine modulated filter banks.
On the application of under-decimated filter banks
NASA Astrophysics Data System (ADS)
Lin, Y.-P.; Vaidyanathan, P. P.
1994-11-01
Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than is direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate.
Methods and apparatuses using filter banks for multi-carrier spread spectrum signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moradi, Hussein; Farhang, Behrouz; Kutsche, Carl A
2017-01-31
A transmitter includes a synthesis filter bank to spread a data symbol to a plurality of frequencies by encoding the data symbol on each frequency, apply a common pulse-shaping filter, and apply gains to the frequencies such that a power level of each frequency is less than a noise level of other communication signals within the spectrum. Each frequency is modulated onto a different evenly spaced subcarrier. A demodulator in a receiver converts a radio frequency input to a spread-spectrum signal in a baseband. A matched filter filters the spread-spectrum signal with a common filter having characteristics matched to the synthesis filter bank in the transmitter by filtering each frequency to generate a sequence of narrow pulses. A carrier recovery unit generates control signals responsive to the sequence of narrow pulses suitable for generating a phase-locked loop between the demodulator, the matched filter, and the carrier recovery unit.
Methods and apparatuses using filter banks for multi-carrier spread spectrum signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moradi, Hussein; Farhang, Behrouz; Kutsche, Carl A.
2016-06-14
A transmitter includes a synthesis filter bank to spread a data symbol to a plurality of frequencies by encoding the data symbol on each frequency, apply a common pulse-shaping filter, and apply gains to the frequencies such that a power level of each frequency is less than a noise level of other communication signals within the spectrum. Each frequency is modulated onto a different evenly spaced subcarrier. A demodulator in a receiver converts a radio frequency input to a spread-spectrum signal in a baseband. A matched filter filters the spread-spectrum signal with a common filter having characteristics matched to the synthesis filter bank in the transmitter by filtering each frequency to generate a sequence of narrow pulses. A carrier recovery unit generates control signals responsive to the sequence of narrow pulses suitable for generating a phase-locked loop between the demodulator, the matched filter, and the carrier recovery unit.
Park, Sang-Hoon; Lee, David; Lee, Sang-Goog
2018-02-01
For the last few years, many feature extraction methods have been proposed based on biological signals. Among these, the brain signals have the advantage that they can be obtained, even by people with peripheral nervous system damage. Motor imagery electroencephalograms (EEG) are inexpensive to measure, offer a high temporal resolution, and are intuitive. Therefore, these have received a significant amount of attention in various fields, including signal processing, cognitive science, and medicine. The common spatial pattern (CSP) algorithm is a useful method for feature extraction from motor imagery EEG. However, performance degradation occurs in a small-sample setting (SSS), because the CSP depends on sample-based covariance. Since the active frequency range is different for each subject, it is also inconvenient to set the frequency range to be different every time. In this paper, we propose a feature extraction method based on a filter bank to solve these problems. The proposed method consists of five steps. First, the motor imagery EEG is divided into sub-bands using a filter bank. Second, the regularized CSP (R-CSP) is applied to the divided EEG. Third, we select the features according to mutual information based on the individual feature algorithm. Fourth, parameter sets are selected for the ensemble. Finally, we classify using an ensemble based on the selected features. The brain-computer interface competition III data set IVa is used to evaluate the performance of the proposed method. The proposed method improves the mean classification accuracy by 12.34%, 11.57%, 9%, 4.95%, and 4.47% compared with CSP, SR-CSP, R-CSP, filter bank CSP (FBCSP), and SR-FBCSP. Compared with the filter bank R-CSP ( , ), which is a parameter selection version of the proposed method, the classification accuracy is improved by 3.49%. In particular, the proposed method shows a large improvement in performance in the SSS.
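A sketch of the first step only (sub-band division), with assumed band edges and filter order: the motor-imagery EEG is split by a bank of band-pass filters before R-CSP is applied to each band. The later steps (feature selection, ensemble, classification) are not shown.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_bank(eeg, fs, bands=((4, 8), (8, 12), (12, 16), (16, 24), (24, 32))):
    """eeg: array (channels, samples) -> list of band-passed copies, one per sub-band."""
    out = []
    for lo, hi in bands:
        b, a = butter(4, [lo, hi], btype="band", fs=fs)
        out.append(filtfilt(b, a, eeg, axis=-1))
    return out

sub_band_eeg = filter_bank(np.random.randn(22, 1000), fs=250)
```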
Exact reconstruction analysis/synthesis filter banks with time-varying filters
NASA Technical Reports Server (NTRS)
Arrowood, J. L., Jr.; Smith, M. J. T.
1993-01-01
This paper examines some of the analysis/synthesis issues associated with FIR time-varying filter banks where the filter bank coefficients are allowed to change in response to the input signal. Several issues are identified as being important in order to realize performance gains from time-varying filter banks in image coding applications. These issues relate to the behavior of the filters as transition from one set of filter banks to another occurs. Lattice structure formulations for the time varying filter bank problem are introduced and discussed in terms of their properties and transition characteristics.
NASA Technical Reports Server (NTRS)
Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.
2011-01-01
Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.
Optimal design of a bank of spatio-temporal filters for EEG signal classification.
Higashi, Hiroshi; Tanaka, Toshihisa
2011-01-01
The spatial weights for electrodes, called the common spatial pattern (CSP), are known to be effective in EEG signal classification for motor imagery based brain computer interfaces (MI-BCI). To achieve accurate classification in CSP, the frequency filter should be properly designed. To this end, several methods for designing the filter have been proposed. However, the existing methods cannot consider plural brain activities described with different frequency bands and different spatial patterns, such as activities of mu and beta rhythms. In order to efficiently extract these brain activities, we propose a method to design plural filters and spatial weights which extract the desired brain activity. The proposed method designs finite impulse response (FIR) filters and the associated spatial weights by optimization of an objective function which is a natural extension of CSP. Moreover, we show by a classification experiment that the bank of FIR filters designed by introducing orthogonality into the objective function can extract good discriminative features. Furthermore, the experimental results suggest that the proposed method can automatically detect and extract brain activities related to motor imagery.
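A minimal sketch of standard CSP, the objective that this paper generalises: the spatial weights maximise the variance ratio between the two classes, which reduces to a generalised eigenvalue problem on the class covariance matrices. Trial shapes and the number of retained filters are assumed values.

```python
import numpy as np
from scipy.linalg import eigh

def csp(trials_a, trials_b, n_pairs=3):
    """trials_*: (n_trials, channels, samples). Returns 2*n_pairs spatial filters (rows)."""
    cov = lambda trials: np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)             # generalised eigenvalue problem
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]   # most discriminative directions
    return vecs[:, picks].T
```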
Application of DFT Filter Banks and Cosine Modulated Filter Banks in Filtering
NASA Technical Reports Server (NTRS)
Lin, Yuan-Pei; Vaidyanathan, P. P.
1994-01-01
None given. This is a proposal for a paper to be presented at APCCAS '94 in Taipei, Taiwan. (From outline): This work is organized as follows: Sec. II is devoted to the construction of the new 2m channel under-decimated DFT filter bank. Implementation and complexity of this DFT filter bank are discussed therein. In a similar manner, the new 2m channel cosine modulated filter bank is discussed in Sec. III. Design examples are given in Sec. IV.
Electroencephalographic compression based on modulated filter banks and wavelet transform.
Bazán-Prieto, Carlos; Cárdenas-Barrera, Julián; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando
2011-01-01
Due to the large volume of information generated in an electroencephalographic (EEG) study, compression is needed for storage, processing or transmission for analysis. In this paper we evaluate and compare two lossy compression techniques applied to EEG signals, comparing the performance of compression schemes based on decomposition by filter banks or wavelet packet transforms and seeking the best compression, the best quality and the most efficient real-time implementation. Due to specific properties of EEG signals, we propose a quantization stage adapted to the dynamic range of each band, looking for higher quality. The results show that the compressor with the filter bank performs better than the transform methods. Quantization adapted to the dynamic range significantly enhances the quality.
Recursive time-varying filter banks for subband image coding
NASA Technical Reports Server (NTRS)
Smith, Mark J. T.; Chung, Wilson C.
1992-01-01
Filter banks and wavelet decompositions that employ recursive filters have been considered previously and are recognized for their efficiency in partitioning the frequency spectrum. This paper presents an analysis of a new infinite impulse response (IIR) filter bank in which these computationally efficient filters may be changed adaptively in response to the input. The filter bank is presented and discussed in the context of finite-support signals with the intended application in subband image coding. In the absence of quantization errors, exact reconstruction can be achieved and by the proper choice of an adaptation scheme, it is shown that IIR time-varying filter banks can yield improvement over conventional ones.
Kumar, Shiu; Sharma, Alok; Tsunoda, Tatsuhiko
2017-12-28
Common spatial pattern (CSP) has been an effective technique for feature extraction in electroencephalography (EEG) based brain computer interfaces (BCIs). However, motor imagery EEG signal feature extraction using CSP generally depends on the selection of the frequency bands to a great extent. In this study, we propose a mutual information based frequency band selection approach. The idea of the proposed method is to utilize the information from all the available channels for effectively selecting the most discriminative filter banks. CSP features are extracted from multiple overlapping sub-bands. An additional sub-band has been introduced that covers the wide frequency band (7-30 Hz) and two different types of features are extracted using CSP and common spatio-spectral pattern techniques, respectively. Mutual information is then computed from the extracted features of each of these bands and the top filter banks are selected for further processing. Linear discriminant analysis is applied to the features extracted from each of the filter banks. The scores are fused together, and classification is done using support vector machine. The proposed method is evaluated using BCI Competition III dataset IVa, BCI Competition IV dataset I and BCI Competition IV dataset IIb, and it outperformed all other competing methods achieving the lowest misclassification rate and the highest kappa coefficient on all three datasets. By introducing a wide sub-band and using mutual information to select the most discriminative sub-bands, the proposed method shows improvement in motor imagery EEG signal classification.
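A sketch, with assumed helper names, of the band-selection step: CSP features are extracted per sub-band, the mutual information between each band's features and the class labels is computed, and only the top-scoring bands are kept for the later LDA/SVM stages (not shown here).

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_bands(band_features, labels, n_keep=4):
    """band_features: list of (n_trials, n_features) CSP feature arrays, one per sub-band."""
    scores = [mutual_info_classif(F, labels, random_state=0).sum() for F in band_features]
    return list(np.argsort(scores)[::-1][:n_keep])   # indices of the most informative sub-bands
```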
Design of almost symmetric orthogonal wavelet filter bank via direct optimization.
Murugesan, Selvaraaju; Tay, David B H
2012-05-01
It is a well-known fact that (compact-support) dyadic wavelets [based on the two channel filter banks (FBs)] cannot be simultaneously orthogonal and symmetric. Although orthogonal wavelets have the energy preservation property, biorthogonal wavelets are preferred in image processing applications because of their symmetric property. In this paper, a novel method is presented for the design of almost symmetric orthogonal wavelet FB. Orthogonality is structurally imposed by using the unnormalized lattice structure, and this leads to an objective function, which is relatively simple to optimize. The designed filters have good frequency response, flat group delay, almost symmetric filter coefficients, and symmetric wavelet function.
Enhanced visualization of abnormalities in digital-mammographic images
NASA Astrophysics Data System (ADS)
Young, Susan S.; Moore, William E.
2002-05-01
This paper describes two new presentation methods that are intended to improve the ability of radiologists to visualize abnormalities in mammograms by enhancing the appearance of the breast parenchyma pattern relative to the fatty-tissue surroundings. The first method, referred to as mountain-view, is obtained via multiscale edge decomposition through filter banks. The image is displayed in a multiscale edge domain that causes the image to have a topographic-like appearance. The second method displays the image in the intensity domain and is referred to as contrast-enhancement presentation. The input image is first passed through a decomposition filter bank to produce a filtered output (Id). The image at the lowest resolution is processed using a LUT (look-up table) to produce a tone scaled image (I'). The LUT is designed to optimally map the code value range corresponding to the parenchyma pattern in the mammographic image into the dynamic range of the output medium. The algorithm uses a contrast weight control mechanism to produce the desired weight factors to enhance the edge information corresponding to the parenchyma pattern. The output image is formed using a reconstruction filter bank through I' and enhanced Id.
Extended Kalman filtering for the detection of damage in linear mechanical structures
NASA Astrophysics Data System (ADS)
Liu, X.; Escamilla-Ambrosio, P. J.; Lieven, N. A. J.
2009-09-01
This paper addresses the problem of assessing the location and extent of damage in a vibrating structure by means of vibration measurements. Frequency domain identification methods (e.g. finite element model updating) have been widely used in this area while time domain methods, such as the extended Kalman filter (EKF) method, are more sparsely represented. The difficulty of applying the EKF in mechanical system damage identification and localisation lies in the high computational cost, the dependence of estimation results on the initial estimation error covariance matrix P(0), the initial value of parameters to be estimated, and on the statistics of measurement noise R and process noise Q. To resolve these problems in the EKF, a multiple model adaptive estimator consisting of a bank of EKFs in the modal domain was designed; each filter in the bank is based on a different P(0). The algorithm was iterated by using the weighted global iteration method. A fuzzy logic model was incorporated in each filter to estimate the variance of the measurement noise R. The application of the method is illustrated by simulated and real examples.
Real-Time Diagnosis of Faults Using a Bank of Kalman Filters
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2006-01-01
A new robust method of automated real-time diagnosis of faults in an aircraft engine or a similar complex system involves the use of a bank of Kalman filters. In order to be highly reliable, a diagnostic system must be designed to account for the numerous failure conditions that an aircraft engine may encounter in operation. The method achieves this objective though the utilization of multiple Kalman filters, each of which is uniquely designed based on a specific failure hypothesis. A fault-detection-and-isolation (FDI) system, developed based on this method, is able to isolate faults in sensors and actuators while detecting component faults (abrupt degradation in engine component performance). By affording a capability for real-time identification of minor faults before they grow into major ones, the method promises to enhance safety and reduce operating costs. The robustness of this method is further enhanced by incorporating information regarding the aging condition of an engine. In general, real-time fault diagnostic methods use the nominal performance of a "healthy" new engine as a reference condition in the diagnostic process. Such an approach does not account for gradual changes in performance associated with aging of an otherwise healthy engine. By incorporating information on gradual, aging-related changes, the new method makes it possible to retain at least some of the sensitivity and accuracy needed to detect incipient faults while preventing false alarms that could result from erroneous interpretation of symptoms of aging as symptoms of failures. The figure schematically depicts an FDI system according to the new method. The FDI system is integrated with an engine, from which it accepts two sets of input signals: sensor readings and actuator commands. Two main parts of the FDI system are a bank of Kalman filters and a subsystem that implements FDI decision rules. Each Kalman filter is designed to detect a specific sensor or actuator fault. When a sensor or actuator fault occurs, large estimation errors are generated by all filters except the one using the correct hypothesis. By monitoring the residual output of each filter, the specific fault that has occurred can be detected and isolated on the basis of the decision rules. A set of parameters that indicate the performance of the engine components is estimated by the "correct" Kalman filter for use in detecting component faults. To reduce the loss of diagnostic accuracy and sensitivity in the face of aging, the FDI system accepts information from a steady-state-condition-monitoring system. This information is used to update the Kalman filters and a data bank of trim values representative of the current aging condition.
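A conceptual sketch, not the NASA implementation, of the decision rule the abstract describes: each Kalman filter in the bank assumes one fault hypothesis, and the hypothesis whose filter produces the smallest normalised innovation is declared. Residuals, covariances and labels below are illustrative placeholders.

```python
import numpy as np

def isolate_fault(innovations, innovation_covs, labels):
    """innovations[i], innovation_covs[i]: residual vector and covariance of filter i."""
    scores = [r @ np.linalg.solve(S, r) for r, S in zip(innovations, innovation_covs)]
    return labels[int(np.argmin(scores))]           # hypothesis that best explains the data

fault = isolate_fault(
    innovations=[np.array([0.1, -0.2]), np.array([2.3, 1.9])],
    innovation_covs=[np.eye(2), np.eye(2)],
    labels=["sensor-1 fault hypothesis", "actuator-2 fault hypothesis"])
```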
An analysis of the multiple model adaptive control algorithm. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Greene, C. S.
1978-01-01
Qualitative and quantitative aspects of the multiple model adaptive control method are detailed. The method represents a cascade of something which resembles a maximum a posteriori probability identifier (basically a bank of Kalman filters) and a bank of linear quadratic regulators. Major qualitative properties of the MMAC method are examined and principal reasons for unacceptable behavior are explored.
NASA Astrophysics Data System (ADS)
Jeong, Jeong-Won; Kim, Tae-Seong; Shin, Dae-Chul; Do, Synho; Marmarelis, Vasilis Z.
2004-04-01
Recently it was shown that soft tissue can be differentiated with spectral unmixing and detection methods that utilize multi-band information obtained from a High-Resolution Ultrasonic Transmission Tomography (HUTT) system. In this study, we focus on tissue differentiation using the spectral target detection method based on Constrained Energy Minimization (CEM). We have developed a new tissue differentiation method called "CEM filter bank". Statistical inference on the output of each CEM filter of a filter bank is used to make a decision based on the maximum statistical significance rather than the magnitude of each CEM filter output. We validate this method through 3-D inter/intra-phantom soft tissue classification where target profiles obtained from an arbitrary single slice are used for differentiation in multiple tomographic slices. Also spectral coherence between target and object profiles of an identical tissue at different slices and phantoms is evaluated by conventional cross-correlation analysis. The performance of the proposed classifier is assessed using Receiver Operating Characteristic (ROC) analysis. Finally we apply our method to classify tiny structures inside a beef kidney such as Styrofoam balls (~1mm), chicken tissue (~5mm), and vessel-duct structures.
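A small sketch of a single constrained-energy-minimisation (CEM) filter, the building block of the proposed filter bank: the filter minimises the output energy over the data subject to a unit response to the target spectral signature d, giving w = R^-1 d / (d' R^-1 d). Array shapes are assumed; the statistical-inference stage described in the abstract is not shown.

```python
import numpy as np

def cem_filter(X, d):
    """X: (n_bands, n_pixels) multi-band data, d: (n_bands,) target signature."""
    R = X @ X.T / X.shape[1]               # sample correlation matrix of the data
    Rinv_d = np.linalg.solve(R, d)
    w = Rinv_d / (d @ Rinv_d)              # unit gain on the target, minimum output energy
    return w @ X                           # detector output for every pixel
```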
Orthonormal filters for identification in active control systems
NASA Astrophysics Data System (ADS)
Mayer, Dirk
2015-12-01
Many active noise and vibration control systems require models of the control paths. When the controlled system changes slightly over time, adaptive digital filters for the identification of the models are useful. This paper aims at the investigation of a special class of adaptive digital filters: orthonormal filter banks possess the robust and simple adaptation of the widely applied finite impulse response (FIR) filters, but at a lower model order, which is important when considering implementation on embedded systems. However, the filter banks require prior knowledge about the resonance frequencies and damping of the structure. This knowledge can be supposed to be of limited precision, since in many practical systems, uncertainties in the structural parameters exist. In this work, a procedure using a number of training systems to find the fixed parameters for the filter banks is applied. The effect of uncertainties in the prior knowledge on the model error is examined both with a basic example and in an experiment. Furthermore, the possibilities to compensate for the imprecise prior knowledge by a higher filter order are investigated. Also comparisons with FIR filters are implemented in order to assess the possible advantages of the orthonormal filter banks. Numerical and experimental investigations show that significantly lower computational effort can be reached by the filter banks under certain conditions.
Reducing the complexity of the CCSDS standard for image compression decreasing the DWT filter order
NASA Astrophysics Data System (ADS)
Ito, Leandro H.; Pinho, Marcelo S.
2014-10-01
The goal for this work is to evaluate the impact of utilizing shorter wavelet filters in the CCSDS standard for lossy and lossless image compression. Another constraint considered was the existence of symmetry in the filters. That approach was desired to maintain the symmetric extension compatibility of the filter banks. Even though this strategy works well for float wavelets, it is not always the case for their integer approximations. The periodic extension was utilized whenever symmetric extension was not applicable. Even though the latter outperforms the former, for fair comparison the symmetric extension compatible integer-to-integer wavelet approximations were evaluated under both extensions. The evaluation methods adopted were bit rate (bpp), PSNR and the number of operations required by each wavelet transform. All these results were compared against the ones obtained utilizing the standard CCSDS with 9/7 filter banks, for lossy and lossless compression. The tests were performed over tiles (512x512) of raw remote sensing images from CBERS-2B (China-Brazil Earth Resources Satellites) captured from its high resolution CCD camera. These images were cordially made available by INPE (National Institute for Space Research) in Brazil. For the CCSDS implementation, the source code developed by Hongqiang Wang from the Electrical Department at Nebraska-Lincoln University was utilized, with the appropriate changes applied to the wavelet transform. For lossy compression, the results have shown that the filter bank built from the Deslauriers-Dubuc scaling function, with respectively 2 and 4 vanishing moments on the synthesis and analysis banks, presented not only a reduction of 21% in the number of operations required, but also a performance on par with the 9/7 filter bank. In the lossless case, the biorthogonal Cohen-Daubechies-Feauveau with 2 vanishing moments presented a performance close to the 9/7 integer approximation of the CCSDS, with the number of operations reduced by 1/3.
A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.
Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng
To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.
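A minimal expression, my own sketch rather than the paper's full formulation, of the Fisher's ratio used as the objective: the between-class scatter of a feature divided by its within-class scatter. The simulated features below are assumed stand-ins.

```python
import numpy as np

def fishers_ratio(f_class1, f_class2):
    """Between-class scatter over within-class scatter for a 1-D feature."""
    return (f_class1.mean() - f_class2.mean()) ** 2 / (f_class1.var() + f_class2.var())

rng = np.random.default_rng(1)
j = fishers_ratio(rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100))
```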
A novel filter bank for biotelemetry.
Karagözoglu, B
2001-03-01
In a multichannel biotelemetry system, signals taken from a patient are distributed along the available frequency range (bandwidth) of the system through frequency-division-multiplexing, and combined into a single composite signal. Biological signals that are limited to low frequencies (below 10 Hz) modulate the frequencies of respective sub-carriers. Other biological signals are carried in amplitude-modulated forms. It is recognized that recovering original signals from a composite signal at the receiver side is a technical challenge when a telemetry system with narrow bandwidth capacity is used, since such a system leaves little frequency spacing between information channels. A filter bank is therefore utilized for recovering biological signals that are transmitted. The filter bank contains filter units comprising switched-capacitor filter integrated circuits. The filters have two distinct and opposing outputs (band-stop (notch) and band-pass). Since most biological signals are at low frequencies, and modulated signals occupy a narrow band around the carrier, notch filters can be used to efficiently stop signals in the narrow frequency range. Once the interim channels are removed, other channels become well separated from each other, and band-pass filters can select them. In the proposed system, efficient filtering of closely packed channels is achieved, with low interference, from neighboring channels. The filter bank is applied to a system that carries four biological signals and a battery status indicator signal. Experimental results reinforce theoretical predictions that the filter bank successfully de-multiplexes closely packed information channels with low crosstalk between them. It is concluded that the proposed filter bank allows utilization of cost-effective multichannel biotelemetry systems that are designed around commercial audio devices, and that it can be readily adapted to a broad range of physiological recording requirements.
Elaborate analysis and design of filter-bank-based sensing for wideband cognitive radios
NASA Astrophysics Data System (ADS)
Maliatsos, Konstantinos; Adamis, Athanasios; Kanatas, Athanasios G.
2014-12-01
The successful operation of a cognitive radio system strongly depends on its ability to sense the radio environment. With the use of spectrum sensing algorithms, the cognitive radio is required to detect co-existing licensed primary transmissions and to protect them from interference. This paper focuses on filter-bank-based sensing and provides a solid theoretical background for the design of these detectors. Optimum detectors based on the Neyman-Pearson theorem are developed for uniform discrete Fourier transform (DFT) and modified DFT filter banks with root-Nyquist filters. The proposed sensing framework does not require frequency alignment between the filter bank of the sensor and the primary signal. Each wideband primary channel is spanned and monitored by several sensor subchannels that analyse it in narrowband signals. Filter-bank-based sensing is proved to be robust and efficient under coloured noise. Moreover, the performance of the weighted energy detector as a sensing technique is evaluated. Finally, based on the Locally Most Powerful and the Generalized Likelihood Ratio test, real-world sensing algorithms that do not require a priori knowledge are proposed and tested.
Correlation Filter Synthesis Using Neural Networks.
1993-12-01
Trained neural networks may be understood as "smart" data interpolators; the stored filter and the filter synthesis approaches have much in common: in the former, new filters are found by searching a data bank consisting of the filters themselves; in the latter, filters are formed from a distributed data bank that contains neural network interaction strengths or weights. 1.2 Key Results and Outputs: Excellent computer simulation results were obtained.
Wavelet Filter Banks for Super-Resolution SAR Imaging
NASA Technical Reports Server (NTRS)
Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess
2011-01-01
This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's earth science fields such as deformation, ecosystem structure, and dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Due to the non-parametric nature of these methods, their resolution limitations and their observation-time dependence, the use of spectral estimation and signal pre- and post-processing techniques based on wavelets to process SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.
NASA Astrophysics Data System (ADS)
Chen, Xiaogang; Wang, Yijun; Gao, Shangkai; Jung, Tzyy-Ping; Gao, Xiaorong
2015-08-01
Objective. Recently, canonical correlation analysis (CCA) has been widely used in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) due to its high efficiency, robustness, and simple implementation. However, a method with which to make use of harmonic SSVEP components to enhance the CCA-based frequency detection has not been well established. Approach. This study proposed a filter bank canonical correlation analysis (FBCCA) method to incorporate fundamental and harmonic frequency components to improve the detection of SSVEPs. A 40-target BCI speller based on frequency coding (frequency range: 8-15.8 Hz, frequency interval: 0.2 Hz) was used for performance evaluation. To optimize the filter bank design, three methods (M1: sub-bands with equally spaced bandwidths; M2: sub-bands corresponding to individual harmonic frequency bands; M3: sub-bands covering multiple harmonic frequency bands) were proposed for comparison. Classification accuracy and information transfer rate (ITR) of the three FBCCA methods and the standard CCA method were estimated using an offline dataset from 12 subjects. Furthermore, an online BCI speller adopting the optimal FBCCA method was tested with a group of 10 subjects. Main results. The FBCCA methods significantly outperformed the standard CCA method. The method M3 achieved the highest classification performance. At a spelling rate of ~33.3 characters/min, the online BCI speller obtained an average ITR of 151.18 ± 20.34 bits min^-1. Significance. By incorporating the fundamental and harmonic SSVEP components in target identification, the proposed FBCCA method significantly improves the performance of the SSVEP-based BCI, and thereby facilitates its practical applications such as high-speed spelling.
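A hedged sketch of FBCCA scoring for one candidate stimulus frequency: band-pass the EEG into sub-bands, compute the canonical correlation between each sub-band and sin/cos references at the frequency and its harmonics, and combine the squared correlations with weights of the form w(n) = n^-a + b. The sub-band edges, weight constants and array shapes are assumptions (eeg is (channels, samples), sampling rate above ~180 Hz); this is an illustration, not the authors' code.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cross_decomposition import CCA

def fbcca_score(eeg, fs, freq, n_harm=4, n_sub=5, a=1.25, b=0.25):
    t = np.arange(eeg.shape[1]) / fs
    ref = np.vstack([f(2 * np.pi * h * freq * t)
                     for h in range(1, n_harm + 1) for f in (np.sin, np.cos)])
    score = 0.0
    for n in range(1, n_sub + 1):
        bl, al = butter(4, [8 * n, 88], btype="band", fs=fs)   # M3-style sub-bands (assumed)
        x = filtfilt(bl, al, eeg, axis=1)
        cca = CCA(n_components=1).fit(x.T, ref.T)
        u, v = cca.transform(x.T, ref.T)
        rho = np.corrcoef(u[:, 0], v[:, 0])[0, 1]              # largest canonical correlation
        score += (n ** -a + b) * rho ** 2                      # weighted combination across sub-bands
    return score
```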
Chowdhury, Debbrota Paul; Bakshi, Sambit; Guo, Guodong; Sa, Pankaj Kumar
2017-11-27
In this paper, an overall framework is presented for person verification using ear biometrics, which uses a tunable filter bank as a local feature extractor. The tunable filter bank, based on a half-band polynomial of 14th order, extracts distinct features from ear images while maintaining its frequency selectivity property. To advocate the applicability of the tunable filter bank to ear biometrics, recognition tests have been performed on available constrained databases like AMI, WPUT, IITD and the unconstrained database UERC. Experiments have been conducted applying the tunable-filter-based feature extractor on subparts of the ear. Empirical experiments have been conducted with four and six subdivisions of the ear image. Analyzing the experimental results, it has been found that the tunable filter moderately succeeds in distinguishing ear features on par with the state-of-the-art features used for ear recognition. Accuracies of 70.58%, 67.01%, 81.98%, and 57.75% have been achieved on the AMI, WPUT, IITD, and UERC databases, respectively, using the Canberra distance as the underlying measure of separation. The performances indicate that the tunable filter is a candidate for recognizing humans from ear images.
Current-State Constrained Filter Bank for Wald Testing of Spacecraft Conjunctions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2012-01-01
We propose a filter bank consisting of an ordinary current-state extended Kalman filter, and two similar but constrained filters: one is constrained by a null hypothesis that the miss distance between two conjuncting spacecraft is inside their combined hard body radius at the predicted time of closest approach, and one is constrained by an alternative complementary hypothesis. The unconstrained filter is the basis of an initial screening for close approaches of interest. Once the initial screening detects a possibly risky conjunction, the unconstrained filter also governs measurement editing for all three filters, and predicts the time of closest approach. The constrained filters operate only when conjunctions of interest occur. The computed likelihoods of the innovations of the two constrained filters form a ratio for a Wald sequential probability ratio test. The Wald test guides risk mitigation maneuver decisions based on explicit false alarm and missed detection criteria. Since only current-state Kalman filtering is required to compute the innovations for the likelihood ratio, the present approach does not require the mapping of probability density forward to the time of closest approach. Instead, the hard-body constraint manifold is mapped to the filter update time by applying a sigma-point transformation to a projection function. Although many projectors are available, we choose one based on Lambert-style differential correction of the current-state velocity. We have tested our method using a scenario based on the Magnetospheric Multi-Scale mission, scheduled for launch in late 2014. This mission involves formation flight in highly elliptical orbits of four spinning spacecraft equipped with antennas extending 120 meters tip-to-tip. Eccentricities range from 0.82 to 0.91, and close approaches generally occur in the vicinity of perigee, where rapid changes in geometry may occur. Testing the method using two 12,000-case Monte Carlo simulations, we found the method achieved a missed detection rate of 0.1%, and a false alarm rate of 2%.
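A minimal sketch of the Wald sequential probability ratio test driven by the two constrained filters: the running sum of the log innovation-likelihood ratios is compared against thresholds derived from the chosen false-alarm and missed-detection rates. The threshold convention, rates, and hypothesis labels below are illustrative assumptions, not the mission's actual settings.

```python
import numpy as np

def wald_test(loglike_null, loglike_alt, alpha=0.02, beta=0.001):
    """loglike_*: per-update innovation log-likelihoods from each constrained filter."""
    A = np.log((1 - beta) / alpha)          # upper threshold -> accept alternative
    B = np.log(beta / (1 - alpha))          # lower threshold -> accept null
    llr = np.cumsum(np.asarray(loglike_alt) - np.asarray(loglike_null))
    for k, s in enumerate(llr):
        if s >= A:
            return "accept alternative (miss distance outside hard-body radius)", k
        if s <= B:
            return "accept null (collision risk: consider maneuver)", k
    return "continue testing", len(llr) - 1
```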
System and method for detection of dispersed broadband signals
Qian, S.; Dunham, M.E.
1999-06-08
A system and method for detecting the presence of dispersed broadband signals in real time are disclosed. The present invention utilizes a bank of matched filters for detecting the received dispersed broadband signals. Each matched filter uses a respective robust time template that has been designed to approximate the dispersed broadband signals of interest, and each time template varies across a spectrum of possible dispersed broadband signal time templates. The received dispersed broadband signal x(t) is received by each of the matched filters, and if one or more matches occurs, then the received data is determined to have signal data of interest. This signal data can then be analyzed and/or transmitted to Earth for analysis, as desired. The system and method of the present invention will prove extremely useful in many fields, including satellite communications, plasma physics, and interstellar research. The varying time templates used in the bank of matched filters are determined as follows. The robust time domain template is assumed to take the form w(t) = A(t)cos{2πφ(t)}. Since the instantaneous frequency f(t) is known to be equal to the derivative of the phase φ(t), the trajectory of a joint time-frequency representation of x(t) is used as an approximation of φ′(t). 10 figs.
Abnormal Image Detection in Endoscopy Videos Using a Filter Bank and Local Binary Patterns
Nawarathna, Ruwan; Oh, JungHwan; Muthukudage, Jayantha; Tavanapong, Wallapak; Wong, Johnny; de Groen, Piet C.; Tang, Shou Jiang
2014-01-01
Finding mucosal abnormalities (e.g., erythema, blood, ulcer, erosion, and polyp) is one of the most essential tasks during endoscopy video review. Since these abnormalities typically appear in a small number of frames (around 5% of the total frame number), automated detection of frames with an abnormality can save physician’s time significantly. In this paper, we propose a new multi-texture analysis method that effectively discerns images showing mucosal abnormalities from the ones without any abnormality since most abnormalities in endoscopy images have textures that are clearly distinguishable from normal textures using an advanced image texture analysis method. The method uses a “texton histogram” of an image block as features. The histogram captures the distribution of different “textons” representing various textures in an endoscopy image. The textons are representative response vectors of an application of a combination of Leung and Malik (LM) filter bank (i.e., a set of image filters) and a set of Local Binary Patterns on the image. Our experimental results indicate that the proposed method achieves 92% recall and 91.8% specificity on wireless capsule endoscopy (WCE) images and 91% recall and 90.8% specificity on colonoscopy images. PMID:25132723
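The texton-histogram feature can be sketched as below: convolve image blocks with a filter bank, cluster the per-pixel response vectors from training data into texton centres, and describe a new block by its histogram of nearest-texton labels. The placeholder kernels stand in for the Leung-Malik filters plus the Local Binary Pattern responses used in the paper, and the clustering and normalization choices are illustrative.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def filter_responses(image, kernels):
    """Per-pixel response vectors (n_pixels x n_filters) for a filter bank."""
    stack = np.stack([ndimage.convolve(image.astype(float), k) for k in kernels],
                     axis=-1)
    return stack.reshape(-1, len(kernels))

def learn_textons(training_images, kernels, n_textons=32):
    """Cluster filter-bank response vectors from training images into textons."""
    X = np.vstack([filter_responses(img, kernels) for img in training_images])
    return KMeans(n_clusters=n_textons, n_init=4, random_state=0).fit(X)

def texton_histogram(block, kernels, textons):
    """Normalized histogram of nearest-texton assignments for one image block."""
    labels = textons.predict(filter_responses(block, kernels))
    hist = np.bincount(labels, minlength=textons.n_clusters).astype(float)
    return hist / hist.sum()
```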
Online frequency estimation with applications to engine and generator sets
NASA Astrophysics Data System (ADS)
Manngård, Mikael; Böling, Jari M.
2017-07-01
Frequency and spectral analysis based on the discrete Fourier transform is a fundamental task in signal processing and machine diagnostics. This paper aims at presenting computationally efficient methods for real-time estimation of stationary and time-varying frequency components in signals. A brief survey of the sliding time window discrete Fourier transform and the Goertzel filter is presented, and two filter banks, consisting of (i) sliding time window Goertzel filters and (ii) infinite impulse response narrow bandpass filters, are proposed for estimating instantaneous frequencies. The proposed methods show excellent results both in simulation studies and in a case study using angular speed measurements of the crankshaft of a marine diesel engine-generator set.
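For reference, the Goertzel recursion that such a sliding-window filter bank builds on can be written in a few lines; each frequency of interest gets one second-order recursion, so a bank is simply a list of these evaluated over the current window. The window length and bin indices below are the caller's choice.

```python
import numpy as np

def goertzel_power(x, k, N):
    """Power of DFT bin k over the last N samples of x, via the Goertzel
    recursion (a single second-order IIR section per monitored bin)."""
    coeff = 2.0 * np.cos(2.0 * np.pi * k / N)
    s1 = s2 = 0.0
    for sample in x[-N:]:
        s0 = sample + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

# A bank of Goertzel filters evaluates several bins over the same window:
# powers = [goertzel_power(window, k, N) for k in bins_of_interest]
```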
Spatiotemporal source tuning filter bank for multiclass EEG based brain computer interfaces.
Acharya, Soumyadipta; Mollazadeh, Moshen; Murari, Kartikeya; Thakor, Nitish
2006-01-01
Non-invasive brain-computer interfaces (BCI) allow people to communicate by modulating features of their electroencephalogram (EEG). Spatiotemporal filtering has a vital role in multi-class, EEG-based BCI. In this study, we used a novel combination of principal component analysis, independent component analysis and dipole source localization to design a spatiotemporal multiple source tuning (SPAMSORT) filter bank, each channel of which was tuned to the activity of an underlying dipole source. Changes in the event-related spectral perturbation (ERSP) were measured and used to train a linear support vector machine to classify between four classes of motor imagery tasks (left hand, right hand, foot and tongue) for one subject. ERSP values were significantly (p<0.01) different across tasks and better (p<0.01) than conventional spatial filtering methods (large Laplacian and common average reference). Classification resulted in an average accuracy of 82.5%. This approach could lead to promising BCI applications such as control of a prosthesis with multiple degrees of freedom.
Schädler, Marc René; Kollmeier, Birger
2015-04-01
To test if simultaneous spectral and temporal processing is required to extract robust features for automatic speech recognition (ASR), the robust spectro-temporal two-dimensional-Gabor filter bank (GBFB) front-end from Schädler, Meyer, and Kollmeier [J. Acoust. Soc. Am. 131, 4134-4151 (2012)] was decomposed into a spectral one-dimensional-Gabor filter bank and a temporal one-dimensional-Gabor filter bank. A feature set that is extracted with these separate spectral and temporal modulation filter banks was introduced, the separate Gabor filter bank (SGBFB) features, and evaluated on the CHiME (Computational Hearing in Multisource Environments) keywords-in-noise recognition task. From the perspective of robust ASR, the results showed that spectral and temporal processing can be performed independently and are not required to interact with each other. Using SGBFB features permitted the signal-to-noise ratio (SNR) to be lowered by 1.2 dB while still performing as well as the GBFB-based reference system, which corresponds to a relative improvement of the word error rate by 12.8%. Additionally, the real time factor of the spectro-temporal processing could be reduced by more than an order of magnitude. Compared to human listeners, the SNR needed to be 13 dB higher when using Mel-frequency cepstral coefficient features, 11 dB higher when using GBFB features, and 9 dB higher when using SGBFB features to achieve the same recognition performance.
Bank of Weight Filters for Deep CNNs
2016-11-22
JMLR: Workshop and Conference Proceedings 63:334-349, 2016, ACML 2016. Bank of Weight Filters for Deep CNNs. Suresh Kirthi Kumaraswamy. ...erating filters whose weights are learnt. However, this entails learning millions of weights (across different layers) and hence learning times are... reused on another task (by some finetuning). In this context, this paper presents a systematic study of the exchangeability of weight filters of CNNs.
CCD filter and transform techniques for interference excision
NASA Technical Reports Server (NTRS)
Borsuk, G. M.; Dewitt, R. N.
1976-01-01
The theoretical and some experimental results of a study aimed at applying CCD filter and transform techniques to the problem of interference excision within communications channels were presented. Adaptive noise (interference) suppression was achieved by the modification of received signals such that they were orthogonal to the recently measured noise field. CCD techniques were examined to develop real-time noise excision processing; these included recursive filters, circulating filter banks, transversal filter banks, an optical implementation of the chirp Z transform, and a CCD analog FFT.
Asymptotic Cramer-Rao bounds for Morlet wavelet filter bank transforms of FM signals
NASA Astrophysics Data System (ADS)
Scheper, Richard
2002-03-01
Wavelet filter banks are potentially useful tools for analyzing and extracting information from frequency modulated (FM) signals in noise. Chief among the advantages of such filter banks is the tendency of wavelet transforms to concentrate signal energy while simultaneously dispersing noise energy over the time-frequency plane, thus raising the effective signal to noise ratio of filtered signals. Over the past decade, much effort has gone into devising new algorithms to extract the relevant information from transformed signals while identifying and discarding the transformed noise. Therefore, estimates of the ultimate performance bounds on such algorithms would serve as valuable benchmarks in the process of choosing optimal algorithms for given signal classes. Discussed here is the specific case of FM signals analyzed by Morlet wavelet filter banks. By making use of the stationary phase approximation of the Morlet transform, and assuming that the measured signals are well resolved digitally, the asymptotic form of the Fisher Information Matrix is derived. From this, Cramer-Rao bounds are analytically derived for simple cases.
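A bare-bones Morlet wavelet filter bank of the kind analyzed above can be written as a set of complex band-pass convolutions, one per scale; the centre frequency of each channel is roughly w0/(2πs) for scale s. The scale grid, the default w0 = 6 and the kernel truncation length are illustrative choices, not parameters from the paper.

```python
import numpy as np

def morlet_kernel(scale, fs, w0=6.0, n_sigma=4.0):
    """Complex Morlet wavelet at a given scale (seconds), sampled at fs Hz;
    the channel centre frequency is approximately w0 / (2*pi*scale) Hz."""
    half = int(np.ceil(n_sigma * scale * fs))
    t = np.arange(-half, half + 1) / fs
    return (np.pi ** -0.25 / np.sqrt(scale)
            * np.exp(1j * w0 * t / scale) * np.exp(-0.5 * (t / scale) ** 2))

def morlet_filter_bank(x, scales, fs):
    """One complex band-pass output per scale; rows are filter-bank channels."""
    return np.stack([np.convolve(x, morlet_kernel(s, fs), mode='same')
                     for s in scales])
```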
A generalized transmultiplexer and its application to mobile satellite communications
NASA Technical Reports Server (NTRS)
Ichiyoshi, Osamu
1990-01-01
A generalization of digital transmultiplexer technology is presented. The proposed method can realize transmultiplexer (TMUX) and transdemultiplexer (TDUX) filter banks whose element filters have bandwidths greater than the channel spacing frequency. This feature is useful in many communications applications. As an example, a satellite switched (SS) Frequency Division Multiple Access (FDMA) system is proposed for spot beam satellite communications, particularly for mobile satellite communications.
Optoelectronic image scanning with high spatial resolution and reconstruction fidelity
NASA Astrophysics Data System (ADS)
Craubner, Siegfried I.
2002-02-01
In imaging systems the detector arrays deliver time-discrete signals at the output, where the spatial frequencies of the object scene are mapped into the electrical signal frequencies. Since the spatial frequency spectrum cannot be bandlimited by the front optics, the usual detector arrays perform a spatial undersampling and, as a consequence, aliasing occurs. A means to partially suppress the backfolded alias band is bandwidth limitation in the reconstruction low-pass, at the price of resolution loss. By utilizing a bilinear detector array in a pushbroom-type scanner, undersampling and aliasing can be overcome. For modeling the perception, the theory of discrete systems and multirate digital filter banks is applied, where aliasing cancellation and perfect reconstruction play an important role. The discrete transfer function of a bilinear array can be embedded into the scheme of a second-order filter bank. The detector arrays already build the analysis bank, and the overall filter bank is completed with the synthesis bank, for which stabilized inverse filters are proposed to compensate for the low-pass characteristics and to approximate perfect reconstruction. The synthesis filter branch can be realized in a so-called 'direct form' or the 'polyphase form', where the latter is an expenditure-optimal solution, which gives advantages when implemented in a signal processor. This paper attempts to introduce well-established concepts of the theory of multirate filter banks into the analysis of scanning imagers, which is applicable in a much broader sense than for the problems addressed here. To the author's knowledge this is also a novelty.
Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2013-01-01
A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypotheses that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming, highly elliptical orbit formation flying mission.
A Divergence Median-based Geometric Detector with A Weighted Averaging Filter
NASA Astrophysics Data System (ADS)
Hua, Xiaoqiang; Cheng, Yongqiang; Li, Yubo; Wang, Hongqiang; Qin, Yuliang
2018-01-01
To overcome the performance degradation of the classical fast Fourier transform (FFT)-based constant false alarm rate detector with limited sample data, a divergence median-based geometric detector on the Riemannian manifold of Hermitian positive definite matrices is proposed in this paper. In particular, an autocorrelation matrix is used to model the correlation of the sample data. This modeling avoids the poor Doppler resolution as well as the energy spread of the Doppler filter banks resulting from the FFT. Moreover, a weighted averaging filter, conceived from the philosophy of bilateral filtering in image denoising, is proposed and combined within the geometric detection framework. As the weighted averaging filter provides clutter suppression, the performance of the geometric detector is improved. Numerical experiments are given to validate the effectiveness of the proposed method.
Ellmauthaler, Andreas; Pagliari, Carla L; da Silva, Eduardo A B
2013-03-01
Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size leads to the minimization of the unwanted spreading of coefficient values around overlapping image singularities. This usually complicates the feature selection process and may lead to the introduction of reconstruction errors in the fused image. Moreover, we will show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, additionally improving the obtained results. The combination of these techniques leads to a fusion framework, which provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Gulkis, S.
1991-01-01
The sensitivity of a matched filter-detection system to a finite-duration continuous wave (CW) tone is compared with the sensitivities of a windowed discrete Fourier transform (DFT) system and an ideal bandpass filter-bank system. These comparisons are made in the context of the NASA Search for Extraterrestrial Intelligence (SETI) microwave observing project (MOP) sky survey. A review of the theory of polyphase-DFT filter banks and its relationship to the well-known windowed-DFT process is presented. The polyphase-DFT system approximates the ideal bandpass filter bank by using as few as eight filter taps per polyphase branch. An improvement in sensitivity of approx. 3 dB over a windowed-DFT system can be obtained by using the polyphase-DFT approach. Sidelobe rejection of the polyphase-DFT system is vastly superior to the windowed-DFT system, thereby improving its performance in the presence of radio frequency interference (RFI).
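The efficiency cited above comes from implementing the M-channel bank as M polyphase branch filters followed by an M-point inverse DFT instead of M full-rate band-pass filters. A minimal critically sampled analysis-side sketch is shown below; the prototype filter design, branch length and commutator convention are one common arrangement rather than the SETI MOP configuration.

```python
import numpy as np
from scipy import signal

def polyphase_dft_analysis(x, M, taps_per_branch=8):
    """Critically sampled M-channel polyphase-DFT analysis bank.
    Branch m holds the polyphase component h[r*M + m] of a prototype lowpass
    filter and is fed the commutated input x[n*M - m]; an M-point inverse DFT
    across the branch outputs then yields the M channel signals per block."""
    h = signal.firwin(M * taps_per_branch, 1.0 / M)   # prototype lowpass filter
    p = h.reshape(taps_per_branch, M)                 # p[r, m] = h[r*M + m]
    x = np.asarray(x, dtype=float)
    n_out = len(x) // M
    branch_out = np.zeros((n_out, M))
    for m in range(M):
        idx = np.arange(n_out) * M - m
        xm = np.where(idx >= 0, x[np.maximum(idx, 0)], 0.0)   # commutated input
        branch_out[:, m] = signal.lfilter(p[:, m], 1.0, xm)
    return M * np.fft.ifft(branch_out, axis=1)        # rows: blocks, cols: channels
```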
NASA Astrophysics Data System (ADS)
Roy, Soumen; Sengupta, Anand S.; Thakor, Nilay
2017-05-01
Astrophysical compact binary systems consisting of neutron stars and black holes are an important class of gravitational wave (GW) sources for advanced LIGO detectors. Accurate theoretical waveform models from the inspiral, merger, and ringdown phases of such systems are used to filter detector data under the template-based matched-filtering paradigm. An efficient grid over the parameter space at a fixed minimal match has a direct impact on the overall time taken by these searches. We present a new hybrid geometric-random template placement algorithm for signals described by parameters of two masses and one spin magnitude. Such template banks could potentially be used in GW searches from binary neutron stars and neutron star-black hole systems. The template placement is robust and is able to automatically accommodate curvature and boundary effects with no fine-tuning. We also compare these banks against vanilla stochastic template banks and show that while both are equally efficient in the fitting-factor sense, the bank sizes are ˜25 % larger in the stochastic method. Further, we show that the generation of the proposed hybrid banks can be sped up by nearly an order of magnitude over the stochastic bank. Generic issues related to optimal implementation are discussed in detail. These improvements are expected to directly reduce the computational cost of gravitational wave searches.
A human auditory tuning curves matched wavelet function.
Abolhassani, Mohammad D; Salimpour, Yousef
2008-01-01
This paper proposes a new quantitative approach to the problem of matching a wavelet function to human auditory tuning curves. The auditory filter shapes were derived from psychophysical measurements in normal-hearing listeners using a variant of the notched-noise method for brief signals in forward and simultaneous masking. These filters were used as templates for designing a wavelet function that maximally matches a tuning curve. The scaling function was calculated from the matched wavelet function, and from these functions low-pass and high-pass filters were derived for the implementation of a filter bank. In this way, new wavelet families were derived.
Reception of Multiple Telemetry Signals via One Dish Antenna
NASA Technical Reports Server (NTRS)
Mukai, Ryan; Vilnrotter, Victor
2010-01-01
A microwave aeronautical-telemetry receiver system includes an antenna comprising a seven-element planar array of receiving feed horns centered at the focal point of a paraboloidal dish reflector that is nominally aimed at a single aircraft or at multiple aircraft flying in formation. Through digital processing of the signals received by the seven feed horns, the system implements a method of enhanced cancellation of interference, such that it becomes possible to receive telemetry signals in the same frequency channel simultaneously from either or both of two aircraft at slightly different angular positions within the field of view of the antenna, even in the presence of multipath propagation. The present system is an advanced version of the system described in "Spatio-Temporal Equalizer for a Receiving-Antenna Feed Array" (NPO-43077), NASA Tech Briefs, Vol. 34, No. 2 (February 2010), page 32. To recapitulate: The radio-frequency telemetry signals received by the seven elements of the array are digitized, converted to complex baseband form, and sent to a spatio-temporal equalizer that consists mostly of a bank of seven adaptive finite-impulse-response (FIR) filters (one for each element in the array) plus a unit that sums the outputs of the filters. The combination of the spatial diversity of the feed-horn array and the temporal diversity of the filter bank affords better multipath suppression performance than is achievable by means of temporal equalization alone. The FIR filter bank adapts itself in real time to enable reception of telemetry at a low bit error rate, even in the presence of frequency-selective multipath propagation like that commonly found at flight-test ranges. The combination of the array and the filter bank makes it possible to constructively add multipath incoming signals to the corresponding directly arriving signals, thereby enabling reductions in telemetry bit-error rates.
Kuldeep, B; Singh, V K; Kumar, A; Singh, G K
2015-01-01
In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design, based on a hybrid of gradient-based optimization and optimization of fractional derivative constraints, is introduced. For the purpose of this work, recently proposed nature-inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF bank is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature-inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the error in the passband, the stopband, and the transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient-based optimization (the Lagrange multiplier method) and nature-inspired optimization (CS, MCS, WDO, PSO and ABC) and its use for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the effectiveness of the proposed method. Results are also compared with other existing algorithms, and it was found that the proposed method gives the best results in terms of peak reconstruction error and transition band error, while being comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower- and higher-order 2-channel QMF bank design. A comparative study of the various nature-inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Wavelet filter analysis of local atmospheric pressure effects in the long-period tidal bands
NASA Astrophysics Data System (ADS)
Hu, X.-G.; Liu, L. T.; Ducarme, B.; Hsu, H. T.; Sun, H.-P.
2006-11-01
It is well known that local atmospheric pressure variations clearly affect the observation of short-period Earth tides, such as diurnal, semi-diurnal and ter-diurnal tides, but local atmospheric pressure effects on the long-period Earth tides have not been studied in detail. This is because the local atmospheric pressure is believed not to be sufficient for an effective pressure correction in the long-period tidal bands, and there are no efficient methods to investigate local atmospheric effects in these bands. The usual tidal analysis software packages, such as ETERNA, Baytap-G and VAV, cannot provide detailed pressure admittances for the long-period tidal bands. We propose a wavelet method to investigate local atmospheric effects on gravity variations in the long-period tidal bands. This method constructs an efficient orthogonal filter bank with Daubechies wavelets of high vanishing moments. The main advantage of the wavelet filter bank is that it has an excellent low-frequency response and efficiently suppresses the instrumental drift of superconducting gravimeters (SGs) without using any mathematical model. Applying the wavelet method to the 13-year continuous gravity observations from SG T003 in Brussels, Belgium, we filtered 12 long-period tidal groups into eight narrow frequency bands. The wavelet method demonstrates that local atmospheric pressure fluctuations are highly correlated with the noise of the SG measurements in the period band of 4-40 days, with correlation coefficients higher than 0.95, and that local atmospheric pressure variations are the main error source for the determination of the tidal parameters in these bands. We show the significant improvement, in terms of precision, of the long-period tidal parameters provided by the wavelet method.
NASA Astrophysics Data System (ADS)
Liang, Ruiyu; Xi, Ji; Bao, Yongqiang
2017-07-01
To improve the performance of gain compensation based on a three-segment sound pressure level (SPL) in hearing aids, an improved multichannel loudness compensation method based on an eight-segment SPL was proposed. First, a uniform cosine modulated filter bank was designed. Then, adjacent channels with low or gradual slopes were adaptively merged to obtain the corresponding non-uniform cosine modulated filter bank according to the audiogram of the hearing-impaired person. Second, the input speech was decomposed into sub-band signals and the SPL of every sub-band signal was computed. Meanwhile, the audible SPL range from 0 dB SPL to 120 dB SPL was equally divided into eight segments. Based on these segments, a different prescription formula was designed to compute a more detailed gain for compensation according to the audiogram and the computed SPL. Finally, the enhanced signal was synthesized. Objective experiments showed that the signals decomposed by the cosine modulated filter bank have little distortion, and that the hearing aids speech perception index (HASPI) and hearing aids speech quality index (HASQI) increased by 0.083 and 0.082 on average, respectively. Subjective experiments showed that the proposed algorithm can effectively improve the speech recognition of six hearing-impaired persons.
Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; ...
2014-10-23
Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.
Lahmiri, Salim; Boukadoum, Mounir
2013-01-01
A new methodology for automatic feature extraction from biomedical images and subsequent classification is presented. The approach exploits the spatial orientation of high-frequency textural features of the processed image as determined by a two-step process. First, the two-dimensional discrete wavelet transform (DWT) is applied to obtain the HH high-frequency subband image. Then, a Gabor filter bank is applied to the latter at different frequencies and spatial orientations to obtain a new Gabor-filtered image whose entropy and uniformity are computed. Finally, the obtained statistics are fed to a support vector machine (SVM) binary classifier. The approach was validated on mammograms, retina, and brain magnetic resonance (MR) images. The obtained classification accuracies show better performance in comparison to common approaches that use only the DWT or Gabor filter banks for feature extraction. PMID:27006906
Application of wavelet-based multi-model Kalman filters to real-time flood forecasting
NASA Astrophysics Data System (ADS)
Chou, Chien-Ming; Wang, Ru-Yih
2004-04-01
This paper presents the application of a multimodel method using a wavelet-based Kalman filter (WKF) bank to simultaneously estimate decomposed state variables and unknown parameters for real-time flood forecasting. Applying the Haar wavelet transform alters the state vector and input vector of the state space. In this way, an overall detail plus approximation describes each new state vector and input vector, which allows the WKF to simultaneously estimate and decompose state variables. The wavelet-based multimodel Kalman filter (WMKF) is a multimodel Kalman filter (MKF), in which the Kalman filter has been substituted for a WKF. The WMKF then obtains M estimated state vectors. Next, the M state-estimates, each of which is weighted by its possibility that is also determined on-line, are combined to form an optimal estimate. Validations conducted for the Wu-Tu watershed, a small watershed in Taiwan, have demonstrated that the method is effective because of the decomposition of wavelet transform, the adaptation of the time-varying Kalman filter and the characteristics of the multimodel method. Validation results also reveal that the resulting method enhances the accuracy of the runoff prediction of the rainfall-runoff process in the Wu-Tu watershed.
NASA Technical Reports Server (NTRS)
Joshi, Suresh M.
2012-01-01
This paper explores a class of multiple-model-based fault detection and identification (FDI) methods for bias-type faults in actuators and sensors. These methods employ banks of Kalman-Bucy filters to detect the faults, determine the fault pattern, and estimate the fault values, wherein each Kalman-Bucy filter is tuned to a different failure pattern. Necessary and sufficient conditions are presented for identifiability of actuator faults, sensor faults, and simultaneous actuator and sensor faults. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have biases.
Smooth affine shear tight frames: digitization and applications
NASA Astrophysics Data System (ADS)
Zhuang, Xiaosheng
2015-08-01
In this paper, we mainly discuss one of the recent developed directional multiscale representation systems: smooth affine shear tight frames. A directional wavelet tight frame is generated by isotropic dilations and translations of directional wavelet generators, while an affine shear tight frame is generated by anisotropic dilations, shears, and translations of shearlet generators. These two tight frames are actually connected in the sense that the affine shear tight frame can be obtained from a directional wavelet tight frame through subsampling. Consequently, an affine shear tight frame indeed has an underlying filter bank from the MRA structure of its associated directional wavelet tight frame. We call such filter banks affine shear filter banks, which can be designed completely in the frequency domain. We discuss the digitization of affine shear filter banks and their implementations: the forward and backward digital affine shear transforms. Redundancy rate and computational complexity of digital affine shear transforms are also investigated in this paper. Numerical experiments and comparisons in image/video processing show the advantages of digital affine shear transforms over many other state-of-art directional multiscale representation systems.
Classification of calcium in intravascular OCT images for the purpose of intervention planning
NASA Astrophysics Data System (ADS)
Shalev, Ronny; Bezerra, Hiram G.; Ray, Soumya; Prabhu, David; Wilson, David L.
2016-03-01
The presence of extensive calcification is a primary concern when planning and implementing a vascular percutaneous intervention such as stenting. If the balloon does not expand, the interventionalist must blindly apply high balloon pressure, use an atherectomy device, or abort the procedure. As part of a project to determine the ability of Intravascular Optical Coherence Tomography (IVOCT) to aid intervention planning, we developed a method for automatic classification of calcium in coronary IVOCT images. We developed an approach where plaque texture is modeled by the joint probability distribution of a bank of filter responses, where the filter bank was chosen to reflect the qualitative characteristics of the calcium. This distribution is represented by the frequency histogram of filter response cluster centers. The trained algorithm was evaluated on independent ex-vivo image data accurately labeled using registered 3D microscopic cryo-image data, which was used as ground truth. In this study, regions for extraction of sub-images (SIs) were selected by experts to include calcium, fibrous, or lipid tissues. We manually optimized algorithm parameters such as the choice of filter bank, size of the dictionary, etc. Splitting samples into training and testing data, we achieved 5-fold cross-validation calcium classification with an F1 score of 93.7 ± 2.7%, with recall ≥89% and precision ≥97%, in this scenario with admittedly selective data. The automated algorithm performed in close to real time (2.6 seconds per frame), suggesting possible on-line use. This promising preliminary study indicates that computational IVOCT might automatically identify calcium in IVOCT coronary artery images.
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2004-01-01
In this paper, an approach for in-flight fault detection and isolation (FDI) of aircraft engine sensors based on a bank of Kalman filters is developed. This approach utilizes multiple Kalman filters, each of which is designed based on a specific fault hypothesis. When the propulsion system experiences a fault, only one Kalman filter with the correct hypothesis is able to maintain the nominal estimation performance. Based on this knowledge, the isolation of faults is achieved. Since the propulsion system may experience component and actuator faults as well, a sensor FDI system must be robust in terms of avoiding misclassifications of any anomalies. The proposed approach utilizes a bank of (m+1) Kalman filters where m is the number of sensors being monitored. One Kalman filter is used for the detection of component and actuator faults while each of the other m filters detects a fault in a specific sensor. With this setup, the overall robustness of the sensor FDI system to anomalies is enhanced. Moreover, numerous component fault events can be accounted for by the FDI system. The sensor FDI system is applied to a commercial aircraft engine simulation, and its performance is evaluated at multiple power settings at a cruise operating point using various fault scenarios.
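The bank-of-filters logic can be sketched as follows: each filter carries a hypothesized sensor bias in its measurement model, and the hypothesis whose filter keeps the smallest accumulated normalized innovation is declared. The linear Kalman filter structure, the bias-only fault model and the scoring rule below are simplifications for illustration; the engine application designs each filter around the engine model and reserves one filter for component and actuator faults.

```python
import numpy as np

class BiasHypothesisKF:
    """Minimal linear Kalman filter whose measurement model includes a
    hypothesized constant bias on one sensor (a sketch, not the engine design)."""
    def __init__(self, F, H, Q, R, x0, P0, bias_sensor=None, bias=0.0):
        self.F, self.H, self.Q, self.R = F, H, Q, R
        self.x, self.P = x0.copy(), P0.copy()
        self.b = np.zeros(H.shape[0])
        if bias_sensor is not None:
            self.b[bias_sensor] = bias

    def step(self, z):
        """One predict/update cycle; returns the normalized innovation."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        nu = z - (self.H @ self.x + self.b)            # innovation under hypothesis
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ nu
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return float(nu @ np.linalg.solve(S, nu))

def isolate_fault(bank, measurements):
    """The hypothesis with the smallest accumulated score is the one whose
    filter has maintained nominal estimation performance."""
    scores = np.zeros(len(bank))
    for z in measurements:
        scores += np.array([kf.step(z) for kf in bank])
    return int(np.argmin(scores)), scores
```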
Parallel Processing of Broad-Band PPM Signals
NASA Technical Reports Server (NTRS)
Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement
2010-01-01
A parallel-processing algorithm and a hardware architecture to implement the algorithm have been devised for timeslot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in the cases of some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively-low speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters, the most critical one, with regard to processing very broadband signals with CMOS hardware, being the number of parallel streams (equivalently, the rate-reduction parameter).
2013-01-01
[Report fragments: figure-list entries for ³¹P MAS NMR spectra of 10 wt% VX adsorbed on wet carbon (13 wt% water), initial and at t = 6, 13, 16, and 24 days; and a description of the feed-air filtration system in which each Class A Type II filter of a given filter bank contained approximately 48.2 lb of granular, activated, coconut shell-based carbon.]
NASA Astrophysics Data System (ADS)
Lhamon, Michael Earl
A pattern recognition system which uses complex correlation filter banks requires proportionally more computational effort than single real-valued filters. This introduces an increased computation burden but also introduces a higher level of parallelism, which common computing platforms fail to exploit. As a result, we consider algorithm mapping to both optical and digital processors. For digital implementation, we develop computationally efficient pattern recognition algorithms, referred to as vector inner product operators, that require less computational effort than traditional fast Fourier methods. These algorithms do not need correlation and they map readily onto parallel digital architectures, which imply new architectures for optical processors. These filters exploit circulant-symmetric matrix structures of the training set data representing a variety of distortions. By using the same mathematical basis as with the vector inner product operations, we are able to extend the capabilities of more traditional correlation filtering to what we refer to as "Super Images". These "Super Images" are used to morphologically transform a complicated input scene into a predetermined dot pattern. The orientation of the dot pattern is related to the rotational distortion of the object of interest. The optical implementation of "Super Images" yields the feature reduction necessary for using other techniques, such as artificial neural networks. We propose a parallel digital signal processor architecture based on specific pattern recognition algorithms but general enough to be applicable to other similar problems. Such an architecture is classified as a data flow architecture. Instead of mapping an algorithm to an architecture, we propose mapping the DSP architecture to a class of pattern recognition algorithms. Today's optical processing systems have difficulties implementing full complex filter structures. Typically, optical systems (like the 4f correlators) are limited to phase-only implementation with lower detection performance than full complex electronic systems. Our study includes pseudo-random pixel encoding techniques for approximating full complex filtering. Optical filter bank implementation is possible, and it has the advantage of time-averaging the entire filter bank at real-time rates. Time-averaged optical filtering is computationally comparable to billions of digital operations per second. For this reason, we believe future trends in high-speed pattern recognition will involve hybrid architectures of both optical and DSP elements.
Multidimensional signaling via wavelet packets
NASA Astrophysics Data System (ADS)
Lindsey, Alan R.
1995-04-01
This work presents a generalized signaling strategy for orthogonally multiplexed communication. Wavelet packet modulation (WPM) employs the basis functions from an arbitrary pruning of a full dyadic tree structured filter bank as orthogonal pulse shapes for conventional QAM symbols. The multi-scale modulation (MSM) and M-band wavelet modulation (MWM) schemes which have been recently introduced are handled as special cases, with the added benefit of an entire library of potentially superior sets of basis functions. The figures of merit are derived and it is shown that the power spectral density is equivalent to that for QAM (in fact, QAM is another special case) and hence directly applicable in existing systems employing this standard modulation. Two key advantages of this method are increased flexibility in time-frequency partitioning and an efficient all-digital filter bank implementation, making the WPM scheme more robust to a larger set of interferences (both temporal and sinusoidal) and computationally attractive as well.
Predicting perceptual quality of images in realistic scenario using deep filter banks
NASA Astrophysics Data System (ADS)
Zhang, Weixia; Yan, Jia; Hu, Shiyong; Ma, Yang; Deng, Dexiang
2018-03-01
Classical image perceptual quality assessment models usually resort to natural scene statistic methods, which are based on an assumption that certain reliable statistical regularities hold on undistorted images and will be corrupted by introduced distortions. However, these models usually fail to accurately predict degradation severity of images in realistic scenarios since complex, multiple, and interactive authentic distortions usually appear on them. We propose a quality prediction model based on convolutional neural network. Quality-aware features extracted from filter banks of multiple convolutional layers are aggregated into the image representation. Furthermore, an easy-to-implement and effective feature selection strategy is used to further refine the image representation and finally a linear support vector regression model is trained to map image representation into images' subjective perceptual quality scores. The experimental results on benchmark databases present the effectiveness and generalizability of the proposed model.
NASA Astrophysics Data System (ADS)
Schleibinger, Hans; Rüden, Henning
The emission of volatile organic compounds (VOC) from air filters of HVAC systems was to be evaluated. In a first study carbonyl compounds (14 aldehydes and two ketones) were measured by reacting them with 2,4-dinitrophenylhydrazine (DNPH). Analysis was done by HPLC and UV detection. In laboratory experiments pieces of used and unused HVAC filters were incubated in test chambers. Filters to be investigated were taken from a filter bank of a large HVAC system in the centre of Berlin. First results show that - among those compounds - formaldehyde and acetone were found in higher concentrations in the test chambers filled with used filters in comparison to those with unused filters. Parallel field measurements were carried out at the prefilter and main filter banks of the two HVAC systems. Here measurements were carried out simultaneously before and after the filters to investigate whether those aldehydes or ketones arise from the filter material on site. Formaldehyde and acetone significantly increased in concentration after the filters of one HVAC system. In parallel experiments microorganisms were proved to be able to survive on air filters. Therefore, a possible source of formaldehyde and acetone might be microbes.
40 CFR 141.711 - Filtered system additional Cryptosporidium treatment requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... either one or a combination of the following: bag filters, bank filtration, cartridge filters, chlorine dioxide, membranes, ozone, or UV, as described in §§ 141.716 through 141.720. (c) Failure by a system in...
40 CFR 141.711 - Filtered system additional Cryptosporidium treatment requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... either one or a combination of the following: bag filters, bank filtration, cartridge filters, chlorine dioxide, membranes, ozone, or UV, as described in §§ 141.716 through 141.720. (c) Failure by a system in...
Multiscale vector fields for image pattern recognition
NASA Technical Reports Server (NTRS)
Low, Kah-Chan; Coggins, James M.
1990-01-01
A uniform processing framework for low-level vision computing in which a bank of spatial filters maps the image intensity structure at each pixel into an abstract feature space is proposed. Some properties of the filters and the feature space are described. Local orientation is measured by a vector sum in the feature space as follows: each filter's preferred orientation along with the strength of the filter's output determine the orientation and the length of a vector in the feature space; the vectors for all filters are summed to yield a resultant vector for a particular pixel and scale. The orientation of the resultant vector indicates the local orientation, and the magnitude of the vector indicates the strength of the local orientation preference. Limitations of the vector sum method are discussed. Investigations show that the processing framework provides a useful, redundant representation of image structure across orientation and scale.
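The vector-sum orientation measure described above reduces to a few lines: each oriented filter contributes a vector whose angle is its preferred orientation and whose length is its response strength, and the resultant gives the local orientation and the strength of the orientation preference. One of the limitations the authors allude to is the 0°/180° wraparound of orientations; doubling the angles before summing is a common remedy in related work, but the sketch below follows the plain formulation described in the abstract.

```python
import numpy as np

def local_orientation(responses, orientations_deg):
    """Vector-sum estimate of local orientation from a bank of oriented filters.
    responses[i]: non-negative output strength of filter i at a pixel and scale;
    orientations_deg[i]: that filter's preferred orientation in degrees."""
    theta = np.deg2rad(np.asarray(orientations_deg, dtype=float))
    resultant = np.sum(np.asarray(responses, dtype=float) * np.exp(1j * theta))
    orientation = np.rad2deg(np.angle(resultant)) % 180.0   # dominant orientation
    strength = np.abs(resultant)                            # preference strength
    return orientation, strength
```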
Qian, S.; Dunham, M.E.
1996-11-12
A system and method are disclosed for constructing a bank of filters which detect the presence of signals whose frequency content varies with time. The present invention includes a novel system and method for developing one or more time templates designed to match the received signals of interest, and the bank of matched filters uses the one or more time templates to detect the received signals. Each matched filter compares the received signal x(t) with a respective, unique time template that has been designed to approximate a form of the signals of interest. The robust time domain template is assumed to be of the form w(t) = A(t)cos(2πφ(t)), and the present invention uses the trajectory of a joint time-frequency representation of x(t) as an approximation of the instantaneous frequency function φ′(t). First, numerous data samples of the received signal x(t) are collected. A joint time-frequency representation is then applied to represent the signal, preferably using the time-frequency distribution series. The joint time-frequency transformation represents the analyzed signal energy at time t and frequency f, P(t,f), which is a three-dimensional plot of time vs. frequency vs. signal energy. Then P(t,f) is reduced to a multivalued function f(t), a two-dimensional plot of time vs. frequency, using a thresholding process. Curve fitting steps are then performed on the time/frequency plot, preferably using Levenberg-Marquardt curve fitting techniques, to derive a general instantaneous frequency function φ′(t) which best fits the multivalued function f(t). Integrating φ′(t) along t yields φ(t), which is then inserted into the form of the time template equation. A suitable amplitude A(t) is also preferably determined. Once the time template has been determined, one or more filters are developed which each use a version or form of the time template. 7 figs.
Split-spectrum processing technique for SNR enhancement of ultrasonic guided wave.
Pedram, Seyed Kamran; Fateri, Sina; Gan, Lu; Haig, Alex; Thornicroft, Keith
2018-02-01
Ultrasonic guided wave (UGW) systems are broadly used in several branches of industry where structural integrity is of concern. In those systems, signal interpretation can often be challenging due to the multi-modal and dispersive propagation of UGWs. This results in degradation of the signals in terms of signal-to-noise ratio (SNR) and spatial resolution. This paper employs the split-spectrum processing (SSP) technique in order to enhance the SNR and spatial resolution of UGW signals, using optimized filter bank parameters in a real-time pipe inspection scenario. The SSP technique has already been developed for other applications, such as conventional ultrasonic testing, for SNR enhancement. In this work, an investigation is provided to clarify the sensitivity of SSP performance to the filter bank parameter values for UGWs, such as processing bandwidth, filter bandwidth, filter separation and the number of filters. As a result, the optimum values are estimated to significantly improve the SNR and spatial resolution of UGWs. The proposed method is synthetically and experimentally compared with conventional approaches employing different SSP recombination algorithms. The Polarity Thresholding (PT) and PT with Minimization (PTM) algorithms were found to be the best recombination algorithms; they improved the SNR by up to 36.9 dB and 38.9 dB, respectively. The outcome of the work presented in this paper paves the way to enhance the reliability of UGW inspections. Copyright © 2017 Elsevier B.V. All rights reserved.
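A compact version of split-spectrum processing with the PT and PTM recombination rules is sketched below: the signal is split by a bank of overlapping band-pass filters, and an output sample is kept only where all sub-bands agree in polarity (PT), optionally replaced by the minimum sub-band magnitude (PTM). Filter type, count, bandwidth and spacing here are generic placeholders, not the optimized values reported in the paper.

```python
import numpy as np
from scipy import signal

def split_spectrum(x, fs, f_lo, f_hi, n_filters=10):
    """Split x into overlapping Butterworth sub-bands covering [f_lo, f_hi] Hz;
    assumes 0 < f_lo and f_hi < fs/2."""
    half_bw = (f_hi - f_lo) / n_filters
    centres = np.linspace(f_lo + half_bw, f_hi - half_bw, n_filters)
    bands = []
    for fc in centres:
        b, a = signal.butter(4, [fc - half_bw, fc + half_bw],
                             btype='bandpass', fs=fs)
        bands.append(signal.filtfilt(b, a, x))
    return np.vstack(bands)

def recombine_pt_ptm(x, subbands):
    """Polarity thresholding (PT) and PT with minimization (PTM)."""
    agree = np.all(subbands > 0, axis=0) | np.all(subbands < 0, axis=0)
    pt = np.where(agree, x, 0.0)
    ptm = np.where(agree,
                   np.min(np.abs(subbands), axis=0) * np.sign(subbands[0]), 0.0)
    return pt, ptm
```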
Design and Implementation of an Underlay Control Channel for Cognitive Radios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daryl Wasden; Hussein Moradi; Behrouz Farhang-Boroujeny
Implementation of any cognitive radio network requires an effective control channel that can operate under various modes of activity from the primary users. This paper reports the design and implementation of a filter bank multicarrier spread spectrum (FBMC-SS) system for use as the control channel in cognitive radio networks. The proposed design is based on a filtered multitone (FMT) implementation. Carrier and timing acquisition and tracking methods as well as a blind channel estimation method are developed for the proposed control channel. We also report an implementation of the proposed FBMC-SS system on a hardware platform, a FlexRIO FPGA module from National Instruments.
High-resolution land cover classification using low resolution global data
NASA Astrophysics Data System (ADS)
Carlotto, Mark J.
2013-05-01
A fusion approach is described that combines texture features from high-resolution panchromatic imagery with land cover statistics derived from co-registered low-resolution global databases to obtain high-resolution land cover maps. The method does not require training data or any human intervention. We use an MxN Gabor filter bank consisting of M=16 oriented bandpass filters (0-180°) at N resolutions (3-24 meters/pixel). The size range of these spatial filters is consistent with the typical scale of manmade objects and patterns of cultural activity in imagery. Clustering reduces the complexity of the data by combining pixels that have similar texture into clusters (regions). Texture classification assigns a vector of class likelihoods to each cluster based on its textural properties. Classification is unsupervised and accomplished using a bank of texture anomaly detectors. Class likelihoods are modulated by land cover statistics derived from lower resolution global data over the scene. Preliminary results from a number of Quickbird scenes show our approach is able to classify general land cover features such as roads, built up area, forests, open areas, and bodies of water over a wide range of scenes.
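An M×N Gabor bank of the kind used above can be generated directly; the sketch below builds real (cosine-phase) kernels at 16 orientations over 0-180° and a handful of spatial scales. The exact frequency-to-scale mapping, envelope widths and kernel sizes are illustrative rather than the values used in the study.

```python
import numpy as np

def gabor_kernel(freq, theta, sigma, size):
    """Real (cosine-phase) Gabor kernel: Gaussian envelope times an oriented
    sinusoidal grating at `freq` cycles/pixel and orientation `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * freq * x_theta)

def gabor_bank(n_orientations=16, scales_px=(3, 6, 12, 24)):
    """M x N bank: n_orientations over 0-180 degrees at each spatial scale."""
    kernels = []
    for s in scales_px:
        size = int(4 * s) | 1                 # odd kernel size, roughly 4x the scale
        for k in range(n_orientations):
            theta = np.pi * k / n_orientations
            kernels.append(gabor_kernel(1.0 / s, theta, 0.5 * s, size))
    return kernels
```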
Filter bank common spatial patterns in mental workload estimation.
Arvaneh, Mahnaz; Umilta, Alberto; Robertson, Ian H
2015-01-01
EEG-based workload estimation technology provides a real-time means of assessing mental workload. Such technology can effectively enhance the performance of human-machine interaction and the learning process. When designing workload estimation algorithms, a crucial signal processing component is the feature extraction step. Despite several studies in this field, the spatial properties of the EEG signals have mostly been neglected. Since EEG inherently has a poor spatial resolution, features extracted individually from each EEG channel may not be sufficiently efficient. This problem becomes more pronounced when we use low-cost but convenient EEG sensors with limited stability, which is the case in practical scenarios. To address this issue, in this paper, we introduce a filter bank common spatial patterns algorithm combined with a feature selection method to extract spatio-spectral features discriminating different mental workload levels. To evaluate the proposed algorithm, we carry out a comparative analysis between two representative types of working memory tasks using data recorded from an Emotiv EPOC headset, which is a mobile low-cost EEG recording device. The experimental results showed that the proposed spatial filtering algorithm outperformed the state-of-the-art algorithms in terms of classification accuracy.
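Filter bank common spatial patterns can be sketched as: band-pass the trials with a small filter bank, learn CSP spatial filters per band from two-class training data via a generalized eigenproblem, and use log-variances of the projected signals as spatio-spectral features, to be pruned afterwards by the feature selection step. The band edges, filter order and number of CSP pairs below are illustrative, not the settings used in the study.

```python
import numpy as np
from scipy import signal, linalg

def csp_filters(trials_a, trials_b, n_pairs=2):
    """CSP spatial filters for one frequency band.
    trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([X @ X.T / np.trace(X @ X.T) for X in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    eigvals, W = linalg.eigh(Ca, Ca + Cb)        # generalized eigenproblem
    pick = np.r_[np.arange(n_pairs),             # eigenvalues come back ascending
                 np.arange(len(eigvals) - n_pairs, len(eigvals))]
    return W[:, pick].T                          # one spatial filter per row

def fbcsp_features(trial, band_filters, csp_per_band):
    """Log-variance features: one group per (band, CSP filter) combination."""
    feats = []
    for (b, a), W in zip(band_filters, csp_per_band):
        Z = W @ signal.filtfilt(b, a, trial, axis=-1)
        v = np.var(Z, axis=-1)
        feats.extend(np.log(v / v.sum()))
    return np.asarray(feats)

# band_filters could be, e.g.,
# [signal.butter(4, (lo, lo + 4), 'bandpass', fs=128) for lo in range(4, 32, 4)]
```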
Online Wavelet Complementary velocity Estimator.
Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin
2018-02-01
In this paper, we propose a new online Wavelet Complementary velocity Estimator (WCE) operating on position and acceleration data gathered from an electro-hydraulic servo shaking table. This is a batch-type estimator based on wavelet filter banks, which extract the high- and low-resolution components of the data. The proposed complementary estimator combines the two velocity estimates obtained from numerical differentiation of the position sensor and integration of the acceleration sensor, using a fixed moving-horizon window as input to the wavelet filter. Because wavelet filters are used, the estimator can be implemented in a parallel procedure. With this method the velocity is estimated numerically without the high noise of differentiators or the drifting bias of integration, and with less delay, which is suitable for active vibration control in high-precision mechatronic systems using Direct Velocity Feedback (DVF) methods. This method allows us to build velocity sensors with fewer mechanically moving parts, which makes them suitable for fast miniature structures. We have compared this method with Kalman and Butterworth filters with respect to stability and delay, and benchmarked them by integrating the estimated velocity over a long time to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2005-01-01
In-flight sensor fault detection and isolation (FDI) is critical to maintaining reliable engine operation during flight. The aircraft engine control system, which computes control commands on the basis of sensor measurements, operates the propulsion systems at the demanded conditions. Any undetected sensor faults, therefore, may cause the control system to drive the engine into an undesirable operating condition. It is critical to detect and isolate failed sensors as soon as possible so that such scenarios can be avoided. A challenging issue in developing reliable sensor FDI systems is to make them robust to changes in engine operating characteristics due to degradation with usage and other faults that can occur during flight. A sensor FDI system that cannot appropriately account for such scenarios may result in false alarms, missed detections, or misclassifications when such faults do occur. To address this issue, an enhanced bank of Kalman filters was developed, and its performance and robustness were demonstrated in a simulation environment. The bank of filters is composed of m + 1 Kalman filters, where m is the number of sensors being used by the control system and, thus, in need of monitoring. Each Kalman filter is designed on the basis of a unique fault hypothesis so that it will be able to maintain its performance if a particular fault scenario, hypothesized by that particular filter, takes place.
17. VIEW OF AIR LOCK ENTRY DOOR. BANKS OF AIR ...
17. VIEW OF AIR LOCK ENTRY DOOR. BANKS OF AIR FILTERS ARE VISIBLE TO THE SIDES OF THE DOORS. THE BUILDING WAS DIVIDED INTO ZONES BY AIRLOCK DOORS AND AIR FILTERS. AIR PRESSURE DIFFERENTIALS WERE MAINTAINED IN THE ZONES, SUCH THAT AIRFLOW WAS PROGRESSIVELY TOWARD AREAS WITH THE HIGHEST POTENTIAL FOR CONTAMINATION. (9/24/91) - Rocky Flats Plant, Plutonium Manufacturing Facility, North-central section of Plant, just south of Building 776/777, Golden, Jefferson County, CO
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Wenbo, Mei; Huiqian, Du; Zexian, Wang
2018-04-01
A new algorithm for medical image fusion is proposed in this paper, combining a gradient minimization smoothing filter (GMSF) with a non-subsampled directional filter bank (NSDFB). To preserve more detail information, a multi-scale edge-preserving decomposition framework (MEDF) is used to decompose an image into a base image and a series of detail images. For the fusion of the base images, a local Gaussian membership function is applied to construct the fusion weighting factor. For the fusion of the detail images, the NSDFB is applied to decompose each detail image into multiple directional sub-images, which are then fused by a pulse coupled neural network (PCNN). The experimental results demonstrate that the proposed algorithm is superior to the compared algorithms in both visual effect and objective assessment.
The Environmental Technology Verification report discusses the technology and performance of the AeroStar FP-98 Minipleat V-Bank Filter air filter for dust and bioaerosol filtration manufactured by Filtration Group. The pressure drop across the filter was 137 Pa clean and 348 Pa ...
Segmentation-based L-filtering of speckle noise in ultrasonic images
NASA Astrophysics Data System (ADS)
Kofidis, Eleftherios; Theodoridis, Sergios; Kotropoulos, Constantine L.; Pitas, Ioannis
1994-05-01
We introduce segmentation-based L-filters, that is, filtering processes combining segmentation and (nonadaptive) optimum L-filtering, and use them for the suppression of speckle noise in ultrasonic (US) images. With the aid of a suitable modification of the learning vector quantizer self-organizing neural network, the image is segmented into regions of approximately homogeneous first-order statistics. For each such region a minimum mean-squared error L-filter is designed on the basis of a multiplicative noise model, using the histogram of grey values as an estimate of the parent distribution of the noisy observations and a suitable estimate of the original signal in the corresponding region. Thus, we obtain a bank of L-filters that correspond to and operate on different image regions. Simulation results on a simulated US B-mode image of a tissue-mimicking phantom are presented, which verify the superiority of the proposed method over a number of conventional filtering strategies in terms of a suitably defined signal-to-noise ratio measure and detection-theoretic performance measures.
NASA Astrophysics Data System (ADS)
Cartwright, I.; Gilfedder, B.; Hofmann, H.
2013-05-01
This study compares geochemical and physical methods of estimating baseflow in the upper reaches of the Barwon River, southeast Australia. Estimates of baseflow from physical techniques such as local minima and recursive digital filters are higher than those based on chemical mass balance using continuous electrical conductivity (EC). Between 2001 and 2011 the baseflow flux calculated using chemical mass balance is between 1.8 × 10³ and 1.5 × 10⁴ ML yr⁻¹ (15 to 25% of the total discharge in any one year) whereas recursive digital filters yield baseflow fluxes of 3.6 × 10³ to 3.8 × 10⁴ ML yr⁻¹ (19 to 52% of discharge) and the local minimum method yields baseflow fluxes of 3.2 × 10³ to 2.5 × 10⁴ ML yr⁻¹ (13 to 44% of discharge). These differences most probably reflect how the different techniques characterise baseflow. Physical methods probably aggregate much of the water from delayed sources as baseflow. However, as many delayed transient water stores (such as bank return flow or floodplain storage) are likely to be geochemically similar to surface runoff, chemical mass balance calculations aggregate them with the surface runoff component. The mismatch between geochemical and physical estimates is greatest following periods of high discharge in winter, implying that these transient stores of water feed the river for several weeks to months. Consistent with these interpretations, modelling of bank storage indicates that bank return flows provide water to the river for several weeks after flood events. EC vs. discharge variations during individual flow events also imply that an inflow of low EC water stored within the banks or on the floodplain occurs as discharge falls. The joint use of physical and geochemical techniques allows a better understanding of the different components of water that contribute to river flow, which is important for the management and protection of water resources.
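For reference, a one-parameter recursive digital filter of the kind used for the physical baseflow estimates can be sketched as follows; the filter parameter value is a commonly quoted default, assumed here for illustration and not taken from this study.

```python
# Hedged sketch of a Lyne-Hollick-type recursive digital filter for baseflow
# separation: high-frequency "quickflow" is filtered out of the total
# streamflow and the remainder is taken as baseflow.
import numpy as np

def baseflow_recursive_filter(q, alpha=0.925):
    """q: total streamflow series; returns estimated baseflow series."""
    quick = np.zeros_like(q, dtype=float)                 # filtered quickflow
    for t in range(1, len(q)):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = max(quick[t], 0.0)                     # quickflow cannot be negative
    return np.clip(q - quick, 0.0, q)                     # baseflow bounded by total flow
```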
An Efficient Adaptive Window Size Selection Method for Improving Spectrogram Visualization.
Nisar, Shibli; Khan, Omar Usman; Tariq, Muhammad
2016-01-01
Short Time Fourier Transform (STFT) is an important technique for the time-frequency analysis of a time-varying signal. The basic approach involves applying a Fast Fourier Transform (FFT) to the signal multiplied with an appropriate window function of fixed resolution. Selecting an appropriate window size is difficult when no background information about the input signal is known. In this paper, a novel empirical model is proposed that adaptively adjusts the window size for a narrow-band signal using a spectrum sensing technique. For wide-band signals, where a fixed time-frequency resolution is undesirable, the approach adopts the constant-Q transform (CQT). Unlike the STFT, the CQT provides a varying time-frequency resolution, resulting in high spectral resolution at low frequencies and high temporal resolution at high frequencies. A simple but effective framework for switching between the STFT and the CQT is provided. The proposed method also allows the dynamic construction of a filter bank according to user-defined parameters, which helps reduce redundant entries in the filter bank. Results obtained with the proposed method not only improve spectrogram visualization but also reduce the computation cost, and achieve an appropriate window-length selection rate of 87.71%.
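A rough sketch of the switching idea, choosing an STFT window from a crude occupied-bandwidth estimate and deferring to a constant-Q analysis for wide-band input, is shown below; the thresholds and the bandwidth estimator are assumptions, not the paper's empirical model.

```python
# Hedged sketch: estimate occupied bandwidth, tune the STFT window for
# narrow-band signals, fall back to a constant-Q analysis for wide-band ones.
import numpy as np
from scipy.signal import stft

def adaptive_spectrogram(x, fs, narrowband_fraction=0.05):
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    occupied = freqs[spectrum > 0.1 * spectrum.max()]     # crude occupied band
    bandwidth = occupied.max() - occupied.min() if occupied.size else fs / 2

    if bandwidth < narrowband_fraction * fs / 2:          # narrow-band: match window to bandwidth
        nperseg = int(min(len(x), max(64, 4 * fs / max(bandwidth, 1e-6))))
        return stft(x, fs=fs, nperseg=nperseg)
    # wide-band: a constant-Q transform (e.g. librosa.cqt) would be used here instead;
    # a fixed-resolution STFT is kept only as a self-contained placeholder.
    return stft(x, fs=fs, nperseg=256)
```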
Time-Scale Modification of Complex Acoustic Signals in Noise
1994-02-04
Excerpt (list of figures and introduction): short-time processing of long waveforms; time-scale expansion (×2) of a sequence of transients and of a closing stapler using filter-bank/overlap-add; composite subband time-scale modification. Introduction: short-duration complex sounds, as from the closing of a stapler or the tapping of a drum stick, often consist of a series of brief ...
Fast Poisson noise removal by biorthogonal Haar domain hypothesis testing
NASA Astrophysics Data System (ADS)
Zhang, B.; Fadili, M. J.; Starck, J.-L.; Digel, S. W.
2008-07-01
Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong "staircase" artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of the Bi-Haar coefficients (denoted p) provide a good approximation to those of Haar (denoted p_H) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that p are essentially upper-bounded by p_H. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to obtain a smooth estimate while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.
OFDM-Based Signal Exploitation Using Quadrature Mirror Filter Bank (QMFB) Processing
2012-03-01
Region based feature extraction from non-cooperative iris images using triplet half-band filter bank
NASA Astrophysics Data System (ADS)
Barpanda, Soubhagya Sankar; Majhi, Banshidhar; Sa, Pankaj Kumar
2015-09-01
In this paper, we propose energy-based features using a multi-resolution analysis (MRA) on the iris template. The MRA is based on our suggested triplet half-band filter bank (THFB), whose derivation is discussed in detail. The iris template is divided into six equispaced sub-templates, and a two-level decomposition is applied to each sub-template using the THFB, except the second one. The second sub-template is discarded because it mostly contains noise due to eyelids, eyelashes, and occlusion caused by segmentation failure. Subsequently, energy features are derived from the decomposed coefficients of each sub-template. The proposed feature has been evaluated on standard databases such as CASIAv3, UBIRISv1, and IITD, and mostly on iris images that encounter a segmentation failure. A comparative analysis has been carried out with existing features based on the Gabor transform, the Fourier transform, and the CDF 9/7 filter bank. The proposed scheme shows superior performance with respect to FAR, GAR, and AUC.
Information Encoding on a Pseudo Random Noise Radar Waveform
2013-03-01
Excerpt (list of figures): quadrature mirror filter bank (QMFB) tree diagram [18]; QMFB layer-3 contour plot for a 7-bit Barker code binary phase-shift-keyed test signal; block diagram of the FFT accumulation method (FAM), a time-smoothing method for estimating the spectral correlation; correlator output for a WGN pulse in an AWGN channel (effectiveness of correlation for SNR = -10 dB).
Finger-vein and fingerprint recognition based on a feature-level fusion method
NASA Astrophysics Data System (ADS)
Yang, Jinfeng; Hong, Bofeng
2013-07-01
Multimodal biometrics based on finger identification is a hot topic in recent years. In this paper, a novel fingerprint-vein based biometric method is proposed to improve the reliability and accuracy of the finger recognition system. First, second-order steerable filters are used to enhance and extract the minutiae features of the fingerprint (FP) and finger-vein (FV). Second, the texture features of the fingerprint and finger-vein are extracted by a bank of Gabor filters. Third, a new triangle-region fusion method is proposed to integrate all the fingerprint and finger-vein features at the feature level. Thus, the fused features contain both the finger texture information and the minutiae triangular geometry structure. Finally, experimental results on the self-constructed finger-vein and fingerprint databases show that the proposed method is reliable and precise in personal identification.
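The Gabor-filter-bank texture step mentioned above can be sketched as follows; the kernel parameters, number of orientations, and summary statistics are illustrative assumptions.

```python
# Hedged sketch of texture feature extraction with a small Gabor filter bank:
# build kernels at a few orientations and frequencies, filter the image, and
# keep the mean and variance of each response magnitude as features.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=4.0, size=21):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)            # coordinate along the grating
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * xr)

def gabor_bank_features(img, freqs=(0.1, 0.2), n_orient=4):
    feats = []
    for f in freqs:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            resp = np.abs(fftconvolve(img, gabor_kernel(f, theta), mode="same"))
            feats += [resp.mean(), resp.var()]
    return np.asarray(feats)
```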
Cascaded K-means convolutional feature learner and its application to face recognition
NASA Astrophysics Data System (ADS)
Zhou, Daoxiang; Yang, Dan; Zhang, Xiaohong; Huang, Sheng; Feng, Shu
2017-09-01
Currently, considerable efforts have been devoted to devising image representations. However, handcrafted methods need strong domain knowledge and show low generalization ability, and conventional feature learning methods require enormous training data and rich parameter-tuning experience. A lightweight feature learner is presented to solve these problems, with application to face recognition, which shares a similar topology with a convolutional neural network. Our model is divided into three components: a cascaded convolution filter bank learning layer, a nonlinear processing layer, and a feature pooling layer. Specifically, in the filter learning layer, we use K-means to learn convolution filters. Features are extracted by convolving images with the learned filters. Afterward, in the nonlinear processing layer, the hyperbolic tangent is employed to capture nonlinear features. In the feature pooling layer, to remove redundant information and incorporate the spatial layout, we exploit a multilevel spatial pyramid second-order pooling technique to pool the features in subregions and concatenate them together as the final representation. Extensive experiments on four representative datasets demonstrate the effectiveness and robustness of our model to various variations, yielding competitive recognition results on extended Yale B and FERET. In addition, our method achieves the best identification performance on the AR and Labeled Faces in the Wild datasets among the comparative methods.
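A minimal sketch of the K-means filter-learning layer and the tanh nonlinearity follows; patch size, number of filters, and patch count are illustrative assumptions, and the pooling layer is omitted.

```python
# Hedged sketch: learn convolution filters by clustering normalized random
# image patches with K-means, then extract features by convolving and applying
# the hyperbolic tangent nonlinearity.
import numpy as np
from scipy.signal import convolve2d

def learn_kmeans_filters(images, patch=7, n_filters=32, n_patches=5000, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    patches = []
    for _ in range(n_patches):
        img = images[rng.integers(len(images))]
        r = rng.integers(img.shape[0] - patch)
        c = rng.integers(img.shape[1] - patch)
        patches.append(img[r:r + patch, c:c + patch].ravel())
    X = np.asarray(patches, dtype=float)
    X -= X.mean(axis=1, keepdims=True)                    # per-patch normalization
    X /= (X.std(axis=1, keepdims=True) + 1e-8)

    centroids = X[rng.choice(len(X), n_filters, replace=False)]
    for _ in range(n_iter):                               # plain Lloyd iterations
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for k in range(n_filters):
            if np.any(labels == k):
                centroids[k] = X[labels == k].mean(axis=0)
    return centroids.reshape(n_filters, patch, patch)

def extract_features(img, filters):
    # convolve with each learned filter and apply the nonlinear processing layer
    return [np.tanh(convolve2d(img, f, mode="valid")) for f in filters]
```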
Qian, Shie; Dunham, Mark E.
1996-01-01
A system and method for constructing a bank of filters which detect the presence of signals whose frequency content varies with time. The present invention includes a novel system and method for developing one or more time templates designed to match the received signals of interest, and the bank of matched filters uses the one or more time templates to detect the received signals. Each matched filter compares the received signal x(t) with a respective, unique time template that has been designed to approximate a form of the signals of interest. The robust time-domain template is assumed to be of the form w(t) = A(t)cos{2πφ(t)}, and the present invention uses the trajectory of a joint time-frequency representation of x(t) as an approximation of the instantaneous frequency function φ'(t). First, numerous data samples of the received signal x(t) are collected. A joint time-frequency representation is then applied to represent the signal, preferably using the time-frequency distribution series (also known as the Gabor spectrogram). The joint time-frequency transformation represents the analyzed signal energy at time t and frequency f, P(t,f), which is a three-dimensional plot of time vs. frequency vs. signal energy. Then P(t,f) is reduced to a multivalued function f(t), a two-dimensional plot of time vs. frequency, using a thresholding process. Curve-fitting steps are then performed on the time/frequency plot, preferably using Levenberg-Marquardt curve-fitting techniques, to derive a general instantaneous frequency function φ'(t) which best fits the multivalued function f(t), a trajectory of the joint time-frequency domain representation of x(t). Integrating φ'(t) along t yields φ(t), which is then inserted into the form of the time template equation. A suitable amplitude A(t) is also preferably determined. Once the time template has been determined, one or more filters are developed which each use a version or form of the time template.
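A rough sketch of the template-construction steps, ridge extraction from a spectrogram, curve fitting of φ'(t), and integration to φ(t), is given below; it substitutes an ordinary spectrogram and a polynomial fit for the Gabor spectrogram and Levenberg-Marquardt fitting named in the source, and assumes a constant amplitude A(t).

```python
# Hedged sketch of building a time template w(t) = A(t) cos(2*pi*phi(t)) from
# the ridge of a time-frequency representation of the signal of interest.
import numpy as np
from scipy.signal import spectrogram

def build_time_template(x, fs, threshold=0.1, poly_order=3):
    f, t, P = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
    ridge_f = np.array([f[np.argmax(P[:, i])] for i in range(P.shape[1])])
    mask = P.max(axis=0) > threshold * P.max()            # keep frames with real energy
    coeffs = np.polyfit(t[mask], ridge_f[mask], poly_order)
    phi_dot = np.poly1d(coeffs)                           # smooth model of phi'(t)

    ts = np.arange(len(x)) / fs
    phi = np.cumsum(phi_dot(ts)) / fs                     # integrate phi'(t) to phi(t)
    A = 1.0                                               # assumed constant amplitude
    return A * np.cos(2 * np.pi * phi)                    # template for the matched filter
```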
Leukocyte-reduced blood components: patient benefits and practical applications.
Higgins, V L
1996-05-01
To review the various types of filters used for red blood cell and platelet transfusions and to explain the trend in the use of leukocyte removal filters, practical information about their use, considerations in the selection of a filtration method, and cost-effectiveness issues. Published articles, books, and the author's experience. Leukocyte removal filters are used to reduce complications associated with transfused white blood cells that are contained in units of red blood cells and platelets. These complications include nonhemolytic febrile transfusion reactions (NHFTRs), alloimmunization and refractoriness to platelet transfusion, transfusion-transmitted cytomegalovirus (CMV), and immunomodulation. Leukocyte removal filters may be used at the bedside, in a hospital blood bank, or in a blood collection center. Factors that affect the flow rate of these filters include the variations in the blood component, the equipment used, and filter priming. Studies on the cost-effectiveness of using leukocyte-reduced blood components demonstrate savings based on the reduction of NHFTRs, reduction in the number of blood components used, and the use of filtered blood components as the equivalent of CMV seronegative-screened products. The use of leukocyte-reduced blood components significantly diminishes or prevents many of the adverse transfusion reactions associated with donor white blood cells. Leukocyte removal filters are cost-effective, and filters should be selected based on their ability to consistently achieve low leukocyte residual levels as well as their ease of use. Physicians may order leukocyte-reduced blood components for specific patients, or the components may be used because of an established institutional transfusion policy. Nurses often participate in deciding on a filtration method, primarily based on ease of use. Understanding the considerations in selecting a filtration method will help nurses make appropriate decisions to ensure quality patient care.
Application of a Bank of Kalman Filters for Aircraft Engine Fault Diagnostics
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2003-01-01
In this paper, a bank of Kalman filters is applied to aircraft gas turbine engine sensor and actuator fault detection and isolation (FDI) in conjunction with the detection of component faults. This approach uses multiple Kalman filters, each of which is designed for detecting a specific sensor or actuator fault. In the event that a fault does occur, all filters except the one using the correct hypothesis will produce large estimation errors, thereby isolating the specific fault. In the meantime, a set of parameters that indicate engine component performance is estimated for the detection of abrupt degradation. The proposed FDI approach is applied to a nonlinear engine simulation at nominal and aged conditions, and the evaluation results for various engine faults at cruise operating conditions are given. The ability of the proposed approach to reliably detect and isolate sensor and actuator faults is demonstrated.
Flood induced infiltration affecting a bank filtrate well at the River Enns, Austria
NASA Astrophysics Data System (ADS)
Wett, Bernhard; Jarosch, Hannes; Ingerle, Kurt
2002-09-01
Bank filtration employs a natural filtration process of surface water on its flow path from the river to the well. The development of a stable filter layer is of major importance to the quality of the delivered water. Flooding is expected to destabilise the riverbed, to reduce the filter efficiency of the bank and therefore to endanger the operation of water supply facilities near the riverbank. This paper provides an example of how bank storage in an unconfined alluvial aquifer causes a significant decrease of the seepage rate after a high-water event. Extensive monitoring equipment has been installed in the river bank of the oligotrophic alpine River Enns focusing on the first metre of the flow path. Head losses measured by multilevel probes throughout a year characterise the development of the hydraulic conductivity of different riverbed layers. Concentration profiles of nitrate, total ions and a NaCl tracer have been used to study infiltration rates of river water and its dilution with groundwater. Dynamic modelling was applied in order to investigate the propagation of flood induced head elevation and transport of pollutants.
40 CFR 141.703 - Sampling locations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the analysis of the sample. (c) Systems that recycle filter backwash water must collect source water samples prior to the point of filter backwash water addition. (d) Bank filtration. (1) Systems that... 141.703 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS...
40 CFR 141.703 - Sampling locations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... the analysis of the sample. (c) Systems that recycle filter backwash water must collect source water samples prior to the point of filter backwash water addition. (d) Bank filtration. (1) Systems that... 141.703 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS...
40 CFR 141.703 - Sampling locations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... the analysis of the sample. (c) Systems that recycle filter backwash water must collect source water samples prior to the point of filter backwash water addition. (d) Bank filtration. (1) Systems that... 141.703 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS...
Secure information transmission in filter bank multi-carrier spread spectrum systems
Majid, Arslan; Moradi, Hussein; Farhang-Boroujeny, Behrouz
2015-12-17
This report discusses the issue of secure information transmission for a spread-spectrum system, which in our case is Filter-Bank Multi-Carrier spread spectrum (FB-MC SS). We develop a novel method for generating a secret key to augment the security of the spread spectrum system. The proposed key generation takes advantage of the channel reciprocity exhibited between two communicating parties. We validate the key generation aspect of our system using real-world measurements. Our augmentation of strongest path cancellation (SPC) is found to be highly effective in our measurement scenarios, where the adversary's key would otherwise be significantly correlated with that of the legitimate nodes. Our approach of using the proposed key generation method as part of FB-MC SS allows it to be fault tolerant, and it is not necessarily limited to FB-MC SS or spread-spectrum systems in general. However, the advantage our approach has in the domain of spread-spectrum security is that it significantly decorrelates the adversary's key from those of the authentic parties. This aspect is crucial because, if the adversary's key is similar to those of the legitimate parties, the adversary obtains a sizable advantage due to the fault-tolerant nature of the developed spread-spectrum key.
Krishnan, Sunder Ram; Seelamantula, Chandra Sekhar; Bouwens, Arno; Leutenegger, Marcel; Lasser, Theo
2012-10-01
We address the problem of high-resolution reconstruction in frequency-domain optical-coherence tomography (FDOCT). The traditional method employed uses the inverse discrete Fourier transform, which is limited in resolution due to the Heisenberg uncertainty principle. We propose a reconstruction technique based on zero-crossing (ZC) interval analysis. The motivation for our approach lies in the observation that, for a multilayered specimen, the backscattered signal may be expressed as a sum of sinusoids, and each sinusoid manifests as a peak in the FDOCT reconstruction. The successive ZC intervals of a sinusoid exhibit high consistency, with the intervals being inversely related to the frequency of the sinusoid. The statistics of the ZC intervals are used for detecting the frequencies present in the input signal. The noise robustness of the proposed technique is improved by using a cosine-modulated filter bank for separating the input into different frequency bands, and the ZC analysis is carried out on each band separately. The design of the filter bank requires the design of a prototype, which we accomplish using a Kaiser window approach. We show that the proposed method gives good results on synthesized and experimental data. The resolution is enhanced, and noise robustness is higher compared with the standard Fourier reconstruction.
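The zero-crossing interval analysis can be sketched as follows: band-limit the interferogram, locate zero crossings, and infer each band's dominant frequency from the median half-period; the Butterworth bands here stand in for the paper's cosine-modulated filter bank and are illustrative assumptions.

```python
# Hedged sketch of zero-crossing (ZC) interval analysis per frequency band:
# each half-period of a sinusoid equals 1/(2f), so the median ZC interval in a
# band gives an estimate of the sinusoid (layer) frequency in that band.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def zc_frequencies(signal, fs, n_bands=4):
    nyq = fs / 2
    edges = np.linspace(0.02 * nyq, 0.9 * nyq, n_bands + 1)
    freqs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        y = sosfiltfilt(sos, signal)
        zc = np.where(np.diff(np.signbit(y)))[0]          # sample indices of sign changes
        if len(zc) > 2:
            half_period = np.median(np.diff(zc)) / fs     # seconds per half cycle
            freqs.append(1.0 / (2.0 * half_period))       # dominant frequency in this band
    return freqs
```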
2017-03-01
Subject terms: parameter estimation; matched-filter detection; QPSK; radar; interference; LSE; cyber; electronic warfare. Excerpt: the signal is routed through a maximum-likelihood detector (MLD), which is a bank of four filters matched to the four symbols of the QPSK constellation; the bank of filters matched to the QPSK symbols is used to demodulate the signal after cancellation. The matched filters are defined as the complex ...
Air-to-Air Missile Vector Scoring
2012-03-22
Excerpt: sampling-importance resampling (SIR), the extended particle filter (EPF), and the unscented particle filter (UPF) are discussed; the basic concept is to apply a bank of N EKF or UKF filters to move particles. Merwe, Doucet, Freitas, and Wan provide a comprehensive discussion of the EPF and UPF, including algorithms for implementation [20].
Battery Charge Equalizer with Transformer Array
NASA Technical Reports Server (NTRS)
Davies, Francis
2013-01-01
High-power batteries generally consist of a series connection of many cells or cell banks. In order to maintain high performance over battery life, it is desirable to keep the state of charge of all the cell banks equal. A method provides individual charging for battery cells in a large, high-voltage battery array with a minimum number of transformers while maintaining reasonable efficiency. This is designed to augment a simple high-current charger that supplies the main charge energy. The innovation will form part of a larger battery charge system. It consists of a transformer array connected to the battery array through rectification and filtering circuits. The transformer array is connected to a drive circuit and a timing and control circuit that allow individual battery cells or cell banks to be charged. The timing circuit and control circuit connect to a charge controller that uses battery instrumentation to determine which battery bank to charge. It is important to note that the innovation can charge an individual cell bank at the same time that the main battery charger is charging the high-voltage battery. The fact that the battery cell banks are at a non-zero voltage, and that they are all at similar voltages, can be used to allow charging of individual cell banks. A set of transformers can be connected with secondary windings in series to make weighted sums of the voltages on the primaries.
Parallel Digital Phase-Locked Loops
NASA Technical Reports Server (NTRS)
Sadr, Ramin; Shah, Biren N.; Hinedi, Sami M.
1995-01-01
Wide-band microwave receivers of proposed type include digital phase-locked loops in which band-pass filtering and down-conversion of input signals are implemented by banks of multirate digital filters operating in parallel. Called "parallel digital phase-locked loops" to distinguish them from other digital phase-locked loops. Systems conceived as cost-effective solution to problem of filtering signals at high sampling rates needed to accommodate wide input frequency bands. Each of M filters processes 1/M of spectrum of signal.
Systolic Signal Processor/High Frequency Direction Finding
1990-10-01
Excerpt: mapping the multiple signal classification (MUSIC) algorithm and the finite impulse response (FIR) filter onto the testbed hardware was supported by joint sponsorship; the systolic implementations of a four-channel FIR filter and the MUSIC algorithm provide the computational throughput. The MUSIC algorithm was mated to a bank of FIR filters and a four-channel data acquisition subsystem.
Retinal blood vessel extraction using tunable bandpass filter and fuzzy conditional entropy.
Sil Kar, Sudeshna; Maity, Santi P
2016-09-01
Extraction of blood vessels from retinal images plays a significant role in screening for different ophthalmologic diseases. However, accurate extraction of the entire vessel silhouette, and of individual vessel types, from noisy images with a poorly illuminated background is a complicated task. To this aim, an integrated system design platform is suggested in this work for vessel extraction using a sequential bandpass filter followed by fuzzy conditional entropy maximization on the matched filter response. At first, noise is eliminated from the image under consideration through curvelet-based denoising. To include the fine details and the relatively less thick vessel structures, the image is passed through a bank of sequential bandpass filters optimized for contrast enhancement. Fuzzy conditional entropy on the matched filter response is then maximized to find the set of multiple optimal thresholds to extract the different types of vessel silhouettes from the background. A Differential Evolution algorithm is used to determine the optimal gain in the bandpass filter and the combination of the fuzzy parameters. Using the multiple thresholds, the retinal image is classified into the thick, the medium, and the thin vessels, including neovascularization. Performance evaluated on different publicly available retinal image databases shows that the proposed method is very efficient in identifying the diverse types of vessels. The proposed method is also efficient in extracting the abnormal and the thin blood vessels in pathological retinal images. The average values of true positive rate, false positive rate, and accuracy offered by the method are 76.32%, 1.99%, and 96.28%, respectively, for the DRIVE database and 72.82%, 2.6%, and 96.16%, respectively, for the STARE database. Simulation results demonstrate that the proposed method outperforms the existing methods in detecting the various types of vessels and the neovascularization structures. The combination of the curvelet transform and the tunable bandpass filter is found to be very effective in edge enhancement, whereas fuzzy conditional entropy efficiently distinguishes vessels of different widths. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
FIR filters for hardware-based real-time multi-band image blending
NASA Astrophysics Data System (ADS)
Popovic, Vladan; Leblebici, Yusuf
2015-02-01
Creating panoramic images has become a popular feature in modern smart phones, tablets, and digital cameras. A user can create a 360-degree field-of-view photograph from only several images. The quality of the resulting image depends on the number of source images, their brightness, and the algorithm used for stitching and blending them. One of the algorithms that provides excellent results in terms of background color uniformity and reduction of ghosting artifacts is multi-band blending. The algorithm relies on decomposing the image into multiple frequency bands using a dyadic filter bank; hence, the results are also highly dependent on the filter bank used. In this paper we analyze the performance of FIR filters used for multi-band blending. We present a set of five filters that showed the best results in both the literature and our experiments. The set includes a Gaussian filter, biorthogonal wavelets, and custom-designed maximally flat and equiripple FIR filters. The presented filter comparison is based on several no-reference metrics for image quality. We conclude that the 5/3 biorthogonal wavelet produces the best result on average, especially considering its short length. Furthermore, we propose a real-time FPGA implementation of the blending algorithm, using a 2D non-separable systolic filtering scheme. Its pipeline architecture does not require hardware multipliers and is able to achieve very high operating frequencies. The implemented system is able to process 91 fps at 1080p (1920×1080) image resolution.
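For context, a two-image multi-band blend with a dyadic filter bank can be sketched as below; the 5-tap binomial kernel stands in for the filters compared in the paper and the level count is an assumption.

```python
# Hedged sketch of multi-band blending: Laplacian pyramids of the inputs are
# mixed level by level using a Gaussian pyramid of the mask, then collapsed.
import numpy as np
from scipy.ndimage import convolve1d

KERNEL = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0   # separable binomial low-pass

def blur(img):
    return convolve1d(convolve1d(img, KERNEL, axis=0), KERNEL, axis=1)

def down(img):
    return blur(img)[::2, ::2]

def up(img, shape):
    out = np.zeros(shape)
    out[::2, ::2] = img
    return 4.0 * blur(out)

def laplacian_pyramid(img, levels):
    pyr = []
    for _ in range(levels):
        small = down(img)
        pyr.append(img - up(small, img.shape))            # band-pass residual
        img = small
    pyr.append(img)                                       # coarsest low-pass level
    return pyr

def multiband_blend(a, b, mask, levels=4):
    la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    gm = [mask]
    for _ in range(levels):
        gm.append(down(gm[-1]))                           # Gaussian pyramid of the mask
    blended = [m * x + (1 - m) * y for x, y, m in zip(la, lb, gm)]
    out = blended[-1]
    for lev in reversed(blended[:-1]):                    # collapse the pyramid
        out = up(out, lev.shape) + lev
    return out
```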
Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sibbett, Taylor; Moradi, Hussein; Farhang-Boroujeny, Behrouz
This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically requires accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric on the locations of the maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and which does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR, with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.
Speech enhancement based on modified phase-opponency detectors
NASA Astrophysics Data System (ADS)
Deshmukh, Om D.; Espy-Wilson, Carol Y.
2005-09-01
A speech enhancement algorithm based on a neural model was presented by Deshmukh et al. [149th meeting of the Acoustical Society of America, 2005]. The algorithm consists of a bank of Modified Phase Opponency (MPO) filter pairs tuned to different center frequencies. This algorithm is able to enhance salient spectral features in speech signals even at low signal-to-noise ratios. However, the algorithm introduces musical noise and sometimes misses a spectral peak that is close in frequency to a stronger spectral peak. A refinement in the design of the MPO filters was recently made that takes advantage of the falling spectrum of the speech signal in sonorant regions. The modified set of filters leads to better separation of the noise and speech signals and more accurate enhancement of spectral peaks. The improvements also lead to a significant reduction in musical noise. Continuity algorithms based on the properties of speech signals are used to further reduce the musical-noise effect. The efficiency of the proposed method in enhancing the speech signal when the level of the background noise is fluctuating will be demonstrated. The performance of the improved speech enhancement method will be compared with various spectral-subtraction-based methods. [Work supported by NSF BCS0236707.]
Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)
1993-01-01
Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a hammering neural network, edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low-vision image warping, spatial transformation architectures, automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on WT, wavelet analysis of global warming, use of the WT for signal detection, perfect reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, number-theoretic coding for iconic systems.
2016-05-03
Excerpt: 24 Mel-scaled filters applied to squared FFT magnitudes (critical band energies, CRBE) and 10 F0-related coefficients; the filter bank spans ...
Gigabit Digital Filter Bank: Digital Backend Subsystem in the VERA Data-Acquisition System
NASA Astrophysics Data System (ADS)
Iguchi, Satoru; Kurayama, Tomoharu; Kawaguchi, Noriyuki; Kawakami, Kazuyuki
2005-02-01
The VERA terminal is a new data-acquisition system developed for the VERA project, a project to construct a new Japanese VLBI array dedicated to making a 3-D map of our Milky Way Galaxy through high-precision astrometry. New technology, a gigabit digital filter, was introduced in the development. The importance and advantages of a digital filter for radio astronomy have been studied as follows: (1) the digital filter can realize a variety of observation modes and maintain compatibility with different data-acquisition systems (Kiuchi et al. 1997; Iguchi et al. 2000a), (2) the folding noise occurring in the sampling process can be reduced by combination with a higher-order sampling technique (Iguchi, Kawaguchi 2002), and (3) an ideal sharp cut-off band edge and flat amplitude/phase responses can be approached by using the large number of taps made available by LSIs with a large number of logic cells (Iguchi et al. 2000a). We developed custom finite impulse response (FIR) filter chips and manufactured the Gigabit Digital Filter Banks (GDFBs) as a digital backend subsystem in the VERA terminal. In this paper, the design and development of the GDFB are presented in detail, and the performance and demonstrations of the developed GDFB are shown.
Comparisons of linear and nonlinear pyramid schemes for signal and image processing
NASA Astrophysics Data System (ADS)
Morales, Aldo W.; Ko, Sung-Jea
1997-04-01
Linear filter banks are being used extensively in image and video applications. New research results in wavelet applications for compression and de-noising are constantly appearing in the technical literature. On the other hand, non-linear filter banks are also used regularly in image pyramid algorithms. There are some inherent advantages in using non-linear filters instead of linear filters when non-Gaussian processes are present in images. However, a consistent way of comparing performance criteria between these two schemes has not been fully developed yet. In this paper a recently discovered tool, sample selection probabilities, is used to compare the behavior of linear and non-linear filters. The conversion from the weights of order statistics (OS) filters to the coefficients of the impulse response is obtained through these probabilities. However, the reverse problem, the conversion from the coefficients of the impulse response to the weights of OS filters, is not yet fully understood. One of the reasons for this difficulty is the highly non-linear nature of the partitions and generating function used. In the present paper the problem is posed as an integer linear programming optimization subject to constraints obtained directly from the coefficients of the impulse response. Although the technique to be presented is not completely refined, it certainly appears to be promising. Some results will be shown.
NASA Astrophysics Data System (ADS)
Poppeliers, C.; Preston, L. A.
2017-12-01
Measurements of seismic surface wave dispersion can be used to infer the structure of the Earth's subsurface. Typically, to identify group- and phase-velocity, a series of narrow-band filters are applied to surface wave seismograms. Frequency dependent arrival times of surface waves can then be identified from the resulting suite of narrow band seismograms. The frequency-dependent velocity estimates are then inverted for subsurface velocity structure. However, this technique has no method to estimate the uncertainty of the measured surface wave velocities, and subsequently there is no estimate of uncertainty on, for example, tomographic results. For the work here, we explore using the multiwavelet transform (MWT) as an alternate method to estimate surface wave speeds. The MWT decomposes a signal similarly to the conventional filter bank technique, but with two primary advantages: 1) the time-frequency localization is optimized in regard to the time-frequency tradeoff, and 2) we can use the MWT to estimate the uncertainty of the resulting surface wave group- and phase-velocities. The uncertainties of the surface wave speed measurements can then be propagated into tomographic inversions to provide uncertainties of resolved Earth structure. As proof-of-concept, we apply our technique to four seismic ambient noise correlograms that were collected from the University of Nevada Reno seismic network near the Nevada National Security Site. We invert the estimated group- and phase-velocities, as well the uncertainties, for 1-D Earth structure for each station pair. These preliminary results generally agree with 1-D velocities that are obtained from inverting dispersion curves estimated from a conventional Gaussian filter bank.
Breakthrough of cyanobacteria in bank filtration.
Pazouki, Pirooz; Prévost, Michèle; McQuaid, Natasha; Barbeau, Benoit; de Boutray, Marie-Laure; Zamyadi, Arash; Dorner, Sarah
2016-10-01
The removal of cyanobacteria cells in well water following bank filtration was investigated from a source water consisting of two artificial lakes (A and B). Phycocyanin probes used to monitor cyanobacteria in the source and in filtered well water showed an increase in fluorescence values, demonstrating a progressive seasonal growth of cyanobacteria in the source water that was correlated with cyanobacterial biovolumes from taxonomic counts (r = 0.59, p < 0.00001). A strong correlation was observed between the cyanobacterial concentrations in the lake water and in the well water as measured by the phycocyanin probe (p < 0.001, 0.73 ≤ r² ≤ 0.94). Log removals by bank filtration estimated from taxonomic counts were 0.96 (±0.5) and varied according to the species of cyanobacteria. Of cyanobacteria that passed through bank filtration, smaller cells were significantly more frequent in well water samples (p < 0.05) than larger cells. Travel times from the lakes to the wells were estimated as 2 days for Lake B and 10 days for Lake A. Cyanobacterial species in the wells were most closely related to species found in Lake B. Thus, a travel time of less than 1 week permitted the breakthrough of cyanobacteria to wells. Winter samples demonstrated that cyanobacteria accumulate within bank filters, leading to continued passage of cells beyond the bloom season. Although no concentrations of total microcystin-LR were above detection limits in filtered well water, there is concern that cyanobacterial cells that reach the wells have the potential to contain intracellular toxins. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Tabak, D.
1979-01-01
The study involves the bank of filters approach to analytical redundancy management since this is amenable to microelectronic implementation. Attention is given to a study of the UD factorized filter to determine if it gives more accurate estimates than the standard Kalman filter when data processing word size is reduced. It is reported that, as the word size is reduced, the effect of modeling error dominates the filter performance of the two filters. However, the UD filter is shown to maintain a slight advantage in tracking performance. It is concluded that because of the UD filter's stability in the serial processing mode, it remains the leading candidate for microelectronic implementation.
Raynor, P C; Kim, B G; Ramachandran, G; Strommen, M R; Horns, J H; Streifel, A J
2008-02-01
Synthetic filters made from fibers carrying electrostatic charges and fiberglass filters that do not carry electrostatic charges are both utilized commonly in heating, ventilating, and air-conditioning (HVAC) systems. The pressure drop and efficiency of a bank of fiberglass filters and a bank of electrostatically charged synthetic filters were measured repeatedly for 13 weeks in operating HVAC systems at a hospital. Additionally, the efficiency with which new and used fiberglass and synthetic filters collected culturable biological particles was measured in a test apparatus. Pressure drop measurements adjusted to equivalent flows indicated that the synthetic filters operated with a pressure drop less than half that of the fiberglass filters throughout the test. When measured using total ambient particles, synthetic filter efficiency decreased during the test period for all particle diameters. For particles 0.7–1.0 μm in diameter, efficiency decreased from 92% to 44%. It is hypothesized that this reduction in collection efficiency may be due to charge shielding. Efficiency did not change significantly for the fiberglass filters during the test period. However, when measured using culturable biological particles in the ambient air, efficiency was essentially the same for new filters and filters used for 13 weeks in the hospital for both the synthetic and fiberglass filters. It is hypothesized that the lack of efficiency reduction for culturable particles may be due to their having higher charge than non-biological particles, allowing them to overcome the effects of charge shielding. The type of particles requiring capture may be an important consideration when comparing the relative performance of electrostatically charged synthetic and fiberglass filters. Electrostatically charged synthetic filters with high initial efficiency can frequently replace traditional fiberglass filters with lower efficiency in HVAC systems because properly designed synthetic filters offer less resistance to air flow. Although the efficiency of charged synthetic filters at collecting non-biological particles declined substantially with use, the efficiency of these filters at collecting biological particles remained steady. These findings suggest that the merits of electrostatically charged synthetic HVAC filters relative to fiberglass filters may be more pronounced if collection of biological particles is of primary concern.
Texture analysis applied to second harmonic generation image data for ovarian cancer classification
NASA Astrophysics Data System (ADS)
Wen, Bruce L.; Brewer, Molly A.; Nadiarnykh, Oleg; Hocker, James; Singh, Vikas; Mackie, Thomas R.; Campagnola, Paul J.
2014-09-01
Remodeling of the extracellular matrix has been implicated in ovarian cancer. To quantitate the remodeling, we implement a form of texture analysis to delineate the collagen fibrillar morphology observed in second harmonic generation microscopy images of human normal and high-grade malignant ovarian tissues. In the learning stage, a dictionary of "textons", frequently occurring texture features identified by measuring the image response to a filter bank of various shapes, sizes, and orientations, is created. By calculating a representative model based on the texton distribution for each tissue type, using a training set of the respective second harmonic generation images, we then perform classification between images of normal and high-grade malignant ovarian tissues. By optimizing the number of textons and nearest neighbors, we achieved classification accuracy up to 97% based on the area under receiver operating characteristic curves (true positives versus false positives). The local analysis algorithm is a more general method to probe rapidly changing fibrillar morphologies than global analyses such as the FFT. It is also more versatile than other texture approaches, as the filter bank can be highly tailored to specific applications (e.g., different disease states) by creating customized libraries based on common image features.
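A minimal sketch of the texton pipeline, filter-bank responses clustered into a texton dictionary and images described by texton histograms, is given below; the filter bank composition and dictionary size are illustrative assumptions, not the authors' choices.

```python
# Hedged sketch: per-pixel filter-bank responses are clustered with K-means to
# form a texton dictionary; each image is then described by its texton histogram.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel
from sklearn.cluster import KMeans

def filter_responses(img):
    responses = [gaussian_filter(img, s) for s in (1, 2, 4)]          # smooth at several scales
    responses += [sobel(gaussian_filter(img, 2), axis=a) for a in (0, 1)]  # oriented edges
    return np.stack([r.ravel() for r in responses], axis=1)           # (n_pixels, n_filters)

def learn_textons(images, k=32):
    X = np.vstack([filter_responses(im) for im in images])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

def texton_histogram(img, texton_model):
    labels = texton_model.predict(filter_responses(img))
    hist = np.bincount(labels, minlength=texton_model.n_clusters)
    return hist / hist.sum()
```

Classification then amounts to comparing a query image's histogram against the representative histograms of each tissue class, e.g. with a nearest-neighbor rule.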
Deep Filter Banks for Texture Recognition, Description, and Segmentation.
Cimpoi, Mircea; Maji, Subhransu; Kokkinos, Iasonas; Vedaldi, Andrea
Visual textures have played a key role in image understanding because they convey important semantics of images, and because texture representations that pool local image descriptors in an orderless manner have had a tremendous impact in diverse applications. In this paper we make several contributions to texture understanding. First, instead of focusing on texture instance and material category recognition, we propose a human-interpretable vocabulary of texture attributes to describe common texture patterns, complemented by a new describable texture dataset for benchmarking. Second, we look at the problem of recognizing materials and texture attributes in realistic imaging conditions, including when textures appear in clutter, developing corresponding benchmarks on top of the recently proposed OpenSurfaces dataset. Third, we revisit classic texture representations, including bag-of-visual-words and the Fisher vectors, in the context of deep learning and show that these have excellent efficiency and generalization properties if the convolutional layers of a deep model are used as filter banks. We obtain in this manner state-of-the-art performance in numerous datasets well beyond textures, an efficient method to apply deep features to image regions, as well as benefit in transferring features from one domain to another.
Document image binarization using "multi-scale" predefined filters
NASA Astrophysics Data System (ADS)
Saabni, Raid M.
2018-04-01
Reading text or searching for key words within a historical document is a very challenging task. One of the first steps of the complete task is binarization, where we separate foreground such as text, figures, and drawings from the background. Successful results in this important step largely determine the success or failure of subsequent steps, so it is vital to the complete task of reading and analyzing the content of a document image. Generally, historical document images are of poor quality due to their storage conditions and degradation over time, which mostly cause varying contrast, stains, dirt, and ink seeping from the reverse side. In this paper, we use banks of anisotropic predefined filters at different scales and orientations to develop a binarization method for degraded documents and manuscripts. Using the fact that handwritten strokes may follow different scales and orientations, we use predefined sets of filter banks having various scales, weights, and orientations to seek a compact set of filters and weights in order to generate different layers of foreground and background. The results of convolving these filters locally on the gray-level image are weighted and accumulated to enhance the original image. Based on the different layers, seeds of components in the gray-level image, and a learning process, we present an improved binarization algorithm to separate the background from the layers of foreground. Different layers of foreground, which may be caused by seeping ink, degradation, or other factors, are also separated from the real foreground in a second phase. Promising experimental results were obtained on the DIBCO2011, DIBCO2013, and H-DIBCO2016 data sets and a collection of images taken from real historical documents.
2016-09-02
Excerpt: the fractionally spaced channel estimators and the short feedforward equalizer filters ... The receiver algorithm is applied to real data transmitted at 10 ... multichannel decision-feedback equalizer (DFE) [1]. This receiver consists of a bank of adaptive feedforward filters, one per array element, followed by a decision-feedback filter. It has been implemented in the prototype high-rate acoustic modem developed at the Woods Hole Oceanographic Institution, and ...
Design of tree structured matched wavelet for HRV signals of menstrual cycle.
Rawal, Kirti; Saini, B S; Saini, Indu
2016-07-01
An algorithm is presented for designing a new class of wavelets matched to the Heart Rate Variability (HRV) signals of the menstrual cycle. The proposed wavelets are used to find HRV variations between phases of the menstrual cycle. The method finds the signal-matching characteristics by minimising the shape feature error using the least-mean-square method. The proposed filter banks are used for the decomposition of the HRV signal. For reconstructing the original signal, a tree-structure method is used. In this approach, the decomposed sub-bands are selected based upon the energy in each sub-band. Thus, instead of using all sub-bands for reconstruction, only sub-bands having high energy content are used, so fewer sub-bands are required to reconstruct the original signal, which shows the effectiveness of the newly created filter coefficients. Results show that the proposed wavelets are able to differentiate HRV variations between phases of the menstrual cycle more accurately than standard wavelets.
Spatio-Temporal Equalizer for a Receiving-Antenna Feed Array
NASA Technical Reports Server (NTRS)
Mukai, Ryan; Lee, Dennis; Vilnrotter, Victor
2010-01-01
A spatio-temporal equalizer has been conceived as an improved means of suppressing multipath effects in the reception of aeronautical telemetry signals, and may be adaptable to radar and aeronautical communication applications as well. This equalizer would be an integral part of a system that would also include a seven-element planar array of receiving feed horns centered at the focal point of a paraboloidal antenna nominally aimed at or near the aircraft that is the source of the signal one seeks to receive (see Figure 1). The spatio-temporal equalizer would consist mostly of a bank of seven adaptive finite-impulse-response (FIR) filters, one for each element in the array, and the outputs of the filters would be summed (see Figure 2). The combination of the spatial diversity of the feed-horn array and the temporal diversity of the filter bank would afford better multipath-suppression performance than is achievable by means of temporal equalization alone. The seven-element feed array would supplant the single feed horn used in a conventional paraboloidal ground telemetry-receiving antenna. The radio-frequency telemetry signals received by the seven elements of the array would be digitized, converted to complex baseband form, and sent to the FIR filter bank, which would adapt itself in real time to enable reception of telemetry at a low bit error rate, even in the presence of multipath of the type found at many flight test ranges.
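A minimal sketch of the filter-bank part, one adaptive FIR filter per feed element with outputs summed and taps updated jointly by LMS against a reference signal, follows; the tap count, step size, and LMS adaptation rule are illustrative assumptions.

```python
# Hedged sketch of a multichannel LMS spatio-temporal equalizer: one FIR filter
# per array element, the summed output is compared to a desired/training signal
# and all taps are updated jointly.
import numpy as np

def multichannel_lms(x, d, n_taps=16, mu=1e-3):
    """x: (n_samples, n_elements) complex baseband inputs; d: desired signal."""
    n_samples, n_elem = x.shape
    w = np.zeros((n_elem, n_taps), dtype=complex)        # one FIR filter per element
    y = np.zeros(n_samples, dtype=complex)
    for n in range(n_taps, n_samples):
        X = x[n - n_taps:n][::-1].T                      # (n_elem, n_taps) tapped delay lines
        y[n] = np.sum(w * X)                             # sum of the per-element filter outputs
        e = d[n] - y[n]                                  # error against the reference
        w += mu * e * np.conj(X)                         # joint complex LMS update of all taps
    return y, w
```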
Maximising information recovery from rank-order codes
NASA Astrophysics Data System (ADS)
Sen, B.; Furber, S.
2007-04-01
The central nervous system encodes information in sequences of asynchronously generated voltage spikes, but the precise details of this encoding are not well understood. Thorpe proposed rank-order codes as an explanation of the observed speed of information processing in the human visual system. The work described in this paper is inspired by the performance of SpikeNET, a biologically inspired neural architecture using rank-order codes for information processing, and is based on the retinal model developed by VanRullen and Thorpe. This model mimics retinal information processing by passing an input image through a bank of Difference of Gaussian (DoG) filters and then encoding the resulting coefficients in rank-order. To test the effectiveness of this encoding in capturing the information content of an image, the rank-order representation is decoded to reconstruct an image that can be compared with the original. The reconstruction uses a look-up table to infer the filter coefficients from their rank in the encoded image. Since the DoG filters are approximately orthogonal functions, they are treated as their own inverses in the reconstruction process. We obtained a quantitative measure of the perceptually important information retained in the reconstructed image relative to the original using a slightly modified version of an objective metric proposed by Petrovic. It is observed that around 75% of the perceptually important information is retained in the reconstruction. In the present work we reconstruct the input using a pseudo-inverse of the DoG filter-bank with the aim of improving the reconstruction and thereby extracting more information from the rank-order encoded stimulus. We observe that there is an increase of 10 - 15% in the information retrieved from a reconstructed stimulus as a result of inverting the filter-bank.
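The encode/decode loop described above can be sketched roughly as follows: DoG responses are reduced to a rank order, and reconstruction assigns each rank a value from a fixed decreasing look-up table while treating the DoG filters as approximately their own inverses; the scales and the rank-to-value table are illustrative assumptions.

```python
# Hedged sketch of rank-order encoding with a DoG filter bank and a
# look-up-table reconstruction.
import numpy as np
from scipy.ndimage import gaussian_filter

SCALES = [(1.0, 2.0), (2.0, 4.0), (4.0, 8.0)]            # assumed center/surround sigmas

def dog_responses(img):
    return np.stack([gaussian_filter(img, s1) - gaussian_filter(img, s2)
                     for s1, s2 in SCALES])

def rank_order_encode(img):
    coeffs = dog_responses(img).ravel()
    return np.argsort(-np.abs(coeffs)), np.sign(coeffs)  # coefficient ranks and signs

def rank_order_decode(order, signs, shape):
    n = len(order)
    lut = np.exp(-np.arange(n) / (0.1 * n))              # assumed decreasing rank-to-value table
    coeffs = np.zeros(n)
    coeffs[order] = lut * signs[order]                   # magnitude inferred from rank only
    bands = coeffs.reshape((len(SCALES),) + shape)
    # treat the DoG filters as approximately their own inverses, as in the model
    recon = np.zeros(shape)
    for band, (s1, s2) in zip(bands, SCALES):
        recon += gaussian_filter(band, s1) - gaussian_filter(band, s2)
    return recon
```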
Application of multirate digital filter banks to wideband all-digital phase-locked loops design
NASA Technical Reports Server (NTRS)
Sadr, Ramin; Shah, Biren; Hinedi, Sami
1993-01-01
A new class of architecture for all-digital phase-locked loops (DPLL's) is presented in this article. These architectures, referred to as parallel DPLL (PDPLL), employ multirate digital filter banks (DFB's) to track signals with a lower processing rate than the Nyquist rate, without reducing the input (Nyquist) bandwidth. The PDPLL basically trades complexity for hardware-processing speed by introducing parallel processing in the receiver. It is demonstrated here that the DPLL performance is identical to that of a PDPLL for both steady-state and transient behavior. A test signal with a time-varying Doppler characteristic is used to compare the performance of both the DPLL and the PDPLL.
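The abstract's central idea, tracking a signal at a lower processing rate than the Nyquist rate without reducing the input bandwidth, rests on multirate identities. The sketch below shows the standard polyphase decimation identity that underlies such parallel processing; it is not the PDPLL itself, and the decimation factor and filter are arbitrary examples.

```python
# Polyphase decimation sketch: each branch filter runs at the reduced rate fs/M,
# yet the result equals full-rate filtering followed by decimation.
import numpy as np
from scipy.signal import lfilter

def polyphase_decimate(x, h, M):
    """Equivalent to lfilter(h, 1, x_padded)[::M], computed branch-by-branch at the low rate."""
    x = np.asarray(x, dtype=float)
    h = np.concatenate([h, np.zeros((-len(h)) % M)])    # pad h to a multiple of M
    L = int(np.ceil(len(x) / M))
    x = np.concatenate([x, np.zeros(L * M - len(x))])   # pad x to a multiple of M
    y = np.zeros(L)
    for k in range(M):
        ek = h[k::M]                                    # k-th polyphase component of h
        xk = x[::M] if k == 0 else np.concatenate([[0.0], x[M - k::M]])[:L]
        y += lfilter(ek, 1.0, xk)                       # each branch runs at rate fs/M
    return y
```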
Learned filters for object detection in multi-object visual tracking
NASA Astrophysics Data System (ADS)
Stamatescu, Victor; Wong, Sebastien; McDonnell, Mark D.; Kearney, David
2016-05-01
We investigate the application of learned convolutional filters in multi-object visual tracking. The filters were learned in both a supervised and unsupervised manner from image data using artificial neural networks. This work follows recent results in the field of machine learning that demonstrate the use of learned filters for enhanced object detection and classification. Here we employ a track-before-detect approach to multi-object tracking, where tracking guides the detection process. The object detection provides a probabilistic input image, calculated by selecting from features obtained using banks of generative or discriminative learned filters. We present a systematic evaluation of these convolutional filters using a real-world data set that examines their performance as generic object detectors.
Extraction of latent images from printed media
NASA Astrophysics Data System (ADS)
Sergeyev, Vladislav; Fedoseev, Victor
2015-12-01
In this paper we propose an automatic technique for the extraction of latent images from printed media such as documents, banknotes, financial securities, etc. This technique includes image processing by an adaptively constructed Gabor filter bank for obtaining feature images, as well as subsequent stages of feature selection, grouping and multicomponent segmentation. The main advantage of the proposed technique is its versatility: it allows extraction of latent images produced by different texture variations. Experimental results showing the performance of the method compared with another known system for latent image extraction are given.
Wavelets, ridgelets, and curvelets for Poisson noise removal.
Zhang, Bo; Fadili, Jalal M; Starck, Jean-Luc
2008-07-01
In order to denoise Poisson count data, we introduce a variance stabilizing transform (VST) applied to a filtered discrete Poisson process, yielding a near Gaussian process with asymptotically constant variance. This new transform, which can be viewed as an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets and curvelets, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MS-VST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme properly reconstructs the final estimate. A range of examples shows the power of this MS-VST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MS-VST approach is competitive relative to many existing denoising methods.
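As context for the MS-VST, the sketch below shows only the classical Anscombe variance-stabilizing transform that it extends: stabilize the Poisson counts, denoise in the (approximately) Gaussian domain, and invert. A plain Gaussian smoother stands in for the wavelet/ridgelet/curvelet thresholding used in the paper, and the simple algebraic inverse is known to be biased at very low counts.

```python
# Classical Anscombe VST pipeline (not the MS-VST itself): stabilize -> denoise
# in the Gaussian domain -> invert. The Gaussian smoother is a placeholder.
import numpy as np
from scipy.ndimage import gaussian_filter

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)         # Poisson counts -> approx unit-variance Gaussian

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0           # simple algebraic inverse (biased at low counts)

def denoise_poisson(counts, sigma=1.5):
    stabilized = anscombe(counts.astype(float))
    smoothed = gaussian_filter(stabilized, sigma)   # stand-in for multiscale thresholding
    return np.clip(inverse_anscombe(smoothed), 0, None)
```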
NASA Technical Reports Server (NTRS)
Dal Canton, Tito; Harry, Ian W.
2017-01-01
We describe the methodology and novel techniques used to construct a set of waveforms, or template bank, applicable to searches for compact binary coalescences in Advanced LIGO's second observing run. This template bank is suitable for observing systems composed of two neutron stars, two black holes, or a neutron star and a black hole. The Post-Newtonian formulation is used to model waveforms with total mass less than 4 solar masses, and the most recent effective-one-body model, calibrated to numerical relativity to include the merger and ringdown, is used for total masses greater than 4 solar masses. The effects of spin precession, matter, orbital eccentricity and radiation modes beyond the quadrupole are neglected. In contrast to the template bank used to search for compact binary mergers in Advanced LIGO's first observing run, here we include binary-black-hole systems with total mass up to several hundreds of solar masses, thereby improving the ability to observe such systems. We introduce a technique to vary the starting frequency of waveform filters so that our bank can simultaneously contain binary-neutron-star and high-mass binary-black-hole waveforms. We also introduce a lower bound on the filter waveform length, to exclude very short-duration, high-mass templates whose sensitivity is strongly reduced by the characteristics and performance of the interferometers.
NASA Technical Reports Server (NTRS)
Peach, Robert; Malarky, Alastair
1990-01-01
Currently proposed mobile satellite communications systems require a high degree of flexibility in assignment of spectral capacity to different geographic locations. Conventionally this results in poor spectral efficiency which may be overcome by the use of bandwidth switchable filtering. Surface acoustic wave (SAW) technology makes it possible to provide banks of filters whose responses may be contiguously combined to form variable bandwidth filters with constant amplitude and phase responses across the entire band. The high selectivity possible with SAW filters, combined with the variable bandwidth capability, makes it possible to achieve spectral efficiencies over the allocated bandwidths of greater than 90 percent, while retaining full system flexibility. Bandwidth switchable SAW filtering (BSSF) achieves these gains with a negligible increase in hardware complexity.
Implementing a search for gravitational waves from binary black holes with nonprecessing spin
NASA Astrophysics Data System (ADS)
Capano, Collin; Harry, Ian; Privitera, Stephen; Buonanno, Alessandra
2016-06-01
Searching for gravitational waves (GWs) from binary black holes (BBHs) with LIGO and Virgo involves matched-filtering data against a set of representative signal waveforms—a template bank—chosen to cover the full signal space of interest with as few template waveforms as possible. Although the component black holes may have significant angular momenta (spin), previous searches for BBHs have filtered LIGO and Virgo data using only waveforms where both component spins are zero. This leads to a loss of signal-to-noise ratio for signals where this is not the case. Combining the best available template placement techniques and waveform models, we construct a template bank of GW signals from BBHs with component spins χ1,2 ∈ [-0.99, 0.99] aligned with the orbital angular momentum, component masses m1,2 ∈ [2, 48] M⊙, and total mass Mtotal ≤ 50 M⊙. Using effective-one-body waveforms with spin effects, we show that less than 3% of the maximum signal-to-noise ratio (SNR) of these signals is lost due to the discreteness of the bank, using the early Advanced LIGO noise curve. We use simulated Advanced LIGO noise to compare the sensitivity of this bank to a nonspinning bank covering the same parameter space. In doing so, we consider the competing effects between improved SNR and signal-based vetoes and the increase in the rate of false alarms of the aligned-spin bank due to covering a larger parameter space. We find that the aligned-spin bank can be a factor of 1.3-5 more sensitive than a nonspinning bank to BBHs with dimensionless spins > +0.6 and component masses ≳ 20 M⊙. Even larger gains are obtained for systems with equally high spins but smaller component masses.
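The SNR-loss figure quoted above is usually computed from a match (fitting factor) between a signal and its best-fitting template. A minimal sketch of such a match computation is given below, assuming a flat (white) noise spectrum for brevity; a real search would weight by the detector PSD and maximize fully over phase as well as time.

```python
# White-noise match sketch: normalized cross-correlation maximized over time shift.
import numpy as np

def match(h1, h2):
    """Normalized overlap of two real waveforms, maximized over relative time shift.
    A sign flip is allowed; full phase maximization would use the analytic signal."""
    nfft = 2 * max(len(h1), len(h2))                 # zero-pad so circular shifts act linear
    H1 = np.fft.rfft(h1, nfft)
    H2 = np.fft.rfft(h2, nfft)
    corr = np.fft.irfft(H1 * np.conj(H2), nfft)      # cross-correlation over all lags
    norm = np.sqrt(np.sum(np.square(h1)) * np.sum(np.square(h2)))
    return np.max(np.abs(corr)) / norm

# 1 - match(signal, best_template) is the fractional SNR loss the abstract bounds at 3%.
```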
Finessing filter scarcity problem in face recognition via multi-fold filter convolution
NASA Astrophysics Data System (ADS)
Low, Cheng-Yaw; Teoh, Andrew Beng-Jin
2017-06-01
The deep convolutional neural networks for face recognition, from DeepFace to the recent FaceNet, demand a sufficiently large volume of filters for feature extraction, in addition to being deep. The shallow filter-bank approaches, e.g., principal component analysis network (PCANet), binarized statistical image features (BSIF), and other analogous variants, suffer from the filter scarcity problem: not all available PCA and ICA filters are discriminative enough to abstract noise-free features. This paper extends our previous work on multi-fold filter convolution (ℳ-FFC), where the pre-learned PCA and ICA filter sets are exponentially diversified by ℳ folds to instantiate PCA, ICA, and PCA-ICA offspring. The experimental results show that the 2-FFC operation alleviates the filter scarcity problem. The 2-FFC descriptors are also shown to be superior to those of PCANet, BSIF, and other face descriptors, in terms of rank-1 identification rate (%).
Adaptive EMG noise reduction in ECG signals using noise level approximation
NASA Astrophysics Data System (ADS)
Marouf, Mohamed; Saranovac, Lazar
2017-12-01
In this paper, the use of noise level approximation for adaptive Electromyogram (EMG) noise reduction in Electrocardiogram (ECG) signals is introduced. To achieve adequate adaptiveness, a translation-invariant noise level approximation is employed. The approximation is done in the form of a guiding signal extracted as an estimate of the signal quality vs. EMG noise. The noise reduction framework is based on a bank of low-pass filters, so adaptive noise reduction is achieved by selecting the appropriate filter with respect to the guiding signal, aiming to obtain the best trade-off between the signal distortion caused by filtering and the signal readability. For evaluation purposes, both real EMG and artificial noises are used. The tested ECG signals are from the MIT-BIH Arrhythmia Database Directory, while both real and artificial records of EMG noise are added and used in the evaluation process. First, a comparison with state-of-the-art methods is conducted to verify the performance of the proposed approach in terms of noise cancellation while preserving the QRS complex waves. Additionally, the signal-to-noise ratio improvement after the adaptive noise reduction is computed and presented for the proposed method. Finally, the impact of the adaptive noise reduction method on QRS complex detection was studied. The tested signals are delineated using a state-of-the-art method, and the QRS detection improvement for different SNRs is presented.
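A minimal sketch of the selection step described above: a small bank of low-pass filters with different cut-offs and a crude high-frequency power estimate acting as the guiding signal. The sampling rate, cut-offs, and thresholds are assumptions for illustration, not the paper's values.

```python
# Filter-bank selection driven by an estimated EMG noise level (all values assumed).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 360.0                                   # assumed ECG sampling rate (Hz)
CUTOFFS = [40.0, 25.0, 15.0]                 # bank of low-pass cut-offs, mild -> aggressive
BANK = [butter(4, fc / (FS / 2), btype="low") for fc in CUTOFFS]

def noise_level(ecg, fc=40.0):
    """Guiding signal: power of the residual above fc, a rough EMG-level proxy."""
    b, a = butter(4, fc / (FS / 2), btype="high")
    return np.std(filtfilt(b, a, ecg))

def adaptive_emg_reduction(ecg, thresholds=(0.02, 0.08)):
    level = noise_level(ecg)
    idx = int(np.searchsorted(thresholds, level))   # more noise -> lower cut-off
    b, a = BANK[idx]
    return filtfilt(b, a, ecg)
```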
Gabor filter for the segmentation of skin lesions from ultrasonographic images
NASA Astrophysics Data System (ADS)
Petrella, Lorena I.; Gómez, W.; Alvarenga, André V.; Pereira, Wagner C. A.
2012-05-01
The present work applies a Gabor filter bank for texture analysis of skin lesion images obtained by ultrasound biomicroscopy. The regions affected by the lesions were differentiated from the surrounding tissue in all the analyzed cases; however, the accuracy of the traced borders showed some limitations in part of the images. Future steps are being contemplated to enhance the technique's performance.
Zhonggang, Liang; Hong, Yan
2006-10-01
A new method of calculating the fractal dimension of short-term heart rate variability (HRV) signals is presented. The method is based on the wavelet transform and filter banks. The implementation is as follows: first, the fractal component is extracted from the HRV signal using the wavelet transform; next, the power spectrum of the fractal component is estimated using an auto-regressive model, and the exponent γ is estimated by the least-squares method; finally, the fractal dimension of the HRV signal is obtained from the formula D = 2 - (γ - 1)/2. To validate the stability and reliability of the proposed method, 24 fractal signals with a fractal value of 1.6 were simulated using fractional Brownian motion; the results show that the method is stable and reliable.
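The last two steps can be illustrated as follows, with Welch's periodogram standing in for the auto-regressive spectral estimate used in the paper: fit the power-law exponent γ of the fractal component's spectrum in log-log coordinates, then apply D = 2 - (γ - 1)/2. The sampling rate is an assumed value for an evenly resampled HRV series.

```python
# Spectral-exponent estimate and fractal dimension (Welch stands in for the AR spectrum).
import numpy as np
from scipy.signal import welch

def fractal_dimension(fractal_component, fs=4.0):
    f, pxx = welch(fractal_component, fs=fs, nperseg=256)
    keep = f > 0                                          # exclude DC for the log-log fit
    slope, _ = np.polyfit(np.log(f[keep]), np.log(pxx[keep]), 1)
    gamma = -slope                                        # S(f) ~ f**(-gamma)
    return 2.0 - (gamma - 1.0) / 2.0
```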
The Design and Semi-Physical Simulation Test of Fault-Tolerant Controller for Aero Engine
NASA Astrophysics Data System (ADS)
Liu, Yuan; Zhang, Xin; Zhang, Tianhong
2017-11-01
A new fault-tolerant control method for aero engines is proposed, which can accurately diagnose sensor faults using Kalman filter banks and reconstruct the signal with a real-time on-board adaptive model combining a simplified real-time model and an improved Kalman filter. In order to verify the feasibility of the proposed method, a semi-physical simulation experiment was carried out. Besides the real I/O interfaces, controller hardware and the virtual plant model, the semi-physical simulation system also contains a real fuel system. Compared with hardware-in-the-loop (HIL) simulation, the semi-physical simulation system has a higher degree of confidence. In order to meet the needs of semi-physical simulation, a rapid prototyping controller with fault-tolerant control ability based on the NI CompactRIO platform was designed and verified on the semi-physical simulation test platform. The results show that the controller can control the aero engine safely and reliably, with little influence on controller performance in the event of a sensor fault.
Becker, H; Bialasiewicz, A A; Schaudig, U; Schäfer, H; von Domarus, D
2002-05-01
A new data bank developed for ophthalmopathology using a computer-generated, multidigital data code is expected to be able to accomplish complex clinicopathologic correlations of diagnoses and signs, as provided by (multiple) clinical events and histopathologically proven etiologies, and to facilitate the documentation of new data. In the ophthalmopathology laboratory 2890 eyes were examined between January 20, 1975 and December 12, 1996. The main diagnoses and patient data from this 22-year period were recorded. To facilitate the presentation of data, a 10-year period with eyes of 976 patients enucleated from December, 1986 to December, 1996 was chosen. Principal and secondary diagnoses served for establishing the data bank. The frequencies of successive histologic and clinical diagnoses were evaluated by a descriptive computing program using an SPSS-multi-response mode with dummy variables and a categorical variable listing of the software (SPSS version 10.0) classified as (a) non-filtered random, (b) filtered by multiple etiologies, and (c) filtered by multiple events. The principal groups (e.g., histologic diagnoses concerning etiology) and subgroups (e.g., trauma, neoplasia, surgery, systemic diseases, and inflammations) were defined and correlated with 798 separate diagnoses. From 11 diagnoses/events ascribed to the clinical cases, 11,198 namings resulted. Thus, a comparative study of complex etiologies and events leading to enucleation in different hospitals of a specific area may be performed using this electronic ophthalmopathologic data bank system. The complexity of rare disease and integration into a superimposed structure can be managed with this custom-made data bank. A chronologically and demographically oriented consideration of reasons for enucleation is thus feasible.
Resolution Enhancement In Ultrasonic Imaging By A Time-Varying Filter
NASA Astrophysics Data System (ADS)
Ching, N. H.; Rosenfeld, D.; Braun, M.
1987-09-01
The study reported here investigates the use of a time-varying filter to compensate for the spreading of ultrasonic pulses due to the frequency dependence of attenuation by tissues. The effect of this pulse spreading is to degrade progressively the axial resolution with increasing depth. The form of compensation required to correct for this effect is impossible to realize exactly. A novel time-varying filter utilizing a bank of bandpass filters is proposed as a realizable approximation of the required compensation. The performance of this filter is evaluated by means of a computer simulation. The limits of its application are discussed. Apart from improving the axial resolution, and hence the accuracy of axial measurements, the compensating filter could be used in implementing tissue characterization algorithms based on attenuation data.
Edge directed image interpolation with Bamberger pyramids
NASA Astrophysics Data System (ADS)
Rosiles, Jose Gerardo
2005-08-01
Image interpolation is a standard feature in digital image editing software, digital camera systems and printers. Classical resizing methods produce blurred images of unacceptable quality. Bamberger pyramids and filter banks have been successfully used for texture and image analysis; they provide excellent multiresolution and directional selectivity. In this paper we present an edge-directed image interpolation algorithm which takes advantage of the simultaneous spatial-directional edge localization at the subband level. The proposed algorithm outperforms classical schemes such as bilinear and bicubic interpolation from both the visual and numerical points of view.
22. DETAIL TO NORTHWEST OF LUBRICATING OIL TANKS AND FILTERS ...
22. DETAIL TO NORTHWEST OF LUBRICATING OIL TANKS AND FILTERS FOR UNITS 1-4 (CENTER), AND UNIT 3 GOVERNOR (RIGHT CENTER FOREGROUND) AND GATE VALVE (RIGHT CENTER BACKGROUND) CONTROLS, OLD POWERHOUSE GENERATOR FLOOR - Trenton Falls Hydroelectric Station, Powerhouse & Substation, On west bank of West Canada Creek, along Trenton Falls Road, 1.25 miles north of New York Route 28, Trenton Falls, Oneida County, NY
HF band filter bank multi-carrier spread spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laraway, Stephen Andrew; Moradi, Hussein; Farhang-Boroujeny, Behrouz
This paper describes modifications to the filter bank multicarrier spread spectrum (FB-MC-SS) system, presented in [1] and [2], to enable transmission of this waveform in the HF skywave channel. FB-MC-SS is well suited for the HF channel because it performs well in channels with frequency selective fading and interference. This paper describes new algorithms for packet detection, timing recovery and equalization that are suitable for the HF channel. Also, an algorithm for optimizing the peak to average power ratio (PAPR) of the FB-MC-SS waveform is presented. Application of this algorithm results in a waveform with low PAPR. Simulation results using a wide band HF channel model demonstrate the robustness of this system over a wide range of delay and Doppler spreads.
GPU Acceleration of DSP for Communication Receivers.
Gunther, Jake; Gunther, Hyrum; Moon, Todd
2017-09-01
Graphics processing unit (GPU) implementations of signal processing algorithms can outperform CPU-based implementations. This paper describes the GPU implementation of several algorithms encountered in a wide range of high-data-rate communication receivers, including filters, multirate filters, numerically controlled oscillators, and multi-stage digital down converters. These structures are tested by processing the 20 MHz wide FM radio band (88-108 MHz). Two receiver structures are explored: a single channel receiver and a filter bank channelizer. Both run in real time on an NVIDIA GeForce GTX 1080 graphics card.
High speed analog-to-digital conversion with silicon photonics
NASA Astrophysics Data System (ADS)
Holzwarth, C. W.; Amatya, R.; Araghchini, M.; Birge, J.; Byun, H.; Chen, J.; Dahlem, M.; DiLello, N. A.; Gan, F.; Hoyt, J. L.; Ippen, E. P.; Kärtner, F. X.; Khilo, A.; Kim, J.; Kim, M.; Motamedi, A.; Orcutt, J. S.; Park, M.; Perrott, M.; Popovic, M. A.; Ram, R. J.; Smith, H. I.; Zhou, G. R.; Spector, S. J.; Lyszczarz, T. M.; Geis, M. W.; Lennon, D. M.; Yoon, J. U.; Grein, M. E.; Schulein, R. T.; Frolov, S.; Hanjani, A.; Shmulovich, J.
2009-02-01
Sampling rates of high-performance electronic analog-to-digital converters (ADC) are fundamentally limited by the timing jitter of the electronic clock. This limit is overcome in photonic ADC's by taking advantage of the ultra-low timing jitter of femtosecond lasers. We have developed designs and strategies for a photonic ADC that is capable of 40 GSa/s at a resolution of 8 bits. This system requires a femtosecond laser with a repetition rate of 2 GHz and timing jitter less than 20 fs. In addition to a femtosecond laser this system calls for the integration of a number of photonic components including: a broadband modulator, optical filter banks, and photodetectors. Using silicon-on-insulator (SOI) as the platform we have fabricated these individual components. The silicon optical modulator is based on a Mach-Zehnder interferometer architecture and achieves a VπL of 2 Vcm. The filter banks comprise 40 second-order microring-resonator filters with a channel spacing of 80 GHz. For the photodetectors we are exploring ion-bombarded silicon waveguide detectors and germanium films epitaxially grown on silicon utilizing a process that minimizes the defect density.
Computer-aided teniae coli detection using height maps from computed tomographic colonography images
NASA Astrophysics Data System (ADS)
Wei, Zhuoshi; Yao, Jianhua; Wang, Shijun; Summers, Ronald M.
2011-03-01
Computed tomographic colonography (CTC) is a minimally invasive technique for colonic polyps and cancer screening. Teniae coli are three bands of longitudinal smooth muscle on the colon surface. They are parallel, equally distributed on the colon wall, and form a triple helix structure from the appendix to the sigmoid colon. Because of these characteristics, teniae coli are important anatomically meaningful landmarks on the human colon. This paper proposes a novel method for teniae coli detection on CT colonography. We first unfold the three-dimensional (3D) colon using a reversible projection technique and compute the two-dimensional (2D) height map of the unfolded colon. The height map records the elevation of the colon surface relative to the unfolding plane, where haustral folds correspond to high elevation points and teniae to low elevation points. The teniae coli are detected on the height map and then projected back to the 3D colon. Since teniae are located where the haustral folds meet, we break down the problem by first detecting haustral folds. We apply 2D Gabor filter banks to extract fold features. The maximum response of the filter banks is then selected as the feature image. The fold centers are then identified based on piecewise thresholding of the feature image. Connecting the fold centers yields a path of the folds. Teniae coli are finally extracted as lines running between the fold paths. Experiments were carried out on 7 cases. The proposed method yielded a promising result with an average normalized RMSE of 5.66% and standard deviation of 4.79% of the circumference of the colon.
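The fold-feature step (Gabor filter bank, maximum response as the feature image) can be sketched as below; kernel frequency, size, and the number of orientations are illustrative assumptions rather than the paper's parameters.

```python
# Gabor filter bank over orientations; per-pixel maximum response as the feature image.
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(freq, theta, sigma=4.0, size=21):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)      # even (real) Gabor filter

def max_gabor_response(height_map, freq=0.1, n_orient=8):
    thetas = np.arange(n_orient) * np.pi / n_orient
    responses = [convolve(height_map, gabor_kernel(freq, t)) for t in thetas]
    return np.max(np.abs(np.stack(responses)), axis=0)   # feature image
```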
Identifiability of Additive Actuator and Sensor Faults by State Augmentation
NASA Technical Reports Server (NTRS)
Joshi, Suresh; Gonzalez, Oscar R.; Upchurch, Jason M.
2014-01-01
A class of fault detection and identification (FDI) methods for bias-type actuator and sensor faults is explored in detail from the point of view of fault identifiability. The methods use state augmentation along with banks of Kalman-Bucy filters for fault detection, fault pattern determination, and fault value estimation. A complete characterization of conditions for identifiability of bias-type actuator faults, sensor faults, and simultaneous actuator and sensor faults is presented. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have unknown biases. The fault identifiability conditions are demonstrated via numerical examples. The analytical and numerical results indicate that caution must be exercised to ensure fault identifiability for different fault patterns when using such methods.
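A minimal sketch of the state-augmentation idea, under the usual constant-bias model x_{k+1} = A x_k + B(u_k + b_a), y_k = C x_k + b_s: the biases are appended to the state, so a standard Kalman filter on the augmented model estimates them alongside the plant state. As the abstract notes, not every fault pattern is identifiable this way (for example, when all sensors carry unknown biases).

```python
# Build the augmented state-space matrices for constant actuator/sensor biases.
import numpy as np

def augment_for_bias_faults(A, B, C):
    n, m = B.shape          # states, actuators
    p = C.shape[0]          # sensors
    A_aug = np.block([
        [A,                 B,                np.zeros((n, p))],
        [np.zeros((m, n)),  np.eye(m),        np.zeros((m, p))],
        [np.zeros((p, n)),  np.zeros((p, m)), np.eye(p)       ],
    ])
    B_aug = np.vstack([B, np.zeros((m + p, m))])
    C_aug = np.hstack([C, np.zeros((p, m)), np.eye(p)])
    # A standard Kalman filter on (A_aug, B_aug, C_aug) estimates [x; b_a; b_s],
    # subject to the identifiability conditions discussed in the paper.
    return A_aug, B_aug, C_aug
```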
Ranging through Gabor logons-a consistent, hierarchical approach.
Chang, C; Chatterjee, S
1993-01-01
In this work, the correspondence problem in stereo vision is handled by matching two sets of dense feature vectors. Inspired by biological evidence, these feature vectors are generated by a correlation between a bank of Gabor sensors and the intensity image. The sensors consist of two-dimensional Gabor filters at various scales (spatial frequencies) and orientations, which bear close resemblance to the receptive field profiles of simple V1 cells in visual cortex. A hierarchical, stochastic relaxation method is then used to obtain the dense stereo disparities. Unlike traditional hierarchical methods for stereo, feature based hierarchical processing yields consistent disparities. To avoid false matchings due to static occlusion, a dual matching, based on the imaging geometry, is used.
Detecting binary neutron star systems with spin in advanced gravitational-wave detectors
NASA Astrophysics Data System (ADS)
Brown, Duncan A.; Harry, Ian; Lundgren, Andrew; Nitz, Alexander H.
2012-10-01
The detection of gravitational waves from binary neutron stars is a major goal of the gravitational-wave observatories Advanced LIGO and Advanced Virgo. Previous searches for binary neutron stars with LIGO and Virgo neglected the component stars’ angular momentum (spin). We demonstrate that neglecting spin in matched-filter searches causes advanced detectors to lose more than 3% of the possible signal-to-noise ratio for 59% (6%) of sources, assuming that neutron star dimensionless spins, cJ/GM², are uniformly distributed with magnitudes between 0 and 0.4 (0.05) and that the neutron stars have isotropically distributed spin orientations. We present a new method for constructing template banks for gravitational-wave searches for systems with spin. We present a new metric in a parameter space in which the template placement metric is globally flat. This new method can create template banks of signals with nonzero spins that are (anti-)aligned with the orbital angular momentum. We show that this search loses more than 3% of the maximum signal-to-noise for only 9% (0.2%) of binary neutron star sources with dimensionless spins between 0 and 0.4 (0.05) and isotropic spin orientations. Use of this template bank will prevent selection bias in gravitational-wave searches and allow a more accurate exploration of the distribution of spins in binary neutron stars.
Acharya, U Rajendra; Bhat, Shreya; Koh, Joel E W; Bhandary, Sulatha V; Adeli, Hojjat
2017-09-01
Glaucoma is an optic neuropathy defined by characteristic damage to the optic nerve and accompanying visual field deficits. Early diagnosis and treatment are critical to prevent irreversible vision loss and ultimate blindness. Current techniques for computer-aided analysis of the optic nerve and retinal nerve fiber layer (RNFL) are expensive and require keen interpretation by trained specialists. Hence, an automated system is highly desirable for a cost-effective and accurate screening for the diagnosis of glaucoma. This paper presents a new methodology and a computerized diagnostic system. Adaptive histogram equalization is used to convert color images to grayscale images followed by convolution of these images with Leung-Malik (LM), Schmid (S), and maximum response (MR4 and MR8) filter banks. The basic microstructures in typical images are called textons. The convolution process produces textons. Local configuration pattern (LCP) features are extracted from these textons. The significant features are selected using a sequential floating forward search (SFFS) method and ranked using the statistical t-test. Finally, various classifiers are used for classification of images into normal and glaucomatous classes. A high classification accuracy of 95.8% is achieved using six features obtained from the LM filter bank and the k-nearest neighbor (kNN) classifier. A glaucoma integrative index (GRI) is also formulated to obtain a reliable and effective system.
DOT National Transportation Integrated Search
1978-04-01
A signaling technique for Emergency Position Indicating Radio Beacons (EPIRBs) using FSK modulation, with detection employing a bank of narrow-band filters, was designed and implemented by the Federal Republic of Germany for ESA tests with the ATS-6 ...
Forward and backward tone mapping of high dynamic range images based on subband architecture
NASA Astrophysics Data System (ADS)
Bouzidi, Ines; Ouled Zaid, Azza
2015-01-01
This paper presents a novel High Dynamic Range (HDR) tone mapping (TM) system based on a sub-band architecture. Standard wavelet filters of the Daubechies, Symlets, Coiflets and Biorthogonal families were used to evaluate the proposed system's performance in terms of Low Dynamic Range (LDR) image quality and reconstructed HDR image fidelity. During the TM stage, the HDR image is first decomposed into sub-bands using a symmetrical analysis-synthesis filter bank. The transform coefficients are then rescaled using a predefined gain map. The inverse Tone Mapping (iTM) stage is straightforward: the LDR image passes through the same sub-band architecture, but instead of reducing the dynamic range, the LDR content is boosted to an HDR representation. Moreover, our TM scheme includes an optimization module that selects the gain map components minimizing the reconstruction error, resulting in high-fidelity HDR content. Comparisons with recent state-of-the-art methods have shown that our method provides better results in terms of visual quality and HDR reconstruction fidelity, using both objective and subjective evaluations.
Wavelet Analyses of F/A-18 Aeroelastic and Aeroservoelastic Flight Test Data
NASA Technical Reports Server (NTRS)
Brenner, Martin J.
1997-01-01
Time-frequency signal representations combined with subspace identification methods were used to analyze aeroelastic flight data from the F/A-18 Systems Research Aircraft (SRA) and aeroservoelastic data from the F/A-18 High Alpha Research Vehicle (HARV). The F/A-18 SRA data were produced from a wingtip excitation system that generated linear frequency chirps and logarithmic sweeps. HARV data were acquired from digital Schroeder-phased and sinc pulse excitation signals to actuator commands. Nondilated continuous Morlet wavelets implemented as a filter bank were chosen for the time-frequency analysis to eliminate phase distortion as it occurs with sliding window discrete Fourier transform techniques. Wavelet coefficients were filtered to reduce effects of noise and nonlinear distortions identically in all inputs and outputs. Cleaned reconstructed time domain signals were used to compute improved transfer functions. Time and frequency domain subspace identification methods were applied to enhanced reconstructed time domain data and improved transfer functions, respectively. Time domain subspace performed poorly, even with the enhanced data, compared with frequency domain techniques. A frequency domain subspace method is shown to produce better results with the data processed using the Morlet time-frequency technique.
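One common way to realize a nondilated Morlet-style filter bank without the phase distortion of sliding-window DFTs is to apply zero-phase Gaussian band-pass windows in the frequency domain, as sketched below. The channel spacing and bandwidth are illustrative assumptions, not the values used in the flight-data analysis.

```python
# Zero-phase Gaussian band-pass filter bank applied in the frequency domain.
import numpy as np

def morlet_filter_bank(x, fs, centers_hz, bw_hz=2.0):
    """Return band-limited components, shape (len(centers_hz), len(x))."""
    n = len(x)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    coeffs = []
    for fc in centers_hz:
        window = np.exp(-0.5 * ((f - fc) / bw_hz) ** 2)   # Gaussian band-pass, zero phase
        coeffs.append(np.fft.irfft(X * window, n))
    return np.array(coeffs)

# Coefficients could then be shrunk identically across inputs and outputs
# (as in the abstract) before reconstructing cleaned time-domain signals.
```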
Filter Bank Multicarrier (FBMC) for long-reach intensity modulated optical access networks
NASA Astrophysics Data System (ADS)
Saljoghei, Arsalan; Gutiérrez, Fernando A.; Perry, Philip; Barry, Liam P.
2017-04-01
Filter Bank Multi Carrier (FBMC) is a modulation scheme which has recently attracted significant interest in both wireless and optical communications. The interest in optical communications arises due to FBMC's capability to operate without a Cyclic Prefix (CP) and its high resilience to synchronisation errors. However, the operation of FBMC in optical access networks has not been extensively studied either in downstream or upstream. In this work we use experimental work to investigate the operation of FBMC in intensity modulated Passive Optical Networks (PONs) employing direct detection in conjunction with both direct and external modulation schemes. The data rates and propagation lengths employed here vary from 8.4 to 14.8 Gb/s and 0-75 km. The results suggest that by using FBMC it is possible to accomplish CP-Less transmission up to 75 km of SSMF in passive links using cost effective intensity modulation and detection schemes.
Audio fingerprint extraction for content identification
NASA Astrophysics Data System (ADS)
Shiu, Yu; Yeh, Chia-Hung; Kuo, C. C. J.
2003-11-01
In this work, we present an audio content identification system that identifies unknown audio material by comparing its fingerprint with those extracted off-line and saved in a music database. We describe in detail the procedure to extract audio fingerprints and demonstrate that they are robust to noise and content-preserving manipulations. The main feature in the proposed system is the zero-crossing rate extracted with an octave-band filter bank. The zero-crossing rate can be used to describe the dominant frequency in each subband at a very low computational cost. The audio fingerprint is small and can be efficiently stored along with the compressed files in the database. It is also robust to many modifications such as tempo change and time-alignment distortion. Besides, the octave-band filter bank is used to enhance the robustness to distortion, especially distortion localized in certain frequency regions.
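The main fingerprint feature described above can be sketched as follows: split the signal into octave bands and take each band's zero-crossing rate as a cheap proxy for its dominant frequency. Band edges, filter order, and the number of bands are assumptions for illustration.

```python
# Octave-band zero-crossing-rate features (band layout and filter order assumed).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def octave_band_zcr(x, fs, f_low=125.0, n_bands=5):
    feats = []
    lo = f_low
    for _ in range(n_bands):
        hi = min(2 * lo, 0.45 * fs)
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        zcr = np.mean(np.diff(np.signbit(band)))   # fraction of samples with a sign change
        feats.append(zcr)
        lo = hi
    return np.array(feats)
```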
Wavelet transform: fundamentals, applications, and implementation using acousto-optic correlators
NASA Astrophysics Data System (ADS)
DeCusatis, Casimer M.; Koay, J.; Litynski, Daniel M.; Das, Pankaj K.
1995-10-01
In recent years there has been a great deal of interest in the use of wavelets to supplement or replace conventional Fourier transform signal processing. This paper provides a review of wavelet transforms for signal processing applications, and discusses several emerging applications which benefit from the advantages of wavelets. The wavelet transform can be implemented as an acousto-optic correlator; perfect reconstruction of digital signals may also be achieved using acousto-optic finite impulse response filter banks. Acousto-optic image correlators are discussed as a potential implementation of the wavelet transform, since a 1D wavelet filter bank may be encoded as a 2D image. We discuss applications of the wavelet transform including nondestructive testing of materials, biomedical applications in the analysis of EEG signals, and interference excision in spread spectrum communication systems. Computer simulations and experimental results for these applications are also provided.
Subband directional vector quantization in radiological image compression
NASA Astrophysics Data System (ADS)
Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel
1992-05-01
The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform (DCT) domain. Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.
Power connect safety and connection interlock
NASA Technical Reports Server (NTRS)
Rippel, Wally E. (Inventor)
1992-01-01
A power connect safety and connection interlock system is shown for use with inverters and other DC loads (16) which include capacitor filter banks (14) at their DC inputs. A safety circuit (20) operates a spring (26) biased, solenoid (22) driven mechanical connection interference (24) which prevents mating, and therefore electrical connection, between the power contactor halves (11, 13) of the main power contacts (12) until the capacitor bank is safely precharged through auxiliary contacts (18). When the DC load (16) is shut down, the capacitor bank (14) is automatically discharged through a discharging power resistor (66) by a MOSFET transistor (60), but only when both the main power contacts and auxiliary contacts are disconnected.
Badrinarayan, Preethi; Sastry, G Narahari
2012-04-01
In this work, we introduce the development and application of a three-step scoring and filtering procedure for the design of type II p38 MAP kinase leads using allosteric fragments extracted from virtual screening hits. The design of the virtual screening filters is based on a thorough evaluation of docking methods, DFG-loop conformation, binding interactions and chemotype specificity of the 138 p38 MAP kinase inhibitors from the Protein Data Bank bound to DFG-in and DFG-out conformations using Glide, GOLD and CDOCKER. A 40 ns molecular dynamics simulation with the apo, type I with DFG-in and type II with DFG-out forms was carried out to delineate the effects of structural variations on inhibitor binding. The designed docking-score and sub-structure filters were first tested on a dataset of 249 potent p38 MAP kinase inhibitors from seven diverse series and 18,842 kinase inhibitors from PDB, to gauge their capacity to discriminate between kinase and non-kinase inhibitors and likewise to selectively filter-in target-specific inhibitors. The designed filters were then applied in the virtual screening of a database of ten million (10⁷) compounds resulting in the identification of 100 hits. Based on their binding modes, 98 allosteric fragments were extracted from the hits and a fragment library was generated. New type II p38 MAP kinase leads were designed by tailoring the existing type I ATP site binders with allosteric fragments using a common urea linker. Target specific virtual screening filters can thus be easily developed for other kinases based on this strategy to retrieve target selective compounds.
A dual-polarized broadband planar antenna and channelizing filter bank for millimeter wavelengths
NASA Astrophysics Data System (ADS)
O'Brient, Roger; Ade, Peter; Arnold, Kam; Edwards, Jennifer; Engargiola, Greg; Holzapfel, William L.; Lee, Adrian T.; Myers, Michael J.; Quealy, Erin; Rebeiz, Gabriel; Richards, Paul; Suzuki, Aritoki
2013-02-01
We describe the design, fabrication, and testing of a broadband log-periodic antenna coupled to multiple cryogenic bolometers. This detector architecture, optimized here for astrophysical observations, simultaneously receives two linear polarizations with two octaves of bandwidth at millimeter wavelengths. The broad bandwidth signal received by the antenna is divided into sub-bands with integrated in-line frequency-selective filters. We demonstrate two such filter banks: a diplexer with two sub-bands and a log-periodic channelizer with seven contiguous sub-bands. These detectors have receiver efficiencies of 20%-40% and percent level polarization isolation. Superconducting transition-edge sensor bolometers detect the power in each sub-band and polarization. We demonstrate circularly symmetric beam patterns, high polarization isolation, accurately positioned bands, and high optical efficiency. The pixel design is applicable to astronomical observations of intensity and polarization at millimeter through sub-millimeter wavelengths. As compared with an imaging array of pixels measuring only one band, simultaneous measurements of multiple bands in each pixel has the potential to result in a higher signal-to-noise measurement while also providing spectral information. This development facilitates compact systems with high mapping speeds for observations that require information in multiple frequency bands.
Is the difference between chemical and numerical estimates of baseflow meaningful?
NASA Astrophysics Data System (ADS)
Cartwright, Ian; Gilfedder, Ben; Hofmann, Harald
2014-05-01
Both chemical and numerical techniques are commonly used to calculate baseflow inputs to gaining rivers. In general the chemical methods yield lower estimates of baseflow than the numerical techniques. In part, this may be due to the techniques assuming two components (event water and baseflow) whereas there may also be multiple transient stores of water. Bank return waters, interflow, or waters stored on floodplains are delayed components that may be geochemically similar to the surface water from which they are derived; numerical techniques may record these components as baseflow whereas chemical mass balance studies are likely to aggregate them with the surface water component. This study compares baseflow estimates using chemical mass balance, local minimum methods, and recursive digital filters in the upper reaches of the Barwon River, southeast Australia. While more sophisticated techniques exist, these methods of estimating baseflow are readily applied with the available data and have been used widely elsewhere. During the early stages of high-discharge events, chemical mass balance overestimates groundwater inflows, probably due to flushing of saline water from wetlands and marshes, soils, or the unsaturated zone. Overall, however, estimates of baseflow from the local minimum and recursive digital filters are higher than those from chemical mass balance using Cl calculated from continuous electrical conductivity. Between 2001 and 2011, the baseflow contribution to the upper Barwon River calculated using chemical mass balance is between 12 and 25% of annual discharge. Recursive digital filters predict higher baseflow contributions of 19 to 52% of annual discharge. These estimates are similar to those from the local minimum method (16 to 45% of annual discharge). These differences most probably reflect how the different techniques characterise the transient water sources in this catchment. The local minimum and recursive digital filters aggregate much of the water from delayed sources as baseflow. However, as many of these delayed transient water stores (such as bank return flow, floodplain storage, or interflow) have Cl concentrations that are similar to surface runoff, chemical mass balance calculations aggregate them with the surface runoff component. The difference between the estimates is greatest following periods of high discharge in winter, implying that these transient stores of water feed the river for several weeks to months at that time. Cl vs. discharge variations during individual flow events also demonstrate that inflows of high-salinity older water occurs on the rising limbs of hydrographs followed by inflows of low-salinity water from the transient stores as discharge falls. The use of complementary techniques allows a better understanding of the different components of water that contribute to river flow, which is important for the management and protection of water resources.
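For concreteness, the two families of estimate being compared can be sketched as below: a one-parameter recursive digital filter (the widely used Lyne-Hollick form is assumed here) and a two-component chloride mass balance. The filter parameter and end-member concentrations are illustrative assumptions, not values from this study.

```python
# Baseflow separation two ways: recursive digital filter vs. Cl mass balance.
import numpy as np

def baseflow_digital_filter(q, alpha=0.925):
    """One-parameter recursive digital filter (Lyne-Hollick form) on discharge q."""
    q = np.asarray(q, dtype=float)
    quick = np.zeros_like(q)
    for i in range(1, len(q)):
        quick[i] = alpha * quick[i - 1] + 0.5 * (1 + alpha) * (q[i] - q[i - 1])
        quick[i] = max(quick[i], 0.0)                 # quickflow cannot be negative
    return np.clip(q - quick, 0.0, q)                 # baseflow bounded by total flow

def baseflow_chemical_mass_balance(q, c_river, c_runoff=20.0, c_baseflow=600.0):
    """Two-component Cl mass balance: fraction of flow carrying the baseflow signature.
    End-member concentrations (mg/L) are assumed for illustration."""
    frac = (np.asarray(c_river, float) - c_runoff) / (c_baseflow - c_runoff)
    return np.clip(frac, 0.0, 1.0) * np.asarray(q, float)
```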
Automatic computational labeling of glomerular textural boundaries
NASA Astrophysics Data System (ADS)
Ginley, Brandon; Tomaszewski, John E.; Sarder, Pinaki
2017-03-01
The glomerulus, a specialized bundle of capillaries, is the blood filtering unit of the kidney. Each human kidney contains about 1 million glomeruli. Structural damages in the glomerular micro-compartments give rise to several renal conditions, the most severe of which is proteinuria, where excessive blood proteins flow freely to the urine. The sole way to confirm glomerular structural damage in renal pathology is by examining histopathological or immunofluorescence stained needle biopsies under a light microscope. However, this method is extremely tedious and time consuming, and requires manual scoring on the number and volume of structures. Computational quantification of equivalent features promises to greatly ease this manual burden. The largest obstacle to computational quantification of renal tissue is the ability to recognize complex glomerular textural boundaries automatically. Here we present a computational pipeline to accurately identify glomerular boundaries with high precision and accuracy. The computational pipeline employs an integrated approach composed of Gabor filtering, Gaussian blurring, statistical F-testing, and distance transform, and performs significantly better than the standard Gabor-based textural segmentation method. Our integrated approach provides mean accuracy/precision of 0.89/0.97 on n = 200 Hematoxylin and Eosin (HE) glomerulus images, and mean accuracy/precision of 0.88/0.94 on n = 200 Periodic Acid Schiff (PAS) glomerulus images. Respective accuracy/precision of the Gabor filter bank based method is 0.83/0.84 for HE and 0.78/0.8 for PAS. Our method will simplify computational partitioning of glomerular micro-compartments hidden within dense textural boundaries. Automatic quantification of glomeruli will streamline structural analysis in the clinic, and can help realize real-time diagnoses and interventions.
Texture classification of normal tissues in computed tomography using Gabor filters
NASA Astrophysics Data System (ADS)
Dettori, Lucia; Bashir, Alia; Hasemann, Julie
2007-03-01
The research presented in this article is aimed at developing an automated imaging system for classification of normal tissues in medical images obtained from Computed Tomography (CT) scans. Texture features based on a bank of Gabor filters are used to classify the following tissues of interest: liver, spleen, kidney, aorta, trabecular bone, lung, muscle, IP fat, and SQ fat. The approach consists of three steps: convolution of the regions of interest with a bank of 32 Gabor filters (4 frequencies and 8 orientations), extraction of two Gabor texture features per filter (mean and standard deviation), and creation of a Classification and Regression Tree-based classifier that automatically identifies the various tissues. The data set used consists of approximately 1000 DICOM images from normal chest and abdominal CT scans of five patients. The regions of interest were labeled by expert radiologists. Optimal trees were generated using two techniques: 10-fold cross-validation and splitting of the data set into a training and a testing set. In both cases, perfect classification rules were obtained provided enough images were available for training (~65%). All performance measures (sensitivity, specificity, precision, and accuracy) for all regions of interest were at 100%. This significantly improves previous results that used Wavelet, Ridgelet, and Curvelet texture features, yielding accuracy values in the 85%-98% range. The Gabor filters' ability to isolate features at different frequencies and orientations allows for a multi-resolution analysis of texture, essential when dealing with, at times, very subtle differences in the texture of tissues in CT scans.
Offline Performance of the Filter Bank EEW Algorithm in the 2014 M6.0 South Napa Earthquake
NASA Astrophysics Data System (ADS)
Meier, M. A.; Heaton, T. H.; Clinton, J. F.
2014-12-01
Medium-size events like the M6.0 South Napa earthquake are very challenging for EEW: the damage such events produce can be severe, but it is generally confined to relatively small zones around the epicenter and the shaking duration is short. This leaves a very short window for timely EEW alerts. Algorithms that wait for several stations to trigger before sending out EEW alerts are typically not fast enough for these kinds of events, because their blind zone (the zone where strong ground motions start before the warnings arrive) typically covers all or most of the area that experiences strong ground motions. At the same time, single-station algorithms are often too unreliable to provide useful alerts. The filter bank EEW algorithm is a new algorithm designed to provide maximally accurate and precise earthquake parameter estimates with minimum data input, with the goal of producing reliable EEW alerts when only a very small number of stations have been reached by the p-wave. It combines the strengths of single-station and network-based algorithms in that it starts parameter estimates as soon as 0.5 seconds of data are available from the first station, but then perpetually incorporates additional data from the same or from any number of other stations. The algorithm analyzes the time-dependent frequency content of real-time waveforms with a filter bank. It then uses an extensive training data set to find earthquake records from the past that had similar frequency content at a given time since the p-wave onset. The source parameters of the most similar events are used to parameterize a likelihood function for the source parameters of the ongoing event, which is then maximized to find the most likely parameter estimates. Our preliminary results show that the filter bank EEW algorithm correctly estimated the magnitude of the South Napa earthquake to be ~M6 with only 1 second of data at the nearest station to the epicenter. This estimate is then confirmed as updates based on more data from stations at farther distances become available. Because these early estimates saturate at ~M6.5, however, the magnitude estimate would have to be treated as a minimum bound.
Detection of Melanoma Skin Cancer in Dermoscopy Images
NASA Astrophysics Data System (ADS)
Eltayef, Khalid; Li, Yongmin; Liu, Xiaohui
2017-02-01
Malignant melanoma is the most hazardous type of human skin cancer and its incidence has been rapidly increasing. Early detection of malignant melanoma in dermoscopy images is very important and critical, since detection at an early stage can be helpful in curing it. Computer-aided diagnosis systems can be very helpful to facilitate the early detection of cancers for dermatologists. In this paper, we present a novel method for the detection of melanoma skin cancer. To detect hair and other noise in the images, a pre-processing step is carried out by applying a bank of directional filters, and an image inpainting method is then used to fill in the unknown regions. Fuzzy C-Means and Markov Random Field methods are used to delineate the border of the lesion area in the images. The method was evaluated on a dataset of 200 dermoscopic images, and superior results were produced compared to alternative methods.
1981-12-01
Haverhill, a gabion mattress (revetment) underlaid with filter fabric was placed on the bank. (2) Concrete Blocks. Precast concrete blocks with filter... revetment: precast cellular concrete block mattress, used auto tire wall and used auto tire mattress. All three revetment panels included vegetative...
Sensor failure detection system. [for the F100 turbofan engine
NASA Technical Reports Server (NTRS)
Beattie, E. C.; Laprad, R. F.; Mcglone, M. E.; Rock, S. M.; Akhter, M. M.
1981-01-01
Advanced concepts for detecting, isolating, and accommodating sensor failures were studied to determine their applicability to the gas turbine control problem. Five concepts were formulated based upon techniques such as Kalman filters, and a screening process led to the selection of one advanced concept for further evaluation. The selected advanced concept uses a Kalman filter to generate residuals, a weighted sum of squared residuals technique to detect soft failures, likelihood ratio testing of a bank of Kalman filters for isolation, and reconfiguration of the normal-mode Kalman filter by eliminating the failed input to accommodate the failure. The advanced concept was compared to a baseline parameter synthesis technique and was shown to be a viable concept for detecting, isolating, and accommodating sensor failures for gas turbine applications.
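A minimal sketch of the soft-failure detection statistic named above, the weighted sum of squared residuals (WSSR): innovations from the normal-mode Kalman filter are whitened by the innovation covariance, summed over a sliding window, and compared with a threshold. Window length and threshold are illustrative assumptions.

```python
# Windowed WSSR statistic for soft-failure detection (window and threshold assumed).
import numpy as np

def wssr(residuals, S_inv, window=10):
    """residuals: (T, p) innovation sequence; S_inv: (p, p) inverse innovation covariance.
    Returns the windowed WSSR statistic at each time step."""
    per_step = np.einsum("ti,ij,tj->t", residuals, S_inv, residuals)  # r_t^T S^-1 r_t
    kernel = np.ones(window)
    return np.convolve(per_step, kernel, mode="full")[:len(per_step)]  # running window sum

def detect_soft_failure(residuals, S_inv, window=10, threshold=30.0):
    return wssr(residuals, S_inv, window) > threshold
```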
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-108). March 2005. FAN ROOM WITH STAIR TO FILTER BANKS, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
NASA Astrophysics Data System (ADS)
Dikmese, Sener; Srinivasan, Sudharsan; Shaat, Musbah; Bader, Faouzi; Renfors, Markku
2014-12-01
Multicarrier waveforms have been commonly recognized as strong candidates for cognitive radio. In this paper, we study the dynamics of spectrum sensing and spectrum allocation functions in a cognitive radio context using very practical signal models for the primary users (PUs), including the effects of power amplifier nonlinearities. We start by sensing the spectrum with an energy detection-based wideband multichannel spectrum sensing algorithm and continue by investigating optimal resource allocation methods. Along the way, we examine the effects of spectral regrowth due to the inevitable power amplifier nonlinearities of the PU transmitters. The signal model includes frequency selective block-fading channel models for both secondary and primary transmissions. Filter bank-based wideband spectrum sensing techniques are applied for detecting spectral holes, and filter bank-based multicarrier (FBMC) modulation is selected for transmission as an alternative multicarrier waveform to avoid the disadvantage of limited spectral containment of orthogonal frequency-division multiplexing (OFDM)-based multicarrier systems. The optimization technique used for the resource allocation approach considered in this study utilizes the information obtained through spectrum sensing and knowledge of spectrum leakage effects of the underlying waveforms, including a practical power amplifier model for the PU transmitter. This study utilizes a computationally efficient algorithm to maximize the SU link capacity with power and interference constraints. It is seen that the SU transmission capacity depends critically on the spectral containment of the PU waveform, and these effects are quantified in a case study using an 802.11g WLAN scenario.
Medical Image Retrieval Using Multi-Texton Assignment.
Tang, Qiling; Yang, Jirong; Xia, Xianfu
2018-02-01
In this paper, we present a multi-texton representation method for medical image retrieval, which utilizes the locality constraint to encode each filter bank response within its local-coordinate system consisting of the k nearest neighbors in the texton dictionary and subsequently employs the spatial pyramid matching technique to implement feature vector representation. Compared with the traditional nearest-neighbor assignment followed by texton histogram statistics, our strategies reduce the quantization errors in the mapping process and add information about the spatial layout of texton distributions, thus increasing the descriptive power of the image representation. We investigate the effects of different parameters on system performance in order to choose the appropriate ones for our datasets and carry out experiments on the IRMA-2009 medical collection and the mammographic patch dataset. The extensive experimental results demonstrate that the proposed method has superior performance.
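A sketch of locality-constrained coding of a single filter-bank response over its k nearest textons is shown below. This follows the generic LLC-style approximated solution rather than the authors' exact formulation; the function name, regularization constant, and dictionary sizes are assumptions.

```python
import numpy as np

def locality_constrained_code(response, dictionary, k=5, reg=1e-4):
    """Encode one filter-bank response vector over its k nearest textons by
    solving a small regularized least-squares system; dictionary has shape
    (num_textons, dim)."""
    d = np.linalg.norm(dictionary - response, axis=1)
    nn = np.argsort(d)[:k]                  # k nearest textons
    B = dictionary[nn]                      # (k, dim) local basis
    z = B - response                        # shift basis to the response
    C = z @ z.T                             # local covariance
    C += reg * np.trace(C) * np.eye(k)      # regularize for stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                            # codes sum to one
    code = np.zeros(len(dictionary))
    code[nn] = w
    return code

rng = np.random.default_rng(0)
dictionary = rng.normal(size=(64, 24))      # 64 textons, 24-D filter responses
code = locality_constrained_code(rng.normal(size=24), dictionary)
```

Pooling such codes over the cells of a spatial pyramid then yields the final image descriptor used for retrieval.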
Rendezvous radar modification and evaluation. [for space shuttles
NASA Technical Reports Server (NTRS)
1976-01-01
The purpose of this effort was to continue the implementation and evaluation of the changes necessary to add the non-cooperative mode capability with frequency diversity and a doppler filter bank to the Apollo Rendezvous Radar while retaining the cooperative mode capability.
Automatic localization of the nipple in mammograms using Gabor filters and the Radon transform
NASA Astrophysics Data System (ADS)
Chakraborty, Jayasree; Mukhopadhyay, Sudipta; Rangayyan, Rangaraj M.; Sadhu, Anup; Azevedo-Marques, P. M.
2013-02-01
The nipple is an important landmark in mammograms. Detection of the nipple is useful for alignment and registration of mammograms in computer-aided diagnosis of breast cancer. In this paper, a novel approach is proposed for automatic detection of the nipple based on the oriented patterns of the breast tissues present in mammograms. The Radon transform is applied to the oriented patterns obtained by a bank of Gabor filters to detect the linear structures related to the tissue patterns. The detected linear structures are then used to locate the nipple position using the characteristics of convergence of the tissue patterns towards the nipple. The performance of the method was evaluated with 200 scanned-film images from the mini-MIAS database and 150 digital radiography (DR) images from a local database. Average errors of 5.84 mm and 6.36 mm were obtained with respect to the reference nipple location marked by a radiologist for the mini-MIAS and the DR images, respectively.
Wire bonding quality monitoring via refining process of electrical signal from ultrasonic generator
NASA Astrophysics Data System (ADS)
Feng, Wuwei; Meng, Qingfeng; Xie, Youbo; Fan, Hong
2011-04-01
In this paper, a technique for on-line quality detection of ultrasonic wire bonding is developed. The electrical signals from the ultrasonic generator supply, namely, voltage and current, are picked up by a measuring circuit and transformed into digital signals by a data acquisition system. A new feature extraction method is presented to characterize the transient property of the electrical signals and further evaluate the bond quality. The method includes three steps. First, the captured voltage and current are filtered by digital bandpass filter banks to obtain the corresponding subband signals such as fundamental signal, second harmonic, and third harmonic. Second, each subband envelope is obtained using the Hilbert transform for further feature extraction. Third, the subband envelopes are, respectively, separated into three phases, namely, envelope rising, stable, and damping phases, to extract the tiny waveform changes. The different waveform features are extracted from each phase of these subband envelopes. The principal components analysis (PCA) method is used for the feature selection in order to remove redundant information and reduce the dimension of original feature variables. Using the selected features as inputs, an artificial neural network (ANN) is constructed to identify the complex bond fault pattern. By analyzing experimental data with the proposed feature extraction method and neural network, the results demonstrate the advantages of the proposed feature extraction method and the constructed artificial neural network in detecting and identifying bond quality.
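The first two steps (bandpass filter bank and Hilbert-transform envelope) can be sketched in Python as below. The band edges, filter order, and drive frequency are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def subband_envelopes(signal, fs, bands):
    """Split a signal into subbands with Butterworth bandpass filters and
    return the Hilbert envelope of each subband."""
    envelopes = {}
    for name, (lo, hi) in bands.items():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        sub = filtfilt(b, a, signal)              # zero-phase bandpass filtering
        envelopes[name] = np.abs(hilbert(sub))    # analytic-signal envelope
    return envelopes

# Example for a hypothetical 60 kHz ultrasonic drive sampled at 1 MHz
fs = 1.0e6
t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 60e3 * t) + 0.1 * np.sin(2 * np.pi * 120e3 * t)
bands = {"fundamental": (55e3, 65e3), "second_harmonic": (115e3, 125e3)}
env = subband_envelopes(x, fs, bands)
```

Each envelope can then be segmented into rising, stable, and damping phases, from which the per-phase waveform features described above are computed.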
NASA Astrophysics Data System (ADS)
Indik, Nathaniel; Fehrmann, Henning; Harke, Franz; Krishnan, Badri; Nielsen, Alex B.
2018-06-01
Efficient multidimensional template placement is crucial in computationally intensive matched-filtering searches for gravitational waves (GWs). Here, we implement the neighboring cell algorithm (NCA) to improve the detection volume of an existing compact binary coalescence (CBC) template bank. This algorithm has already been successfully applied for a binary millisecond pulsar search in data from the Fermi satellite. It repositions templates from overdense regions to underdense regions and reduces the number of templates that would have been required by a stochastic method to achieve the same detection volume. Our method is readily generalizable to other CBC parameter spaces. Here we apply this method to the aligned-single-spin neutron star-black hole binary coalescence inspiral-merger-ringdown gravitational wave parameter space. We show that the template nudging algorithm can attain the equivalent effectualness of the stochastic method with 12% fewer templates.
NASA Technical Reports Server (NTRS)
Gutierrez, Alberto, Jr.
1995-01-01
This dissertation evaluates receiver-based methods for mitigating the effects due to nonlinear bandlimited signal distortion present in high data rate satellite channels. The effects of the nonlinear bandlimited distortion are illustrated for digitally modulated signals. A lucid development of the low-pass Volterra discrete time model for a nonlinear communication channel is presented. In addition, finite-state machine models are explicitly developed for a nonlinear bandlimited satellite channel. A nonlinear fixed equalizer based on Volterra series has previously been studied for compensation of noiseless signal distortion due to a nonlinear satellite channel. This dissertation studies adaptive Volterra equalizers on a downlink-limited nonlinear bandlimited satellite channel. We employ performance in the mean-square error and probability-of-error senses as figures of merit. In addition, a receiver consisting of a fractionally-spaced equalizer (FSE) followed by a Volterra equalizer (FSE-Volterra) is found to give improvement beyond that gained by the Volterra equalizer. Significant probability of error performance improvement is found for multilevel modulation schemes. Also, it is found that the probability of error improvement is more significant for modulation schemes, both constant amplitude and multilevel, which require higher signal-to-noise ratios (i.e., higher modulation orders) for reliable operation. The maximum likelihood sequence detection (MLSD) receiver for a nonlinear satellite channel, a bank of matched filters followed by a Viterbi detector, serves as a probability of error lower bound for the Volterra and FSE-Volterra equalizers. However, this receiver has not been evaluated for a specific satellite channel. In this work, an MLSD receiver is evaluated for a specific downlink-limited satellite channel. Because of the bank of matched filters, the MLSD receiver may be high in complexity. Consequently, the probability of error performance of a more practical suboptimal MLSD receiver, requiring only a single receive filter, is evaluated.
Translation and Rotation Invariant Multiscale Image Registration
2002-03-01
be computed without human interaction. This allows for the automation of image registration [16]. According to Tashakkori et al. [35], the correlation... Intelligence, 21(10):1074–1081 (October 1999). 34. Strang, G. and T. Nguyen. Wavelets and Filter Banks. Wellesley, Cambridge, 1996. 35. Tashakkori
Methyl tert-butyl ether (MTBE) in finished drinking water in Germany.
Kolb, Axel; Püttmann, Wilhelm
2006-03-01
In the present study 83 finished drinking water samples from 50 cities in Germany were analyzed for methyl tert-butyl ether (MTBE) content with a detection limit of 10 ng/L. The detection frequency was 46% and the concentrations ranged between 17 and 712 ng/L. Highest concentrations were found in the community water systems (CWSs) of Leuna and Spergau in Saxony-Anhalt. These CWSs are supplied with water possibly affected by MTBE contaminated groundwater. MTBE was detected at concentrations lower than 100 ng/L in drinking water supplied by CWSs using bank filtered water from Rhine and Main Rivers. The results from Leuna and Spergau show that large groundwater contaminations in the vicinity of CWSs pose the highest risk for MTBE contamination in drinking water. CWSs using bank filtered water from Rhine and Main Rivers are susceptible to low MTBE contaminations in finished drinking water. All measured MTBE concentrations were below proposed limit values for drinking water.
Parallel digital modem using multirate digital filter banks
NASA Technical Reports Server (NTRS)
Sadr, Ramin; Vaidyanathan, P. P.; Raphaeli, Dan; Hinedi, Sami
1994-01-01
A new class of architectures for an all-digital modem is presented in this report. This architecture, referred to as the parallel receiver (PRX), is based on employing multirate digital filter banks (DFB's) to demodulate, track, and detect the received symbol stream. The resulting architecture is derived, and specifications are outlined for designing the DFB for the PRX. The key feature of this approach is a lower processing rate than either the Nyquist rate or the symbol rate, without any degradation in the symbol error rate. Due to the freedom in choosing the processing rate, the designer is able to arbitrarily select and use digital components, independent of the speed of the integrated circuit technology. The PRX architecture is particularly suited for high data rate applications, and due to the modular structure of the parallel signal path, expansion to even higher data rates is accommodated with ease. Applications of the PRX would include gigabit satellite channels, multiple spacecraft, optical links, interactive cable-TV, telemedicine, code division multiple access (CDMA) communications, and others.
NASA Astrophysics Data System (ADS)
Pham, Dzung L.; Han, Xiao; Rettmann, Maryam E.; Xu, Chenyang; Tosun, Duygu; Resnick, Susan; Prince, Jerry L.
2002-05-01
In previous work, the authors presented a multi-stage procedure for the semi-automatic reconstruction of the cerebral cortex from magnetic resonance images. This method suffered from several disadvantages. First, the tissue classification algorithm used can be sensitive to noise within the image. Second, manual interaction was required for masking out undesired regions of the brain image, such as the ventricles and putamen. Third, iterated median filters were used to perform a topology correction on the initial cortical surface, resulting in an overly smoothed initial surface. Finally, the deformable surface used to converge to the cortex had difficulty capturing narrow gyri. In this work, all four disadvantages of the procedure have been addressed. A more robust tissue classification algorithm is employed and the manual masking step is replaced by an automatic method involving level set deformable models. Instead of iterated median filters, an algorithm developed specifically for topology correction is used. The last disadvantage is addressed using an algorithm that artificially separates adjacent sulcal banks. The new procedure is more automated but also more accurate than the previous one. Its utility is demonstrated by performing a preliminary study on data from the Baltimore Longitudinal Study of Aging.
Attitude determination using an adaptive multiple model filtering Scheme
NASA Technical Reports Server (NTRS)
Lam, Quang; Ray, Surendra N.
1995-01-01
Attitude determination has long been a topic of active research and remains of lasting interest to spacecraft system designers. Its role is to provide a reference for controls such as pointing the directional antennas or solar panels, stabilizing the spacecraft, or maneuvering the spacecraft to a new orbit. The Least Square Estimation (LSE) technique was utilized to provide attitude determination for the Nimbus 6 and G. Despite its poor performance (from an estimation accuracy standpoint), LSE was considered an effective and practical approach to meet the urgent needs and requirements of the 1970s. One reason for the poor performance of the LSE scheme is the lack of dynamic filtering or 'compensation'. In other words, the scheme is based totally on the measurements, and no attempt is made to model the dynamic equations of motion of the spacecraft. We propose an adaptive filtering approach which employs a bank of Kalman filters to perform robust attitude estimation. The proposed approach, whose architecture is depicted, is essentially based on the latest results on the interacting multiple model design framework to handle unknown system noise characteristics or statistics. The concept fundamentally employs a bank of Kalman filters, or submodels. Instead of using fixed values for the system noise statistics of each submodel (per operating condition), as the traditional multiple model approach does, we use an on-line dynamic system noise identifier to 'identify' the system noise level (statistics) and update the filter noise statistics using 'live' information from the sensor model. The advanced noise identifier, whose architecture is also shown, is implemented using an advanced system identifier. To ensure robust performance, the proposed advanced system identifier is further reinforced by a learning system, implemented (in the outer loop) using neural networks, that identifies other unknown quantities such as spacecraft dynamics parameters, gyro biases, dynamic disturbances, or environment variations.
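The multiple-model fusion idea described above can be sketched as follows: several Kalman filters with different process-noise assumptions run in parallel, and their estimates are blended with weights proportional to each filter's innovation likelihood. This is a generic multiple-model adaptive estimation sketch, not the authors' neural-network-reinforced identifier; all names and values are illustrative.

```python
import numpy as np

def mmae_step(filters, z):
    """One step of a multiple-model adaptive estimator. Each entry of
    `filters` is a dict with keys x, P, A, H, Q, R, w (weight)."""
    for f in filters:
        # Predict and update this submodel's Kalman filter
        f["x"] = f["A"] @ f["x"]
        f["P"] = f["A"] @ f["P"] @ f["A"].T + f["Q"]
        y = z - f["H"] @ f["x"]
        S = f["H"] @ f["P"] @ f["H"].T + f["R"]
        K = f["P"] @ f["H"].T @ np.linalg.inv(S)
        f["x"] = f["x"] + K @ y
        f["P"] = (np.eye(len(f["x"])) - K @ f["H"]) @ f["P"]
        # Gaussian likelihood of the innovation under this submodel
        f["like"] = np.exp(-0.5 * float(y.T @ np.linalg.inv(S) @ y)) / \
                    np.sqrt(np.linalg.det(2 * np.pi * S))
    total = sum(f["w"] * f["like"] for f in filters)
    for f in filters:
        f["w"] = f["w"] * f["like"] / total
    return sum(f["w"] * f["x"] for f in filters)   # fused state estimate

# Two submodels that differ only in their assumed process noise level
base = dict(A=np.eye(1), H=np.eye(1), R=np.eye(1) * 0.01, w=0.5)
filters = [dict(base, Q=np.eye(1) * q, x=np.zeros(1), P=np.eye(1)) for q in (1e-4, 1e-1)]
x_hat = mmae_step(filters, z=np.array([0.3]))
```

Over time the weight of the submodel whose noise statistics best match the data approaches one, which is the adaptive behavior the abstract relies on.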
On-board multicarrier demodulator for mobile applications using DSP implementation
NASA Astrophysics Data System (ADS)
Yim, W. H.; Kwan, C. C. D.; Coakley, F. P.; Evans, B. G.
1990-11-01
This paper describes the design and implementation of an on-board multicarrier demodulator using commercial digital signal processors. This is for use in a mobile satellite communication system employing an up-link SCPC/FDMA scheme. Channels are separated by a flexible multistage digital filter bank followed by a channel-multiplexed digital demodulator array. The cross/dot-product design approach for the error detector leads to a new QPSK frequency control algorithm that allows fast acquisition without a special preamble pattern. Timing correction is performed digitally using an extended stack of polyphase sub-filters.
Airborne radar technology for windshear detection
NASA Technical Reports Server (NTRS)
Hibey, Joseph L.; Khalaf, Camille S.
1988-01-01
The objectives and accomplishments of the two-and-a-half year effort to describe how returns from on-board Doppler radar are to be used to detect the presence of a wind shear are reported. The problem is modeled as one of first passage in terms of state variables, the state estimates are generated by a bank of extended Kalman filters working in parallel, and the decision strategy involves the use of a voting algorithm for a series of likelihood ratio tests. The performance issue for filtering is addressed in terms of error-covariance reduction and filter divergence, and the performance issue for detection is addressed in terms of using a probability measure transformation to derive theoretical expressions for the error probabilities of a false alarm and a miss.
Partial differential equation-based localization of a monopole source from a circular array.
Ando, Shigeru; Nara, Takaaki; Levy, Tsukassa
2013-10-01
Wave source localization from a sensor array has long been one of the most active research topics in both theory and application. In this paper, an explicit and time-domain inversion method for the direction and distance of a monopole source from a circular array is proposed. The approach is based on a mathematical technique, the weighted integral method, for signal/source parameter estimation. It begins with an exact form of the source-constraint partial differential equation that describes the unilateral propagation of wide-band waves from a single source, and leads to exact algebraic equations that include circular Fourier coefficients (phase mode measurements) as their coefficients. From them, nearly closed-form, single-shot and multishot algorithms are obtained that are suitable for use with band-pass/differential filter banks. Numerical evaluation and several experimental results obtained using a 16-element circular microphone array are presented to verify the validity of the proposed method.
Pharmacophore screening of the protein data bank for specific binding site chemistry.
Campagna-Slater, Valérie; Arrowsmith, Andrew G; Zhao, Yong; Schapira, Matthieu
2010-03-22
A simple computational approach was developed to screen the Protein Data Bank (PDB) for putative pockets possessing a specific binding site chemistry and geometry. The method employs two commonly used 3D screening technologies, namely identification of cavities in protein structures and pharmacophore screening of chemical libraries. For each protein structure, a pocket finding algorithm is used to extract potential binding sites containing the correct types of residues, which are then stored in a large SDF-formatted virtual library; pharmacophore filters describing the desired binding site chemistry and geometry are then applied to screen this virtual library and identify pockets matching the specified structural chemistry. As an example, this approach was used to screen all human protein structures in the PDB and identify sites having chemistry similar to that of known methyl-lysine binding domains that recognize chromatin methylation marks. The selected genes include known readers of the histone code as well as novel binding pockets that may be involved in epigenetic signaling. Putative allosteric sites were identified on the structures of TP53BP1, L3MBTL3, CHEK1, KDM4A, and CREBBP.
Multimodel Kalman filtering for adaptive nonuniformity correction in infrared sensors.
Pezoa, Jorge E; Hayat, Majeed M; Torres, Sergio N; Rahman, Md Saifur
2006-06-01
We present an adaptive technique for the estimation of nonuniformity parameters of infrared focal-plane arrays that is robust with respect to changes and uncertainties in scene and sensor characteristics. The proposed algorithm is based on using a bank of Kalman filters in parallel. Each filter independently estimates state variables comprising the gain and the bias matrices of the sensor, according to its own dynamic-model parameters. The supervising component of the algorithm then generates the final estimates of the state variables by forming a weighted superposition of all the estimates rendered by each Kalman filter. The weights are computed and updated iteratively, according to the a posteriori-likelihood principle. The performance of the estimator and its ability to compensate for fixed-pattern noise is tested using both simulated and real data obtained from two cameras operating in the mid- and long-wave infrared regime.
A 10 micron heterodyne receiver for ultra high resolution astronomical spectroscopy
NASA Technical Reports Server (NTRS)
Buhl, D.; Chin, G.; Faris, J.; Kostiuk, T.; Mumma, M. J.; Zipoy, D.
1980-01-01
An improved CO2 laser heterodyne spectrometer is examined. The present system uses reflective optics to eliminate refocusing at different wavelengths, and the local oscillator is a line-center-stabilized isotopic CO2 laser. A tunable diffraction grating makes possible easy and rapid selection of over 50 transitions per isotope of CO2. The IF (0 to 1.6 GHz) from the HgCdTe photomixer is analyzed by a 128-channel filter bank, consisting of 64 tunable 5-MHz filters and 64 fixed 25-MHz RF filters. These filters provide resolving powers of about 1,000,000 to 10,000,000 and velocity resolution of 50 to 250 m/sec; their output is synchronously detected, integrated, multiplexed and stored in a buffer memory for the desired integration period. Kitt Peak observations show the wide spectral coverage, wide mixer and electronics bandwidth, and high sensitivity of the system.
Aircraft Engine Sensor/Actuator/Component Fault Diagnosis Using a Bank of Kalman Filters
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L. (Technical Monitor)
2003-01-01
In this report, a fault detection and isolation (FDI) system which utilizes a bank of Kalman filters is developed for aircraft engine sensor and actuator FDI in conjunction with the detection of component faults. This FDI approach uses multiple Kalman filters, each of which is designed based on a specific hypothesis for detecting a specific sensor or actuator fault. In the event that a fault does occur, all filters except the one using the correct hypothesis will produce large estimation errors, from which a specific fault is isolated. In the meantime, a set of parameters that indicate engine component performance is estimated for the detection of abrupt degradation. The performance of the FDI system is evaluated against a nonlinear engine simulation for various engine faults at cruise operating conditions. In order to mimic the real engine environment, the nonlinear simulation is executed not only at the nominal, or healthy, condition but also at aged conditions. When the FDI system designed at the healthy condition is applied to an aged engine, the effectiveness of the FDI system is impacted by the mismatch in the engine health condition. Depending on its severity, this mismatch can cause the FDI system to generate incorrect diagnostic results, such as false alarms and missed detections. To partially recover the nominal performance, two approaches, which incorporate information regarding the engine's aging condition in the FDI system, will be discussed and evaluated. The results indicate that the proposed FDI system is promising for reliable diagnostics of aircraft engines.
NASA Astrophysics Data System (ADS)
Yongye, Austin B.; Bender, Andreas; Martínez-Mayorga, Karina
2010-08-01
Representing the 3D structures of ligands in virtual screenings via multi-conformer ensembles can be computationally intensive, especially for compounds with a large number of rotatable bonds. Thus, reducing the size of multi-conformer databases and the number of query conformers, while simultaneously reproducing the bioactive conformer with good accuracy, is of crucial interest. While clustering and RMSD filtering methods are employed in existing conformer generators, the novelty of this work is the inclusion of a clustering scheme (NMRCLUST) that does not require a user-defined cut-off value. This algorithm simultaneously optimizes the number and the average spread of the clusters. Here we describe and test four inter-dependent approaches for selecting computer-generated conformers, namely: OMEGA, NMRCLUST, RMS filtering and averaged-RMS filtering. The bioactive conformations of 65 selected ligands were extracted from the corresponding protein:ligand complexes from the Protein Data Bank, including eight ligands that adopted dissimilar bound conformations within different receptors. We show that NMRCLUST can be employed to further filter OMEGA-generated conformers while maintaining biological relevance of the ensemble. It was observed that NMRCLUST (containing on average 10 times fewer conformers per compound) performed nearly as well as OMEGA, and both outperformed RMS filtering and averaged-RMS filtering in terms of identifying the bioactive conformations with excellent and good matches (0.5 < RMSD < 1.0 Å). Furthermore, we propose thresholds for OMEGA root-mean square filtering depending on the number of rotors in a compound: 0.8, 1.0 and 1.4 for structures with low (1-4), medium (5-9) and high (10-15) numbers of rotatable bonds, respectively. The protocol employed is general and can be applied to reduce the number of conformers in multi-conformer compound collections and alleviate the complexity of downstream data processing in virtual screening experiments.
A New Method to Cancel RFI---The Adaptive Filter
NASA Astrophysics Data System (ADS)
Bradley, R.; Barnbaum, C.
1996-12-01
An increasing amount of precious radio frequency spectrum in the VHF, UHF, and microwave bands is being utilized each year to support new commercial and military ventures, and all have the potential to interfere with radio astronomy observations. Some radio spectral lines of astronomical interest occur outside the protected radio astronomy bands and are unobservable due to heavy interference. Conventional approaches to deal with RFI include legislation, notch filters, RF shielding, and post-processing techniques. Although these techniques are somewhat successful, each suffers from insufficient interference cancellation. One concept of interference excision that has not been used before in radio astronomy is adaptive interference cancellation. The concept of adaptive interference canceling was first introduced in the mid-1970s as a way to reduce unwanted noise in low frequency (audio) systems. Examples of such systems include the canceling of maternal ECG in fetal electrocardiography and the reduction of engine noise in the passenger compartment of automobiles. Only recently have high-speed digital filter chips made adaptive filtering possible in a bandwidth as large as a few megahertz, finally opening the door to astronomical uses. The system consists of two receivers: the main beam of the radio telescope receives the desired signal corrupted by RFI coming in the sidelobes, and the reference antenna receives only the RFI. The reference antenna signal is processed using a digital adaptive filter and then subtracted from the signal in the main beam, thus producing the system output. The weights of the digital filter are adjusted by way of an algorithm that minimizes, in a least-squares sense, the power output of the system. Through an adaptive-iterative process, the interference canceler will lock onto the RFI and the filter will adjust itself to minimize the effect of the RFI at the system output. We are building a prototype 100 MHz receiver and will measure the cancellation effectiveness of the system on the 140 ft telescope at Green Bank Observatory.
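A minimal sketch of such a two-channel adaptive canceller using the least-mean-squares (LMS) weight update is shown below. The filter length, step size, and toy signals are illustrative assumptions; the actual hardware described above operates on much wider bandwidths.

```python
import numpy as np

def lms_cancel(primary, reference, num_taps=32, mu=0.01):
    """Adaptive interference canceller: an LMS FIR filter shapes the
    reference (RFI-only) channel to match the interference in the primary
    channel; the system output is the residual."""
    w = np.zeros(num_taps)
    out = np.zeros_like(primary)
    for n in range(num_taps, len(primary)):
        x = reference[n - num_taps:n][::-1]   # most recent reference samples first
        y = w @ x                              # filter output = RFI estimate
        e = primary[n] - y                     # residual = astronomy signal + leftover RFI
        w += 2 * mu * e * x                    # LMS weight update (minimizes output power)
        out[n] = e
    return out, w

# Toy example: the same sinusoidal RFI reaches both antennas with different gain/phase
n = np.arange(20000)
rfi = np.sin(0.3 * n)                                   # reference antenna: RFI only
primary = 0.05 * np.random.randn(len(n)) + 0.8 * np.sin(0.3 * n - 1.0)
cleaned, w = lms_cancel(primary, rfi)
```

After convergence the filter has learned the gain and phase difference between the two paths, so the RFI is largely removed while the uncorrelated astronomical signal passes through.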
2013-01-01
Background Accurate and complete identification of mobile elements is a challenging task in the current era of sequencing, given their large numbers and frequent truncations. Group II intron retroelements, which consist of a ribozyme and an intron-encoded protein (IEP), are usually identified in bacterial genomes through their IEP; however, the RNA component that defines the intron boundaries is often difficult to identify because of a lack of strong sequence conservation corresponding to the RNA structure. Compounding the problem of boundary definition is the fact that a majority of group II intron copies in bacteria are truncated. Results Here we present a pipeline of 11 programs that collect and analyze group II intron sequences from GenBank. The pipeline begins with a BLAST search of GenBank using a set of representative group II IEPs as queries. Subsequent steps download the corresponding genomic sequences and flanks, filter out non-group II introns, assign introns to phylogenetic subclasses, filter out incomplete and/or non-functional introns, and assign IEP sequences and RNA boundaries to the full-length introns. In the final step, the redundancy in the data set is reduced by grouping introns into sets of ≥95% identity, with one example sequence chosen to be the representative. Conclusions These programs should be useful for comprehensive identification of group II introns in sequence databases as data continue to rapidly accumulate. PMID:24359548
Joint channel/frequency offset estimation and correction for coherent optical FBMC/OQAM system
NASA Astrophysics Data System (ADS)
Wang, Daobin; Yuan, Lihua; Lei, Jingli; Wu, Gang; Li, Suoping; Ding, Runqi; Wang, Dongye
2017-12-01
In this paper, we focus on analysis of the preamble-based joint estimation for channel and laser-frequency offset (LFO) in coherent optical filter bank multicarrier systems with offset quadrature amplitude modulation (CO-FBMC/OQAM). In order to reduce the noise impact on the estimation accuracy, we proposed an estimation method based on inter-frame averaging. This method averages the cross-correlation function of real-valued pilots within multiple FBMC frames. The laser-frequency offset is estimated according to the phase of this average. After correcting LFO, the final channel response is also acquired by averaging channel estimation results within multiple frames. The principle of the proposed method is analyzed theoretically, and the preamble structure is thoroughly designed and optimized to suppress the impact of inherent imaginary interference (IMI). The effectiveness of our method is demonstrated numerically using different fiber and LFO values. The obtained results show that the proposed method can improve transmission performance significantly.
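The core of the estimator, taking the phase of an averaged cross-correlation between repeated pilots, can be sketched generically as follows. This does not reproduce the FBMC/OQAM preamble design or the imaginary-interference suppression; the frame layout, pilot indices, and numbers are assumptions.

```python
import numpy as np

def estimate_freq_offset(frames, pilot_a_idx, pilot_b_idx, spacing, fs):
    """Estimate a carrier-frequency offset (Hz) from the phase of the
    cross-correlation between two identical pilot blocks separated by
    `spacing` samples, averaged over several frames to suppress noise."""
    corr = 0.0 + 0.0j
    for frame in frames:
        a = frame[pilot_a_idx]
        b = frame[pilot_b_idx]
        corr += np.vdot(a, b)                 # sum of conj(a) * b
    return np.angle(corr) / (2 * np.pi * spacing) * fs

# Toy check: a pure 150 Hz offset on otherwise identical pilot blocks
fs, spacing, df = 1.0e6, 64, 150.0
n = np.arange(256)
frame = np.exp(2j * np.pi * df * n / fs)
est = estimate_freq_offset([frame], np.arange(0, 64), np.arange(64, 128), spacing, fs)
```

Averaging the correlation across frames before taking the phase, rather than averaging per-frame phase estimates, is what gives the noise robustness emphasized in the abstract.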
Filtering of the Radon transform to enhance linear signal features via wavelet pyramid decomposition
NASA Astrophysics Data System (ADS)
Meckley, John R.
1995-09-01
The information content in many signal processing applications can be reduced to a set of linear features in a 2D signal transform. Examples include the narrowband lines in a spectrogram, ship wakes in a synthetic aperture radar image, and blood vessels in a medical computer-aided tomography scan. The line integrals that generate the values of the projections of the Radon transform can be characterized as a bank of matched filters for linear features. This localization of energy in the Radon transform for linear features can be exploited to enhance these features and to reduce noise by filtering the Radon transform with a filter explicitly designed to pass only linear features, and then reconstructing a new 2D signal by inverting the new filtered Radon transform (i.e., via filtered backprojection). Previously used methods for filtering the Radon transform include Fourier based filtering (a 2D elliptical Gaussian linear filter) and a nonlinear filter ((Radon xfrm)**y with y >= 2.0). Both of these techniques suffer from the mismatch of the filter response to the true functional form of the Radon transform of a line. The Radon transform of a line is not a point but is a function of the Radon variables (rho, theta) and the total line energy. This mismatch leads to artifacts in the reconstructed image and a reduction in achievable processing gain. The Radon transform for a line is computed as a function of angle and offset (rho, theta) and the line length. The 2D wavelet coefficients are then compared for the Haar wavelets and the Daubechies wavelets. These filter responses are used as frequency filters for the Radon transform. The filtering is performed on the wavelet pyramid decomposition of the Radon transform by detecting the most likely positions of lines in the transform and then by convolving the local area with the appropriate response and zeroing the pyramid coefficients outside of the response area. The response area is defined to contain 95% of the total wavelet coefficient energy. The detection algorithm provides an estimate of the line offset, orientation, and length that is then used to index the appropriate filter shape. Additional wavelet pyramid decomposition is performed in areas of high energy to refine the line position estimate. After filtering, the new Radon transform is generated by inverting the wavelet pyramid. The Radon transform is then inverted by filtered backprojection to produce the final 2D signal estimate with the enhanced linear features. The wavelet-based method is compared to both the Fourier and the nonlinear filtering with examples of sparse and dense shapes in imaging, acoustics and medical tomography with test images of noisy concentric lines, a real spectrogram of a blow fish (a very nonstationary spectrum), and the Shepp Logan Computer Tomography phantom image. Both qualitative and derived quantitative measures demonstrate the improvement of wavelet-based filtering. Additional research is suggested based on these results. Open questions include what level(s) to use for detection and filtering because multiple-level representations exist. The lower levels are smoother at reduced spatial resolution, while the higher levels provide better response to edges. Several examples are discussed based on analytical and phenomenological arguments.
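For orientation, a crude Python sketch of Radon-domain line enhancement is given below. It implements the simple nonlinear (Radon**y) emphasis mentioned above as prior art, not the paper's wavelet-pyramid filtering, and assumes a square input image; scikit-image's radon/iradon provide the transform and filtered backprojection.

```python
import numpy as np
from skimage.transform import radon, iradon

def enhance_lines(image, power=2.0):
    """Crude linear-feature enhancement: emphasize high-energy Radon bins
    with a nonlinear power law, then invert by filtered backprojection."""
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = radon(image, theta=theta, circle=False)
    emphasized = np.sign(sinogram) * np.abs(sinogram) ** power
    # Rescale so the overall amplitude stays comparable to the original
    emphasized *= np.abs(sinogram).max() / (np.abs(emphasized).max() + 1e-12)
    return iradon(emphasized, theta=theta, circle=False, output_size=image.shape[0])
```

The wavelet-based method described in the abstract replaces the blunt power-law emphasis with a filter matched to the (rho, theta)-dependent shape of a line's Radon signature, which is what reduces the reconstruction artifacts.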
Deen, John; Cano, Jean Paul; Batista, Laura; Pijoan, Carlos
2006-01-01
Abstract The purpose of this study was to compare 4 methods for the reduction of aerosol transmission of Porcine reproductive and respiratory syndrome virus (PRRSV): high-efficiency particulate air (HEPA) filtration, 2×-low-cost filtration, bag filtration, and use of a filter tested against particles derived from dioctylphthalate (DOP). The HEPA-filtration system used a prefilter screen, a bag filter (Eurovent [EU] 8 rating), and a HEPA filter (EU13 rating). The low-cost-filtration system contained mosquito netting (prefilter), 2 fiberglass furnace filters, and 2 electrostatic furnace filters. Bag filtration involved the use of a filter rated EU8 and a minimum efficiency reporting value (MERV) of 14. The 95%-DOP, 0.3-μm-filtration system involved a pleat-in-pleat V-bank disposable filter with a 95% efficiency rating for particles 0.3 μm or greater in diameter and ratings of EU9 and MERV 15. No form of intervention was used in the control group. The experimental facilities consisted of 2 chambers connected by a 1.3-m-long duct containing the treatments. Recipient pigs, housed in chamber 2, were exposed to artificial aerosols created by a mechanically operated mister containing modified live PRRSV vaccine located in chamber 1. Aerosol transmission of PRRSV occurred in 0 of the 10 HEPA-filtration replicates, 2 of the 10 bag-filtration replicates, 4 of the 10 low-cost-filtration replicates, 0 of the 10 95%-DOP, 0.3-μm-filtration replicates, and all 10 of the control replicates. Using a similar approach, we further evaluated the HEPA- and 95%-DOP, 0.3-μm-filtration systems. Infection was not observed in any of the 76 HEPA-filtration replicates but was observed in 2 of the 76 95%-DOP, 0.3-μm replicates and 42 of the 50 control replicates. Although the difference between the 95%-DOP, 0.3-μm and control replicates was significant (P < 0.0005), so was the level of failure of the 95%-DOP, 0.3-μm system (P = 0.02). In conclusion, under the conditions of this study, some methods of air filtration were significantly better than others in reducing aerosol transmission of PRRSV, and HEPA filtration was the only system that completely prevented transmission. PMID:16850938
Novel palmprint representations for palmprint recognition
NASA Astrophysics Data System (ADS)
Li, Hengjian; Dong, Jiwen; Li, Jinping; Wang, Lei
2015-02-01
In this paper, we propose a novel palmprint recognition algorithm. Firstly, the palmprint images are represented by anisotropic filters. The filters are built on Gaussian functions along one direction, and on the second derivative of Gaussian functions in the orthogonal direction. This choice is motivated by the optimal joint spatial and frequency localization of the Gaussian kernel. Therefore, they can better approximate the edges or lines of palmprint images. A palmprint image is processed with a bank of anisotropic filters at different scales and rotations for robust palmprint feature extraction. Once these features are extracted, subspace analysis is applied to the feature vectors for dimension reduction as well as class separability. Experimental results on a public palmprint database show that the accuracy is improved by the proposed novel representations, compared with Gabor filters.
Improved EEG Event Classification Using Differential Energy.
Harati, A; Golmohammadi, M; Lopez, S; Obeid, I; Picone, J
2015-12-01
Feature extraction for automatic classification of EEG signals typically relies on time frequency representations of the signal. Techniques such as cepstral-based filter banks or wavelets are popular analysis techniques in many signal processing applications including EEG classification. In this paper, we present a comparison of a variety of approaches to estimating and postprocessing features. To further aid in discrimination of periodic signals from aperiodic signals, we add a differential energy term. We evaluate our approaches on the TUH EEG Corpus, which is the largest publicly available EEG corpus and an exceedingly challenging task due to the clinical nature of the data. We demonstrate that a variant of a standard filter bank-based approach, coupled with first and second derivatives, provides a substantial reduction in the overall error rate. The combination of differential energy and derivatives produces a 24 % absolute reduction in the error rate and improves our ability to discriminate between signal events and background noise. This relatively simple approach proves to be comparable to other popular feature extraction approaches such as wavelets, but is much more computationally efficient.
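A sketch of appending first/second derivatives ("delta" features) and a differential energy term to a per-frame feature matrix is shown below. The regression window length and the circular edge handling are simplifications, and the exact differential energy definition here (local max minus min of frame energy) is an illustrative assumption rather than the paper's formula.

```python
import numpy as np

def add_deltas_and_diff_energy(features, energy, k=3):
    """Append first/second temporal derivatives and a differential energy
    term. `features` is (num_frames, num_coeffs); `energy` is per-frame."""
    def delta(x):
        # Regression-based derivative (circular edge handling for brevity)
        num = sum(i * (np.roll(x, -i, axis=0) - np.roll(x, i, axis=0))
                  for i in range(1, k + 1))
        return num / (2 * sum(i * i for i in range(1, k + 1)))
    d1 = delta(features)
    d2 = delta(d1)
    # Differential energy: spread of the frame energy within a local window
    pad = np.pad(energy, k, mode="edge")
    diff_e = np.array([pad[i:i + 2 * k + 1].max() - pad[i:i + 2 * k + 1].min()
                       for i in range(len(energy))])
    return np.hstack([features, d1, d2, diff_e[:, None]])

frames = np.random.randn(100, 20)           # e.g. 20 filter-bank coefficients per frame
energy = (frames ** 2).mean(axis=1)
extended = add_deltas_and_diff_energy(frames, energy)   # shape (100, 61)
```

The derivative terms capture signal dynamics while the differential energy term highlights bursty, aperiodic events against background activity, which is the discrimination the abstract targets.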
Link performance model for filter bank based multicarrier systems
NASA Astrophysics Data System (ADS)
Petrov, Dmitry; Oborina, Alexandra; Giupponi, Lorenza; Stitz, Tobias Hidalgo
2014-12-01
This paper presents a complete link level abstraction model for link quality estimation on the system level of filter bank multicarrier (FBMC)-based networks. The application of mean mutual information per coded bit (MMIB) approach is validated for the FBMC systems. The considered quality measure of the resource element for the FBMC transmission is the received signal-to-noise-plus-distortion ratio (SNDR). Simulation results of the proposed link abstraction model show that the proposed approach is capable of estimating the block error rate (BLER) accurately, even when the signal is propagated through the channels with deep and frequent fades, as it is the case for the 3GPP Hilly Terrain (3GPP-HT) and Enhanced Typical Urban (ETU) models. The FBMC-related results of link level simulations are compared with cyclic prefix orthogonal frequency division multiplexing (CP-OFDM) analogs. Simulation results are also validated through the comparison to reference publicly available results. Finally, the steps of link level abstraction algorithm for FBMC are formulated and its application for system level simulation of a professional mobile radio (PMR) network is discussed.
Usuda, Takashi; Kobayashi, Naoki; Takeda, Sunao; Kotake, Yoshifumi
2010-01-01
We have developed a non-invasive blood pressure monitor which can measure blood pressure quickly and robustly. This monitor combines two measurement modes: linear inflation and linear deflation. In the inflation mode, we realized a faster measurement with a rapid inflation rate. In the deflation mode, we realized robust noise reduction. When there is neither noise nor arrhythmia, the inflation mode incorporated in this monitor provides precise, quick and comfortable measurement. If the inflation mode fails to calculate an appropriate blood pressure due to body movement or arrhythmia, the monitor switches automatically to the deflation mode and measures blood pressure using digital signal processing techniques such as wavelet analysis, filter banks, and filtering combined with FFT and inverse FFT. The inflation mode succeeded in 2440 measurements out of 3099 measurements (79%) in an operating room and a rehabilitation room. The newly designed blood pressure monitor provides the fastest measurement for patients with normal circulation and robust measurement for patients with body movement or severe arrhythmia. This fast measurement method also provides comfort for patients.
A Bio-Realistic Analog CMOS Cochlea Filter With High Tunability and Ultra-Steep Roll-Off.
Wang, Shiwei; Koickal, Thomas Jacob; Hamilton, Alister; Cheung, Rebecca; Smith, Leslie S
2015-06-01
This paper presents the design and experimental results of a cochlea filter in analog very large scale integration (VLSI) which highly resembles physiologically measured response of the mammalian cochlea. The filter consists of three specialized sub-filter stages which respectively provide passive response in low frequencies, actively tunable response in mid-band frequencies and ultra-steep roll-off at transition frequencies from pass-band to stop-band. The sub-filters are implemented in balanced ladder topology using floating active inductors. Measured results from the fabricated chip show that wide range of mid-band tuning including gain tuning of over 20 dB, Q factor tuning from 2 to 19 as well as the bio-realistic center frequency shift are achieved by adjusting only one circuit parameter. Besides, the filter has an ultra-steep roll-off reaching over 300 dB/dec. By changing biasing currents, the filter can be configured to operate with center frequencies from 31 Hz to 8 kHz. The filter is 9th order, consumes 59.5 ∼ 90.0 μW power and occupies 0.9 mm2 chip area. A parallel bank of the proposed filter can be used as the front-end in hearing prosthesis devices, speech processors as well as other bio-inspired auditory systems owing to its bio-realistic behavior, low power consumption and small size.
Face-iris multimodal biometric scheme based on feature level fusion
NASA Astrophysics Data System (ADS)
Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing; He, Fei
2015-11-01
Unlike score level fusion, feature level fusion demands all the features extracted from unimodal traits with high distinguishability, as well as homogeneity and compatibility, which is difficult to achieve. Therefore, most multimodal biometric research focuses on score level fusion, whereas few investigate feature level fusion. We propose a face-iris recognition method based on feature level fusion. We build a special two-dimensional-Gabor filter bank to extract local texture features from face and iris images, and then transform them by histogram statistics into an energy-orientation variance histogram feature with lower dimensions and higher distinguishability. Finally, through a fusion-recognition strategy based on principal components analysis and support vector machine (FRSPS), feature level fusion and one-to-n identification are accomplished. The experimental results demonstrate that this method can not only effectively extract face and iris features but also provide higher recognition accuracy. Compared with some state-of-the-art fusion methods, the proposed method has a significant performance advantage.
Detection and Classification of Whale Acoustic Signals
NASA Astrophysics Data System (ADS)
Xian, Yin
This dissertation focuses on two vital challenges in relation to whale acoustic signals: detection and classification. In detection, we evaluated the influence of the uncertain ocean environment on the spectrogram-based detector, and derived the likelihood ratio of the proposed Short Time Fourier Transform detector. Experimental results showed that the proposed detector outperforms detectors based on the spectrogram. The proposed detector is more sensitive to environmental changes because it includes phase information. In classification, our focus is on finding a robust and sparse representation of whale vocalizations. Because whale vocalizations can be modeled as polynomial phase signals, we can represent the whale calls by their polynomial phase coefficients. In this dissertation, we used the Weyl transform to capture chirp rate information, and used a two-dimensional feature set to represent whale vocalizations globally. Experimental results showed that our Weyl feature set outperforms chirplet coefficients and MFCC (Mel Frequency Cepstral Coefficients) when applied to our collected data. Since whale vocalizations can be represented by polynomial phase coefficients, it is plausible that the signals lie on a manifold parameterized by these coefficients. We also studied the intrinsic structure of high dimensional whale data by exploiting its geometry. Experimental results showed that nonlinear mappings such as Laplacian Eigenmap and ISOMAP outperform linear mappings such as PCA and MDS, suggesting that the whale acoustic data is nonlinear. We also explored deep learning algorithms on whale acoustic data. We built each layer as convolutions with either a PCA filter bank (PCANet) or a DCT filter bank (DCTNet). With the DCT filter bank, each layer has a different time-frequency scale representation, and from this, one can extract different physical information. Experimental results showed that our PCANet and DCTNet achieve a high classification rate on the whale vocalization data set. The word error rate of the DCTNet feature is similar to the MFSC in speech recognition tasks, suggesting that the convolutional network is able to reveal acoustic content of speech signals.
NASA Technical Reports Server (NTRS)
Hackett, Timothy M.; Bilen, Sven G.; Ferreira, Paulo Victor R.; Wyglinski, Alexander M.; Reinhart, Richard C.
2016-01-01
In a communications channel, the space environment between a spacecraft and an Earth ground station can potentially cause the loss of a data link or at least degrade its performance due to atmospheric effects, shadowing, multipath, or other impairments. In adaptive and coded modulation, the signal power level at the receiver can be used in order to choose a modulation-coding technique that maximizes throughput while meeting bit error rate (BER) and other performance requirements. It is the goal of this research to implement a generalized interacting multiple model (IMM) filter based on Kalman filters for improved received power estimation on software-defined radio (SDR) technology for satellite communications applications. The IMM filter has been implemented in Verilog consisting of a customizable bank of Kalman filters for choosing between performance and resource utilization. Each Kalman filter can be implemented using either solely a Schur complement module (for high area efficiency) or with Schur complement, matrix multiplication, and matrix addition modules (for high performance). These modules were simulated and synthesized for the Virtex II platform on the JPL Radio Experimenter Development System (EDS) at NASA Glenn Research Center. The results for simulation, synthesis, and hardware testing are presented.
Estimation of the center frequency of the highest modulation filter.
Moore, Brian C J; Füllgrabe, Christian; Sek, Aleksander
2009-02-01
For high-frequency sinusoidal carriers, the threshold for detecting sinusoidal amplitude modulation increases when the signal modulation frequency increases above about 120 Hz. Using the concept of a modulation filter bank, this effect might be explained by (1) a decreasing sensitivity or greater internal noise for modulation filters with center frequencies above 120 Hz; and (2) a limited span of center frequencies of the modulation filters, the top filter being tuned to about 120 Hz. The second possibility was tested by measuring modulation masking in forward masking using an 8 kHz sinusoidal carrier. The signal modulation frequency was 80, 120, or 180 Hz and the masker modulation frequencies covered a range above and below each signal frequency. Four highly trained listeners were tested. For the 80-Hz signal, the signal threshold was usually maximal when the masker frequency equaled the signal frequency. For the 180-Hz signal, the signal threshold was maximal when the masker frequency was below the signal frequency. For the 120-Hz signal, two listeners showed the former pattern, and two showed the latter pattern. The results support the idea that the highest modulation filter has a center frequency in the range 100-120 Hz.
NASA Astrophysics Data System (ADS)
Outerbridge, Gregory John, II
Pose estimation techniques have been developed on both optical and digital correlator platforms to aid in the autonomous rendezvous and docking of spacecraft. This research has focused on the optical architecture, which utilizes high-speed bipolar-phase grayscale-amplitude spatial light modulators as the image and correlation filter devices. The optical approach has the primary advantage of optical parallel processing: an extremely fast and efficient way of performing complex correlation calculations. However, the constraints imposed on optically implementable filters make optical-correlator-based pose estimation technically incompatible with the popular weighted composite filter designs successfully used on the digital platform. This research employs a much simpler "bank of filters" approach to optical pose estimation that exploits the inherent efficiency of optical correlation devices. A novel logarithmically mapped optically implementable matched filter combined with a pose search algorithm resulted in sub-degree standard deviations in angular pose estimation error. These filters were extremely simple to generate, requiring no complicated training sets, and resulted in excellent performance even in the presence of significant background noise. Common edge detection and scaling of the input image was the only image pre-processing necessary for accurate pose detection at all alignment distances of interest.
Implementation of real-time digital signal processing systems
NASA Technical Reports Server (NTRS)
Narasimha, M.; Peterson, A.; Narayan, S.
1978-01-01
Special purpose hardware implementation of DFT Computers and digital filters is considered in the light of newly introduced algorithms and IC devices. Recent work by Winograd on high-speed convolution techniques for computing short length DFT's, has motivated the development of more efficient algorithms, compared to the FFT, for evaluating the transform of longer sequences. Among these, prime factor algorithms appear suitable for special purpose hardware implementations. Architectural considerations in designing DFT computers based on these algorithms are discussed. With the availability of monolithic multiplier-accumulators, a direct implementation of IIR and FIR filters, using random access memories in place of shift registers, appears attractive. The memory addressing scheme involved in such implementations is discussed. A simple counter set-up to address the data memory in the realization of FIR filters is also described. The combination of a set of simple filters (weighting network) and a DFT computer is shown to realize a bank of uniform bandpass filters. The usefulness of this concept in arriving at a modular design for a million channel spectrum analyzer, based on microprocessors, is discussed.
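The observation that a weighting network followed by a DFT realizes a bank of uniform bandpass filters can be sketched as a windowed, hopped FFT: each FFT bin is one channel of the filter bank, producing one output sample per hop. The channel count, hop size, and Hann weighting below are illustrative choices, not the paper's design.

```python
import numpy as np

def uniform_filter_bank(x, num_channels=64, hop=32, window=None):
    """Weighting network + DFT as a uniform bank of bandpass filters: each
    column of the returned array is the decimated output of one channel."""
    if window is None:
        window = np.hanning(num_channels)     # the "weighting network"
    frames = []
    for start in range(0, len(x) - num_channels + 1, hop):
        seg = x[start:start + num_channels] * window
        frames.append(np.fft.fft(seg))        # one output sample per channel
    return np.array(frames)                    # shape: (num_frames, num_channels)

x = np.random.randn(4096)
channels = uniform_filter_bank(x)
```

Longer weighting filters (spanning several FFT lengths) sharpen the channel responses without changing this basic structure, which is what makes the approach attractive for very large channelizers such as the spectrum analyzer mentioned above.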
Subband Approach to Bandlimited Crosstalk Cancellation System in Spatial Sound Reproduction
NASA Astrophysics Data System (ADS)
Bai, Mingsian R.; Lee, Chih-Chung
2006-12-01
Crosstalk cancellation system (CCS) plays a vital role in spatial sound reproduction using multichannel loudspeakers. However, this technique is still not widely used in practical applications due to its heavy computational load. To reduce the computational load, a bandlimited CCS is presented in this paper on the basis of a subband filtering approach. A pseudoquadrature mirror filter (QMF) bank is employed in the implementation of CCS filters, which are bandlimited to 6 kHz, where human localization is most sensitive. In addition, a frequency-dependent regularization scheme is adopted in designing the CCS inverse filters. To justify the proposed system, subjective listening experiments were undertaken in an anechoic room. The experiments include two parts: a source localization test and a sound quality test. Analysis of variance (ANOVA) is applied to process the data and assess the statistical significance of the subjective experiments. The results indicate that the bandlimited CCS performed comparably to the fullband CCS, whereas the computational load was reduced by approximately eighty percent.
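A minimal sketch of frequency-dependent regularized inversion for a single transfer function is given below; in the real CCS problem this becomes a regularized matrix inversion per frequency for the loudspeaker-to-ear transfer matrix. The regularization profile, band limit, and toy impulse response are illustrative assumptions.

```python
import numpy as np

def regularized_inverse_filter(h, fs, n_fft=1024, beta_low=1e-3, beta_high=1e-1, f_limit=6000.0):
    """Design a Tikhonov-regularized inverse filter for impulse response h.
    The regularization grows above f_limit, so the inversion is aggressive
    only in the band where localization cues matter (here ~6 kHz)."""
    H = np.fft.rfft(h, n_fft)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    beta = np.where(freqs <= f_limit, beta_low, beta_high)
    H_inv = np.conj(H) / (np.abs(H) ** 2 + beta)     # regularized inverse spectrum
    return np.fft.irfft(H_inv, n_fft)

fs = 48000
h = np.zeros(256); h[0] = 1.0; h[40] = 0.5           # toy acoustic path with one reflection
g = regularized_inverse_filter(h, fs)
```

Raising the regularization outside the controlled band suppresses the large filter gains that would otherwise amplify noise and coloration, which is the same motivation behind the frequency-dependent scheme described above.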
The Development of improved willow clones for eastern North America
R. F. Kopp; L. B. Smart; C. A. Maynard; J. G. Isebrands; G. A. Tuskan; L. P. Abrahamson
2001-01-01
Efforts aimed at genetic improvement of Salix are increasing in North America. Most of these are directed towards developing improved clones for biomass production, phytoremediation, nutrient filters, and stream bank stabilization in the Northeast and North-central United States. Native species are of primary interest, but a small number of clones containing non-native...
Theory, implementation and applications of nonstationary Gabor frames
Balazs, P.; Dörfler, M.; Jaillet, F.; Holighaus, N.; Velasco, G.
2011-01-01
Signal analysis with classical Gabor frames leads to a fixed time–frequency resolution over the whole time–frequency plane. To overcome the limitations imposed by this rigidity, we propose an extension of Gabor theory that leads to the construction of frames with time–frequency resolution changing over time or frequency. We describe the construction of the resulting nonstationary Gabor frames and give the explicit formula for the canonical dual frame for a particular case, the painless case. We show that wavelet transforms, constant-Q transforms and more general filter banks may be modeled in the framework of nonstationary Gabor frames. Further, we present the results in the finite-dimensional case, which provides a method for implementing the above-mentioned transforms with perfect reconstruction. Finally, we elaborate on two applications of nonstationary Gabor frames in audio signal processing, namely a method for automatic adaptation to transients and an algorithm for an invertible constant-Q transform. PMID:22267893
Multiscale deep features learning for land-use scene recognition
NASA Astrophysics Data System (ADS)
Yuan, Baohua; Li, Shijin; Li, Ning
2018-01-01
The features extracted from deep convolutional neural networks (CNNs) have shown their promise as generic descriptors for land-use scene recognition. However, most of the work directly adopts the deep features for the classification of remote sensing images, and does not encode the deep features for improving their discriminative power, which can affect the performance of deep feature representations. To address this issue, we propose an effective framework, LASC-CNN, obtained by locality-constrained affine subspace coding (LASC) pooling of a CNN filter bank. LASC-CNN obtains more discriminative deep features than directly extracted from CNNs. Furthermore, LASC-CNN builds on the top convolutional layers of CNNs, which can incorporate multiscale information and regions of arbitrary resolution and sizes. Our experiments have been conducted using two widely used remote sensing image databases, and the results show that the proposed method significantly improves the performance when compared to other state-of-the-art methods.
Diurnal characteristics of turbulent intermittency in the Taklimakan Desert
NASA Astrophysics Data System (ADS)
Wei, Wei; Wang, Minzhong; Zhang, Hongsheng; He, Qing; Ali, Mamtimin; Wang, Yinjun
2017-12-01
A case study is performed to investigate the behavior of turbulent intermittency in the Taklimakan Desert using an intuitive, direct, and adaptive method, the arbitrary-order Hilbert spectral analysis (arbitrary-order HSA). Decomposed modes from the vertical wind speed series confirm the dyadic filter-bank essence of the empirical mode decomposition processes. Due to the larger eddies in the convective boundary layer (CBL), higher-energy modes occur during the day. The second-order Hilbert spectra L2(ω) delineate the spectral gap separating fine-scale turbulence from large-scale motions. Both the kurtosis values and the Hilbert-based scaling exponent ξ(q) reveal that the turbulence intermittency at night is much stronger than that during the day, and the stronger intermittency is associated with more stable stratification under clear-sky conditions. This study fills the gap in the characteristics of turbulence intermittency in the Taklimakan Desert area using a relatively new method.
The analysis of decimation and interpolation in the linear canonical transform domain.
Xu, Shuiqing; Chai, Yi; Hu, Youqiang; Huang, Lei; Feng, Li
2016-01-01
Decimation and interpolation are the two basic building blocks of multirate digital signal processing systems. As the linear canonical transform (LCT) has been shown to be a powerful tool for optics and signal processing, it is worthwhile and interesting to analyze decimation and interpolation in the LCT domain. In this paper, the definition of the equivalent filter in the LCT domain is given first. Then, by applying this definition, the direct implementation structure and polyphase networks for the decimator and interpolator in the LCT domain are proposed. Finally, the perfect reconstruction expressions for differential filters in the LCT domain are presented as an application. The theorems proposed in this study are the basis for generalizing multirate signal processing to the LCT domain, which can advance filter bank theory in the LCT domain.
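As a point of reference for the polyphase networks discussed above, the following sketch implements an ordinary (Fourier-domain) polyphase decimator; it is not the LCT-domain generalization developed in the paper, and the filter h and factor M are arbitrary illustrative choices.

import numpy as np

def polyphase_decimate(x, h, M):
    # Factor-M decimation y[n] = sum_k h[k] x[nM - k], computed branch by
    # branch: e_p[m] = h[mM + p] convolved with u_p[n] = x[nM - p].
    L = int(np.ceil(len(x) / M))
    y = np.zeros(L)
    for p in range(M):
        e_p = h[p::M]                       # p-th polyphase component of h
        idx = np.arange(L) * M - p          # input indices feeding branch p
        valid = (idx >= 0) & (idx < len(x))
        u_p = np.zeros(L)
        u_p[valid] = x[idx[valid]]
        y += np.convolve(e_p, u_p)[:L]
    return y

# Sanity check against direct filtering followed by downsampling
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = np.hanning(12)                          # toy anti-aliasing filter
M = 4
assert np.allclose(polyphase_decimate(x, h, M),
                   np.convolve(h, x)[::M][:len(x) // M])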
NASA Astrophysics Data System (ADS)
Torrents-Barrena, Jordina; Puig, Domenec; Melendez, Jaime; Valls, Aida
2016-03-01
Breast cancer is one of the most dangerous diseases affecting women in their 40s worldwide; it is estimated that one in eight women will develop a malignant carcinoma during her life. In addition, neglecting regular screenings is an important contributor to mortality. Computer-aided diagnosis systems attempt to enhance the quality of mammograms as well as the detection of early signs of the disease. In this paper we propose a bank of Gabor filters to calculate the mean, standard deviation, skewness and kurtosis features over evaluation windows of four sizes. An active strategy is then used to select the most relevant pixels. Finally, a supervised classification stage using two-class support vector machines is utilised through an accurate estimation of kernel parameters. To demonstrate the methodology on mammographic image analysis, two main experiments are carried out: abnormal/normal breast tissue classification and the detection of the different breast cancer types. Moreover, the public screen-film mini-MIAS database is compared with a digitised breast cancer database to evaluate the method's robustness. The area under the receiver operating characteristic curve is used to measure the performance of the method, and both the confusion matrix and accuracy are calculated to assess the results of the proposed algorithm.
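A minimal sketch of the feature stage described above, assuming a real-valued Gabor kernel, a small set of orientations and a single window size; the kernel size, sigma, wavelength and 32-pixel window are illustrative parameters, not the values used in the paper.

import numpy as np
from scipy.signal import fftconvolve
from scipy.stats import skew, kurtosis

def gabor_kernel(ksize, sigma, theta, lam):
    # Real-valued Gabor kernel (Gaussian envelope, cosine carrier).
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def window_features(image, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4), win=32):
    # First four moments of Gabor responses over non-overlapping windows.
    feats = []
    for theta in thetas:
        resp = fftconvolve(image, gabor_kernel(21, 4.0, theta, 8.0), mode="same")
        for i in range(0, image.shape[0] - win + 1, win):
            for j in range(0, image.shape[1] - win + 1, win):
                w = resp[i:i + win, j:j + win].ravel()
                feats.append([w.mean(), w.std(), skew(w), kurtosis(w)])
    return np.asarray(feats)

# Toy usage on a random "image": 4 orientations x 16 windows -> 64 feature rows
rng = np.random.default_rng(1)
print(window_features(rng.random((128, 128))).shape)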
12 CFR 227.25 - Unfair balance computation method.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Unfair balance computation method. 227.25 Section 227.25 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL... Practices Rule § 227.25 Unfair balance computation method. (a) General rule. Except as provided in paragraph...
12 CFR 791.4 - Methods of acting.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 7 2012-01-01 2012-01-01 false Methods of acting. 791.4 Section 791.4 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING THE OPERATIONS OF THE NATIONAL...; PUBLIC OBSERVATION OF NCUA BOARD MEETINGS Rules of NCUA Board Procedure § 791.4 Methods of acting. (a...
12 CFR 1102.303 - Organization and methods of operation.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Organization and methods of operation. 1102.303 Section 1102.303 Banks and Banking FEDERAL FINANCIAL INSTITUTIONS EXAMINATION COUNCIL APPRAISER REGULATION Description of Office, Procedures, Public Information § 1102.303 Organization and methods of operation. (a...
NASA Astrophysics Data System (ADS)
Lao, Zhiqiang; Zheng, Xin
2011-03-01
This paper proposes a multiscale method to quantify tissue spiculation and distortion in mammography CAD systems, with the aim of improving sensitivity in detecting architectural distortion and spiculated masses. The approach addresses the difficulty of predetermining the neighborhood size for feature extraction when characterizing lesions with spiculated mass/architectural distortion, which may appear at different sizes. The quantification is based on recognizing spiculation and distortion patterns using a multiscale first-order phase portrait model in the texture orientation field generated by a Gabor filter bank. A feature map is generated from the multiscale quantification for each mammogram, and two features are extracted from the feature map. These two features are combined with other mass features to provide enhanced discriminative ability in detecting lesions showing spiculated mass and architectural distortion. The efficiency and efficacy of the proposed method are demonstrated with results obtained by applying it to over 500 cancer cases and over 1000 normal cases.
Yongye, Austin B.; Bender, Andreas
2010-01-01
Representing the 3D structures of ligands in virtual screenings via multi-conformer ensembles can be computationally intensive, especially for compounds with a large number of rotatable bonds. Thus, reducing the size of multi-conformer databases and the number of query conformers, while simultaneously reproducing the bioactive conformer with good accuracy, is of crucial interest. While clustering and RMSD filtering methods are employed in existing conformer generators, the novelty of this work is the inclusion of a clustering scheme (NMRCLUST) that does not require a user-defined cut-off value. This algorithm simultaneously optimizes the number and the average spread of the clusters. Here we describe and test four inter-dependent approaches for selecting computer-generated conformers, namely: OMEGA, NMRCLUST, RMS filtering and averaged-RMS filtering. The bioactive conformations of 65 selected ligands were extracted from the corresponding protein:ligand complexes from the Protein Data Bank, including eight ligands that adopted dissimilar bound conformations within different receptors. We show that NMRCLUST can be employed to further filter OMEGA-generated conformers while maintaining biological relevance of the ensemble. It was observed that NMRCLUST (containing on average 10 times fewer conformers per compound) performed nearly as well as OMEGA, and both outperformed RMS filtering and averaged-RMS filtering in terms of identifying the bioactive conformations with excellent and good matches (0.5 < RMSD < 1.0 Å). Furthermore, we propose thresholds for OMEGA root-mean square filtering depending on the number of rotors in a compound: 0.8, 1.0 and 1.4 for structures with low (1–4), medium (5–9) and high (10–15) numbers of rotatable bonds, respectively. The protocol employed is general and can be applied to reduce the number of conformers in multi-conformer compound collections and alleviate the complexity of downstream data processing in virtual screening experiments. Electronic supplementary material The online version of this article (doi:10.1007/s10822-010-9365-1) contains supplementary material, which is available to authorized users. PMID:20499135
Stream Bank Stability in Eastern Nebraska
Soenksen, Phillip J.; Turner, Mary J.; Dietsch, Benjamin J.; Simon, Andrew
2003-01-01
Dredged and straightened channels in eastern Nebraska have experienced degradation leading to channel widening by bank failure. Degradation has progressed headward and affected the drainage systems upstream from the modified reaches. This report describes a study that was undertaken to analyze bank stability at selected sites in eastern Nebraska and develop a simplified method for estimating the stability of banks at future study sites. Bank cross sections along straight reaches of channel and geotechnical data were collected at approximately 150 sites in 26 counties of eastern Nebraska. The sites were categorized into three groups based on mapped soil permeability. With increasing permeability of the soil groups, the median cohesion values decreased and the median friction angles increased. Three analytical methods were used to determine if banks were stable (should not fail even when saturated), at risk (should not fail unless saturated), or unstable (should have already failed). The Culmann and Agricultural Research Service methods were based on the Coulomb equation and planar failure; an indirect method was developed that was based on Bishop's simplified method of slices and rotational failure. The maximum angle from horizontal at which the bank would be stable for the given soil and bank height conditions also was computed with the indirect method. Because of few soil shear-strength data, all analyses were based on the assumption of homogeneous banks, which was later shown to be atypical, at least for some banks. Using the Culmann method and assuming no soil tension cracks, 67 percent of all 908 bank sections were identified as stable, 32 percent were at risk, and 1 percent were unstable; when tension cracks were assumed, the results changed to 58 percent stable, 40 percent at risk, and 1 percent unstable. Using the Agricultural Research Service method, 67 percent of all bank sections were identified as stable and 33 percent were at risk. Using the indirect method, 62 percent of all bank sections were identified as stable and 31 percent were at risk; 3 percent were unstable, and 3 percent were outside of the range of the tables developed for the method. For each of the methods that were used, the largest percentage of stable banks and the smallest percentage of at risk banks was for the soil group with the lowest soil permeability and highest median cohesion values. A comparison of the expected stable bank angles for saturated conditions and the surveyed bank angles indicated that many of the surveyed bank angles were considerably less than the maximum expected stable bank angles despite the banks being classified as at risk or unstable. For severely degraded channels along straight reaches this was not expected. It was expected that they would have angles close to the maximum stable angle as they should have been failing from an oversteepened condition. Several explanations are possible. The channel reaches of some study sites have not yet been affected to a significant degree by degradation; study sites were selected throughout individual basins and severe degradation has not yet extended to some sites along upper reaches; and some reaches have experienced aggradation as degradation progresses upstream. Another possibility is that some bank sections have been affected by lateral migration processes, which typically result in shallow bank angles on the inside bend of the channel. Another possibility is that the maximum expected stable bank angles are too steep. 
The stability methods used were well established and in essential agreement with each other, and there was no reason to question the geometry data. This left non-representative soil data as a probable reason for computed stable bank angles that, at least in some cases, are overly steep. Based on an examination of the cohesion data, to which the stable bank-angle calculations were most sensitive, both vertical and horizontal variability in soil properti
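The Culmann analysis referred to above reduces to a closed-form critical bank height for a planar failure surface in homogeneous soil. The sketch below uses the textbook form of that expression; the numerical values are arbitrary examples, and the report's own implementation (including tension cracks and the indirect Bishop-based method) is not reproduced here.

import numpy as np

def culmann_critical_height(c, gamma, phi_deg, beta_deg):
    # Culmann critical bank height (textbook planar-failure form):
    #   Hc = (4 c / gamma) * sin(beta) * cos(phi) / (1 - cos(beta - phi))
    # c: cohesion (kN/m^2), gamma: unit weight (kN/m^3),
    # phi: friction angle (deg), beta: bank angle from horizontal (deg)
    phi, beta = np.radians(phi_deg), np.radians(beta_deg)
    return (4 * c / gamma) * np.sin(beta) * np.cos(phi) / (1 - np.cos(beta - phi))

# Example with assumed soil properties: c = 15 kPa, gamma = 18 kN/m^3,
# phi = 20 degrees, bank angle 70 degrees -> roughly 8 m
print(round(culmann_critical_height(15.0, 18.0, 20.0, 70.0), 1), "m")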
Exploring three faint source detections methods for aperture synthesis radio images
NASA Astrophysics Data System (ADS)
Peracaula, M.; Torrent, A.; Masias, M.; Lladó, X.; Freixenet, J.; Martí, J.; Sánchez-Sutil, J. R.; Muñoz-Arjonilla, A. J.; Paredes, J. M.
2015-04-01
Wide-field radio interferometric images often contain a large population of faint compact sources. Due to their low intensity/noise ratio, these objects can be easily missed by automated detection methods, which have been classically based on thresholding techniques after local noise estimation. The aim of this paper is to present and analyse the performance of several alternative or complementary techniques to thresholding. We compare three different algorithms to increase the detection rate of faint objects. The first technique consists of combining wavelet decomposition with local thresholding. The second technique is based on the structural behaviour of the neighbourhood of each pixel. Finally, the third algorithm uses local features extracted from a bank of filters and a boosting classifier to perform the detections. The methods' performances are evaluated using simulations and radio mosaics from the Giant Metrewave Radio Telescope and the Australia Telescope Compact Array. We show that the new methods perform better than well-known state of the art methods such as SEXTRACTOR, SAD and DUCHAMP at detecting faint sources of radio interferometric images.
Wavelet-Based Signal Processing for Monitoring Discomfort and Fatigue
2008-06-01
...the Wigner-Ville distribution (WVD), the short-time Fourier transform (STFT) or spectrogram, the Choi-Williams distribution (CWD), the smoothed pseudo Wigner... has the advantage of being computationally less expensive than other standard techniques, such as the Wigner-Ville distribution (WVD) or the spectrogram... slopes derived from the spectrogram and the smoothed pseudo Wigner-Ville distribution. Furthermore, slopes derived from the filter bank
Going Deeper With Contextual CNN for Hyperspectral Image Classification.
Lee, Hyungtae; Kwon, Heesung
2017-10-01
In this paper, we describe a novel deep convolutional neural network (CNN) that is deeper and wider than other existing deep networks for hyperspectral image classification. Unlike current state-of-the-art approaches in CNN-based hyperspectral image classification, the proposed network, called contextual deep CNN, can optimally explore local contextual interactions by jointly exploiting local spatio-spectral relationships of neighboring individual pixel vectors. The joint exploitation of the spatio-spectral information is achieved by a multi-scale convolutional filter bank used as an initial component of the proposed CNN pipeline. The initial spatial and spectral feature maps obtained from the multi-scale filter bank are then combined together to form a joint spatio-spectral feature map. The joint feature map representing rich spectral and spatial properties of the hyperspectral image is then fed through a fully convolutional network that eventually predicts the corresponding label of each pixel vector. The proposed approach is tested on three benchmark data sets: the Indian Pines data set, the Salinas data set, and the University of Pavia data set. Performance comparison shows enhanced classification performance of the proposed approach over the current state-of-the-art on the three data sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarek Haddadin; Stephen Andrew Laraway; Arslan Majid
This paper proposes and presents the design and implementation of an underlay communication channel (UCC) for 5G cognitive mesh networks. The UCC builds its waveform based on filter bank multicarrier spread spectrum (FB-MCSS) signaling. The use of this novel spread spectrum signaling allows the device-to-device (D2D) user equipments (UEs) to communicate at a level well below noise temperature and hence, minimize taxation on macro-cell/small-cell base stations and their UEs in 5G wireless systems. Moreover, the use of filter banks allows us to avoid those portions of the spectrum that are in use by macro-cell and small-cell users. Hence, both D2D-to-cellular and cellular-to-D2D interference will be very close to none. We propose a specific packet for UCC and develop algorithms for packet detection, timing acquisition and tracking, as well as channel estimation and equalization. We also present the detail of an implementation of the proposed transceiver on a software radio platform and compare our experimental results with those from a theoretical analysis of our packet detection algorithm.
A simple structure wavelet transform circuit employing function link neural networks and SI filters
NASA Astrophysics Data System (ADS)
Mu, Li; Yigang, He
2016-12-01
Signal processing by means of analog circuits offers advantages from a power consumption viewpoint. Implementing the wavelet transform (WT) using analog circuits is of great interest when low power consumption is an important issue. In this article, a novel simple-structure WT circuit in the analog domain is presented, employing a functional link neural network (FLNN) and switched-current (SI) filters. First, the wavelet base is approximated using FLNN algorithms to give a filter transfer function that is suitable for a simple-structure WT circuit implementation. Next, the WT circuit is constructed with the wavelet filter bank, whose impulse response is the approximated wavelet and its dilations. The filter design that follows is based on a follow-the-leader feedback (FLF) structure with multiple-output bilinear SI integrators and current mirrors as the main building blocks. The SI filter is well suited for this application since the dilation constant across different scales of the transform can be precisely implemented and controlled by the clock frequency of the circuit with the same system architecture. Finally, to illustrate the design procedure, a seventh-order FLNN-approximated Gaussian wavelet is implemented as an example. Simulations have verified that the designed simple-structure WT circuit has low sensitivity, low power consumption and little degradation from circuit imperfections.
Online fingerprint verification.
Upendra, K; Singh, S; Kumar, V; Verma, H K
2007-01-01
As organizations search for more secure authentication methods for user access, e-commerce, and other security applications, biometrics is gaining increasing attention. With an increasing emphasis on emerging automatic personal identification applications, fingerprint-based identification is becoming more popular. The most widely used fingerprint representation is the minutiae-based representation. The main drawback of this representation is that it does not utilize a significant component of the rich discriminatory information available in fingerprints. Local ridge structures cannot be completely characterized by minutiae. Also, it is difficult to quickly match two fingerprint images containing different numbers of unregistered minutiae points. In this study a filter bank based representation, which eliminates these weaknesses, is implemented and the overall performance of the developed system is tested. The results show that this system can be used effectively for secure online verification applications.
Removal of Cryptosporidium parvum in bank filtration systems
NASA Astrophysics Data System (ADS)
Harter, T.; Atwill, E. R.; Hou, L. L.
2003-04-01
The protozoan pathogen Cryptosporidium parvum is a leading cause of waterborne disease. Many surface water systems therefore depend on filtration systems, including bank filtration systems, for the removal of the pathogenic oocysts. To better understand the effectiveness of bank filtration systems, we have implemented a series of column studies under various environmental conditions (column length: 10 cm - 60 cm, flow rates: 0.7 m/d - 30 m/d, ionic strength: 0.01 - 100 mM, filter grain size: 0.2 - 2 mm, various solution chemistry). We show that classic colloid filtration theory is a reasonable tool for predicting the initial breakthrough of C. parvum in pulsed injections of the oocysts through sand columns, although the model does not account for the significant tailing that occurs in C. parvum transport. Application of colloid filtration theory to bank filtration systems is further limited by the intrinsic heterogeneity of the geologic systems used for bank filtration. We couple filtration theory with a stochastic subsurface transport approach and with percolation theory to account for the effects of intrinsic heterogeneity. We find that a 1-log removal can be achieved even under relatively adverse conditions (low collision efficiency, high velocity) if 85% - 90% of the sedimentary hydrofacies located within the bank filtration system, or of the coarsest known hydrofacies connecting the riverbed with the extraction system, has a grain-size distribution with a 10% passing diameter equal to 1 mm. One millimeter is a standard sieve size in sediment analysis.
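Classic clean-bed colloid filtration theory, invoked above, gives a simple estimate of removal over a travel distance. The sketch below uses the standard Yao-type formulation; the porosity, collision efficiency and single-collector efficiency values are illustrative assumptions, not parameters fitted in the study.

import numpy as np

def log_removal(L, d_c, alpha, eta0, porosity=0.35):
    # Clean-bed filtration: ln(C/C0) = -1.5 (1 - porosity) alpha eta0 L / d_c
    # L: travel distance (m), d_c: collector (grain) diameter (m),
    # alpha: collision (attachment) efficiency, eta0: single-collector efficiency
    ln_CC0 = -1.5 * (1 - porosity) * alpha * eta0 * L / d_c
    return -ln_CC0 / np.log(10)             # removal in log10 units

# Example: 1 m of 1-mm sand under assumed, fairly adverse conditions
print(round(log_removal(L=1.0, d_c=1e-3, alpha=0.01, eta0=0.2), 2), "log10 units")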
NASA Astrophysics Data System (ADS)
Udpa, Nitin; Sampat, Mehul P.; Kim, Min Soon; Reece, Gregory P.; Markey, Mia K.
2007-03-01
The contemporary goals of breast cancer treatment are not limited to cure but include maximizing quality of life. All breast cancer treatment can adversely affect breast appearance. Developing objective, quantifiable methods to assess breast appearance is important to understand the impact of deformity on patient quality of life, guide selection of current treatments, and make rational treatment advances. A few measures of aesthetic properties such as symmetry have been developed. They are computed from the distances between manually identified fiducial points on digital photographs. However, this is time-consuming and subject to intra- and inter-observer variability. The purpose of this study is to investigate methods for automatic localization of fiducial points on anterior-posterior digital photographs taken to document the outcomes of breast reconstruction. Particular emphasis is placed on automatic localization of the nipple complex since the most widely used aesthetic measure, the Breast Retraction Assessment, quantifies the symmetry of nipple locations. The nipple complexes are automatically localized using normalized cross-correlation with a template bank of variants of Gaussian and Laplacian of Gaussian filters. A probability map of likely nipple locations determined from the image database is used to reduce the number of false positive detections from the matched filter operation. The accuracy of the nipple detection was evaluated relative to markings made by three human observers. The impact of using the fiducial point locations as identified by the automatic method, as opposed to the manual method, on the calculation of the Breast Retraction Assessment was also evaluated.
Optical security verification for blurred fingerprints
NASA Astrophysics Data System (ADS)
Soon, Boon Y.; Karim, Mohammad A.; Alam, Mohammad S.
1998-12-01
Optical fingerprint security verification is gaining popularity, as it has the potential to perform correlation at the speed of light. With advancement in optical security verification techniques, the authentication process can be made almost foolproof and reliable for financial transactions, banking, etc. In law enforcement, when a fingerprint is obtained from a crime scene, it may be blurred and can be an unhealthy candidate for correlation purposes. Therefore, the blurred fingerprint needs to be clarified before it is used for the correlation process. There are several different types of blur, such as linear motion blur and defocus blur induced by aberrations of the imaging system. In addition, we may or may not know the blur function. In this paper, we propose non-singularity inverse filtering in the frequency/power domain for deblurring known motion-induced blur in fingerprints. This filtering process is incorporated with the power spectrum subtraction technique, a uniqueness comparison scheme, and the separated target and reference planes method in the joint transform correlator. The proposed hardware implementation is a hybrid electronic-optical correlator system. The performance of the proposed system is verified with computer simulation for both cases: with and without additive random noise corruption.
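For a known blur kernel, a basic frequency-domain inverse filter with a floor on the transfer-function magnitude conveys the idea of avoiding singular divisions. The sketch below is a generic digital illustration, not the paper's optical/power-domain scheme; the epsilon floor is an assumed safeguard.

import numpy as np

def deblur_known_psf(blurred, psf, eps=1e-3):
    # Inverse filtering G/H with |H| floored at eps so that near-zero
    # frequency responses do not blow up (the phase of H is kept).
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    mag = np.maximum(np.abs(H), eps)
    H_safe = mag * np.exp(1j * np.angle(H))
    return np.real(np.fft.ifft2(G / H_safe))

# Toy usage: horizontal motion blur of length 9 applied to a random image
rng = np.random.default_rng(2)
img = rng.random((64, 64))
psf = np.zeros((64, 64))
psf[0, :9] = 1.0 / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = deblur_known_psf(blurred, psf)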
Major Depression Detection from EEG Signals Using Kernel Eigen-Filter-Bank Common Spatial Patterns.
Liao, Shih-Cheng; Wu, Chien-Te; Huang, Hao-Chuan; Cheng, Wei-Teng; Liu, Yi-Hung
2017-06-14
Major depressive disorder (MDD) has become a leading contributor to the global burden of disease; however, there are currently no reliable biological markers or physiological measurements for efficiently and effectively dissecting the heterogeneity of MDD. Here we propose a novel method based on scalp electroencephalography (EEG) signals and a robust spectral-spatial EEG feature extractor called kernel eigen-filter-bank common spatial pattern (KEFB-CSP). The KEFB-CSP first filters the multi-channel raw EEG signals into a set of frequency sub-bands covering the range from the theta to the gamma band, then spatially transforms the EEG signals of each sub-band from the original sensor space to a new space where the new signals (i.e., CSPs) are optimal for the classification between MDD and healthy controls, and finally applies kernel principal component analysis (kernel PCA) to transform the vector containing the CSPs from all frequency sub-bands to a lower-dimensional feature vector called the KEFB-CSP. Twelve patients with MDD and twelve healthy controls participated in this study, and from each participant we collected 54 resting-state EEGs of 6 s length (5 min and 24 s in total). Our results show that the proposed KEFB-CSP outperforms other EEG features, including the powers of EEG frequency bands and fractal dimension, which have been widely applied in previous EEG-based depression detection studies. The results also reveal that the 8 electrodes from the temporal areas gave higher accuracies than other scalp areas. The KEFB-CSP was able to achieve an average EEG classification accuracy of 81.23% in single-trial analysis when only the 8-electrode EEGs of the temporal area and a support vector machine (SVM) classifier were used. We also designed a voting-based leave-one-participant-out procedure to test the participant-independent individual classification accuracy. The voting-based results show that a mean classification accuracy of about 80% can be achieved by the KEFB-CSP feature and the SVM classifier with only several trials (i.e., <7 trials), and this level of accuracy seems to become stable as more trials are used. These findings therefore suggest that the proposed method has great potential for developing an efficient (requiring only a few 6-s EEG signals from the 8 temporal electrodes) and effective (~80% classification accuracy) EEG-based brain-computer interface (BCI) system which may, in the future, help psychiatrists provide individualized and effective treatments for MDD patients.
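The filter bank and CSP stages described above can be sketched generically as follows. This is ordinary filter-bank CSP with a Butterworth bank, not the kernel-eigen variant of the paper; the band edges, filter order, number of retained filter pairs and toy data shapes are illustrative choices.

import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.linalg import eigh

def filter_bank(trial, fs, bands=((4, 8), (8, 13), (13, 30), (30, 45))):
    # Decompose one trial (channels x samples) into frequency sub-bands.
    out = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out.append(sosfiltfilt(sos, trial, axis=-1))
    return out

def csp_filters(trials_a, trials_b, n_pairs=3):
    # Common spatial patterns from two classes of trials (channels x samples):
    # generalized eigendecomposition of (Ca, Ca + Cb).
    avg_cov = lambda trials: np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    keep = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, keep].T                  # rows are spatial filters

# Toy usage: 10 trials per class, 8 channels, 6 s at an assumed 250 Hz rate.
rng = np.random.default_rng(3)
fs, n_ch, n_s = 250, 8, 1500
class_a = [rng.standard_normal((n_ch, n_s)) for _ in range(10)]
class_b = [rng.standard_normal((n_ch, n_s)) for _ in range(10)]
W = csp_filters([filter_bank(t, fs)[1] for t in class_a],
                [filter_bank(t, fs)[1] for t in class_b])  # CSP on the 8-13 Hz band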
Iterated oversampled filter banks and wavelet frames
NASA Astrophysics Data System (ADS)
Selesnick, Ivan W.; Sendur, Levent
2000-12-01
This paper takes up the design of wavelet tight frames that are analogous to Daubechies orthonormal wavelets - that is, the design of minimal-length wavelet filters satisfying certain polynomial properties, but now in the oversampled case. The oversampled dyadic DWT considered in this paper is based on a single scaling function and two distinct wavelets. Having more wavelets than necessary gives a closer spacing between adjacent wavelets within the same scale. As a result, the transform is nearly shift-invariant and can be used to improve denoising. Because the associated time-frequency lattice preserves the dyadic structure of the critically sampled DWT, it can be used with tree-based denoising algorithms that exploit parent-child correlation.
Kumar, Ashish; Kumar, Manjeet; Komaragiri, Rama
2018-04-19
Bradycardia can be modulated using the cardiac pacemaker, an implantable medical device which sets and balances the patient's cardiac health. The device has been widely used to detect and monitor the patient's heart rate. The data collected hence has the highest authenticity assurance and is convenient for further electric stimulation. In the pacemaker, the ECG detector is one of the most important elements. The device is available in its new digital form, which is more efficient and accurate in performance, with the added advantage of an economical power-consumption platform. In this work, a joint algorithm based on the biorthogonal wavelet transform and run-length encoding (RLE) is proposed for QRS complex detection of the ECG signal and for compressing the detected ECG data. The biorthogonal wavelet transform of the input ECG signal is first calculated using a modified demand-based filter bank architecture which consists of a series combination of three lowpass filters with a highpass filter. The lowpass and highpass filters are realized using a linear-phase structure which reduces the hardware cost of the proposed design by approximately 50%. Then, the location of the R-peak is found by comparing the denoised ECG signal with a threshold value. The proposed R-peak detector achieves the highest sensitivity and positive predictivity of 99.75% and 99.98%, respectively, on the MIT-BIH arrhythmia database. Also, the proposed R-peak detector achieves a comparatively low data error rate (DER) of 0.002. The use of RLE for the compression of the detected ECG data achieves a higher compression ratio (CR) of 17.1. To justify the effectiveness of the proposed algorithm, the results have been compared with existing methods, such as Huffman coding/simple predictor, Huffman coding/adaptive, and slope predictor/fixed-length packaging.
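Run-length encoding, the compression stage named above, can be illustrated in a few lines. This sketch encodes a generic quantized sequence and is not the paper's hardware implementation.

def run_length_encode(symbols):
    # Collapse consecutive repeats into (value, run length) pairs.
    if len(symbols) == 0:
        return []
    runs, current, count = [], symbols[0], 1
    for s in symbols[1:]:
        if s == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = s, 1
    runs.append((current, count))
    return runs

# Example: long zero runs, typical of thresholded wavelet detail coefficients
print(run_length_encode([0, 0, 0, 1, 1, 0, 0, 0, 0, 2]))
# -> [(0, 3), (1, 2), (0, 4), (2, 1)]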
A Method for Evaluating Information Security Governance (ISG) Components in Banking Environment
NASA Astrophysics Data System (ADS)
Ula, M.; Ula, M.; Fuadi, W.
2017-02-01
As modern banking increasingly relies on the internet and computer technologies to operate its businesses and market interactions, threats and security breaches have increased sharply in recent years. Insider and outsider attacks have caused global businesses to lose trillions of dollars a year. Therefore, there is a need for a proper framework to govern information security in the banking system. The aim of this research is to propose and design an enhanced method to evaluate information security governance (ISG) implementation in the banking environment. This research examines and compares elements from commonly used information security governance frameworks, standards and best practices, and considers the strengths and weaknesses of their approaches. An initial framework for governing information security in the banking system was constructed from a document review. The framework was categorized into three levels: the governance level, the managerial level, and the technical level. The study further conducted an online survey of banking security professionals to obtain their professional judgment about the most critical ISG components and the importance of each ISG component that should be implemented in the banking environment. Data from the survey were used to construct a mathematical model for ISG evaluation, with the component importance data used as weighting coefficients for the related components in the model. The research further develops a method for evaluating ISG implementation in banking based on the mathematical model. The proposed method was tested through a real case study in an Indonesian local bank. The study shows that the proposed method has sufficient coverage of ISG in the banking environment and effectively evaluates ISG implementation.
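The weighting scheme described above amounts to a weighted aggregate of component assessments. The snippet below is a generic weighted-average sketch under assumed component names, scales and weights; the paper's actual model and survey-derived coefficients are not reproduced here.

def isg_score(component_scores, weights):
    # Weighted aggregate of ISG component assessments (generic sketch).
    total_w = sum(weights.values())
    return sum(component_scores[k] * weights[k] for k in weights) / total_w

# Hypothetical component scores (1-5 scale) and survey-derived weights
scores  = {"governance": 4.0, "managerial": 3.5, "technical": 3.0}
weights = {"governance": 0.40, "managerial": 0.35, "technical": 0.25}
print(round(isg_score(scores, weights), 2))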
U-235 Holdup Measurements in the 321-M Lathe HEPA Banks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salaymeh, S.R.
The Analytical Development Section of Savannah River Technology Center (SRTC) was requested by the Facilities Decommissioning Division (FDD) to determine the holdup of enriched uranium in the 321-M facility as part of an overall deactivation project of the facility. The results of the holdup assays are essential for determining compliance with the Waste Acceptance Criteria, Material Control and Accountability, and to meet criticality safety controls. This report covers holdup measurements of uranium residue in six high efficiency particulate air (HEPA) filter banks of the A-lathe and B-lathe exhaust systems of the 321-M facility. This report discusses the non-destructive assay measurements, assumptions, calculations, and results of the uranium holdup in these six items.
NASA Astrophysics Data System (ADS)
Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.
2018-03-01
The paper proposes an automatic facial emotion recognition algorithm which comprises two main components: feature extraction and expression recognition. The algorithm uses a Gabor filter bank on fiducial points to find the facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: the training phase and the recognition phase. First, for the 6 emotions considered, the system classifies all training expressions into 6 different classes (one for each emotion) in the training stage. In the recognition phase, it recognizes the emotion by applying the Gabor bank to a face image, finding the fiducial points, and feeding the result to the trained neural architecture.
Shu, Ting; Zhang, Bob
2015-04-01
Blood tests allow doctors to check for certain diseases and conditions. However, using a syringe to extract the blood can be deemed invasive and slightly painful, and its analysis is time consuming. In this paper, we propose a new non-invasive system to detect the health status (Healthy or Diseased) of an individual based on facial block texture features extracted using the Gabor filter. Our system first uses a non-invasive capture device to collect facial images. Next, four facial blocks are located on these images to represent them. Afterwards, each facial block is convolved with a Gabor filter bank to calculate its texture value. Classification is finally performed using K-Nearest Neighbor and Support Vector Machines via a Library for Support Vector Machines (with four kernel functions). The system was tested on a dataset consisting of 100 Healthy and 100 Diseased (with 13 forms of illnesses) samples. Experimental results show that the proposed system can detect the health status with an accuracy of 93%, a sensitivity of 94%, and a specificity of 92%, using a combination of the Gabor filters and facial blocks.
Contourlet domain multiband deblurring based on color correlation for fluid lens cameras.
Tzeng, Jack; Liu, Chun-Chen; Nguyen, Truong Q
2010-10-01
Due to the novel fluid optics, unique image processing challenges are presented by the fluidic lens camera system. Developed for surgical applications, unique properties, such as no moving parts while zooming and better miniaturization than traditional glass optics, are advantages of the fluid lens. Despite these abilities, sharp color planes and blurred color planes are created by the nonuniform reaction of the liquid lens to different color wavelengths. Severe axial color aberrations are caused by this reaction. In order to deblur color images without estimating a point spread function, a contourlet filter bank system is proposed. Information from sharp color planes is used by this multiband deblurring method to improve blurred color planes. Compared to traditional Lucy-Richardson and Wiener deconvolution algorithms, significantly improved sharpness and reduced ghosting artifacts are produced by a previous wavelet-based method. Directional filtering is used by the proposed contourlet-based system to adjust to the contours of the image. An image is produced by the proposed method which has a similar level of sharpness to the previous wavelet-based method and has fewer ghosting artifacts. Conditions for when this algorithm will reduce the mean squared error are analyzed. While improving the blue color plane by using information from the green color plane is the primary focus of this paper, these methods could be adjusted to improve the red color plane. Many multiband systems such as global mapping, infrared imaging, and computer assisted surgery are natural extensions of this work. This information sharing algorithm is beneficial to any image set with high edge correlation. Improved results in the areas of deblurring, noise reduction, and resolution enhancement can be produced by the proposed algorithm.
Chang, Chih-Chun; Yeh, Chin-Chuan; Chu, Fang-Yeh
2016-10-01
The Formosa Fun Coast explosion, which occurred in a recreational water park in northern Taiwan on 27 June 2015, left 499 people with burn injuries. For those with severe burn trauma, surgical intervention and fluid resuscitation were necessary, and blood transfusion therapy could be initiated, especially during and after broad escharotomy. Here, we review the literature regarding transfusion medicine and skin grafting, and describe the practical experience of a combined tissue and blood bank in the burn disaster in Taiwan. It has been reported that severely burn-injured patients may receive multiple blood transfusions during hospitalization. Since the use of skin grafts became a mainstay alternative for wound coverage after early debridement of burn wounds at the beginning of the 20th century, tissue banking programs have been developed. In Taiwan, the tissue banking program was started in 2006, and the first combined tissue and blood bank was established in Far Eastern Memorial Hospital in 2010, equipped with non-sterile, clean and sterile zones distinctly segregated with unidirectional movement in the sterile area. The sterile zone is a class 10000 clean room equipped with high efficiency particulate air (HEPA) filters and positive-pressure ventilation. The combined tissue and blood bank has been able to provide the assigned blood products and tissue grafts timely and accurately, under the concept of centralized management. In the future, the training of tissue and blood bank technicians will be continued and fortified, particularly on regulation and quality control for further bio- and hemovigilance. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Boss, Stephen K.
1996-11-01
A mosaic image of the northern Great Bahama Bank was created from separate gray-scale Landsat images using photo-editing and image analysis software that is commercially available for desktop computers. Measurements of pixel gray levels (relative scale from 0 to 255 referred to as digital number, DN) on the mosaic image were compared to bank-top bathymetry (determined from a network of single-channel, high-resolution seismic profiles), bottom type (coarse sand, sandy mud, barren rock, or reef determined from seismic profiles and diver observations), and vegetative cover (presence and/or absence and relative density of the marine angiosperm Thalassia testudinum determined from diver observations). Results of these analyses indicate that bank-top bathymetry is a primary control on observed pixel DN, bottom type is a secondary control on pixel DN, and vegetative cover is a tertiary influence on pixel DN. Consequently, processing of the gray-scale Landsat mosaic with a directional gradient edge-detection filter generated a physiographic shaded relief image resembling bank-top bathymetric patterns related to submerged physiographic features across the platform. The visibility of submerged karst landforms, Pleistocene eolianite ridges, islands, and possible paleo-drainage patterns created during sea-level lowstands is significantly enhanced on processed images relative to the original mosaic. Bank-margin ooid shoals, platform interior sand bodies, reef edifices, and bidirectional sand waves are features resulting from Holocene carbonate deposition that are also more clearly visible on the new physiographic images. Combined with observational data (single-channel, high-resolution seismic profiles, bottom observations by SCUBA divers, sediment and rock cores) across the northern Great Bahama Bank, these physiographic images facilitate comprehension of areal relations among antecedent platform topography, physical processes, and ensuing depositional patterns during sea-level rise.
30 CFR 56.3130 - Wall, bank, and slope stability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Wall, bank, and slope stability. 56.3130... Mining Methods § 56.3130 Wall, bank, and slope stability. Mining methods shall be used that will maintain wall, bank, and slope stability in places where persons work or travel in performing their assigned...
Portable light transmission measuring system for preserved corneas.
Ventura, Liliane; Jesus, Gabriel Torres de; Oliveira, Gunter Camilo Dablas de; Sousa, Sidney J F
2005-12-22
The authors have developed a small portable device for the objective measurement of the transparency of corneas stored in preservative medium, for use by eye banks in evaluation prior to transplantation. The optical system consists of a white light, lenses, and pinholes that collimate the white light beams and illuminate the cornea in its preservative medium, and an optical filter (400-700 nm) that selects the range of the wavelength of interest. A sensor detects the light that passes through the cornea, and the average corneal transparency is displayed. In order to obtain only the tissue transparency, an electronic circuit was built to detect a baseline input of the preservative medium prior to the measurement of corneal transparency. The operation of the system involves three steps: adjusting the "0 %" transmittance of the instrument, determining the "100 %" transmittance of the system, and finally measuring the transparency of the preserved cornea inside the storage medium. Fifty selected corneas were evaluated. Each cornea was submitted to three evaluation methods: subjective classification of transparency through a slit lamp, quantification of the transmittance of light using a corneal spectrophotometer previously developed, and measurement of transparency with the portable device. By comparing the three methods and using the expertise of eye bank trained personnel, a table for quantifying corneal transparency with the new device has been developed. The correlation factor between the corneal spectrophotometer and the new device is 0.99813, leading to a system that is able to standardize transparency measurements of preserved corneas, which is currently done subjectively.
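The three-step operation described above is essentially a two-point calibration followed by a normalized reading. The snippet below sketches that normalization; the detector readings are hypothetical values, and the instrument's actual electronics and scaling are not reproduced.

def percent_transmittance(i_sample, i_dark, i_reference):
    # Two-point calibration: 0% = blocked/dark reading, 100% = preservative
    # medium alone; the cornea reading is expressed relative to these.
    return 100.0 * (i_sample - i_dark) / (i_reference - i_dark)

# Hypothetical detector readings (arbitrary units)
print(round(percent_transmittance(i_sample=0.72, i_dark=0.05, i_reference=0.95), 1))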
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 9 2013-01-01 2013-01-01 false Definitions. 1261.2 Section 1261.2 Banks and Banking FEDERAL HOUSING FINANCE AGENCY FEDERAL HOME LOAN BANKS FEDERAL HOME LOAN BANK DIRECTORS Federal... guaranteed directorships and stock directorships. Method of equal proportions means the mathematical formula...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 7 2011-01-01 2011-01-01 false Definitions. 1261.2 Section 1261.2 Banks and Banking FEDERAL HOUSING FINANCE AGENCY FEDERAL HOME LOAN BANKS FEDERAL HOME LOAN BANK DIRECTORS Federal... guaranteed directorships and stock directorships. Method of equal proportions means the mathematical formula...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 10 2014-01-01 2014-01-01 false Definitions. 1261.2 Section 1261.2 Banks and Banking FEDERAL HOUSING FINANCE AGENCY FEDERAL HOME LOAN BANKS FEDERAL HOME LOAN BANK DIRECTORS Federal... guaranteed directorships and stock directorships. Method of equal proportions means the mathematical formula...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 9 2012-01-01 2012-01-01 false Definitions. 1261.2 Section 1261.2 Banks and Banking FEDERAL HOUSING FINANCE AGENCY FEDERAL HOME LOAN BANKS FEDERAL HOME LOAN BANK DIRECTORS Federal... guaranteed directorships and stock directorships. Method of equal proportions means the mathematical formula...
Reversible wavelet filter banks with side informationless spatially adaptive low-pass filters
NASA Astrophysics Data System (ADS)
Abhayaratne, Charith
2011-07-01
Wavelet transforms that have an adaptive low-pass filter are useful in applications that require the signal singularities, sharp transitions, and image edges to be left intact in the low-pass signal. In scalable image coding, the spatial resolution scalability is achieved by reconstructing the low-pass signal subband, which corresponds to the desired resolution level, and discarding other high-frequency wavelet subbands. In such applications, it is vital to have low-pass subbands that are not affected by smoothing artifacts associated with low-pass filtering. We present the mathematical framework for achieving 1-D wavelet transforms that have a spatially adaptive low-pass filter (SALP) using the prediction-first lifting scheme. The adaptivity decisions are computed using the wavelet coefficients, and no bookkeeping is required for the perfect reconstruction. Then, 2-D wavelet transforms that have a spatially adaptive low-pass filter are designed by extending the 1-D SALP framework. Because the 2-D polyphase decompositions are used in this case, the 2-D adaptivity decisions are made nonseparable as opposed to the separable 2-D realization using 1-D transforms. We present examples using the 2-D 5/3 wavelet transform and their lossless image coding and scalable decoding performances in terms of quality and resolution scalability. The proposed 2-D-SALP scheme results in better performance compared to the existing adaptive update lifting schemes.
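For reference, the non-adaptive LeGall 5/3 lifting transform on which such constructions build can be written in a few lines. This sketch is predict-then-update on an even-length 1-D signal with simple boundary handling; it does not include the paper's spatially adaptive low-pass decisions or the 2-D nonseparable extension.

import numpy as np

def legall53_forward(x):
    # One level of the (non-adaptive) LeGall 5/3 lifting transform on an
    # even-length signal: predict (detail) then update (approximation).
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict: d[n] = x[2n+1] - (x[2n] + x[2n+2]) / 2, right edge repeated
    odd -= 0.5 * (even + np.append(even[1:], even[-1]))
    # Update: s[n] = x[2n] + (d[n-1] + d[n]) / 4, left edge repeated
    even += 0.25 * (np.insert(odd[:-1], 0, odd[0]) + odd)
    return even, odd            # low-pass and high-pass subbands

# A linear ramp produces (almost) zero detail coefficients, as expected
lo, hi = legall53_forward(np.arange(16, dtype=float))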
Capacitive detection of micromotions: Monitoring ballistics of a developing avian embryo
NASA Astrophysics Data System (ADS)
Szymanski, Jan A.; Pawlak, Krzysztof; Wasowicz, Pawel; Moscicki, Jozef K.
2002-09-01
An instrument for noninvasive monitoring of very weak biomechanical activities of small living organisms is described. The construction is sufficiently flexible to permit a range of studies including developing embryos of oviparous animals, pests that live in loose materials and timber, and insects that develop in cocoons. Motions are detected by monitoring a current generated by the fluctuating position of the object-loaded electrode of a capacitive sensor. To maximize the signal, oscillations of the electrode are mechanically enhanced and the current is amplified and filtered by a two-stage signal amplifier and a bank of six active Butterworth filters. The device is optimized for ballistocardiography of hen embryos. The sensitivity achieved makes possible quantitative studies of heart activity of 7-day-old embryos.
Report on: Connecticut River Streambank Erosion Study, Massachusetts, New Hampshire and Vermont
1979-11-01
Plastic filter cloths are used with considerable success beneath riprap and other revetment materials such as articulated concrete blocks. Key words: alluvial channel erosion, boat waves, shear stress, rock riprap, lower bank erosion, revetments, flow control, vegetation.
Electromagnetic Counter-Counter Measure (ECCM) Techniques of the Digital Microwave Radio.
1982-05-01
Frequency hopping requires special synthesizers and filter banks. Large bandwidth expansion in a microwave radio relay application can best be achieved with... processing gain performance as a function of jammer modulation type; pulse jammer performance; emission bandwidth and spectral shaping... Spectral efficiency, implementation complexity, and suitability for ECCM techniques will be considered. A summary of the requirements and characteristics of
High speed optical object recognition processor with massive holographic memory
NASA Technical Reports Server (NTRS)
Chao, T.; Zhou, H.; Reyes, G.
2002-01-01
Real-time object recognition using a compact grayscale optical correlator will be introduced. A holographic memory module for storing a large bank of optimum correlation filters, to accommodate the large data throughput rate needed for many real-world applications, has also been developed. System architecture of the optical processor and the holographic memory will be presented. Application examples of this object recognition technology will also be demonstrated.
Carter, J
1998-01-01
Collaboration among governments, private organizations, the World Bank, UN agencies, corporations, and the leaders and villagers of afflicted countries is producing substantial progress toward global eradication of many parasitic diseases. For example, there are now fewer than 100,000 cases of Guinea worm in the world--a 98% reduction. Strategies to prevent villagers from drinking infested water have included drilling deep wells, putting a nontoxic larvicide in the water, and straining the water through cloth filters. Both the larvicide and filters were provided free of charge to the eradication effort by US corporations. Similarly, a pharmaceutical company contributed 21.5 million free doses of Mectizan--a drug that prevents river blindness for a year--in the past year. Another pharmaceutical company donated albendazole for the global elimination of lymphatic filariasis. National pledges to a World Bank trust fund cover the costs of distributing donated medicines to the affected villages. The Common Agenda, a collaboration established between the US and Japan by the author, is an example of the potential of partnerships to create global political stability, correct environmental degradation, and promote the advantages of science and technology.
Nosé, Ricardo M; Daga, Fabio B; Nosé, Walton; Kasahara, Niro
2017-03-01
To evaluate the efficacy of mannitol solution as a decontamination agent for chemical burns of human corneas. Eight donor corneas from an eye bank were exposed to 25 μl of 2.5% hydrofluoric acid (HF) solution on a filter paper for 20 s. Three eyes were rinsed with 1000 ml of mannitol 20% for 15 min immediately after removal of the filter paper, three others were rinsed with sodium chloride (NaCl) 0.9% (1000 ml for 15 min), and two eyes were not rinsed. Microstructural changes were monitored over time by optical coherence tomography (OCT) imaging for 75 min. NaCl reduced the penetration depth to approximately half the thickness of the cornea at 15 min; scattering within the anterior cornea was higher than that for the unrinsed eye. With mannitol, no increased scattering was observed in the posterior part of the corneal stroma within a time period of 1 h after rinsing. OCT images revealed low scattering intensity within the anterior stroma at the end of the rinsing period. In eye bank human corneas, mannitol proved to be an efficient agent for decontaminating HF burns. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.
Luque, Amalia; Gómez-Bellido, Jesús; Carrasco, Alejandro; Barbancho, Julio
2018-06-03
The analysis and classification of the sounds produced by certain animal species, notably anurans, have revealed these amphibians to be a potentially strong indicator of temperature fluctuations and therefore of the existence of climate change. Environmental monitoring systems using Wireless Sensor Networks are therefore of interest to obtain indicators of global warming. For the automatic classification of the sounds recorded on such systems, the proper representation of the sound spectrum is essential since it contains the information required for cataloguing anuran calls. The present paper focuses on this process of feature extraction by exploring three alternatives: the standardized MPEG-7, the Filter Bank Energy (FBE), and the Mel Frequency Cepstral Coefficients (MFCC). Moreover, various values for every option in the extraction of spectrum features have been considered. Throughout the paper, it is shown that representing the frame spectrum with pure FBE offers slightly worse results than using the MPEG-7 features. This performance can easily be increased, however, by rescaling the FBE in a double dimension: vertically, by taking the logarithm of the energies; and, horizontally, by applying mel scaling in the filter banks. On the other hand, representing the spectrum in the cepstral domain, as in MFCC, has shown additional marginal improvements in classification performance.
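The two rescalings mentioned above (logarithm of the energies, mel spacing of the filter banks) are the standard route from FBE to log-mel energies, from which a DCT yields MFCCs. The sketch below is a generic single-frame computation; the FFT size, number of filters and sampling rate are illustrative choices, not the paper's configuration.

import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_energies(frame, fs, n_filters=26, n_fft=512):
    # Power spectrum of one frame, weighted by triangular mel-spaced filters.
    spec = np.abs(np.fft.rfft(frame, n_fft)) ** 2
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    hz_pts = mel_to_hz(np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2))
    energies = np.zeros(n_filters)
    for i in range(n_filters):
        lo, ctr, hi = hz_pts[i], hz_pts[i + 1], hz_pts[i + 2]
        up = np.clip((freqs - lo) / (ctr - lo), 0, None)     # rising edge
        down = np.clip((hi - freqs) / (hi - ctr), 0, None)   # falling edge
        weights = np.minimum(up, down).clip(0, 1)
        energies[i] = np.sum(weights * spec)
    return np.log(energies + 1e-12)

# Toy usage: a 25 ms frame at 16 kHz; the DCT of the log energies gives MFCCs
rng = np.random.default_rng(4)
fbe = log_mel_energies(rng.standard_normal(400), fs=16000)
mfcc = dct(fbe, type=2, norm="ortho")[:13]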
Maneechan, Witwisitpong; Kruttha, Phassawat; Prommi, Taeng On
2018-03-14
The immature and adult stages of Potamyia flavata Banks 1934 were sampled in seven sampling sites in streams of western Thailand. The samples were collected in February, May, and December 2015 using hand picking. A total of 2,133 individuals of larvae were collected. The larva and pupa of Po. flavata are described and illustrated. Larvae have five instars. The head capsule width of the first to the fifth instar larvae were 0.20-0.29, 0.30-0.39, 0.40-0.59, 0.60-0.79, and 0.80-1.15 mm, respectively. Gut content analysis revealed that larvae are omnivorous filterers. The guts of the larvae contained mainly diatoms and green algae followed by filamentous algae, detritus, and arthropod fragments.
Shao, Yu; Chang, Chip-Hong
2007-08-01
We present a new speech enhancement scheme for a single-microphone system to meet the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. A psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method to reduce the residual noise and improve the intelligibility of speech. The proposed method is a generalized time-frequency subtraction algorithm, which advantageously exploits the wavelet multirate signal representation to preserve the critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the proposed method. An unvoiced speech enhancement algorithm is also integrated into the system to improve the intelligibility of speech. Through rigorous objective and subjective evaluations, it is shown that the proposed speech enhancement system is capable of reducing noise with little speech degradation in adverse noise environments and the overall performance is superior to several competitive methods.
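As a baseline for the generalized, masking-controlled subtraction described above, conventional power spectral subtraction on a single frame looks as follows; the over-subtraction factor and spectral floor are illustrative parameters, and the perceptual wavelet packet machinery of the paper is not reproduced.

import numpy as np

def spectral_subtract_frame(noisy_frame, noise_psd, alpha=2.0, floor=0.02):
    # Power spectral subtraction with over-subtraction (alpha) and a
    # spectral floor to limit musical noise.
    spec = np.fft.rfft(noisy_frame)
    power = np.abs(spec) ** 2
    clean_power = np.maximum(power - alpha * noise_psd, floor * power)
    gain = np.sqrt(clean_power / np.maximum(power, 1e-12))
    return np.fft.irfft(gain * spec, n=len(noisy_frame))

# Toy usage: sinusoid plus white noise, noise PSD taken from a noise-only frame
rng = np.random.default_rng(5)
n = 512
clean = np.sin(2 * np.pi * 0.05 * np.arange(n))
noise = 0.5 * rng.standard_normal(n)
noise_psd = np.abs(np.fft.rfft(0.5 * rng.standard_normal(n))) ** 2
enhanced = spectral_subtract_frame(clean + noise, noise_psd)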
A unified tensor level set for image segmentation.
Wang, Bin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong
2010-06-01
This paper presents a new region-based unified tensor level set model for image segmentation. The model introduces a third-order tensor to comprehensively depict features of pixels, e.g., gray value and local geometrical features such as orientation and gradient; then, by defining a weighted distance, we generalize the representative region-based level set method from scalars to tensors. The proposed model has four main advantages compared with the traditional representative method. First, by involving a Gaussian filter bank, the model is robust against noise, particularly salt-and-pepper noise. Second, by considering local geometrical features such as orientation and gradient, the model pays more attention to boundaries and makes the evolving curve stop more easily at boundary locations. Third, owing to the unified tensor representation of the pixels, the model segments images more accurately and naturally. Fourth, based on a weighted distance definition, the model can cope with data varying from scalars to vectors and on to high-order tensors. We apply the proposed method to synthetic, medical, and natural images, and the results suggest that the proposed method is superior to the available representative region-based level set method.
Digital signal processing at Bell Labs-Foundations for speech and acoustics research
NASA Astrophysics Data System (ADS)
Rabiner, Lawrence R.
2004-05-01
Digital signal processing (DSP) is a fundamental tool for much of the research that has been carried out at Bell Labs in the areas of speech and acoustics research. The fundamental bases for DSP include the sampling theorem of Nyquist, the method for digitization of analog signals by Shannon et al., methods of spectral analysis by Tukey, the cepstrum by Bogert et al., and the FFT by Tukey (and Cooley of IBM). Essentially all of these early foundations of DSP came out of the Bell Labs Research Lab in the 1930s, 1940s, 1950s, and 1960s. This fundamental research was motivated by applications (mainly in the areas of speech, sonar, and acoustics) that led to novel design methods for digital filters (Kaiser, Golden, Rabiner, Schafer), spectrum analysis methods (Rabiner, Schafer, Allen, Crochiere), fast convolution methods based on the FFT (Helms, Bergland), and advanced digital systems used to implement telephony channel banks (Jackson, McDonald, Freeny, Tewksbury). This talk summarizes the key contributions to DSP made at Bell Labs, and illustrates how DSP was utilized in the areas of speech and acoustics research. It also shows the vast, worldwide impact of this DSP research on modern consumer electronics.
Marteinsdóttir, Bryndís
2014-01-01
Dispersal is an important factor in plant community assembly, but assembly studies seldom include information on actual dispersal into communities, i.e. the local propagule pool. The aim of this study was to determine which factors influence plant community assembly by focusing on two phases of the assembly process: the dispersal phase and the establishment phase. At 12 study sites in grazed ex-arable fields in Sweden, the local plant community was determined, and in a 100-m radius around the centre of each site the regional species pool was measured. The local seed bank and the seed rain were explored to estimate the local propagule pool. Trait-based models were then applied to investigate if species traits (height, seed mass, clonal abilities, specific leaf area and dispersal method) and regional abundance influenced which species from the regional species pool dispersed to the local community (dispersal phase) and which established (establishment phase). Filtering of species during the dispersal phase indicates the effect of seed limitation while filtering during the establishment phase indicates microsite limitation. On average, 36% of the regional species pool dispersed to the local sites and, of those, 78% established. Species with enhanced dispersal abilities, e.g. higher regional abundance, smaller seeds and dispersal by cattle, were more likely to disperse to the sites than other species. At half the sites, dispersal was influenced by species height. Species establishment was, however, mainly unlinked to the traits included in this study. This study underlines the importance of seed limitation in local plant community assembly. It also suggests that without information on species dispersal into a site, it is difficult to distinguish between the influence of dispersal and establishment abilities, and thus seed and microsite limitation, as both can be linked to the same trait. PMID:25057815
Operational characteristics of a high voltage dense plasma focus
NASA Astrophysics Data System (ADS)
Woodall, D. M.
1985-11-01
A high voltage dense plasma focus powered by a single stage Marx bank was designed, built and operated. The maximum bank parameters are: voltage--120 kV, energy--20 kJ, short circuit current--600 kA. The bank impedance is about 200 milliohms. The plasma focus center electrode diameter is 1.27 cm. The outer electrode diameter is 10.16 cm. Rundown length is about 10 cm, corresponding to a bank quarter period of about 900 ns. Rundown L is about 50 milliohms. The context of this work is established with a review of previous plasma focus theoretical, experimental and computational work and related topics. Theoretical motivation for high voltage operation is presented. The design, construction and operation of this device are discussed in detail. Results and analysis of measurements obtained are presented. Device operation was investigated primarily at 80 kV (9 kJ), with a gas fill of about 1 torr H2, plus 3-5 percent A. The following diagnostics were used: gun voltage and current measurements; filtered, time resolved x ray PIN measurements of the pinch region; time integrated x ray pinhole photographs of the pinch region; fast frame visible light photographs of the sheath during rundown; and B probe measurements of the current sheath shortly before collapse.
NASA Astrophysics Data System (ADS)
Cartwright, I.; Gilfedder, B.; Hofmann, H.
2014-01-01
This study compares baseflow estimates using chemical mass balance, local minimum methods, and recursive digital filters in the upper reaches of the Barwon River, southeast Australia. During the early stages of high-discharge events, the chemical mass balance overestimates groundwater inflows, probably due to flushing of saline water from wetlands and marshes, soils, or the unsaturated zone. Overall, however, estimates of baseflow from the local minimum and recursive digital filters are higher than those based on chemical mass balance using Cl calculated from continuous electrical conductivity measurements. Between 2001 and 2011, the baseflow contribution to the upper Barwon River calculated using chemical mass balance is between 12 and 25% of the annual discharge with a net baseflow contribution of 16% of total discharge. Recursive digital filters predict higher baseflow contributions of 19 to 52% of discharge annually with a net baseflow contribution between 2001 and 2011 of 35% of total discharge. These estimates are similar to those from the local minimum method (16 to 45% of annual discharge and 26% of total discharge). These differences most probably reflect how the different techniques characterise baseflow. The local minimum and recursive digital filters probably aggregate much of the water from delayed sources as baseflow. However, as many delayed transient water stores (such as bank return flow, floodplain storage, or interflow) are likely to be geochemically similar to surface runoff, chemical mass balance calculations aggregate them with the surface runoff component. The difference between the estimates is greatest following periods of high discharge in winter, implying that these transient stores of water feed the river for several weeks to months at that time. Cl vs. discharge variations during individual flow events also demonstrate that inflows of high-salinity older water occur on the rising limbs of hydrographs, followed by inflows of low-salinity water from the transient stores as discharge falls. The joint use of complementary techniques allows a better understanding of the different components of water that contribute to river flow, which is important for the management and protection of water resources.
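For readers unfamiliar with the two families of techniques being compared, the sketch below implements a one-parameter Lyne-Hollick-type recursive digital filter and a chloride mass-balance estimate; the filter parameter and the end-member concentrations are placeholders, not values from the Barwon River study.

```python
import numpy as np

def lyne_hollick_baseflow(q, alpha=0.925):
    """One-pass Lyne-Hollick recursive digital filter: returns the baseflow series."""
    qf = np.zeros_like(q)          # filtered quickflow component
    for i in range(1, len(q)):
        qf[i] = alpha * qf[i - 1] + 0.5 * (1 + alpha) * (q[i] - q[i - 1])
        qf[i] = max(qf[i], 0.0)    # quickflow cannot be negative
    return np.minimum(q - qf, q)   # baseflow = total flow minus quickflow

def cmb_baseflow(q, cl_stream, cl_baseflow, cl_runoff):
    """Chemical (Cl) mass balance: portion of flow attributed to groundwater."""
    frac = (cl_stream - cl_runoff) / (cl_baseflow - cl_runoff)
    return q * np.clip(frac, 0.0, 1.0)

q = np.array([1.0, 5.0, 12.0, 8.0, 4.0, 2.5, 2.0])           # discharge (m3/s), toy event
cl = np.array([120, 60, 40, 55, 80, 100, 110], dtype=float)   # stream Cl (mg/L), toy values
print(lyne_hollick_baseflow(q))
print(cmb_baseflow(q, cl, cl_baseflow=130.0, cl_runoff=30.0))
```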
RF MEMS and Their Applications in NASA's Space Communication Systems
NASA Technical Reports Server (NTRS)
Williams, W. Daniel; Ponchak, George E.; Simons, Rainee N.; Zaman, Afroz; Kory, Carol; Wintucky, Edwin; Wilson, Jeffrey D.; Scardelletti, Maximilian; Lee, Richard; Nguyen, Hung
2001-01-01
Radio frequency (RF) and microwave communication systems rely on frequency, amplitude, and phase control circuits to efficiently use the available spectrum. Phase control circuits are required for electronically scanned phased-array antennas that enable radiation pattern shaping, scanning, and hopping. Two types of phase shifters, which are the phase control circuits, are most often used. The first consists of two circuits with different phase characteristics, such as two transmission lines of different lengths or a high-pass and a low-pass filter, and a switch that directs the RF power through one of the two circuits. Alternatively, a variable capacitor, or varactor, is used to change the effective electrical path length of a transmission line, which changes the phase characteristics. Filter banks are required for the diplexer at the front end of wide band communication satellites. These filters greatly increase the size and mass of the RF/microwave systems, but smaller diplexers may be made with a low loss varactor or a group of capacitors, a switch and an inductor.
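A simple way to see the switched-line phase shifter principle described above is to compute the differential phase between two transmission-line paths of different length; the frequency, effective permittivity, and line lengths below are made-up example values.

```python
import numpy as np

C0 = 299_792_458.0  # speed of light (m/s)

def line_phase_deg(length_m, freq_hz, eps_eff):
    """Electrical phase (degrees) accumulated along a transmission line."""
    wavelength = C0 / (freq_hz * np.sqrt(eps_eff))
    return 360.0 * length_m / wavelength

f = 10e9                         # 10 GHz operating frequency (example)
eps_eff = 6.7                    # effective permittivity of the line (example)
short, long = 3.0e-3, 4.45e-3    # the two switchable path lengths in metres (example)
print(line_phase_deg(long, f, eps_eff) - line_phase_deg(short, f, eps_eff))  # differential phase shift
```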
Comparative studies on the retardation and reduction of glyphosate during subsurface passage.
Litz, N T; Weigert, A; Krause, B; Heise, S; Grützmacher, G
2011-05-01
The herbicide Glyphosate was detected in the River Havel (Berlin, Germany) in concentrations between 0.1 and 2 μg/L (single maximum outlier: 5 μg/L). As the river indirectly acts as a drinking water source for the city's 3.4 million inhabitants, potential risks for drinking water production needed to be assessed. For this reason, laboratory (sorption and degradation studies) and technical scale investigations (bank filtration and slow sand filter experiments) were carried out. Batch adsorption experiments with Glyphosate yielded a low K_F of 1.89 (1/n = 0.48) for concentrations between 0.1 and 100 mg/L. Degradation experiments at 8 °C with oxygen limitation resulted in a decrease of Glyphosate concentrations in the liquid phase, probably due to slow adsorption (half-life: 30 days). During technical scale slow sand filter (SSF) experiments, Glyphosate attenuation was 70-80% for constant inlet concentrations of 0.7, 3.5 and 11.6 μg/L, respectively. Relevant retardation of Glyphosate breakthrough was observed despite the low adsorption potential of the sandy filter substrate and the relatively high flow velocity. The VisualCXTFit model was applied with data from typical Berlin bank filtration sites to extrapolate the results to a realistic field setting and yielded sufficient attenuation within a few days of travel time. Experiments on an SSF planted with Phragmites australis and an unplanted SSF with mainly vertical flow conditions, to which Glyphosate was continuously dosed, showed that in the planted SSF Glyphosate retardation exceeds 54% compared to 14% retardation in the unplanted SSF. The results show that saturated subsurface passage has the potential to efficiently attenuate glyphosate, favorably with aerobic conditions, long travel times and the presence of planted riparian buffer strips. Copyright © 2011 Elsevier Ltd. All rights reserved.
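To make the reported sorption and degradation parameters concrete, the snippet below evaluates the Freundlich isotherm with the quoted K_F and 1/n and a first-order decay with the 30-day half-life; the example concentrations and times are arbitrary.

```python
import numpy as np

K_F, one_over_n = 1.89, 0.48      # Freundlich parameters reported for glyphosate
half_life_days = 30.0             # observed disappearance half-life at 8 deg C

def freundlich_sorbed(c_mg_per_l):
    """Sorbed concentration q = K_F * C**(1/n)."""
    return K_F * np.asarray(c_mg_per_l) ** one_over_n

def remaining_fraction(t_days):
    """First-order decay with the given half-life."""
    k = np.log(2.0) / half_life_days
    return np.exp(-k * np.asarray(t_days))

print(freundlich_sorbed([0.1, 1.0, 10.0, 100.0]))
print(remaining_fraction([7, 30, 90]))   # fraction left after 1 week, 1 month, 3 months
```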
NASA Astrophysics Data System (ADS)
Chakraborty, M.; Das Gupta, R.; Mukhopadhyay, S.; Anjum, N.; Patsa, S.; Ray, J. G.
2017-03-01
This manuscript presents an analytical treatment of the feasibility of multi-scale Gabor filter bank responses for non-invasive oral cancer pre-screening and detection in the long infrared spectrum. The inability of present healthcare technology to detect oral cancer at the budding stage manifests in a high mortality rate. The paper contributes a step towards automation in non-invasive computer-aided oral cancer detection using an amalgamation of image processing and machine intelligence paradigms. Previous works have shown the discriminative difference in facial temperature distribution between a normal subject and a patient. The proposed work, for the first time, exploits this difference further by representing the facial Region of Interest (ROI) using multiscale rotation-invariant Gabor filter bank responses followed by classification using a Radial Basis Function (RBF) kernelized Support Vector Machine (SVM). The proposed study reveals an initial increase in classification accuracy with incrementing image scales followed by a degradation of performance; an indication that the addition of ever finer scales tends to embed noisy information instead of discriminative texture patterns. Moreover, the performance is consistently better for filter responses from profile faces compared to frontal faces. This is primarily attributed to the ineptness of Gabor kernels at analyzing low spatial frequency components over a small facial surface area. On our dataset comprising 81 malignant, 59 pre-cancerous, and 63 normal subjects, we achieve state-of-the-art accuracy of 85.16% for normal vs. precancerous and 84.72% for normal vs. malignant classification. This sets a benchmark for further investigation of multiscale feature extraction paradigms in the IR spectrum for oral cancer detection.
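A minimal sketch of the kind of multiscale, rotation-pooled Gabor filter bank feature extraction followed by an RBF SVM described above; the kernel parameters, pooling rule, and toy data are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC

def gabor_kernel(freq, theta, sigma=4.0, size=21):
    """Real part of a 2-D Gabor kernel at a given radial frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def gabor_features(img, freqs=(0.05, 0.1, 0.2), n_orient=6):
    """Mean/std of orientation-pooled responses: one (mean, std) pair per scale."""
    feats = []
    for f in freqs:
        responses = []
        for k in range(n_orient):
            kern = gabor_kernel(f, np.pi * k / n_orient)
            resp = np.abs(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern, img.shape)))
            responses.append(resp)
        pooled = np.max(responses, axis=0)      # max over orientations -> rotation invariance
        feats += [pooled.mean(), pooled.std()]
    return np.array(feats)

# toy usage: random "thermal ROI" patches with made-up labels
rng = np.random.default_rng(1)
X = np.array([gabor_features(rng.normal(size=(64, 64))) for _ in range(20)])
y = rng.integers(0, 2, size=20)
clf = SVC(kernel='rbf').fit(X, y)
print(clf.score(X, y))
```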
Texture analysis based on the Hermite transform for image classification and segmentation
NASA Astrophysics Data System (ADS)
Estudillo-Romero, Alfonso; Escalante-Ramirez, Boris; Savage-Carmona, Jesus
2012-06-01
Texture analysis has become an important task in image processing because it is used as a preprocessing stage in different research areas including medical image analysis, industrial inspection, segmentation of remote sensed imaginary, multimedia indexing and retrieval. In order to extract visual texture features a texture image analysis technique is presented based on the Hermite transform. Psychovisual evidence suggests that the Gaussian derivatives fit the receptive field profiles of mammalian visual systems. The Hermite transform describes locally basic texture features in terms of Gaussian derivatives. Multiresolution combined with several analysis orders provides detection of patterns that characterizes every texture class. The analysis of the local maximum energy direction and steering of the transformation coefficients increase the method robustness against the texture orientation. This method presents an advantage over classical filter bank design because in the latter a fixed number of orientations for the analysis has to be selected. During the training stage, a subset of the Hermite analysis filters is chosen in order to improve the inter-class separability, reduce dimensionality of the feature vectors and computational cost during the classification stage. We exhaustively evaluated the correct classification rate of real randomly selected training and testing texture subsets using several kinds of common used texture features. A comparison between different distance measurements is also presented. Results of the unsupervised real texture segmentation using this approach and comparison with previous approaches showed the benefits of our proposal.
Fusion of infrared and visible images based on BEMD and NSDFB
NASA Astrophysics Data System (ADS)
Zhu, Pan; Huang, Zhanhua; Lei, Hai
2016-07-01
This paper presents a new fusion method based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB) for visible-infrared images. Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it relatively more suitable for non-linear signal decomposition and fusion. NSDFB can provide directional filtering on the decomposition levels to capture more of the geometrical structure of the source images effectively. In our fusion framework, the entropies of the two source images are first calculated and the residue of the image whose entropy is larger is extracted to make it highly relevant to the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales by using BEMD and NSDFB. In this fusion scheme, two relevant fusion rules are used in the low-frequency sub-bands and the high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed fusion algorithm achieves state-of-the-art performance for visible-infrared image fusion in terms of both objective assessment and subjective visual quality, even for source images obtained in different conditions. Furthermore, the fused results have high contrast, remarkable target information and rich detail information that are more suitable for human visual characteristics or machine perception.
26 CFR 1.585-6 - Recapture method of changing from the reserve method of section 585.
Code of Federal Regulations, 2010 CFR
2010-04-01
... method of section 585. 1.585-6 Section 1.585-6 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Banking Institutions § 1.585... adjustment. (2) Cessation of banking business—(i) In general. If a bank to which this section applies ceases...
Sanz, Susana
2017-01-01
In this article, a Linear Quadratic Regulator (LQR) lateral stability and rollover controller has been developed; its main novelties are that it takes the road bank angle into account and that it uses exclusively active suspension for both lateral stability and rollover control. The main problem regarding the road bank angle is that it cannot be measured by means of on-board sensors. The solution proposed in this article is to estimate this variable using a Kalman filter. In this way, it is possible to distinguish between the road disturbance component and the vehicle's roll angle. The controller's effectiveness has been tested by means of simulations carried out in TruckSim, using an experimentally validated vehicle model. Lateral load transfer, roll angle, yaw rate, and sideslip angle have been analyzed in order to quantify the improvements achieved in the behavior of the vehicle. For that purpose, these variables have been compared with the results obtained from both a vehicle using passive suspension and a vehicle using a fuzzy logic controller. PMID:29027910
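A minimal sketch of the LQR gain computation underlying such a controller, assuming a hypothetical two-state roll model rather than the validated TruckSim vehicle model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR: K = R^-1 B^T P, with P from the algebraic Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# toy 2-state roll model (roll angle, roll rate) with one active-suspension input
A = np.array([[0.0, 1.0],
              [-8.0, -1.5]])
B = np.array([[0.0],
              [2.0]])
Q = np.diag([10.0, 1.0])   # weights on roll angle and roll rate
R = np.array([[0.1]])      # weight on actuator effort
K = lqr_gain(A, B, Q, R)
print(K)                   # state-feedback gain, u = -K x
```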
,
1979-01-01
Available are four multichannel profiles collected by Digicon Geophysical Corporation in 1975 using a 48-channel streamer (3600 m long) and a 27.9 cubic liter airgun array. They were processed in Denver on the Phoenix "I" by William C. Petterson. The processing includes demultiplexing and resampling, geometry and common-depth-point definition, velocity analysis, noise muting, band-pass filtering, time-variant filtering, time-variant deconvolution, and automatic gain control (AGC) scaling, prior to the final profile playout. The release includes parts of or all of four lines off the eastern United States (see map) over the Georges Bank basin, the Long Island platform, and the upper continental rise. These profiles were collected as part of a regional grid over the offshore Atlantic sedimentary basins as part of a continuing program to assess the resource potential using non-proprietary data. Lines 7 and 8 are cross-shelf profiles (280 km and 400 km long, respectively) taken across Georges Basin, Georges Bank, and the continental slope and rise east of Massachusetts. A five-kilometer gap exists in line 8C near the outer edge of the shelf because the high concentration of lobster pots there prevented a continuous traverse through the area. Line 12 (shotpoints 5988-14380, parts E, F, G, H, I and J) is an along-the-shelf profile, stretching from the vicinity of Hudson Channel (mid-shelf east of New Jersey) to Browns Bank, 100 km southeast of Nova Scotia, terminating in the vicinity of the Shell Mohawk (B-93) hole. Line 13 (shotpoints 83-11295, 1120 km) traverses the upper continental rise between Cape Hatteras and Georges Bank. These profiles, including velocity scans and shot point maps, may be viewed at the U.S. Geological Survey Office, Bldg. B, Quissett Campus, Woods Hole, MA, and the U.S. Geological Survey Office, Bldg. 25 at the Denver Federal Center. Copies of maps, scans, and profiles can be purchased from the National Geophysical and Solar-Terrestrial Data Center, Environmental Data Service - NOAA, Code D 621, Boulder, CO 80303.
Computational principles underlying recognition of acoustic signals in grasshoppers and crickets.
Ronacher, Bernhard; Hennig, R Matthias; Clemens, Jan
2015-01-01
Grasshoppers and crickets independently evolved hearing organs and acoustic communication. They differ considerably in the organization of their auditory pathways, and the complexity of their songs, which are essential for mate attraction. Recent approaches aimed at describing the behavioral preference functions of females in both taxa by a simple modeling framework. The basic structure of the model consists of three processing steps: (1) feature extraction with a bank of 'LN models'-each containing a linear filter followed by a nonlinearity, (2) temporal integration, and (3) linear combination. The specific properties of the filters and nonlinearities were determined using a genetic learning algorithm trained on a large set of different song features and the corresponding behavioral response scores. The model showed an excellent prediction of the behavioral responses to the tested songs. Most remarkably, in both taxa the genetic algorithm found Gabor-like functions as the optimal filter shapes. By slight modifications of Gabor filters several types of preference functions could be modeled, which are observed in different cricket species. Furthermore, this model was able to explain several so far enigmatic results in grasshoppers. The computational approach offered a remarkably simple framework that can account for phenotypically rather different preference functions across several taxa.
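The three-step model structure summarized above (linear filter, nonlinearity, temporal integration, linear combination) can be sketched as follows; the Gabor filter parameters, sigmoid nonlinearity, and combination weights are illustrative placeholders, not the values found by the genetic algorithm.

```python
import numpy as np

def gabor_filter(t, f0=50.0, sigma=0.01):
    """Temporal Gabor filter: Gaussian envelope times a sinusoidal carrier."""
    return np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f0 * t)

def ln_preference_score(envelope, fs=1000.0):
    """Three-step LN-type model: linear filtering and a static nonlinearity,
    temporal integration, then a linear combination of the channel outputs."""
    t = np.arange(-0.05, 0.05, 1.0 / fs)
    filters = [gabor_filter(t, f0) for f0 in (20.0, 50.0)]        # a small "bank of LN models"
    channels = []
    for h in filters:
        lin = np.convolve(envelope, h, mode='same')               # (1) linear filter
        nl = 1.0 / (1.0 + np.exp(-5.0 * lin))                     # (1) sigmoidal nonlinearity
        channels.append(nl.mean())                                # (2) temporal integration
    w = np.array([0.7, 0.3])                                      # (3) linear combination weights
    return float(w @ np.array(channels))

song_envelope = np.abs(np.sin(2 * np.pi * 30.0 * np.arange(0, 1.0, 0.001)))  # toy pulsed envelope
print(ln_preference_score(song_envelope))
```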
Biologically inspired circuitry that mimics mammalian hearing
NASA Astrophysics Data System (ADS)
Hubbard, Allyn; Cohen, Howard; Karl, Christian; Freedman, David; Mountain, David; Ziph-Schatzberg, Leah; Nourzad Karl, Marianne; Kelsall, Sarah; Gore, Tyler; Pu, Yirong; Yang, Zibing; Xing, Xinyu; Deligeorges, Socrates
2009-05-01
We are developing low-power microcircuitry that implements classification and direction finding systems of very small size and small acoustic aperture. Our approach was inspired by the fact that small mammals are able to localize sounds even though their ears may be separated by as little as a centimeter. Gerbils, in particular, are good low-frequency localizers, which is a particularly difficult task, since a wavelength at 500 Hz is on the order of two feet. Given such signals, cross-correlation-based methods to determine direction fail badly in the presence of a small amount of noise, e.g. wind noise and noise clutter common to almost any realistic environment. Circuits are being developed using both analog and digital techniques, each of which processes signals in fundamentally the same way the peripheral auditory system of mammals processes sound. A filter bank represents the filtering done by the cochlea. The auditory nerve is implemented using a combination of an envelope detector, an automatic gain stage, and a unique one-bit A/D, which creates what amounts to a neural impulse. These impulses are used to extract pitch characteristics, which we use to classify sounds such as vehicles, and small and large weaponry from AK-47s to 155 mm cannon, including mortar launches and impacts. In addition to the pitchograms, we also use neural nets for classification.
Detection of Fast Moving and Accelerating Targets Compensating Range and Doppler Migration
2014-06-01
The Radon-Fourier transform has been introduced to realize long-term coherent integration of moving targets with range migration [8, 9].
8. ... (2010) Long-time coherent integration for radar target detection based on Radon-Fourier transform, in Proceedings of the IEEE Radar Conference, pp. 432-436.
9. Xu, J., Yu, J., Peng, Y. & Xia, X. (2011) Radon-Fourier transform for radar target detection, I: Generalized Doppler filter bank, IEEE
Retained energy-based coding for EEG signals.
Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando
2012-09-01
The use of long-term records in electroencephalography is becoming more frequent due to their diagnostic potential and the growth of novel signal processing methods that deal with these types of recordings. In these cases, the considerable volume of data to be managed makes compression necessary to reduce the bit rate for transmission and storage applications. In this paper, a new compression algorithm specifically designed to encode electroencephalographic (EEG) signals is proposed. Cosine modulated filter banks are used to decompose the EEG signal into a set of subbands well adapted to the frequency bands characteristic of the EEG. Given that no regular pattern may be easily extracted from the signal in the time domain, a thresholding-based method is applied for quantizing samples. The method of retained energy is designed for efficiently computing the threshold in the decomposition domain which, at the same time, allows the quality of the reconstructed EEG to be controlled. The experiments are conducted over a large set of signals taken from two public databases available at Physionet and the results show that the compression scheme yields better compression than other reported methods. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
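The retained-energy thresholding idea can be prototyped as below: keep the largest-magnitude sub-band coefficients until a target fraction of the total energy is retained; the 95% target and the synthetic coefficients are arbitrary examples, not the paper's settings.

```python
import numpy as np

def retained_energy_threshold(coeffs, energy_fraction=0.95):
    """Smallest magnitude threshold such that the kept coefficients
    retain at least `energy_fraction` of the total energy."""
    mags = np.sort(np.abs(coeffs))[::-1]          # magnitudes, largest first
    energy = np.cumsum(mags**2)
    k = np.searchsorted(energy, energy_fraction * energy[-1]) + 1
    return mags[min(k, len(mags)) - 1]

rng = np.random.default_rng(2)
c = rng.laplace(scale=1.0, size=1024)             # stand-in for sub-band coefficients
thr = retained_energy_threshold(c, 0.95)
kept = np.where(np.abs(c) >= thr, c, 0.0)
print(thr, np.count_nonzero(kept), (kept**2).sum() / (c**2).sum())
```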
Video quality assessment method motivated by human visual perception
NASA Astrophysics Data System (ADS)
He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng
2016-11-01
Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in the middle temporal area by simulating the receptive field of neurons in V1 for the motion perception of the human visual system. Motivated by the biological evidence for visual motion perception, a VQA method is proposed in this paper, which comprises a motion perception quality index and a spatial index. To be more specific, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from the difference-of-Gaussian filter bank, which produces the motion perception quality index, and the gradient similarity measure is used to evaluate the spatial distortion of the video sequence to get the spatial quality index. The experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that the random forests regression technique trained by the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
Detection of Pigment Networks in Dermoscopy Images
NASA Astrophysics Data System (ADS)
Eltayef, Khalid; Li, Yongmin; Liu, Xiaohui
2017-02-01
One of the most important structures in dermoscopy images is the pigment network, which is also one of the most challenging and fundamental tasks for dermatologists in the early detection of melanoma. This paper presents an automatic system to detect pigment networks in dermoscopy images. The design of the proposed algorithm consists of four stages. First, a pre-processing algorithm is carried out in order to remove the noise and improve the quality of the image. Second, a bank of directional filters and morphological connected component analysis are applied to detect the pigment networks. Third, features are extracted from the detected image, which can be used in the subsequent stage. Fourth, the classification process is performed by applying a feed-forward neural network, in order to classify the region as either normal or abnormal skin. The method was tested on a dataset of 200 dermoscopy images from Hospital Pedro Hispano (Matosinhos), and better results were produced compared to previous studies.
Cardone, Franco; Sowemimo-Coker, Samuel; Abdel-Haq, Hanin; Sbriccoli, Marco; Graziano, Silvia; Valanzano, Angelina; Berardi, Vito Angelo; Galeno, Roberta; Puopolo, Maria; Pocchiari, Maurizio
2014-04-01
The safety of red blood cells (RBCs) is of concern because of the occurrence of four transfusion-transmitted variant Creutzfeldt-Jakob disease (vCJD) cases in the United Kingdom. The absence of validated screening tests requires the use of procedures to remove prions from blood to minimize the risk of transmission. These procedures must be validated using infectious prions in a form that is as close as possible to one in blood. Units of human whole blood (WB) and RBCs were spiked with high-speed supernatants of 263K scrapie-infected hamster brain homogenates. Spiked samples were leukoreduced and then passed through prion-removing filters (Pall Corporation). In another experiment, RBCs from 263K scrapie-infected hamsters were treated as above, and residual infectivity was measured by bioassay. The overall removal of infectivity by the filters from prion-spiked WB and RBCs was approximately two orders of magnitude. No infectivity was detected in filtered hamster RBCs endogenously infected with scrapie. The use of prion-removing filters may help to reduce the risk of transfusion-transmitted vCJD. To avoid overestimation of prion removal efficiency in validation studies, it may be more appropriate to use supernates from ultracentrifugation of scrapie-infected hamster brain homogenate rather than the current standard brain homogenates. © 2013 American Association of Blood Banks.
Microscopy with spatial filtering for sorting particles and monitoring subcellular morphology
NASA Astrophysics Data System (ADS)
Zheng, Jing-Yi; Qian, Zhen; Pasternack, Robert M.; Boustany, Nada N.
2009-02-01
Optical scatter imaging (OSI) was developed to non-invasively track real-time changes in particle morphology with submicron sensitivity in situ without exogenous labeling, cell fixing, or organelle isolation. For spherical particles, the intensity ratio of wide-to-narrow angle scatter (OSIR, Optical Scatter Image Ratio) was shown to decrease monotonically with diameter and agree with Mie theory. In living cells, we recently reported that this technique is able to detect mitochondrial morphological alterations, which were mediated by the Bcl-xL transmembrane domain, and could not be observed in fluorescence or differential interference contrast images. Here we further extend the ability of morphology assessment by adopting a digital micromirror device (DMD) for Fourier filtering. When placed in the Fourier plane, the DMD can be used to select scattering intensities at a desired combination of scattering angles. We designed an optical filter bank consisting of Gabor-like filters at various scales and rotations; Gabor filters have been widely used for the localization of spatial and frequency information in digital images and for texture analysis. Using a model system consisting of mixtures of polystyrene spheres and bacteria, we show how this system can be used to sort particles on a microscopic slide based on their size, orientation and aspect ratio. We are currently applying this technique to characterize the morphology of subcellular organelles to help understand fundamental biological processes.
NASA Astrophysics Data System (ADS)
Bandyopadhyay, Shreya; de, Sunil Kumar
2014-05-01
In the present paper, an RS-GIS-based method is proposed for erosion vulnerability zonation of an entire river, based on simple techniques that require very little field investigation. This method consists of eight parameters: rainfall erosivity, lithological factor, bank slope, meander index, river gradient, soil erosivity, vegetation cover and anthropogenic impact. Meteorological data, GSI maps, LISS III imagery (30 m resolution), the SRTM DEM (56 m resolution) and Google images have been used to determine rainfall erosivity, lithological factor, bank slope, meander index, river gradient, vegetation cover and anthropogenic impact; the soil map of the NBSSLP, India, has been used for assessing the soil erosivity index. By integrating the individual values of those six parameters (the first two parameters remain constant for this particular study area), a bank erosion vulnerability zonation map of the River Haora, Tripura, India (23°37'-23°53'N and 91°15'-91°37'E) has been prepared. The values have been compared with the existing BEHI-NBS method at 60 spots and also with field data from 30 cross sections (covering the 60 spots) taken along a 51 km stretch of the river in Indian Territory, and the estimated values were found to match the existing method as well as the field data. The whole stretch has been divided into five hazard zones, i.e. Very High, High, Moderate, Low and Very Low Hazard Zones, covering 5.66 km, 16.81 km, 40.82 km, 29.67 km and 9.04 km, respectively. KEY WORDS: Bank erosion, Bank Erosion Hazard Index (BEHI), Near Bank Stress (NBS), Erosivity, Bank Erosion Vulnerability Zonation.
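A schematic of how normalized parameter layers might be combined and sliced into five hazard classes; the weights, class breaks, and random rasters are placeholders, not the study's calibration.

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (200, 200)                                   # toy raster grid along the river corridor
layers = {name: rng.random(shape) for name in
          ["bank_slope", "meander_index", "river_gradient",
           "soil_erosivity", "vegetation_cover", "anthropogenic_impact"]}

weights = {"bank_slope": 1.0, "meander_index": 1.0, "river_gradient": 1.0,
           "soil_erosivity": 1.0, "vegetation_cover": -1.0,      # denser vegetation assumed to lower hazard
           "anthropogenic_impact": 1.0}

score = sum(weights[k] * layers[k] for k in layers)
score = (score - score.min()) / (score.max() - score.min())      # rescale to 0-1

# five hazard zones: Very Low, Low, Moderate, High, Very High
zones = np.digitize(score, bins=[0.2, 0.4, 0.6, 0.8])
print(np.bincount(zones.ravel(), minlength=5))                    # cell count per hazard class
```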
5. (Credit LSU) The McNeil Street Station from Douglas Island, ...
5. (Credit LSU) The McNeil Street Station from Douglas Island, across Cross Bayou, c1907. Note the enlarged wood-framed filter wing on the left; the coal shed on the right; and the low service auxiliary pump house on tracks on the incline on the bank leading down to Cross Bayou. (From: Louisiana State University, Shreveport Archives post card collection) - McNeil Street Pumping Station, McNeil Street & Cross Bayou, Shreveport, Caddo Parish, LA
Polarization observations of four southern pulsars at 1560 MHz
NASA Astrophysics Data System (ADS)
Wu, Xin-Ji; Manchester, R. N.; Lyne, A. G.
1991-12-01
Some interesting results from the mean pulse polarization observations of four southern pulsars made at the Australian National Radio Astronomy Observatory, Parkes, using the 64-m telescope in June and July, 1988, are presented. The 2 x 16 x 5 MHz filter system from Jodrell Bank has proved excellent in dedispersing the pulse signals and measuring their polarization properties. Data for the four pulsars are given in some detail, and their spectral behavior is discussed.
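The dispersion delay that such a filter-bank system removes follows the standard cold-plasma relation; the sketch below evaluates per-channel delays for a hypothetical dispersion measure and an assumed 16 x 5 MHz channel layout around 1560 MHz.

```python
import numpy as np

K_DM = 4.148808e3  # dispersion constant in MHz^2 pc^-1 cm^3 s

def dispersion_delay_s(dm, freq_mhz, ref_mhz):
    """Arrival-time delay of a pulse at freq_mhz relative to ref_mhz."""
    return K_DM * dm * (freq_mhz**-2 - ref_mhz**-2)

dm = 100.0                                        # example dispersion measure (pc cm^-3)
channels = 1560.0 + 5.0 * np.arange(16)           # 16 x 5 MHz channels starting at 1560 MHz (illustrative)
delays = dispersion_delay_s(dm, channels, channels[-1])
print(delays * 1e3)                               # milliseconds of delay to remove per channel
```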
Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code
NASA Astrophysics Data System (ADS)
Marinkovic, Slavica; Guillemot, Christine
2006-12-01
Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as a multiple-hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
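A toy analogue of the frame-expansion-plus-pseudoinverse-receiver idea, using a generic random overcomplete analysis operator rather than an actual oversampled filter bank, and omitting the syndrome-based error localization.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 8, 12                                   # signal length and number of frame coefficients (oversampled)
F = rng.normal(size=(m, n)) / np.sqrt(m)       # a generic overcomplete analysis operator (frame)

x = rng.normal(size=n)
y = F @ x                                      # frame expansion (analysis filter bank analogue)
yq = np.round(y * 8) / 8                       # coarse scalar quantization of the coefficients
x_hat = np.linalg.pinv(F) @ yq                 # "pseudoinverse receiver" reconstruction
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))   # relative reconstruction error
```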
A reprogrammable receiver architecture for wireless signal interception
NASA Astrophysics Data System (ADS)
Yao, Timothy S.
2003-09-01
In this paper, a re-programmable receiver architecture, based on the software-defined-radio concept, for wireless signal interception is presented. The radio-frequency (RF) signal that the receiver would like to intercept may come from a terrestrial cellular network or communication satellites, whose carrier frequencies are in the range from 800 MHz (civilian mobile) to 15 GHz (Ku band). To intercept signals from such a wide range of frequencies across these varied communication systems, the traditional way is to deploy multiple receivers to scan and detect the desired signal. This traditional approach is obviously unattractive in terms of cost, efficiency, and accuracy. Instead, we propose a universal receiver, which is software-driven and re-configurable, to intercept signals of interest. The software-defined-radio based receiver first intercepts RF energy over a wide spectrum (25 MHz) through an antenna, performs zero-IF down conversion (homodyne architecture) to baseband, and digitally channelizes the baseband signal. The channelization module is a bank of high performance digital filters. The bandwidth of the filter bank is programmable according to the wireless communication protocol under watch. In the baseband processing, high-performance digital signal processors carry out the detection process and microprocessors handle the communication protocols. The baseband processing is also re-configurable for different wireless standards and protocols. The advantages of the software-defined-radio architecture over traditional RF receivers make it a favorable technology for communication signal interception and surveillance.
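The "bank of high-performance digital filters" used for channelization is commonly realized as a polyphase FFT channelizer; the simplified sketch below (commutator ordering and phase conventions glossed over) shows the structure, with an arbitrary channel count and test tone.

```python
import numpy as np
from scipy.signal import firwin

def channelize(x, M, taps_per_branch=8):
    """Critically sampled polyphase FFT channelizer (simplified sketch)."""
    h = firwin(M * taps_per_branch, 1.0 / M)        # prototype low-pass filter
    E = h.reshape(taps_per_branch, M).T             # E[k] = h[k::M], k-th polyphase branch
    nblocks = len(x) // M
    xb = x[:nblocks * M].reshape(nblocks, M)        # one row per block of M input samples
    branches = np.stack([np.convolve(xb[:, k], E[k])[:nblocks] for k in range(M)], axis=1)
    return np.fft.ifft(branches, axis=1)            # one column per output channel

fs, M = 1.0e6, 8                                    # 1 MS/s input split into 8 channels (example)
t = np.arange(20000) / fs
x = np.cos(2 * np.pi * 190e3 * t)                   # a tone that should land mostly in one channel
Y = channelize(x, M)
print(np.argmax(np.mean(np.abs(Y)**2, axis=0)))     # index of the strongest output channel
```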
A scalable, micropore, platelet rich plasma separation device.
Dickson, Mary Nora; Amar, Levy; Hill, Michael; Schwartz, Joseph; Leonard, Edward F
2012-12-01
We have designed a novel, low energy platelet-rich-plasma (PRP) separator capable of producing 50 mL of PRP in 30 min, intended for military and emergency applications. Blood flows over a 3 mm length of sieve at high rates of shear. A plasma-platelet filtrate passes through the sieve's pores while erythrocytes remain. The filtrate is flowed over a second 3 mm length of smaller-pored sieve that withdraws plasma. Bulk blood volume is maintained by returning platelet-free plasma to the erythrocyte pool, enabling a nearly complete multi-pass platelet extraction. The total fraction of platelets extracted is θ(T) = 1 − exp(−V_f(T)·φ_P/V), where V is the original plasma volume, V_f(T) is the total filtered volume, and φ_P is the platelet passage ratio (filtrate concentration/bulk average concentration), taken to be constant. Maximum θ(T) occurs at maximum V_f(T)·φ_P. Test microsieves, 3 mm long × 3 mm wide, were used. φ_P values, measured at various filtrate flow rates (20-100 uL/min) and filter pore sizes (1.2-3.5 μm), were as high as 150%. Maximum V_f(T)·φ_P was achieved using the 3.5 μm filters at the highest flow rate, 100 uL/min. Erythrocyte leakages were always below 2,000/uL, far below the allowable limit stipulated by the American Association of Blood Banking. These data imply that a 13.7 cm² filter area is sufficient to achieve the target separation of 50 mL of platelet concentrate in 30 min. The filtration cartridge would consist of multiple microporous strips of 3 mm width arranged in parallel so that each element would see the conditions used in the prototype experiments presented here. Other microfiltration schemes suggest no method of scaling to practical levels.
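A quick numerical reading of the extraction formula above, with placeholder values for the filtered volume, passage ratio, and plasma pool volume:

```python
import numpy as np

def platelets_extracted(v_filtered_ml, passage_ratio, v_plasma_ml):
    """theta(T) = 1 - exp(-V_f(T) * phi_P / V)"""
    return 1.0 - np.exp(-v_filtered_ml * passage_ratio / v_plasma_ml)

# placeholder numbers: 150 mL filtered in total, passage ratio 1.5 (150%), 200 mL plasma pool
print(platelets_extracted(150.0, 1.5, 200.0))   # ~0.68, i.e. about two-thirds of platelets captured
```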
NASA Astrophysics Data System (ADS)
Yu, Qifeng; Liu, Xiaolin; Sun, Xiangyi
1998-07-01
Generalized spin filters, including several directional filters such as the directional median filter and the directional binary filter, are proposed for the removal of noise from fringe patterns and the extraction of fringe skeletons with the help of fringe-orientation maps (FOMs). The generalized spin filters can filter noise off fringe patterns and binary fringe patterns efficiently, without distortion of fringe features. A quadrantal angle filter is developed to filter noise off the FOM. With these new filters, the derivative-sign binary image (DSBI) method for extraction of fringe skeletons is improved considerably. The improved DSBI method can extract high-density skeletons as well as common-density skeletons.
NASA Astrophysics Data System (ADS)
Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung
2017-04-01
Due to limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. There are many methods of parameter estimation; for example, the Kalman filter provides real-time calibration of parameters through measurements from groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, Kalman filter methods are limited to linearity. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which can consider the uncertainty of the data in parameter estimation. With these two methods, we can estimate parameters given hard data (certain) and soft data (uncertain) at the same time. In this study, we use Python and QGIS with the groundwater model (MODFLOW) and develop the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation. This approach provides a conventional filtering method while also considering the uncertainty of the data. The study was conducted through a numerical model experiment combining the Bayesian maximum entropy filter with a hypothetical MODFLOW groundwater model architecture for numerical estimation. Virtual observation wells were used to simulate periodic observations of the groundwater model. The results showed that, by considering the uncertainty of the data, the Bayesian maximum entropy filter provides an ideal result for real-time parameter estimation.
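For reference, one predict/update cycle of a standard linear Kalman filter (the baseline that the Bayesian Maximum Entropy Filtering extends) looks as follows; the state and observation matrices and the noise covariances are toy placeholders, not the MODFLOW coupling.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P : prior state estimate and covariance
    z    : new observation (e.g., heads at virtual monitoring wells)
    """
    # predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)               # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy 2-parameter example (e.g., log-conductivity in two zones) observed through one well
A = np.eye(2); H = np.array([[0.6, 0.4]])
Q = 1e-4 * np.eye(2); R = np.array([[0.01]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, z=np.array([1.2]), A=A, H=H, Q=Q, R=R)
print(x, np.diag(P))
```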
Replacement of filters for respirable quartz measurement in coal mine dust by infrared spectroscopy.
Farcas, Daniel; Lee, Taekhee; Chisholm, William P; Soo, Jhy-Charm; Harper, Martin
2016-01-01
The objective of this article is to compare and characterize nylon, polypropylene (PP), and polyvinyl chloride (PVC) membrane filters that might be used to replace the vinyl/acrylic co-polymer (DM-450) filter currently used in the Mine Safety and Health Administration (MSHA) P-7 method (Quartz Analytical Method) and the National Institute for Occupational Safety and Health (NIOSH) Manual of Analytical Methods 7603 method (QUARTZ in coal mine dust, by IR re-deposition). This effort is necessary because the DM-450 filters are no longer commercially available. There is an impending shortage of DM-450 filters. For example, the MSHA Pittsburgh laboratory alone analyzes approximately 15,000 samples annually according to the MSHA P-7 method that requires DM-450 filters. Membrane filters suitable for on-filter analysis should have high infrared (IR) transmittance in the spectral region 600-1000 cm⁻¹. Nylon (47 mm, 0.45 µm pore size), PP (47 mm, 0.45 µm pore size), and PVC (47 mm, 5 µm pore size) filters meet this specification. Limits of detection and limits of quantification were determined from Fourier transform infrared spectroscopy (FTIR) measurements of blank filters. The average measured quartz mass and coefficient of variation were determined from test filters spiked with respirable α-quartz following the MSHA P-7 and NIOSH 7603 methods. Quartz was also quantified in samples of respirable coal dust on each test filter type using the MSHA and NIOSH analysis methods. The results indicate that PP and PVC filters may replace the DM-450 filters for quartz measurement in coal dust by FTIR. PVC filters of 5 µm pore size seemed to be a suitable replacement, although their ability to retain small particulates should be checked by further experiments.
The ethics of donor human milk banking.
Arnold, Lois D W
2006-01-01
This case study of donor human milk banking and the ethics that govern interested parties is the first time the ethics of donor milk banking has been explored. Two different models of ethics and their direct impact on donor milk banking are examined: biomedical ethics and public health ethics. How these models and principles affect different aspects of donor human milk banking and the parties involved in the delivery of this service are elucidated. Interactions of parties with each other and how the quality and type of interaction affects the ethical delivery of donor milk banking services are described. Crystallization is at the heart of the qualitative methodology used. Writing as a method of inquiry, an integrative research review, and personal experience are the three methods involved in the crystallization process. Suggestions are made for improving access and knowledge of banked donor human milk, a valuable public health resource.
Supporting Informal Learning by Traders in Investment Banks
ERIC Educational Resources Information Center
Chivers, Geoffrey
2011-01-01
Purpose: The main aims of this paper are to determine the extent to which experienced traders in investment banks based in London are learning by informal methods, which methods are to the fore, and whether HRD staff are providing support for informal learning. It also seeks to find evidence that such investment banks were attempting to become…
de By, Theo M M H; McDonald, Carl; Süßner, Susanne; Davies, Jill; Heng, Wee Ling; Jashari, Ramadan; Bogers, Ad J J C; Petit, Pieter
2017-11-01
Surgeons needing human cardiovascular tissue for implantation in their patients are confronted with cardiovascular tissue banks that use different methods to identify and decontaminate micro-organisms. To elucidate these differences, we compared the quality of processing methods in 20 tissue banks and 1 reference laboratory. We did this to validate the results for accepting or rejecting tissue. We included the decontamination methods used and the influence of antibiotic cocktails and residues with results and controls. The minor details of the processes were not included. To compare the outcomes of microbiological testing and decontamination methods of heart valve allografts in cardiovascular tissue banks, an international quality round was organized. Twenty cardiovascular tissue banks participated in this quality round. The quality round method was validated first and consisted of sending purposely contaminated human heart valve tissue samples with known micro-organisms to the participants. The participants identified the micro-organisms using their local decontamination methods. Seventeen of the 20 participants correctly identified the micro-organisms; if these samples were heart valves to be released for implantation, 3 of the 20 participants would have decided to accept their result for release. Decontamination was shown not to be effective in 13 tissue banks because of growth of the organisms after decontamination. Articles in the literature revealed that antibiotics are effective at 36°C and not, or less so, at 2-8°C. The decontamination procedure, if it is validated, will ensure that the tissue contains no known micro-organisms. This study demonstrates that the quality round method of sending contaminated tissues and assessing the results of the microbiological cultures is an effective way of validating the processes of tissue banks. Only when harmonization, based on validated methods, has been achieved, will surgeons be able to fully rely on the methods used and have confidence in the consistent sterility of the tissue grafts. Tissue banks should validate their methods so that all stakeholders can trust the outcomes. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
NASA Technical Reports Server (NTRS)
Grumet, A.
1981-01-01
An automatic correlation plane processor that can rapidly acquire, identify, and locate the autocorrelation outputs of a bank of multiple optical matched filters is described. The read-only memory (ROM)-stored digital silhouette of each image associated with each matched filter allows TV video to be used to collect image energy to provide accurate normalization of autocorrelations. The resulting normalized autocorrelations are independent of the illumination of the matched input. Deviation from unity of a normalized correlation can be used as a confidence measure of correct image identification. Analog preprocessing circuits permit digital conversion and random access memory (RAM) storage of those video signals with the correct amplitude, pulse width, rising slope, and falling slope. TV-synchronized addressing of 3 RAMs permits on-line storage of: (1) the maximum unnormalized amplitude, (2) the image x location, and (3) the image y location of the output of each of up to 99 matched filters. A fourth RAM stores all normalized correlations. A normalization approach, normalization for cross-correlations, a system description with block diagrams, and system applications are discussed.
12 CFR 618.8000 - Definitions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Definitions. 618.8000 Section 618.8000 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM GENERAL PROVISIONS Related Services § 618... means the method or procedures used to deliver a related service. This distinguishes the particulars of...
The Fourier decomposition method for nonlinear and non-stationary time series analysis.
Singh, Pushpendra; Joshi, Shiv Dutt; Patney, Rakesh Kumar; Saha, Kaushik
2017-03-01
For many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on the Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of 'Fourier intrinsic band functions' (FIBFs). The FDM presents a generalized Fourier expansion with variable amplitudes and variable frequencies of a time series by the Fourier method itself. We propose an idea of zero-phase filter bank-based multivariate FDM (MFDM), for the analysis of multivariate nonlinear and non-stationary time series, using the FDM. We also present an algorithm to obtain cut-off frequencies for MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend and instantaneous frequency. The proposed methods provide a time-frequency-energy (TFE) distribution that reveals the intrinsic structure of a data. Numerical computations and simulations have been carried out and comparison is made with the empirical mode decomposition algorithms.
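The zero-phase filter bank mentioned for the multivariate FDM can be prototyped with forward-backward (filtfilt) band-pass filtering; the band edges and filter order below are arbitrary and do not implement the MFDM cut-off selection algorithm.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def zero_phase_filter_bank(x, fs, edges):
    """Split x into band-limited components using zero-phase Butterworth band-pass filters.
    edges : increasing list of band-edge frequencies in Hz, e.g. [0.5, 4, 8, 16, 32]."""
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype='band')
        bands.append(filtfilt(b, a, x))          # forward-backward filtering -> zero phase
    return np.array(bands)

fs = 128.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)   # toy two-component signal
bands = zero_phase_filter_bank(x, fs, [0.5, 4, 8, 16, 32])
print(bands.shape, np.round((bands**2).mean(axis=1), 3))           # energy per band
```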
A stable systemic risk ranking in China's banking sector: Based on principal component analysis
NASA Astrophysics Data System (ADS)
Fang, Libing; Xiao, Binqing; Yu, Honghai; You, Qixing
2018-02-01
In this paper, we compare five popular systemic risk rankings, and apply a principal component analysis (PCA) model to provide a stable systemic risk ranking for the Chinese banking sector. Our empirical results indicate that the five methods suggest vastly different systemic risk rankings for the same bank, while the combined systemic risk measure based on PCA provides a reliable ranking. Furthermore, according to the factor loadings of the first component, the PCA-combined ranking is mainly based on fundamentals instead of market price data. We clearly find that price-based rankings are not as practical a method as fundamentals-based ones. This PCA-combined ranking directly shows the systemic risk contribution of each bank for banking supervision purposes and reminds banks to prepare for and cope with financial crises in advance.
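A minimal sketch of combining several standardized risk measures through the loadings of the first principal component; the data are random stand-ins, not the Chinese banking sector measures.

```python
import numpy as np

rng = np.random.default_rng(5)
scores = rng.random((16, 5))                       # 16 banks x 5 individual systemic-risk measures (toy data)

Z = (scores - scores.mean(axis=0)) / scores.std(axis=0)   # standardize each measure
cov = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
w = eigvecs[:, -1]                                  # loadings of the first principal component
w = w if w.sum() > 0 else -w                        # sign heuristic: higher score = riskier
combined = Z @ w
ranking = np.argsort(-combined)                     # banks ordered from most to least systemically risky
print(ranking)
```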
Safety of long-term subcutaneous free flap skin banking after skin-sparing mastectomy
Verstappen, Ralph; Djedovic, Gabriel; Morandi, Evi Maria; Heiser, Dietmar; Rieger, Ulrich Michael; Bauer, Thomas
2018-01-01
Background A persistent problem in autologous breast reconstruction in skin-sparing mastectomies is skin restoration after skin necrosis or secondary oncological resection. As a solution to facilitate reconstruction, skin banking of free-flap skin has been proposed in cases where the overlying skin envelope must be resected, as this technique spares the patient an additional donor site. Herein, we present the largest series to date in which this method was used. We investigated its safety and the possibility of skin banking for prolonged periods of time. Methods All skin-sparing mastectomies and immediate autologous breast reconstructions from December 2009 until June 2013 at our institution were analysed. Results We identified 31 patients who underwent 33 free flap reconstructions in which skin banking was performed. Our median skin banking period was 7 days, with a maximum duration of 171 days. In 22.5% of cases, the banked skin was used to reconstruct overlying skin defects, and in 9.6% of cases to reconstruct the nipple-areolar complex. Microbiological and histological investigations of the banked skin revealed neither clinical infections nor malignancies. Conclusions In situ skin banking, even for prolonged periods of time, is a safe and cost-effective method to ensure that skin defects due to necrosis or secondary oncological resection can be easily reconstructed. PMID:29506331
Cheng, Yuk Wah; Wilkinson, Jenny M
2015-08-01
This paper reports on an evaluation of the introduction of a blood bank automation system (Ortho AutoVue(®) Innova) in a hospital blood bank by considering the performance and workflow as compared with manual methods. The turnaround time was found to be 45% faster than the manual method. The concordance rate was found to be 100% for both ABO/Rh(D) typing and antibody screening in both of the systems and there was no significant difference in detection sensitivity for clinically significant antibodies. The Ortho AutoVue(®) Innova automated blood banking system streamlined the routine pre-transfusion testing in hospital blood bank with high throughput, equivalent sensitivity and reliability as compared with conventional manual method. Copyright © 2015 Elsevier Ltd. All rights reserved.
Argus: a 16-pixel millimeter-wave spectrometer for the Green Bank Telescope
NASA Astrophysics Data System (ADS)
Sieth, Matthew; Devaraj, Kiruthika; Voll, Patricia; Church, Sarah; Gawande, Rohit; Cleary, Kieran; Readhead, Anthony C. S.; Kangaslahti, Pekka; Samoska, Lorene; Gaier, Todd; Goldsmith, Paul F.; Harris, Andrew I.; Gundersen, Joshua O.; Frayer, David; White, Steve; Egan, Dennis; Reeves, Rodrigo
2014-07-01
We report on the development of Argus, a 16-pixel spectrometer, which will enable fast astronomical imaging over the 85-116 GHz band. Each pixel includes a compact heterodyne receiver module, which integrates two InP MMIC low-noise amplifiers, a coupled-line bandpass filter and a sub-harmonic Schottky diode mixer. The receiver signals are routed to and from the multi-chip MMIC modules with multilayer high-frequency printed circuit boards, which include LO splitters and IF amplifiers. Microstrip lines on flexible circuitry are used to transport signals between temperature stages. The spectrometer frontend is designed to be scalable, so that the array design can be reconfigured for future instruments with hundreds of pixels. Argus is scheduled to be commissioned at the Robert C. Byrd Green Bank Telescope in late 2014. Preliminary data for the first Argus pixels are presented.
Collier, James H; Lesk, Arthur M; Garcia de la Banda, Maria; Konagurthu, Arun S
2012-07-01
Searching for well-fitting 3D oligopeptide fragments within a large collection of protein structures is an important task central to many analyses involving protein structures. This article reports a new web server, Super, dedicated to the task of rapidly screening the protein data bank (PDB) to identify all fragments that superpose with a query under a prespecified threshold of root-mean-square deviation (RMSD). Super relies on efficiently computing a mathematical bound on the commonly used structural similarity measure, RMSD of superposition. This allows the server to filter out a large proportion of fragments that are unrelated to the query; >99% of the total number of fragments in some cases. For a typical query, Super scans the current PDB containing over 80,500 structures (with ∼40 million potential oligopeptide fragments to match) in under a minute. Super web server is freely accessible from: http://lcb.infotech.monash.edu.au/super.
Structural Isosteres of Phosphate Groups in the Protein Data Bank.
Zhang, Yuezhou; Borrel, Alexandre; Ghemtio, Leo; Regad, Leslie; Boije Af Gennäs, Gustav; Camproux, Anne-Claude; Yli-Kauhaluoma, Jari; Xhaard, Henri
2017-03-27
We developed a computational workflow to mine the Protein Data Bank for isosteric replacements that exist in different binding site environments but have not necessarily been identified and exploited in compound design. Taking phosphate groups as examples, the workflow was used to construct 157 data sets, each composed of a reference protein complexed with AMP, ADP, ATP, or pyrophosphate as well as other ligands. Phosphate binding sites appear to have a high hydration content and large size, resulting in U-shaped bioactive conformations recurrently found across unrelated protein families. A total of 16 413 replacements were extracted, filtered for a significant structural overlap on phosphate groups, and sorted according to their SMILES codes. In addition to the classical isosteres of phosphate, such as carboxylate, sulfone, or sulfonamide, unexpected replacements that do not conserve charge or polarity, such as aryl, aliphatic, or positively charged groups, were found.
Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter
2017-11-01
Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
24. Station Oil Tanks, view to the south. The four ...
24. Station Oil Tanks, view to the south. The four oil storage tanks located along the east wall (left side of photograph) are, from foreground to background: dirty transformer oil tank, clean transformer oil tank, dirty lubricating oil tank, and clean lubricating oil tank. An oil filter system is also visible in background along the far wall. - Washington Water Power Clark Fork River Noxon Rapids Hydroelectric Development, Powerhouse, South bank of Clark Fork River at Noxon Rapids, Noxon, Sanders County, MT
VizieR Online Data Catalog: Absolute polarimetry observations of 33 pulsars (Force+, 2015)
NASA Astrophysics Data System (ADS)
Force, M. M.; Demorest, P.; Rankin, J. M.
2017-11-01
The observations were carried out in the summer of 2011 using the 100-m Robert C. Byrd GBT and the Green Bank Ultimate Pulsar Processing Instrument (GUPPI) in coherent filterbank mode. Full-Stokes spectra were acquired in an 800 MHz bandwidth centred at 1500 MHz radio frequency; the ~1200-1300 MHz airport radar analogue filter was used, resulting in a ~700 MHz effective bandwidth. The filterbank frequency resolution was 1.5 MHz, or 512 channels across the full band. (2 data files).
Hierarchical image coding with diamond-shaped sub-bands
NASA Technical Reports Server (NTRS)
Li, Xiaohui; Wang, Jie; Bauer, Peter; Sauer, Ken
1992-01-01
We present a sub-band image coding/decoding system using a diamond-shaped pyramid frequency decomposition to more closely match visual sensitivities than conventional rectangular bands. Filter banks are composed of simple, low order IIR components. The coder is especially designed to function in a multiple resolution reconstruction setting, in situations such as variable capacity channels or receivers, where images must be reconstructed without the entire pyramid of sub-bands. We use a nonlinear interpolation technique for lost subbands to compensate for loss of aliasing cancellation.
Method of treating contaminated HEPA filter media in pulp process
Hu, Jian S.; Argyle, Mark D.; Demmer, Ricky L.; Mondok, Emilio P.
2003-07-29
A method for reducing contamination of HEPA filters with radioactive and/or hazardous materials is described. The method includes pre-processing of the filter for removing loose particles. Next, the filter medium is removed from the housing, and the housing is decontaminated. Finally, the filter medium is processed as pulp for removing contaminated particles by physical and/or chemical methods, including gravity, flotation, and dissolution of the particles. The decontaminated filter medium is then disposed of as non-RCRA waste; the particles are collected, stabilized, and disposed of according to well known methods of handling such materials; and the liquid medium in which the pulp was processed is recycled.
Developing Topic-Specific Search Filters for PubMed with Click-Through Data
Li, Jiao; Lu, Zhiyong
2013-01-01
Summary Objectives Search filters have been developed and demonstrated for better information access to the immense and ever-growing body of publications in the biomedical domain. However, to date the number of filters remains quite limited because the current filter development methods require significant human efforts in manual document review and filter term selection. In this regard, we aim to investigate automatic methods for generating search filters. Methods We present an automated method to develop topic-specific filters on the basis of users’ search logs in PubMed. Specifically, for a given topic, we first detect its relevant user queries and then include their corresponding clicked articles to serve as the topic-relevant document set accordingly. Next, we statistically identify informative terms that best represent the topic-relevant document set using a background set composed of topic irrelevant articles. Lastly, the selected representative terms are combined with Boolean operators and evaluated on benchmark datasets to derive the final filter with the best performance. Results We applied our method to develop filters for four clinical topics: nephrology, diabetes, pregnancy, and depression. For the nephrology filter, our method obtained performance comparable to the state of the art (sensitivity of 91.3%, specificity of 98.7%, precision of 94.6%, and accuracy of 97.2%). Similarly, high-performing results (over 90% in all measures) were obtained for the other three search filters. Conclusion Based on PubMed click-through data, we successfully developed a high-performance method for generating topic-specific search filters that is significantly more efficient than existing manual methods. All data sets (topic-relevant and irrelevant document sets) used in this study and a demonstration system are publicly available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/downloads/CQ_filter/ PMID:23666447
Cigarette characteristic and emission variations across high-, middle- and low-income countries.
O'Connor, R J; Wilkins, K J; Caruso, R V; Cummings, K M; Kozlowski, L T
2010-12-01
The public health burden of tobacco use is shifting to the developing world, and the tobacco industry may apply some of its successful marketing tactics, such as allaying health concerns with product modifications. This study used standard smoking machine tests to examine the extent to which the industry is introducing engineering features that reduce tar and nicotine to cigarettes sold in middle- and low-income countries. Multicountry observational study. Cigarettes from 10 different countries were purchased in 2005 and 2007 with low-, middle- and high-income countries identified using the World Bank's per capita gross national income metric. Physical measurements of each brand were tested, and tobacco moisture and weight, paper porosity, filter ventilation and pressure drop were analysed. Tar, nicotine and carbon monoxide emission levels were determined for each brand using International Organization for Standardization and Canadian Intensive methods. Statistical analyses were performed using Statistical Package for the Social Sciences. Among cigarette brands with filters, more brands were ventilated in high-income countries compared with middle- and low-income countries [χ²(4) = 25.92, P < 0.001]. Low-income brands differed from high- and middle-income brands in engineering features such as filter density, ventilation and paper porosity, while tobacco weight and density measures separated the middle- and high-income groups. Smoke emissions differed across income groups, but these differences were largely negated when one accounted for design features. This study showed that as a country's income level increases, cigarettes become more highly engineered and the emissions levels decrease. In order to reduce the burden of tobacco-related disease and further effective product regulation, health officials must understand cigarette design and function within and between countries. Copyright © 2010 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Rudner, Lawrence
This digest discusses the advantages and disadvantages of using item banks, and it provides useful information for those who are considering implementing an item banking project in their school districts. The primary advantage of item banking is in test development. Using an item response theory method, such as the Rasch model, items from multiple…
Lyu, Weiwei; Cheng, Xianghong
2017-11-28
Transfer alignment is always a key technology in a strapdown inertial navigation system (SINS) because of its rapidity and accuracy. In this paper, a transfer alignment model is established, which contains the SINS error model and the measurement model. The time delay in the process of transfer alignment is analyzed, and an H∞ filtering method with delay compensation is presented. Then the H∞ filtering theory and the robust mechanism of the H∞ filter are deduced and analyzed in detail. In order to improve the transfer alignment accuracy in SINS with time delay, an adaptive H∞ filtering method with delay compensation is proposed. Since the robustness factor plays an important role in the filtering process and affects the filtering accuracy, the adaptive H∞ filter with delay compensation adjusts the value of the robustness factor adaptively according to the dynamic external environment. The vehicle transfer alignment experiment indicates that by using the adaptive H∞ filtering method with delay compensation, the transfer alignment accuracy and the pure inertial navigation accuracy can be dramatically improved, which demonstrates the superiority of the proposed filtering method.
Microwave active filters based on coupled negative resistance method
NASA Astrophysics Data System (ADS)
Chang, Chi-Yang; Itoh, Tatsuo
1990-12-01
A novel coupled negative resistance method for building a microwave active bandpass filter is introduced. Based on this method, four microstrip line end-coupled filters were built. Two are fixed-frequency one-pole and two-pole filters, and two are tunable one-pole and two-pole filters. In order to broaden the bandwidth of the end-coupled filter, a modified end-coupled structure is proposed. Using the modified structure, an active filter with a bandwidth up to 7.5 percent was built. All of the filters show significant passband performance improvement. Specifically, the passband bandwidth was broadened by a factor of 5 to 20.
Testing the Stability of 2-D Recursive QP, NSHP and General Digital Filters of Second Order
NASA Astrophysics Data System (ADS)
Rathinam, Ananthanarayanan; Ramesh, Rengaswamy; Reddy, P. Subbarami; Ramaswami, Ramaswamy
Several methods for testing the stability of first-quadrant quarter-plane two-dimensional (2-D) recursive digital filters were suggested in the 1970s and 80s. Although Jury's row and column algorithms and the row and column concatenation stability tests have been considered highly efficient mapping methods, they still fall short of accuracy, since they need an infinite number of steps to decide the exact stability of the filters, and the computational time required is enormous. In this paper, we present a procedurally very simple algebraic method requiring only two steps when applied to the second-order 2-D quarter-plane filter. We extend the same method to second-order Non-Symmetric Half-Plane (NSHP) filters. Sufficient examples are given for both of these types of filters as well as some lower-order general recursive 2-D digital filters. We applied our method to barely stable or barely unstable filter examples available in the literature and obtained the same decisions, showing that our method is sufficiently accurate.
A hybrid filtering method based on a novel empirical mode decomposition for friction signals
NASA Astrophysics Data System (ADS)
Li, Chengwei; Zhan, Liwei
2015-12-01
During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). The selection of relevant modes is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, the EMD and its improved versions are used to filter the simulation and friction signals. The friction signal between an airplane tire and the runway is recorded during a simulated airplane touchdown and features spikes of various amplitudes and noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods.
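As a rough illustration of the mode-selection idea described above, the sketch below assumes the IMFs have already been produced by some EMD variant; it uses a simple mean-absolute-difference stand-in for the modified Hausdorff distance, and the largest-jump selection rule is a heuristic assumption rather than the authors' exact criterion.

```python
import numpy as np

def modified_hausdorff(a, b):
    # Simplified 1-D stand-in (assumption): mean absolute deviation between
    # two equal-length sequences.
    return np.mean(np.abs(a - b))

def hybrid_emd_filter(signal, imfs):
    """signal: 1-D noisy signal; imfs: array of shape (n_modes, len(signal))
    produced by any EMD variant (EMD, EEMD, CEEMD, CEEMDAN)."""
    signal = np.asarray(signal, dtype=float)
    imfs = np.asarray(imfs, dtype=float)
    nimfs = signal[None, :] - imfs                 # NIMF_k = x - IMF_k
    d = np.array([modified_hausdorff(nimfs[0], nimfs[k])
                  for k in range(len(imfs))])
    # Heuristic boundary (assumption): the largest jump in the distance curve
    # separates noise-dominated low-order modes from signal-dominated ones.
    k_star = int(np.argmax(np.diff(d))) + 1
    return imfs[k_star:].sum(axis=0)               # reconstruct from retained modes
```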
The Power Plant Operating Data Based on Real-time Digital Filtration Technology
NASA Astrophysics Data System (ADS)
Zhao, Ning; Chen, Ya-mi; Wang, Hui-jie
2018-03-01
Real-time monitoring of thermal power plant data is the basis for accurately analyzing thermal economy and reconstructing the operating state. Because noise interference is inevitable, the real-time monitoring data must be filtered to obtain accurate information on the operating data of the plant's units and equipment. A real-time filtering algorithm cannot use future data to correct the current data, so it faces many constraints compared with traditional filtering algorithms. The first-order lag filtering method and the weighted recursive average filtering method can both be used for real-time filtering. This paper analyzes the characteristics of the two filtering methods and their application to the real-time processing of simulation data and thermal power plant operating data. The analysis revealed that the weighted recursive average filtering method achieved very good results when applied to both the simulation data and the real-time plant data.
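Both filters named above are simple recursions; a minimal sketch of each is shown below, with the smoothing coefficient and weight vector chosen purely for illustration.

```python
import numpy as np

def first_order_lag(samples, alpha=0.2):
    """First-order lag (exponential) filter: y[n] = alpha*x[n] + (1-alpha)*y[n-1].
    alpha is an illustrative smoothing coefficient."""
    x = np.asarray(samples, dtype=float)
    y = np.empty_like(x)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = alpha * x[n] + (1.0 - alpha) * y[n - 1]
    return y

def weighted_recursive_average(samples, weights=(1.0, 2.0, 3.0, 4.0)):
    """Weighted moving average over the last len(weights) samples, with larger
    weights on more recent readings (the weight vector is an assumption)."""
    x = np.asarray(samples, dtype=float)
    w = np.asarray(weights, dtype=float)
    out = np.empty_like(x)
    for n in range(len(x)):
        window = x[max(0, n - len(w) + 1): n + 1]
        out[n] = np.average(window, weights=w[-len(window):])
    return out
```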
Synthesis of Nanosilver Particles in the Texture of Bank Notes to Produce Antibacterial Effect
NASA Astrophysics Data System (ADS)
Lari, Mohammad Hossein Asadi; Esmaili, Vahid; Naghavi, Seyed Mohammad Ebrahim; Kimiaghalam, Amir Hossein; Sharifaskari, Emadaldin
Silver particles show antibacterial and antiseptic properties at the nanoscale. Such properties result from an alteration in the binding capacity of silver atoms in particles smaller than 6.5 nm, which enables them to kill harmful organisms. Silver nanoparticles are now the most broadly used agents in the area of nanotechnology after carbon nanotubes. Given that currency bills are one of the major sources of bacterial dissemination and their contamination has recently been nominated as a critical factor in gastrointestinal infections and possibly colon cancers, here we propose a new method for producing antibacterial bank notes by using silver nanoparticles. Older bank notes are sprayed with acetone to clean the surface. The bank note is put into a petri dish containing a solution of silver nitrate and ammonia so that it is impregnated. The bank notes are then reduced with formaldehyde gas, which penetrates the texture and produces silver nanoparticles in the cellulose matrix. The side products of the reactions are quickly dried off and the procedure ends with the drying of the bank note. Transmission electron microscope (TEM) images confirmed the nanoscale size range of the formed particles, while spectroscopy methods, such as XRD, provided proof of the metallic nature of the particles. Bacterial challenge tests then showed that no colonies of the three tested bacteria (Escherichia coli, Staphylococcus aureus and Pseudomonas aeruginosa) survived on the sample after a 72 h incubation period. This study has provided a method for synthesizing silver NPs directly into the texture of fabrics and textiles (like that of bank notes), which can result in lower production costs, making the use of silver NPs economically beneficial. The method works specifically on the fabric of bank notes, suggesting a way to tackle the transmission of bacteria through bank notes. Moreover, this study is a testament to the strong antibacterial nature of even low concentrations of silver NPs.
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2016-01-01
This paper describes an algorithm for atmospheric state estimation based on a coupling between inertial navigation and flush air data-sensing pressure measurements. The navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to estimate the atmosphere using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-lookup form, along with simplified models propagated along the trajectory within the algorithm to aid the solution. Thus, the method is a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content. The algorithm is applied to the design of the pressure measurement system for the Mars 2020 mission. A linear covariance analysis is performed to assess estimator performance. The results indicate that the new estimator produces more precise estimates of atmospheric states than existing algorithms.
NASA Astrophysics Data System (ADS)
Urriza, Isidro; Barragan, Luis A.; Artigas, Jose I.; Garcia, Jose I.; Navarro, Denis
1997-11-01
Image compression plays an important role in the archiving and transmission of medical images. Discrete cosine transform (DCT)-based compression methods are not suitable for medical images because of block-like image artifacts that could mask or be mistaken for pathology. Wavelet transforms (WTs) are used to overcome this problem. When implementing WTs in hardware, finite precision arithmetic introduces quantization errors. However, lossless compression is usually required in the medical image field. Thus, the hardware designer must look for the optimum register length that, while ensuring the lossless accuracy criteria, will also lead to a high-speed implementation with small chip area. In addition, wavelet choice is a critical issue that affects image quality as well as system design. We analyze the filters best suited to image compression that appear in the literature. For them, we obtain the maximum quantization errors produced in the calculation of the WT components. Thus, we deduce the minimum word length required for the reconstructed image to be numerically identical to the original image. The theoretical results are compared with experimental results obtained from algorithm simulations on random test images. These results enable us to compare the hardware implementation cost of the different filter banks. Moreover, to reduce the word length, we have analyzed the case of increasing the integer part of the numbers while keeping the word length constant as the scale increases.
The development rainfall forecasting using kalman filter
NASA Astrophysics Data System (ADS)
Zulfi, Mohammad; Hasan, Moh.; Dwidja Purnomo, Kosala
2018-04-01
Rainfall forecasting is very useful for agricultural planning. Rainfall information helps in making decisions about planting certain commodities. In this study, rainfall is forecast using the ARIMA and Kalman filter methods. The Kalman filter method expresses a time series model in linear state-space form to determine future forecasts, using a recursive solution to minimize error. The rainfall data in this research were clustered by K-means clustering, and the Kalman filter method was then implemented for modelling and forecasting rainfall in each cluster. We used ARIMA(p,d,q) to construct a state space for the Kalman filter model, so there are four groups of data and one model for each group. In conclusion, the Kalman filter method is better than the ARIMA model for rainfall forecasting in each group, as its error is smaller than that of the ARIMA model.
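The abstract only outlines the ARIMA-based state space, so the sketch below shows the Kalman recursion for the simplest possible (local-level, random-walk) model; the process and measurement variances q and r are illustrative assumptions, and a full ARIMA(p,d,q) state space would replace the scalar state with a vector and a companion transition matrix.

```python
import numpy as np

def kalman_filter_local_level(y, q=1.0, r=10.0):
    """Minimal scalar Kalman filter for a local-level (random-walk) state-space
    model applied to a rainfall series y; q and r are assumed variances."""
    y = np.asarray(y, dtype=float)
    x_hat = np.zeros(len(y))    # filtered rainfall level
    x, p = y[0], 1.0            # initial state estimate and its variance
    for t in range(len(y)):
        p_pred = p + q                      # predict (random-walk transition)
        k = p_pred / (p_pred + r)           # Kalman gain
        x = x + k * (y[t] - x)              # update with observation y[t]
        p = (1.0 - k) * p_pred
        x_hat[t] = x
    return x_hat
```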
Increasing donations to supermarket food-bank bins using proximal prompts.
Farrimond, Samantha J; Leland, Louis S
2006-01-01
There has been little research into interventions to increase participation in donating items to food-bank bins. In New Zealand, there has been an increased demand from food banks (Stewart, 2002). This study demonstrated that point-of-sale prompts can be an effective method of increasing donations to a supermarket food-bank bin.
Developing topic-specific search filters for PubMed with click-through data.
Li, J; Lu, Z
2013-01-01
Search filters have been developed and demonstrated for better information access to the immense and ever-growing body of publications in the biomedical domain. However, to date the number of filters remains quite limited because the current filter development methods require significant human efforts in manual document review and filter term selection. In this regard, we aim to investigate automatic methods for generating search filters. We present an automated method to develop topic-specific filters on the basis of users' search logs in PubMed. Specifically, for a given topic, we first detect its relevant user queries and then include their corresponding clicked articles to serve as the topic-relevant document set accordingly. Next, we statistically identify informative terms that best represent the topic-relevant document set using a background set composed of topic irrelevant articles. Lastly, the selected representative terms are combined with Boolean operators and evaluated on benchmark datasets to derive the final filter with the best performance. We applied our method to develop filters for four clinical topics: nephrology, diabetes, pregnancy, and depression. For the nephrology filter, our method obtained performance comparable to the state of the art (sensitivity of 91.3%, specificity of 98.7%, precision of 94.6%, and accuracy of 97.2%). Similarly, high-performing results (over 90% in all measures) were obtained for the other three search filters. Based on PubMed click-through data, we successfully developed a high-performance method for generating topic-specific search filters that is significantly more efficient than existing manual methods. All data sets (topic-relevant and irrelevant document sets) used in this study and a demonstration system are publicly available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/downloads/CQ_filter/
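The abstract does not specify the statistic used to pick informative terms, so the sketch below uses a smoothed log-odds score of term occurrence in the relevant versus background document sets and assembles the top terms into a PubMed-style Boolean query; both the scoring choice and the field tag are illustrative assumptions, not the authors' exact procedure.

```python
import math
from collections import Counter

def informative_terms(relevant_docs, background_docs, top_k=20):
    """relevant_docs / background_docs: lists of document strings.
    Returns a Boolean filter string built from the highest-scoring terms."""
    rel = Counter(t for doc in relevant_docs for t in set(doc.lower().split()))
    bg = Counter(t for doc in background_docs for t in set(doc.lower().split()))
    n_rel, n_bg = len(relevant_docs), len(background_docs)
    # Smoothed log-odds of a term appearing in a relevant vs. background document.
    scores = {
        term: math.log(((rel[term] + 0.5) / (n_rel + 1.0)) /
                       ((bg.get(term, 0) + 0.5) / (n_bg + 1.0)))
        for term in rel
    }
    top = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return " OR ".join(f'"{t}"[Title/Abstract]' for t in top)
```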
40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.
Code of Federal Regulations, 2011 CFR
2011-07-01
... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...
40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.
Code of Federal Regulations, 2010 CFR
2010-07-01
... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...
40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.
Code of Federal Regulations, 2013 CFR
2013-07-01
... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...
40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.
Code of Federal Regulations, 2014 CFR
2014-07-01
... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...
40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.
Code of Federal Regulations, 2012 CFR
2012-07-01
... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...
12 CFR 40.7 - Form of opt out notice to consumers; opt out methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 1 2012-01-01 2012-01-01 false Form of opt out notice to consumers; opt out... PRIVACY OF CONSUMER FINANCIAL INFORMATION Privacy and Opt Out Notices § 40.7 Form of opt out notice to consumers; opt out methods. (a) (1) Form of opt out notice. If a bank is required to provide an opt out...
12 CFR 40.7 - Form of opt out notice to consumers; opt out methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 1 2014-01-01 2014-01-01 false Form of opt out notice to consumers; opt out... PRIVACY OF CONSUMER FINANCIAL INFORMATION Privacy and Opt Out Notices § 40.7 Form of opt out notice to consumers; opt out methods. (a) (1) Form of opt out notice. If a bank is required to provide an opt out...
Attacking of SmartCard-Based Banking Applications with JavaScript-Based Rootkits
NASA Astrophysics Data System (ADS)
Bußmeyer, Daniel; Gröbert, Felix; Schwenk, Jörg; Wegener, Christoph
Due to recent attacks on online banking systems and the consequent soaring losses through fraud, different methods have been developed to ensure a secure connection between a bank and its customers. One method is the inclusion of smart card readers into these schemes, which brings both benefits, e.g., convenience and cost, and risks, especially on the security side.
12 CFR 327.9 - Assessment pricing methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 5 2012-01-01 2012-01-01 false Assessment pricing methods. 327.9 Section 327.9... ASSESSMENTS In General § 327.9 Assessment pricing methods. (a) Small institutions—(1) Risk Categories. Each... unless effective corrective action is taken. (4) Financial ratios method. A small insured depository...
Epibenthic communities of sedimentary habitats in a NE Atlantic deep seamount (Galicia Bank)
NASA Astrophysics Data System (ADS)
Serrano, A.; Cartes, J. E.; Papiol, V.; Punzón, A.; García-Alegre, A.; Arronte, J. C.; Ríos, P.; Lourido, A.; Frutos, I.; Blanco, M.
2017-12-01
Galicia Bank is a deep seamount included as a Site of Community Importance (SCI) in the Spanish Natura 2000 Network proposal. In the present study, the epibenthic assemblages of its sedimentary habitats are described, together with the main environmental factor explaining the distribution of species and communities. Five epibenthic assemblages have been identified. Depth was the main factor explaining assemblage distribution, and the role of sediment type, water masses, and coral framework presence is also discussed. Three assemblages are located on the summit: the shallowest one (730-770 m), at the boundary between the Eastern North Atlantic Central Water (ENACW) and Mediterranean Overflow Water (MOW) water masses, is typified by ophiuroids and characterized by medium sands. The second assemblage (770-800 m), typified by the bivalve Limopsis minuta and the solitary coral Flabellum chunii, corresponds to medium sands and the MOW core; the third is typified by the presence of cold-water coral communities dominated by Lophelia pertusa and Madrepora oculata, also under MOW influence. At the border of the summit, on the bank break, an assemblage located in the range 1000-1200 m is dominated by the urchin Cidaris cidaris and the sponge Thenea muricata. On the flat flanks around the bank, the deepest assemblage (1400-1800 m) is dominated by the holothurian Benthogone rosea, in a depth range dominated by the Labrador Sea Water (LSW) and in fine sands with the highest content of organic matter. Most species appeared in a depth range smaller than 25% of the total depth range sampled and in < 10% of samples. Differential preference of species is evident in the different trophic guilds, with a higher dominance of filter-feeders on the summit and of deposit-feeders in the deepest assemblage, and has clear links with nutrient dynamics in the bank.
Filter and method of fabricating
Janney, Mark A.
2006-02-14
A method of making a filter includes the steps of: providing a substrate having a porous surface; applying to the porous surface a coating of dry powder comprising particles to form a filter preform; and heating the filter preform to bind the substrate and the particles together to form a filter.
Sabharwal, Samir; Fox, Adam D; Vives, Michael J
2018-05-07
Objective To determine the prevalence and variation of inferior vena cava filter (IVCF) use in the spine trauma population and evaluate patient and facility level factors associated with their use. Study Design Retrospective cohort. Participants/Outcome Measures Patients with spinal injuries were identified by ICD-9 codes from the National Trauma Data Bank (NTDB), the best validated national trauma database. Patients whose spine injuries were operatively treated and those who received IVCF were identified from procedure description fields. Additional information compiled included patient demographics, injury severity score (ISS), time until surgery, concomitant fractures, and facility level information. Multivariate logistic regression analyses were conducted to examine the relationship of associated factors for IVCF use. Results Of the 120,920 patients identified with spinal injuries, 2.4% received prophylactic IVCF. Of the 13,273 patients with operatively treated spinal injuries, 8.2% received prophylactic IVCF. Of the 7,770 patients with spinal cord injury (SCI), 10.8% received prophylactic IVCF. The interquartile ranges of placement rates among centers demonstrated greater than 10-fold variation. Based on multivariate logistic regression, an ISS score >12 demonstrated the strongest association with prophylactic IVCF (adjusted OR = 4.908). Concomitant pelvic and lower extremity fractures (adj OR 2.573 and 2.522) were also associated with their use. Conclusions Currently, the only data regarding IVCF use in the spine trauma population come from surveys. The present study provides the most detailed and objective information regarding their use in this setting. Even in the operatively treated and SCI subgroups, prophylactic filters were used in only a small percentage of cases, but placement rates varied widely among centers. More severely injured patients (ISS >12) had the highest odds of receiving prophylactic IVCF. Further study is needed to clarify their role in this vulnerable population.
Investigation on filter method for smoothing spiral phase plate
NASA Astrophysics Data System (ADS)
Zhang, Yuanhang; Wen, Shenglin; Luo, Zijian; Tang, Caixue; Yan, Hao; Yang, Chunlin; Liu, Mincai; Zhang, Qinghua; Wang, Jian
2018-03-01
Spiral phase plates (SPPs) for generating vortex hollow beams have high efficiency in various applications. However, it is difficult to obtain an ideal spiral phase plate because of its continuously varying helical phase and discontinuous phase step. This paper describes the demonstration of a continuous spiral phase plate using filter methods. The numerical simulations indicate that different filter methods, including spatial-domain and frequency-domain filters, have distinct impacts on the surface topography of the SPP and on the optical vortex characteristics. The experimental results reveal that the spatial Gaussian filter method for smoothing the SPP is suitable for the Computer Controlled Optical Surfacing (CCOS) technique and yields good optical properties.
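A minimal sketch of the spatial Gaussian filtering step that the paper found best suited to CCOS fabrication: build an ideal helical phase ramp and smooth it with a spatial Gaussian filter. Grid size, topological charge and filter width below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_spiral_phase_plate(n=512, charge=1, sigma=3.0):
    """Generate an ideal spiral (helical) phase map with the given topological
    charge and smooth it with a spatial Gaussian filter (sigma in pixels)."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    phase = np.mod(charge * np.arctan2(y, x), 2 * np.pi)   # 0..2*pi helical ramp
    return gaussian_filter(phase, sigma=sigma)              # smoothed SPP surface map
```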
Lyu, Weiwei
2017-01-01
Transfer alignment is always a key technology in a strapdown inertial navigation system (SINS) because of its rapidity and accuracy. In this paper, a transfer alignment model is established, which contains the SINS error model and the measurement model. The time delay in the process of transfer alignment is analyzed, and an H∞ filtering method with delay compensation is presented. Then the H∞ filtering theory and the robust mechanism of the H∞ filter are deduced and analyzed in detail. In order to improve the transfer alignment accuracy in SINS with time delay, an adaptive H∞ filtering method with delay compensation is proposed. Since the robustness factor plays an important role in the filtering process and affects the filtering accuracy, the adaptive H∞ filter with delay compensation adjusts the value of the robustness factor adaptively according to the dynamic external environment. The vehicle transfer alignment experiment indicates that by using the adaptive H∞ filtering method with delay compensation, the transfer alignment accuracy and the pure inertial navigation accuracy can be dramatically improved, which demonstrates the superiority of the proposed filtering method. PMID:29182592
An Improved Filtering Method for Quantum Color Image in Frequency Domain
NASA Astrophysics Data System (ADS)
Li, Panchi; Xiao, Hong
2018-01-01
In this paper we investigate the use of the quantum Fourier transform (QFT) in the field of image processing. We consider QFT-based color image filtering operations and their applications in image smoothing, sharpening, and selective filtering using quantum frequency domain filters. The underlying principle used for constructing the proposed quantum filters is to use the principle of the quantum Oracle to implement the filter function. Compared with the existing methods, our method is not only suitable for color images, but can also flexibly design notch filters. We provide the quantum circuit that implements the filtering task and present the results of several simulation experiments on color images. The major advantages of quantum frequency filtering lie in the exploitation of the efficient implementation of the quantum Fourier transform.
Distortion analysis of subband adaptive filtering methods for FMRI active noise control systems.
Milani, Ali A; Panahi, Issa M; Briggs, Richard
2007-01-01
Delayless subband filtering structure, as a high performance frequency domain filtering technique, is used for canceling broadband fMRI noise (8 kHz bandwidth). In this method, adaptive filtering is done in subbands and the coefficients of the main canceling filter are computed by stacking the subband weights together. There are two types of stacking methods called FFT and FFT-2. In this paper, we analyze the distortion introduced by these two stacking methods. The effect of the stacking distortion on the performance of different adaptive filters in FXLMS algorithm with non-minimum phase secondary path is explored. The investigation is done for different adaptive algorithms (nLMS, APA and RLS), different weight stacking methods, and different number of subbands.
HAZARDOUS SUBSTANCES DATA BANK (HSDB)
Hazardous Substances Data Bank (HSDB) is a factual, non-bibliographic data bank focusing upon the toxicology of potentially hazardous chemicals. It is enhanced with data from such related areas as emergency handling procedures, environmental fate, human exposure, detection method...
Increasing Donations to Supermarket Food-Bank Bins Using Proximal Prompts
ERIC Educational Resources Information Center
Farrimond, Samantha J.; Leland, Louis S., Jr.
2006-01-01
There has been little research into interventions to increase participation in donating items to food-bank bins. In New Zealand, there has been an increased demand from food banks (Stewart, 2002). This study demonstrated that point-of-sale prompts can be an effective method of increasing donations to a supermarket food-bank bin. (Contains 1…
Measurement and Internalization of Systemic Risk in a Global Banking Network
NASA Astrophysics Data System (ADS)
Feng, Xiaobing; Hu, Haibo
2013-12-01
The negative externalities from an individual bank failure to the whole system can be huge. One of the key purposes of bank regulation is to internalize the social costs of potential bank failures via capital charges. This study proposes a method to evaluate and allocate the systemic risk to different countries/regions using a Susceptible-Infected-Removable (SIR) type of epidemic spreading model and the Shapley value (SV) in game theory. The paper also explores features of a constructed bank network using real globe-wide banking data.
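A toy version of the epidemic part of this approach, under the assumption of a discrete-time SIR process on a random interbank graph; repeating the simulation over many seed banks (or coalitions of seeds) would supply the marginal contributions a Shapley-value allocation needs. The graph and the beta and gamma parameters are illustrative only.

```python
import random
import networkx as nx

def sir_bank_contagion(G, seed_bank, beta=0.3, gamma=0.4, steps=50, rng=None):
    """Discrete-time SIR contagion on an interbank network: an 'infected'
    (failing) bank transmits distress to each susceptible neighbour with
    probability beta per step and is 'removed' (resolved) with probability gamma."""
    rng = rng or random.Random(0)
    state = {n: "S" for n in G}
    state[seed_bank] = "I"
    for _ in range(steps):
        updates = {}
        for n, s in state.items():
            if s != "I":
                continue
            for nb in G.neighbors(n):
                if state[nb] == "S" and rng.random() < beta:
                    updates[nb] = "I"
            if rng.random() < gamma:
                updates[n] = "R"
        state.update(updates)
    return sum(s != "S" for s in state.values())   # number of banks affected

G = nx.erdos_renyi_graph(50, 0.1, seed=1)          # assumed random interbank network
print(sir_bank_contagion(G, seed_bank=0))
```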
Face recognition algorithm based on Gabor wavelet and locality preserving projections
NASA Astrophysics Data System (ADS)
Liu, Xiaojie; Shen, Lin; Fan, Honghui
2017-07-01
In order to reduce the effects of illumination changes and differences in personal features on the face recognition rate, this paper presents a new face recognition algorithm based on Gabor wavelets and Locality Preserving Projections (LPP). The problem of the high dimensionality of Gabor filter banks is solved effectively, and the shortcoming of LPP under illumination changes is overcome. First, global image features are obtained, exploiting the good spatial locality and orientation selectivity of Gabor wavelet filters. Then the dimensionality is reduced using LPP, which preserves the local information of the image well. The experimental results show that this algorithm can effectively extract the features relating to facial expressions, attitude and other information. It also effectively reduces the influence of illumination changes and differences in personal features, improving the face recognition rate to 99.2%.
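A hedged sketch of the Gabor feature-extraction stage: a small bank of OpenCV Gabor kernels is applied to a grayscale face image and simple response statistics are collected. The kernel parameters and the choice of mean/variance features are assumptions; the resulting high-dimensional vector is what LPP would then reduce.

```python
import cv2
import numpy as np

def gabor_features(gray, ksize=31, sigmas=(4.0,),
                   thetas=np.arange(0, np.pi, np.pi / 8),
                   lambd=10.0, gamma=0.5):
    """Extract Gabor filter-bank responses (mean and variance per filter)
    from a single-channel face image; parameters are illustrative."""
    feats = []
    img = gray.astype(np.float32)
    for sigma in sigmas:
        for theta in thetas:
            kern = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                      lambd, gamma, psi=0, ktype=cv2.CV_32F)
            resp = cv2.filter2D(img, cv2.CV_32F, kern)
            feats.extend([resp.mean(), resp.var()])
    return np.array(feats)   # high-dimensional vector to be reduced by LPP
```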
Complex noise suppression using a sparse representation and 3D filtering of images
NASA Astrophysics Data System (ADS)
Kravchenko, V. F.; Ponomaryov, V. I.; Pustovoit, V. I.; Palacios-Enriquez, A.
2017-08-01
A novel method for the filtering of images corrupted by complex noise composed of randomly distributed impulses and additive Gaussian noise has been substantiated for the first time. The method consists of three main stages: the detection and filtering of pixels corrupted by impulsive noise, subsequent image processing to suppress the additive noise based on 3D filtering and a sparse representation of signals in a wavelet basis, and a concluding image processing procedure to clean the final image of the errors that emerged at the previous stages. A physical interpretation of the filtering method under complex noise conditions is given. A filtering block diagram has been developed in accordance with the novel approach. Simulations of the novel image filtering method have shown the advantage of the proposed filtering scheme in terms of generally recognized criteria, such as the structural similarity index measure and the peak signal-to-noise ratio, and when visually comparing the filtered images.
Filter replacement lifetime prediction
Hamann, Hendrik F.; Klein, Levente I.; Manzer, Dennis G.; Marianno, Fernando J.
2017-10-25
Methods and systems for predicting a filter lifetime include building a filter effectiveness history based on contaminant sensor information associated with a filter; determining a rate of filter consumption with a processor based on the filter effectiveness history; and determining a remaining filter lifetime based on the determined rate of filter consumption. Methods and systems for increasing filter economy include measuring contaminants in an internal and an external environment; determining a cost of a corrosion rate increase if unfiltered external air intake is increased for cooling; determining a cost of increased air pressure to filter external air; and if the cost of filtering external air exceeds the cost of the corrosion rate increase, increasing an intake of unfiltered external air.
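The patent text above describes the lifetime estimate only at a high level; the sketch below assumes a linear consumption rate fitted to the effectiveness history and an arbitrary 20% end-of-life threshold, so it illustrates the idea rather than the patented algorithm.

```python
import numpy as np

def remaining_filter_lifetime(timestamps_h, effectiveness, eol_threshold=0.2):
    """Estimate remaining filter lifetime (in hours) from an effectiveness
    history built from contaminant-sensor readings.  Linearity and the
    end-of-life threshold are illustrative assumptions."""
    t = np.asarray(timestamps_h, dtype=float)
    e = np.asarray(effectiveness, dtype=float)      # e.g. 1.0 = new, 0.0 = spent
    rate = np.polyfit(t, e, 1)[0]                   # effectiveness lost per hour
    if rate >= 0:
        return np.inf                               # no measurable consumption yet
    return (e[-1] - eol_threshold) / -rate          # hours until the threshold
```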
Lessons learned in preparing method 29 filters for compliance testing audits.
Martz, R F; McCartney, J E; Bursey, J T; Riley, C E
2000-01-01
Companies conducting compliance testing are required to analyze audit samples at the time they collect and analyze the stack samples if audit samples are available. Eastern Research Group (ERG) provides technical support to the EPA's Emission Measurements Center's Stationary Source Audit Program (SSAP) for developing, preparing, and distributing performance evaluation samples and audit materials. These audit samples are requested via the regulatory Agency and include spiked audit materials for EPA Method 29-Metals Emissions from Stationary Sources, as well as other methods. To provide appropriate audit materials to federal, state, tribal, and local governments, as well as agencies performing environmental activities and conducting emission compliance tests, ERG has recently performed testing of blank filter materials and preparation of spiked filters for EPA Method 29. For sampling stationary sources using an EPA Method 29 sampling train, the use of filters without organic binders containing less than 1.3 microg/in.2 of each of the metals to be measured is required. Risk Assessment testing imposes even stricter requirements for clean filter background levels. Three vendor sources of quartz fiber filters were evaluated for background contamination to ensure that audit samples would be prepared using filters with the lowest metal background levels. A procedure was developed to test new filters, and a cleaning procedure was evaluated to see if a greater level of cleanliness could be achieved using an acid rinse with new filters. Background levels for filters supplied by different vendors and within lots of filters from the same vendor showed a wide variation, confirmed through contact with several analytical laboratories that frequently perform EPA Method 29 analyses. It has been necessary to repeat more than one compliance test because of suspect metals background contamination levels. An acid cleaning step produced improvement in contamination level, but the difference was not significant for most of the Method 29 target metals. As a result of our studies, we conclude: Filters for Method 29 testing should be purchased in lots as large as possible. Testing firms should pre-screen new boxes and/or new lots of filters used for Method 29 testing. Random analysis of three filters (top, middle, bottom of the box) from a new box of vendor filters before allowing them to be used in field tests is a prudent approach. A box of filters from a given vendor should be screened, and filters from this screened box should be used both for testing and as field blanks in each test scenario to provide the level of quality assurance required for stationary source testing.
Hepa filter dissolution process
Brewer, Ken N.; Murphy, James A.
1994-01-01
A process for the dissolution of spent high efficiency particulate air (HEPA) filters, in which the complexed filter solution is then combined with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternative to a prior method of acid leaching the spent filters, which is an inefficient way of treating spent HEPA filters for disposal.
A target detection multi-layer matched filter for color and hyperspectral cameras
NASA Astrophysics Data System (ADS)
Miyanishi, Tomoya; Preece, Bradley L.; Reynolds, Joseph P.
2018-05-01
In this article, a method for applying matched filters to a 3-dimentional hyperspectral data cube is discussed. In many applications, color visible cameras or hyperspectral cameras are used for target detection where the color or spectral optical properties of the imaged materials are partially known in advance. Therefore, the use of matched filtering with spectral data along with shape data is an effective method for detecting certain targets. Since many methods for 2D image filtering have been researched, we propose a multi-layer filter where ordinary spatially matched filters are used before the spectral filters. We discuss a way to layer the spectral filters for a 3D hyperspectral data cube, accompanied by a detectability metric for calculating the SNR of the filter. This method is appropriate for visible color cameras and hyperspectral cameras. We also demonstrate an analysis using the Night Vision Integrated Performance Model (NV-IPM) and a Monte Carlo simulation in order to confirm the effectiveness of the filtering in providing a higher output SNR and a lower false alarm rate.
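For reference, a plain single-layer spectral matched filter (not the paper's multi-layer spatial-plus-spectral scheme) can be written in a few lines; the diagonal covariance regularization is an assumption.

```python
import numpy as np

def spectral_matched_filter(cube, target_spectrum):
    """Classical spectral matched filter on a hyperspectral cube of shape
    (rows, cols, bands); returns a per-pixel detection statistic map."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b).astype(float)
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(b)   # regularized covariance
    s = np.asarray(target_spectrum, dtype=float) - mu
    cis = np.linalg.solve(cov, s)                            # Sigma^{-1} s
    w_vec = cis / np.sqrt(s @ cis)                           # normalized filter weights
    scores = (pixels - mu) @ w_vec
    return scores.reshape(h, w)
```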
Research on registration algorithm for check seal verification
NASA Astrophysics Data System (ADS)
Wang, Shuang; Liu, Tiegen
2008-03-01
Nowadays seals play an important role in China. With the development of the social economy, the traditional method of manual check seal identification can no longer meet the needs of banking transactions. This paper focuses on pre-processing and registration algorithms for check seal verification, using the theory of image processing and pattern recognition. First of all, the complex characteristics of check seals are analyzed. To eliminate differences in producing conditions and the disturbance caused by the background and writing in the check image, several methods are used in the pre-processing stage of check seal verification, such as color-component transformation, a linear transform to a gray-scale image, median filtering, Otsu thresholding, morphological closing, and a labeling algorithm from mathematical morphology. After these steps, a clean binary seal image is obtained. On the basis of traditional registration algorithms, a double-level registration method comprising rough and precise registration is proposed. The deflection angle found by the precise registration method is accurate to 0.1°. This paper also introduces the concepts of difference inside and difference outside and uses the percentages of difference inside and difference outside to judge whether a seal is real or fake. The experimental results on a large set of check seals are satisfactory, showing that the methods and algorithms presented are robust to noisy sealing conditions and have satisfactory tolerance of within-class differences.
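A rough OpenCV reimplementation of the pre-processing chain listed above (red-component emphasis, median filtering, Otsu thresholding and morphological closing); the kernel sizes and the red-emphasis step are assumptions, not the authors' exact settings.

```python
import cv2

def preprocess_seal(bgr_image):
    """Produce a binary seal mask from a color check image."""
    # Seals are typically red, so emphasize the red component (assumption).
    b, g, r = cv2.split(bgr_image)
    emphasized = cv2.subtract(r, cv2.addWeighted(b, 0.5, g, 0.5, 0))
    denoised = cv2.medianBlur(emphasized, 5)                 # median filtering
    _, binary = cv2.threshold(denoised, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # morphological closing
```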
Method of and apparatus for testing the integrity of filters
Herman, R.L.
1985-05-07
A method of and apparatus are disclosed for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage. 5 figs.
Method of and apparatus for testing the integrity of filters
Herman, Raymond L [Richland, WA
1985-01-01
A method of and apparatus for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage.
Methods of and apparatus for testing the integrity of filters
Herman, R.L.
1984-01-01
A method of and apparatus for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage.
Electronic filters, signal conversion apparatus, hearing aids and methods
NASA Technical Reports Server (NTRS)
Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)
1994-01-01
An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.
NASA Astrophysics Data System (ADS)
Floberg, J. M.; Holden, J. E.
2013-02-01
We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications.
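The STEM idea (Gaussian smoothing followed by EM deconvolution) can be sketched with a Richardson-Lucy-style update in which the same Gaussian serves as the blurring kernel; the sigmas, iteration count and 4-D (t, z, y, x) ordering below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stem_like_filter(dynamic_pet, sigma=(1.0, 2.0, 2.0, 2.0), n_iter=10):
    """4-D Gaussian smoothing followed by EM (Richardson-Lucy-style)
    deconvolution using the same Gaussian as the blurring kernel."""
    blurred = gaussian_filter(dynamic_pet, sigma=sigma)
    estimate = np.clip(blurred, 1e-8, None)
    for _ in range(n_iter):
        reblurred = gaussian_filter(estimate, sigma=sigma)
        ratio = blurred / np.clip(reblurred, 1e-8, None)
        # A symmetric Gaussian PSF is its own adjoint, so blurring the ratio
        # implements the transpose step of the EM update.
        estimate *= gaussian_filter(ratio, sigma=sigma)
    return estimate
```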
A generalized adaptive mathematical morphological filter for LIDAR data
NASA Astrophysics Data System (ADS)
Cui, Zheng
Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous LIDAR point dataset, using a filter, because the DTM is created by interpolating ground points. As one of the widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying the LIDAR data at the point level, a linear computational complexity, and preserving the geometric shapes of terrain features. The filter works well in an urban setting with a gentle slope and a mixture of vegetation and buildings. However, the PM filter often removes ground measurements incorrectly at topographic highs, along with large non-ground objects, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes of topographic slopes and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points removed incorrectly by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average, compared with the method used by the popular commercial software TerraScan. The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to effectively and efficiently identify ground measurements for complex terrains in a large LIDAR data set. The GAPM filter is highly automatic and requires little human input. Therefore, it can significantly reduce the effort of manually processing voluminous LIDAR measurements.
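For orientation, a simplified progressive morphological filter on a gridded surface looks like the sketch below; the window sequence and thresholds are assumptions, and the GAPM additions (cluster analysis, adaptive slope thresholds) are not reproduced here.

```python
import numpy as np
from scipy.ndimage import grey_opening

def progressive_morphological_filter(dsm, cell=1.0, windows=(3, 5, 9, 17),
                                     slope=0.3, dh0=0.2, dh_max=3.0):
    """Simplified PM ground filter on a gridded surface (DSM): open with
    growing windows and flag cells whose height drops by more than a
    slope-dependent threshold as non-ground."""
    surface = dsm.astype(float).copy()
    ground = np.ones(dsm.shape, dtype=bool)
    prev_w = 1
    for w in windows:
        opened = grey_opening(surface, size=(w, w))
        dh = min(dh0 + slope * (w - prev_w) * cell, dh_max)   # elevation threshold
        ground &= (surface - opened) <= dh
        surface, prev_w = opened, w
    return ground   # boolean mask of likely ground cells
```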
Holmes, Eleanor; Black, Jennifer L; Heckelman, Amber; Lear, Scott A; Seto, Darlene; Fowokan, Adeleke; Wittman, Hannah
2018-03-01
North American food bank use has risen dramatically since the 1980s, and over 850,000 Canadians were estimated to have visited a food bank monthly in 2015. Food banks serve multiple roles in communities, ranging from 'emergency responses' to individualized and short-term experiences of hunger, to 'chronic' supports as part of long-term subsistence strategies. This study used a mixed-methods design to examine the spectrum of food bank user experiences in a large urban context, as part of a community-based project aiming to envision a redesign of the food bank to contribute to broader community food security outcomes. Survey (n = 77) and focus group (n = 27) results suggested that participants widely viewed food banks as a long-term food-access strategy. Inadequate financial resources, steep increases in housing and food costs, and long-term health challenges emerged as the most prominent factors influencing food bank use. Participants commonly reported unmet food needs despite food bank use, limited agency over factors influencing access to sufficient food, and anticipated requiring food bank services in future. These findings contest global constructions of food banks as "emergency" food providers and support growing evidence that food banks are an insufficient response to chronic poverty, lack of affordable housing and insufficient social assistance rates underlying experiences of food insecurity. Participants envisioned changes to the food bank system to increase community food security including improved food quality and quantity (short-term), changes to service delivery and increased connections with health services (capacity building), and a greater role in poverty reduction advocacy (system redesign). Copyright © 2018. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Dewi Ratih, Iis; Sutijo Supri Ulama, Brodjol; Prastuti, Mike
2018-03-01
Value at Risk (VaR) is one of the statistical methods used to measure market risk by estimating the worst loss in a given time period at a given level of confidence. The accuracy of this measuring tool is very important in determining the amount of capital that a company must provide to cope with possible losses, because greater risk implies greater potential losses at a given probability level. For this reason, the analysis of VaR calculation is of particular concern to stock-market researchers and practitioners, with the aim of obtaining more accurate estimates. In this research, a risk analysis of four stocks in the banking sub-sector, Bank Rakyat Indonesia, Bank Mandiri, Bank Central Asia and Bank Negara Indonesia, is carried out. Stock returns are expected to be influenced by exogenous variables, namely the ICI and the exchange rate. Therefore, in this research, stock risk estimation is done using the VaR ARMAX-GARCHX method. Calculating the VaR value with the ARMAX-GARCHX approach using a window of 500 observations gives more accurate results. Overall, Bank Central Asia is the only bank whose estimated maximum loss falls within the 5% quantile.
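The paper's ARMAX-GARCHX model is not reproduced here; as a much simpler illustration of rolling-window VaR estimation at the 5% level, the following sketch fits a normal distribution to the last 500 returns at each step (the function name and the synthetic data are hypothetical).

    import numpy as np
    from scipy.stats import norm

    def rolling_parametric_var(returns, window=500, alpha=0.05):
        """Simplified rolling Value-at-Risk: at each step, fit a normal distribution
        to the last `window` returns and report the alpha-quantile loss level."""
        returns = np.asarray(returns, dtype=float)
        var = np.full(returns.shape, np.nan)
        for t in range(window, len(returns)):
            mu = returns[t - window:t].mean()
            sigma = returns[t - window:t].std(ddof=1)
            var[t] = -(mu + sigma * norm.ppf(alpha))       # positive number = loss level
        return var

    # usage with synthetic returns:
    # var_5pct = rolling_parametric_var(np.random.normal(0.0, 0.02, 2000))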
Method for filtering solvent and tar sand mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelterborn, J. C.; Stone, R. A.
1985-09-03
A method for filtering spent tar sands from a bitumen and organic solvent solution comprises separating the solution into two streams wherein the bulk of the coarser spent tar sand is in a first stream and has an average particle size of about 10 to about 100 mesh and the bulk of the finer spent tar sand is in a second stream; producing a filter cake by filtering the coarser spent tar sand from the first stream; and filtering the finer spent tar sand from the second stream with the filter cake. The method is particularly useful for filtering solutions of bitumen extracted from bitumen-containing diatomite, spent diatomite and organic solvent.
HEPA filter dissolution process
Brewer, K.N.; Murphy, J.A.
1994-02-22
A process is described for dissolution of spent high efficiency particulate air (HEPA) filters and then combining the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternate to a prior method of acid leaching the spent filters which is an inefficient method of treating spent HEPA filters for disposal. 4 figures.
An adaptive spatio-temporal Gaussian filter for processing cardiac optical mapping data.
Pollnow, S; Pilia, N; Schwaderlapp, G; Loewe, A; Dössel, O; Lenis, G
2018-06-04
Optical mapping is widely used as a tool to investigate cardiac electrophysiology in ex vivo preparations. Digital filtering of fluorescence-optical data is an important requirement for robust subsequent data analysis and still a challenge when processing data acquired from thin mammalian myocardium. Therefore, we propose and investigate the use of an adaptive spatio-temporal Gaussian filter for processing optical mapping signals from these kinds of tissue usually having low signal-to-noise ratio (SNR). We demonstrate how filtering parameters can be chosen automatically without additional user input. For systematic comparison of this filter with standard filtering methods from the literature, we generated synthetic signals representing optical recordings from atrial myocardium of a rat heart with varying SNR. Furthermore, all filter methods were applied to experimental data from an ex vivo setup. Our developed filter outperformed the other filter methods regarding local activation time detection at SNRs smaller than 3 dB which are typical noise ratios expected in these signals. At higher SNRs, the proposed filter performed slightly worse than the methods from literature. In conclusion, the proposed adaptive spatio-temporal Gaussian filter is an appropriate tool for investigating fluorescence-optical data with low SNR. The spatio-temporal filter parameters were automatically adapted in contrast to the other investigated filters. Copyright © 2018 Elsevier Ltd. All rights reserved.
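The adaptive selection of filter parameters is the contribution of the paper and is not reproduced here; the sketch below only shows fixed-parameter spatio-temporal Gaussian smoothing of an (x, y, t) optical-mapping stack, with illustrative sigma values.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def spatiotemporal_gaussian(frames, sigma_space=2.0, sigma_time=1.0):
        """Smooth an optical-mapping stack of shape (x, y, t) with a separable
        Gaussian kernel: sigma_space in both spatial axes, sigma_time along time."""
        return gaussian_filter(np.asarray(frames, dtype=float),
                               sigma=(sigma_space, sigma_space, sigma_time))

    # usage: filtered = spatiotemporal_gaussian(np.random.rand(100, 100, 500), 2.5, 1.5)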
Predictive Lateral Logic for Numerical Entry Guidance Algorithms
NASA Technical Reports Server (NTRS)
Smith, Kelly M.
2016-01-01
Recent entry guidance algorithm development has tended to focus on numerical integration of trajectories onboard in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility to varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the concept of Apollo heritage lateral error (or azimuth error) deadbands in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.
Method and apparatus for filtering gas with a moving granular filter bed
Brown, Robert C.; Wistrom, Corey; Smeenk, Jerod L.
2007-12-18
A method and apparatus for filtering gas (58) with a moving granular filter bed (48) involves moving a mass of particulate filter material (48) downwardly through a filter compartment (35); tangentially introducing gas into the compartment (54) to move in a cyclonic path downwardly around the moving filter material (48); diverting the cyclonic path (58) to a vertical path (62) to cause the gas to directly interface with the particulate filter material (48); thence causing the gas to move upwardly through the filter material (48) through a screened partition (24, 32) into a static upper compartment (22) of a filter compartment for exodus (56) of the gas which has passed through the particulate filter material (48).
Lefebvre, Carol; Glanville, Julie; Beale, Sophie; Boachie, Charles; Duffy, Steven; Fraser, Cynthia; Harbour, Jenny; McCool, Rachael; Smith, Lynne
2017-01-01
BACKGROUND Effective study identification is essential for conducting health research, developing clinical guidance and health policy and supporting health-care decision-making. Methodological search filters (combinations of search terms to capture a specific study design) can assist in searching to achieve this. OBJECTIVES This project investigated the methods used to assess the performance of methodological search filters, the information that searchers require when choosing search filters and how that information could be better provided. METHODS Five literature reviews were undertaken in 2010/11: search filter development and testing; comparison of search filters; decision-making in choosing search filters; diagnostic test accuracy (DTA) study methods; and decision-making in choosing diagnostic tests. We conducted interviews and a questionnaire with experienced searchers to learn what information assists in the choice of search filters and how filters are used. These investigations informed the development of various approaches to gathering and reporting search filter performance data. We acknowledge that there has been a regrettable delay between carrying out the project, including the searches, and the publication of this report, because of serious illness of the principal investigator. RESULTS The development of filters most frequently involved using a reference standard derived from hand-searching journals. Most filters were validated internally only. Reporting of methods was generally poor. Sensitivity, precision and specificity were the most commonly reported performance measures and were presented in tables. Aspects of DTA study methods are applicable to search filters, particularly in the development of the reference standard. There is limited evidence on how clinicians choose between diagnostic tests. No published literature was found on how searchers select filters. Interviewing and questioning searchers via a questionnaire found that filters were not appropriate for all tasks but were predominantly used to reduce large numbers of retrieved records and to introduce focus. The Inter Technology Appraisal Support Collaboration (InterTASC) Information Specialists' Sub-Group (ISSG) Search Filters Resource was most frequently mentioned by both groups as the resource consulted to select a filter. Randomised controlled trial (RCT) and systematic review filters, in particular the Cochrane RCT and the McMaster Hedges filters, were most frequently mentioned. The majority indicated that they used different filters depending on the requirement for sensitivity or precision. Over half of the respondents used the filters available in databases. Interviewees used various approaches when using and adapting search filters. Respondents suggested that the main factors that would make choosing a filter easier were the availability of critical appraisals and more detailed performance information. Provenance and having the filter available in a central storage location were also important. LIMITATIONS The questionnaire could have been shorter and could have included more multiple choice questions, and the reviews of filter performance focused on only four study designs. CONCLUSIONS Search filter studies should use a representative reference standard and explicitly report methods and results. Performance measures should be presented systematically and clearly. 
Searchers find filters useful in certain circumstances but expressed a need for more user-friendly performance information to aid filter choice. We suggest approaches to use, adapt and report search filter performance. Future work could include research around search filters and performance measures for study designs not addressed here, exploration of alternative methods of displaying performance results and numerical synthesis of performance comparison results. FUNDING The National Institute for Health Research (NIHR) Health Technology Assessment programme and Medical Research Council-NIHR Methodology Research Programme (grant number G0901496). PMID:29188764
NASA Astrophysics Data System (ADS)
Sokopp, Manuel
2014-05-01
The embankment stability at navigable waters suffers from hydraulic loads such as strong ship-induced waves, the resulting hydropeaking and strong water-level fluctuations. Willow brush mattresses can reduce erosion at river embankments and increase bank stability. Based on experience gained in the project "Alternative Technical-Biological Bank Protection on Inland Waterways", the Federal Waterways Engineering and Research Institute commissioned a more detailed investigation of the protective functions of willow brush mattresses and, in particular, of the differences between brush mattresses made of pure shrub willows (Salix viminalis) and of tree willows (Salix alba) at waterways with high ship-induced hydraulic loads. This paper presents the research methods planned for the years 2014 to 2016. The protective functions of two different willow brush mattresses and the interplay between soil, hydraulics and willow sprout movement will be investigated in a wave basin by measuring flow velocity with ADVs (Acoustic Doppler Velocimeters) installed near the soil surface and in different embankment areas, the pore water pressure with probes in different soil layers, the wave height with ultrasound probes and the willow movements with permanently installed cameras while flooding the basin, as well as measuring the erosion afterwards. These flooding test series will be conducted twice during the vegetation period. The shear strength of the soil rooted by tree willows will be examined in different soil layers with a shear load frame. The results will be compared with the data from shear strength tests of same-aged brush mattresses made of shrub willows, which have already been carried out by the Federal Waterways Engineering and Research Institute. The filtering capability of the soil-covering branches and the near-surface willow roots will be investigated by growing willow brush mattresses in sample boxes. These can be repeatedly moved up and down in a diving pool while measuring pore water pressure in different soil layers and flow velocity with ADVs.
DEMONSTRATION BULLETIN: COLLOID POLISHING FILTER METHOD - FILTER FLOW TECHNOLOGY, INC.
The Filter Flow Technology, Inc. (FFT) Colloid Polishing Filter Method (CPFM) was tested as a transportable, trailer mounted, system that uses sorption and chemical complexing phenomena to remove heavy metals and nontritium radionuclides from water. Contaminated waters can be pro...
12 CFR 1101.3 - Organization and methods of operation.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Organization and methods of operation. 1101.3..., PROCEDURES, PUBLIC INFORMATION § 1101.3 Organization and methods of operation. (a) Statutory requirements... securing the services, as needed, of staff experts from the five agencies; supervising research and other...
Fout, G. Shay; Cashdollar, Jennifer L.; Varughese, Eunice A.; Parshionikar, Sandhya U.; Grimm, Ann C.
2015-01-01
EPA Method 1615 was developed with a goal of providing a standard method for measuring enteroviruses and noroviruses in environmental and drinking waters. The standardized sampling component of the method concentrates viruses that may be present in water by passage of a minimum specified volume of water through an electropositive cartridge filter. The minimum specified volumes for surface and finished/ground water are 300 L and 1,500 L, respectively. A major method limitation is the tendency for the filters to clog before meeting the sample volume requirement. Studies using two different, but equivalent, cartridge filter options showed that filter clogging was a problem with 10% of the samples with one of the filter types compared to 6% with the other filter type. Clogging tends to increase with turbidity, but cannot be predicted based on turbidity measurements only. From a cost standpoint one of the filter options is preferable over the other, but the water quality and experience with the water system to be sampled should be taken into consideration in making filter selections. PMID:25867928
Latent component-based gear tooth fault detection filter using advanced parametric modeling
NASA Astrophysics Data System (ADS)
Ettefagh, M. M.; Sadeghi, M. H.; Rezaee, M.; Chitsaz, S.
2009-10-01
In this paper, a new parametric model-based filter is proposed for gear tooth fault detection. The design of the filter consists of identifying the most appropriate latent component (LC) of the undamaged gearbox signal by analyzing the instantaneous modules (IMs) and instantaneous frequencies (IFs), and then using the component with the lowest IM as the filter output for detecting gearbox faults. The filter parameters are estimated using the LC theory, in which an advanced parametric modeling method has been implemented. The proposed method is applied to signals extracted from a simulated gearbox to detect simulated gear faults. In addition, the method is used for quality inspection of production Nissan-Junior vehicle gearboxes through gear profile error detection on an industrial test bed. For evaluation purposes, the proposed method is compared with previous parametric TAR/AR-based filters, in which the parametric model residual is taken as the filter output and Yule-Walker and Kalman filtering are used to estimate the parameters. The results confirm the high performance of the new proposed fault detection method.
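The LC-based design itself is not reproduced here; as a simplified sketch of the parametric-model baseline the paper compares against, the code below fits an AR model to a healthy-gearbox signal by least squares and uses the prediction-residual energy of new signals as a fault indicator (function names and the model order are illustrative).

    import numpy as np

    def fit_ar(signal, order=20):
        """Least-squares AR(order) fit: returns coefficients a such that
        x[n] ~ sum_k a[k] * x[n-k-1]."""
        x = np.asarray(signal, dtype=float)
        X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
        y = x[order:]
        a, *_ = np.linalg.lstsq(X, y, rcond=None)
        return a

    def residual_energy(signal, a):
        """Prediction-residual energy of `signal` under AR coefficients `a`;
        larger values suggest departure from the healthy baseline."""
        x = np.asarray(signal, dtype=float)
        order = len(a)
        X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
        resid = x[order:] - X @ a
        return float(np.mean(resid ** 2))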
IEEE Conference Record of 1973 Eleventh Modulator Symposium, New York City, 18-19 September 1973.
1973-01-01
[Garbled OCR excerpt: the record discusses the characteristics of high-voltage, high-power, fast-switching devices (an NPNP structure operating at about 1.3 kV) and a compact modulator in which power supplies feeding crossed-field amplifiers (CFAs) are connected through HVPS filter banks, with four of the power supplies carrying a 50% greater load than the other four, in the context of the AEGIS SPY-1 system.]
NASA Technical Reports Server (NTRS)
1981-01-01
Nation's first solar-cell-powered air monitoring station was installed at Liberty State Park, New Jersey. Jointly sponsored by state agencies and the Department of Energy, system includes display which describes its operation to park visitors. Unit samples air every sixth day for a period of 24 hours. Air is forced through a glass filter, then is removed each week for examination by the New Jersey Bureau of Air Pollution. During the day, solar cells provide total power for the sampling equipment. Excess energy is stored in a bank of lead-acid batteries for use when needed.
Reconfigurable Pointing Control for High Resolution Space Spectroscopy
NASA Technical Reports Server (NTRS)
Bayard, David S.; Kia, Tooraj; vanCleve, Jeffrey
1997-01-01
In this paper, a pointing control performance criterion is established to support high resolution space spectroscopy. Results indicate that these pointing requirements are very stringent, and would typically be difficult to meet using standard 3-axis spacecraft control. To resolve this difficulty, it is shown that performance can be significantly improved using a reconfigurable control architecture that switches among a small bank of detuned Kalman filters. The effectiveness of the control reconfiguration approach is demonstrated by example on the Space Infrared Telescope Facility (SIRTF) pointing system, in support of the Infrared Spectrograph (IRS) payload.
Study of radar pulse compression for high resolution satellite altimetry
NASA Technical Reports Server (NTRS)
Dooley, R. P.; Nathanson, F. E.; Brooks, L. W.
1974-01-01
Pulse compression techniques are studied which are applicable to a satellite altimeter having a topographic resolution of ±10 cm. A systematic design procedure is used to determine the system parameters. The performance of an optimum, maximum likelihood processor is analysed, which provides the basis for modifying the standard split-gate tracker to achieve improved performance. Bandwidth considerations lead to the recommendation of a full deramp STRETCH pulse compression technique followed by an analog filter bank to separate range returns. The implementation of the recommended technique is examined.
Fast Faraday fading of long range satellite signals.
NASA Technical Reports Server (NTRS)
Heron, M. L.
1972-01-01
20 MHz radio signals have been received during the day from satellite Beacon-B when it was below the optical horizon by using a bank of narrow filters to improve the signal to noise ratio. The Faraday fading rate becomes constant, under these conditions, at a level determined by the plasma frequency just below the F-layer peak. Variations in the Faraday fading rate reveal fluctuations in the electron density near the peak, while the rate of attaining the constant level depends on the shape of the electron density profile.
NASA Astrophysics Data System (ADS)
Barbarossa, S.; Farina, A.
A novel scheme for detecting moving targets with synthetic aperture radar (SAR) is presented. The proposed approach is based on the use of the Wigner-Ville distribution (WVD) for simultaneously detecting moving targets and estimating their motion kinematic parameters. The estimation plays a key role in focusing the target and correctly locating it with respect to the stationary background. The method has a number of advantages: (i) the detection is performed efficiently on the samples in the time-frequency domain, once the WVD has been computed, without resorting to a bank of filters, each one matched to possible values of the unknown target motion parameters; (ii) the estimation of the target motion parameters can be done in the same time-frequency domain by locating the line along which the maximum energy of the WVD is concentrated. A validation of the approach is given by both analytical and simulation means. In addition, the estimation of the target kinematic parameters and the corresponding image focusing are also demonstrated.
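For readers unfamiliar with the WVD, a minimal discrete pseudo Wigner-Ville sketch is given below; this is not the paper's SAR processing chain, only an illustration of the transform whose time-frequency energy line is exploited above. For an analytic chirp-like return, the energy concentrates along a line whose slope relates to the unknown motion parameters.

    import numpy as np

    def wigner_ville(x, n_freq=None):
        """Discrete pseudo Wigner-Ville distribution of a (preferably analytic)
        signal; rows are time samples, columns frequency bins. Frequency bin k
        corresponds to normalized frequency k / (2 * n_freq) in this formulation."""
        x = np.asarray(x, dtype=complex)
        n = len(x)
        n_freq = n_freq or n
        wvd = np.zeros((n, n_freq))
        for t in range(n):
            tau_max = min(t, n - 1 - t, n_freq // 2 - 1)
            taus = np.arange(-tau_max, tau_max + 1)
            kernel = np.zeros(n_freq, dtype=complex)
            kernel[taus % n_freq] = x[t + taus] * np.conj(x[t - taus])
            wvd[t] = np.fft.fft(kernel).real               # real by Hermitian symmetry in tau
        return wvd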
A boosted optimal linear learner for retinal vessel segmentation
NASA Astrophysics Data System (ADS)
Poletti, E.; Grisan, E.
2014-03-01
Ocular fundus images provide important information about retinal degeneration, which may be related to acute pathologies or to early signs of systemic diseases. An automatic and quantitative assessment of vessel morphological features, such as diameters and tortuosity, can improve clinical diagnosis and evaluation of retinopathy. At variance with available methods, we propose a data-driven approach, in which the system learns a set of optimal discriminative convolution kernels (linear learner). The set is progressively built based on an ADA-boost sample weighting scheme, providing seamless integration between linear learner estimation and classification. In order to capture the vessel appearance changes at different scales, the kernels are estimated on a pyramidal decomposition of the training samples. The set is employed as a rotating bank of matched filters, whose response is used by the boosted linear classifier to provide a classification of each image pixel into the two classes of interest (vessel/background). We tested the approach on fundus images available from the DRIVE dataset. We show that the segmentation yields an accuracy of 0.94.
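As an illustration of the classical rotating matched-filter idea (not the boosted data-driven learner proposed above), the sketch below convolves an image with an elongated kernel at several orientations and keeps the maximum response per pixel; the kernel shape and angle count are illustrative.

    import numpy as np
    from scipy.ndimage import convolve, rotate

    def oriented_kernel(length=15, sigma=2.0):
        """Elongated zero-mean kernel: Gaussian profile across the vessel,
        constant along it (vertical orientation before rotation)."""
        h = length // 2
        y, x = np.mgrid[-h:h + 1, -h:h + 1]
        k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
        return k - k.mean()

    def matched_filter_response(image, n_angles=12):
        """Maximum response over a rotating bank of matched filters."""
        image = np.asarray(image, dtype=float)
        base = oriented_kernel()
        response = np.full(image.shape, -np.inf)
        for angle in np.linspace(0.0, 180.0, n_angles, endpoint=False):
            k = rotate(base, angle, reshape=False, order=1)
            response = np.maximum(response, convolve(image, k, mode='nearest'))
        return response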
Chang, Hang; Han, Ju; Zhong, Cheng; Snijders, Antoine M.; Mao, Jian-Hua
2017-01-01
The capabilities of (I) learning transferable knowledge across domains; and (II) fine-tuning the pre-learned base knowledge towards tasks with considerably smaller data scale are extremely important. Many of the existing transfer learning techniques are supervised approaches, among which deep learning has the demonstrated power of learning domain transferrable knowledge with large scale network trained on massive amounts of labeled data. However, in many biomedical tasks, both the data and the corresponding label can be very limited, where the unsupervised transfer learning capability is urgently needed. In this paper, we proposed a novel multi-scale convolutional sparse coding (MSCSC) method, that (I) automatically learns filter banks at different scales in a joint fashion with enforced scale-specificity of learned patterns; and (II) provides an unsupervised solution for learning transferable base knowledge and fine-tuning it towards target tasks. Extensive experimental evaluation of MSCSC demonstrates the effectiveness of the proposed MSCSC in both regular and transfer learning tasks in various biomedical domains. PMID:28129148
Low Noise Titanium Nitride KIDs for SuperSpec: A Millimeter-Wave On-Chip Spectrometer
NASA Astrophysics Data System (ADS)
Hailey-Dunsheath, S.; Shirokoff, E.; Barry, P. S.; Bradford, C. M.; Chapman, S.; Che, G.; Glenn, J.; Hollister, M.; Kovács, A.; LeDuc, H. G.; Mauskopf, P.; McKenney, C.; O'Brient, R.; Padin, S.; Reck, T.; Shiu, C.; Tucker, C. E.; Wheeler, J.; Williamson, R.; Zmuidzinas, J.
2016-07-01
SuperSpec is a novel on-chip spectrometer we are developing for multi-object, moderate resolution (R = 100-500), large bandwidth (~1.65:1), submillimeter and millimeter survey spectroscopy of high-redshift galaxies. The spectrometer employs a filter bank architecture, and consists of a series of half-wave resonators formed by lithographically-patterned superconducting transmission lines. The signal power admitted by each resonator is detected by a lumped element titanium nitride (TiN) kinetic inductance detector operating at 100-200 MHz. We have tested a new prototype device that achieves the targeted R = 100 resolving power, and has better detector sensitivity and optical efficiency than previous devices. We employ a new method for measuring photon noise using both coherent and thermal sources of radiation to cleanly separate the contributions of shot and wave noise. We report an upper limit to the detector NEP of 1.4 × 10^-17 W Hz^-1/2, within 10% of the photon noise-limited NEP for a ground-based R = 100 spectrometer.
An Indirect Adaptive Control Scheme in the Presence of Actuator and Sensor Failures
NASA Technical Reports Server (NTRS)
Sun, Joy Z.; Josh, Suresh M.
2009-01-01
The problem of controlling a system in the presence of unknown actuator and sensor faults is addressed. The system is assumed to have groups of actuators, and groups of sensors, with each group consisting of multiple redundant similar actuators or sensors. The types of actuator faults considered consist of unknown actuators stuck in unknown positions, as well as reduced actuator effectiveness. The sensor faults considered include unknown biases and outages. The approach employed for fault detection and estimation consists of a bank of Kalman filters based on multiple models, and subsequent control reconfiguration to mitigate the effect of biases caused by failed components as well as to obtain stability and satisfactory performance using the remaining actuators and sensors. Conditions for fault identifiability are presented, and the adaptive scheme is applied to an aircraft flight control example in the presence of actuator failures. Simulation results demonstrate that the method can rapidly and accurately detect faults and estimate the fault values, thus enabling safe operation and acceptable performance in spite of failures.
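A minimal sketch of the multiple-model idea, assuming linear models and Gaussian noise, is shown below: one Kalman filter is run per fault hypothesis and the hypothesis probabilities are updated from the innovation likelihoods. The class and function names, and all matrices, are illustrative rather than the paper's formulation.

    import numpy as np

    class KalmanFilter:
        """Plain linear Kalman filter for one model (fault) hypothesis."""
        def __init__(self, A, H, Q, R, x0, P0):
            self.A, self.H, self.Q, self.R = A, H, Q, R
            self.x, self.P = x0.astype(float).copy(), P0.astype(float).copy()

        def step(self, z):
            """Predict, update with measurement z, return innovation likelihood."""
            x_pred = self.A @ self.x
            P_pred = self.A @ self.P @ self.A.T + self.Q
            nu = z - self.H @ x_pred                      # innovation
            S = self.H @ P_pred @ self.H.T + self.R       # innovation covariance
            K = P_pred @ self.H.T @ np.linalg.inv(S)
            self.x = x_pred + K @ nu
            self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
            d = len(nu)
            return float(np.exp(-0.5 * nu @ np.linalg.solve(S, nu))
                         / np.sqrt((2 * np.pi) ** d * np.linalg.det(S)))

    def bank_step(filters, probs, z):
        """One step of the filter bank: update model probabilities from the
        innovation likelihoods; the most probable model flags the fault mode."""
        likelihoods = np.array([f.step(z) for f in filters])
        post = probs * likelihoods + 1e-300               # guard against all-zero collapse
        return post / post.sum()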
Steerable dyadic wavelet transform and interval wavelets for enhancement of digital mammography
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Koren, Iztok; Yang, Wuhai; Taylor, Fred J.
1995-04-01
This paper describes two approaches for accomplishing interactive feature analysis by overcomplete multiresolution representations. We show quantitatively that transform coefficients, modified by an adaptive non-linear operator, can make unseen or barely visible mammographic features more obvious without requiring additional radiation. Our results are compared with traditional image enhancement techniques by measuring the local contrast of known mammographic features. We design a filter bank representing a steerable dyadic wavelet transform that can be used for multiresolution analysis along arbitrary orientations. Digital mammograms are enhanced by orientation analysis performed by a steerable dyadic wavelet transform. Arbitrary regions of interest (ROI) are enhanced by Deslauriers-Dubuc interpolation representations on an interval. We demonstrate that our methods can provide radiologists with an interactive capability to support localized processing of selected (suspicious) areas (lesions). Features extracted from multiscale representations can provide an adaptive mechanism for accomplishing local contrast enhancement. Improving the visualization of breast pathology can improve the chances of early detection while requiring less time to evaluate mammograms for most patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seo, Seong-Heon; Wi, H. M.; Lee, W. R.
2013-08-15
A frequency modulation reflectometer has been developed to measure the plasma density profile of the Korea Superconducting Tokamak Advanced Research tokamak. Three reflectometers are operating in extraordinary polarization mode in the frequency range of Q band (33.6–54 GHz), V band (48–72 GHz), and W band (72–108 GHz) to measure densities up to 7 × 10^19 m^-3 when the toroidal magnetic field is 2 T on axis. The antenna is installed inside the vacuum vessel. A new vacuum window was developed using a 50 μm thick mica film and a 0.1 mm thick gold gasket. A filter bank consisting of a low-pass filter, a notch filter, and a Faraday isolator is used to reject the high-power electron cyclotron heating radiation with 60 dB of attenuation. The full frequency band is swept in 20 μs. The mixer output is directly digitized at a sampling rate of 100 MSamples/s. The phase is obtained by using a wavelet transform. The whole hardware and software system is described in detail and the measured density profile is presented as a result.
Villa-Parra, Ana Cecilia; Bastos-Filho, Teodiano; López-Delis, Alberto; Frizera-Neto, Anselmo; Krishnan, Sridhar
2017-01-01
This work presents a new on-line adaptive filter, which is based on a similarity analysis between standard electrode locations, in order to reduce artifacts and common interferences throughout electroencephalography (EEG) signals while preserving the useful information. The standard deviation and the Concordance Correlation Coefficient (CCC) between target electrodes and their corresponding neighbor electrodes are analyzed on sliding windows to select those neighbors that are highly correlated. Afterwards, a model based on CCC is applied to assign higher weights to those correlated electrodes with lower similarity to the target electrode. The approach was applied to brain-computer interfaces (BCIs) based on Canonical Correlation Analysis (CCA) to recognize 40 targets of steady-state visual evoked potential (SSVEP), providing an accuracy (ACC) of 86.44 ± 2.81%. In addition, using this approach, low-frequency features were selected in the pre-processing stage of another BCI to recognize gait planning. In this case, the recognition was significantly (p<0.01) improved for most of the subjects (ACC≥74.79%), when compared with other BCIs based on Common Spatial Pattern, Filter Bank-Common Spatial Pattern, and Riemannian Geometry. PMID:29186848
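A small sketch of the two ingredients named above, Lin's concordance correlation coefficient and a CCC-based weighting of neighbor channels, is given below; the selection threshold and the exact weighting rule are illustrative stand-ins for the paper's model.

    import numpy as np

    def concordance_correlation(x, y):
        """Lin's concordance correlation coefficient between two 1-D signals."""
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = np.mean((x - mx) * (y - my))
        return 2.0 * cov / (vx + vy + (mx - my) ** 2)

    def weighted_neighbor_reference(target, neighbors, ccc_threshold=0.8):
        """Build an interference reference from neighbor channels that are highly
        concordant with the target; among the selected channels, weights favour the
        less-similar ones, as a crude stand-in for the paper's weighting model."""
        cccs = np.array([concordance_correlation(target, n) for n in neighbors])
        selected = cccs >= ccc_threshold
        if not selected.any():
            return np.zeros_like(np.asarray(target, dtype=float))
        weights = 1.0 - cccs[selected]
        if weights.sum() <= 0:
            weights = np.ones_like(weights)
        return np.average(np.asarray(neighbors, dtype=float)[selected], axis=0, weights=weights)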
The temporal representation of speech in a nonlinear model of the guinea pig cochlea
NASA Astrophysics Data System (ADS)
Holmes, Stephen D.; Sumner, Christian J.; O'Mard, Lowel P.; Meddis, Ray
2004-12-01
The temporal representation of speechlike stimuli in the auditory-nerve output of a guinea pig cochlea model is described. The model consists of a bank of dual resonance nonlinear filters that simulate the vibratory response of the basilar membrane followed by a model of the inner hair cell/auditory nerve complex. The model is evaluated by comparing its output with published physiological auditory nerve data in response to single and double vowels. The evaluation includes analyses of individual fibers, as well as ensemble responses over a wide range of best frequencies. In all cases the model response closely follows the patterns in the physiological data, particularly the tendency for the temporal firing pattern of each fiber to represent the frequency of a nearby formant of the speech sound. In the model this behavior is largely a consequence of filter shapes; nonlinear filtering has only a small contribution at low frequencies. The guinea pig cochlear model produces a useful simulation of the measured physiological response to simple speech sounds and is therefore suitable for use in more advanced applications, including attempts to generalize these principles to the response of the human auditory system, both normal and impaired.
SPONGY (SPam ONtoloGY): Email Classification Using Two-Level Dynamic Ontology
2014-01-01
Email is one of common communication methods between people on the Internet. However, the increase of email misuse/abuse has resulted in an increasing volume of spam emails over recent years. An experimental system has been designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that indeed the proposed ontology-based approach improves spam filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first level global ontology filter and a second level user-customized ontology filter. The use of the global ontology filter showed about 91% of spam filtered, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of ontology, which is user-customized, scalable, and modularized, so that it can be embedded to many other systems for better performance. PMID:25254240
Radiation Hard Bandpass Filters for Mid- to Far-IR Planetary Instruments
NASA Technical Reports Server (NTRS)
Brown, Ari D.; Aslam, Shahid; Chervenack, James A.; Huang, Wei-Chung; Merrell, Willie C.; Quijada, Manuel; Steptoe-Jackson, Rosalind; Wollack, Edward J.
2012-01-01
We present a novel method to fabricate compact metal mesh bandpass filters for use in mid- to far-infrared planetary instruments operating in the 20-600 micron wavelength spectral regime. Our target applications include thermal mapping instruments on ESA's JUICE as well as on a de-scoped JEO. These filters are novel because they are compact, customizable, free-standing copper mesh resonant bandpass filters with micromachined silicon support frames. The filters are well suited for thermal mapping missions to the outer planets and their moons because the filter material is radiation hard. Furthermore, the silicon support frame allows for effective hybridization with sensors made on silicon substrates. Using a Fourier Transform Spectrometer, we have demonstrated high transmittance within the passband as well as good out-of-band rejection [1]. In addition, we have developed a unique method of filter stacking in order to increase the bandwidth and sharpen the roll-off of the filters. This method allows one to reliably control the spacing between filters to within 2 microns. Furthermore, our method allows for reliable control over the relative position and orientation between the shared faces of the filters.
SITE TECHNOLOGY CAPSULE: FILTER FLOW TECHNOLOGY, INC. - COLLOID POLISHING FILTER METHOD
The Filter Flow Technology, Inc. (FFT) Colloid Polishing Filter Method (CPFM) was demonstrated at the U.S. Department of Energy's (DOE) Rocky Flats Plant (RFP) as part of the U.S. Environmental Protection Agency's (EPA) Superfund and Innovative Technology Evaluation (SITE) program. ...
YBCO High-Temperature Superconducting Filters on M-Plane Sapphire Substrates
NASA Technical Reports Server (NTRS)
Sabataitis, J. C.; Mueller, C. H.; Miranda, F. A.; Warner, J.; Bhasin, K. B.
1996-01-01
Since the discovery of High Temperature Superconductors (HTS) in 1986, microwave circuits have been demonstrated using HTS films on various substrates. These HTS-based circuits have proven to operate with less power loss than their metallic film counterparts at 77 K. This translates into smaller and lighter microwave circuits for space communication systems such as multiplexer filter banks. High quality HTS films have conventionally been deposited on lanthanum aluminate (LaAlO3) substrates. However, LaAlO3 has a relative dielectric constant (epsilon(sub r)) of 24. With an epsilon(sub r) of approximately 9.4-11.6, sapphire (Al2O3) would be a preferable substrate for the fabrication of HTS-based components, since the lower dielectric constant would permit wider microstrip lines to be used for a given characteristic impedance (Z(sub 0)), thus lowering the insertion losses and increasing the power handling capabilities of the devices. We report on the fabrication and characterization of YBa2Cu3O(7-delta) (YBCO) on M-plane sapphire bandpass filters at 4.0 GHz. For a YBCO 'hairpin' filter, a minimum insertion loss of 0.5 dB was measured at 77 K as compared with 1.4 dB for its gold counterpart. In an 'edge-coupled' configuration, the insertion loss went down from 0.9 dB for the gold film to 0.8 dB for the YBCO film at the same temperature.
Brown, Carissa D; Liu, Juxin; Yan, Guohua; Johnstone, Jill F
2015-11-01
Disturbance plays a key role in driving ecological responses by creating opportunities for new ecological communities to assemble and by directly influencing the outcomes of assembly. Legacy effects (such as seed banks) and environmental filters can both influence community assembly, but their effects are impossible to separate with observational data. Here, we used seeding experiments in sites covering a broad range of postdisturbance conditions to tease apart the effects of seed availability, environmental factors, and disturbance characteristics on early community assembly after fire. We added seed of four common boreal trees to experimental plots in 55 replicate sites in recently burned areas of black spruce forest in northwestern North America. Seed addition treatments increased the probability of occurrence for all species, indicating a widespread potential for seed limitation to affect patterns of recruitment after fire. Small-seeded species (aspen and birch) were most sensitive to environmental factors such as soil moisture and organic layer depth, suggesting a role for niche-based environmental filtering in community assembly. Fire characteristics related to severity and frequency were also important drivers of seedling regeneration, indicating the potential for disturbance to mediate environmental filters and legacy effects on seed availability. Because effects of seed availability are typically impossible to disentangle from environmental constraints on recruitment in observational studies, legacy effects contingent on vegetation history may be misinterpreted as being driven by strong environmental filters. Results from the seeding experiments suggest that vegetation legacies affecting seed availability play a pivotal role in shaping patterns of community assembly after fire in these low-diversity boreal forests.
Electronic filters, repeated signal charge conversion apparatus, hearing aids and methods
NASA Technical Reports Server (NTRS)
Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)
1993-01-01
An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.
Jeong, Jinsoo
2011-01-01
This paper presents an acoustic noise cancelling technique using an inverse kepstrum system as an innovations-based whitening application for an adaptive finite impulse response (FIR) filter in beamforming structure. The inverse kepstrum method uses an innovations-whitened form from one acoustic path transfer function between a reference microphone sensor and a noise source so that the rear-end reference signal will then be a whitened sequence to a cascaded adaptive FIR filter in the beamforming structure. By using an inverse kepstrum filter as a whitening filter with the use of a delay filter, the cascaded adaptive FIR filter estimates only the numerator of the polynomial part from the ratio of overall combined transfer functions. The test results have shown that the adaptive FIR filter is more effective in beamforming structure than an adaptive noise cancelling (ANC) structure in terms of signal distortion in the desired signal and noise reduction in noise with nonminimum phase components. In addition, the inverse kepstrum method shows almost the same convergence level in estimate of noise statistics with the use of a smaller amount of adaptive FIR filter weights than the kepstrum method, hence it could provide better computational simplicity in processing. Furthermore, the rear-end inverse kepstrum method in beamforming structure has shown less signal distortion in the desired signal than the front-end kepstrum method and the front-end inverse kepstrum method in beamforming structure. PMID:22163987
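For comparison, a textbook LMS adaptive noise canceller (the conventional ANC building block, not the kepstrum-based whitening structure proposed above) can be sketched as follows; the tap count and step size are illustrative.

    import numpy as np

    def lms_noise_canceller(primary, reference, n_taps=32, mu=0.01):
        """Classic LMS adaptive noise cancelling: adapt FIR weights so the filtered
        reference tracks the noise in the primary channel; the error signal is the
        estimated clean output."""
        primary = np.asarray(primary, dtype=float)
        reference = np.asarray(reference, dtype=float)
        w = np.zeros(n_taps)
        error = np.zeros(len(primary))
        for n in range(n_taps, len(primary)):
            x = reference[n - n_taps:n][::-1]      # most recent reference samples first
            y = w @ x                              # noise estimate
            error[n] = primary[n] - y              # cleaned output sample
            w += 2.0 * mu * error[n] * x           # LMS weight update
        return error, w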
Dense grid sibling frames with linear phase filters
NASA Astrophysics Data System (ADS)
Abdelnour, Farras
2013-09-01
We introduce new 5-band dyadic sibling frames with dense time-frequency grid. Given a lowpass filter satisfying certain conditions, the remaining filters are obtained using spectral factorization. The analysis and synthesis filterbanks share the same lowpass and bandpass filters but have different and oversampled highpass filters. This leads to wavelets approximating shift-invariance. The filters are FIR, have linear phase, and the resulting wavelets have vanishing moments. The filters are designed using spectral factorization method. The proposed method leads to smooth limit functions with higher approximation order, and computationally stable filterbanks.
Three-stage Fabry-Perot liquid crystal tunable filter with extended spectral range.
Zheng, Zhenrong; Yang, Guowei; Li, Haifeng; Liu, Xu
2011-01-31
A method to extend the spectral range of a tunable optical filter is proposed in this paper. Two identical tunable Fabry-Perot filters and an additional tunable filter with a different free spectral range are cascaded to extend the spectral range and reduce sidelobes. Over 400 nm of free spectral range and 4 nm of full width at half maximum of the filter were achieved. The design procedure and simulation are described in detail. An experimental 3-stage tunable Fabry-Perot filter covering visible and infrared spectra is demonstrated. The experimental results and the theoretical analysis are presented in detail to verify this method. The results revealed that a compact Fabry-Perot filter with an extended tunable spectral range can be easily attained by this method.
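The extension of the free spectral range by cascading etalons with different free spectral ranges (a Vernier-type effect) can be illustrated with ideal Airy transmission functions; the gap values, refractive index and mirror reflectance below are arbitrary illustrative numbers, not the parameters of the reported device.

    import numpy as np

    def airy_transmission(wavelength_nm, gap_nm, reflectance=0.9, n_index=1.5):
        """Ideal Fabry-Perot (Airy) transmission at normal incidence."""
        delta = 4.0 * np.pi * n_index * gap_nm / wavelength_nm   # round-trip phase
        finesse_coeff = 4.0 * reflectance / (1.0 - reflectance) ** 2
        return 1.0 / (1.0 + finesse_coeff * np.sin(delta / 2.0) ** 2)

    # Cascading two identical stages with a third stage of different gap suppresses
    # most transmission orders and extends the usable spectral range.
    wl = np.linspace(400.0, 800.0, 20001)                        # wavelength grid in nm
    combined = (airy_transmission(wl, 3000.0) *
                airy_transmission(wl, 3000.0) *
                airy_transmission(wl, 2600.0))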
An efficient incremental learning mechanism for tracking concept drift in spam filtering
Sheu, Jyh-Jian; Chu, Ko-Tsung; Li, Nien-Feng; Lee, Cheng-Chi
2017-01-01
This research carries out an in-depth analysis of the knowledge about spam and proposes an efficient spam filtering method with the ability to adapt to a dynamic environment. We focus on the analysis of the email header and apply a decision tree data mining technique to look for association rules about spam. Then, we propose an efficient systematic filtering method based on these association rules. Our systematic method has the following major advantages: (1) Only the header sections of emails are checked, unlike current spam filtering methods that must fully analyze the email content; at the same time, the email filtering accuracy is expected to be enhanced. (2) To address the problem of concept drift, we propose a window-based technique to estimate the condition of concept drift for each unknown email, which helps our filtering method recognize the occurrence of spam. (3) We propose an incremental learning mechanism for our filtering method to strengthen its ability to adapt to a dynamic environment. PMID:28182691
An automated method of tuning an attitude estimator
NASA Technical Reports Server (NTRS)
Mason, Paul A. C.; Mook, D. Joseph
1995-01-01
Attitude determination is a major element of the operation and maintenance of a spacecraft. There are several existing methods of determining the attitude of a spacecraft. One of the most commonly used methods utilizes the Kalman filter to estimate the attitude of the spacecraft. Given an accurate model of a system and adequate observations, a Kalman filter can produce accurate estimates of the attitude. If the system model, filter parameters, or observations are inaccurate, the attitude estimates may be degraded. Therefore, it is advantageous to develop a method of automatically tuning the Kalman filter to produce the accurate estimates. In this paper, a three-axis attitude determination Kalman filter, which uses only magnetometer measurements, is developed and tested using real data. The appropriate filter parameters are found via the Process Noise Covariance Estimator (PNCE). The PNCE provides an optimal criterion for determining the best filter parameters.
Application of optical broadband monitoring to quasi-rugate filters by ion-beam sputtering
NASA Astrophysics Data System (ADS)
Lappschies, Marc; Görtz, Björn; Ristau, Detlev
2006-03-01
Methods for the manufacture of rugate filters by the ion-beam-sputtering process are presented. The first approach gives an example of a digitized version of a continuous-layer notch filter. This method allows the comparison of the basic theory of interference coatings containing thin layers with practical results. For the other methods, a movable zone target is employed to fabricate graded and gradual rugate filters. The examples demonstrate the potential of broadband optical monitoring in conjunction with the ion-beam-sputtering process. First-characterization results indicate that these types of filter may exhibit higher laser-induced damage-threshold values than those of classical filters.
Method for enhanced longevity of in situ microbial filter used for bioremediation
Carman, M. Leslie; Taylor, Robert T.
1999-01-01
An improved method for in situ microbial filter bioremediation that increases the operational longevity of an in situ microbial filter emplaced into an aquifer. A method for generating a microbial filter of sufficient catalytic density and thickness which has an increased replenishment interval, improved bacteria attachment and detachment characteristics, and endogenous stability under in situ conditions. A system for in situ field water remediation.
Genetic and Physical Interaction of the B-Cell SLE-Associated Genes BANK1 and BLK
Castillejo-López, Casimiro; Delgado-Vega, Angélica M.; Wojcik, Jerome; Kozyrev, Sergey V.; Thavathiru, Elangovan; Wu, Ying-Yu; Sánchez, Elena; Pöllmann, David; López-Egido, Juan R.; Fineschi, Serena; Domínguez, Nicolás; Lu, Rufei; James, Judith A.; Merrill, Joan T.; Kelly, Jennifer A.; Kaufman, Kenneth M.; Moser, Kathy; Gilkeson, Gary; Frostegård, Johan; Pons-Estel, Bernardo A.; D’Alfonso, Sandra; Witte, Torsten; Callejas, José Luis; Harley, John B.; Gaffney, Patrick; Martin, Javier; Guthridge, Joel M.; Alarcón-Riquelme, Marta E.
2012-01-01
Objectives Altered signaling in B-cells is a predominant feature of systemic lupus erythematosus (SLE). The genes BANK1 and BLK were recently described as associated with SLE. BANK1 codes for a B-cell-specific cytoplasmic protein involved in B-cell receptor signaling and BLK codes for an Src tyrosine kinase with important roles in B-cell development. To characterize the role of BANK1 and BLK in SLE, we performed a genetic interaction analysis hypothesizing that genetic interactions could reveal functional pathways relevant to disease pathogenesis. Methods We used the method GPAT16 to analyze the gene-gene interactions of BANK1 and BLK. Confocal microscopy was used to investigate co-localization, and immunoprecipitation was used to verify the physical interaction of BANK1 and BLK. Results Epistatic interactions between BANK1 and BLK polymorphisms associated with SLE were observed in a discovery set of 279 patients and 515 controls from Northern Europe. A meta-analysis with 4399 European individuals confirmed the genetic interactions between BANK1 and BLK. As BANK1 was identified as a binding partner of the Src tyrosine kinase LYN, we tested the possibility that BANK1 and BLK could also show a protein-protein interaction. We demonstrated co-immunoprecipitation and co-localization of BLK and BANK1. In a Daudi cell line and primary naïve B-cells the endogenous binding was enhanced upon B-cell receptor stimulation using anti-IgM antibodies. Conclusions Here, we show a genetic interaction between BANK1 and BLK, and demonstrate that these molecules interact physically. Our results have important consequences for the understanding of SLE and other autoimmune diseases and identify a potential new signaling pathway. PMID:21978998
Adaptive marginal median filter for colour images.
Morillas, Samuel; Gregori, Valentín; Sapena, Almanzor
2011-01-01
This paper describes a new filter for impulse noise reduction in colour images which is aimed at improving the noise reduction capability of the classical vector median filter. The filter is inspired by the application of a vector marginal median filtering process over a selected group of pixels in each filtering window. This selection, which is based on the vector median, along with the application of the marginal median operation constitutes an adaptive process that leads to a more robust filter design. Also, the proposed method is able to process colour images without introducing colour artifacts. Experimental results show that the images filtered with the proposed method contain less noisy pixels than those obtained through the vector median filter.
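A simplified, unoptimized reference sketch of the idea described above (select pixels around the vector median of each window, then take the per-channel marginal median of the selection) is shown below; the keep fraction and window radius are illustrative, and the paper's adaptive selection rule is not reproduced.

    import numpy as np

    def filter_window(window_pixels, keep_fraction=0.5):
        """Simplified adaptive marginal median on one window of RGB pixels
        (shape (n, 3)): select the pixels closest to the vector median, then take
        the per-channel (marginal) median of the selection."""
        dists = np.linalg.norm(window_pixels[:, None, :] - window_pixels[None, :, :], axis=2)
        vm_idx = np.argmin(dists.sum(axis=1))              # vector median pixel
        order = np.argsort(dists[vm_idx])
        keep = order[:max(1, int(keep_fraction * len(window_pixels)))]
        return np.median(window_pixels[keep], axis=0)

    def adaptive_marginal_median(image, radius=1):
        """Apply the window operation at every interior pixel of an (H, W, 3) image."""
        out = image.astype(float).copy()
        H, W, _ = image.shape
        for i in range(radius, H - radius):
            for j in range(radius, W - radius):
                win = image[i - radius:i + radius + 1, j - radius:j + radius + 1]
                out[i, j] = filter_window(win.reshape(-1, 3).astype(float))
        return out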
Comparative Study of Speckle Filtering Methods in PolSAR Radar Images
NASA Astrophysics Data System (ADS)
Boutarfa, S.; Bouchemakh, L.; Smara, Y.
2015-04-01
Images acquired by polarimetric SAR (PolSAR) radar systems are characterized by the presence of a noise called speckle. This noise has a multiplicative nature, corrupts both the amplitude and phase images, complicates data interpretation, degrades segmentation performance and reduces the detectability of targets. Hence the need to preprocess the images with adapted filtering methods before analysis. In this paper, we present a comparative study of implemented methods for reducing speckle in PolSAR images. The developed filters are: the refined Lee filter, based on the estimation of the minimum mean square error (MMSE); the improved Sigma filter with detection of strong scatterers, based on the calculation of the coherency matrix to detect the different scatterers in order to preserve the polarization signature and maintain structures that are necessary for image interpretation; filtering by the stationary wavelet transform (SWT) using multi-scale edge detection and the technique for improving the wavelet coefficients called SSC (sum of squared coefficients); and the Turbo filter, which is a combination of two complementary filters, the refined Lee filter and the SWT wavelet transform, where one filter can boost the results of the other. The originality of our work is based on the application of these methods to several types of images (amplitude, intensity and complex, from a satellite or an airborne radar) and on the optimization of wavelet filtering by adding a parameter in the calculation of the threshold. This parameter controls the filtering effect and achieves a good compromise between smoothing homogeneous areas and preserving linear structures. The methods are applied to fully polarimetric RADARSAT-2 images (HH, HV, VH, VV) acquired over Algiers, Algeria, in C-band and to three polarimetric E-SAR images (HH, HV, VV) acquired over the Oberpfaffenhofen area near Munich, Germany, in P-band. To evaluate the performance of each filter, we used the following criteria: smoothing of homogeneous areas, preservation of edges and preservation of polarimetric information. Experimental results are included to illustrate the different implemented methods.
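As a point of reference for the first of these methods, a basic (non-refined, non-polarimetric) Lee MMSE speckle filter for an intensity image can be sketched as follows; the window size and number of looks are illustrative.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(intensity, window=7, looks=4):
        """Basic Lee MMSE speckle filter for an intensity image.
        `looks` sets the assumed multiplicative speckle strength (Cu^2 = 1/L)."""
        img = np.asarray(intensity, dtype=float)
        mean = uniform_filter(img, window)
        mean_sq = uniform_filter(img ** 2, window)
        var = np.maximum(mean_sq - mean ** 2, 0.0)
        cu2 = 1.0 / looks                                  # speckle variation coefficient squared
        ci2 = var / np.maximum(mean ** 2, 1e-12)           # local variation coefficient squared
        weight = np.clip(1.0 - cu2 / np.maximum(ci2, 1e-12), 0.0, 1.0)
        return mean + weight * (img - mean)                # MMSE combination of mean and pixel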
Backus, Sterling J [Erie, CO; Kapteyn, Henry C [Boulder, CO
2007-07-10
A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly for each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjusted offline in order to adjust the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.
Super: a web server to rapidly screen superposable oligopeptide fragments from the protein data bank
Collier, James H.; Lesk, Arthur M.; Garcia de la Banda, Maria; Konagurthu, Arun S.
2012-01-01
Searching for well-fitting 3D oligopeptide fragments within a large collection of protein structures is an important task central to many analyses involving protein structures. This article reports a new web server, Super, dedicated to the task of rapidly screening the protein data bank (PDB) to identify all fragments that superpose with a query under a prespecified threshold of root-mean-square deviation (RMSD). Super relies on efficiently computing a mathematical bound on the commonly used structural similarity measure, RMSD of superposition. This allows the server to filter out a large proportion of fragments that are unrelated to the query; >99% of the total number of fragments in some cases. For a typical query, Super scans the current PDB containing over 80 500 structures (with ∼40 million potential oligopeptide fragments to match) in under a minute. Super web server is freely accessible from: http://lcb.infotech.monash.edu.au/super. PMID:22638586
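Super's fast RMSD lower-bound screening is not reproduced here; the underlying quantity it bounds, the least-squares RMSD of two fragments after optimal superposition, can be computed with the standard Kabsch/SVD procedure sketched below.

    import numpy as np

    def superposition_rmsd(P, Q):
        """Least-squares RMSD between two (n, 3) coordinate sets after optimal
        superposition (Kabsch algorithm via SVD)."""
        P = np.asarray(P, dtype=float)
        Q = np.asarray(Q, dtype=float)
        P = P - P.mean(axis=0)                     # remove translation
        Q = Q - Q.mean(axis=0)
        U, S, Vt = np.linalg.svd(P.T @ Q)
        d = np.sign(np.linalg.det(U @ Vt))         # correct for improper rotation
        R = U @ np.diag([1.0, 1.0, d]) @ Vt        # rotation applied to row vectors below
        diff = P @ R - Q
        return float(np.sqrt((diff ** 2).sum() / len(P)))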
Multi-linear model set design based on the nonlinearity measure and H-gap metric.
Shaghaghi, Davood; Fatehi, Alireza; Khaki-Sedigh, Ali
2017-05-01
This paper proposes a model bank selection method for a large class of nonlinear systems with wide operating ranges. In particular, the nonlinearity measure and H-gap metric are used to provide an effective algorithm to design a model bank for the system. Then, the proposed model bank is accompanied by model predictive controllers to design a high performance advanced process controller. The advantage of this method is the reduction of excessive switching between models and also a decrease in the computational complexity of the controller bank, which can lead to performance improvement of the control system. The effectiveness of the method is verified by simulations as well as experimental studies on a pH neutralization laboratory apparatus, which confirms the efficiency of the proposed algorithm. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Safety of long-term subcutaneous free flap skin banking after skin-sparing mastectomy.
Verstappen, Ralph; Djedovic, Gabriel; Morandi, Evi Maria; Heiser, Dietmar; Rieger, Ulrich Michael; Bauer, Thomas
2018-03-01
A persistent problem in autologous breast reconstruction in skin-sparing mastectomies is skin restoration after skin necrosis or secondary oncological resection. As a solution to facilitate reconstruction, skin banking of free-flap skin has been proposed in cases where the overlying skin envelope must be resected, as this technique spares the patient an additional donor site. Herein, we present the largest series to date in which this method was used. We investigated its safety and the possibility of skin banking for prolonged periods of time. All skin-sparing mastectomies and immediate autologous breast reconstructions from December 2009 until June 2013 at our institution were analysed. We identified 31 patients who underwent 33 free flap reconstructions in which skin banking was performed. Our median skin banking period was 7 days, with a maximum duration of 171 days. In 22.5% of cases, the banked skin was used to reconstruct overlying skin defects, and in 9.6% of cases to reconstruct the nipple-areolar complex. Microbiological and histological investigations of the banked skin revealed neither clinical infections nor malignancies. In situ skin banking, even for prolonged periods of time, is a safe and cost-effective method to ensure that skin defects due to necrosis or secondary oncological resection can be easily reconstructed.
Zou, X H; Zhu, Y P; Ren, G Q; Li, G C; Zhang, J; Zou, L J; Feng, Z B; Li, B H
2017-02-20
Objective: To evaluate the significance of bacteria detection with the filter paper method in the diagnosis of diabetic foot wound infection. Methods: Eighteen patients with diabetic foot ulcers conforming to the study criteria were hospitalized in Liyuan Hospital Affiliated to Tongji Medical College of Huazhong University of Science and Technology from July 2014 to July 2015. Diabetic foot ulcer wounds were classified according to the University of Texas diabetic foot classification (hereinafter referred to as Texas grade) system, and the general condition of patients with wounds of different Texas grades was compared. Exudate and tissue of wounds were obtained, and the filter paper method and biopsy method were adopted to detect the bacteria of the wounds of the patients respectively. The filter paper method was regarded as the evaluation method, and the biopsy method was regarded as the control method. The relevance, difference, and consistency of the detection results of the two methods were tested. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of the filter paper method in bacteria detection were calculated. A receiver operating characteristic (ROC) curve was drawn based on the specificity and sensitivity of the filter paper method in bacteria detection of the 18 patients to predict the detection effect of the method. Data were processed with one-way analysis of variance and Fisher's exact test. In patients tested positive for bacteria by the biopsy method, the correlation between the bacteria number detected by the biopsy method and that by the filter paper method was analyzed with Pearson correlation analysis. Results: (1) There were no statistically significant differences among patients with wounds of Texas grade 1, 2, and 3 in age, duration of diabetes, duration of wound, wound area, ankle brachial index, glycosylated hemoglobin, fasting blood sugar, blood platelet count, erythrocyte sedimentation rate, C-reactive protein, aspartate aminotransferase, serum creatinine, and urea nitrogen (with F values from 0.029 to 2.916, P values above 0.05), while there were statistically significant differences among patients with wounds of Texas grade 1, 2, and 3 in white blood cell count and alanine aminotransferase (with F values 4.688 and 6.833 respectively, P<0.05 or P<0.01). (2) According to the results of the biopsy method, 6 patients were tested negative for bacteria and 12 patients were tested positive for bacteria, among which 10 patients had a bacterial number above 1×10^5/g and 2 patients had a bacterial number below 1×10^5/g. According to the results of the filter paper method, 8 patients were tested negative for bacteria and 10 patients were tested positive for bacteria, among which 7 patients had a bacterial number above 1×10^5/g and 3 patients had a bacterial number below 1×10^5/g. There were 7 patients tested positive for bacteria by both the biopsy method and the filter paper method, 8 patients tested negative for bacteria by both methods, and 3 patients tested positive for bacteria by the biopsy method but negative by the filter paper method. Patients tested negative for bacteria by the biopsy method did not test positive for bacteria by the filter paper method. There was a directional association between the detection results of the two methods (P=0.004), i.e. if the result of the biopsy method was positive, the result of the filter paper method could also be positive. There was no obvious difference in the detection results of the two methods (P=0.250).
The consistency between the detection results of the two methods was ordinary (Kappa=0.68, P=0.002). (3) The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of the filter paper method in bacteria detection were 70%, 100%, 1.00, 0.73, and 83.3%, respectively. The total area under the ROC curve of bacteria detection by the filter paper method in the 18 patients was 0.919 (with 95% confidence interval 0-1.000, P=0.030). (4) There were 13 strains of bacteria detected by the biopsy method, with 5 strains of Acinetobacter baumannii, 5 strains of Staphylococcus aureus, 1 strain of Pseudomonas aeruginosa, 1 strain of Streptococcus bovis, and 1 strain of Enterococcus avium. There were 11 strains of bacteria detected by the filter paper method, with 5 strains of Acinetobacter baumannii, 3 strains of Staphylococcus aureus, 1 strain of Pseudomonas aeruginosa, 1 strain of Streptococcus bovis, and 1 strain of Enterococcus avium. Except for Staphylococcus aureus, the sensitivity and specificity of the filter paper method in the detection of the other 4 bacteria were all 100%. The consistency between the filter paper method and the biopsy method in detecting Acinetobacter baumannii was good (Kappa=1.00, P<0.01), while that in detecting Staphylococcus aureus was ordinary (Kappa=0.68, P<0.05). (5) There was no obvious correlation between the bacteria number of wounds detected by the filter paper method and that by the biopsy method (r=0.257, P=0.419). There was obvious correlation between the bacteria numbers detected by the two methods in wounds of Texas grade 1 and 2 (with r values of 0.999, P values of 0.001). There was no obvious correlation between the bacteria numbers detected by the two methods in wounds of Texas grade 3 (r=-0.053, P=0.947). Conclusions: The detection result of the filter paper method is in accordance with that of the biopsy method in the determination of bacterial infection, and it is of great importance in the diagnosis of local infection of diabetic foot wounds.
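The reported performance measures follow directly from the 2 x 2 agreement table given above (7 positive by both methods, 8 negative by both, 3 biopsy-positive but filter-paper-negative, 0 the reverse); a small Python sketch of the arithmetic:

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, accuracy and Cohen's kappa
    of an index test evaluated against a reference standard."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    acc = (tp + tn) / n
    # chance-corrected agreement (Cohen's kappa)
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (acc - p_exp) / (1 - p_exp)
    return sens, spec, ppv, npv, acc, kappa

# counts from the study: filter paper method vs. biopsy (reference)
print(diagnostic_metrics(tp=7, fp=0, fn=3, tn=8))
# -> sensitivity 0.70, specificity 1.00, PPV 1.00, NPV ~0.73, accuracy ~0.833,
#    Cohen's kappa ~0.67 (close to the reported 0.68)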
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-01
... Hydrogen Peroxide Filter Extraction'' In this method, total suspended particulate matter (TSP) is collected on glass fiber filters according to 40 CFR Appendix G to part 50, EPA Reference Method for the Determination of Lead in Suspended Particulate Matter Collected From Ambient Air. The filter samples are...
NASA Astrophysics Data System (ADS)
Gonzalez, Pablo J.
2017-04-01
Automatic interferometric processing of satellite radar data has emerged as a solution to the increasing amount of acquired SAR data. Automatic SAR and InSAR processing ranges from focusing raw echoes to the computation of displacement time series using large stacks of co-registered radar images. However, this type of interferometric processing approach demands the pre-described or adaptive selection of multiple processing parameters. One of the interferometric processing steps that most strongly influences the final results (displacement maps) is the interferometric phase filtering. There is a large number of phase filtering methods; however, the so-called Goldstein filtering method is the most popular [Goldstein and Werner, 1998; Baran et al., 2003]. The Goldstein filter basically needs two parameters: the size of the filter window and a parameter that sets the filter smoothing intensity. The modified Goldstein method removes the need to select the smoothing parameter by basing it on the local interferometric coherence level, but it still requires specifying the dimension of the filtering window. Optimal filtered phase quality usually requires careful selection of those parameters. Therefore, there is a strong need to develop filtering methods suitable for automatic processing while maximizing filtered phase quality. In this paper, I present a recursive adaptive phase filtering algorithm for accurate estimation of differential interferometric ground deformation and local coherence measurements. The proposed filter is based upon the modified Goldstein filter [Baran et al., 2003]. This filtering method improves the quality of the interferograms by performing a recursive iteration using variable (cascade) kernel sizes, and it improves the coherence estimation by locally defringing the interferometric phase. The method has been tested using simulations and real cases relevant to the characteristics of the Sentinel-1 mission. I present real examples from C-band interferograms showing strong and weak deformation gradients, with moderate baselines (100-200 m) and temporal baselines of 70 and 190 days over variably vegetated volcanoes (Mt. Etna, Hawaii and Nyiragongo-Nyamulagira). The differential phase of these examples shows intense localized volcano deformation and also vast areas of small differential phase variation. The proposed method outperforms the classical Goldstein and modified Goldstein filters by preserving subtle phase variations where the deformation fringe rate is high, and by effectively suppressing phase noise in regions of smooth phase variation. Finally, this method also has the additional advantage of not requiring input parameters, except for the maximum filtering kernel size. References: Baran, I., Stewart, M.P., Kampes, B.M., Perski, Z., Lilly, P., (2003) A modification to the Goldstein radar interferogram filter. IEEE Transactions on Geoscience and Remote Sensing, vol. 41, No. 9, doi:10.1109/TGRS.2003.817212. Goldstein, R.M., Werner, C.L. (1998) Radar interferogram filtering for geophysical applications, Geophysical Research Letters, vol. 25, No. 21, 4035-4038, doi:10.1029/1998GL900033.
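For reference, a compact NumPy sketch of the classical Goldstein patch filter that both the modified filter and the proposed recursive method build on; the smoothing window and alpha below are illustrative, and the cascaded kernels and local defringing of the proposed method are not shown.

import numpy as np
from scipy.ndimage import uniform_filter

def goldstein_patch(patch, alpha=0.5, smooth=3):
    """Goldstein-filter one complex interferogram patch (e.g. 32 x 32).
    alpha = 0 leaves the patch unchanged; larger alpha filters more strongly."""
    Z = np.fft.fft2(patch)
    S = uniform_filter(np.abs(Z), smooth)        # smoothed amplitude spectrum
    H = (S / (S.max() + 1e-12)) ** alpha         # spectral weighting
    return np.fft.ifft2(Z * H)

# usage: filt = goldstein_patch(np.exp(1j * phase_patch), alpha=0.7)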
Customer Loyalty in Virtual Environments: An Empirical Study in e-Bank
NASA Astrophysics Data System (ADS)
Chao, Yu; Lee, Gin-Yuan; Ho, Yung-Ching
2009-08-01
The advent of e-commerce has increased the importance of consumer financing operations. Internet banking helps banks to develop relationship marketing and thus improve customer loyalty. This study proposes a research framework to examine the relationships among e-service quality, customer satisfaction, customer trust and e-loyalty in e-banking in Taiwan. Data are collected through a survey using a structured questionnaire. The 442 valid respondents who have experience with e-banking are analyzed by the partial least squares structural equation modeling (PLS-SEM) method. The managerial implication is that e-banks must focus on e-service quality to increase customer satisfaction and trust in order to obtain e-loyalty.
Method and apparatus for a self-cleaning filter
Diebold, James P.; Lilley, Arthur; Browne, III, Kingsbury; Walt, Robb Ray; Duncan, Dustin; Walker, Michael; Steele, John; Fields, Michael
2013-09-10
A method and apparatus for removing fine particulate matter from a fluid stream without interrupting the overall process or flow. The flowing fluid inflates and expands the flexible filter, and particulate is deposited on the filter media while clean fluid is permitted to pass through the filter. This filter is cleaned when the fluid flow is stopped, the filter collapses, and a force is applied to distort the flexible filter media to dislodge the built-up filter cake. The dislodged filter cake falls to a location that allows undisrupted flow of the fluid after flow is restored. The shed particulate is removed to a bin for periodic collection. A plurality of filter cells can operate independently or in concert, in parallel, or in series to permit cleaning the filters without shutting off the overall fluid flow. The self-cleaning filter is low cost, has low power consumption, and exhibits low differential pressures.
Method and apparatus for a self-cleaning filter
Diebold, James P.; Lilley, Arthur; Browne, III, Kingsbury; Walt, Robb Ray; Duncan, Dustin; Walker, Michael; Steele, John; Fields, Michael
2010-11-16
A method and apparatus for removing fine particulate matter from a fluid stream without interrupting the overall process or flow. The flowing fluid inflates and expands the flexible filter, and particulate is deposited on the filter media while clean fluid is permitted to pass through the filter. This filter is cleaned when the fluid flow is stopped, the filter collapses, and a force is applied to distort the flexible filter media to dislodge the built-up filter cake. The dislodged filter cake falls to a location that allows undisrupted flow of the fluid after flow is restored. The shed particulate is removed to a bin for periodic collection. A plurality of filter cells can operate independently or in concert, in parallel, or in series to permit cleaning the filters without shutting off the overall fluid flow. The self-cleaning filter is low cost, has low power consumption, and exhibits low differential pressures.
Ithapu, Vamsi; Singh, Vikas; Lindner, Christopher; Austin, Benjamin P; Hinrichs, Chris; Carlsson, Cynthia M; Bendlin, Barbara B; Johnson, Sterling C
2014-08-01
Precise detection and quantification of white matter hyperintensities (WMH) observed in T2-weighted Fluid Attenuated Inversion Recovery (FLAIR) Magnetic Resonance Images (MRI) is of substantial interest in aging, and age-related neurological disorders such as Alzheimer's disease (AD). This is mainly because WMH may reflect co-morbid neural injury or cerebral vascular disease burden. WMH in the older population may be small, diffuse, and irregular in shape, and sufficiently heterogeneous within and across subjects. Here, we pose hyperintensity detection as a supervised inference problem and adapt two learning models, specifically, Support Vector Machines and Random Forests, for this task. Using texture features engineered by texton filter banks, we provide a suite of effective segmentation methods for this problem. Through extensive evaluations on healthy middle-aged and older adults who vary in AD risk, we show that our methods are reliable and robust in segmenting hyperintense regions. A measure of hyperintensity accumulation, referred to as normalized effective WMH volume, is shown to be associated with dementia in older adults and parental family history in cognitively normal subjects. We provide an open source library for hyperintensity detection and accumulation (interfaced with existing neuroimaging tools), that can be adapted for segmentation problems in other neuroimaging studies. Copyright © 2014 Wiley Periodicals, Inc.
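A minimal sketch of the per-pixel "filter-bank features plus supervised classifier" recipe, here using a few Gabor kernels from scikit-image and a scikit-learn random forest; the texton features and the library released with the paper are not reproduced, and flair_slice and wmh_mask are assumed user-supplied 2-D arrays.

import numpy as np
from scipy.ndimage import convolve
from skimage.filters import gabor_kernel
from sklearn.ensemble import RandomForestClassifier

def filter_bank_features(img, frequencies=(0.1, 0.25), n_orient=4):
    """Stack per-pixel responses of a small Gabor filter bank (plus raw intensity)."""
    feats = [img]
    for f in frequencies:
        for k in range(n_orient):
            kern = np.real(gabor_kernel(f, theta=k * np.pi / n_orient))
            feats.append(convolve(img, kern, mode='reflect'))
    return np.stack(feats, axis=-1)              # H x W x n_features

# flair_slice: 2-D FLAIR image, wmh_mask: 2-D binary label map (assumed given)
# X = filter_bank_features(flair_slice).reshape(-1, 1 + 2 * 4)
# y = wmh_mask.reshape(-1)
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
# pred = clf.predict(X).reshape(flair_slice.shape)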
Field Programmable Gate Array Apparatus, Method, and Computer Program
NASA Technical Reports Server (NTRS)
Morfopoulos, Arin C. (Inventor); Pham, Thang D. (Inventor)
2014-01-01
An apparatus is provided that includes a plurality of modules, a plurality of memory banks, and a multiplexor. Each module includes at least one agent that interfaces between a module and a memory bank. Each memory bank includes an arbiter that interfaces between the at least one agent of each module and the memory bank. The multiplexor is configured to assign data paths between the at least one agent of each module and a corresponding arbiter of each memory bank. Over the assigned data path, the at least one agent of each module is configured to read data from the corresponding arbiter of the memory bank or write modified data to the corresponding arbiter of the memory bank.
Lefebvre, Carol; Glanville, Julie; Beale, Sophie; Boachie, Charles; Duffy, Steven; Fraser, Cynthia; Harbour, Jenny; McCool, Rachael; Smith, Lynne
2017-11-01
Effective study identification is essential for conducting health research, developing clinical guidance and health policy and supporting health-care decision-making. Methodological search filters (combinations of search terms to capture a specific study design) can assist in searching to achieve this. This project investigated the methods used to assess the performance of methodological search filters, the information that searchers require when choosing search filters and how that information could be better provided. Five literature reviews were undertaken in 2010/11: search filter development and testing; comparison of search filters; decision-making in choosing search filters; diagnostic test accuracy (DTA) study methods; and decision-making in choosing diagnostic tests. We conducted interviews and a questionnaire with experienced searchers to learn what information assists in the choice of search filters and how filters are used. These investigations informed the development of various approaches to gathering and reporting search filter performance data. We acknowledge that there has been a regrettable delay between carrying out the project, including the searches, and the publication of this report, because of serious illness of the principal investigator. The development of filters most frequently involved using a reference standard derived from hand-searching journals. Most filters were validated internally only. Reporting of methods was generally poor. Sensitivity, precision and specificity were the most commonly reported performance measures and were presented in tables. Aspects of DTA study methods are applicable to search filters, particularly in the development of the reference standard. There is limited evidence on how clinicians choose between diagnostic tests. No published literature was found on how searchers select filters. Interviewing and questioning searchers via a questionnaire found that filters were not appropriate for all tasks but were predominantly used to reduce large numbers of retrieved records and to introduce focus. The Inter Technology Appraisal Support Collaboration (InterTASC) Information Specialists' Sub-Group (ISSG) Search Filters Resource was most frequently mentioned by both groups as the resource consulted to select a filter. Randomised controlled trial (RCT) and systematic review filters, in particular the Cochrane RCT and the McMaster Hedges filters, were most frequently mentioned. The majority indicated that they used different filters depending on the requirement for sensitivity or precision. Over half of the respondents used the filters available in databases. Interviewees used various approaches when using and adapting search filters. Respondents suggested that the main factors that would make choosing a filter easier were the availability of critical appraisals and more detailed performance information. Provenance and having the filter available in a central storage location were also important. The questionnaire could have been shorter and could have included more multiple choice questions, and the reviews of filter performance focused on only four study designs. Search filter studies should use a representative reference standard and explicitly report methods and results. Performance measures should be presented systematically and clearly. Searchers find filters useful in certain circumstances but expressed a need for more user-friendly performance information to aid filter choice. 
We suggest approaches to use, adapt and report search filter performance. Future work could include research around search filters and performance measures for study designs not addressed here, exploration of alternative methods of displaying performance results and numerical synthesis of performance comparison results. The National Institute for Health Research (NIHR) Health Technology Assessment programme and Medical Research Council-NIHR Methodology Research Programme (grant number G0901496).
Su, Gui-yang; Li, Jian-hua; Ma, Ying-hua; Li, Sheng-hong
2004-09-01
With the flooding of pornographic information on the Internet, how to keep people away from that offensive information is becoming one of the most important research areas in network information security. Some applications which can block or filter such information are used. Approaches in those systems can be roughly classified into two kinds: metadata based and content based. With the development of distributed technologies, content based filtering technologies will play a more and more important role in filtering systems. Keyword matching is a content based method used widely in harmful text filtering. Experiments to evaluate the recall and precision of the method showed that the precision of the method is not satisfactory, though the recall of the method is rather high. According to the results, a new pornographic text filtering model based on reconfirming is put forward. Experiments showed that the model is practical, has less loss of recall than the single keyword matching method, and has higher precision.
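A toy Python sketch of the baseline keyword-matching filter and its recall/precision evaluation; the keyword list and labelled corpus are placeholders, and the reconfirming stage of the proposed model is not shown.

def keyword_filter(text, keywords):
    """Flag a document as harmful if any keyword occurs in it."""
    text = text.lower()
    return any(k in text for k in keywords)

def evaluate(docs, labels, keywords):
    """Recall and precision of the keyword filter on a labelled corpus."""
    preds = [keyword_filter(d, keywords) for d in docs]
    tp = sum(p and l for p, l in zip(preds, labels))
    fp = sum(p and not l for p, l in zip(preds, labels))
    fn = sum((not p) and l for p, l in zip(preds, labels))
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return recall, precision

# usage: r, p = evaluate(corpus_texts, corpus_labels, ["keyword1", "keyword2"])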
Robotic fish tracking method based on suboptimal interval Kalman filter
NASA Astrophysics Data System (ADS)
Tong, Xiaohong; Tang, Chao
2017-11-01
Autonomous Underwater Vehicle (AUV) research has focused on tracking and positioning, precise guidance, return to dock and other fields. Robotic fish, as a type of AUV, have become a popular application in intelligent education as well as in civil and military fields. In the nonlinear tracking analysis of robotic fish, it was found that the interval Kalman filter algorithm contains all possible filter results, but the interval is wide and relatively conservative, and the interval data vector is uncertain before implementation. This paper proposes an optimization algorithm for a suboptimal interval Kalman filter. The suboptimal interval Kalman filter scheme uses the worst-case inverse in place of the interval inverse matrix; it approximates the nonlinear state and measurement equations more closely than the standard interval Kalman filter, increases the accuracy of the nominal dynamic system model, and improves the speed and precision of the tracking system. Monte Carlo simulation results show that the estimated trajectory of the suboptimal interval Kalman filter algorithm is better than those of the interval Kalman filter method and the standard Kalman filter.
Methodology for processing pressure traces used as inputs for combustion analyses in diesel engines
NASA Astrophysics Data System (ADS)
Rašić, Davor; Vihar, Rok; Žvar Baškovič, Urban; Katrašnik, Tomaž
2017-05-01
This study proposes a novel methodology for designing an optimum equiripple finite impulse response (FIR) filter for processing in-cylinder pressure traces of a diesel internal combustion engine, which serve as inputs for high-precision combustion analyses. The proposed automated workflow is based on an innovative approach of determining the transition band frequencies and optimum filter order. The methodology is based on discrete Fourier transform analysis, which is the first step to estimate the location of the pass-band and stop-band frequencies. The second step uses short-time Fourier transform analysis to refine the aforementioned estimated frequencies. These pass-band and stop-band frequencies are further used to determine the most appropriate FIR filter order. The most widely used existing methods for estimating the FIR filter order are not effective in suppressing the oscillations in the rate-of-heat-release (ROHR) trace, thus hindering the accuracy of combustion analyses. To address this problem, an innovative method for determining the order of an FIR filter is proposed in this study. This method is based on the minimization of the integral of normalized signal-to-noise differences between the stop-band frequency and the Nyquist frequency. The developed filters were validated using spectral analysis and calculation of the ROHR. The validation results showed that the filters designed using the proposed innovative method were superior compared with those using the existing methods for all analyzed cases. Highlights: • Pressure traces of a diesel engine were processed by finite impulse response (FIR) filters with different orders • Transition band frequencies were determined with an innovative method based on discrete Fourier transform and short-time Fourier transform • Spectral analyses showed deficiencies of existing methods in determining the FIR filter order • A new method of determining the FIR filter order for processing pressure traces was proposed • The efficiency of the new method was demonstrated by spectral analyses and calculations of rate-of-heat-release traces
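Once the pass-band and stop-band edges have been estimated, the equiripple FIR design itself can be carried out with the Parks-McClellan (remez) routine in SciPy; a short sketch with illustrative sampling rate, band edges and order (not the values produced by the paper's automated workflow):

import numpy as np
from scipy.signal import remez, filtfilt

fs = 50000.0                      # sampling rate of the pressure trace [Hz] (assumed)
f_pass, f_stop = 3000.0, 5000.0   # transition band edges [Hz] (assumed)
numtaps = 101                     # filter length (assumed, not the optimised order)

# equiripple FIR low-pass: unity gain in the pass band, zero in the stop band
taps = remez(numtaps, [0, f_pass, f_stop, fs / 2], [1, 0], fs=fs)

# p_raw: measured in-cylinder pressure trace as a 1-D array (assumed given)
# p_filt = filtfilt(taps, [1.0], p_raw)   # zero-phase application of the FIR filter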
Gibbons, C D; Rodríguez, R A; Tallon, L; Sobsey, M D
2010-08-01
To evaluate the electropositive, alumina nanofibre (NanoCeram) cartridge filter as a primary concentration method for recovering adenovirus, norovirus and male-specific coliphages from natural seawater. Viruses were concentrated from 40 l of natural seawater using a NanoCeram cartridge filter and eluted from the filter either by soaking the filter in eluent or by recirculating the eluent continuously through the filter using a peristaltic pump. The elution solution consisted of 3% beef extract and 0.1 mol l(-1) of glycine. The method using a peristaltic pump was more effective in removing the viruses from the filter. High recoveries of norovirus and male-specific coliphages (>96%) but not adenovirus (<3%) were observed from seawater. High adsorption to the filter was observed for adenovirus and male-specific coliphages (>98%). The adsorption and recovery of adenovirus and male-specific coliphages were also determined for fresh finished water and source water. The NanoCeram cartridge filter was an effective primary concentration method for the concentration of norovirus and male-specific coliphages from natural seawater, but not for adenovirus, in spite of the high adsorption of adenovirus to the filter. This study demonstrates that NanoCeram cartridge filter is an effective primary method for concentrating noroviruses and male-specific coliphages from seawater, thereby simplifying collection and processing of water samples for virus recovery.
Method for reducing pressure drop through filters, and filter exhibiting reduced pressure drop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sappok, Alexander; Wong, Victor
Methods for generating and applying coatings to filters with porous material in order to reduce large pressure drop increases as material accumulates in a filter, as well as the filter exhibiting reduced and/or more uniform pressure drop. The filter can be a diesel particulate trap for removing particulate matter such as soot from the exhaust of a diesel engine. Porous material such as ash is loaded on the surface of the substrate or filter walls, such as by coating, depositing, distributing or layering the porous material along the channel walls of the filter in an amount effective for minimizing or preventing depth filtration during use of the filter. Efficient filtration at acceptable flow rates is achieved.
Robust Lane Sensing and Departure Warning under Shadows and Occlusions
Tapia-Espinoza, Rodolfo; Torres-Torriti, Miguel
2013-01-01
A prerequisite for any system that enhances drivers' awareness of road conditions and threatening situations is the correct sensing of the road geometry and the vehicle's relative pose with respect to the lane despite shadows and occlusions. In this paper we propose an approach for lane segmentation and tracking that is robust to varying shadows and occlusions. The approach involves color-based clustering, the use of MSAC for outlier removal and curvature estimation, and also the tracking of lane boundaries. Lane boundaries are modeled as planar curves residing in 3D-space using an inverse perspective mapping, instead of the traditional tracking of lanes in the image space, i.e., the segmented lane boundary points are 3D points in a coordinate frame fixed to the vehicle that have a depth component and belong to a plane tangent to the vehicle's wheels, rather than 2D points in the image space without depth information. The measurement noise and disturbances due to vehicle vibrations are reduced using an extended Kalman filter that involves a 6-DOF motion model for the vehicle, as well as measurements about the road's banking and slope angles. Additional contributions of the paper include: (i) the comparison of textural features obtained from a bank of Gabor filters and from a GMRF model; and (ii) the experimental validation of the quadratic and cubic approximations to the clothoid model for the lane boundaries. The results show that the proposed approach performs better than the traditional gradient-based approach under different levels of difficulty caused by shadows and occlusions. PMID:23478598
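A small scikit-learn sketch of robustly fitting the quadratic lane-boundary approximation to segmented boundary points expressed in the vehicle frame, using RANSAC-style outlier rejection closely related to the MSAC step mentioned above; the coordinate arrays below are toy values standing in for the segmented points.

import numpy as np
from sklearn.linear_model import RANSACRegressor
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

# x: lateral offsets and z: depths of segmented lane-boundary points (toy data, two outliers)
x = np.array([1.8, 1.9, 2.0, 2.2, 2.5, 2.9, 0.2, 3.4])
z = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 5.0, 7.0])

# quadratic lane model x(z) = a0 + a1*z + a2*z^2, fitted with RANSAC outlier rejection
model = make_pipeline(PolynomialFeatures(degree=2),
                      RANSACRegressor(residual_threshold=0.2))
model.fit(z.reshape(-1, 1), x)

# evaluate the fitted boundary at a few look-ahead distances
x_pred = model.predict(np.linspace(0.0, 12.0, 7).reshape(-1, 1))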
Do understorey or overstorey traits drive tree encroachment on a drained raised bog?
Jagodziński, A M; Horodecki, P; Rawlik, K; Dyderski, M K
2017-07-01
One of the most important threats to peatland ecosystems is drainage, resulting in encroachment of woody species. Our main aim was to check which features - overstorey or understorey vegetation - are more important for shaping the seedling bank of pioneer trees colonising peatlands (Pinus sylvestris and Betula pubescens). We hypothesised that tree stand parameters will be more important predictors of natural regeneration density than understorey vegetation parameters, and the former will be negatively correlated with species diversity and richness and also with functional richness and functional dispersion, which indicate a high level of habitat filtering. The study was conducted in the 'Zielone Bagna' nature reserve (NW Poland). We assessed the structure of tree stands and natural regeneration (of B. pubescens and P. sylvestris) and vegetation species composition. Random forest and DCA were applied to assess relationships between variables studied. Understorey vegetation traits affected tree seedling density (up to 0.5-m height) more than tree stand traits. Density of older seedlings depended more on tree stand traits. We did not find statistically significant relationships between natural regeneration densities and functional diversity components, except for functional richness, which was positively correlated with density of the youngest tree seedlings. Seedling densities were higher in plots with lower functional dispersion and functional divergence, which indicated that habitat filtering is more important than competition. Presence of an abundant seedling bank is crucial for the process of woody species encroachment on drained peatlands, thus its dynamics should be monitored in protected areas. © 2017 German Botanical Society and The Royal Botanical Society of the Netherlands.
Discrimination of Nosiheptide Sources with Plasmonic Filters.
Wang, Delong; Ni, Haibin; Wang, Zhongqiang; Liu, Bing; Chen, Hongyuan; Gu, Zhongze; Zhao, Xiangwei
2017-04-19
Bacteria identification plays a vital role in clinical diagnosis, the food industry, and environmental monitoring, all of which are in great need of point-of-care detection methods. In this paper, in order to discriminate the source of a nosiheptide product, a plasmonic filter was fabricated to filter, capture and identify Streptomycete spores with surface-enhanced Raman scattering (SERS). Since the plasmonic filter was derived from a self-assembled photonic crystal coated with silver, the plasmonic "hot spots" on the filter surface were distributed evenly at a fairly good density and the SERS enhancement factor was 7.49 × 10^7. With this filter, a stain- and PCR-free detection was realized with only 5 μL of sample solution and 5 min in a "filtration and measure" manner. Compared with the traditional Gram stain method and a silver-plated nylon filter membrane, the plasmonic filter showed good sensitivity and efficiency in discriminating nosiheptide prepared with chemical and biological methods. It is anticipated that this simple SERS detection method with a plasmonic filter has promising potential in food safety, environmental, or clinical applications.
Switching non-local vector median filter
NASA Astrophysics Data System (ADS)
Matsuoka, Jyohei; Koga, Takanori; Suetake, Noriaki; Uchino, Eiji
2016-04-01
This paper describes a novel image filtering method that removes random-valued impulse noise superimposed on a natural color image. In impulse noise removal, it is essential to employ a switching-type filtering method, as used in the well-known switching median filter, to preserve the detail of an original image with good quality. In color image filtering, it is generally preferable to deal with the red (R), green (G), and blue (B) components of each pixel of a color image as elements of a vectorized signal, as in the well-known vector median filter, rather than as component-wise signals, to prevent a color shift after filtering. By taking these fundamentals into consideration, we propose a switching-type vector median filter with non-local processing that mainly consists of a noise detector and a noise removal filter. Concretely, we propose a noise detector that proactively detects noise-corrupted pixels by focusing attention on the isolation tendencies of pixels of interest, not in the input image but in difference images between the RGB components. Furthermore, as the noise removal filter, we propose an extended version of the non-local median filter that we previously proposed for grayscale image processing, named the non-local vector median filter, which is designed for color image processing. The proposed method realizes a superior balance between the preservation of detail and impulse noise removal by proactive noise detection and non-local switching vector median filtering, respectively. The effectiveness and validity of the proposed method are verified in a series of experiments using natural color images.
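For context, a brute-force NumPy sketch of the plain local vector median filter that the proposed method extends: within each window, the output pixel is the RGB vector whose summed distance to all other vectors in the window is smallest. The noise detector and non-local search of the paper are not included, and the loop-based implementation favours clarity over speed.

import numpy as np

def vector_median_filter(img, win=3):
    """Plain vector median filter for an H x W x 3 color image."""
    r = win // 2
    pad = np.pad(img.astype(float), ((r, r), (r, r), (0, 0)), mode='edge')
    out = np.empty(img.shape, dtype=float)
    H, W = img.shape[:2]
    for i in range(H):
        for j in range(W):
            block = pad[i:i + win, j:j + win].reshape(-1, 3)   # window vectors
            # summed Euclidean distance of every vector to all the others
            d = np.linalg.norm(block[:, None, :] - block[None, :, :], axis=2).sum(axis=1)
            out[i, j] = block[np.argmin(d)]
    return out

# usage: denoised = vector_median_filter(noisy_rgb, win=3)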
Glass wool filters for concentrating waterborne viruses and agricultural zoonotic pathogens
Millen, Hana T.; Gonnering, Jordan C.; Berg, Ryan K.; Spencer, Susan K.; Jokela, William E.; Pearce, John M.; Borchardt, Jackson S.; Borchardt, Mark A.
2012-01-01
The key first step in evaluating pathogen levels in suspected contaminated water is concentration. Concentration methods tend to be specific for a particular pathogen group, for example US Environmental Protection Agency Method 1623 for Giardia and Cryptosporidium1, which means multiple methods are required if the sampling program is targeting more than one pathogen group. Another drawback of current methods is the equipment can be complicated and expensive, for example the VIRADEL method with the 1MDS cartridge filter for concentrating viruses2. In this article we describe how to construct glass wool filters for concentrating waterborne pathogens. After filter elution, the concentrate is amenable to a second concentration step, such as centrifugation, followed by pathogen detection and enumeration by cultural or molecular methods. The filters have several advantages. Construction is easy and the filters can be built to any size for meeting specific sampling requirements. The filter parts are inexpensive, making it possible to collect a large number of samples without severely impacting a project budget. Large sample volumes (100s to 1,000s L) can be concentrated depending on the rate of clogging from sample turbidity. The filters are highly portable and with minimal equipment, such as a pump and flow meter, they can be implemented in the field for sampling finished drinking water, surface water, groundwater, and agricultural runoff. Lastly, glass wool filtration is effective for concentrating a variety of pathogen types so only one method is necessary. Here we report on filter effectiveness in concentrating waterborne human enterovirus, Salmonella enterica, Cryptosporidium parvum, and avian influenza virus.
Adaptive Filtering Using Recurrent Neural Networks
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.
2005-01-01
A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum- variance filters. In that they do not require statistical models of noise, the neural- network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
Analytical study to define a helicopter stability derivative extraction method, volume 1
NASA Technical Reports Server (NTRS)
Molusis, J. A.
1973-01-01
A method is developed for extracting six-degree-of-freedom stability and control derivatives from helicopter flight data. Different combinations of filtering and derivative estimation are investigated and used with a Bayesian approach for derivative identification. The combination of filtering and estimation found to yield the most accurate time response match to flight test data is determined and applied to CH-53A and CH-54B flight data. The method found to be most accurate consists of (1) filtering flight test data with a digital filter followed by an extended Kalman filter, (2) identifying a derivative estimate with a least-squares estimator, and (3) obtaining derivatives with the Bayesian derivative extraction method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... safety deposit box or other safekeeping services, or cash management, custodian, and trust services. (ii... documents, non-documentary methods, or a combination of both methods as described in this paragraph (b)(2... agreement, or trust instrument. (B) Verification through non-documentary methods. For a bank relying on non...
26 CFR 1.585-7 - Elective cut-off method of changing from the reserve method of section 585.
Code of Federal Regulations, 2010 CFR
2010-04-01
... § 1.585-7 Elective cut-off method of changing from the reserve method of section 585. (a) General rule... section, and the bank must include in income any excess balance in this reserve, as required by paragraph...-disqualification loans—(1) In general. A bank that makes the election allowed by paragraph (a) of this section must...
Comparison of sEMG processing methods during whole-body vibration exercise.
Lienhard, Karin; Cabasson, Aline; Meste, Olivier; Colson, Serge S
2015-12-01
The objective was to investigate the influence of surface electromyography (sEMG) processing methods on the quantification of muscle activity during whole-body vibration (WBV) exercises. sEMG activity was recorded while the participants performed squats on the platform with and without WBV. The spikes observed in the sEMG spectrum at the vibration frequency and its harmonics were deleted using state-of-the-art methods, i.e. (1) a band-stop filter, (2) a band-pass filter, and (3) spectral linear interpolation. The same filtering methods were applied on the sEMG during the no-vibration trial. The linear interpolation method showed the highest intraclass correlation coefficients (no vibration: 0.999, WBV: 0.757-0.979) with the comparison measure (unfiltered sEMG during the no-vibration trial), followed by the band-stop filter (no vibration: 0.929-0.975, WBV: 0.661-0.938). While both methods introduced a systematic bias (P < 0.001), the error increased with increasing mean values to a higher degree for the band-stop filter. After adjusting the sEMG(RMS) during WBV for the bias, the performance of the interpolation method and the band-stop filter was comparable. The band-pass filter was in poor agreement with the other methods (ICC: 0.207-0.697), unless the sEMG(RMS) was corrected for the bias (ICC ⩾ 0.931, %LOA ⩽ 32.3). In conclusion, spectral linear interpolation or a band-stop filter centered at the vibration frequency and its multiple harmonics should be applied to delete the artifacts in the sEMG signals during WBV. With the use of a band-stop filter it is recommended to correct the sEMG(RMS) for the bias as this procedure improved its performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
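A short SciPy sketch of the band-stop variant: one narrow IIR notch per vibration harmonic, applied with zero-phase filtering. The sampling rate, vibration frequency, number of harmonics and quality factor are assumed values, and the spectral linear-interpolation method is not shown.

import numpy as np
from scipy.signal import iirnotch, filtfilt

def remove_vibration_artifacts(emg, fs=1000.0, f_vib=30.0, n_harm=5, q=30.0):
    """Suppress WBV artifacts at the vibration frequency and its harmonics."""
    out = np.asarray(emg, dtype=float)
    for k in range(1, n_harm + 1):
        f0 = k * f_vib
        if f0 >= fs / 2:                 # stay below the Nyquist frequency
            break
        b, a = iirnotch(f0, q, fs=fs)    # narrow notch centred on the k-th harmonic
        out = filtfilt(b, a, out)        # zero-phase application
    return out

# usage: cleaned = remove_vibration_artifacts(raw_emg, fs=1000.0, f_vib=30.0)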
Optimization of OT-MACH Filter Generation for Target Recognition
NASA Technical Reports Server (NTRS)
Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin
2009-01-01
An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak to side lobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter quicker and more reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, gamma values. This corresponded to a substantial improvement in detection performance where the true positive rate increased for the same average false positives per image.
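A generic Python sketch of such an adaptive-step search: finite-difference gradient ascent over (alpha, beta, gamma) driven by a user-supplied score function (for example, a composite of correlation peak height and peak-to-sidelobe ratio). Building the OT-MACH filter and scoring it against training imagery is application-specific and only stubbed here as the hypothetical score callable.

import numpy as np

def tune_otmach_params(score, p0=(0.1, 0.1, 0.1), step=0.05, eps=1e-3, iters=50):
    """Finite-difference gradient ascent on score(alpha, beta, gamma).
    'score' is assumed to build the OT-MACH filter for the given parameters,
    run it on training images and return a composite performance measure."""
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        grad = np.zeros(3)
        for i in range(3):
            dp = np.zeros(3)
            dp[i] = eps
            grad[i] = (score(*(p + dp)) - score(*(p - dp))) / (2 * eps)
        if np.linalg.norm(grad) < 1e-6:
            break
        p += step * grad / (np.linalg.norm(grad) + 1e-12)   # normalised ascent step
        step *= 0.95                                         # adaptive (shrinking) step size
    return p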
Improved Kalman Filter Method for Measurement Noise Reduction in Multi Sensor RFID Systems
Eom, Ki Hwan; Lee, Seung Joon; Kyung, Yeo Sun; Lee, Chang Won; Kim, Min Chul; Jung, Kyung Kwon
2011-01-01
Recently, the range of available Radio Frequency Identification (RFID) tags has been widened to include smart RFID tags which can monitor their varying surroundings. One of the most important factors for better performance of smart RFID system is accurate measurement from various sensors. In the multi-sensing environment, some noisy signals are obtained because of the changing surroundings. We propose in this paper an improved Kalman filter method to reduce noise and obtain correct data. Performance of Kalman filter is determined by a measurement and system noise covariance which are usually called the R and Q variables in the Kalman filter algorithm. Choosing a correct R and Q variable is one of the most important design factors for better performance of the Kalman filter. For this reason, we proposed an improved Kalman filter to advance an ability of noise reduction of the Kalman filter. The measurement noise covariance was only considered because the system architecture is simple and can be adjusted by the neural network. With this method, more accurate data can be obtained with smart RFID tags. In a simulation the proposed improved Kalman filter has 40.1%, 60.4% and 87.5% less Mean Squared Error (MSE) than the conventional Kalman filter method for a temperature sensor, humidity sensor and oxygen sensor, respectively. The performance of the proposed method was also verified with some experiments. PMID:22346641
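A minimal scalar Kalman filter in Python showing where the measurement noise covariance R (the quantity adjusted in the proposed method, here left as a plain constant rather than a neural-network-adjusted value) enters the update; the random-walk process model is an assumption for illustration only.

import numpy as np

def kalman_1d(z, q=1e-4, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state observed in noise.
    q: process noise covariance Q, r: measurement noise covariance R."""
    x, p = x0, p0
    out = []
    for zk in np.asarray(z, dtype=float):
        p = p + q                  # predict step
        k = p / (p + r)            # Kalman gain (smaller when R is large)
        x = x + k * (zk - x)       # update with the measurement
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# usage: smoothed = kalman_1d(noisy_temperature_readings, r=0.8)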
Advanced Filter Technology For Nuclear Thermal Propulsion
NASA Technical Reports Server (NTRS)
Castillon, Erick
2015-01-01
The Scrubber System focuses on using HEPA filters and carbon filtration to purify the exhaust of a Nuclear Thermal Propulsion engine of its aerosols and radioactive particles; however, new technology may lend itself to alternate filtration options, which may reduce cost while providing the same, if not greater, filtering capability as their predecessors. Extensive research on various types of filtration methods was conducted, with only four showing real promise: ionization, cyclonic separation, classic filtration, and host molecules. With the four methods defined, more research was needed to find the devices suitable for each method. Each filtration option was matched with a device: cyclonic separators for the method of the same name, electrostatic separators for ionization, HEGA filters for classic filtration, and carcerands for the host molecule method. Through many hours of research, the best alternative for aerosol filtration was determined to be the electrostatic precipitator because of its high durability against flow rate and its ability to cleanse up to 99.99% of contaminants as small as 0.001 micron. Carcerands, which are the only alternative for filtering radioactive particles, were found to be commercially non-existent because of their status as a "work in progress" at research institutions. Nevertheless, the conclusion after the research was that HEPA filters are recommended as the best option for filtering aerosols and carbon filtration is best for filtering radioactive particles.
Pinson, Paul A.
1998-01-01
A container for hazardous waste materials that includes air or other gas carrying dangerous particulate matter has incorporated in barrier material, preferably in the form of a flexible sheet, one or more filters for the dangerous particulate matter sealably attached to such barrier material. The filter is preferably a HEPA type filter and is preferably chemically bonded to the barrier materials. The filter or filters are preferably flexibly bonded to the barrier material marginally and peripherally of the filter or marginally and peripherally of air or other gas outlet openings in the barrier material, which may be a plastic bag. The filter may be provided with a backing panel of barrier material having an opening or openings for the passage of air or other gas into the filter or filters. Such backing panel is bonded marginally and peripherally thereof to the barrier material or to both it and the filter or filters. A coupling or couplings for deflating and inflating the container may be incorporated. Confining a hazardous waste material in such a container, rapidly deflating the container and disposing of the container, constitutes one aspect of the method of the invention. The chemical bonding procedure for producing the container constitutes another aspect of the method of the invention.
Pinson, P.A.
1998-02-24
A container for hazardous waste materials that includes air or other gas carrying dangerous particulate matter has incorporated barrier material, preferably in the form of a flexible sheet, and one or more filters for the dangerous particulate matter sealably attached to such barrier material. The filter is preferably a HEPA type filter and is preferably chemically bonded to the barrier materials. The filter or filters are preferably flexibly bonded to the barrier material marginally and peripherally of the filter or marginally and peripherally of air or other gas outlet openings in the barrier material, which may be a plastic bag. The filter may be provided with a backing panel of barrier material having an opening or openings for the passage of air or other gas into the filter or filters. Such backing panel is bonded marginally and peripherally thereof to the barrier material or to both it and the filter or filters. A coupling or couplings for deflating and inflating the container may be incorporated. Confining a hazardous waste material in such a container, rapidly deflating the container and disposing of the container, constitutes one aspect of the method of the invention. The chemical bonding procedure for producing the container constitutes another aspect of the method of the invention. 3 figs.
US Public Cord Blood Banking Practices: Recruitment, Donation, and the Timing of Consent
Broder, Sherri; Ponsaran, Roselle; Goldenberg, Aaron
2012-01-01
BACKGROUND Cord blood has moved rapidly from an experimental stem cell source to an accepted and important source of hematopoietic stem cells. There has been no comprehensive assessment of US public cord blood banking practices since the Institute of Medicine study in 2005. STUDY DESIGN AND METHODS Of 34 US public cord blood banks identified, 16 participated in our qualitative survey of public cord blood banking practices. Participants took part in in-depth telephone interviews in which they were asked structured and open-ended questions regarding recruitment, donation, and the informed consent process at these banks. RESULTS 13 of 16 participants reported a variably high percentage of women who consented to public cord blood donation. 15 banks offered donor registration at the time of hospital admission for labor and delivery. 7 obtained full informed consent and medical history during early labor and 8 conducted some form of phased consent and/or phased medical screening and history. 9 participants identified initial selection of the collection site location as the chief mode by which they recruited minority donors. CONCLUSION Since 2005, more public banks offer cord blood donor registration at the time of admission for labor and delivery. That, and the targeted location of cord blood collection sites, are the main methods used to increase access to donation and HLA diversity of banked units. Currently, the ability to collect and process donations, rather than donor willingness, is the major barrier to public cord blood banking. PMID:22803637
Off-line compatible electronic cash method and system
Kravitz, D.W.; Gemmell, P.S.; Brickell, E.F.
1998-11-03
An off-line electronic cash system having an electronic coin, a bank B, a payee S, and a user U with an account at the bank B as well as a user password z_{u,i}, has a method for performing an electronic cash transfer. An electronic coin is withdrawn from the bank B by the user U and an electronic record of the electronic coin is stored by the bank B. The coin is paid to the payee S by the user U. The payee S deposits the coin with the bank B. A determination is made that the coin is spent and the record of the coin is deleted by the bank B. A further deposit of the same coin after the record is deleted is determined. Additionally, a determination is made which user U originally withdrew the coin after deleting the record. To perform these operations a key pair is generated by the user, including public and secret signature keys. The public signature key along with a user password z_{u,i} and a withdrawal amount are sent to the bank B by the user U. In response, the bank B sends a coin to the user U signed by the secret key of the bank indicating the value of the coin and the public key of the user U. The payee S transmits a challenge counter to the user U prior to receiving the coin. 16 figs.
Chung, Namho; Kwon, Soon Jae
2009-10-01
While mobile banking has become an integral part of banking activities, it has also caused systems-related stress and consequent distrust among mobile banking users. This study looks into the phenomenon of technology adoption for mobile banking users and identifies potential factors that nurture positive intentions toward mobile banking usage. It examines the effects of a customer's mobile experience and technical support on mobile banking acceptance and explains how some variables affect this intention. After a literature review, the method of empirical analysis using a structured questionnaire is developed. Hierarchical Moderated Regression Analyses (HMRA) is used to examine the model. We find that mobile experience and technical support tend to strengthen the relationship between technological characteristics and a customer's intention to use the mobile technology.
Method for enhanced longevity of in situ microbial filter used for bioremediation
Carman, M.L.; Taylor, R.T.
1999-03-30
An improved method is disclosed for in situ microbial filter bioremediation that increases the operational longevity of an in situ microbial filter emplaced into an aquifer. A method is presented for generating a microbial filter of sufficient catalytic density and thickness, which has an increased replenishment interval, improved bacteria attachment and detachment characteristics, and endogenous stability under in situ conditions. A system is also disclosed for in situ field water remediation. 31 figs.
40 CFR 60.386 - Test methods and procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
.... The sample volume for each run shall be at least 1.70 dscm (60 dscf). The sampling probe and filter... probe and filter temperature slightly above the effluent temperature (up to a maximum filter temperature of 121 °C (250 °F)) in order to prevent water condensation on the filter. (2) Method 9 and the...
Evaluation of deconvolution modelling applied to numerical combustion
NASA Astrophysics Data System (ADS)
Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît
2018-01-01
A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid scale profiles. Two methodologies are proposed: the first relies on subgrid scale interpolation of deconvolved profiles and the second uses parametric functions to describe small scales. The tests analyse the ability of the methods to capture the chemical structure of the filtered flame and its front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. A priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
Frequency tracking and variable bandwidth for line noise filtering without a reference.
Kelly, John W; Collinger, Jennifer L; Degenhart, Alan D; Siewiorek, Daniel P; Smailagic, Asim; Wang, Wei
2011-01-01
This paper presents a method for filtering line noise using an adaptive noise canceling (ANC) technique. This method effectively eliminates the sinusoidal contamination while achieving a narrower bandwidth than typical notch filters and without relying on the availability of a noise reference signal as ANC methods normally do. A sinusoidal reference is instead digitally generated and the filter efficiently tracks the power line frequency, which drifts around a known value. The filter's learning rate is also automatically adjusted to achieve faster and more accurate convergence and to control the filter's bandwidth. In this paper the focus of the discussion and the data will be electrocorticographic (ECoG) neural signals, but the presented technique is applicable to other recordings.
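To illustrate the core idea described above (an ANC-style canceller driven by a digitally generated sinusoidal reference rather than a measured noise reference), here is a minimal NumPy sketch of a two-weight LMS canceller at a fixed nominal line frequency. The function name, parameters, and the fixed 50 Hz assumption are illustrative; the paper's frequency tracking and automatic learning-rate adjustment are not reproduced.

```python
import numpy as np

def cancel_line_noise(x, fs, f0=50.0, mu=0.01):
    """Remove a sinusoidal interferer from x using a two-weight LMS canceller."""
    n = np.arange(len(x))
    ref_c = np.cos(2 * np.pi * f0 * n / fs)   # in-phase synthesized reference
    ref_s = np.sin(2 * np.pi * f0 * n / fs)   # quadrature synthesized reference
    w = np.zeros(2)                           # adaptive weights
    y = np.empty(len(x))
    for i in range(len(x)):
        est = w[0] * ref_c[i] + w[1] * ref_s[i]            # estimated interference
        e = x[i] - est                                      # cleaned sample
        w += 2 * mu * e * np.array([ref_c[i], ref_s[i]])    # LMS weight update
        y[i] = e
    return y

# Example: a 5 Hz "signal" plus a slightly drifted 50.3 Hz interferer at fs = 1 kHz
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 50.3 * t)
clean = cancel_line_noise(x, fs)
```

The learning rate mu controls the effective notch bandwidth: larger values track frequency drift faster but remove more of the neighbouring signal energy, which is the trade-off the paper's adaptive learning rate addresses.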
NASA Astrophysics Data System (ADS)
Hsiao, Y. R.; Tsai, C.
2017-12-01
As the WHO Air Quality Guideline indicates, ambient air pollution exposes world populations to the threat of fatal conditions (e.g. heart disease, lung cancer, asthma), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, together with four meteorological influencing factors (temperature, relative humidity, precipitation and wind speed), based on the Noise-Assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA) and the Time-Dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals, and is used to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method using a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and estimate the cross-correlation more accurately. The spectral representation performance of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and Wavelet Analysis. The NAMEMD-based TDIC analysis is then compared with CEEMD-based TDIC analysis and traditional correlation analysis.
Liu, Wanli; Bian, Zhengfu; Liu, Zhenguo; Zhang, Qiuzhao
2015-01-01
Differential interferometric synthetic aperture radar has been shown to be effective for monitoring subsidence in coal mining areas. Phase unwrapping can have a dramatic influence on the monitoring result. In this paper, a filtering-based phase unwrapping algorithm in combination with path-following is introduced to unwrap differential interferograms with high noise in mining areas. It can perform simultaneous noise filtering and phase unwrapping so that the pre-filtering steps can be omitted, thus usually retaining more details and improving the detectable deformation. For the method, the nonlinear measurement model of phase unwrapping is processed using a simplified Cubature Kalman filtering, which is an effective and efficient tool used in many nonlinear fields. Three case studies are designed to evaluate the performance of the method. In Case 1, two tests are designed to evaluate the performance of the method under different factors including the number of multi-looks and path-guiding indexes. The result demonstrates that the unwrapped results are sensitive to the number of multi-looks and that the Fisher Distance is the most suitable path-guiding index for our study. Two case studies are then designed to evaluate the feasibility of the proposed phase unwrapping method based on Cubature Kalman filtering. The results indicate that, compared with the popular Minimum Cost Flow method, the Cubature Kalman filtering-based phase unwrapping can achieve promising results without pre-filtering and is an appropriate method for coal mining areas with high noise. PMID:26153776
Major, Kevin J; Poutous, Menelaos K; Ewing, Kenneth J; Dunnill, Kevin F; Sanghera, Jasbinder S; Aggarwal, Ishwar D
2015-09-01
Optical filter-based chemical sensing techniques provide a new avenue to develop low-cost infrared sensors. These methods utilize multiple infrared optical filters to selectively measure different response functions for various chemicals, dependent on each chemical's infrared absorption. Rather than identifying distinct spectral features, which can then be used to determine the identity of a target chemical, optical filter-based approaches rely on measuring differences in the ensemble response between a given filter set and specific chemicals of interest. Therefore, the results of such methods are highly dependent on the original optical filter choice, which will dictate the selectivity, sensitivity, and stability of any filter-based sensing method. Recently, a method has been developed that utilizes unique detection vector operations defined by optical multifilter responses, to discriminate between volatile chemical vapors. This method, comparative-discrimination spectral detection (CDSD), is a technique which employs broadband optical filters to selectively discriminate between chemicals with highly overlapping infrared absorption spectra. CDSD has been shown to correctly distinguish between similar chemicals in the carbon-hydrogen stretch region of the infrared absorption spectra from 2800-3100 cm(-1). A key challenge to this approach is how to determine which optical filter sets should be utilized to achieve the greatest discrimination between target chemicals. Previous studies used empirical approaches to select the optical filter set; however this is insufficient to determine the optimum selectivity between strongly overlapping chemical spectra. Here we present a numerical approach to systematically study the effects of filter positioning and bandwidth on a number of three-chemical systems. We describe how both the filter properties, as well as the chemicals in each set, affect the CDSD results and subsequent discrimination. These results demonstrate the importance of choosing the proper filter set and chemicals for comparative discrimination, in order to identify the target chemical of interest in the presence of closely matched chemical interferents. These findings are an integral step in the development of experimental prototype sensors, which will utilize CDSD.
Eye-bank preparation of endothelial tissue.
Boynton, Grace E; Woodward, Maria A
2014-07-01
Eye-bank preparation of endothelial tissue for keratoplasty continues to evolve. Although eye-bank personnel have become comfortable and competent at Descemet's stripping automated endothelial keratoplasty (DSAEK) tissue preparation and tissue transport, optimization of preparation methods continues. Surgeons and eye-bank personnel should be up to date on the research in the field. As surgeons transition to Descemet's membrane endothelial keratoplasty (DMEK), eye banks have risen to the challenge of preparing tissue. Eye banks are refining their DMEK preparation and transport techniques. This article covers refinements to DSAEK tissue preparation, innovations to prepare DMEK tissue, and nuances to improve donor cornea tissue quality. As eye bank-supplied corneal tissue is the main source of tissue for many corneal surgeons, it is critical to stay informed about tissue handling and preparation. Ultimately, the surgeon is responsible for the transplantation, so involvement of clinicians in eye-banking practices and advocacy for pursuing meaningful research in this area will benefit clinical patient outcomes.
Xu, Wenjun; Tang, Chen; Gu, Fan; Cheng, Jiajia
2017-04-01
It is a key step to remove the massive speckle noise in electronic speckle pattern interferometry (ESPI) fringe patterns. In the spatial-domain filtering methods, oriented partial differential equations have been demonstrated to be a powerful tool. In the transform-domain filtering methods, the shearlet transform is a state-of-the-art method. In this paper, we propose a filtering method for ESPI fringe patterns denoising, which is a combination of second-order oriented partial differential equation (SOOPDE) and the shearlet transform, named SOOPDE-Shearlet. Here, the shearlet transform is introduced into the ESPI fringe patterns denoising for the first time. This combination takes advantage of the fact that the spatial-domain filtering method SOOPDE and the transform-domain filtering method shearlet transform benefit from each other. We test the proposed SOOPDE-Shearlet on five experimentally obtained ESPI fringe patterns with poor quality and compare our method with SOOPDE, shearlet transform, windowed Fourier filtering (WFF), and coherence-enhancing diffusion (CEDPDE). Among them, WFF and CEDPDE are the state-of-the-art methods for ESPI fringe patterns denoising in transform domain and spatial domain, respectively. The experimental results have demonstrated the good performance of the proposed SOOPDE-Shearlet.
Global health policies that support the use of banked donor human milk: a human rights issue
Arnold, Lois DW
2006-01-01
This review examines the role of donor human milk banking in international human rights documents and global health policies. For countries looking to improve child health, promotion, protection and support of donor human milk banks has an important role to play for the most vulnerable of infants and children. This review is based on qualitative triangulation research conducted for a doctoral dissertation. The three methods used in triangulation were 1) writing as a method of inquiry, 2) an integrative research review, and 3) personal experience and knowledge of the topic. Discussion of the international human rights documents and global health policies shows that there is a wealth of documentation to support promotion, protection and support of donor milk banking as an integral part of child health and survival. By utilizing these policy documents, health ministries, professional associations, and donor milk banking associations can find rationales for establishing, increasing or continuing to provide milk banking services in any country, and thereby improve the health of children and future generations of adults. PMID:17164001
EMDataBank unified data resource for 3DEM.
Lawson, Catherine L; Patwardhan, Ardan; Baker, Matthew L; Hryc, Corey; Garcia, Eduardo Sanz; Hudson, Brian P; Lagerstedt, Ingvar; Ludtke, Steven J; Pintilie, Grigore; Sala, Raul; Westbrook, John D; Berman, Helen M; Kleywegt, Gerard J; Chiu, Wah
2016-01-04
Three-dimensional Electron Microscopy (3DEM) has become a key experimental method in structural biology for a broad spectrum of biological specimens from molecules to cells. The EMDataBank project provides a unified portal for deposition, retrieval and analysis of 3DEM density maps, atomic models and associated metadata (emdatabank.org). We provide here an overview of the rapidly growing 3DEM structural data archives, which include maps in EM Data Bank and map-derived models in the Protein Data Bank. In addition, we describe progress and approaches toward development of validation protocols and methods, working with the scientific community, in order to create a validation pipeline for 3DEM data. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Fan beam image reconstruction with generalized Fourier slice theorem.
Zhao, Shuangren; Yang, Kang; Yang, Kevin
2014-01-01
For parallel beam geometry, Fourier reconstruction works via the Fourier slice theorem (also called the central slice theorem or projection slice theorem). For the fan beam situation, the Fourier slice theorem can be extended to a generalized Fourier slice theorem (GFST) for fan-beam image reconstruction. We have briefly introduced this method in a conference; this paper reintroduces the GFST method for fan beam geometry in detail. The GFST method can be described as follows: the Fourier plane is filled by adding up the contributions from all fan-beam projections individually; the values in the Fourier plane are thereby calculated directly on Cartesian coordinates, avoiding the interpolation from polar to Cartesian coordinates in the Fourier domain; an inverse fast Fourier transform is applied to the image in the Fourier plane and leads to a reconstructed image in the spatial domain. The reconstructed image is compared between the result of the GFST method and the result from the filtered backprojection (FBP) method. The major differences between the GFST and FBP methods are: (1) the interpolation is performed on different data sets: the GFST method interpolates the projection data, whereas the FBP method interpolates the filtered projection data; (2) the filtering is done in different places: the GFST filters in the Fourier domain, whereas the FBP method applies the ramp filter to the projections. The resolution of the ramp filter varies with location, but the filter in the Fourier domain leads to a resolution that is invariant with location. One advantage of the GFST method over the FBP method is that in the short scan situation an exact solution can be obtained with the GFST method, but not with the FBP method. The calculation of both the GFST and the FBP methods are at O(N
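For reference, the ramp-filter step that the abstract attributes to the FBP method (filtering the projections before backprojection) can be sketched as follows. This is a generic illustration, not the GFST algorithm itself, and the plain |f| response shown here omits the apodization windows often used in practice.

```python
import numpy as np

def ramp_filter(sinogram):
    """Apply the FBP ramp filter along the detector axis (rows = angles, cols = detector bins)."""
    n_det = sinogram.shape[1]
    freqs = np.fft.fftfreq(n_det)          # normalized frequencies, cycles per sample
    ramp = np.abs(freqs)                   # |f| ramp response
    return np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1).real
```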
Arrays of Carbon Nanotubes as RF Filters in Waveguides
NASA Technical Reports Server (NTRS)
Hoppe, Daniel; Hunt, Brian; Hoenk, Michael; Noca, Flavio; Xu, Jimmy
2003-01-01
Brushlike arrays of carbon nanotubes embedded in microstrip waveguides provide highly efficient (high-Q) mechanical resonators that will enable ultraminiature radio-frequency (RF) integrated circuits. In its basic form, this invention is an RF filter based on a carbon nanotube array embedded in a microstrip (or coplanar) waveguide, as shown in Figure 1. In addition, arrays of these nanotube-based RF filters can be used as an RF filter bank. Applications of this new nanotube array device include a variety of communications and signal-processing technologies. High-Q resonators are essential for stable, low-noise communications, and radar applications. Mechanical oscillators can exhibit orders of magnitude higher Qs than electronic resonant circuits, which are limited by resistive losses. This has motivated the development of a variety of mechanical resonators, including bulk acoustic wave (BAW) resonators, surface acoustic wave (SAW) resonators, and Si and SiC micromachined resonators (known as microelectromechanical systems or MEMS). There is also a strong push to extend the resonant frequencies of these oscillators into the GHz regime of state-of-the-art electronics. Unfortunately, the BAW and SAW devices tend to be large and are not easily integrated into electronic circuits. MEMS structures have been integrated into circuits, but efforts to extend MEMS resonant frequencies into the GHz regime have been difficult because of scaling problems with the capacitively-coupled drive and readout. In contrast, the proposed devices would be much smaller and hence could be more readily incorporated into advanced RF (more specifically, microwave) integrated circuits.
Truong, D D; Austin, M E
2014-11-01
The 40-channel DIII-D electron cyclotron emission (ECE) radiometer provides measurements of Te(r,t) at the tokamak midplane from optically thick, second harmonic X-mode emission over a frequency range of 83-130 GHz. The frequency spacing of the radiometer's channels results in a spatial resolution of ∼1-3 cm, depending on local magnetic field and electron temperature. A new high resolution subsystem has been added to the DIII-D ECE radiometer to make sub-centimeter (0.6-0.8 cm) resolution Te measurements. The high resolution subsystem branches off from the regular channels' IF bands and consists of a microwave switch to toggle between IF bands, a switched filter bank for frequency selectivity, an adjustable local oscillator and mixer for further frequency down-conversion, and a set of eight microwave filters in the 2-4 GHz range. Higher spatial resolution is achieved through the use of a narrower (200 MHz) filter bandwidth and closer spacing between the filters' center frequencies (250 MHz). This configuration allows for full coverage of the 83-130 GHz frequency range in 2 GHz bands. Depending on the local magnetic field, this translates into a "zoomed-in" analysis of a ∼2-4 cm radial region. Expected uses of these channels include mapping the spatial dependence of Alfven eigenmodes, geodesic acoustic modes, and externally applied magnetic perturbations. Initial Te measurements, which demonstrate that the desired resolution is achieved, are presented.
Cytokines, Chemokines, and Growth Factors in Banked Human Donor Milk for Preterm Infants
Groer, Maureen; Duffy, Allyson; Morse, Shannon; Kane, Bradley; Zaritt, Judy; Roberts, Shari; Ashmeade, Terri
2014-01-01
Background There has been a recent increase in availability of banked donor milk for feeding of preterm infants. This milk is pooled from donations to milk banks from carefully screened lactating women. The milk is then pasteurized by the Holder method to remove all microbes. The processed milk is frozen, banked, and sold to neonatal intensive care units (NICUs). The nutrient bioavailability of banked donor milk has been described, but little is known about preservation of immune components such as cytokines, chemokines, and growth factors (CCGF). Objective The objective was to compare CCGF in banked donor milk with mother's own milk (MOM). Methods Aliquots (0.5 mL) were collected daily from MOM pumped by 45 mothers of NICU-admitted infants weighing < 1500 grams at birth. All daily aliquots of each mother's milk were pooled each week during 6 weeks of an infant's NICU stay or for as long as the mother provided MOM. The weekly pooled milk was measured for a panel of CCGF through multiplexing using magnetic beads and a MAGPIX instrument. Banked donor milk samples (n = 25) were handled and measured in the same way as MOM. Results Multiplex analysis revealed that there were levels of CCGF in banked donor milk samples comparable to values obtained from MOM after 6 weeks of lactation. Conclusion These data suggest that many important CCGF are not destroyed by Holder pasteurization. PMID:24663954
Fit-for-purpose phosphorus management: do riparian buffers qualify in catchments with sandy soils?
Weaver, David; Summers, Robert
2014-05-01
Hillslope runoff and leaching studies, catchment-scale water quality measurements and P retention and release characteristics of stream bank and catchment soils were used to better understand reasons behind the reported ineffectiveness of riparian buffers for phosphorus (P) management in catchments with sandy soils from south-west Western Australia (WA). Catchment-scale water quality measurements of 60 % particulate P (PP) suggest that riparian buffers should improve water quality; however, runoff and leaching studies show 20 times more water and 2 to 3 orders of magnitude more P are transported through leaching than runoff processes. The ratio of filterable reactive P (FRP) to total P (TP) in surface runoff from the plots was 60 %, and when combined with leachate, 96 to 99 % of P lost from hillslopes was FRP, in contrast with 40 % measured as FRP at the large catchment scale. Measurements of the P retention and release characteristics of catchment soils (<2 mm) compared with stream bank soil (<2 mm) and the <75-μm fraction of stream bank soils suggest that catchment soils contain more P, are more P saturated and are significantly more likely to deliver FRP and TP in excess of water quality targets than stream bank soils. Stream bank soils are much more likely to retain P than contribute P to streams, and the in-stream mixing of FRP from the landscape with particulates from stream banks or stream beds is a potential mechanism to explain the change in P form from hillslopes (96 to 99 % FRP) to large catchments (40 % FRP). When considered in the context of previous work reporting that riparian buffers were ineffective for P management in this environment, these studies reinforce the notion that (1) riparian buffers are unlikely to provide fit-for-purpose P management in catchments with sandy soils, (2) most P delivered to streams in sandy soil catchments is FRP and travels via subsurface and leaching pathways and (3) large catchment-scale water quality measurements are not good indicators of hillslope P mobilisation and transport processes.
An efficient implementation of a high-order filter for a cubed-sphere spectral element model
NASA Astrophysics Data System (ADS)
Kang, Hyun-Gyu; Cheong, Hyeong-Bin
2017-03-01
A parallel-scalable, isotropic, scale-selective spatial filter was developed for the cubed-sphere spectral element model on the sphere. The filter equation is a high-order elliptic (Helmholtz) equation based on the spherical Laplacian operator, which is transformed into cubed-sphere local coordinates. The Laplacian operator is discretized on the computational domain, i.e., on each cell, by the spectral element method with Gauss-Lobatto Lagrange interpolating polynomials (GLLIPs) as the orthogonal basis functions. On the global domain, the discrete filter equation yielded a linear system represented by a highly sparse matrix. The density of this matrix increases quadratically (linearly) with the order of GLLIP (order of the filter), and the linear system is solved in only O (Ng) operations, where Ng is the total number of grid points. The solution, obtained by a row reduction method, demonstrated the typical accuracy and convergence rate of the cubed-sphere spectral element method. To achieve computational efficiency on parallel computers, the linear system was treated by an inverse matrix method (a sparse matrix-vector multiplication). The density of the inverse matrix was lowered to only a few times of the original sparse matrix without degrading the accuracy of the solution. For better computational efficiency, a local-domain high-order filter was introduced: The filter equation is applied to multiple cells, and then the central cell was only used to reconstruct the filtered field. The parallel efficiency of applying the inverse matrix method to the global- and local-domain filter was evaluated by the scalability on a distributed-memory parallel computer. The scale-selective performance of the filter was demonstrated on Earth topography. The usefulness of the filter as a hyper-viscosity for the vorticity equation was also demonstrated.
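As a much simplified illustration of the filter equation described above (a high-order Helmholtz-type elliptic operator used as a scale-selective low-pass filter), here is a one-dimensional periodic analogue solved spectrally with the FFT. The cubed-sphere geometry, spectral element discretization, and sparse inverse-matrix machinery of the paper are not reproduced, and the parameter names are illustrative.

```python
import numpy as np

def high_order_filter_1d(u, dx, order=4, alpha=1e-4):
    """Scale-selective filter: solve (1 + alpha * (-Laplacian)^order) u_f = u on a periodic grid."""
    k = 2 * np.pi * np.fft.fftfreq(len(u), d=dx)      # wavenumbers
    response = 1.0 / (1.0 + alpha * k**(2 * order))   # damps only the smallest scales
    return np.fft.ifft(np.fft.fft(u) * response).real
```

Raising the order sharpens the cutoff between retained and damped scales, which is the scale-selectivity property the paper demonstrates on Earth topography.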
NASA Astrophysics Data System (ADS)
Tan, Jun; Song, Peng; Li, Jinshan; Wang, Lei; Zhong, Mengxuan; Zhang, Xiaobo
2017-06-01
The surface-related multiple elimination (SRME) method is based on feedback formulation and has become one of the most preferred multiple suppression methods used. However, some differences are apparent between the predicted multiples and those in the source seismic records, which may result in conventional adaptive multiple subtraction methods being barely able to effectively suppress multiples in actual production. This paper introduces a combined adaptive multiple attenuation method based on the optimized event tracing technique and extended Wiener filtering. The method firstly uses multiple records predicted by SRME to generate a multiple velocity spectrum, then separates the original record to an approximate primary record and an approximate multiple record by applying the optimized event tracing method and short-time window FK filtering method. After applying the extended Wiener filtering method, residual multiples in the approximate primary record can then be eliminated and the damaged primary can be restored from the approximate multiple record. This method combines the advantages of multiple elimination based on the optimized event tracing method and the extended Wiener filtering technique. It is an ideal method for suppressing typical hyperbolic and other types of multiples, with the advantage of minimizing damage of the primary. Synthetic and field data tests show that this method produces better multiple elimination results than the traditional multi-channel Wiener filter method and is more suitable for multiple elimination in complicated geological areas.
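As a rough single-trace illustration of the Wiener-type adaptive subtraction step mentioned above (shaping SRME-predicted multiples to the recorded data and subtracting them), here is a least-squares matching-filter sketch. The event tracing, FK filtering, and the paper's extended Wiener formulation are not included, and the function and parameter names are assumptions.

```python
import numpy as np

def adaptive_subtract(data, multiple_model, filt_len=11):
    """Least-squares matching filter: shape the predicted multiples to the data, then subtract."""
    n = len(data)                      # multiple_model is assumed to have the same length
    M = np.zeros((n, filt_len))        # convolution matrix of the predicted multiples
    for j in range(filt_len):
        M[j:, j] = multiple_model[:n - j]
    f, *_ = np.linalg.lstsq(M, data, rcond=None)   # matching (Wiener) filter coefficients
    return data - M @ f                            # estimated primaries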
Jia, Ziru; Liu, Hongying; Li, Wang; Xie, Dandan; Cheng, Ke; Pi, Xitian
2018-02-01
In recent years, noninvasive diagnosis based on biomarkers in exhaled breath has been extensively studied. The procedure of biomarker collection is a key step. However, the traditional condenser method has low efficacy in collecting nonvolatile compounds, especially the protein biomarkers in breath. To address this deficiency, we propose an electret filter method. Exhaled breath of 6 volunteers was collected with a glass condenser and an electret filter, and the amount of albumin was analyzed. Furthermore, the difference in exhaled albumin between smokers and nonsmokers was evaluated. The electret filter method collected more albumin than the glass condenser method at the same breath volume level (P < .01). Smokers were also observed to exhale more albumin than nonsmokers (P < .01). The electret filter is capable of collecting proteins more effectively than the condenser method. In addition, smokers tend to exhale more albumin than nonsmokers.
Method of recovering hazardous waste from phenolic resin filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meikrantz, D.H.; Bourne, G.L.; McFee, J.N.
1990-12-31
A method has been found for treating phenolic resin filters, whereby the filter is solubilized within the filter cartridge housing so the filter material can be removed from the cartridge housing in a remote manner. The invention consists of contacting the filter within the housing with an aqueous solution of about 8 to 12M nitric acid at a temperature from about 110 to 190 °F, maintaining the contact for a period of time sufficient to solubilize the phenolic material within the housing, and removing the solubilized phenolic material from the housing, thereby removing the filter cartridge from the housing. Any hazardous or other waste material can then be separated from the filter material by chemical or other means.
Development of a high-performance noise-reduction filter for tomographic reconstruction
NASA Astrophysics Data System (ADS)
Kao, Chien-Min; Pan, Xiaochuan
2001-07-01
We propose a new noise-reduction method for tomographic reconstruction. The method incorporates a priori information on the source image, allowing the derivation of the energy spectrum of its ideal sinogram. In combination with the energy spectrum of the Poisson noise in the measured sinogram, we are able to derive a Wiener-like filter for effective suppression of the sinogram noise. The filtered backprojection (FBP) algorithm, with a ramp filter, is then applied to the filtered sinogram to produce tomographic images. The resulting filter has a closed-form expression in frequency space and contains a single user-adjustable regularization parameter. The proposed method is hence simple to implement and easy to use. In contrast to the ad hoc apodizing windows, such as Hanning and Butterworth filters, that are commonly used in conventional FBP reconstruction, the proposed filter is theoretically more rigorous, as it is derived from an optimization criterion, subject to a known class of source image intensity distributions.
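A minimal sketch of the kind of Wiener-like sinogram smoothing described above, assuming the signal and noise energy spectra are already available as arrays sampled along the detector-bin frequency axis (in the paper they are derived from a priori source-image information and Poisson noise statistics, which is not reproduced here). Names and the beta parameter are illustrative.

```python
import numpy as np

def wiener_like_filter(sinogram, signal_spectrum, noise_spectrum, beta=1.0):
    """Frequency-domain Wiener-type suppression of sinogram noise (rows = angles)."""
    H = signal_spectrum / (signal_spectrum + beta * noise_spectrum)  # beta: regularization knob
    S = np.fft.fft(sinogram, axis=1)
    return np.fft.ifft(S * H, axis=1).real   # filtered sinogram, ready for ramp-filtered FBP
```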
Method and apparatus for PM filter regeneration
Opris, Cornelius N [Peoria, IL; Verkiel, Maarten [Metamora, IL
2006-01-03
A method and apparatus for initiating regeneration of a particulate matter (PM) filter in an exhaust system in an internal combustion engine. The method and apparatus includes determining a change in pressure of exhaust gases passing through the PM filter, and responsively varying an opening of an intake valve in fluid communication with a combustion chamber.
Filter desulfation system and method
Lowe, Michael D.; Robel, Wade J.; Verkiel, Maarten; Driscoll, James J.
2010-08-10
A method of removing sulfur from a filter system of an engine includes continuously passing an exhaust flow through a desulfation leg of the filter system during desulfation. The method also includes sensing at least one characteristic of the exhaust flow and modifying a flow rate of the exhaust flow during desulfation in response to the sensing.
Wave basin model tests of technical-biological bank protection
NASA Astrophysics Data System (ADS)
Eisenmann, J.
2012-04-01
Sloped embankments of inland waterways are usually protected from erosion and other negative impacts of ship-induced hydraulic loads by technical revetments consisting of riprap. Concerning the dimensioning of such bank protection there are several design rules available, e.g. the "Principles for the Design of Bank and Bottom Protection for Inland Waterways" or the Code of Practice "Use of Standard Construction Methods for Bank and Bottom Protection on Waterways" issued by the BAW (Federal Waterways Engineering and Research Institute). Since the European Water Framework Directive was put into action, special emphasis has been placed on natural banks, and therefore the application of technical-biological bank protection is favoured. Currently, design principles for technical-biological bank protection on inland waterways are missing. The existing experience mainly refers to flowing waters with no or low ship-induced hydraulic loads on the banks. Since 2004 the Federal Waterways Engineering and Research Institute has been conducting the research and development project "Alternative Technical-Biological Bank Protection on Inland Waterways" together with the Federal Institute of Hydrology. The investigation to date includes the examination of waterway sections where technical-biological bank protection is applied locally. For the development of design rules for technical-biological bank protection, investigations shall be carried out in a next step, considering the mechanics and resilience of technical-biological bank protection with special attention to ship-induced hydraulic loads. The presentation gives a short introduction to hydraulic loads at inland waterways and their bank protection. In more detail, model tests of a willow brush mattress as a technical-biological bank protection in a wave basin are explained. Within the scope of these tests the brush mattresses were exposed to wave impacts to determine their resilience towards hydraulic loads. Since the developing pore water pressure is significant for the slope stability under hydraulic load, particular attention is paid to the interaction of willow roots and pore water pressure. Furthermore, the occurring erosion is determined. The methods of measurement, test conditions and execution as well as first results will be presented.
Integrating the ECG power-line interference removal methods with rule-based system.
Kumaravel, N; Senthil, A; Sridhar, K S; Nithiyanandam, N
1995-01-01
The power-line frequency interference in electrocardiographic signals is eliminated to enhance the signal characteristics for diagnosis. The power-line frequency normally varies by +/- 1.5 Hz from its standard value of 50 Hz. In the present work, the performances of the linear FIR filter, wave digital filter (WDF) and adaptive filter are studied for power-line frequency variations from 48.5 to 51.5 Hz in steps of 0.5 Hz. The advantage of the LMS adaptive filter over other fixed-frequency filters, namely its ability to remove the power-line interference even when the interference frequency varies by +/- 1.5 Hz from its nominal value of 50 Hz, is clearly demonstrated. A novel method of integrating a rule-based system approach with the linear FIR filter and also with the wave digital filter is proposed. The performances of the rule-based FIR filter and the rule-based wave digital filter are compared with the LMS adaptive filter.
Borzilov, V A
1993-11-01
Requirements are determined for a data bank for natural media, treated as a system of intercorrelated parameters used to estimate system states. The problems of achieving agreement between experimental and calculation methods when organizing ecological monitoring are analysed. Methods of forming an environmental specimen bank to estimate and forecast radioactive contamination and exposure dose are considered, exemplified by the peculiarities of the spatial distribution of radioactive contamination in fields. The temporal dynamics of contamination of atmospheric air, soil and water are also analysed.
Wave-filter-based approach for generation of a quiet space in a rectangular cavity
NASA Astrophysics Data System (ADS)
Iwamoto, Hiroyuki; Tanaka, Nobuo; Sanada, Akira
2018-02-01
This paper is concerned with the generation of a quiet space in a rectangular cavity using active wave control methodology. It is the purpose of this paper to present the wave filtering method for a rectangular cavity using multiple microphones and its application to an adaptive feedforward control system. Firstly, the transfer matrix method is introduced for describing the wave dynamics of the sound field, and then feedforward control laws for eliminating transmitted waves are derived. Furthermore, some numerical simulations are conducted that show the best possible result of active wave control. This is followed by the derivation of the wave filtering equations, which indicate the structure of the wave filter. It is clarified that the wave filter consists of three portions: a modal group filter, a rearrangement filter and a wave decomposition filter. Next, from a numerical point of view, the accuracy of the wave decomposition filter, which is expressed as a function of frequency, is investigated using condition numbers. Finally, an experiment on the adaptive feedforward control system using the wave filter is carried out, demonstrating that a quiet space is generated in the target space by the proposed method.
Correlation Filter Learning Toward Peak Strength for Visual Tracking.
Sui, Yao; Wang, Guanghui; Zhang, Li
2018-04-01
This paper presents a novel visual tracking approach to correlation filter learning toward peak strength of correlation response. Previous methods leverage all features of the target and the immediate background to learn a correlation filter. Some features, however, may be distractive to tracking, like those from occlusion and local deformation, resulting in unstable tracking performance. This paper aims at solving this issue and proposes a novel algorithm to learn the correlation filter. The proposed approach, by imposing an elastic net constraint on the filter, can adaptively eliminate those distractive features in the correlation filtering. A new peak strength metric is proposed to measure the discriminative capability of the learned correlation filter. It is demonstrated that the proposed approach effectively strengthens the peak of the correlation response, leading to more discriminative performance than previous methods. Extensive experiments on a challenging visual tracking benchmark demonstrate that the proposed tracker outperforms most state-of-the-art methods.
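For context, the baseline correlation-filter learning that the paper builds on can be written in closed form in the Fourier domain (a ridge-regression, MOSSE-style filter trained against a Gaussian peak label). The sketch below shows only this baseline; the elastic-net constraint and peak-strength metric proposed in the paper require an iterative solver and are not included. Names and parameters are illustrative.

```python
import numpy as np

def learn_correlation_filter(patch, sigma=2.0, lam=1e-2):
    """Closed-form ridge-regression correlation filter for a single-channel patch."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))  # desired peaked response
    G = np.fft.fft2(np.fft.ifftshift(g))
    F = np.fft.fft2(patch)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)   # filter (conjugate form), lam = regularizer

def correlate(H_conj, patch):
    """Apply the learned filter; the location of the response peak tracks the target."""
    return np.fft.ifft2(H_conj * np.fft.fft2(patch)).real
```

In the paper's formulation, replacing the ridge (L2) penalty with an elastic net lets the learner zero out distractive features such as those from occlusion, which is what strengthens the correlation peak.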
Scientific and Engineering Studies. Compiled 1979. Coherence Estimation,
1979-01-01
leakage taking place (reference (f)). This is due to the non-ideal bandpass characteristics of the FFTs when considered as a bank of filters. ... Carter, G. C., "Estimation of the Magnitude Squared Coherence Function (Spectrum)," University of Connecticut Master's Thesis (to be published).
VizieR Online Data Catalog: 42 millisecond pulsars high-precision timing (Desvignes+, 2016)
NASA Astrophysics Data System (ADS)
Desvignes, G.; Caballero, R. N.; Lentati, L.; Verbiest, J. P. W.; Champion, D. J.; Stappers, B. W.; Janssen, G. H.; Lazarus, P.; Oslowski, S.; Babak, S.; Bassa, C. G.; Brem, P.; Burgay, M.; Cognard, I.; Gair, J. R.; Graikou, E.; Guillemot, L.; Hessels, J. W. T.; Jessner, A.; Jordan, C.; Karuppusamy, R.; Kramer, M.; Lassus, A.; Lazaridis, K.; Lee, K. J.; Liu, K.; Lyne, A. G.; McKee, J.; Mingarelli, C. M. F.; Perrodin, D.; Petiteau, A.; Possenti, A.; Purver, M. B.; Rosado, P. A.; Sanidas, S.; Sesana, A.; Shaifullah, G.; Smits, R.; Taylor, S. R.; Theureau, G.; Tiburzi, C.; van Haasteren, R.; Vecchio, A.
2017-02-01
This paper presents the EPTA data set, up to mid-2014, that was gathered from the 'historical' pulsar instrumentations at EFF, JBO, NRT and WSRT with, respectively, the EBPP (Effelsberg-Berkeley Pulsar Processor), DFB (Digital FilterBank), BON (Berkeley-Orleans-Nancay) and PuMa (Pulsar Machine) backends. The data recorded with the newest generation of instrumentations, e.g. PSRIX at EFF (Lazarus et al., 2016MNRAS.458..868L) and PuMaII at WSRT (Karuppusamy, Stappers & van Straten 2008PASP..120..191K), will be part of a future EPTA data release. (8 data files).
Dadisman, Shawn V.; Ryan, Holly F.; Mann, Dennis M.
1987-01-01
During 1984, over 2300 km of multichannel seismic-reflection data were recorded by the U.S. Geological Survey in the western Ross Sea and Iselin Bank regions. A temporary loss and sinking of the streamer led to increasing the streamer tow depth to 20 m, which resulted in some attenuation of frequencies in the 30-50 Hz range but no significant difference in resolution of the stacked data. Severe water bottom multiples were encountered and removed by dip-filtering, weighted stacking, and severe post-NMO muting.
Wavelet transforms with discrete-time continuous-dilation wavelets
NASA Astrophysics Data System (ADS)
Zhao, Wei; Rao, Raghuveer M.
1999-03-01
Wavelet constructions and transforms have been confined principally to the continuous-time domain. Even the discrete wavelet transform implemented through multirate filter banks is based on continuous-time wavelet functions that provide orthogonal or biorthogonal decompositions. This paper provides a novel wavelet transform construction based on the definition of discrete-time wavelets that can undergo continuous parameter dilations. The result is a transformation that has the advantage of discrete-time or digital implementation while circumventing the problem of inadequate scaling resolution seen with conventional dyadic or M-channel constructions. Examples of constructing such wavelets are presented.
CTEPP STANDARD OPERATING PROCEDURE FOR PRE-CLEANING FILTERS AND XAD-2 (SOP-5.10)
This SOP summarizes the method for pre-cleaning XAD-2 resin and quartz fiber filters. The procedure provides a cleaning method to help reduce potential background contamination in the resin and filters.
In situ microbial filter used for bioremediation
Carman, M. Leslie; Taylor, Robert T.
2000-01-01
An improved method for in situ microbial filter bioremediation that increases the operational longevity of an in situ microbial filter emplaced into an aquifer. A method for generating a microbial filter of sufficient catalytic density and thickness, which has an increased replenishment interval, improved bacteria attachment and detachment characteristics, and endogenous stability under in situ conditions. A system for in situ field water remediation.
Concrete ensemble Kalman filters with rigorous catastrophic filter divergence
Kelly, David; Majda, Andrew J.; Tong, Xin T.
2015-01-01
The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature. PMID:26261335
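For readers unfamiliar with the method under discussion, a generic stochastic (perturbed-observation) ensemble Kalman filter analysis step looks like the sketch below. This is textbook EnKF, not the specific forecast model constructed in the paper, and the variable names are assumptions.

```python
import numpy as np

def enkf_analysis(ensemble, y_obs, H, R, rng=None):
    """Stochastic EnKF analysis step. ensemble: (n_state, n_members); H: obs operator; R: obs covariance."""
    rng = np.random.default_rng(0) if rng is None else rng
    n_state, m = ensemble.shape
    A = ensemble - ensemble.mean(axis=1, keepdims=True)      # state anomalies
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)                 # observation-space anomalies
    P_hh = HA @ HA.T / (m - 1) + R                           # innovation covariance
    P_xh = A @ HA.T / (m - 1)                                # state/observation cross-covariance
    K = P_xh @ np.linalg.inv(P_hh)                           # Kalman gain
    obs_pert = rng.multivariate_normal(y_obs, R, size=m).T   # perturbed observations, (n_obs, m)
    return ensemble + K @ (obs_pert - HX)                    # analysis ensemble
```

Catastrophic filter divergence, as described in the abstract, refers to the analysis ensemble blowing up to machine infinity under repeated forecast/analysis cycles even though the true state stays bounded.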
Hsu, Chih-Yuan; Pan, Zhen-Ming; Hu, Rei-Hsing; Chang, Chih-Chun; Cheng, Hsiao-Chun; Lin, Che; Chen, Bor-Sen
2015-01-01
In this study, robust biological filters with an external control to match a desired input/output (I/O) filtering response are engineered based on well-characterized promoter-RBS libraries and a cascade gene circuit topology. In the field of synthetic biology, the biological filter system serves as a powerful detector or sensor to sense different molecular signals and produces a specific output response only if the concentration of the input molecular signal is higher or lower than a specified threshold. The proposed systematic design method of robust biological filters is summarized in three steps. Firstly, several well-characterized promoter-RBS libraries are established for biological filter design by identifying and collecting the quantitative and qualitative characteristics of their promoter-RBS components via a nonlinear parameter estimation method. Then, the topology of the synthetic biological filter is decomposed into three cascade gene regulatory modules, and an appropriate promoter-RBS library is selected for each module to achieve the desired I/O specification of a biological filter. Finally, based on the proposed systematic method, a robust externally tunable biological filter is engineered by searching the promoter-RBS component libraries and a control inducer concentration library to achieve the optimal reference match for the specified I/O filtering response.
Brian hears: online auditory processing using vectorization over channels.
Fontaine, Bertrand; Goodman, Dan F M; Benichoux, Victor; Brette, Romain
2011-01-01
The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in "Brian Hears," a library for the spiking neural network simulator package "Brian." This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations.
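The channel-vectorization idea described above can be illustrated with a toy resonator bank in plain NumPy: the sample-by-sample time loop remains, but every operation inside it acts on all frequency channels at once. This is not the Brian Hears API or a gammatone model; the filters, names, and parameters below are illustrative only.

```python
import numpy as np

def resonator_bank(x, fs, center_freqs, r=0.99):
    """Crude bank of two-pole resonators; per-channel coefficients, vectorized over channels."""
    w0 = 2 * np.pi * np.asarray(center_freqs) / fs
    a1 = -2 * r * np.cos(w0)            # per-channel feedback coefficients
    a2 = np.full_like(w0, r * r)
    b0 = 1.0 - r                        # rough gain normalization
    y1 = np.zeros_like(w0)              # y[n-1] for every channel
    y2 = np.zeros_like(w0)              # y[n-2] for every channel
    out = np.empty((len(x), len(w0)))
    for n, xn in enumerate(x):
        yn = b0 * xn - a1 * y1 - a2 * y2   # all channels updated in one vectorized step
        out[n] = yn
        y2, y1 = y1, yn
    return out

# 100-channel bank applied to a 1 kHz tone sampled at 16 kHz
fs = 16000
t = np.arange(0, 0.1, 1 / fs)
cfs = np.geomspace(100, 6000, 100)
response = resonator_bank(np.sin(2 * np.pi * 1000 * t), fs, cfs)
```

Because the interpreted Python loop runs once per sample regardless of the number of channels, the per-channel cost is dominated by fast array operations, which is the efficiency argument made in the abstract.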
NASA Technical Reports Server (NTRS)
Isobe, Shunkichi; Ohmori, Shingo; Hamamoto, Naokazu; Yamamoto, Minoru
1991-01-01
The Communications Research Laboratory (CRL) studied an advanced mobile satellite communications system using Ka and millimeter-wave bands in the R&D Satellite project. The project started in 1990 and the satellite will be launched in 1997. On-board multi-beam interconnection is one of the basic functions needed to realize one-hop connections among Very Small Aperture Terminals (VSATs), mobile, and hand-held terminals in a future mobile satellite communications system. An Intermediate Frequency (IF) filter bank and a regenerative transponder are suitable for this function. The transponder configuration of an advanced mobile communications mission of the R&D Satellite for experiment is shown. High-power transmitters for the Ka and millimeter-wave bands, a 3x3 IF filter bank, and Single Channel Per Carrier/Time Division Multiplexing (SCPC/TDM) regenerative modems, which will be carried on board the R&D Satellite, are being developed for the purpose of studying the feasibility of an advanced mobile communications system.
Rengasamy, Samy; Eimer, Benjamin C
2012-01-01
National Institute for Occupational Safety and Health (NIOSH) certification test methods employ charge neutralized NaCl or dioctyl phthalate (DOP) aerosols to measure filter penetration levels of air-purifying particulate respirators photometrically using a TSI 8130 automated filter tester at 85 L/min. A previous study in our laboratory found that widely different filter penetration levels were measured for nanoparticles depending on whether a particle number (count)-based detector or a photometric detector was used. The purpose of this study was to better understand the influence of key test parameters, including filter media type, challenge aerosol size range, and detector system. Initial penetration levels for 17 models of NIOSH-approved N-, R-, and P-series filtering facepiece respirators were measured using the TSI 8130 photometric method and compared with the particle number-based penetration (obtained using two ultrafine condensation particle counters) for the same challenge aerosols generated by the TSI 8130. In general, the penetration obtained by the photometric method was less than the penetration obtained with the number-based method. Filter penetration was also measured for ambient room aerosols. Penetration measured by the TSI 8130 photometric method was lower than the number-based ambient aerosol penetration values. Number-based monodisperse NaCl aerosol penetration measurements showed that the most penetrating particle size was in the 50 nm range for all respirator models tested, with the exception of one model at ~200 nm size. Respirator models containing electrostatic filter media also showed lower penetration values with the TSI 8130 photometric method than the number-based penetration obtained for the most penetrating monodisperse particles. Results suggest that to provide a more challenging respirator filter test method than what is currently used for respirators containing electrostatic media, the test method should utilize a sufficient number of particles <100 nm and a count (particle number)-based detector.
Topics in the Detection of Gravitational Waves from Compact Binary Inspirals
NASA Astrophysics Data System (ADS)
Kapadia, Shasvath Jagat
Orbiting compact binaries - such as binary black holes, binary neutron stars and neutron star-black hole binaries - are among the most promising sources of gravitational waves observable by ground-based interferometric detectors. Despite numerous sophisticated engineering techniques, the gravitational wave signals will be buried deep within noise generated by various instrumental and environmental processes, and need to be extracted via a signal processing technique referred to as matched filtering. Matched filtering requires large banks of signal templates that are faithful representations of the true gravitational waveforms produced by astrophysical binaries. The accurate and efficient production of templates is thus crucial to the success of signal processing and data analysis. To that end, the dissertation presents a numerical technique that calibrates existing analytical (Post-Newtonian) waveforms, which are relatively inexpensive, to more accurate fiducial waveforms that are computationally expensive to generate. The resulting waveform family is significantly more accurate than the analytical waveforms, without incurring additional computational costs of production. Certain kinds of transient background noise artefacts, called "glitches'', can masquerade as gravitational wave signals for short durations and throw-off the matched-filter algorithm. Identifying glitches from true gravitational wave signals is a highly non-trivial exercise in data analysis which has been attempted with varying degrees of success. We present here a machine-learning based approach that exploits the various attributes of glitches and signals within detector data to provide a classification scheme that is a significant improvement over previous methods. The dissertation concludes by investigating the possibility of detecting a non-linear DC imprint, called the Christodoulou memory, produced in the arms of ground-based interferometers by the recently detected gravitational waves. The memory, which is even smaller in amplitude than the primary (detected) gravitational waves, will almost certainly not be seen in the current detection event. Nevertheless, future space-based detectors will likely be sensitive enough to observe the memory.
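As background to the matched filtering discussed above, the standard frequency-domain matched-filter signal-to-noise ratio for a single template can be sketched as follows. This generic sketch assumes the one-sided noise power spectral density is already known at the rfft frequencies; it omits template banks, glitch classification, and all detector-specific details, and the names are illustrative.

```python
import numpy as np

def matched_filter_snr(data, template, psd, fs):
    """Frequency-domain matched filter: noise-weighted correlation of data with a template."""
    n = len(data)
    df = fs / n
    D = np.fft.rfft(data) / fs          # approximate continuous Fourier transforms
    T = np.fft.rfft(template) / fs
    integrand = D * np.conj(T) / psd    # psd: one-sided noise PSD at the rfft frequencies
    z = 4 * np.fft.irfft(integrand, n) * fs              # complex SNR numerator vs. time shift
    sigma2 = 4 * np.sum(np.abs(T) ** 2 / psd) * df       # template normalization
    return np.abs(z) / np.sqrt(sigma2)                   # SNR time series
```

Peaks in the returned time series above a chosen threshold mark candidate times at which the template waveform is buried in the noisy data.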
Design of a composite filter realizable on practical spatial light modulators
NASA Technical Reports Server (NTRS)
Rajan, P. K.; Ramakrishnan, Ramachandran
1994-01-01
Hybrid optical correlator systems use two spatial light modulators (SLM's), one at the input plane and the other at the filter plane. Currently available SLM's such as the deformable mirror device (DMD) and liquid crystal television (LCTV) SLM's exhibit arbitrarily constrained operating characteristics. The pattern recognition filters designed with the assumption that the SLM's have ideal operating characteristic may not behave as expected when implemented on the DMD or LCTV SLM's. Therefore it is necessary to incorporate the SLM constraints in the design of the filters. In this report, an iterative method is developed for the design of an unconstrained minimum average correlation energy (MACE) filter. Then using this algorithm a new approach for the design of a SLM constrained distortion invariant filter in the presence of input SLM is developed. Two different optimization algorithms are used to maximize the objective function during filter synthesis, one based on the simplex method and the other based on the Hooke and Jeeves method. Also, the simulated annealing based filter design algorithm proposed by Khan and Rajan is refined and improved. The performance of the filter is evaluated in terms of its recognition/discrimination capabilities using computer simulations and the results are compared with a simulated annealing optimization based MACE filter. The filters are designed for different LCTV SLM's operating characteristics and the correlation responses are compared. The distortion tolerance and the false class image discrimination qualities of the filter are comparable to those of the simulated annealing based filter but the new filter design takes about 1/6 of the computer time taken by the simulated annealing filter design.
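For reference, the unconstrained MACE filter that the report takes as its starting point has a closed-form frequency-domain solution, h = D^(-1) X (X^+ D^(-1) X)^(-1) u, where the columns of X are the training-image spectra and D is the diagonal average power spectrum. A minimal sketch is given below; the SLM-constraint handling and the simplex, Hooke and Jeeves, and simulated annealing optimizations described in the report are not reproduced, and the names are illustrative.

```python
import numpy as np

def mace_filter(train_images, peaks=None):
    """Closed-form (unconstrained) MACE filter in the frequency domain.

    train_images: array of shape (N, H, W); peaks: desired correlation-peak values (default 1)."""
    N, H, W = train_images.shape
    d = H * W
    X = np.fft.fft2(train_images).reshape(N, d).T        # columns: training-image spectra
    D = np.mean(np.abs(X) ** 2, axis=1)                  # diagonal of the average power spectrum
    u = np.ones(N) if peaks is None else np.asarray(peaks, dtype=complex)
    Xd = X / D[:, None]                                  # D^-1 X
    h = Xd @ np.linalg.solve(X.conj().T @ Xd, u)         # D^-1 X (X^+ D^-1 X)^-1 u
    return h.reshape(H, W)
```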
Advanced hybrid particulate collector and method of operation
Miller, Stanley J.
1999-01-01
A device and method for controlling particulate air pollutants of the present invention combines filtration and electrostatic collection devices. The invention includes a chamber housing a plurality of rows of filter elements. Between each row of filter elements is a grounded plate. Between the grounded plates and the filter elements are electrode grids for creating electrostatic precipitation zones between each row of filter elements. In this way, when the filter elements are cleaned by pulsing air in a reverse direction, the dust removed from the bags will collect in the electrostatic precipitation zones rather than on adjacent filter elements.
Optical calculation of correlation filters for a robotic vision system
NASA Technical Reports Server (NTRS)
Knopp, Jerome
1989-01-01
A method is presented for designing optical correlation filters based on measuring three intensity patterns: the Fourier transform of a filter object, a reference wave, and the interference pattern produced by the sum of the object transform and the reference. The method can produce a filter that is well matched to the object, its transforming optical system, and the spatial light modulator used in the correlator input plane. A computer simulation is presented to demonstrate the approach for the special case of a conventional binary phase-only filter. The simulation produced a workable filter with a sharp correlation peak.
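A rough sketch of the measurement principle (not the report's code): the interference term can be isolated from the three recorded intensities and thresholded into a binary phase-only filter. The on-axis plane-wave reference of known real amplitude is an assumption made for simplicity.

    import numpy as np

    def binary_phase_only_filter(I_obj, I_ref, I_int, ref_amplitude=1.0):
        """Sketch: recover the interference (cosine) term from three measured
        intensity patterns and binarize it into a +/-1 phase-only filter.

        I_obj : intensity of the object's Fourier transform, |O|^2
        I_ref : intensity of the reference wave, |R|^2
        I_int : intensity of their coherent sum, |O + R|^2
        """
        # |O + R|^2 = |O|^2 + |R|^2 + 2 Re(O R*)  ->  isolate Re(O R*)
        cross = (I_int - I_obj - I_ref) / 2.0
        re_O = cross / ref_amplitude             # Re(O) for a real, constant reference
        # Binary phase-only filter: phase 0 or pi depending on the sign of Re(O)
        return np.where(re_O >= 0.0, 1.0, -1.0)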
Advanced hybrid particulate collector and method of operation
Miller, S.J.
1999-08-17
A device and method for controlling particulate air pollutants of the present invention combines filtration and electrostatic collection devices. The invention includes a chamber housing a plurality of rows of filter elements. Between each row of filter elements is a grounded plate. Between the grounded plates and the filter elements are electrode grids for creating electrostatic precipitation zones between each row of filter elements. In this way, when the filter elements are cleaned by pulsing air in a reverse direction, the dust removed from the bags will collect in the electrostatic precipitation zones rather than on adjacent filter elements.
Evaluation of sampling methods for Bacillus spore-contaminated HVAC filters
Calfee, M. Worth; Rose, Laura J.; Tufts, Jenia; Morse, Stephen; Clayton, Matt; Touati, Abderrahmane; Griffin-Gatchalian, Nicole; Slone, Christina; McSweeney, Neal
2016-01-01
The objective of this study was to compare an extraction-based sampling method to two vacuum-based sampling methods (vacuum sock and 37 mm cassette filter) with regard to their ability to recover Bacillus atrophaeus spores (surrogate for Bacillus anthracis) from pleated heating, ventilation, and air conditioning (HVAC) filters that are typically found in commercial and residential buildings. Electrostatic and mechanical HVAC filters were tested, both without and after loading with dust to 50% of their total holding capacity. The results were analyzed by one-way ANOVA across material types, presence or absence of dust, and sampling device. The extraction method gave higher relative recoveries than the two vacuum methods evaluated (p ≤ 0.001). On average, recoveries obtained by the vacuum methods were about 30% of those achieved by the extraction method. Relative recoveries between the two vacuum methods were not significantly different (p > 0.05). Although extraction methods yielded higher recoveries than vacuum methods, either HVAC filter sampling approach may provide a rapid and inexpensive mechanism for understanding the extent of contamination following a wide-area biological release incident. PMID:24184312
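The statistical comparison described is a standard one-way ANOVA; a minimal sketch with entirely synthetic recovery values (not the study's data) could be run as follows.

    import numpy as np
    from scipy import stats

    # Hypothetical percent-recovery values for three sampling methods
    extraction = np.array([92.0, 88.5, 95.1, 90.2])
    vacuum_sock = np.array([28.3, 31.0, 26.7, 33.5])
    cassette_37mm = np.array([30.1, 27.4, 35.2, 29.8])

    f_stat, p_value = stats.f_oneway(extraction, vacuum_sock, cassette_37mm)
    print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")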
Evaluation of sampling methods for Bacillus spore-contaminated HVAC filters.
Calfee, M Worth; Rose, Laura J; Tufts, Jenia; Morse, Stephen; Clayton, Matt; Touati, Abderrahmane; Griffin-Gatchalian, Nicole; Slone, Christina; McSweeney, Neal
2014-01-01
The objective of this study was to compare an extraction-based sampling method to two vacuum-based sampling methods (vacuum sock and 37 mm cassette filter) with regard to their ability to recover Bacillus atrophaeus spores (surrogate for Bacillus anthracis) from pleated heating, ventilation, and air conditioning (HVAC) filters that are typically found in commercial and residential buildings. Electrostatic and mechanical HVAC filters were tested, both without and after loading with dust to 50% of their total holding capacity. The results were analyzed by one-way ANOVA across material types, presence or absence of dust, and sampling device. The extraction method gave higher relative recoveries than the two vacuum methods evaluated (p ≤ 0.001). On average, recoveries obtained by the vacuum methods were about 30% of those achieved by the extraction method. Relative recoveries between the two vacuum methods were not significantly different (p > 0.05). Although extraction methods yielded higher recoveries than vacuum methods, either HVAC filter sampling approach may provide a rapid and inexpensive mechanism for understanding the extent of contamination following a wide-area biological release incident.
NASA Astrophysics Data System (ADS)
Shang, Zhen; Sui, Yun-Kang
2012-12-01
Based on the independent continuous mapping (ICM) method and the homogenization method, a research model is constructed to propose and prove a theorem and corollary concerning the invariant relating the weight filter function and the corresponding stiffness filter function when both take the form of power functions. Choosing rational filter functions raises the efficiency of the search for an optimum solution, so these results are important for further study of structural topology optimization.
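In the ICM framework, the filter functions referred to here map each element's topological design variable to its physical properties; assuming, purely for illustration, that both take power-function form with exponents alpha_w and alpha_k (notation mine, not the paper's), they can be written as

    w_i = f_w(t_i)\, w_i^0 = t_i^{\alpha_w} w_i^0, \qquad
    k_i = f_k(t_i)\, k_i^0 = t_i^{\alpha_k} k_i^0, \qquad 0 < t_i \le 1,

where t_i is the topological variable of element i and w_i^0, k_i^0 are the intrinsic element weight and stiffness; the paper's invariant concerns the relationship between these two power-function forms.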
Method of securing filter elements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Erik P.; Haslam, Jeffery L.; Mitchell, Mark A.
2016-10-04
A filter securing system including a filter unit body housing; at least one tubular filter element positioned in the filter unit body housing, the tubular filter element having a closed top and an open bottom; a dimple in either the filter unit body housing or the top of the tubular filter element; and a socket in either the filter unit body housing or the top of the tubular filter element that receives the dimple in either the filter unit body housing or the top of the tubular filter element to secure the tubular filter element to the filter unit body housing.
NASA Astrophysics Data System (ADS)
Swain, Sushree Diptimayee; Ray, Pravat Kumar; Mohanty, K. B.
2016-06-01
This paper presents the design of a shunt Passive Power Filter (PPF) in a Hybrid Series Active Power Filter (HSAPF) that employs a novel analytic methodology superior to FFT analysis. The approach consists of estimation, detection and classification of the signals, and is applied to estimate, detect and classify power quality (PQ) disturbances such as harmonics. The work combines three methods: harmonic detection through the wavelet transform, harmonic estimation with a Kalman filter algorithm, and harmonic classification with a decision tree. Among the available mother wavelets for the wavelet transform method, db8 is selected because of its good transient response and reduced oscillation in the frequency domain. In the harmonic compensation process, the detected harmonic is compensated through the HSAPF based on Instantaneous Reactive Power Theory (IRPT). The efficacy of the proposed method is verified in the MATLAB/Simulink environment as well as with an experimental set-up. The obtained results confirm the superiority of the proposed methodology over FFT analysis. The newly proposed PPF makes the conventional HSAPF more robust and stable.
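As an illustration of the detection step only (not the authors' implementation), a db8 wavelet decomposition of a distorted current can be computed with the PyWavelets package; the signal, sampling rate and decomposition level below are assumptions.

    import numpy as np
    import pywt

    fs = 3200                                   # assumed sampling rate, Hz
    t = np.arange(0, 0.2, 1.0 / fs)
    # Hypothetical load current: fundamental plus 5th and 7th harmonics
    i_load = (np.sin(2 * np.pi * 50 * t)
              + 0.20 * np.sin(2 * np.pi * 250 * t)
              + 0.14 * np.sin(2 * np.pi * 350 * t))

    # Multi-level DWT with the db8 mother wavelet; the detail coefficients
    # carry the harmonic/transient content used for detection
    cA4, cD4, cD3, cD2, cD1 = pywt.wavedec(i_load, 'db8', level=4)
    for name, d in [("D4", cD4), ("D3", cD3), ("D2", cD2), ("D1", cD1)]:
        print(name, "energy:", float(np.sum(d ** 2)))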
FogBank: a single cell segmentation across multiple cell lines and image modalities.
Chalfoun, Joe; Majurski, Michael; Dima, Alden; Stuelten, Christina; Peskin, Adele; Brady, Mary
2014-12-30
Many cell lines currently used in medical research, such as cancer cells or stem cells, grow in confluent sheets or colonies. The biology of individual cells provides valuable information, thus the separation of touching cells in these microscopy images is critical for counting, identification and measurement of individual cells. Over-segmentation of single cells continues to be a major problem for methods based on morphological watershed due to the high level of noise in microscopy cell images. There is a need for a new segmentation method that is robust over a wide variety of biological images and can accurately separate individual cells even in challenging datasets such as confluent sheets or colonies. We present a new automated segmentation method called FogBank that accurately separates cells when confluent and touching each other. This technique is successfully applied to phase contrast, bright field, fluorescence microscopy and binary images. The method is based on morphological watershed principles with two new features to improve accuracy and minimize over-segmentation. First, FogBank uses histogram binning to quantize pixel intensities, which minimizes the image noise that causes over-segmentation. Second, FogBank uses a geodesic distance mask derived from raw images to detect the shapes of individual cells, in contrast to the more linear cell edges that other watershed-like algorithms produce. We evaluated the segmentation accuracy against manually segmented datasets using two metrics. FogBank achieved segmentation accuracy on the order of 0.75 (1 being a perfect match). We compared our method with other available segmentation techniques in terms of performance over the reference data sets. FogBank outperformed all related algorithms. The accuracy has also been visually verified on data sets with 14 cell lines across 3 imaging modalities, leading to 876 segmentation evaluation images. FogBank produces single cell segmentation from confluent cell sheets with high accuracy. It can be applied to microscopy images of multiple cell lines and a variety of imaging modalities. The code for the segmentation method is available as open source and includes a graphical user interface for user-friendly execution.
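The two ideas highlighted above, intensity quantization via histogram binning and a distance-based mask feeding a watershed, can be illustrated with standard scikit-image building blocks; this is a rough sketch of the general principle under assumed parameters, not the released FogBank code.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def rough_single_cell_segmentation(image, foreground_mask, n_bins=16):
        """Illustrative watershed segmentation with histogram-binned intensities."""
        # 1) Quantize pixel intensities into a few histogram bins, which
        #    suppresses the intensity noise that drives over-segmentation.
        edges = np.histogram_bin_edges(image[foreground_mask], bins=n_bins)
        quantized = np.digitize(image, edges[1:-1])
        # 2) Distance information (here simply the Euclidean distance
        #    transform of the foreground mask, not a geodesic distance).
        distance = ndi.distance_transform_edt(foreground_mask)
        # 3) Seeds at local distance maxima, one marker per presumed cell.
        coords = peak_local_max(distance, min_distance=5,
                                labels=ndi.label(foreground_mask)[0])
        markers = np.zeros(image.shape, dtype=int)
        markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
        # 4) Watershed on the inverted, quantized intensity landscape.
        return watershed(-quantized.astype(float), markers, mask=foreground_mask)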
Bank gully extraction from DEMs utilizing the geomorphologic features of a loess hilly area in China
NASA Astrophysics Data System (ADS)
Yang, Xin; Na, Jiaming; Tang, Guoan; Wang, Tingting; Zhu, Axing
2018-04-01
As one of the most active gully types in the Chinese Loess Plateau, bank gullies generally indicate soil loss and land degradation. This study addressed the lack of detailed, large-scale monitoring of bank gullies and proposed a semi-automatic method for extracting bank gullies from typical topographic features derived from 5 m resolution DEMs. First, channel networks, including bank gullies, are extracted through an iterative channel burn-in algorithm. Second, gully heads are correctly positioned based on the spatial relationship between gully heads and their corresponding gully shoulder lines. Third, bank gullies are distinguished from other gullies using the newly proposed topographic measurement of "relative gully depth (RGD)." The experimental results from the loess hilly area of the Linjiajian watershed in the Chinese Loess Plateau show that the producer accuracy reaches 87.5%. The accuracy is affected by the DEM resolution and RGD parameters, as well as the accuracy of the gully shoulder line. The application in the Madigou watershed with a high DEM resolution validated the duplicability of this method in other areas. The overall performance shows that bank gullies can be extracted with acceptable accuracy over a large area, which provides essential information for research on soil erosion, geomorphology, and environmental ecology.
Takahashi, Hiro; Nemoto, Takeshi; Yoshida, Teruhiko; Honda, Hiroyuki; Hasegawa, Tadashi
2006-01-01
Background: Recent advances in genome technologies have provided an excellent opportunity to determine the complete biological characteristics of neoplastic tissues, resulting in improved diagnosis and selection of treatment. To accomplish this objective, it is important to establish a sophisticated algorithm that can deal with large quantities of data such as gene expression profiles obtained by DNA microarray analysis. Results: Previously, we developed the projective adaptive resonance theory (PART) filtering method as a gene filtering method. This is one of the clustering methods that can select specific genes for each subtype. In this study, we applied the PART filtering method to analyze microarray data that were obtained from soft tissue sarcoma (STS) patients for the extraction of subtype-specific genes. The performance of the filtering method was evaluated by comparison with other widely used methods, such as signal-to-noise, significance analysis of microarrays, and nearest shrunken centroids. In addition, various combinations of filtering and modeling methods were used to extract essential subtype-specific genes. The combination of the PART filtering method and boosting, the PART-BFCS method, showed the highest accuracy. Seven of the 15 genes frequently selected by this method, MIF, CYFIP2, HSPCB, TIMP3, LDHA, ABR, and RGS3, are known prognostic marker genes for other tumors. These genes are candidate marker genes for the diagnosis of STS. Correlation analysis was performed to extract marker genes that were not selected by PART-BFCS. Sixteen of the genes extracted in this way are also known prognostic marker genes for other tumors, and they could be candidate marker genes for the diagnosis of STS. Conclusion: A two-step procedure, consisting of PART-BFCS followed by correlation analysis, is proposed. The results suggest that novel diagnostic and therapeutic targets for STS can be extracted by a procedure that includes the PART filtering method. PMID:16948864
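For comparison, the signal-to-noise baseline mentioned above ranks each gene by the separation of its class means relative to the class standard deviations; a minimal sketch (with a hypothetical genes-by-samples expression matrix and class labels) is:

    import numpy as np

    def signal_to_noise_ranking(expr, labels, top_k=15):
        """Rank genes by the signal-to-noise statistic (mu1 - mu2) / (sd1 + sd2).

        expr   : genes x samples expression matrix
        labels : boolean array, True for samples of the subtype of interest
        """
        a, b = expr[:, labels], expr[:, ~labels]
        s2n = (a.mean(axis=1) - b.mean(axis=1)) / (a.std(axis=1) + b.std(axis=1))
        return np.argsort(-np.abs(s2n))[:top_k]   # indices of the top-ranked genes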
Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe
2014-01-01
Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes to use the dual adaptive filtering by optimal projection (DAFOP) method to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists in applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components, which are considered to be cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists in using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, drawn from 20 clinical recordings of awake and sleeping adults exhibiting pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals, even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can be easily used in clinical EEG recordings. PMID:25298967
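The CSP step referred to above reduces to a generalized eigenvalue problem between two channel covariance matrices; the following is a minimal sketch of that core computation (the band-limited data matrices X1 and X2 are hypothetical inputs, and this is not the DAFOP implementation).

    import numpy as np
    from scipy import linalg

    def csp_filters(X1, X2):
        """Common spatial patterns from two (channels x samples) EEG segments.

        Returns spatial filters as columns, ordered so the first maximize the
        variance of segment 1 relative to segment 2.
        """
        C1 = np.cov(X1)
        C2 = np.cov(X2)
        # Generalized eigenvalue problem  C1 w = lambda (C1 + C2) w
        eigvals, eigvecs = linalg.eigh(C1, C1 + C2)
        order = np.argsort(eigvals)[::-1]        # largest variance ratio first
        return eigvecs[:, order]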
26 CFR 1.585-5 - Denial of bad debt reserves for large banks.
Code of Federal Regulations, 2010 CFR
2010-04-01
....585-5 Section 1.585-5 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Banking Institutions § 1.585-5 Denial of bad debt...)) with respect to its banking business is the specific charge-off method. In applying § 1.381(c)(4)-1(c...
BMP FILTERS: UPFLOW VS. DOWNFLOW
Filtration methods have been found to be effective in reducing pollutant levels in stormwater. The main drawback of these methods is that the filters get clogged frequently and require periodic maintenance. In stormwater treatment, because of the cost of pumping, the filters ar...
Improving Soil Seed Bank Management.
Haring, Steven C; Flessner, Michael L
2018-05-08
Problems associated with simplified weed management motivate efforts for diversification. Integrated weed management uses fundamentals of weed biology and applied ecology to provide a framework for diversified weed management programs; the soil seed bank comprises a necessary part of this framework. By targeting seeds, growers can inhibit the propagule pressure on which annual weeds depend for agricultural invasion. Some current management practices affect weed seed banks, such as crop rotation and tillage, but these tools are often used without specific intention to manage weed seeds. Difficulties quantifying the weed seed bank, understanding seed bank phenology, and linking seed banks to emerged weed communities challenge existing soil seed bank management practices. Improved seed bank quantification methods could include DNA profiling of the soil seed bank, mark and recapture, or 3D LIDAR mapping. Successful and sustainable soil seed bank management must constrain functionally diverse and changing weed communities. Harvest weed seed controls represent a step forward, but over-reliance on this singular technique could make it short-lived. Researchers must explore tools inspired by other pest management disciplines, such as gene drives or habitat modification for predatory organisms. Future weed seed bank management will combine multiple complementary practices that enhance diverse agroecosystems.
Analysis of Time Filters in Multistep Methods
NASA Astrophysics Data System (ADS)
Hurl, Nicholas
Geophysical flow simulations have evolved sophisticated implicit-explicit time stepping methods (based on fast-slow wave splittings) followed by time filters to control any unstable modes that result. Time filters are modular and parallel. Their effect on the stability of the overall process has been tested in numerous simulations, but never analyzed. Stability is proven herein, by energy methods, for the Crank-Nicolson Leapfrog (CNLF) method with the Robert-Asselin (RA) time filter and for the CNLF method with the Robert-Asselin-Williams (RAW) time filter applied to systems. We derive an equivalent multistep method for CNLF+RA and CNLF+RAW, and stability regions are obtained. The time step restriction for energy stability of CNLF+RA is smaller than that of CNLF, and the CNLF+RAW time step restriction is even smaller. Numerical tests find that RA and RAW add numerical dissipation. This thesis also shows that all modes of the CNLF method are asymptotically stable under the standard time step condition.
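The Robert-Asselin and Robert-Asselin-Williams filters analyzed here act on the three time levels of a leapfrog step; a minimal sketch of one filtered leapfrog update for u' = f(u) is given below (the parameter values are illustrative and the Crank-Nicolson part of the CNLF splitting is omitted for brevity). Setting alpha = 1 recovers the classical RA filter.

    import numpy as np

    def leapfrog_raw_step(u_prev, u_curr, f, dt, nu=0.2, alpha=0.53):
        """One leapfrog step of u' = f(u) followed by the RAW time filter.

        alpha = 1.0 reduces the update to the Robert-Asselin (RA) filter.
        Returns (filtered u_curr, new u_next).
        """
        u_next = u_prev + 2.0 * dt * f(u_curr)            # leapfrog step
        d = nu * (u_prev - 2.0 * u_curr + u_next) / 2.0   # filter displacement
        u_curr_f = u_curr + alpha * d                     # RA part (filters u^n)
        u_next_f = u_next - (1.0 - alpha) * d             # RAW correction to u^(n+1)
        return u_curr_f, u_next_f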
Balanced multiwavelets with interpolatory property.
Li, Baobin; Peng, Lizhong
2011-05-01
Balanced multiwavelets with the interpolatory property are discussed in this paper. This kind of multiwavelet can have a sampling property like Shannon's sampling theorem. It has been shown that the corresponding matrix-valued refinable mask has a special structure, and an orthogonal multifilter bank {H(z),G(z)} can be reduced to a scalar-valued conjugate quadrature filter (CQF) a(z). But this does not mean that any scalar CQF can form a "good" multifilter bank which can generate a vector-valued refinable function with some degree of smoothness. In the context of balanced multiwavelets, we give the definition of transferring balance order, which a scalar CQF a(z) must satisfy to guarantee that the generated multiwavelet Ψ is balanced. On the basis of the parametrization of a scalar CQF of arbitrary length and the conditions for transferring balance order, a parametrization of multifilter banks that generate interpolatory multiwavelets and interpolatory scaling functions is obtained. Moreover, some balanced interpolatory multiwavelets have been constructed. Interpolatory analysis-ready multiwavelets (armlets) are also discussed in this paper. It is known that the conditions for armlets are easy to validate, compared with those for balanced multiwavelets. It will also be shown that if the corresponding scaling function Φ is interpolatory, the multiwavelet Ψ is balanced of order n if and only if it is an armlet of order n. Finally, the application of balanced multiwavelets with the interpolatory property in image processing is also discussed.
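For reference, the scalar CQF condition invoked throughout the abstract is the standard power-complementarity constraint; with the normalization a(1) = \sqrt{2} (other conventions place the constant at 1) it reads

    a(z)\,a(z^{-1}) + a(-z)\,a(-z^{-1}) = 2, \qquad |z| = 1,

or equivalently, on the unit circle,

    |a(e^{i\omega})|^2 + |a(e^{i(\omega+\pi)})|^2 = 2 .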