Design of order statistics filters using feedforward neural networks
NASA Astrophysics Data System (ADS)
Maslennikova, Yu. S.; Bochkarev, V. V.
2016-08-01
In recent years, significant progress has been made in the development of nonlinear data processing techniques. Such techniques are widely used in digital data filtering and image enhancement. Many of the most effective nonlinear filters are based on order statistics; the widely used median filter is the best-known order-statistic filter. A generalized form of these filters can be constructed from Lloyd's statistics. Filters based on order statistics have excellent robustness properties in the presence of impulsive noise. In this paper, we present a special approach for the synthesis of order-statistics filters using artificial neural networks. Optimal Lloyd's statistics are used to select the initial weights of the neural network. The adaptive properties of neural networks provide opportunities to optimize order-statistics filters for data with asymmetric distribution functions. Several examples demonstrate the properties and performance of the presented approach.
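To make the connection between median filtering and the more general Lloyd-type weighting concrete, the following sketch implements a one-dimensional order-statistics (L-) filter in NumPy: each window is sorted and the order statistics are combined with a fixed weight vector, the median being the special case with all weight on the middle order statistic. This is only a minimal illustration of the filter family discussed above, not the authors' neural-network synthesis; the function name l_filter and the example data are hypothetical.

```python
import numpy as np

def l_filter(x, weights):
    """Apply a 1-D order-statistics (L-) filter.

    For each sliding window, the samples are sorted and combined as a
    weighted sum of the order statistics.  A median filter is the special
    case where all weight sits on the middle order statistic.
    """
    n = len(weights)
    pad = n // 2
    xp = np.pad(np.asarray(x, dtype=float), pad, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(xp, n)
    ordered = np.sort(windows, axis=1)       # order statistics per window
    return ordered @ np.asarray(weights)     # weighted combination

# Example: a 5-point median is an L-filter with weights (0, 0, 1, 0, 0).
signal = np.array([1.0, 1.2, 9.0, 1.1, 1.3, 1.2, 1.1])   # impulse at index 2
median_weights = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
print(l_filter(signal, median_weights))
```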
Morphological representation of order-statistics filters.
Charif-Chefchaouni, M; Schonfeld, D
1995-01-01
We propose a comprehensive theory for the morphological bounds on order-statistics filters (and their repeated iterations). Conditions are derived for morphological openings and closings to serve as lower and upper bounds, respectively, on order-statistics filters (and their repeated iterations). Under various assumptions, morphological open-closings and close-openings are also shown to serve as tighter lower and upper bounds, respectively, on iterations of order-statistics filters. Finally, simulations applying these results to image restoration are provided.
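As a small numerical companion to the abstract above (not a substitute for the derived conditions), the sketch below uses SciPy's grayscale morphology and rank filter to check, on a random test image, how often a 3x3 opening and closing bracket the 3x3 median from below and above. The test image and window size are arbitrary illustrative choices.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))

size = (3, 3)                      # flat 3x3 structuring element / window
opened = ndimage.grey_opening(img, size=size)
closed = ndimage.grey_closing(img, size=size)
median = ndimage.rank_filter(img, rank=4, size=size)   # rank 4 of 9 = median

# Empirically check how often the opening/closing bracket the median here.
lower_ok = np.mean(opened <= median + 1e-12)
upper_ok = np.mean(median <= closed + 1e-12)
print(f"opening <= median at {lower_ok:.1%} of pixels")
print(f"median <= closing at {upper_ok:.1%} of pixels")
```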
Adaptive interference cancel filter for evoked potential using high-order cumulants.
Lin, Bor-Shyh; Lin, Bor-Shing; Chong, Fok-Ching; Lai, Feipei
2004-01-01
This paper presents evoked potential (EP) processing using an adaptive interference cancel (AIC) filter with second- and higher-order cumulants. In the conventional ensemble-averaging method, experiments must be repeated many times to record the required data. The use of an AIC structure with second-order statistics for EP processing has proved more efficient than traditional averaging, but it is sensitive to both the reference-signal statistics and the choice of step size. We therefore propose a higher-order-statistics-based AIC method to overcome these disadvantages. The study was carried out on somatosensory EPs corrupted by EEG, using a gradient-type algorithm in the AIC method. Comparisons of AIC filters based on second-, third-, and fourth-order statistics are also presented. We observed that the AIC filter with third-order statistics has better convergence performance for EP processing and is not sensitive to the selection of step size or reference input.
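For readers unfamiliar with the AIC structure the paper builds on, the sketch below is a conventional second-order (LMS) adaptive interference canceller: a reference input correlated with the interference drives an FIR filter whose output is subtracted from the primary (EP plus EEG) channel. It shows only the baseline the paper improves upon; the cumulant-based gradient of the proposed method is not reproduced here, and the function name and parameters are illustrative.

```python
import numpy as np

def lms_canceller(primary, reference, order=8, mu=0.01):
    """Conventional second-order-statistics adaptive interference canceller.

    'primary' holds EP plus interference; 'reference' is correlated with the
    interference only.  An FIR filter driven by the reference is adapted with
    the LMS gradient rule, and the error signal is the cleaned output.
    """
    primary = np.asarray(primary, dtype=float)
    reference = np.asarray(reference, dtype=float)
    w = np.zeros(order)
    cleaned = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]   # most recent reference samples
        y = w @ x                          # interference estimate
        e = primary[n] - y                 # cleaned sample (error signal)
        w += 2 * mu * e * x                # LMS weight update
        cleaned[n] = e
    return cleaned
```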
An Automated Energy Detection Algorithm Based on Consecutive Mean Excision
2018-01-01
Abstract fragment: "... present in the RF spectrum." Subject terms: RF spectrum, detection threshold algorithm, consecutive mean excision, rank order filter, statistics. Contents include sections on the median, the rank order filter (ROF), the crest factor (CF), a statistical summary, the algorithm, conclusions, and references. Cited report: an automated energy detection algorithm based on morphological filter processing with a semi-disk structure. Adelphi (MD): Army Research Laboratory (US); 2018 Jan.
Two-Dimensional Hermite Filters Simplify the Description of High-Order Statistics of Natural Images.
Hu, Qin; Victor, Jonathan D
2016-09-01
Natural image statistics play a crucial role in shaping biological visual systems, understanding their function and design principles, and designing effective computer-vision algorithms. High-order statistics are critical for conveying local features, but they are challenging to study, largely because their number and variety are large. Here, via the use of two-dimensional Hermite (TDH) functions, we identify a covert symmetry in high-order statistics of natural images that simplifies this task. This emerges from the structure of TDH functions, which are an orthogonal set of functions that are organized into a hierarchy of ranks. Specifically, we find that the shape (skewness and kurtosis) of the distribution of filter coefficients depends only on the projection of the function onto a 1-dimensional subspace specific to each rank. The characterization of natural image statistics provided by TDH filter coefficients reflects both their phase and amplitude structure, and we suggest an intuitive interpretation for the special subspace within each rank.
Selection vector filter framework
NASA Astrophysics Data System (ADS)
Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.
2003-10-01
We provide a unified framework of nonlinear vector techniques that output the lowest-ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. A new class of nonlinear selection filters is based on robust order-statistic theory and the minimization of a weighted distance function to the other input samples. The proposed method can be designed to perform a variety of filtering operations, including previously developed techniques such as the vector median, the basic vector directional filter, the directional distance filter, weighted vector median filters and weighted directional filters. A wide range of filtering operations is guaranteed by the filter structure, which uses two independent weight vectors for the angular and distance domains of the vector space. In order to adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms that take advantage of weighted median filters and of the relationship between the standard median filter and the vector median filter. Thus, we can deal with both the statistical and deterministic aspects of the filter design process. It is shown that the proposed method has the required properties, namely the capability of modelling the underlying system in the application at hand, robustness with respect to errors in the model of the underlying system, the availability of a training procedure and, finally, simplicity of filter representation, analysis, design and implementation. Simulation studies also indicate that the new filters are computationally attractive and perform excellently in environments corrupted by bit errors and impulsive noise.
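One of the special cases covered by this selection framework is the classical vector median filter, which outputs the window sample minimizing the aggregate distance to all other samples. A minimal NumPy sketch of that special case (not of the full weighted angular/distance framework) follows; the toy window values are hypothetical.

```python
import numpy as np

def vector_median(window):
    """Return the sample minimizing the sum of Euclidean distances
    to all other samples in the window (classical vector median).

    window: array of shape (N, C) -- N color vectors with C channels.
    """
    d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=-1)
    return window[np.argmin(d.sum(axis=1))]

# Toy example: five RGB vectors, one impulsive outlier.
w = np.array([[10, 12, 11], [11, 12, 10], [250, 0, 255],
              [12, 11, 12], [10, 10, 10]], dtype=float)
print(vector_median(w))    # the outlier is never selected
```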
Representation of Probability Density Functions from Orbit Determination using the Particle Filter
NASA Technical Reports Server (NTRS)
Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell
2012-01-01
Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher-order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy depends on the number of particles or samples used. For this method to be applicable to real-case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining the higher-order statistical information obtained using the PF. Methods such as Principal Component Analysis (PCA) are based on utilizing up to second-order statistics and hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios, a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
NASA Astrophysics Data System (ADS)
Buzzicotti, M.; Linkmann, M.; Aluie, H.; Biferale, L.; Brasseur, J.; Meneveau, C.
2018-02-01
The effects of different filtering strategies on the statistical properties of the resolved-to-subfilter scale (SFS) energy transfer are analysed in forced homogeneous and isotropic turbulence. We carry out a priori analyses of the statistical characteristics of SFS energy transfer by filtering data obtained from direct numerical simulations with up to 2048^3 grid points as a function of the filter cutoff scale. In order to quantify the dependence of extreme events and anomalous scaling on the filter, we compare a sharp Fourier Galerkin projector, a Gaussian filter and a novel class of Galerkin projectors with non-sharp spectral filter profiles. Of particular interest is the importance of Galilean invariance, and we confirm that local SFS energy transfer displays intermittency scaling in both skewness and flatness as a function of the cutoff scale. Furthermore, we quantify the robustness of the scaling as a function of the filtering type.
Response properties of ON-OFF retinal ganglion cells to high-order stimulus statistics.
Xiao, Lei; Gong, Han-Yan; Gong, Hai-Qing; Liang, Pei-Ji; Zhang, Pu-Ming
2014-10-17
The visual stimulus statistics are the fundamental parameters to provide the reference for studying visual coding rules. In this study, the multi-electrode extracellular recording experiments were designed and implemented on bullfrog retinal ganglion cells to explore the neural response properties to the changes in stimulus statistics. The changes in low-order stimulus statistics, such as intensity and contrast, were clearly reflected in the neuronal firing rate. However, it was difficult to distinguish the changes in high-order statistics, such as skewness and kurtosis, only based on the neuronal firing rate. The neuronal temporal filtering and sensitivity characteristics were further analyzed. We observed that the peak-to-peak amplitude of the temporal filter and the neuronal sensitivity, which were obtained from either neuronal ON spikes or OFF spikes, could exhibit significant changes when the high-order stimulus statistics were changed. These results indicate that in the retina, the neuronal response properties may be reliable and powerful in carrying some complex and subtle visual information.
Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises.
Jin, Qiyu; Grama, Ion; Liu, Quansheng
2017-01-01
In this paper we consider the problem of restoring an image contaminated by a mixture of Gaussian and impulse noise. We propose a new statistic, called ROADGI, which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with impulse noise in this context. Combining the ROADGI statistic with a weight-optimization method, we obtain a new algorithm, the Optimal Weights Mixed Filter (OWMF), to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noise, as well as for impulse noise alone and for Gaussian noise alone.
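A minimal sketch of the underlying ROAD statistic (the classical grayscale version, not the proposed ROADGI refinement) is given below: for each pixel the absolute differences to its eight neighbours are sorted and the smallest m are summed, so impulse-corrupted pixels stand out with large values. The function name and the choice m = 4 are illustrative.

```python
import numpy as np

def road(img, m=4):
    """Rank-Ordered Absolute Differences (ROAD) statistic for each pixel.

    For every pixel, the absolute differences to its 8 neighbours are sorted
    and the m smallest are summed; impulse-corrupted pixels get large values.
    """
    img = np.asarray(img, dtype=float)
    p = np.pad(img, 1, mode='reflect')
    diffs = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            shifted = p[1 + di:1 + di + img.shape[0],
                        1 + dj:1 + dj + img.shape[1]]
            diffs.append(np.abs(img - shifted))
    diffs = np.sort(np.stack(diffs), axis=0)
    return diffs[:m].sum(axis=0)

# Pixels whose ROAD value exceeds a chosen threshold are treated as impulses.
```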
The Behavior of Filters and Smoothers for Strongly Nonlinear Dynamics
NASA Technical Reports Server (NTRS)
Zhu, Yanqiu; Cohn, Stephen E.; Todling, Ricardo
1999-01-01
The Kalman filter is the optimal filter in the presence of known Gaussian error statistics and linear dynamics. Extending the filter to nonlinear dynamics is non-trivial in the sense of appropriately representing the high-order moments of the statistics. Monte Carlo, ensemble-based methods have been advocated as the methodology for representing high-order moments without any questionable closure assumptions (e.g., Miller 1994). Investigation along these lines has been conducted for highly idealized dynamics such as the strongly nonlinear Lorenz (1963) model as well as more realistic models of the oceans (Evensen and van Leeuwen 1996) and atmosphere (Houtekamer and Mitchell 1998). A few relevant issues in this context are the number of ensemble members necessary to properly represent the error statistics and the modifications to the usual filter equations necessary to allow for a correct update of the ensemble members (Burgers 1998). The ensemble technique has also been applied to the problem of smoothing, for which similar questions apply. Ensemble smoother examples, however, seem quite puzzling in that the resulting state estimates are worse than those of their filter analogue (Evensen 1997). In this study, we use concepts in probability theory to revisit the ensemble methodology for filtering and smoothing in data assimilation. We use the Lorenz (1963) model to test and compare the behavior of a variety of implementations of ensemble filters. We also implement ensemble smoothers that are able to perform better than their filter counterparts. A discussion of the feasibility of these techniques for large data assimilation problems will be given at the time of the conference.
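As a concrete illustration of the ensemble methodology discussed above, the following is a minimal stochastic ensemble Kalman filter with perturbed observations applied to the Lorenz (1963) model. It is a generic textbook-style sketch, not one of the specific implementations compared in the study; ensemble size, observation noise and all other parameters are arbitrary illustrative choices.

```python
import numpy as np

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def rk4(x, dt):
    k1 = lorenz63(x)
    k2 = lorenz63(x + 0.5 * dt * k1)
    k3 = lorenz63(x + 0.5 * dt * k2)
    k4 = lorenz63(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(1)
dt, steps_per_obs, n_cycles, n_ens = 0.01, 25, 200, 20
obs_std = 1.0
H = np.eye(3)                                   # observe all components
R = obs_std ** 2 * np.eye(3)

truth = np.array([1.0, 1.0, 1.0])
ens = truth + rng.normal(0, 2.0, size=(n_ens, 3))

for _ in range(n_cycles):
    for _ in range(steps_per_obs):              # forecast step
        truth = rk4(truth, dt)
        ens = np.array([rk4(m, dt) for m in ens])
    y = H @ truth + rng.normal(0, obs_std, 3)   # noisy observation
    X = ens.T                                   # 3 x n_ens
    A = X - X.mean(axis=1, keepdims=True)
    P = A @ A.T / (n_ens - 1)                   # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    Yp = y[:, None] + rng.normal(0, obs_std, size=(3, n_ens))
    X = X + K @ (Yp - H @ X)                    # perturbed-obs analysis update
    ens = X.T

print("analysis RMSE:", np.sqrt(np.mean((ens.mean(axis=0) - truth) ** 2)))
```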
NASA Astrophysics Data System (ADS)
Zhou, Anran; Xie, Weixin; Pei, Jihong
2018-06-01
Accurate detection of maritime targets in infrared imagery under various sea-clutter conditions is always a challenging task. The fractional Fourier transform (FRFT) is the extension of the Fourier transform to fractional orders and carries richer spatial-frequency information. By combining it with high-order statistic filtering, a new ship detection method is proposed. First, the proper range of the angle parameter is determined to make it easier to separate the ship components from the background. Second, a new high-order statistic curve (HOSC) at each fractional frequency point is designed. It is shown that the maximal peak interval in the HOSC reflects the target information, while the points outside the interval reflect the background, and that the HOSC value for the ship is much larger than that for the sea clutter. The curve's maximal target peak interval is then located and extracted by bandpass filtering in the fractional Fourier domain. The HOSC value outside the peak interval decreases rapidly to 0, so the background is effectively suppressed. Finally, the detection result is obtained by double-threshold segmentation and a target-region selection method. The results show that the proposed method performs well for maritime target detection under strong clutter.
NASA Astrophysics Data System (ADS)
Yousefzadeh, Hoorvash Camilia; Lecomte, Roger; Fontaine, Réjean
2012-06-01
A fast Wiener filter-based crystal identification (WFCI) algorithm was recently developed to discriminate crystals with close scintillation decay times in phoswich detectors. Despite the promising performance of WFCI, the influence of various physical factors and electrical noise sources of the data acquisition chain (DAQ) on the crystal identification process was not fully investigated. This paper examines the effect of different noise sources, such as photon statistics, avalanche photodiode (APD) excess multiplication noise, and front-end electronic noise, as well as the influence of different shaping filters on the performance of the WFCI algorithm. To this end, a PET-like signal simulator based on a model of the LabPET DAQ, a small animal APD-based digital PET scanner, was developed. Simulated signals were generated under various noise conditions with CR-RC shapers of order 1, 3, and 5 having different time constants (τ). Applying the WFCI algorithm to these simulated signals showed that the non-stationary Poisson photon statistics is the main contributor to the identification error of WFCI algorithm. A shaping filter of order 1 with τ = 50 ns yielded the best WFCI performance (error 1%), while a longer shaping time of τ = 100 ns slightly degraded the WFCI performance (error 3%). Filters of higher orders with fast shaping time constants (10-33 ns) also produced good WFCI results (error 1.4% to 1.6%). This study shows the advantage of the pulse simulator in evaluating various DAQ conditions and confirms the influence of the detection chain on the WFCI performance.
Selected annotated bibliographies for adaptive filtering of digital image data
Mayers, Margaret; Wood, Lynnette
1988-01-01
Digital spatial filtering is an important tool both for enhancing the information content of satellite image data and for implementing cosmetic effects which make the imagery more interpretable and appealing to the eye. Spatial filtering is a context-dependent operation that alters the gray level of a pixel by computing a weighted average formed from the gray-level values of other pixels in the immediate vicinity. Traditional spatial filtering involves passing a particular filter or set of filters over an entire image. This assumes that the filter parameter values are appropriate for the entire image, which in turn is based on the assumption that the statistics of the image are constant over the image. However, the statistics of an image may vary widely over the image, requiring an adaptive or "smart" filter whose parameters change as a function of the local statistical properties of the image; a pixel would then be averaged only with more typical members of the same population. This annotated bibliography cites some of the work done in the area of adaptive filtering. The methods usually fall into two categories: (a) those that segment the image into subregions, each assumed to have stationary statistics, and use a different filter on each subregion, and (b) those that use a two-dimensional "sliding window" to continuously estimate the filter in either the spatial or frequency domain, or in both domains (a sketch of one such local-statistics filter follows this entry). They may be used to deal with images degraded by space-variant noise, to suppress undesirable local radiometric statistics while enforcing desirable (user-defined) statistics, to treat problems where space-variant point spread functions are involved, to segment images into regions of constant value for classification, or to "tune" images in order to remove (nonstationary) variations in illumination, noise, contrast, shadows, or haze. Since adaptive filtering, like nonadaptive filtering, is used in image processing to accomplish various goals, this bibliography is organized in subsections based on application areas. Contrast enhancement, edge enhancement, noise suppression, and smoothing are typically performed in order to correct degradations introduced in the imaging process (for example, degradations due to the optics and electronics of the sensor, or to blurring caused by the intervening atmosphere, uniform motion, or defocused optics). Some of the papers listed may apply to more than one of the above categories; when this happens the paper is listed under the category for which the paper's emphasis is greatest. A list of survey articles is also supplied; these articles are general discussions of adaptive filters and reviews of work done. Finally, a short list of miscellaneous articles is included which were felt to be sufficiently important but do not fit into any of the above categories. This bibliography, listing items published from 1970 through 1987, is extensive but by no means complete. It is intended as a guide for scientists and image analysts, listing references for background information as well as areas of significant development in adaptive filtering.
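A representative example of category (b) above is a Lee-style local-statistics filter, in which a sliding window supplies the local mean and variance and the filter blends each pixel with the local mean according to how much the local variance exceeds an assumed noise variance. The sketch below is a generic illustration of this idea (not taken from any single cited paper); the window size and noise variance are placeholder values.

```python
import numpy as np
from scipy import ndimage

def local_statistics_filter(img, window=7, noise_var=0.01):
    """Lee-style adaptive filter driven by local mean and variance.

    In nearly uniform regions (local variance ~ noise variance) the output
    approaches the local mean; near edges (large local variance) the original
    pixel is mostly preserved.
    """
    img = img.astype(float)
    mean = ndimage.uniform_filter(img, window)
    mean_sq = ndimage.uniform_filter(img * img, window)
    local_var = np.maximum(mean_sq - mean ** 2, 0.0)
    signal_var = np.maximum(local_var - noise_var, 0.0)
    gain = signal_var / (signal_var + noise_var)
    return mean + gain * (img - mean)
```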
ECG artifact cancellation in surface EMG signals by fractional order calculus application.
Miljković, Nadica; Popović, Nenad; Djordjević, Olivera; Konstantinović, Ljubica; Šekara, Tomislav B
2017-03-01
New aspects of automatic electrocardiography (ECG) artifact removal from surface electromyography signals by application of fractional-order calculus in combination with linear and nonlinear moving-window filters are explored. Surface electromyography recordings of skeletal trunk muscles are commonly contaminated with spike-shaped artifacts. This artifact originates from electrical heart activity, recorded as electrocardiography, which is commonly present in surface electromyography signals recorded in the proximity of the heart. For an appropriate assessment of neuromuscular changes by means of surface electromyography, proper filtering of the electrocardiography artifact is crucial. A novel method for automatic artifact cancellation in surface electromyography signals applying fractional-order calculus and a nonlinear median filter is introduced. The proposed method is compared with a linear moving-average filter, with and without prior application of fractional-order calculus. 3D graphs for assessment of the filter window lengths, crest factors, root-mean-square differences, and fractional calculus orders (called WFC and WRC graphs) are introduced. For an appropriate quantitative evaluation of the filtering, a synthetic electrocardiography signal and an analogous semi-synthetic dataset were generated. Examples of noise removal in 10 able-bodied subjects and in one patient with muscle dystrophy are presented for qualitative analysis. The crest factors, correlation coefficients, and root-mean-square differences of the recorded and semi-synthetic electromyography datasets showed that the most successful method was the median filter in combination with fractional-order calculus of order 0.9. Statistically significantly greater ECG peak reduction (p < 0.001) was obtained with the median filter than with the moving-average filter in cases where the amplitude of muscle contraction was low compared with the ECG spikes. The presented results suggest that the novel method combining a median filter and fractional-order calculus can be used for automatic filtering of electrocardiography artifacts in surface electromyography signal envelopes recorded from trunk muscles.
Particle filters, a quasi-Monte-Carlo-solution for segmentation of coronaries.
Florin, Charles; Paragios, Nikos; Williams, Jim
2005-01-01
In this paper we propose a Particle Filter-based approach for the segmentation of coronary arteries. To this end, successive planes of the vessel are modeled as unknown states of a sequential process. Such states consist of the orientation, position, shape model and appearance (in statistical terms) of the vessel that are recovered in an incremental fashion, using a sequential Bayesian filter (Particle Filter). In order to account for bifurcations and branchings, we consider a Monte Carlo sampling rule that propagates in parallel multiple hypotheses. Promising results on the segmentation of coronary arteries demonstrate the potential of the proposed approach.
Ladar range image denoising by a nonlocal probability statistics algorithm
NASA Astrophysics Data System (ADS)
Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi
2013-01-01
Based on the characteristics of coherent ladar range images and on the nonlocal means (NLM) approach, a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF), while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are found by a block-matching operation and form a group. The pixels in the group are analyzed by probability statistics, and the gray value with the maximum probability is used as the estimate of the current pixel. Simulated coherent-ladar range images with different carrier-to-noise ratios and a real coherent-ladar range image with 8 gray levels are denoised by this algorithm, and the results are compared with those of the median filter, the multitemplate order mean filter, NLM, the median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range-abnormality noise and Gaussian noise in coherent-ladar range images are effectively suppressed by NLPS.
NASA Technical Reports Server (NTRS)
Park, Steve
1990-01-01
A large and diverse number of computational techniques are routinely used to process and analyze remotely sensed data. These techniques include: univariate statistics; multivariate statistics; principal component analysis; pattern recognition and classification; other multivariate techniques; geometric correction; registration and resampling; radiometric correction; enhancement; restoration; Fourier analysis; and filtering. Each of these techniques will be considered, in order.
Texture classification using autoregressive filtering
NASA Technical Reports Server (NTRS)
Lawton, W. M.; Lee, M.
1984-01-01
A general theory of image texture models is proposed and its applicability to the problem of scene segmentation using texture classification is discussed. An algorithm, based on half-plane autoregressive filtering, which optimally utilizes second-order statistics to discriminate between texture classes represented by arbitrary wide-sense stationary random fields is described. Empirical results of applying this algorithm to natural and synthesized scenes are presented and future research is outlined.
A Virtual Study of Grid Resolution on Experiments of a Highly-Resolved Turbulent Plume
NASA Astrophysics Data System (ADS)
Maisto, Pietro M. F.; Marshall, Andre W.; Gollner, Michael J.; Fire Protection Engineering Department Collaboration
2017-11-01
An accurate representation of sub-grid scale turbulent mixing is critical for modeling fire plumes and smoke transport. In this study, PLIF and PIV diagnostics are used with the saltwater modeling technique to provide highly-resolved instantaneous field measurements in unconfined turbulent plumes useful for statistical analysis, physical insight, and model validation. The effect of resolution was investigated employing a virtual interrogation window (of varying size) applied to the high-resolution field measurements. Motivated by LES low-pass filtering concepts, the high-resolution experimental data in this study can be analyzed within the interrogation windows (i.e. statistics at the sub-grid scale) and on interrogation windows (i.e. statistics at the resolved scale). A dimensionless resolution threshold (L/D*) criterion was determined to achieve converged statistics on the filtered measurements. Such a criterion was then used to establish the relative importance between large and small-scale turbulence phenomena while investigating specific scales for the turbulent flow. First order data sets start to collapse at a resolution of 0.3D*, while for second and higher order statistical moments the interrogation window size drops down to 0.2D*.
Long-term effects on symptoms by reducing electric fields from visual display units.
Oftedal, G; Nyvang, A; Moen, B E
1999-10-01
The purpose of the study was to see whether the results of an earlier study [i.e., that skin symptoms were reduced by reducing electric fields from visual display units (VDU)] could be reproduced or not. In addition, an attempt was made to determine whether eye symptoms and symptoms from the nervous system could be reduced by reducing VDU electric fields. The study was designed as a controlled double-blind intervention. The electric fields were reduced by using electrically conducting screen filters. Forty-two persons completed the study while working at their ordinary jobs, first 1 week with no filter, then 3 months with an inactive filter and then 3 months with an active filter (or in reverse order). The inactive filters were identical to the active ones, except that their ground cables were replaced by empty plastic insulation. The inactive filters did not reduce the fields from the VDU. The fields were significantly lower with active filters than with inactive filters. Most of the symptoms were statistically significantly less pronounced in the periods with the filters when compared with the period with no filter. This finding can be explained by visual effects and psychological effects. No statistically significant difference in symptom severity was observed between the period with an inactive filter and the one with an active filter. The study does not support the hypothesis that skin, eye, or nervous system symptoms can be reduced by reducing VDU electric fields.
Speckle noise reduction in SAR images ship detection
NASA Astrophysics Data System (ADS)
Yuan, Ji; Wu, Bin; Yuan, Yuan; Huang, Qingqing; Chen, Jingbo; Ren, Lin
2012-09-01
At present, there are two types of method for detecting ships in SAR images. One is direct detection, which detects the ships themselves. The other is indirect detection, which first detects ship wakes and then searches for ships around the wakes. Both types are affected by speckle noise. In order to improve the accuracy of ship detection and obtain accurate ship and ship-wake parameters from SAR images, such as ship length, width, area, outline and the angle of the ship wakes, it is necessary to remove speckle noise from SAR images before the data are used for ship detection. The choice of speckle-reduction filter depends on the requirements of the particular application. Several common filters are widely used for speckle reduction, such as the mean, median, Lee, enhanced Lee, Kuan, Frost, enhanced Frost and gamma filters, but these filters have disadvantages in SAR image ship detection because of the wide variety of ship types. Therefore, the wavelet transform and multi-resolution analysis were used to decompose a SAR ocean image into different frequency components or useful subbands and to effectively reduce the speckle in the subbands according to the local statistics within each band. Finally, an analysis of the statistical results is presented, which demonstrates the advantages and disadvantages of wavelet shrinkage techniques compared with standard speckle filters.
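The wavelet-shrinkage idea described above can be sketched as follows: take a log transform so the multiplicative speckle becomes approximately additive, decompose the image into subbands, soft-threshold each detail subband using a threshold derived from its own noise estimate, and reconstruct. The code assumes the PyWavelets package (pywt) is available; the wavelet, level and thresholding rule are illustrative choices, not the specific settings used in the paper.

```python
import numpy as np
import pywt

def wavelet_despeckle(img, wavelet='db2', level=3):
    """Speckle reduction by soft-thresholding wavelet subbands.

    The log transform turns multiplicative speckle into approximately
    additive noise; each detail subband is then soft-thresholded with a
    threshold scaled by its own noise estimate (robust MAD estimator).
    """
    log_img = np.log1p(img.astype(float))
    coeffs = pywt.wavedec2(log_img, wavelet, level=level)
    new_coeffs = [coeffs[0]]                              # keep approximation
    for detail in coeffs[1:]:
        shrunk = []
        for band in detail:
            sigma = np.median(np.abs(band)) / 0.6745      # noise estimate
            thr = sigma * np.sqrt(2 * np.log(band.size))  # universal threshold
            shrunk.append(pywt.threshold(band, thr, mode='soft'))
        new_coeffs.append(tuple(shrunk))
    return np.expm1(pywt.waverec2(new_coeffs, wavelet))
```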
The application of dummy noise adaptive Kalman filter in underwater navigation
NASA Astrophysics Data System (ADS)
Li, Song; Zhang, Chun-Hua; Luan, Jingde
2011-10-01
The track of an underwater target is easily affected by various factors, which cause poor Kalman filter performance when there are errors in the state and measurement models. To address this situation, a method based on dummy-noise compensation is presented. Dummy noise is added artificially to the state and measurement models, and the problem is then solved by an adaptive Kalman filter with unknown, time-varying noise statistics. Simulation results for underwater navigation show that the algorithm is effective.
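A minimal sketch of the dummy-noise idea on a generic linear model is shown below: artificial process noise is added to the nominal covariance so the Kalman filter tolerates errors in the state model. The adaptive estimation of the unknown, time-varying statistics described in the abstract is not reproduced; here the inflation level is simply a user-supplied parameter, and all names are illustrative.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R, dummy_q=0.0):
    """One Kalman cycle with artificial ('dummy') process noise added to Q.

    Inflating Q compensates for unmodelled dynamics in the state model, at the
    cost of smoothing less aggressively.  An adaptive variant would estimate
    the inflation online from the innovation statistics.
    """
    Qd = Q + dummy_q * np.eye(Q.shape[0])
    # predict
    x = F @ x
    P = F @ P @ F.T + Qd
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```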
Allner, S; Koehler, T; Fehringer, A; Birnbacher, L; Willner, M; Pfeiffer, F; Noël, P B
2016-05-21
The purpose of this work is to develop an image-based de-noising algorithm that exploits complementary information and noise statistics from multi-modal images, as they emerge in x-ray tomography techniques, for instance grating-based phase-contrast CT and spectral CT. Among the noise reduction methods, image-based de-noising is one popular approach and the so-called bilateral filter is a well known algorithm for edge-preserving filtering. We developed a generalization of the bilateral filter for the case where the imaging system provides two or more perfectly aligned images. The proposed generalization is statistically motivated and takes the full second order noise statistics of these images into account. In particular, it includes a noise correlation between the images and spatial noise correlation within the same image. The novel generalized three-dimensional bilateral filter is applied to the attenuation and phase images created with filtered backprojection reconstructions from grating-based phase-contrast tomography. In comparison to established bilateral filters, we obtain improved noise reduction and at the same time a better preservation of edges in the images on the examples of a simulated soft-tissue phantom, a human cerebellum and a human artery sample. The applied full noise covariance is determined via cross-correlation of the image noise. The filter results yield an improved feature recovery based on enhanced noise suppression and edge preservation as shown here on the example of attenuation and phase images captured with grating-based phase-contrast computed tomography. This is supported by quantitative image analysis. Without being bound to phase-contrast imaging, this generalized filter is applicable to any kind of noise-afflicted image data with or without noise correlation. Therefore, it can be utilized in various imaging applications and fields.
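For reference, the sketch below implements only the standard single-image bilateral filter that the paper generalizes: weights decay with both spatial distance and intensity difference, so smoothing is suppressed across edges. The multi-image generalization with full second-order noise covariance proposed in the paper is not shown; the parameters are placeholders.

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Standard edge-preserving bilateral filter (single image).

    Each pixel is replaced by a weighted mean of its neighbours, with weights
    that decay with both spatial distance and intensity difference.
    """
    img = img.astype(float)
    pad = np.pad(img, radius, mode='reflect')
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    H, W = img.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + H,
                          radius + dx:radius + dx + W]
            w = spatial[dy + radius, dx + radius] * \
                np.exp(-(shifted - img) ** 2 / (2 * sigma_r ** 2))
            out += w * shifted
            norm += w
    return out / norm
```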
State estimation and prediction using clustered particle filters.
Lee, Yoonsang; Majda, Andrew J
2016-12-20
Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors.
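As background for the clustered variant described above, the following is a minimal bootstrap particle filter on a scalar toy model, showing the forecast, importance-weighting and resampling cycle that clustering and particle adjustment are designed to localize and stabilize. The toy state and observation models and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(x):
    """Toy nonlinear state model with additive Gaussian noise."""
    return 0.5 * x + 25 * x / (1 + x ** 2) + rng.normal(0, 1.0, size=x.shape)

n_particles, n_steps, obs_std = 500, 50, 1.0
particles = rng.normal(0, 2, n_particles)
x_true = 0.0

for t in range(n_steps):
    x_true = propagate(np.array([x_true]))[0]
    y = x_true ** 2 / 20 + rng.normal(0, obs_std)        # nonlinear observation

    particles = propagate(particles)                     # forecast
    logw = -0.5 * ((y - particles ** 2 / 20) / obs_std) ** 2
    w = np.exp(logw - logw.max())                        # stable importance weights
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)      # resampling
    particles = particles[idx]

print("truth:", x_true, " posterior mean:", particles.mean())
```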
Scale-by-scale contributions to Lagrangian particle acceleration
NASA Astrophysics Data System (ADS)
Lalescu, Cristian C.; Wilczek, Michael
2017-11-01
Fluctuations on a wide range of scales in both space and time are characteristic of turbulence. Lagrangian particles, advected by the flow, probe these fluctuations along their trajectories. In an effort to isolate the influence of the different scales on Lagrangian statistics, we employ direct numerical simulations (DNS) combined with a filtering approach. Specifically, we study the acceleration statistics of tracers advected in filtered fields to characterize the smallest temporal scales of the flow. Emphasis is put on the acceleration variance as a function of filter scale, along with the scaling properties of the relevant terms of the Navier-Stokes equations. We furthermore discuss scaling ranges for higher-order moments of the tracer acceleration, as well as the influence of the choice of filter on the results. Starting from the Lagrangian tracer acceleration as the short time limit of the Lagrangian velocity increment, we also quantify the influence of filtering on Lagrangian intermittency. Our work complements existing experimental results on intermittency and accelerations of finite-sized, neutrally-buoyant particles: for the passive tracers used in our DNS, feedback effects are neglected such that the spatial averaging effect is cleanly isolated.
Adaptive nonlinear L2 and L3 filters for speckled image processing
NASA Astrophysics Data System (ADS)
Lukin, Vladimir V.; Melnik, Vladimir P.; Chemerovsky, Victor I.; Astola, Jaakko T.
1997-04-01
Here we propose adaptive nonlinear filters based on the calculation and analysis of two or three order statistics in a scanning window. They are designed for processing images corrupted by severe speckle noise with non-symmetrical (Rayleigh or one-sided exponential) distribution laws; impulsive noise can also be present. The proposed filtering algorithms provide a trade-off between efficient speckle-noise suppression, robustness, good edge/detail preservation, low computational complexity, and preservation of the average level in homogeneous regions of images. Quantitative evaluations of the characteristics of the proposed filters are presented, as well as the results of their application to real synthetic aperture radar and ultrasound medical images.
Speckle Filtering of GF-3 Polarimetric SAR Data with Joint Restriction Principle.
Xie, Jinwei; Li, Zhenfang; Zhou, Chaowei; Fang, Yuyuan; Zhang, Qingjun
2018-05-12
Polarimetric SAR (PolSAR) scattering characteristics of imagery are always obtained from second-order moment estimation of the multi-polarization data, that is, the estimation of covariance or coherency matrices. Because of the extra path lengths travelled by signals reflected from separate scatterers within a resolution cell, speckle noise always exists in SAR images and has a severe impact on the scattering performance, especially in single-look complex images. In order to achieve high accuracy in estimating the covariance or coherency matrices, three aspects are taken into consideration: (1) the edges and texture of the scene should remain distinct after speckle filtering; (2) the pixels used should have statistical characteristics similar to those of the object pixel; and (3) the polarimetric scattering signature should be preserved, in addition to speckle reduction. In this paper, a joint restriction principle is proposed to meet these requirements. Three different restriction principles are introduced into the speckle-filtering process. First, a new template, better suited to point or line targets, is designed to ensure morphological consistency. Then, the extent sigma filter is used to restrict the pixels in the aforementioned template to those with identical statistical characteristics. Finally, a polarimetric similarity factor is applied to the same pixels to guarantee similar polarimetric features among the candidate pixels. This processing procedure is named speckle filtering with a joint restriction principle, and the approach is applied to GF-3 polarimetric SAR data acquired over San Francisco, CA, USA. Its effectiveness in preserving image sharpness and the scattering mechanisms while reducing speckle is validated by comparison with boxcar filters and the refined Lee filter.
Orbital State Uncertainty Realism
NASA Astrophysics Data System (ADS)
Horwood, J.; Poore, A. B.
2012-09-01
Fundamental to the success of the space situational awareness (SSA) mission is the rigorous inclusion of uncertainty in the space surveillance network. The *proper characterization of uncertainty* in the orbital state of a space object is a common requirement to many SSA functions including tracking and data association, resolution of uncorrelated tracks (UCTs), conjunction analysis and probability of collision, sensor resource management, and anomaly detection. While tracking environments, such as air and missile defense, make extensive use of Gaussian and local linearity assumptions within algorithms for uncertainty management, space surveillance is inherently different due to long time gaps between updates, high misdetection rates, nonlinear and non-conservative dynamics, and non-Gaussian phenomena. The latter implies that "covariance realism" is not always sufficient. SSA also requires "uncertainty realism"; the proper characterization of both the state and covariance and all non-zero higher-order cumulants. In other words, a proper characterization of a space object's full state *probability density function (PDF)* is required. In order to provide a more statistically rigorous treatment of uncertainty in the space surveillance tracking environment and to better support the aforementioned SSA functions, a new class of multivariate PDFs are formulated which more accurately characterize the uncertainty of a space object's state or orbit. The new distribution contains a parameter set controlling the higher-order cumulants which gives the level sets a distinctive "banana" or "boomerang" shape and degenerates to a Gaussian in a suitable limit. Using the new class of PDFs within the general Bayesian nonlinear filter, the resulting filter prediction step (i.e., uncertainty propagation) is shown to have the *same computational cost as the traditional unscented Kalman filter* with the former able to maintain a proper characterization of the uncertainty for up to *ten times as long* as the latter. The filter correction step also furnishes a statistically rigorous *prediction error* which appears in the likelihood ratios for scoring the association of one report or observation to another. Thus, the new filter can be used to support multi-target tracking within a general multiple hypothesis tracking framework. Additionally, the new distribution admits a distance metric which extends the classical Mahalanobis distance (chi^2 statistic). This metric provides a test for statistical significance and facilitates single-frame data association methods with the potential to easily extend the covariance-based track association algorithm of Hill, Sabol, and Alfriend. The filtering, data fusion, and association methods using the new class of orbital state PDFs are shown to be mathematically tractable and operationally viable.
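The classical Mahalanobis-distance gate that the proposed metric extends can be written in a few lines; the sketch below computes the squared Mahalanobis distance of an observation from a track's Gaussian state estimate and compares it against a chi-square threshold. The state dimension and gate probability are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance of x from a Gaussian with (mean, cov)."""
    d = x - mean
    return float(d @ np.linalg.solve(cov, d))

# Gate a candidate observation-to-track association at the 99% level.
dim = 6                              # e.g. position + velocity state
gate = chi2.ppf(0.99, df=dim)        # chi-square threshold
# associate if mahalanobis_sq(obs_state, track_mean, track_cov) < gate
```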
PET Image Reconstruction Incorporating 3D Mean-Median Sinogram Filtering
NASA Astrophysics Data System (ADS)
Mokri, S. S.; Saripan, M. I.; Rahni, A. A. Abd; Nordin, A. J.; Hashim, S.; Marhaban, M. H.
2016-02-01
Positron emission tomography (PET) projection data, or sinograms, have poor statistics and randomness that produce noisy PET images. In order to improve the PET image, we propose pre-reconstruction sinogram filtering based on a 3D mean-median filter. The proposed filter is designed with three aims: to minimise angular blurring artifacts, to smooth flat regions and to preserve the edges in the reconstructed PET image. The performance of the pre-reconstruction sinogram filter prior to three established reconstruction methods, namely filtered backprojection (FBP), ordered-subset maximum-likelihood expectation maximization (OSEM) and OSEM with median root prior (OSEM-MRP), is investigated using simulated NCAT phantom PET sinograms generated by the PET analytical simulator ASIM. The improvement in the quality of the reconstructed images with and without sinogram filtering is assessed both visually and quantitatively based on global signal-to-noise ratio (SNR), local SNR, contrast-to-noise ratio (CNR) and edge-preservation capability. Further analysis of the achieved improvement is carried out for the iterative OSEM and OSEM-MRP reconstruction methods with and without pre-reconstruction filtering in terms of contrast recovery curve (CRC) versus noise trade-off, normalised mean square error versus iteration, local CNR versus iteration and lesion detectability. Overall, satisfactory results are obtained from both the visual and quantitative evaluations.
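A simple way to sketch the mean-median idea (not the authors' specific 3D filter design) is to compute both a local median and a local mean over the sinogram and switch between them based on how far each sample lies from its local median: flat, noise-dominated samples take the mean, while samples near edges or streaks keep the more robust median. The window size and switching factor below are placeholder values.

```python
import numpy as np
from scipy import ndimage

def mean_median_filter(sino, size=3, k=2.0):
    """Hybrid mean-median smoothing of a 3-D sinogram (bins x angles x slices).

    Samples close to the local median (flat, noise-dominated regions) are
    replaced by the local mean; samples far from it (edges, streaks) are
    replaced by the more robust local median.
    """
    sino = sino.astype(float)
    med = ndimage.median_filter(sino, size=size)
    mean = ndimage.uniform_filter(sino, size=size)
    mad = ndimage.median_filter(np.abs(sino - med), size=size) + 1e-12
    use_mean = np.abs(sino - med) <= k * mad
    return np.where(use_mean, mean, med)
```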
A CLT on the SNR of Diagonally Loaded MVDR Filters
NASA Astrophysics Data System (ADS)
Rubio, Francisco; Mestre, Xavier; Hachem, Walid
2012-08-01
This paper studies the fluctuations of the signal-to-noise ratio (SNR) of minimum variance distortionless response (MVDR) filters implementing diagonal loading in the estimation of the covariance matrix. Previous results in the signal processing literature are generalized and extended by considering both spatially and temporally correlated samples. Specifically, a central limit theorem (CLT) is established for the fluctuations of the SNR of the diagonally loaded MVDR filter, under both supervised and unsupervised training settings in adaptive filtering applications. Our second-order analysis is based on the Nash-Poincaré inequality and the integration-by-parts formula for Gaussian functionals, as well as classical tools from statistical asymptotic theory. Numerical evaluations validating the accuracy of the CLT confirm the asymptotic Gaussianity of the fluctuations of the SNR of the MVDR filter.
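For context, the diagonally loaded MVDR filter whose output SNR fluctuations are characterized in the paper has the standard closed form w = (R + delta*I)^{-1} s / (s^H (R + delta*I)^{-1} s); a NumPy sketch follows. The loading level and how the sample covariance is formed are application choices, not prescriptions from the paper.

```python
import numpy as np

def dl_mvdr_weights(R_hat, steering, loading):
    """Diagonally loaded MVDR filter weights.

    w = (R_hat + loading*I)^{-1} s / (s^H (R_hat + loading*I)^{-1} s)
    """
    Rl = R_hat + loading * np.eye(R_hat.shape[0])
    z = np.linalg.solve(Rl, steering)
    return z / (steering.conj() @ z)

# Sample covariance from n snapshots (columns of the m x n matrix x):
# R_hat = x @ x.conj().T / n
```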
1982-12-01
Symbol list (fragment): ... Sequence; dj, estimate of the desired signal; DEL, sampling time interval; DS, direct sequence; c, sufficient statistic; E/T, signal power; Erfc, complementary error function. Abstract fragments: "... a white Gaussian noise (WGN) generator was added. Also, a statistical subroutine was added in order to assess performance improvement at the ... reference code and then passed through a correlation detector whose output is the sufficient statistic. Using a threshold device and the sufficient ..."
2018-01-01
An Automated Energy Detection Algorithm Based on Morphological Filter ... (ARL-TR-8271). US Army Research Laboratory; 2018 Jan. Abstract fragment: "... statistical moments of order 2, 3, and 4. The probability density function (PDF) of the vibrational time series of a good bearing has a Gaussian ..."
NASA Astrophysics Data System (ADS)
Demirkaya, Omer
2001-07-01
This study investigates the efficacy of filtering two-dimensional (2D) projection images of computed tomography (CT) by nonlinear diffusion filtering to remove statistical noise prior to reconstruction. The projection images of the Shepp-Logan head phantom were degraded by Gaussian noise. The variance of the Gaussian distribution was adaptively changed depending on the intensity at a given pixel in the projection image. The corrupted projection images were then filtered using the nonlinear anisotropic diffusion filter. The filtered projections as well as the original noisy projections were reconstructed using filtered backprojection (FBP) with a Ram-Lak filter and/or Hanning window. The ensemble variance was computed for each pixel on a slice. The nonlinear filtering of the projection images improved the SNR substantially, on the order of fourfold, in these synthetic images. The comparison of intensity profiles across a cross-sectional slice indicated that the filtering did not result in any significant loss of image resolution.
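The nonlinear diffusion filter referred to above is typically of Perona-Malik type; a minimal 2D sketch is given below, in which the conduction coefficient shuts diffusion off across strong gradients so edges are preserved while flat regions are smoothed. The edge-stopping function, kappa, time step and iteration count are illustrative and assume intensities roughly in [0, 1].

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, dt=0.2):
    """Perona-Malik nonlinear diffusion: smooths within regions while the
    conduction coefficient shuts diffusion off across strong gradients."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conduction coefficients (exponential edge-stopping function)
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```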
Shadow Probability of Detection and False Alarm for Median-Filtered SAR Imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raynal, Ann Marie; Doerry, Armin Walter; Miller, John A.
2014-06-01
Median filtering reduces speckle in synthetic aperture radar (SAR) imagery while preserving edges, at the expense of coarsening the resolution, by replacing the center pixel of a sliding window by the median value. For shadow detection, this approach helps distinguish shadows from clutter more easily, while preserving shadow shape delineations. However, the nonlinear operation alters the shadow and clutter distributions and statistics, which must be taken into consideration when computing probability of detection and false alarm metrics. Depending on system parameters, median filtering can improve probability of detection and false alarm by orders of magnitude. Herein, we examine shadow probability of detection and false alarm in a homogeneous, ideal clutter background after median filter post-processing. Some comments on multi-look processing effects with and without median filtering are also made.
Linear theory for filtering nonlinear multiscale systems with model error
Berry, Tyrus; Harlim, John
2014-01-01
In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering procedure, simultaneously produce accurate filtering and equilibrium statistical prediction. In contrast, an offline estimation technique based on a linear regression, which fits the parameters to a training dataset without using the filter, yields filter estimates which are worse than the observations or even divergent when the slow variables are not fully observed. This finding does not imply that all offline methods are inherently inferior to the online method for nonlinear estimation problems; it only suggests that an ideal estimation technique should estimate all parameters simultaneously whether it is online or offline.
Unscented Kalman Filter for Brain-Machine Interfaces
Li, Zheng; O'Doherty, Joseph E.; Hanson, Timothy L.; Lebedev, Mikhail A.; Henriquez, Craig S.; Nicolelis, Miguel A. L.
2009-01-01
Brain machine interfaces (BMIs) are devices that convert neural signals into commands to directly control artificial actuators, such as limb prostheses. Previous real-time methods applied to decoding behavioral commands from the activity of populations of neurons have generally relied upon linear models of neural tuning and were limited in the way they used the abundant statistical information contained in the movement profiles of motor tasks. Here, we propose an n-th order unscented Kalman filter which implements two key features: (1) use of a non-linear (quadratic) model of neural tuning which describes neural activity significantly better than commonly-used linear tuning models, and (2) augmentation of the movement state variables with a history of n-1 recent states, which improves prediction of the desired command even before incorporating neural activity information and allows the tuning model to capture relationships between neural activity and movement at multiple time offsets simultaneously. This new filter was tested in BMI experiments in which rhesus monkeys used their cortical activity, recorded through chronically implanted multielectrode arrays, to directly control computer cursors. The 10th order unscented Kalman filter outperformed the standard Kalman filter and the Wiener filter in both off-line reconstruction of movement trajectories and real-time, closed-loop BMI operation.
Alternatives to an extended Kalman Filter for target image tracking
NASA Astrophysics Data System (ADS)
Leuthauser, P. R.
1981-12-01
Four alternative filters are compared to an extended Kalman filter (EKF) algorithm for tracking a distributed (elliptical) source target in a closed-loop tracking problem, using outputs from a forward-looking infrared (FLIR) sensor as measurements. These were (1) an EKF with a (second-order) bias correction term, (2) a constant-gain EKF, (3) a constant-gain EKF with bias correction term, and (4) a statistically linearized filter. Estimates are made both of actual target motion and of apparent motion due to atmospheric jitter. These alternative designs are considered specifically to address some of the significant biases exhibited by an EKF due to initial acquisition difficulties, unmodelled maneuvering by the target, low signal-to-noise ratio, and real-world conditions varying significantly from those assumed in the filter design (robustness). Filter performance was determined with a Monte Carlo study under both ideal and non-ideal conditions for tracking targets on a constant-velocity cross-range path and during constant-acceleration turns of 5G, 10G, and 20G.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abrecht, David G.; Schwantes, Jon M.; Kukkadapu, Ravi K.
2015-02-01
Spectrum-processing software that incorporates a gaussian smoothing kernel within the statistics of first-order Kalman filtration has been developed to provide cross-channel spectral noise reduction for increased real-time signal-to-noise ratios for Mossbauer spectroscopy. The filter was optimized for the breadth of the gaussian using the Mossbauer spectrum of natural iron foil, and comparisons between the peak broadening, signal-to-noise ratios, and shifts in the calculated hyperfine parameters are presented. The results of optimization give a maximum improvement in the signal-to-noise ratio of 51.1% over the unfiltered spectrum at a gaussian breadth of 27 channels, or 2.5% of the total spectrum width. The full-width half-maximum of the spectrum peaks showed an increase of 19.6% at this optimum point, indicating a relatively weak increase in the peak broadening relative to the signal enhancement, leading to an overall increase in the observable signal. Calculations of the hyperfine parameters showed no statistically significant deviations were introduced from the application of the filter, confirming the utility of this filter for spectroscopy applications.
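For orientation, cross-channel Gaussian smoothing of a counting spectrum can be sketched as below; the channel counts are simulated, the kernel breadth loosely follows the 27-channel figure quoted above, and this is not the authors' Kalman-based implementation.

    import numpy as np

    def gaussian_smooth(counts, sigma_channels):
        # Convolve a 1-D spectrum with a normalized Gaussian kernel.
        half = int(4 * sigma_channels)
        x = np.arange(-half, half + 1)
        kernel = np.exp(-0.5 * (x / sigma_channels) ** 2)
        kernel /= kernel.sum()
        return np.convolve(counts, kernel, mode="same")

    spectrum = np.random.poisson(1000, size=1024).astype(float)      # simulated raw channel counts
    smoothed = gaussian_smooth(spectrum, sigma_channels=27 / 2.355)  # treating the 27-channel breadth loosely as a FWHM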
Removal of impulse noise clusters from color images with local order statistics
NASA Astrophysics Data System (ADS)
Ruchay, Alexey; Kober, Vitaly
2017-09-01
This paper proposes a novel algorithm for restoring images corrupted with clusters of impulse noise. The noise clusters often occur when the probability of impulse noise is very high. The proposed noise removal algorithm consists of detection of bulky impulse noise in three color channels with local order statistics followed by removal of the detected clusters by means of vector median filtering. With the help of computer simulation we show that the proposed algorithm is able to effectively remove clustered impulse noise. The performance of the proposed algorithm is compared in terms of image restoration metrics with that of common successful algorithms.
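A compact sketch of the vector median operation used above to replace detected clusters (illustrative only; the cluster-detection step with local order statistics is omitted):

    import numpy as np

    def vector_median(window):
        # Return the pixel in an (N, 3) color window minimizing the sum of distances to all others.
        d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=-1)
        return window[np.argmin(d.sum(axis=1))]

    patch = np.random.randint(0, 256, size=(9, 3))   # hypothetical 3x3 RGB neighborhood
    replacement = vector_median(patch)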
Segmentation-based L-filtering of speckle noise in ultrasonic images
NASA Astrophysics Data System (ADS)
Kofidis, Eleftherios; Theodoridis, Sergios; Kotropoulos, Constantine L.; Pitas, Ioannis
1994-05-01
We introduce segmentation-based L-filters, that is, filtering processes combining segmentation and (nonadaptive) optimum L-filtering, and use them for the suppression of speckle noise in ultrasonic (US) images. With the aid of a suitable modification of the learning vector quantizer self-organizing neural network, the image is segmented into regions of approximately homogeneous first-order statistics. For each such region a minimum mean-squared error L-filter is designed on the basis of a multiplicative noise model by using the histogram of grey values as an estimate of the parent distribution of the noisy observations and a suitable estimate of the original signal in the corresponding region. Thus, we obtain a bank of L-filters corresponding to and operating on different image regions. Simulation results on a simulated US B-mode image of a tissue-mimicking phantom are presented which verify the superiority of the proposed method as compared to a number of conventional filtering strategies in terms of a suitably defined signal-to-noise ratio measure and detection-theoretic performance measures.
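An L-filter output is a weighted sum of the order statistics of the samples in a window; a minimal sketch of applying one such filter follows (the weights below are placeholders, not the per-region MMSE designs described above):

    import numpy as np

    def l_filter(window, weights):
        # Weighted sum of the sorted window samples.
        return np.dot(np.sort(window, axis=None), weights)

    window = np.random.rand(3, 3)        # hypothetical 3x3 neighborhood
    weights = np.ones(9) / 9.0           # placeholder weights (this particular choice reduces to the mean)
    out = l_filter(window, weights)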
NASA Technical Reports Server (NTRS)
Wallace, G. R.; Weathers, G. D.; Graf, E. R.
1973-01-01
The statistics of filtered pseudorandom digital sequences called hybrid-sum sequences, formed from the modulo-two sum of several maximum-length sequences, are analyzed. The results indicate that a relation exists between the statistics of the filtered sequence and the characteristic polynomials of the component maximum length sequences. An analysis procedure is developed for identifying a large group of sequences with good statistical properties for applications requiring the generation of analog pseudorandom noise. By use of the analysis approach, the filtering process is approximated by the convolution of the sequence with a sum of unit step functions. A parameter reflecting the overall statistical properties of filtered pseudorandom sequences is derived. This parameter is called the statistical quality factor. A computer algorithm to calculate the statistical quality factor for the filtered sequences is presented, and the results for two examples of sequence combinations are included. The analysis reveals that the statistics of the signals generated with the hybrid-sum generator are potentially superior to the statistics of signals generated with maximum-length generators. Furthermore, fewer calculations are required to evaluate the statistics of a large group of hybrid-sum generators than are required to evaluate the statistics of the same size group of approximately equivalent maximum-length sequences.
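A minimal sketch of forming a hybrid-sum sequence as the modulo-two sum of two maximum-length (LFSR) sequences; the tap sets are illustrative degree-5 primitive-polynomial choices, and the filtering and statistical-quality-factor steps described above are omitted.

    def lfsr_sequence(taps, state, length):
        # Fibonacci LFSR: output the last stage, feed back the XOR of the tapped stages.
        out = []
        for _ in range(length):
            out.append(state[-1])
            fb = 0
            for t in taps:
                fb ^= state[t - 1]
            state = [fb] + state[:-1]
        return out

    seq_a = lfsr_sequence([5, 2], [1, 0, 0, 0, 1], 31)   # maximal-length sequence, period 31
    seq_b = lfsr_sequence([5, 3], [0, 1, 1, 0, 1], 31)   # maximal-length sequence, period 31
    hybrid = [a ^ b for a, b in zip(seq_a, seq_b)]       # modulo-two (XOR) sum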
Space Object Maneuver Detection Algorithms Using TLE Data
NASA Astrophysics Data System (ADS)
Pittelkau, M.
2016-09-01
An important aspect of Space Situational Awareness (SSA) is detection of deliberate and accidental orbit changes of space objects. Although space surveillance systems detect orbit maneuvers within their tracking algorithms, maneuver data are not readily disseminated for general use. However, two-line element (TLE) data is available and can be used to detect maneuvers of space objects. This work is an attempt to improve upon existing TLE-based maneuver detection algorithms. Three adaptive maneuver detection algorithms are developed and evaluated: The first is a fading-memory Kalman filter, which is equivalent to the sliding-window least-squares polynomial fit, but computationally more efficient and adaptive to the noise in the TLE data. The second algorithm is based on a sample cumulative distribution function (CDF) computed from a histogram of the magnitude-squared |ΔV|² of change-in-velocity vectors (ΔV), which is computed from the TLE data. A maneuver detection threshold is computed from the median estimated from the CDF, or from the CDF and a specified probability of false alarm. The third algorithm is a median filter. The median filter is the simplest of a class of nonlinear filters called order statistics filters, which is within the theory of robust statistics. The output of the median filter is practically insensitive to outliers, or large maneuvers. The median of the |ΔV|² data is proportional to the variance of the ΔV, so the variance is estimated from the output of the median filter. A maneuver is detected when the input data exceeds a constant times the estimated variance.
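A minimal sketch of the median-filter detector: the running median of the |ΔV|² series tracks the noise level, and samples exceeding a constant times that level are flagged. The data, window length, and threshold constant below are hypothetical.

    import numpy as np
    from scipy.signal import medfilt

    def detect_maneuvers(dv, k=10.0, window=11):
        # Flag |dV|^2 samples exceeding k times a robust, median-based noise level.
        dv2 = np.sum(np.asarray(dv) ** 2, axis=1)
        level = medfilt(dv2, kernel_size=window)
        return dv2 > k * level

    dv = 0.001 * np.random.randn(200, 3)        # hypothetical dV series (km/s)
    dv[120] += np.array([0.05, 0.0, 0.0])       # injected maneuver
    flags = detect_maneuvers(dv)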
NASA Astrophysics Data System (ADS)
Feng, Ke; Wang, KeSheng; Zhang, Mian; Ni, Qing; Zuo, Ming J.
2017-03-01
The planetary gearbox, due to its unique mechanical structures, is an important rotating machine for transmission systems. Its engineering applications are often in non-stationary operational conditions, such as helicopters, wind energy systems, etc. The unique physical structures and working conditions make the vibrations measured from planetary gearboxes exhibit a complex time-varying modulation and therefore yield complicated spectral structures. As a result, traditional signal processing methods, such as Fourier analysis, and the selection of characteristic fault frequencies for diagnosis face serious challenges. To overcome this drawback, this paper proposes a signal selection scheme for fault-emphasized diagnostics based upon two order tracking techniques. The basic procedures for the proposed scheme are as follows. (1) Computed order tracking is applied to reveal the order contents and identify the order(s) of interest. (2) Vold-Kalman filter order tracking is used to extract the order(s) of interest; these filtered order(s) constitute the so-called selected vibrations. (3) Time domain statistic indicators are applied to the selected vibrations for fault-information-emphasized diagnostics. The proposed scheme is explained and demonstrated in a signal simulation model and experimental studies, and the method proves effective for planetary gearbox fault diagnosis.
Blended particle filters for large-dimensional chaotic dynamical systems
Majda, Andrew J.; Qi, Di; Sapsis, Themistoklis P.
2014-01-01
A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below. PMID:24825886
Statistical characterization of short wind waves from stereo images of the sea surface
NASA Astrophysics Data System (ADS)
Mironov, Alexey; Yurovskaya, Maria; Dulov, Vladimir; Hauser, Danièle; Guérin, Charles-Antoine
2013-04-01
We propose a methodology to extract short-scale statistical characteristics of the sea surface topography by means of stereo image reconstruction. The possibilities and limitations of the technique are discussed and tested on a data set acquired from an oceanographic platform at the Black Sea. The analysis shows that reconstruction of the topography based on the stereo method is an efficient way to derive non-trivial statistical properties of surface short- and intermediate-waves (say from 1 centimeter to 1 meter). Most technical issues pertaining to this type of dataset (limited range of scales, lacunarity of data or irregular sampling) can be partially overcome by appropriate processing of the available points. The proposed technique also allows one to avoid linear interpolation, which dramatically corrupts properties of retrieved surfaces. The processing technique imposes that the field of elevation be polynomially detrended, which has the effect of filtering out the large scales. Hence the statistical analysis can only address the small-scale components of the sea surface. The precise cut-off wavelength, which is approximately half the patch size, can be obtained by applying a high-pass frequency filter on the reference gauge time records. The results obtained for the one- and two-point statistics of small-scale elevations are shown to be consistent, at least in order of magnitude, with the corresponding gauge measurements as well as other experimental measurements available in the literature. The calculation of the structure functions provides a powerful tool to investigate spectral and statistical properties of the field of elevations. Experimental parametrization of the third-order structure function, the so-called skewness function, is one of the most important and original outcomes of this study. This function is of primary importance in analytical scattering models from the sea surface and was up to now unavailable in field conditions. Due to the lack of precise reference measurements for the small-scale wave field, we could not quantify exactly the accuracy of the retrieval technique. However, it appeared clearly that the obtained accuracy is good enough for the estimation of second-order statistical quantities (such as the correlation function), acceptable for third-order quantities (such as the skewness function) and insufficient for fourth-order quantities (such as kurtosis). Therefore, the stereo technique at the present stage should not be thought of as a self-contained universal tool to characterize the surface statistics. Instead, it should be used in conjunction with other well calibrated but sparse reference measurements (such as wave gauges) for cross-validation and calibration. It then completes the statistical analysis in as much as it provides a snapshot of the three-dimensional field and allows for the evaluation of higher-order spatial statistics.
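For reference, the n-th order structure function of a detrended elevation record can be estimated along one dimension as in the sketch below (a simplified 1-D illustration with synthetic data; the study above works with two-point statistics of the reconstructed 3-D field):

    import numpy as np

    def structure_function(eta, lag, order):
        # Estimate S_n(r) = <[eta(x + r) - eta(x)]^n> for a 1-D elevation profile.
        diff = eta[lag:] - eta[:-lag]
        return np.mean(diff ** order)

    eta = np.random.randn(5000)                            # synthetic detrended elevations
    lags = np.arange(1, 100)
    s2 = [structure_function(eta, r, 2) for r in lags]     # second order (correlation-related)
    s3 = [structure_function(eta, r, 3) for r in lags]     # third order (skewness function)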
Effects of Non-Normal Outlier-Prone Error Distribution on Kalman Filter Track
1991-09-01
Only fragmentary excerpts of this report are available. They indicate that Kalman filter tracking performance under non-normal, outlier-prone error distributions is compared using the GST (Generic Statistical Tracker), which employs four target motion models, including a second-order Gauss-Markov target and a random-tour target.
NASA Technical Reports Server (NTRS)
Manning, Robert M.
1991-01-01
The dynamic and composite nature of propagation impairments that are incurred on Earth-space communications links at frequencies in and above 30/20 GHz Ka band, i.e., rain attenuation, cloud and/or clear air scintillation, etc., combined with the need to counter such degradations after the small link margins have been exceeded, necessitate the use of dynamic statistical identification and prediction processing of the fading signal in order to optimally estimate and predict the levels of each of the deleterious attenuation components. Such requirements are being met in NASA's Advanced Communications Technology Satellite (ACTS) Project by the implementation of optimal processing schemes derived through the use of the Rain Attenuation Prediction Model and nonlinear Markov filtering theory.
Applications of Kalman filtering to real-time trace gas concentration measurements
NASA Technical Reports Server (NTRS)
Leleux, D. P.; Claps, R.; Chen, W.; Tittel, F. K.; Harman, T. L.
2002-01-01
A Kalman filtering technique is applied to the simultaneous detection of NH3 and CO2 with a diode-laser-based sensor operating at 1.53 micrometers. This technique is developed for improving the sensitivity and precision of trace gas concentration levels based on direct overtone laser absorption spectroscopy in the presence of various sensor noise sources. Filter performance is demonstrated to be adaptive to real-time noise and data statistics. Additionally, filter operation is successfully performed with dynamic ranges differing by three orders of magnitude. Details of Kalman filter theory applied to the acquired spectroscopic data are discussed. The effectiveness of this technique is evaluated by performing NH3 and CO2 concentration measurements and utilizing it to monitor varying ammonia and carbon dioxide levels in a bioreactor for water reprocessing, located at the NASA-Johnson Space Center. Results indicate a sensitivity enhancement of six times, in terms of improved minimum detectable absorption by the gas sensor.
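A minimal scalar Kalman filter of the kind used to smooth a noisy concentration series, assuming a random-walk state model; the noise variances are placeholders rather than the sensor's actual statistics.

    import numpy as np

    def kalman_smooth(z, q=1e-4, r=1e-2):
        # Track a slowly varying concentration x_k = x_{k-1} + w_k from measurements z_k = x_k + v_k.
        x, p = z[0], 1.0
        out = []
        for zk in z:
            p = p + q                    # predict (random-walk process noise)
            k = p / (p + r)              # Kalman gain
            x = x + k * (zk - x)         # measurement update
            p = (1.0 - k) * p
            out.append(x)
        return np.array(out)

    truth = 1.0 + 0.2 * np.sin(np.linspace(0, 6, 500))          # hypothetical ppm-level signal
    estimate = kalman_smooth(truth + 0.1 * np.random.randn(500))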
Building Hybrid Rover Models for NASA: Lessons Learned
NASA Technical Reports Server (NTRS)
Willeke, Thomas; Dearden, Richard
2004-01-01
Particle filters have recently become popular for diagnosis and monitoring of hybrid systems. In this paper we describe our experiences using particle filters on a real diagnosis problem, the NASA Ames Research Center's K-9 rover. As well as the challenge of modelling the dynamics of the system, there are two major issues in applying a particle filter to such a model. The first is the asynchronous nature of the system: observations from different subsystems arrive at different rates, and occasionally out of order, leading to large amounts of uncertainty in the state of the system. The second issue is data interpretation. The particle filter produces a probability distribution over the state of the system, from which summary statistics that can be used for control or higher-level diagnosis must be extracted. We describe our approaches to both these problems, as well as other modelling issues that arose in this domain.
NASA Astrophysics Data System (ADS)
Havens, Timothy C.; Cummings, Ian; Botts, Jonathan; Summers, Jason E.
2017-05-01
The linear ordered statistic (LOS) is a parameterized ordered statistic (OS) that is a weighted average of a rank-ordered sample. LOS operators are useful generalizations of aggregation as they can represent any linear aggregation, from minimum to maximum, including conventional aggregations, such as mean and median. In the fuzzy logic field, these aggregations are called ordered weighted averages (OWAs). Here, we present a method for learning LOS operators from training data, viz., data for which you know the output of the desired LOS. We then extend the learning process with regularization, such that a lower complexity or sparse LOS can be learned. Hence, we discuss what 'lower complexity' means in this context and how to represent that in the optimization procedure. Finally, we apply our learning methods to the well-known constant-false-alarm-rate (CFAR) detection problem, specifically for the case of background levels modeled by long-tailed distributions, such as the K-distribution. These backgrounds arise in several pertinent imaging problems, including the modeling of clutter in synthetic aperture radar and sonar (SAR and SAS) and in wireless communications.
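A minimal sketch of learning LOS/OWA weights from training data: rank-order each sample and fit nonnegative weights by least squares. The sum-to-one constraint and the regularization discussed above are simplified here to a final renormalization.

    import numpy as np
    from scipy.optimize import nnls

    def learn_los_weights(X, y):
        # Fit nonnegative weights w so that sorted(x) . w approximates the target aggregation y.
        Xs = np.sort(X, axis=1)[:, ::-1].copy()   # rank-order each sample, largest first
        w, _ = nnls(Xs, y)
        return w / w.sum()

    X = np.random.rand(500, 7)            # hypothetical training windows
    y = np.median(X, axis=1)              # target aggregation (here, the median)
    w = learn_los_weights(X, y)           # weight should concentrate on the middle rank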
Robust Lee local statistic filter for removal of mixed multiplicative and impulse noise
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Astola, Jaakko T.
2004-05-01
A robust version of the Lee local statistic filter, able to effectively suppress mixed multiplicative and impulse noise in images, is proposed. The performance of the proposed modification is studied for a set of test images, several values of multiplicative noise variance, Gaussian and Rayleigh probability density functions of speckle, and different characteristics of impulse noise. The advantages of the designed filter in comparison to the conventional Lee local statistic filter and some other filters able to cope with mixed multiplicative + impulse noise are demonstrated.
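For context, one common formulation of the standard (non-robust) Lee local statistic filter, on which the robust version above builds, can be sketched as follows; the window size and multiplicative-noise variance are assumed values.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(img, size=7, noise_var=0.05):
        # x_hat = local mean + W * (x - local mean), with W from local statistics.
        mean = uniform_filter(img, size)
        var = np.maximum(uniform_filter(img ** 2, size) - mean ** 2, 0.0)
        w = var / (var + noise_var * mean ** 2)   # multiplicative-noise weighting
        return mean + w * (img - mean)

    noisy = np.random.rand(128, 128) * (1 + 0.2 * np.random.randn(128, 128))  # toy speckled image
    filtered = lee_filter(noisy)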
Application of higher-order cepstral techniques in problems of fetal heart signal extraction
NASA Astrophysics Data System (ADS)
Sabry-Rizk, Madiha; Zgallai, Walid; Hardiman, P.; O'Riordan, J.
1996-10-01
Recently, cepstral analysis based on second-order statistics and homomorphic filtering techniques has been used in the adaptive decomposition of overlapping (or otherwise) and noise-contaminated maternal and fetal ECG complexes obtained by transabdominal surface electrodes connected to a monitoring instrument, an interface card, and a PC. Differential time delays of fetal heart beats, measured from a reference point located on the mother complex after transformation to cepstral domains, are first obtained, and this is followed by fetal heart rate variability computations. Homomorphic filtering in the complex cepstral domain and the subsequent transformation to the time domain results in fetal complex recovery. However, three problems have been identified with second-order based cepstral techniques that needed rectification in this paper. These are: (1) errors resulting from the phase unwrapping algorithms and leading to fetal complex perturbation; (2) the unavoidable conversion of noise statistics from Gaussianity to non-Gaussianity due to the highly non-linear nature of the homomorphic transform, which warrants stringent noise cancellation routines; (3) due to the aforementioned problems in (1) and (2), it is difficult to adaptively optimize windows to include all individual fetal complexes in the time domain based on amplitude thresholding routines in the complex cepstral domain (i.e., the task of 'zooming' in on weak fetal complexes requires more processing time). The use of a third-order based high-resolution differential cepstrum technique results in recovery of delays of the order of 120 milliseconds.
Higher-order scene statistics of breast images
NASA Astrophysics Data System (ADS)
Abbey, Craig K.; Sohl-Dickstein, Jascha N.; Olshausen, Bruno A.; Eckstein, Miguel P.; Boone, John M.
2009-02-01
Researchers studying human and computer vision have found description and construction of these systems greatly aided by analysis of the statistical properties of naturally occurring scenes. More specifically, it has been found that receptive fields with directional selectivity and bandwidth properties similar to mammalian visual systems are more closely matched to the statistics of natural scenes. It is argued that this allows for sparse representation of the independent components of natural images [Olshausen and Field, Nature, 1996]. These theories have important implications for medical image perception. For example, will a system that is designed to represent the independent components of natural scenes, where objects occlude one another and illumination is typically reflected, be appropriate for X-ray imaging, where features superimpose on one another and illumination is transmissive? In this research we begin to examine these issues by evaluating higher-order statistical properties of breast images from X-ray projection mammography (PM) and dedicated breast computed tomography (bCT). We evaluate kurtosis in responses of octave bandwidth Gabor filters applied to PM and to coronal slices of bCT scans. We find that kurtosis in PM rises and quickly saturates for filter center frequencies with an average value above 0.95. By contrast, kurtosis in bCT peaks near 0.20 cyc/mm with kurtosis of approximately 2. Our findings suggest that the human visual system may be tuned to represent breast tissue more effectively in bCT over a specific range of spatial frequencies.
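A minimal sketch of the kind of measurement described above: filter an image with a Gabor kernel and compute the kurtosis of the responses. The kernel parameters and test image are illustrative, not the octave-bandwidth bank used in the study.

    import numpy as np
    from scipy.signal import fftconvolve
    from scipy.stats import kurtosis

    def gabor_kernel(freq, sigma, theta=0.0, size=31):
        # Real (even) Gabor kernel with center frequency freq in cycles/pixel.
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * xr)

    image = np.random.rand(256, 256)                          # placeholder image
    response = fftconvolve(image, gabor_kernel(0.1, 5.0), mode="valid")
    k = kurtosis(response, axis=None)                         # excess kurtosis of the filter responses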
NASA Astrophysics Data System (ADS)
Pape, Dennis R.
1990-09-01
The present conference discusses topics in optical image processing, optical signal processing, acoustooptic spectrum analyzer systems and components, and optical computing. Attention is given to tradeoffs in nonlinearly recorded matched filters, miniature spatial light modulators, detection and classification using higher-order statistics of optical matched filters, rapid traversal of an image data base using binary synthetic discriminant filters, wideband signal processing for emitter location, an acoustooptic processor for autonomous SAR guidance, and sampling of Fresnel transforms. Also discussed are an acoustooptic RF signal-acquisition system, scanning acoustooptic spectrum analyzers, the effects of aberrations on acoustooptic systems, fast optical digital arithmetic processors, information utilization in analog and digital processing, optical processors for smart structures, and a self-organizing neural network for unsupervised learning.
An Independent Filter for Gene Set Testing Based on Spectral Enrichment.
Frost, H Robert; Li, Zhigang; Asselbergs, Folkert W; Moore, Jason H
2015-01-01
Gene set testing has become an indispensable tool for the analysis of high-dimensional genomic data. An important motivation for testing gene sets, rather than individual genomic variables, is to improve statistical power by reducing the number of tested hypotheses. Given the dramatic growth in common gene set collections, however, testing is often performed with nearly as many gene sets as underlying genomic variables. To address the challenge to statistical power posed by large gene set collections, we have developed spectral gene set filtering (SGSF), a novel technique for independent filtering of gene set collections prior to gene set testing. The SGSF method uses as a filter statistic the p-value measuring the statistical significance of the association between each gene set and the sample principal components (PCs), taking into account the significance of the associated eigenvalues. Because this filter statistic is independent of standard gene set test statistics under the null hypothesis but dependent under the alternative, the proportion of enriched gene sets is increased without impacting the type I error rate. As shown using simulated and real gene expression data, the SGSF algorithm accurately filters gene sets unrelated to the experimental outcome resulting in significantly increased gene set testing power.
Multi-scale statistical analysis of coronal solar activity
Gamborino, Diana; del-Castillo-Negrete, Diego; Martinell, Julio J.
2016-07-08
Multi-filter images from the solar corona are used to obtain temperature maps that are analyzed using techniques based on proper orthogonal decomposition (POD) in order to extract dynamical and structural information at various scales. Exploring active regions before and after a solar flare and comparing them with quiet regions, we show that the multi-scale behavior presents distinct statistical properties for each case that can be used to characterize the level of activity in a region. Information about the nature of heat transport can also be extracted from the analysis.
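A minimal sketch of proper orthogonal decomposition of a sequence of temperature maps via the SVD of the snapshot matrix (the data and truncation level are hypothetical):

    import numpy as np

    def pod_modes(snapshots, n_modes=5):
        # snapshots: (n_pixels, n_times). Returns spatial modes, singular values, temporal coefficients.
        mean = snapshots.mean(axis=1, keepdims=True)
        u, s, vt = np.linalg.svd(snapshots - mean, full_matrices=False)
        return u[:, :n_modes], s[:n_modes], vt[:n_modes, :]

    maps = np.random.rand(64 * 64, 200)           # hypothetical flattened temperature maps
    modes, energies, coeffs = pod_modes(maps)     # leading spatial structures and their time behavior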
NASA Technical Reports Server (NTRS)
Melott, A. L.; Buchert, T.; Weiß, A. G.
1995-01-01
We present results showing an improvement of the accuracy of perturbation theory as applied to cosmological structure formation for a useful range of scales. The Lagrangian theory of gravitational instability of Friedmann-Lemaitre cosmogonies is compared with numerical simulations. We study the dynamics of hierarchical models as a second step. In the first step we analyzed the performance of the Lagrangian schemes for pancake models, the difference being that in the latter models the initial power spectrum is truncated. This work probed the quasi-linear and weakly non-linear regimes. We here explore whether the results found for pancake models carry over to hierarchical models which are evolved deeply into the non-linear regime. We smooth the initial data by using a variety of filter types and filter scales in order to determine the optimal performance of the analytical models, as has been done for the 'Zel'dovich-approximation' - hereafter TZA - in previous work. We find that for spectra with negative power-index the second-order scheme performs considerably better than TZA in terms of statistics which probe the dynamics, and slightly better in terms of low-order statistics like the power-spectrum. However, in contrast to the results found for pancake models, where the higher-order schemes get worse than TZA at late non-linear stages and on small scales, we here find that the second-order model is as robust as TZA, retaining the improvement at later stages and on smaller scales. In view of these results we expect that the second-order truncated Lagrangian model is especially useful for the modelling of standard dark matter models such as Hot-, Cold-, and Mixed-Dark-Matter.
Adaptive filtering in biological signal processing.
Iyer, V K; Ploysongsang, Y; Ramamoorthy, P A
1990-01-01
The high dependence of conventional optimal filtering methods on the a priori knowledge of the signal and noise statistics renders them ineffective in dealing with signals whose statistics cannot be predetermined accurately. Adaptive filtering methods offer a better alternative, since the a priori knowledge of statistics is less critical, real time processing is possible, and the computations are less expensive for this approach. Adaptive filtering methods compute the filter coefficients "on-line", converging to the optimal values in the least-mean square (LMS) error sense. Adaptive filtering is therefore apt for dealing with the "unknown" statistics situation and has been applied extensively in areas like communication, speech, radar, sonar, seismology, and biological signal processing and analysis for channel equalization, interference and echo canceling, line enhancement, signal detection, system identification, spectral analysis, beamforming, modeling, control, etc. In this review article adaptive filtering in the context of biological signals is reviewed. An intuitive approach to the underlying theory of adaptive filters and its applicability are presented. Applications of the principles in biological signal processing are discussed in a manner that brings out the key ideas involved. Current and potential future directions in adaptive biological signal processing are also discussed.
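The LMS update at the heart of such adaptive filters takes only a few lines; a minimal noise-cancellation sketch with an illustrative step size and filter length:

    import numpy as np

    def lms_filter(d, x, n_taps=16, mu=0.01):
        # Adapt FIR weights w so that w . (recent x samples) tracks the desired signal d.
        w = np.zeros(n_taps)
        y = np.zeros(len(d))
        for k in range(n_taps, len(d)):
            xk = x[k - n_taps:k][::-1]     # most recent reference samples, newest first
            y[k] = w @ xk
            e = d[k] - y[k]                # the error drives the weight update
            w += 2 * mu * e * xk
        return y, w

    ref = np.random.randn(2000)                                                    # reference input
    desired = np.convolve(ref, [0.5, -0.3, 0.2], mode="same") + 0.1 * np.random.randn(2000)
    y, w = lms_filter(desired, ref)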
Vegas-Sanchez-Ferrero, G; Aja-Fernandez, S; Martin-Fernandez, M; Frangi, A F; Palencia, C
2010-01-01
A novel anisotropic diffusion filter is proposed in this work with application to cardiac ultrasonic images. It includes probabilistic models which describe the probability density function (PDF) of tissues and adapts the diffusion tensor to the image iteratively. For this purpose, a preliminary study is performed in order to select the probability models that best fit the statistical behavior of each tissue class in cardiac ultrasonic images. Then, the parameters of the diffusion tensor are defined taking into account the statistical properties of the image at each voxel. When the structure tensor of the probability of belonging to each tissue is included in the diffusion tensor definition, better boundary estimates can be obtained than by calculating the boundaries directly from the image. This is the main contribution of this work. Additionally, the proposed method follows the statistical properties of the image in each iteration. This is considered a second contribution, since state-of-the-art methods suppose that the noise or statistical properties of the image do not change during the filtering process.
Filter Tuning Using the Chi-Squared Statistic
NASA Technical Reports Server (NTRS)
Lilly-Salkowski, Tyler B.
2017-01-01
This paper examines the use of the Chi-square statistic as a means of evaluating filter performance. The goal of the process is to characterize the filter performance in the metric of covariance realism. The Chi-squared statistic is the value calculated to determine the realism of a covariance based on the prediction accuracy and the covariance values at a given point in time. Once calculated, it is the distribution of this statistic that provides insight into the accuracy of the covariance. The process of tuning an Extended Kalman Filter (EKF) for Aqua and Aura support is described, including examination of the measurement errors of available observation types, and methods of dealing with potentially volatile atmospheric drag modeling. Predictive accuracy and the distribution of the Chi-squared statistic, calculated from EKF solutions, are assessed.
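The statistic itself is simple to compute from a state prediction error and the corresponding filter covariance; a minimal sketch with hypothetical numbers:

    import numpy as np

    def chi_squared(error, covariance):
        # epsilon = e^T P^{-1} e; for a realistic covariance, epsilon follows a chi-square with len(e) dof.
        return float(error @ np.linalg.solve(covariance, error))

    e = np.array([120.0, -40.0, 15.0])          # hypothetical position error (m)
    P = np.diag([1.0e4, 4.0e3, 1.0e3])          # predicted covariance (m^2)
    stat = chi_squared(e, P)                    # compare its distribution to chi-square with 3 dof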
Frequency dependence of sensitivities in second-order RC active filters
NASA Astrophysics Data System (ADS)
Kunieda, T.; Hiramatsu, Y.; Fukui, A.
1980-02-01
This paper shows that the gain and phase sensitivities to an element in biquadratic filters approximately constitute a circle on the complex sensitivity plane, provided that the quality factor Q of the circuit is appreciably larger than unity. Moreover, the group delay sensitivity is represented by the imaginary part of a cardioid. Using these results, bounds on the maximum values of the gain, phase, and group delay sensitivities are obtained. Further, it is proved that the maximum values of these sensitivities can be simultaneously minimized by minimizing the absolute value of the transfer function sensitivity at the center frequency, provided that the ω0-sensitivities are constant and do not contain design parameters. Next, a statistical variability measure for the optimal-filter design is proposed. Finally, the relation between several variability measures proposed to date is clarified.
In vitro evaluation of clot capture efficiency of an absorbable vena cava filter.
Dria, Stephen J; Eggers, Mitchell D
2016-10-01
The purpose of this study was to determine the in vitro clot capture efficiency (CCE) of an investigational absorbable inferior vena cava filter (IVCF) vs the Greenfield IVCF. Investigational absorbable and Greenfield filters were challenged with polyacrylamide clot surrogates ranging from 3 × 5 to 10 × 24 mm (diameter × length) in a flow loop simulating the venous system. Filters were challenged with clots until a CCE standard error of 5% or less was achieved under binomial statistics. Pressure gradients across the filters were measured for the largest size clot, enabling calculation of forces on the filter. The in vitro CCE of the absorbable IVCF was statistically similar to that of the Greenfield filter for all clot sizes apart from the 3 × 10-mm clot, for which there was a statistically significant difference between filter CCEs (absorbable filter, 59%; Greenfield filter, 31%; P = .0001). CCE ranged from an average 32% for the 3 × 5-mm clot to 100% for 7 × 10-mm and larger clots for the absorbable IVCF. Pressure gradient across the absorbable filter with 10 × 24-mm clot averaged 0.14 mm Hg, corresponding to a net force on the filter of 2.1 × 10⁻³ N, compared with 0.39 mm Hg or 5.8 × 10⁻³ N (P < .001) for the Greenfield filter. CCE of the absorbable filter was statistically similar to or an improvement on that of the Greenfield stainless steel filter for all clot sizes tested. CCE of the Greenfield filter in this study aligned with data from previous studies. Given the efficacy of the Greenfield filter in attenuating the risk of pulmonary embolism, the current study suggests that the absorbable filter may be a viable candidate for subsequent human testing. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes.
Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M
2018-04-12
Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used to only guide content-dependent filter selection where the set of spectral reflectances to be recovered are known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array yielding an efficient demosaicing order resulting in accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits a power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods.
SU-F-I-10: Spatially Local Statistics for Adaptive Image Filtering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliopoulos, AS; Sun, X; Floros, D
Purpose: To facilitate adaptive image filtering operations, addressing spatial variations in both noise and signal. Such issues are prevalent in cone-beam projections, where physical effects such as X-ray scattering result in spatially variant noise, violating common assumptions of homogeneous noise and challenging conventional filtering approaches to signal extraction and noise suppression. Methods: We present a computational mechanism for probing into and quantifying the spatial variance of noise throughout an image. The mechanism builds a pyramid of local statistics at multiple spatial scales; local statistical information at each scale includes (weighted) mean, median, standard deviation, median absolute deviation, as well as histogram or dynamic range after local mean/median shifting. Based on inter-scale differences of local statistics, the spatial scope of distinguishable noise variation is detected in a semi- or un-supervised manner. Additionally, we propose and demonstrate the incorporation of such information in globally parametrized (i.e., non-adaptive) filters, effectively transforming the latter into spatially adaptive filters. The multi-scale mechanism is materialized by efficient algorithms and implemented in parallel CPU/GPU architectures. Results: We demonstrate the impact of local statistics for adaptive image processing and analysis using cone-beam projections of a Catphan phantom, fitted within an annulus to increase X-ray scattering. The effective spatial scope of local statistics calculations is shown to vary throughout the image domain, necessitating multi-scale noise and signal structure analysis. Filtering results with and without spatial filter adaptation are compared visually, illustrating improvements in imaging signal extraction and noise suppression, and in preserving information in low-contrast regions. Conclusion: Local image statistics can be incorporated in filtering operations to equip them with spatial adaptivity to spatial signal/noise variations. An efficient multi-scale computational mechanism is developed to curtail processing latency. Spatially adaptive filtering may impact subsequent processing tasks such as reconstruction and numerical gradient computations for deformable registration. NIH Grant No. R01-184173.
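A minimal sketch of computing local statistics at several spatial scales with box windows (the full pyramid construction, inter-scale differencing, and GPU implementation described above are omitted):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_stats_pyramid(img, scales=(3, 9, 27)):
        # Local mean and standard deviation of img at several window sizes.
        stats = {}
        for s in scales:
            mean = uniform_filter(img, s)
            var = np.maximum(uniform_filter(img ** 2, s) - mean ** 2, 0.0)
            stats[s] = (mean, np.sqrt(var))
        return stats

    projection = np.random.rand(384, 512)     # placeholder cone-beam projection
    pyramid = local_stats_pyramid(projection)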
NASA Astrophysics Data System (ADS)
Piretzidis, Dimitrios; Sra, Gurveer; Karantaidis, George; Sideris, Michael G.
2017-04-01
A new method for identifying correlated errors in Gravity Recovery and Climate Experiment (GRACE) monthly harmonic coefficients has been developed and tested. Correlated errors are present in the differences between monthly GRACE solutions, and can be suppressed using a de-correlation filter. In principle, the de-correlation filter should be implemented only on coefficient series with correlated errors to avoid losing useful geophysical information. In previous studies, two main methods of implementing the de-correlation filter have been utilized. In the first one, the de-correlation filter is implemented starting from a specific minimum order until the maximum order of the monthly solution examined. In the second one, the de-correlation filter is implemented only on specific coefficient series, the selection of which is based on statistical testing. The method proposed in the present study exploits the capabilities of supervised machine learning algorithms such as neural networks and support vector machines (SVMs). The pattern of correlated errors can be described by several numerical and geometric features of the harmonic coefficient series. The features of extreme cases of both correlated and uncorrelated coefficients are extracted and used for the training of the machine learning algorithms. The trained machine learning algorithms are later used to identify correlated errors and provide the probability of a coefficient series being correlated. Regarding SVM algorithms, an extensive study is performed with various kernel functions in order to find the optimal training model for prediction. The selection of the optimal training model is based on the classification accuracy of the trained SVM algorithm on the same samples used for training. Results show excellent performance of all algorithms, with a classification accuracy of 97% - 100% on a pre-selected set of training samples, both in the validation stage of the training procedure and in the subsequent use of the trained algorithms to classify independent coefficients. This accuracy is also confirmed by the external validation of the trained algorithms using the hydrology model GLDAS NOAH. The proposed method meets the requirement of identifying and de-correlating only coefficients with correlated errors. Also, there is no need to apply statistical testing or other techniques that require prior de-correlation of the harmonic coefficients.
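A minimal sketch of the classification step: extract a few numerical features from each coefficient series and train an SVM with a radial kernel. The feature set, labels, and synthetic training series below are placeholders, not the features or GRACE data used above.

    import numpy as np
    from sklearn.svm import SVC

    def series_features(series):
        # Simple descriptors of a coefficient series (illustrative feature set).
        diffs = np.diff(series)
        return [np.std(series), np.std(diffs), np.mean(np.abs(diffs)),
                np.sum(diffs[:-1] * diffs[1:] < 0)]          # last entry counts sign changes

    smooth = [np.cumsum(0.1 * np.random.randn(60)) for _ in range(50)]         # label 0: smooth series
    noisy = [np.random.randn(60) * (-1.0) ** np.arange(60) for _ in range(50)]  # label 1: alternating errors
    X = np.array([series_features(s) for s in smooth + noisy])
    y = np.array([0] * 50 + [1] * 50)

    clf = SVC(kernel="rbf", probability=True).fit(X, y)
    p_correlated = clf.predict_proba(X)[:, 1]   # probability that a series shows correlated errors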
Rao-Blackwellization for Adaptive Gaussian Sum Nonlinear Model Propagation
NASA Technical Reports Server (NTRS)
Semper, Sean R.; Crassidis, John L.; George, Jemin; Mukherjee, Siddharth; Singla, Puneet
2015-01-01
When dealing with imperfect data and general models of dynamic systems, the best estimate is always sought in the presence of uncertainty or unknown parameters. In many cases, as the first attempt, the Extended Kalman filter (EKF) provides sufficient solutions for handling issues arising from nonlinear and non-Gaussian estimation problems. But these issues may lead to unacceptable performance and even divergence. In order to accurately capture the nonlinearities of most real-world dynamic systems, advanced filtering methods have been created to reduce filter divergence while enhancing performance. Approaches such as Gaussian sum filtering, grid-based Bayesian methods, and particle filters are well-known examples of advanced methods used to represent and recursively reproduce an approximation to the state probability density function (pdf). Some of these filtering methods were conceptually developed years before their widespread uses were realized. Advanced nonlinear filtering methods currently benefit from the computing advancements in computational speeds, memory, and parallel processing. Grid-based methods, multiple-model approaches and Gaussian sum filtering are numerical solutions that take advantage of different state coordinates or multiple-model methods that reduce the amount of approximation used. Choosing an efficient grid is very difficult for multi-dimensional state spaces, and oftentimes expensive computations must be done at each point. For the original Gaussian sum filter, a weighted sum of Gaussian density functions approximates the pdf but suffers at the update step for the individual component weight selections. In order to improve upon the original Gaussian sum filter, Ref. [2] introduces a weight update approach at the filter propagation stage instead of the measurement update stage. This weight update is performed by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation. By adaptively updating each component weight during the nonlinear propagation stage, an approximation of the true pdf can be successfully reconstructed. Particle filtering (PF) methods have gained popularity recently for solving nonlinear estimation problems due to their straightforward approach and the processing capabilities mentioned above. The basic concept behind PF is to represent any pdf as a set of random samples. As the number of samples increases, they will theoretically converge to the exact, equivalent representation of the desired pdf. When the estimated qth moment is needed, the samples are used for its construction allowing further analysis of the pdf characteristics. However, filter performance deteriorates as the dimension of the state vector increases. To overcome this problem, Ref. [5] applies a marginalization technique for PF methods, decreasing complexity of the system to one linear and another nonlinear state estimation problem. The marginalization theory was originally developed by Rao and Blackwell independently. According to Ref. [6], it improves any given estimator under every convex loss function. The improvement comes from calculating a conditional expected value, often involving integrating out a supportive statistic. In other words, Rao-Blackwellization allows for smaller but separate computations to be carried out while reaching the main objective of the estimator. In the case of improving an estimator's variance, any supporting statistic can be removed and its variance determined.
Next, any other information that depends on the supporting statistic is found along with its respective variance. A new approach is developed here by utilizing the strengths of the adaptive Gaussian sum propagation in Ref. [2] and a marginalization approach used for PF methods found in Ref. [7]. In the following sections, a modified filtering approach is presented based on a special state-space model within nonlinear systems to reduce the dimensionality of the optimization problem in Ref. [2]. First, the adaptive Gaussian sum propagation is explained, and then the new marginalized adaptive Gaussian sum propagation is derived. Finally, an example simulation is presented.
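For reference, the first two moments of a Gaussian sum follow directly from the component weights, means, and covariances; a minimal sketch of this moment bookkeeping (not the adaptive weight update of Ref. [2]):

    import numpy as np

    def gaussian_sum_moments(weights, means, covs):
        # Mean and covariance of a pdf approximated by a weighted sum of Gaussian components.
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        mu = sum(wi * mi for wi, mi in zip(w, means))
        cov = sum(wi * (Pi + np.outer(mi - mu, mi - mu)) for wi, Pi, mi in zip(w, covs, means))
        return mu, cov

    weights = [0.3, 0.7]                                   # hypothetical two-component sum
    means = [np.array([0.0, 0.0]), np.array([1.0, 2.0])]
    covs = [np.eye(2), 0.5 * np.eye(2)]
    mu, cov = gaussian_sum_moments(weights, means, covs)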
NASA Technical Reports Server (NTRS)
Penland, Cecile; Ghil, Michael; Weickmann, Klaus M.
1991-01-01
The spectral resolution and statistical significance of a harmonic analysis obtained by low-order MEM can be improved by subjecting the data to an adaptive filter. This adaptive filter consists of projecting the data onto the leading temporal empirical orthogonal functions obtained from singular spectrum analysis (SSA). The combined SSA-MEM method is applied both to a synthetic time series and a time series of AAM data. The procedure is very effective when the background noise is white and less so when the background noise is red. The latter case obtains in the AAM data. Nevertheless, reliable evidence for intraseasonal and interannual oscillations in AAM is detected. The interannual periods include a quasi-biennial one and an LF one, of 5 years, both related to the El Nino/Southern Oscillation. In the intraseasonal band, separate oscillations of about 48.5 and 51 days are ascertained.
Three-dimensional time dependent computation of turbulent flow
NASA Technical Reports Server (NTRS)
Kwak, D.; Reynolds, W. C.; Ferziger, J. H.
1975-01-01
The three-dimensional, primitive equations of motion are solved numerically for the case of isotropic box turbulence and the distortion of homogeneous turbulence by irrotational plane strain at large Reynolds numbers. A Gaussian filter is applied to the governing equations to define the large-scale field. This gives rise to additional second-order computed scale stresses (Leonard stresses). The residual stresses are simulated through an eddy viscosity. Uniform grids are used, with a fourth-order differencing scheme in space and a second-order Adams-Bashforth predictor for explicit time stepping. The results are compared to the experiments and statistical information extracted from the computer-generated data.
Smith, S Christian; Shanks, Candace; Guy, Gregory; Yang, Xiangyu; Dowell, Joshua D
2015-10-01
Retrievable inferior vena cava filters (IVCFs) are associated with long-term adverse events that have increased interest in improving filter retrieval rates. Determining the influential patient social and demographic factors affecting IVCF retrieval is important to personalize patient management strategies and attain optimal patient care. Seven hundred sixty-two patients who had a filter placed at our institution between January 2011 and November 2013 were retrospectively studied. Age, gender, race, cancer history, distance to residence from retrieval institution, and insurance status were identified for each patient, and those receiving retrievable IVCFs were further evaluated for retrieval rate and time to retrieval. Of the 762 filters placed, 133 were permanent filters. Of the 629 retrievable filters placed, 406 met the inclusion criteria and were eligible for retrieval. Results revealed that patients with Medicare were less likely to have their filters retrieved (p = 0.031). Older age was also associated with a lower likelihood of retrieval (p < 0.001), as was living further from the medical center (p = 0.027). Patients who were white and had Medicare were more likely than similarly insured black patients to have their filters retrieved (p = 0.024). The retrieval rate of IVCFs was most influenced by insurance status, distance from the medical center, and age. Race was statistically significant only when combined with insurance status. The results of this study suggest that these patient groups may need closer follow-up in order to obtain optimal IVCF retrieval rates.
Nonlinear estimation theory applied to orbit determination
NASA Technical Reports Server (NTRS)
Choe, C. Y.
1972-01-01
The development of an approximate nonlinear filter using the Martingale theory and appropriate smoothing properties is considered. Both the first order and the second order moments were estimated. The filter developed can be classified as a modified Gaussian second order filter. Its performance was evaluated in a simulated study of the problem of estimating the state of an interplanetary space vehicle during both a simulated Jupiter flyby and a simulated Jupiter orbiter mission. In addition to the modified Gaussian second order filter, the modified truncated second order filter was also evaluated in the simulated study. Results obtained with each of these filters were compared with numerical results obtained with the extended Kalman filter and the performance of each filter is determined by comparison with the actual estimation errors. The simulations were designed to determine the effects of the second order terms in the dynamic state relations, the observation state relations, and the Kalman gain compensation term. It is shown that the Kalman gain-compensated filter which includes only the Kalman gain compensation term is superior to all of the other filters.
Tuning the photon statistics of a strongly coupled nanophotonic system
NASA Astrophysics Data System (ADS)
Dory, Constantin; Fischer, Kevin A.; Müller, Kai; Lagoudakis, Konstantinos G.; Sarmiento, Tomas; Rundquist, Armand; Zhang, Jingyuan L.; Kelaita, Yousif; Sapra, Neil V.; Vučković, Jelena
2017-02-01
We investigate the dynamics of single- and multiphoton emission from detuned strongly coupled systems based on the quantum-dot-photonic-crystal resonator platform. Transmitting light through such systems can generate a range of nonclassical states of light with tunable photon counting statistics due to the nonlinear ladder of hybridized light-matter states. By controlling the detuning between emitter and resonator, the transmission can be tuned to strongly enhance either single- or two-photon emission processes. Despite the strongly dissipative nature of these systems, we find that by utilizing a self-homodyne interference technique combined with frequency filtering we are able to find a strong two-photon component of the emission in the multiphoton regime. In order to explain our correlation measurements, we propose rate equation models that capture the dominant processes of emission in both the single- and multiphoton regimes. These models are then supported by quantum-optical simulations that fully capture the frequency filtering of emission from our solid-state system.
NASA Astrophysics Data System (ADS)
Gruneisen, Mark T.; Sickmiller, Brett A.; Flanagan, Michael B.; Black, James P.; Stoltenberg, Kurt E.; Duchane, Alexander W.
2016-02-01
Spatial filtering is an important technique for reducing sky background noise in a satellite quantum key distribution downlink receiver. Atmospheric turbulence limits the extent to which spatial filtering can reduce sky noise without introducing signal losses. Using atmospheric propagation and compensation simulations, the potential benefit of adaptive optics (AO) to secure key generation (SKG) is quantified. Simulations are performed assuming optical propagation from a low-Earth-orbit satellite to a terrestrial receiver that includes AO. Higher-order AO correction is modeled assuming a Shack-Hartmann wavefront sensor and a continuous-face-sheet deformable mirror. The effects of atmospheric turbulence, tracking, and higher-order AO on the photon capture efficiency are simulated using statistical representations of turbulence and a time-domain wave-optics hardware emulator. SKG rates are calculated for a decoy-state protocol as a function of the receiver field of view for various strengths of turbulence, sky radiances, and pointing angles. The results show that at fields of view smaller than those discussed by others, AO technologies can enhance SKG rates in daylight and enable SKG where it would otherwise be prohibited as a consequence of background optical noise and signal loss due to propagation and turbulence effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, S.; Wong, K.V.; Nemerow, N.
Characterization of the following waste streams: air-classified light (ACL), digester slurry, filter cake, filtrate, washwater input and washwater effluent has been made for the Refcom facility in order to assess the effects of these waste streams, if discharged into the environment. Special laboratory studies to evaluate the effect of plastics on anaerobic digestion have been undertaken. A separate report has been furnished describing the studies of lab-model digesters. Data collected for ACL has been statistically analyzed.
Tuning the Photon Statistics of a Strongly Coupled Nanophotonic System
NASA Astrophysics Data System (ADS)
Dory, C.; Fischer, K. A.; Müller, K.; Lagoudakis, K. G.; Sarmiento, T.; Rundquist, A.; Zhang, J. L.; Kelaita, Y.; Sapra, N. V.; Vučković, J.
Strongly coupled quantum-dot-photonic-crystal cavity systems provide a nonlinear ladder of hybridized light-matter states, which are a promising platform for non-classical light generation. The transmission of light through such systems enables light generation with tunable photon counting statistics. By detuning the frequencies of quantum emitter and cavity, we can tune the transmission of light to strongly enhance either single- or two-photon emission processes. However, these nanophotonic systems show a strongly dissipative nature and classical light obscures any quantum character of the emission. In this work, we utilize a self-homodyne interference technique combined with frequency-filtering to overcome this obstacle. This allows us to generate emission with a strong two-photon component in the multi-photon regime, where we measure a second-order coherence value of g^(2)[0] = 1.490 ± 0.034. We propose rate equation models that capture the dominant processes of emission both in the single- and multi-photon regimes and support them by quantum-optical simulations that fully capture the frequency filtering of emission from our solid-state system. Finally, we simulate a third-order coherence value of g^(3)[0] = 0.872 ± 0.021. Army Research Office (ARO) (W911NF1310309), National Science Foundation (1503759), Stanford Graduate Fellowship.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-05
... Filter July 30, 2013. Pursuant to Section 19(b)(1) of the Securities Exchange Act of 1934 (``Act''),\\1... filtering inbound Complex Orders \\3\\ (the ``Complex Order Filter''). The proposed rule change would make the... proposed Complex Order Filter will simplify the filtering procedure, provide greater flexibility to...
Performance of innovative textile biofilters for domestic wastewater treatment.
Spychała, Marcin; Błazejewski, Ryszard; Nawrot, Tadeusz
2013-01-01
Two types of geotextile, TS 50 and TC/PP 300, were investigated as experimental filters. The raw wastewater, pre-treated in a septic tank, was intermittently dosed and filtered under hydrostatic pressure. At the beginning, the filter reactor comprised nine filters made of geotextiles (of three types: TS 10, TS 50 and TC/PP 300). At the end of the start-up period the TS 10 filters were removed due to their high outflow instability. After four months of working, the hydraulic capacities of the remaining filters were: 3.23 cm3/cm2/d for TS 50 and 4.14 cm3/cm2/d for TC/PP 300. The efficiencies of COD and BOD5 removal were similar for both types of geotextile (COD: 64%, BOD5: 80%). A small but statistically significant difference between ammonium nitrogen removal was observed (40% for TS 50 and 35% for TC/PP 300), most probably due to their different structure. Biological removal of P(tot) was relatively poor and similar for both geotextile types. The mean concentration of matter accumulated on the geotextiles was over one order of magnitude higher than conventional activated sludge concentrations. During the last weeks of the experiments the values of basic pollution indicators in the effluent were lower than the maximum permissible values (according to Polish law).
A close examination of double filtering with fold change and t test in microarray analysis
2009-01-01
Background Many researchers use the double filtering procedure with fold change and t test to identify differentially expressed genes, in the hope that the double filtering will provide extra confidence in the results. Due to its simplicity, the double filtering procedure has been popular with applied researchers despite the development of more sophisticated methods. Results This paper, for the first time to our knowledge, provides theoretical insight into the drawback of the double filtering procedure. We show that fold change assumes all genes to have a common variance while the t statistic assumes gene-specific variances. The two statistics are based on contradicting assumptions. Under the assumption that gene variances arise from a mixture of a common variance and gene-specific variances, we develop the theoretically most powerful likelihood ratio test statistic. We further demonstrate that the posterior inference based on a Bayesian mixture model and the widely used significance analysis of microarrays (SAM) statistic are better approximations to the likelihood ratio test than the double filtering procedure. Conclusion We demonstrate through hypothesis testing theory, simulation studies and real data examples, that well constructed shrinkage testing methods, which can be united under the mixture gene variance assumption, can considerably outperform the double filtering procedure. PMID:19995439
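As a minimal illustration of the double filtering procedure critiqued above, the sketch below flags genes that pass both a log2 fold-change cutoff and a per-gene two-sample t-test; the data, thresholds, and group sizes are invented for illustration, and no multiplicity correction or shrinkage (which the paper recommends instead) is applied.

```python
import numpy as np
from scipy import stats

def double_filter(expr_a, expr_b, fc_cutoff=1.0, alpha=0.05):
    """Flag genes passing both a log2 fold-change cutoff and a two-sample t-test.

    expr_a, expr_b: arrays of shape (n_genes, n_samples) on a log2 scale.
    fc_cutoff: minimum absolute difference of group means (log2 fold change).
    alpha: significance level for the per-gene t-test (no multiplicity correction).
    """
    log_fc = expr_a.mean(axis=1) - expr_b.mean(axis=1)
    t_stat, p_val = stats.ttest_ind(expr_a, expr_b, axis=1)
    return (np.abs(log_fc) >= fc_cutoff) & (p_val <= alpha)

# Toy data: 1000 genes, 5 samples per group, first 50 genes truly shifted.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(1000, 5))
b = rng.normal(0.0, 1.0, size=(1000, 5))
b[:50] += 2.0  # differentially expressed genes
selected = double_filter(a, b)
print("genes flagged:", int(selected.sum()))
```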
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-20
... Filter September 16, 2013. I. Introduction On July 22, 2013, BOX Options Exchange LLC (the ``Exchange... included in the HSVF. A. Complex Order Filter BOX's Complex Order Filter provides a process designed to....\\4\\ BOX proposes to revise its rules to specifically provide that the Complex Order Filter operates...
Comparisons of linear and nonlinear pyramid schemes for signal and image processing
NASA Astrophysics Data System (ADS)
Morales, Aldo W.; Ko, Sung-Jea
1997-04-01
Linear filter banks are being used extensively in image and video applications. New research results in wavelet applications for compression and de-noising are constantly appearing in the technical literature. On the other hand, non-linear filter banks are also being used regularly in image pyramid algorithms. There are some inherent advantages in using non-linear filters instead of linear filters when non-Gaussian processes are present in images. However, a consistent way of comparing performance criteria between these two schemes has not been fully developed yet. In this paper a recently discovered tool, sample selection probabilities, is used to compare the behavior of linear and non-linear filters. The conversion from weights of order statistics (OS) filters to coefficients of the impulse response is obtained through these probabilities. However, the reverse problem, the conversion from coefficients of the impulse response to the weights of OS filters, is not yet fully understood. One of the reasons for this difficulty is the highly non-linear nature of the partitions and generating function used. In the present paper the problem is posed as an optimization of integer linear programming subject to constraints directly obtained from the coefficients of the impulse response. Although the technique to be presented is not completely refined, it certainly appears to be promising. Some results will be shown.
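To make the linear-versus-order-statistics contrast above concrete, here is a small hedged comparison (using SciPy, with an invented step signal and impulsive noise) of a 5-tap moving average against a 5-tap median filter; it illustrates the robustness of OS filters to impulsive noise rather than the sample-selection-probability analysis of the paper.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter1d

# Step signal with impulsive (salt-and-pepper-like) noise.
rng = np.random.default_rng(1)
x = np.concatenate([np.zeros(100), np.ones(100)])
noisy = x.copy()
spikes = rng.choice(200, size=10, replace=False)
noisy[spikes] += rng.choice([-3.0, 3.0], size=10)

linear = uniform_filter1d(noisy, size=5)   # 5-tap moving average (linear filter)
os_med = median_filter(noisy, size=5)      # 5-tap median (order-statistic) filter

print("max error, linear :", np.max(np.abs(linear - x)))
print("max error, median :", np.max(np.abs(os_med - x)))
```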
Kournetas, N; Spintzyk, S; Schweizer, E; Sawada, T; Said, F; Schmid, P; Geis-Gerstorfer, J; Eliades, G; Rupp, F
2017-08-01
Comparability of topographical data of implant surfaces in literature is low and their clinical relevance often equivocal. The aim of this study was to investigate the ability of scanning electron microscopy and optical interferometry to assess statistically similar 3-dimensional roughness parameter results and to evaluate these data based on predefined criteria regarded as relevant for a favorable biological response. Four different commercial dental screw-type implants (NanoTite Certain Prevail, TiUnite Brånemark Mk III, XiVE S Plus and SLA Standard Plus) were analyzed by stereo scanning electron microscopy and white light interferometry. Surface height, spatial and hybrid roughness parameters (Sa, Sz, Ssk, Sku, Sal, Str, Sdr) were assessed from raw and filtered data (Gaussian 50μm and 5μm cut-off-filters), respectively. Data were statistically compared by one-way ANOVA and Tukey-Kramer post-hoc test. For a clinically relevant interpretation, a categorizing evaluation approach was used based on predefined threshold criteria for each roughness parameter. The two methods exhibited predominantly statistical differences. Dependent on roughness parameters and filter settings, both methods showed variations in rankings of the implant surfaces and differed in their ability to discriminate the different topographies. Overall, the analyses revealed scale-dependent roughness data. Compared to the pure statistical approach, the categorizing evaluation resulted in much more similarities between the two methods. This study suggests reconsidering current approaches for the topographical evaluation of implant surfaces and further seeking proper experimental settings. Furthermore, the specific role of different roughness parameters for the bioresponse has to be studied in detail in order to better define clinically relevant, scale-dependent and parameter-specific thresholds and ranges. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Nonlinear estimation theory applied to the interplanetary orbit determination problem.
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Choe, C. Y.
1972-01-01
Martingale theory and appropriate smoothing properties of Loeve (1953) have been used to develop a modified Gaussian second-order filter. The performance of the filter is evaluated through numerical simulation of a Jupiter flyby mission. The observations used in the simulation are on-board measurements of the angle between Jupiter and a fixed star taken at discrete time intervals. In the numerical study, the influence of each of the second-order terms is evaluated. Five filter algorithms are used in the simulations. Four of the filters are the modified Gaussian second-order filter and three approximations derived by neglecting one or more of the second-order terms in the equations. The fifth filter is the extended Kalman-Bucy filter which is obtained by neglecting all of the second-order terms.
Statistical strategy for anisotropic adventitia modelling in IVUS.
Gil, Debora; Hernández, Aura; Rodriguez, Oriol; Mauri, Josepa; Radeva, Petia
2006-06-01
Vessel plaque assessment by analysis of intravascular ultrasound sequences is a useful tool for cardiac disease diagnosis and intervention. Manual detection of luminal (inner) and media-adventitia (external) vessel borders is the main activity of physicians in the process of lumen narrowing (plaque) quantification. Difficult definition of vessel border descriptors, as well as shadows, artifacts, and blurred signal response due to ultrasound physical properties, hamper automated adventitia segmentation. In order to efficiently approach such a complex problem, we propose blending advanced anisotropic filtering operators and statistical classification techniques into a vessel border modelling strategy. Our systematic statistical analysis shows that the reported adventitia detection achieves an accuracy in the range of interobserver variability regardless of plaque nature, vessel geometry, and incomplete vessel borders.
Optimizing of a high-order digital filter using PSO algorithm
NASA Astrophysics Data System (ADS)
Xu, Fuchun
2018-04-01
A self-adaptive high-order digital filter, which offers an opportunity to simplify the process of tuning parameters and further improve the noise performance, is presented in this paper. The parameters of traditional digital filters are mainly tuned by complex calculation, whereas this paper presents a 5th order digital filter whose parameters are optimized by a swarm intelligence algorithm to obtain outstanding performance. Simulation results for the proposed 5th order digital filter show an SNR > 122 dB and a noise floor below -170 dB in the frequency range of 5-150 Hz. In further simulation, the robustness of the proposed 5th order digital filter is analyzed.
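The abstract above does not give the filter structure or optimization details, so the following sketch only illustrates the general idea of tuning filter parameters with a particle swarm: a basic global-best PSO (all hyperparameters are assumptions) fits 6 FIR taps to an idealized low-pass magnitude response, not the paper's 5th-order filter.

```python
import numpy as np
from scipy.signal import freqz

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=2):
    """Basic particle swarm optimizer with a global-best topology."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

# Target: ideal low-pass magnitude response (cutoff at 0.3*Nyquist),
# approximated by a 6-tap FIR filter whose taps the swarm optimizes.
w_grid = np.linspace(0, np.pi, 256)
target = (w_grid <= 0.3 * np.pi).astype(float)

def fir_cost(taps):
    _, h = freqz(taps, worN=w_grid)
    return np.mean((np.abs(h) - target) ** 2)

best_taps, best_cost = pso(fir_cost, dim=6)
print("optimized taps:", np.round(best_taps, 3), " cost:", round(float(best_cost), 4))
```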
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, S. Christian, E-mail: csmith@aemrc.arizona.edu; Shanks, Candace, E-mail: Candace.Shanks@osumc.edu; Guy, Gregory, E-mail: Gregory.Guy@osumc.edu
Purpose: Retrievable inferior vena cava filters (IVCFs) are associated with long-term adverse events that have increased interest in improving filter retrieval rates. Determining the influential patient social and demographic factors affecting IVCF retrieval is important to personalize patient management strategies and attain optimal patient care. Materials and Methods: Seven-hundred and sixty-two patients were retrospectively studied who had a filter placed at our institution between January 2011 and November 2013. Age, gender, race, cancer history, distance to residence from retrieval institution, and insurance status were identified for each patient, and those receiving retrievable IVCFs were further evaluated for retrieval rate and time to retrieval. Results: Of the 762 filters placed, 133 were permanent filters. Of the 629 retrievable filters placed, 406 met the inclusion criteria and were eligible for retrieval. Results revealed patients with Medicare were less likely to have their filters retrieved (p = 0.031). Older age was also associated with a lower likelihood of retrieval (p < 0.001) as was living further from the medical center (p = 0.027). Patients who were white and had Medicare were more likely than similarly insured black patients to have their filters retrieved (p = 0.024). Conclusions: The retrieval rate of IVCFs was most influenced by insurance status, distance from the medical center, and age. Race was statistically significant only when combined with insurance status. The results of this study suggest that these patient groups may need closer follow-up in order to obtain optimal IVCF retrieval rates.
Poisson filtering of laser ranging data
NASA Technical Reports Server (NTRS)
Ricklefs, Randall L.; Shelus, Peter J.
1993-01-01
The filtering of data in a high noise, low signal strength environment is a situation encountered routinely in lunar laser ranging (LLR) and, to a lesser extent, in artificial satellite laser ranging (SLR). The use of Poisson statistics as one of the tools for filtering LLR data is described first in a historical context. The more recent application of this statistical technique to noisy SLR data is also described.
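A simplified sketch of the kind of Poisson filtering described above: range residuals are histogrammed, a uniform background rate is estimated, and bins whose counts are improbably large under Poisson statistics are flagged as signal. The bin width, false-alarm level, and toy data are assumptions, not the operational LLR/SLR procedure.

```python
import numpy as np
from scipy.stats import poisson

def poisson_signal_bins(residuals_ns, bin_width_ns=1.0, false_alarm=1e-3):
    """Flag histogram bins whose counts exceed what a uniform (Poisson) noise
    background would produce; a simplified histogram-based Poisson filter
    for noisy laser-ranging returns."""
    edges = np.arange(residuals_ns.min(), residuals_ns.max() + bin_width_ns, bin_width_ns)
    counts, edges = np.histogram(residuals_ns, bins=edges)
    background_rate = counts.mean()              # mean counts per bin if all were noise
    threshold = poisson.isf(false_alarm, background_rate)
    return edges, counts, counts > threshold

# Toy data: uniform background plus a narrow cluster of signal returns.
rng = np.random.default_rng(3)
background = rng.uniform(0.0, 200.0, size=2000)
signal = rng.normal(120.0, 0.3, size=60)
edges, counts, is_signal = poisson_signal_bins(np.concatenate([background, signal]))
print("signal bins near (ns):", edges[:-1][is_signal])
```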
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.
1985-01-01
The performance analysis results of a fault inferring nonlinear detection system (FINDS) using sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment are presented. First, a statistical analysis of the flight recorded sensor data was made in order to determine the characteristics of sensor inaccuracies. Next, modifications were made to the detection and decision functions in the FINDS algorithm in order to improve false alarm and failure detection performance under real modelling errors present in the flight data. Finally, the failure detection and false alarm performance of the FINDS algorithm were analyzed by injecting bias failures into fourteen sensor outputs over six repetitive runs of the five minute flight data. In general, the detection speed, failure level estimation, and false alarm performance showed a marked improvement over the previously reported simulation runs. In agreement with earlier results, detection speed was faster for filter measurement sensors such as MLS than for filter input sensors such as flight control accelerometers.
Spacecraft attitude determination using a second-order nonlinear filter
NASA Technical Reports Server (NTRS)
Vathsal, S.
1987-01-01
The stringent attitude determination accuracy and faster slew maneuver requirements demanded by present-day spacecraft control systems motivate the development of recursive nonlinear filters for attitude estimation. This paper presents the second-order filter development for the estimation of the attitude quaternion using three-axis gyro and star tracker measurement data. Performance comparisons have been made by computer simulation of system models and filter mechanization. It is shown that the second-order filter consistently performs better than the extended Kalman filter when the performance index of the root sum square estimation error of the quaternion vector is compared. The second-order filter identifies the gyro drift rates faster than the extended Kalman filter. The uniqueness of this algorithm is the online generation of the time-varying process and measurement noise covariance matrices, derived as a function of the process and measurement nonlinearity, respectively.
Zipf, Mariah Siebert; Pinheiro, Ivone Gohr; Conegero, Mariana Garcia
2016-07-01
One of the main actions of sustainability that is applicable to residential, commercial, and public buildings is the rational use of water that contemplates the reuse of greywater as one of the main options for reducing the consumption of drinking water. Therefore, this research aimed to study the efficiencies of simplified treatments for greywater reuse using slow sand and slow slate waste filtration, both followed by granular activated carbon filters. The system monitoring was conducted over 28 weeks, using analyses of the following parameters: pH, turbidity, apparent color, biochemical oxygen demand (BOD), chemical oxygen demand (COD), surfactants, total coliforms, and thermotolerant coliforms. The system was run at two different filtration rates: 6 and 2 m3/m2/day. Statistical analyses showed no significant differences in the majority of the results when filtration rate changed from 6 to 2 m3/m2/day. The average removal efficiencies with regard to the turbidity, apparent color, COD and BOD were 61, 54, 56, and 56%, respectively, for the sand filter, and 66, 61, 60, and 51%, respectively, for the slate waste filter. Both systems showed good efficiencies in removing surfactants, around 70%, while the pH reached values of around 7.80. The average removal efficiencies of the total and thermotolerant coliforms were of 61 and 90%, respectively, for the sand filter, and 67 and 80%, respectively, for the slate waste filter. The statistical analysis found no significant differences between the responses of the two systems, which attest to the fact that the slate waste can be a substitute for sand. The maximum levels of efficiency were high, indicating the potential of the systems, and suggesting their optimization in order to achieve much higher average efficiencies. Copyright © 2016 Elsevier Ltd. All rights reserved.
PERFORMANCE OF TRICKLING FILTER PLANTS: RELIABILITY, STABILITY, VARIABILITY
Effluent quality variability from trickling filters was examined in this study by statistically analyzing daily effluent BOD5 and suspended solids data from 11 treatment plants. Summary statistics (mean, standard deviation, etc.) were examined to determine the general characteris...
Tensor-entanglement-filtering renormalization approach and symmetry-protected topological order
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu Zhengcheng; Wen Xiaogang
2009-10-15
We study the renormalization group flow of the Lagrangian for statistical and quantum systems by representing their path integral in terms of a tensor network. Using a tensor-entanglement-filtering renormalization approach that removes local entanglement and produces a coarse-grained lattice, we show that the resulting renormalization flow of the tensors in the tensor network has a nice fixed-point structure. The isolated fixed-point tensors T_inv plus the symmetry group G_sym of the tensors (i.e., the symmetry group of the Lagrangian) characterize various phases of the system. Such a characterization can describe both the symmetry breaking phases and topological phases, as illustrated by the two-dimensional (2D) statistical Ising model, 2D statistical loop-gas model, and 1+1D quantum spin-1/2 and spin-1 models. In particular, using such a (G_sym, T_inv) characterization, we show that the Haldane phase for a spin-1 chain is a phase protected by the time-reversal, parity, and translation symmetries. Thus the Haldane phase is a symmetry-protected topological phase. The (G_sym, T_inv) characterization is more general than the characterizations based on the boundary spins and string order parameters. The tensor renormalization approach also allows us to study continuous phase transitions between symmetry breaking phases and/or topological phases. The scaling dimensions and the central charges for the critical points that describe those continuous phase transitions can be calculated from the fixed-point tensors at those critical points.
NASA Astrophysics Data System (ADS)
Jaranowski, Piotr; Królak, Andrzej
2000-03-01
We develop the analytic and numerical tools for data analysis of the continuous gravitational-wave signals from spinning neutron stars for ground-based laser interferometric detectors. The statistical data analysis method that we investigate is maximum likelihood detection which for the case of Gaussian noise reduces to matched filtering. We study in detail the statistical properties of the optimum functional that needs to be calculated in order to detect the gravitational-wave signal and estimate its parameters. We find it particularly useful to divide the parameter space into elementary cells such that the values of the optimal functional are statistically independent in different cells. We derive formulas for false alarm and detection probabilities both for the optimal and the suboptimal filters. We assess the computational requirements needed to do the signal search. We compare a number of criteria to build sufficiently accurate templates for our data analysis scheme. We verify the validity of our concepts and formulas by means of the Monte Carlo simulations. We present algorithms by which one can estimate the parameters of the continuous signals accurately. We find, confirming earlier work of other authors, that given a 100 Gflops computational power an all-sky search for observation time of 7 days and directed search for observation time of 120 days are possible whereas an all-sky search for 120 days of observation time is computationally prohibitive.
Bias Reduction and Filter Convergence for Long Range Stereo
NASA Technical Reports Server (NTRS)
Sibley, Gabe; Matthies, Larry; Sukhatme, Gaurav
2005-01-01
We are concerned here with improving long range stereo by filtering image sequences. Traditionally, measurement errors from stereo camera systems have been approximated as 3-D Gaussians, where the mean is derived by triangulation and the covariance by linearized error propagation. However, there are two problems that arise when filtering such 3-D measurements. First, stereo triangulation suffers from a range dependent statistical bias; when filtering this leads to over-estimating the true range. Second, filtering 3-D measurements derived via linearized error propagation leads to apparent filter divergence; the estimator is biased to under-estimate range. To address the first issue, we examine the statistical behavior of stereo triangulation and show how to remove the bias by series expansion. The solution to the second problem is to filter with image coordinates as measurements instead of triangulated 3-D coordinates.
A statistical package for computing time and frequency domain analysis
NASA Technical Reports Server (NTRS)
Brownlow, J.
1978-01-01
The spectrum analysis (SPA) program is a general purpose digital computer program designed to aid in data analysis. The program does time and frequency domain statistical analyses as well as some preanalysis data preparation. The capabilities of the SPA program include linear trend removal and/or digital filtering of data, plotting and/or listing of both filtered and unfiltered data, time domain statistical characterization of data, and frequency domain statistical characterization of data.
NASA Astrophysics Data System (ADS)
Wan, Xiaoqing; Zhao, Chunhui; Wang, Yanchun; Liu, Wu
2017-11-01
This paper proposes a novel classification paradigm for hyperspectral images (HSI) using feature-level fusion and deep learning-based methodologies. Operation is carried out in three main steps. First, during a pre-processing stage, wave atoms are introduced into the bilateral filter to smooth the HSI; this strategy can effectively attenuate noise and restore texture information. Meanwhile, high quality spectral-spatial features can be extracted from the HSI by taking geometric closeness and photometric similarity among pixels into consideration simultaneously. Second, higher order statistics techniques are introduced for the first time into hyperspectral data classification to characterize the phase correlations of spectral curves. Third, multifractal spectrum features are extracted to characterize the singularities and self-similarities of spectral shapes. To this end, a feature-level fusion is applied to the extracted spectral-spatial features along with the higher order statistics and multifractal spectrum features. Finally, a stacked sparse autoencoder is utilized to learn more abstract and invariant high-level features from the multiple feature sets, and then a random forest classifier is employed to perform supervised fine-tuning and classification. Experimental results on two real hyperspectral data sets demonstrate that the proposed method outperforms some traditional alternatives.
Sinogram noise reduction for low-dose CT by statistics-based nonlinear filters
NASA Astrophysics Data System (ADS)
Wang, Jing; Lu, Hongbing; Li, Tianfang; Liang, Zhengrong
2005-04-01
Low-dose CT (computed tomography) sinogram data have been shown to be signal-dependent with an analytical relationship between the sample mean and sample variance. Spatially-invariant low-pass linear filters, such as the Butterworth and Hanning filters, could not adequately handle the data noise, and statistics-based nonlinear filters may be an alternative choice, in addition to other choices of minimizing cost functions on the noisy data. The anisotropic diffusion filter and the nonlinear Gaussian filter chain (NLGC) are two well-known classes of nonlinear filters based on local statistics for the purpose of edge-preserving noise reduction. These two filters can utilize the noise properties of the low-dose CT sinogram for adaptive noise reduction, but cannot incorporate signal correlative information for an optimal regularized solution. Our previously-developed Karhunen-Loeve (KL) domain PWLS (penalized weighted least square) minimization considers the signal correlation via the KL strategy and seeks the PWLS cost function minimization for an optimal regularized solution for each KL component, i.e., adaptive to the KL components. This work compared the nonlinear filters with the KL-PWLS framework for the low-dose CT application. Furthermore, we investigated the nonlinear filters for post KL-PWLS noise treatment in the sinogram space, where the filters were applied after the ramp operation on the KL-PWLS treated sinogram data prior to the backprojection operation (for image reconstruction). In both computer simulation and experimental low-dose CT data, the nonlinear filters could not outperform the KL-PWLS framework. The gain of post KL-PWLS edge-preserving noise filtering in the sinogram space is not significant, even though the noise has been modulated by the ramp operation.
Raymond L. Czaplewski
2015-01-01
Wall-to-wall remotely sensed data are increasingly available to monitor landscape dynamics over large geographic areas. However, statistical monitoring programs that use post-stratification cannot fully utilize those sensor data. The Kalman filter (KF) is an alternative statistical estimator. I develop a new KF algorithm that is numerically robust with large numbers of...
An optimal filter for short photoplethysmogram signals
Liang, Yongbo; Elgendi, Mohamed; Chen, Zhencheng; Ward, Rabab
2018-01-01
A photoplethysmogram (PPG) contains a wealth of cardiovascular system information, and with the development of wearable technology, it has become the basic technique for evaluating cardiovascular health and detecting diseases. However, due to the varying environments in which wearable devices are used and, consequently, their varying susceptibility to noise interference, effective processing of PPG signals is challenging. Thus, the aim of this study was to determine the optimal filter and filter order to be used for PPG signal processing to make the systolic and diastolic waves more salient in the filtered PPG signal using the skewness quality index. Nine types of filters with 10 different orders were used to filter 219 (2.1s) short PPG signals. The signals were divided into three categories by PPG experts according to their noise levels: excellent, acceptable, or unfit. Results show that the Chebyshev II filter can improve the PPG signal quality more effectively than other types of filters and that the optimal order for the Chebyshev II filter is the 4th order. PMID:29714722
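A hedged sketch of the recommended processing: a 4th-order Chebyshev II filter applied to a short PPG segment, scored with a skewness-based quality index. The pass band, stop-band attenuation, zero-phase application, sampling rate, and toy signal are assumptions for illustration; the study above fixes only the filter type and order.

```python
import numpy as np
from scipy.signal import cheby2, filtfilt
from scipy.stats import skew

def filter_ppg(ppg, fs, order=4, band=(0.5, 8.0), rs=20.0):
    """Zero-phase Chebyshev II band-pass filtering of a short PPG segment.
    Band edges and stop-band attenuation (rs, in dB) are illustrative choices."""
    b, a = cheby2(order, rs, band, btype="bandpass", fs=fs)
    return filtfilt(b, a, ppg)

def skewness_sqi(ppg):
    """Skewness-based signal quality index of a PPG segment."""
    return skew(ppg)

# Toy 2.1 s segment at an assumed 125 Hz: a 1.2 Hz pulse wave plus drift and noise.
fs = 125
t = np.arange(0, 2.1, 1 / fs)
rng = np.random.default_rng(4)
raw = np.sin(2 * np.pi * 1.2 * t) + 0.5 * t + 0.2 * rng.standard_normal(t.size)
clean = filter_ppg(raw, fs)
print("SQI raw: %.3f  filtered: %.3f" % (skewness_sqi(raw), skewness_sqi(clean)))
```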
Single photon laser altimeter data processing, analysis and experimental validation
NASA Astrophysics Data System (ADS)
Vacek, Michael; Peca, Marek; Michalek, Vojtech; Prochazka, Ivan
2015-10-01
Spaceborne laser altimeters are common instruments on-board rendezvous spacecraft. This manuscript deals with altimeters using a single photon approach, which belongs to the family of time-of-flight range measurements. Moreover, the single photon receiver part of the altimeter may be utilized as an Earth-to-spacecraft link enabling one-way ranging, time transfer and data transfer. Single photon altimeters evaluate the actual altitude through repetitive detections of single photons of the reflected laser pulses. We propose a single photon altimeter signal processing and data mining algorithm based on the Poisson statistic filter (histogram method) and a modified Kalman filter, providing all common altimetry products (altitude, slope, background photon flux and albedo). The Kalman filter is extended for background noise filtering, varying slope adaptation and a non-causal extension for an abrupt slope change. Moreover, the algorithm partially removes the major drawback of a single photon altitude reading, namely that the photon detection measurement statistics must be gathered. The developed algorithm deduces the actual altitude on the basis of a single photon detection; it is thus optimal in the sense that each detected signal photon carrying altitude information is tracked and no altitude information is lost. The algorithm was tested on simulated datasets and partially cross-checked with experimental data collected using the developed single photon altimeter breadboard based on a microchip laser with a pulse energy on the order of a microjoule and a repetition rate of several kilohertz. We demonstrated that such an altimeter configuration may be utilized for landing on or hovering over a small body (asteroid, comet).
Estimation of images degraded by film-grain noise.
Naderi, F; Sawchuk, A A
1978-04-15
Film-grain noise describes the intrinsic noise produced by a photographic emulsion during the process of image recording and reproduction. In this paper we consider the restoration of images degraded by film-grain noise. First a detailed model for the over-all photographic imaging system is presented. The model includes linear blurring effects and the signal-dependent effect of film-grain noise. The accuracy of this model is tested by simulating images according to it and comparing the results to images of similar targets that were actually recorded on film. The restoration of images degraded by film-grain noise is then considered in the context of estimation theory. A discrete Wiener filter is developed which explicitly allows for the signal dependence of the noise. The filter adaptively alters its characteristics based on the nonstationary first order statistics of an image and is shown to have advantages over the conventional Wiener filter. Experimental results for modeling and the adaptive estimation filter are presented.
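The following sketch illustrates the general idea of a locally adaptive Wiener-type filter for signal-dependent noise, in the spirit of the estimator above: the gain shrinks toward the local mean where the estimated noise variance (here an assumed linear function of the local mean) dominates the local signal variance. It is not the paper's film-grain restoration filter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_wiener(img, win=7, noise_var_fn=lambda m: 0.01 * m):
    """Locally adaptive Wiener-type filter for signal-dependent noise.
    noise_var_fn maps the local mean to an assumed noise variance; the gain
    is reduced wherever local variance is dominated by that noise estimate."""
    local_mean = uniform_filter(img, size=win)
    local_sq_mean = uniform_filter(img ** 2, size=win)
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 1e-12)
    noise_var = noise_var_fn(local_mean)
    gain = np.clip((local_var - noise_var) / local_var, 0.0, 1.0)
    return local_mean + gain * (img - local_mean)

# Toy image with brightness-dependent noise.
rng = np.random.default_rng(5)
clean = np.tile(np.linspace(0.2, 1.0, 128), (128, 1))
noisy = clean + rng.standard_normal(clean.shape) * np.sqrt(0.01 * clean)
restored = adaptive_wiener(noisy)
print("noisy MSE: %.4f  restored MSE: %.4f"
      % (np.mean((noisy - clean) ** 2), np.mean((restored - clean) ** 2)))
```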
Australian Oceanographic Data Centre Bulletin 16.
1983-05-01
... it is inevitable that, with the quantities of data involved, some bad data will be archived. In order to exclude this, various filtering techniques will be employed. ... Data are analysed for statistical properties (e.g. burst mean, variance, exceedance and spectral properties) and certain values are correlated with relevant forcing ... [The remainder of this excerpt is a sample instrument data listing (day number, instrument identifier, axis bearing, and mean resolved current magnitude) and is omitted.]
NASA Astrophysics Data System (ADS)
Ushenko, A. G.; Dubolazov, O. V.; Ushenko, Vladimir A.; Ushenko, Yu. A.; Sakhnovskiy, M. Yu.; Prydiy, O. G.; Lakusta, I. I.; Novakovskaya, O. Yu.; Melenko, S. R.
2016-12-01
This research presents the results of an investigation of the diagnostic efficiency of a new azimuthally stable Mueller-matrix method for analyzing laser autofluorescence coordinate distributions of dried polycrystalline films of uterine cavity peritoneal fluid. A new model of generalized optical anisotropy of the protein networks of biological tissues is proposed in order to define the processes of laser autofluorescence. The influence of complex mechanisms of both phase anisotropy (linear birefringence and optical activity) and linear (circular) dichroism is taken into account. The interconnections between the azimuthally stable Mueller-matrix elements characterizing laser autofluorescence and the different mechanisms of optical anisotropy are determined. A statistical analysis of the coordinate distributions of such Mueller-matrix rotation invariants is proposed. On this basis, the quantitative criteria (statistical moments of the 1st to the 4th order) for differentiating dried polycrystalline films of peritoneal fluid - group 1 (healthy donors) and group 2 (uterus endometriosis patients) - are estimated.
Guenter Tulip Filter Retrieval Experience: Predictors of Successful Retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turba, Ulku Cenk, E-mail: uct5d@virginia.edu; Arslan, Bulent, E-mail: ba6e@virginia.edu; Meuse, Michael, E-mail: mm5tz@virginia.edu
We report our experience with Guenter Tulip filter placement indications, retrievals, and procedural problems, with emphasis on alternative retrieval techniques. We have identified 92 consecutive patients in whom a Guenter Tulip filter was placed and filter removal attempted. We recorded patient demographic information, filter placement and retrieval indications, procedures, standard and nonstandard filter retrieval techniques, complications, and clinical outcomes. The mean time to retrieval for those who experienced filter strut penetration was statistically significant [F(1,90) = 8.55, p = 0.004]. Filter strut(s) IVC penetration and successful retrieval were found to be statistically significant (p = 0.043). The filter hook-IVC relationship correlated with successful retrieval. A modified guidewire loop technique was applied in 8 of 10 cases where the hook appeared to penetrate the IVC wall and could not be engaged with a loop snare catheter, providing additional technical success in 6 of 8 (75%). Therefore, the total filter retrieval success increased from 88 to 95%. In conclusion, the Guenter Tulip filter has high successful retrieval rates with low rates of complication. Additional maneuvers such as a guidewire loop method can be used to improve retrieval success rates when the filter hook is endothelialized.
Band-pass filters based on photonic crystal
NASA Astrophysics Data System (ADS)
Khodenkov, S. A.; Yushkov, I. A.
2017-11-01
Multilayer photonic crystal structures with bleaching layers are being investigated. In order to calculate the characteristics of ultra-wideband filters on their basis, a lossless transmission-line (T-line) model was used. Amplitude-frequency characteristics for the synthesized filters of 5th, 11th and 17th orders are given. It is shown that as the filter order N increases significantly, the difference between the coupling coefficients of the central resonators' layers becomes negligible. This makes it possible to develop a 27th-order filter in which almost half of the layers are realized by periodic interchange of only two identical high-contrast materials. The investigated band-pass filters, including the ones on a glass substrate, have high frequency-selective properties at a relative bandwidth of 80%.
NASA Astrophysics Data System (ADS)
Fathy, Ibrahim
2016-07-01
This paper presents a statistical study of different types of large-scale geomagnetic pulsations (Pc3, Pc4, Pc5 and Pi2) detected simultaneously by two MAGDAS stations located at Fayum (Geo. Coordinates 29.18 N and 30.50 E) and Aswan (Geo. Coordinates 23.59 N and 32.51 E) in Egypt. A second-order Butterworth band-pass filter has been used to filter and analyze the horizontal H-component of the geomagnetic field in one-second data. The data were collected during the solar minimum of the current solar cycle 24. We list the most energetic pulsations detected simultaneously by the two stations; in addition, the average amplitude of the pulsation signals was calculated.
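A minimal sketch of the band-pass step described above, assuming SciPy and 1-s sampled data: a second-order Butterworth band-pass is applied over an illustrative Pc3 period band (10-45 s); the synthetic record and band edges are assumptions, not the study's actual settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pulsation_bandpass(h_component, fs=1.0, band_s=(10.0, 45.0), order=2):
    """Second-order Butterworth band-pass applied to the 1-s H-component.
    band_s gives the pulsation period band in seconds (10-45 s ~ Pc3)."""
    low_hz, high_hz = 1.0 / band_s[1], 1.0 / band_s[0]
    b, a = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs)
    return filtfilt(b, a, h_component)

# Toy 1-hour record: a 25-s period pulsation on top of a slow trend and noise.
fs = 1.0
t = np.arange(0, 3600, 1 / fs)
rng = np.random.default_rng(6)
h = 50.0 + 0.002 * t + 2.0 * np.sin(2 * np.pi * t / 25.0) + rng.standard_normal(t.size)
pc3 = pulsation_bandpass(h, fs)
print("filtered RMS amplitude: %.2f (arbitrary units)" % np.sqrt(np.mean(pc3 ** 2)))
```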
A high-order spatial filter for a cubed-sphere spectral element model
NASA Astrophysics Data System (ADS)
Kang, Hyun-Gyu; Cheong, Hyeong-Bin
2017-04-01
A high-order spatial filter is developed for the spectral-element-method dynamical core on the cubed-sphere grid which employs the Gauss-Lobatto Lagrange interpolating polynomials (GLLIP) as orthogonal basis functions. The filter equation is the high-order Helmholtz equation which corresponds to the implicit time-differencing of a diffusion equation employing the high-order Laplacian. The Laplacian operator is discretized within a cell which is a building block of the cubed sphere grid and consists of the Gauss-Lobatto grid. When discretizing a high-order Laplacian, due to the requirement of C0 continuity along the cell boundaries the grid-points in neighboring cells should be used for the target cell: the number of neighboring cells is nearly quadratically proportional to the filter order. The discrete Helmholtz equation yields a huge and highly sparse matrix equation whose size is N*N, with N the number of total grid points on the globe. The number of nonzero entries is also almost in quadratic proportion to the filter order. Filtering is accomplished by solving the huge matrix equation. While requiring a significant computing time, the solution of the global matrix provides the filtered field free of discontinuity along the cell boundaries. To achieve computational efficiency and accuracy at the same time, the solution of the matrix equation was obtained by accounting for only a finite number of adjacent cells. This is called a local-domain filter. It was shown that to remove the numerical noise near the grid-scale, inclusion of 5*5 cells for the local-domain filter was found sufficient, giving the same accuracy as that obtained by the global-domain solution while reducing the computing time to a considerably lower level. The high-order filter was evaluated using the standard test cases including the baroclinic instability of the zonal flow. Results indicated that the filter performs better on the removal of grid-scale numerical noises than the explicit high-order viscosity. It was also presented that the filter can be easily implemented on distributed-memory parallel computers with a desirable scalability.
Construction of Low Dissipative High Order Well-Balanced Filter Schemes for Non-Equilibrium Flows
NASA Technical Reports Server (NTRS)
Wang, Wei; Yee, H. C.; Sjogreen, Bjorn; Magin, Thierry; Shu, Chi-Wang
2009-01-01
The goal of this paper is to generalize the well-balanced approach for non-equilibrium flow studied by Wang et al. [26] to a class of low dissipative high order shock-capturing filter schemes and to explore more advantages of well-balanced schemes in reacting flows. The class of filter schemes developed by Yee et al. [30], Sjoegreen & Yee [24] and Yee & Sjoegreen [35] consists of two steps, a full time step of a spatially high order non-dissipative base scheme and an adaptive nonlinear filter containing shock-capturing dissipation. A good property of the filter scheme is that the base scheme and the filter are stand-alone modules in the design. Therefore, the idea of designing a well-balanced filter scheme is straightforward, i.e., choosing a well-balanced base scheme with a well-balanced filter (both with high order). A typical class of these schemes shown in this paper is the high order central difference or predictor-corrector (PC) schemes with a high order well-balanced WENO filter. The new filter scheme with the well-balanced property combines the features of filter methods and well-balanced schemes: it can preserve certain steady state solutions exactly; it is able to capture small perturbations, e.g., turbulence fluctuations; and it adaptively controls numerical dissipation. Thus it shows high accuracy, efficiency and stability in shock/turbulence interactions. Numerical examples containing 1D and 2D smooth problems, a 1D stationary contact discontinuity problem and 1D turbulence/shock interactions are included to verify the improved accuracy, in addition to the well-balanced behavior.
Ultrasound image filtering using the multiplicative model
NASA Astrophysics Data System (ADS)
Navarrete, Hugo; Frery, Alejandro C.; Sanchez, Fermin; Anto, Joan
2002-04-01
Ultrasound images, as a special case of coherent images, are normally corrupted with multiplicative noise i.e. speckle noise. Speckle noise reduction is a difficult task due to its multiplicative nature, but good statistical models of speckle formation are useful to design adaptive speckle reduction filters. In this article a new statistical model, emerging from the Multiplicative Model framework, is presented and compared to previous models (Rayleigh, Rice and K laws). It is shown that the proposed model gives the best performance when modeling the statistics of ultrasound images. Finally, the parameters of the model can be used to quantify the extent of speckle formation; this quantification is applied to adaptive speckle reduction filter design. The effectiveness of the filter is demonstrated on typical in-vivo log-compressed B-scan images obtained by a clinical ultrasound system.
Statistical Significance of Optical Map Alignments
Sarkar, Deepayan; Goldstein, Steve; Schwartz, David C.
2012-01-01
The Optical Mapping System constructs ordered restriction maps spanning entire genomes through the assembly and analysis of large datasets comprising individually analyzed genomic DNA molecules. Such restriction maps uniquely reveal mammalian genome structure and variation, but also raise computational and statistical questions beyond those that have been solved in the analysis of smaller, microbial genomes. We address the problem of how to filter maps that align poorly to a reference genome. We obtain map-specific thresholds that control errors and improve iterative assembly. We also show how an optimal self-alignment score provides an accurate approximation to the probability of alignment, which is useful in applications seeking to identify structural genomic abnormalities. PMID:22506568
Adaptive Offset Correction for Intracortical Brain Computer Interfaces
Homer, Mark L.; Perge, János A.; Black, Michael J.; Harrison, Matthew T.; Cash, Sydney S.; Hochberg, Leigh R.
2014-01-01
Intracortical brain computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user’s ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called MOCA, was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors (10.6 ±10.1%; p<0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs. PMID:24196868
Adaptive offset correction for intracortical brain-computer interfaces.
Homer, Mark L; Perge, Janos A; Black, Michael J; Harrison, Matthew T; Cash, Sydney S; Hochberg, Leigh R
2014-03-01
Intracortical brain-computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user's ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called multiple offset correction algorithm (MOCA), was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors (10.6 ± 10.1%; p < 0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs.
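The sketch below illustrates only the general idea of adapting a decoder's offset term online: a 1-D constant-velocity Kalman filter whose measurement offset is tracked from an exponentially weighted innovation mean. It is not the MOCA penalized-maximum-likelihood algorithm, and all model parameters and data are assumptions.

```python
import numpy as np

class OffsetAdaptiveKalman:
    """1-D constant-velocity Kalman filter with an online-adapted measurement
    offset; a simplified illustration of offset correction, not MOCA itself."""

    def __init__(self, dt=0.05, q=1e-3, r=0.25, offset_lr=0.02):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
        self.H = np.array([[1.0, 0.0]])              # observe position only
        self.Q = q * np.eye(2)                       # process noise covariance
        self.R = r                                   # measurement noise variance
        self.x = np.zeros((2, 1))
        self.P = np.eye(2)
        self.offset = 0.0                            # adaptive offset estimate
        self.offset_lr = offset_lr                   # offset learning rate

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Innovation, after removing the current offset estimate from the measurement.
        y = (z - self.offset) - (self.H @ self.x).item()
        S = (self.H @ self.P @ self.H.T).item() + self.R
        K = self.P @ self.H.T / S
        self.x = self.x + K * y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        # Nudge the offset toward whatever bias remains in the innovations.
        self.offset += self.offset_lr * y
        return self.x[0, 0]

# Toy run: the measurement offset jumps halfway through the recording.
rng = np.random.default_rng(7)
truth = np.sin(np.linspace(0, 6 * np.pi, 400))
offsets = np.where(np.arange(400) < 200, 0.0, 1.5)
meas = truth + offsets + 0.3 * rng.standard_normal(400)
kf = OffsetAdaptiveKalman()
est = np.array([kf.step(z) for z in meas])
print("RMSE after the offset shift: %.3f" % np.sqrt(np.mean((est[250:] - truth[250:]) ** 2)))
```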
Testing the Stability of 2-D Recursive QP, NSHP and General Digital Filters of Second Order
NASA Astrophysics Data System (ADS)
Rathinam, Ananthanarayanan; Ramesh, Rengaswamy; Reddy, P. Subbarami; Ramaswami, Ramaswamy
Several methods for testing the stability of first-quadrant quarter-plane two dimensional (2-D) recursive digital filters were suggested in the 1970's and 80's. Though Jury's row and column algorithms and the row and column concatenation stability tests have been considered highly efficient mapping methods, they still fall short of accuracy, as they need an infinite number of steps to conclude on the exact stability of the filters, and the computational time required is enormous. In this paper, we present a procedurally very simple algebraic method requiring only two steps when applied to the second order 2-D quarter-plane filter. We extend the same method to the second order Non-Symmetric Half-plane (NSHP) filters. Enough examples are given for both these types of filters as well as some lower order general recursive 2-D digital filters. We applied our method to barely stable or barely unstable filter examples available in the literature and got the same decisions, thus showing that our method is accurate enough.
Iterative Self-Dual Reconstruction on Radar Image Recovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martins, Charles; Medeiros, Fatima; Ushizima, Daniela
2010-05-21
Imaging systems such as ultrasound, sonar, laser and synthetic aperture radar (SAR) are subject to speckle noise during image acquisition. Before analyzing these images, it is often necessary to remove the speckle noise using filters. We combine properties of two mathematical morphology filters with speckle statistics to propose a signal-dependent noise filter for multiplicative noise. We describe a multiscale scheme that preserves sharp edges while it smooths homogeneous areas, by combining local statistics with two mathematical morphology filters: the alternating sequential and the self-dual reconstruction algorithms. The experimental results show that the proposed approach is less sensitive to varying window sizes when applied to simulated and real SAR images in comparison with standard filters.
Desensitized Optimal Filtering and Sensor Fusion Toolkit
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.
2015-01-01
Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions as well as a Monte Carlo analysis capability are included to enable statistical performance evaluations.
Optical filter having coupled whispering-gallery-mode resonators
NASA Technical Reports Server (NTRS)
Savchenkov, Anatoliy (Inventor); Ilchenko, Vladimir (Inventor); Maleki, Lutfollah (Inventor); Handley, Timothy A. (Inventor)
2006-01-01
Optical filters having at least two coupled whispering-gallery-mode (WGM) optical resonators to produce a second order or higher order filter function with a desired spectral profile. At least one of the coupled WGM optical resonators may be tunable by a control signal to adjust the filtering function.
Czaplewski, Raymond L.
2015-01-01
Wall-to-wall remotely sensed data are increasingly available to monitor landscape dynamics over large geographic areas. However, statistical monitoring programs that use post-stratification cannot fully utilize those sensor data. The Kalman filter (KF) is an alternative statistical estimator. I develop a new KF algorithm that is numerically robust with large numbers of study variables and auxiliary sensor variables. A National Forest Inventory (NFI) illustrates application within an official statistics program. Practical recommendations regarding remote sensing and statistical issues are offered. This algorithm has the potential to increase the value of synoptic sensor data for statistical monitoring of large geographic areas. PMID:26393588
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-22
... Change Adding a New Rule To Adopt Price Protection Filters for Electronic Complex Orders October 11, 2013... a new rule to adopt price protection filters for Electronic Complex Orders. The text of the proposed... Commentary .04 [sic] governing price protections filters applicable to electronically entered Complex Orders...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-22
... Adding a New Rule To Adopt Price Protection Filters for Electronic Complex Orders October 11, 2013... to adopt price protection filters for Electronic Complex Orders. The text of the proposed rule change... Commentary .05 governing price protections filters applicable to electronically entered Complex Orders.\\4\\ \\4...
Signal Processing Equipment and Techniques for Use in Measuring Ocean Acoustic Multipath Structures
1983-12-01
[Excerpts from the report's table of contents and list of figures: digital demodulator design, including the number of bits in the input A/D converter and quantization effects; the demodulator output filter; the power loss caused by ignoring the cross-spectral term for first- and second-order Butterworth filters; the multiplying D/A converter input and output spectra; and the demodulator output spectrum prior to filtering.]
Linear variable narrow bandpass optical filters in the far infrared (Conference Presentation)
NASA Astrophysics Data System (ADS)
Rahmlow, Thomas D.
2017-06-01
We are currently developing linear variable filters (LVF) with very high wavelength gradients. In the visible, these filters have a wavelength gradient of 50 to 100 nm/mm. In the infrared, the wavelength gradient covers the range of 500 to 900 microns/mm. Filter designs include band pass, long pass and ultra-high performance anti-reflection coatings. The active area of the filters is on the order of 5 to 30 mm along the wavelength gradient and up to 30 mm in the orthogonal, constant wavelength direction. Variation in performance along the constant direction is less than 1%. Repeatable performance from filter to filter, absolute placement of the filter relative to a substrate fiducial and high in-band transmission across the full spectral band are demonstrated. Applications include order sorting filters, direct replacement of the spectrometer and hyper-spectral imaging. Off-band rejection with an optical density of greater than 3 allows use of the filter as an order sorting filter. The linear variable order sorting filter replaces other filter types such as block filters. The disadvantage of block filters is the loss of pixels due to the transition between filter blocks. The LVF is a continuous gradient without a discrete transition between filter wavelength regions. If the LVF is designed as a narrow band pass filter, it can be used in place of a spectrometer, thus reducing overall sensor weight and cost while improving the robustness of the sensor. By controlling the orthogonal performance (smile), the LVF can be sized to the dimensions of the detector. When imaging on to a 2 dimensional array and operating the sensor in a push broom configuration, the LVF spectrometer performs as a hyper-spectral imager. This paper presents the performance of LVFs fabricated in the far infrared on substrates sized to available detectors. The impact of spot size, F-number and filter characterization are presented. Results are also compared to extended visible LVF filters.
NASA Astrophysics Data System (ADS)
Theodorsen, A.; E Garcia, O.; Rypdal, M.
2017-05-01
Filtered Poisson processes are often used as reference models for intermittent fluctuations in physical systems. Such a process is here extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The lowest order moments, probability density function, auto-correlation function and power spectral density are derived and used to identify and compare the effects of the two different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of model parameter estimation and to identify methods for distinguishing the noise types. It is shown that the probability density function and the three lowest order moments provide accurate estimations of the model parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of identifying the noise type. The number of times the signal crosses a prescribed threshold level in the positive direction also promises to be able to differentiate the noise type.
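A small simulation sketch of the reference model discussed above: a filtered Poisson process built from exponentially distributed amplitudes and one-sided exponential pulses, with an optional additive Gaussian noise term. Rates, pulse shape, and noise level are illustrative assumptions, and the printed sample moments can be compared against the analytic expressions derived in such studies.

```python
import numpy as np

def filtered_poisson(duration, rate, tau, amp_mean, dt=0.01, noise_sd=0.0, seed=8):
    """Simulate a filtered Poisson process: pulses with exponentially distributed
    amplitudes arrive at Poisson times and are shaped by a one-sided exponential
    pulse of duration tau; optional additive Gaussian noise is then superposed."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, dt)
    n_events = rng.poisson(rate * duration)
    arrivals = rng.uniform(0.0, duration, size=n_events)
    amps = rng.exponential(amp_mean, size=n_events)
    signal = np.zeros_like(t)
    for t0, a in zip(arrivals, amps):
        mask = t >= t0
        signal[mask] += a * np.exp(-(t[mask] - t0) / tau)
    return t, signal + noise_sd * rng.standard_normal(t.size)

t, x = filtered_poisson(duration=200.0, rate=0.5, tau=1.0, amp_mean=1.0, noise_sd=0.1)
# The lowest-order sample moments can be compared against analytic predictions.
print("mean: %.3f  variance: %.3f  third central moment: %.3f"
      % (x.mean(), x.var(), np.mean((x - x.mean()) ** 3)))
```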
Edge detection - Image-plane versus digital processing
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.; Park, Stephen K.; Triplett, Judith A.
1987-01-01
To optimize edge detection with the familiar Laplacian-of-Gaussian operator, it has become common to implement this operator with a large digital convolution mask followed by some interpolation of the processed data to determine the zero crossings that locate edges. It is generally recognized that this large mask causes substantial blurring of fine detail. It is shown that the spatial detail can be improved by a factor of about four with either the Wiener-Laplacian-of-Gaussian filter or an image-plane processor. The Wiener-Laplacian-of-Gaussian filter minimizes the image-gathering degradations if the scene statistics are at least approximately known and also serves as an interpolator to determine the desired zero crossings directly. The image-plane processor forms the Laplacian-of-Gaussian response by properly combining the optical design of the image-gathering system with a minimal three-by-three lateral-inhibitory processing mask. This approach, which is suggested by Marr's model of early processing in human vision, also reduces data processing by about two orders of magnitude and data transmission by up to an order of magnitude.
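For reference, a minimal digital implementation of the Laplacian-of-Gaussian edge detector discussed above, using SciPy's gaussian_laplace and a simple sign-change test for zero crossings; the sigma value and toy image are assumptions, and no Wiener or image-plane variant is attempted here.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_edges(img, sigma=2.0):
    """Laplacian-of-Gaussian edge detection: filter the image, then mark pixels
    where the response changes sign against a horizontal or vertical neighbour."""
    log = gaussian_laplace(img.astype(float), sigma=sigma)
    zc = np.zeros(img.shape, dtype=bool)
    zc[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    zc[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    return zc

# Toy image: a bright square on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
edges = log_edges(img)
print("edge pixels found:", int(edges.sum()))
```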
Lisle, John T.; Hamilton, Martin A.; Willse, Alan R.; McFeters, Gordon A.
2004-01-01
Total direct counts of bacterial abundance are central in assessing the biomass and bacteriological quality of water in ecological and industrial applications. Several factors have been identified that contribute to the variability in bacterial abundance counts when using fluorescent microscopy, the most significant of which is retaining an adequate number of cells per filter to ensure an acceptable level of statistical confidence in the resulting data. Previous studies that have assessed the components of total-direct-count methods that contribute to this variance have attempted to maintain a bacterial cell abundance value per filter of approximately 10^6 cells filter^-1. In this study we have established the lower limit for the number of bacterial cells per filter at which the statistical reliability of the abundance estimate is no longer acceptable. Our results indicate that when the numbers of bacterial cells per filter were progressively reduced below 10^5, the microscopic methods increasingly overestimated the true bacterial abundance (range, 15.0 to 99.3%). The solid-phase cytometer only slightly overestimated the true bacterial abundances and was more consistent over the same range of bacterial abundances per filter (range, 8.9 to 12.5%). The solid-phase cytometer method for conducting total direct counts of bacteria was less biased and performed significantly better than any of the microscope methods. It was also found that microscopic count data from counting 5 fields on three separate filters were statistically equivalent to data from counting 20 fields on a single filter.
A Nonlinear Interactions Approximation Model for Large-Eddy Simulation
NASA Astrophysics Data System (ADS)
Haliloglu, Mehmet U.; Akhavan, Rayhaneh
2003-11-01
A new approach to LES modelling is proposed based on direct approximation of the nonlinear terms $\overline{u_i u_j}$ in the filtered Navier-Stokes equations, instead of the subgrid-scale stress $\tau_{ij}$. The proposed model, which we call the Nonlinear Interactions Approximation (NIA) model, uses graded filters and deconvolution to parameterize the local interactions across the LES cutoff, and a Smagorinsky eddy viscosity term to parameterize the distant interactions. A dynamic procedure is used to determine the unknown eddy viscosity coefficient, rendering the model free of adjustable parameters. The proposed NIA model has been applied to LES of turbulent channel flows at Re_τ ≈ 210 and Re_τ ≈ 570. The results show good agreement with DNS not only for the mean and resolved second-order turbulence statistics but also for the full (resolved plus subgrid) Reynolds stress and turbulence intensities.
Qian, Fuping; Wang, Haigang
2010-04-15
The gas-solid two-phase flows in the plain weave fabric filter were simulated using computational fluid dynamics (CFD) technology, and the warps and wefts of the fabric filter were made of filaments with different dimensions. The numerical solutions were carried out using the commercial CFD code Fluent 6.1. The filtration performances of the plain weave fabric filter with different geometry parameters and operating conditions, including the horizontal distance, the vertical distance and the face velocity, were calculated. The effects of geometry parameters and operating conditions on filtration efficiency and pressure drop were studied using response surface methodology (RSM) by means of the statistical software (Minitab V14), and two second-order polynomial models were obtained with regard to the effect of the three factors stated above. Moreover, the models were modified by dismissing the insignificant terms. The results show that the horizontal distance, vertical distance and the face velocity all play an important role in influencing the filtration efficiency and pressure drop of the plain weave fabric filters. The horizontal distance of 3.8 times the fiber diameter, the vertical distance of 4.0 times the fiber diameter and a Reynolds number of 0.98 are found to be the optimal conditions to achieve the highest filtration efficiency at the same face velocity, while maintaining an acceptable pressure drop. 2009 Elsevier B.V. All rights reserved.
An algebraic method for constructing stable and consistent autoregressive filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu
2015-02-15
In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods; it takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
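The classical stability condition referred to above requires all roots of the AR characteristic polynomial to lie inside the unit circle. A minimal numerical check of that condition (only an illustration, not the authors' algebraic construction) might look like the following.

    import numpy as np

    def ar_is_stable(coeffs):
        # coeffs = (a1, ..., ap) for x_t = a1*x_{t-1} + ... + ap*x_{t-p} + noise.
        # Stable iff all roots of z^p - a1*z^{p-1} - ... - ap lie inside the unit circle.
        char_poly = np.concatenate(([1.0], -np.asarray(coeffs, dtype=float)))
        roots = np.roots(char_poly)
        return bool(np.all(np.abs(roots) < 1.0))

    print(ar_is_stable([0.5, 0.3]))    # True: both roots inside the unit circle
    print(ar_is_stable([1.2, -0.1]))   # False: one root lies outside the unit circle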
Effects of eye artifact removal methods on single trial P300 detection, a comparative study.
Ghaderi, Foad; Kim, Su Kyoung; Kirchner, Elsa Andrea
2014-01-15
Electroencephalographic signals are commonly contaminated by eye artifacts, even if recorded under controlled conditions. The objective of this work was to quantitatively compare standard artifact removal methods (regression, filtered regression, Infomax, and second order blind identification (SOBI)) and two artifact identification approaches for independent component analysis (ICA) methods, i.e. ADJUST and correlation. To this end, eye artifacts were removed and the cleaned datasets were used for single trial classification of P300 (a type of event related potential elicited using the oddball paradigm). Statistical analysis of the results confirms that the combination of Infomax and ADJUST provides a relatively better performance (0.6% improvement on average over all subjects), while the combination of SOBI and correlation performs the worst. Low-pass filtering the data at lower cutoffs (here 4 Hz) can also improve the classification accuracy. Without requiring any artifact reference channel, the combination of Infomax and ADJUST improves the classification performance more than the other methods for both examined filtering cutoffs, i.e., 4 Hz and 25 Hz.
Application of Multi-Hypothesis Sequential Monte Carlo for Breakup Analysis
NASA Astrophysics Data System (ADS)
Faber, W. R.; Zaidi, W.; Hussein, I. I.; Roscoe, C. W. T.; Wilkins, M. P.; Schumacher, P. W., Jr.
As more objects are launched into space, the potential for breakup events and space object collisions is ever increasing. These events create large clouds of debris that are extremely hazardous to space operations. Providing timely, accurate, and statistically meaningful Space Situational Awareness (SSA) data is crucial in order to protect assets and operations in space. The space object tracking problem, in general, is nonlinear in both state dynamics and observations, making it ill-suited to linear filtering techniques such as the Kalman filter. Additionally, given the multi-object, multi-scenario nature of the problem, space situational awareness requires multi-hypothesis tracking and management that is combinatorially challenging in nature. In practice, it is often seen that assumptions of underlying linearity and/or Gaussianity are used to provide tractable solutions to the multiple space object tracking problem. However, these assumptions are, at times, detrimental to tracking data and provide statistically inconsistent solutions. This paper details a tractable solution to the multiple space object tracking problem applicable to space object breakup events. Within this solution, simplifying assumptions of the underlying probability density function are relaxed and heuristic methods for hypothesis management are avoided. This is done by implementing Sequential Monte Carlo (SMC) methods for both nonlinear filtering as well as hypothesis management. The goal of this paper is to detail the solution and use it as a platform to discuss computational limitations that hinder proper analysis of large breakup events.
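For readers unfamiliar with SMC, the sketch below shows a minimal bootstrap particle filter for a generic one-dimensional state-space model. It only illustrates the predict-weight-resample cycle and is far simpler than the multi-hypothesis machinery described above; the toy dynamics and noise levels are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def bootstrap_particle_filter(observations, n_particles=500,
                                  process_sd=0.5, obs_sd=1.0):
        # Minimal SMC for the toy model x_t = 0.9*x_{t-1} + v_t, y_t = x_t + w_t
        particles = rng.normal(0.0, 1.0, n_particles)
        estimates = []
        for y in observations:
            particles = 0.9 * particles + rng.normal(0.0, process_sd, n_particles)  # predict
            log_w = -0.5 * ((y - particles) / obs_sd) ** 2                           # Gaussian likelihood
            w = np.exp(log_w - log_w.max())
            w /= w.sum()
            estimates.append(np.sum(w * particles))                                  # weighted posterior mean
            particles = particles[rng.choice(n_particles, n_particles, p=w)]         # resample
        return np.array(estimates)

    # Synthetic observations generated from the same toy model
    x, ys = 0.0, []
    for _ in range(50):
        x = 0.9 * x + rng.normal(0.0, 0.5)
        ys.append(x + rng.normal(0.0, 1.0))
    print(bootstrap_particle_filter(ys)[:5])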
Reduced-Drift Virtual Gyro from an Array of Low-Cost Gyros.
Vaccaro, Richard J; Zaki, Ahmed S
2017-02-11
A Kalman filter approach for combining the outputs of an array of high-drift gyros to obtain a virtual lower-drift gyro has been known in the literature for more than a decade. The success of this approach depends on the correlations of the random drift components of the individual gyros. However, no method of estimating these correlations has appeared in the literature. This paper presents an algorithm for obtaining the statistical model for an array of gyros, including the cross-correlations of the individual random drift components. In order to obtain this model, a new statistic, called the "Allan covariance" between two gyros, is introduced. The gyro array model can be used to obtain the Kalman filter-based (KFB) virtual gyro. Instead, we consider a virtual gyro obtained by taking a linear combination of individual gyro outputs. The gyro array model is used to calculate the optimal coefficients, as well as to derive a formula for the drift of the resulting virtual gyro. The drift formula for the optimal linear combination (OLC) virtual gyro is identical to that previously derived for the KFB virtual gyro. Thus, a Kalman filter is not necessary to obtain a minimum drift virtual gyro. The theoretical results of this paper are demonstrated using simulated as well as experimental data. In experimental results with a 28-gyro array, the OLC virtual gyro has a drift spectral density 40 times smaller than that obtained by taking the average of the gyro signals.
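The optimal linear combination described above is, in essence, the classical minimum-variance weighting. Given an estimated covariance matrix of the individual gyro drift noise, the weights can be computed as sketched below; this is only an illustration of the general formula, using a made-up covariance matrix rather than the Allan-covariance model developed in the paper.

    import numpy as np

    def min_variance_weights(cov):
        # Weights w minimizing w' C w subject to sum(w) = 1:
        #   w = C^{-1} 1 / (1' C^{-1} 1)
        ones = np.ones(cov.shape[0])
        ci1 = np.linalg.solve(cov, ones)
        return ci1 / (ones @ ci1)

    # Illustrative 4-gyro drift covariance (arbitrary units), not from the paper
    C = np.array([[1.0, 0.2, 0.1, 0.0],
                  [0.2, 1.5, 0.3, 0.1],
                  [0.1, 0.3, 0.8, 0.2],
                  [0.0, 0.1, 0.2, 1.2]])
    w = min_variance_weights(C)
    print(w, w.sum())        # weights sum to 1
    print(w @ C @ w)         # drift variance of the resulting virtual gyro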
Statistical coding and decoding of heartbeat intervals.
Lucena, Fausto; Barros, Allan Kardec; Príncipe, José C; Ohnishi, Noboru
2011-01-01
The heart integrates neuroregulatory messages into specific bands of frequency, such that the overall amplitude spectrum of the cardiac output reflects the variations of the autonomic nervous system. This modulatory mechanism seems to be well adjusted to the unpredictability of the cardiac demand, maintaining a proper cardiac regulation. A longstanding theory holds that biological organisms facing an ever-changing environment are likely to evolve adaptive mechanisms to extract essential features in order to adjust their behavior. The key question, however, has been to understand how the neural circuitry self-organizes these feature detectors to select behaviorally relevant information. Previous studies in computational perception suggest that a neural population enhances information that is important for survival by minimizing the statistical redundancy of the stimuli. Herein we investigate whether the cardiac system makes use of a redundancy reduction strategy to regulate the cardiac rhythm. Based on a network of neural filters optimized to code heartbeat intervals, we learn a population code that maximizes the information across the neural ensemble. The emerging population code displays filter tuning properties whose characteristics explain diverse aspects of the autonomic cardiac regulation, such as the compromise between fast and slow cardiac responses. We show that the filters yield responses that are quantitatively similar to observed heart rate responses during direct sympathetic or parasympathetic nerve stimulation. Our findings suggest that the heart decodes autonomic stimuli according to information theory principles analogous to how perceptual cues are encoded by sensory systems.
Methodology for processing pressure traces used as inputs for combustion analyses in diesel engines
NASA Astrophysics Data System (ADS)
Rašić, Davor; Vihar, Rok; Žvar Baškovič, Urban; Katrašnik, Tomaž
2017-05-01
This study proposes a novel methodology for designing an optimum equiripple finite impulse response (FIR) filter for processing in-cylinder pressure traces of a diesel internal combustion engine, which serve as inputs for high-precision combustion analyses. The proposed automated workflow is based on an innovative approach of determining the transition band frequencies and optimum filter order. The methodology is based on discrete Fourier transform analysis, which is the first step to estimate the location of the pass-band and stop-band frequencies. The second step uses short-time Fourier transform analysis to refine the estimated aforementioned frequencies. These pass-band and stop-band frequencies are further used to determine the most appropriate FIR filter order. The most widely used existing methods for estimating the FIR filter order are not effective in suppressing the oscillations in the rate-of-heat-release (ROHR) trace, thus hindering the accuracy of combustion analyses. To address this problem, an innovative method for determining the order of an FIR filter is proposed in this study. This method is based on the minimization of the integral of normalized signal-to-noise differences between the stop-band frequency and the Nyquist frequency. Developed filters were validated using spectral analysis and calculation of the ROHR. The validation results showed that the filters designed using the proposed innovative method were superior compared with those using the existing methods for all analyzed cases. Highlights: • Pressure traces of a diesel engine were processed by finite impulse response (FIR) filters with different orders. • Transition band frequencies were determined with an innovative method based on discrete Fourier transform and short-time Fourier transform. • Spectral analyses showed deficiencies of existing methods in determining the FIR filter order. • A new method of determining the FIR filter order for processing pressure traces was proposed. • The efficiency of the new method was demonstrated by spectral analyses and calculations of rate-of-heat-release traces.
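For orientation, the sketch below designs an equiripple low-pass FIR filter with SciPy's Parks-McClellan routine and applies it with zero-phase filtering to a synthetic pressure trace. The sampling frequency, transition band and number of taps are placeholder values, not the quantities produced by the methodology described above.

    import numpy as np
    from scipy import signal

    fs = 100_000.0                       # sampling frequency in Hz (assumed)
    f_pass, f_stop = 3_000.0, 5_000.0    # placeholder pass-band / stop-band edges
    numtaps = 101                        # placeholder filter order + 1

    # Parks-McClellan equiripple design: unity gain in the pass band, zero in the stop band
    taps = signal.remez(numtaps, [0, f_pass, f_stop, 0.5 * fs], [1, 0], fs=fs)

    # Zero-phase filtering of a synthetic in-cylinder pressure trace
    t = np.arange(0, 0.05, 1 / fs)
    pressure = np.sin(2 * np.pi * 500 * t) + 0.1 * np.random.randn(t.size)
    filtered = signal.filtfilt(taps, [1.0], pressure)
    print(filtered[:5])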
NASA Astrophysics Data System (ADS)
Piretzidis, Dimitrios; Sideris, Michael G.
2017-09-01
Filtering and signal processing techniques have been widely used in the processing of satellite gravity observations to reduce measurement noise and correlation errors. The parameters and types of filters used depend on the statistical and spectral properties of the signal under investigation. Filtering is usually applied in a non-real-time environment. The present work focuses on the implementation of an adaptive filtering technique to process satellite gravity gradiometry data for gravity field modeling. Adaptive filtering algorithms are commonly used in communication systems, noise and echo cancellation, and biomedical applications. Two independent studies have been performed to introduce adaptive signal processing techniques and test the performance of the least mean-squared (LMS) adaptive algorithm for filtering satellite measurements obtained by the gravity field and steady-state ocean circulation explorer (GOCE) mission. In the first study, a Monte Carlo simulation is performed in order to gain insights about the implementation of the LMS algorithm on data with spectral behavior close to that of real GOCE data. In the second study, the LMS algorithm is implemented on real GOCE data. Experiments are also performed to determine suitable filtering parameters. Only the four accurate components of the full GOCE gravity gradient tensor of the disturbing potential are used. The characteristics of the filtered gravity gradients are examined in the time and spectral domain. The obtained filtered GOCE gravity gradients show an agreement of 63-84 mEötvös (depending on the gravity gradient component), in terms of RMS error, when compared to the gravity gradients derived from the EGM2008 geopotential model. Spectral-domain analysis of the filtered gradients shows that the adaptive filters slightly suppress frequencies in the bandwidth of approximately 10-30 mHz. The limitations of the adaptive LMS algorithm are also discussed. The tested filtering algorithm can be connected to and employed in the first computational steps of the space-wise approach, where a time-wise Wiener filter is applied at the first stage of GOCE gravity gradient filtering. The results of this work can be extended to using other adaptive filtering algorithms, such as the recursive least-squares and recursive least-squares lattice filters.
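The LMS update used in the study is the standard stochastic-gradient rule. A minimal, generic implementation is sketched below; it is not tied to GOCE data, and the filter order, step size and synthetic signals are illustrative assumptions.

    import numpy as np

    def lms_filter(x, d, order=8, mu=0.01):
        # Least-mean-squares adaptive filter.
        # x: reference input, d: desired signal; returns (filter output, error signal).
        w = np.zeros(order)
        y = np.zeros(len(x))
        e = np.zeros(len(x))
        for n in range(order, len(x)):
            u = x[n-order:n][::-1]        # most recent samples first
            y[n] = w @ u
            e[n] = d[n] - y[n]
            w = w + 2 * mu * e[n] * u     # stochastic-gradient weight update
        return y, e

    rng = np.random.default_rng(0)
    ref = rng.normal(0, 1, 5000)
    d = np.convolve(ref, [0.6, 0.3, 0.1], mode="same") + 0.05 * rng.normal(0, 1, 5000)
    y, e = lms_filter(ref, d)
    print(np.mean(e[-500:] ** 2))   # residual error after convergence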
Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin
2011-03-01
Noise characterization through estimation of the noise power spectrum (NPS) is a central component of the evaluation of digital x-ray systems. Extensive work has been conducted to achieve accurate and precise measurement of the NPS. One approach to improve the accuracy of the NPS measurement is to reduce the statistical variance of the NPS results by involving more data samples. However, this method is based on the assumption that the noise in a radiographic image arises from stochastic processes. In practical data, artifactual components always superimpose on the stochastic noise as low-frequency background trends and prevent us from achieving an accurate NPS. The purpose of this study was to investigate an appropriate background detrending technique to improve the accuracy of NPS estimation for digital x-ray systems. In order to achieve the optimal background detrending technique for NPS estimation, four methods for artifact removal were quantitatively studied and compared: (1) Subtraction of a low-pass-filtered version of the image, (2) subtraction of a 2-D first-order fit to the image, (3) subtraction of a 2-D second-order polynomial fit to the image, and (4) subtracting two uniform exposure images. In addition, background trend removal was separately applied within the original region of interest or its partitioned sub-blocks for all four methods. The performance of the background detrending techniques was compared according to the statistical variance of the NPS results and low-frequency systematic rise suppression. Among the four methods, subtraction of a 2-D second-order polynomial fit to the image was most effective in low-frequency systematic rise suppression and variance reduction for the NPS estimate according to the authors' digital x-ray system. Subtraction of a low-pass-filtered version of the image led to increased NPS variance at low-frequency components because of the side-lobe effects of the frequency response of the boxcar filtering function. Subtracting two uniform exposure images gave the worst result in terms of the smoothness of the NPS curve, although it was effective in low-frequency systematic rise suppression. Subtraction of a 2-D first-order fit to the image was also identified as effective for background detrending, but it was worse than subtraction of a 2-D second-order polynomial fit to the image according to the authors' digital x-ray system. As a result of this study, the authors verified that it is necessary and feasible to get a better NPS estimate by appropriate background trend removal. Subtraction of a 2-D second-order polynomial fit to the image was the most appropriate technique for background detrending without consideration of processing time.
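The preferred detrending option above (subtracting a 2-D second-order polynomial fit) can be sketched as a fit-and-subtract step on a single ROI, as follows. The ROI here is synthetic and the code is only an illustration, not the authors' full NPS pipeline.

    import numpy as np

    def detrend_second_order(roi):
        # Subtract a 2-D second-order polynomial surface fitted to the ROI by least squares
        ny, nx = roi.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        x = xx.ravel().astype(float)
        y = yy.ravel().astype(float)
        A = np.column_stack([np.ones_like(x), x, y, x*y, x**2, y**2])
        coeffs, *_ = np.linalg.lstsq(A, roi.ravel(), rcond=None)
        trend = (A @ coeffs).reshape(roi.shape)
        return roi - trend

    rng = np.random.default_rng(2)
    yy, xx = np.mgrid[0:128, 0:128]
    roi = 100 + 0.05*xx + 0.02*yy + 1e-4*xx*yy + rng.normal(0, 2, (128, 128))  # trend + noise
    flat = detrend_second_order(roi)
    print(round(flat.mean(), 3), round(flat.std(), 3))   # residual is close to pure noise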
An adaptive state of charge estimation approach for lithium-ion series-connected battery system
NASA Astrophysics Data System (ADS)
Peng, Simin; Zhu, Xuelai; Xing, Yinjiao; Shi, Hongbing; Cai, Xu; Pecht, Michael
2018-07-01
Due to the incorrect or unknown noise statistics of a battery system and its cell-to-cell variations, state of charge (SOC) estimation of a lithium-ion series-connected battery system is usually inaccurate or even divergent using model-based methods, such as the extended Kalman filter (EKF) and unscented Kalman filter (UKF). To resolve this problem, an adaptive unscented Kalman filter (AUKF) based on a noise statistics estimator and a model parameter regulator is developed to accurately estimate the SOC of a series-connected battery system. An equivalent circuit model is first built based on the model parameter regulator that illustrates the influence of cell-to-cell variation on the battery system. A noise statistics estimator is then used to adaptively obtain the estimated noise statistics for the AUKF when its prior noise statistics are not accurate or exactly Gaussian. The accuracy and effectiveness of the SOC estimation method are validated by comparing the developed AUKF and the UKF when the model and measurement noise statistics are inaccurate. Compared with the UKF and EKF, the developed method shows the highest SOC estimation accuracy.
An efficient implementation of a high-order filter for a cubed-sphere spectral element model
NASA Astrophysics Data System (ADS)
Kang, Hyun-Gyu; Cheong, Hyeong-Bin
2017-03-01
A parallel-scalable, isotropic, scale-selective spatial filter was developed for the cubed-sphere spectral element model on the sphere. The filter equation is a high-order elliptic (Helmholtz) equation based on the spherical Laplacian operator, which is transformed into cubed-sphere local coordinates. The Laplacian operator is discretized on the computational domain, i.e., on each cell, by the spectral element method with Gauss-Lobatto Lagrange interpolating polynomials (GLLIPs) as the orthogonal basis functions. On the global domain, the discrete filter equation yielded a linear system represented by a highly sparse matrix. The density of this matrix increases quadratically (linearly) with the order of GLLIP (order of the filter), and the linear system is solved in only O(N_g) operations, where N_g is the total number of grid points. The solution, obtained by a row reduction method, demonstrated the typical accuracy and convergence rate of the cubed-sphere spectral element method. To achieve computational efficiency on parallel computers, the linear system was treated by an inverse matrix method (a sparse matrix-vector multiplication). The density of the inverse matrix was lowered to only a few times that of the original sparse matrix without degrading the accuracy of the solution. For better computational efficiency, a local-domain high-order filter was introduced: the filter equation is applied to multiple cells, and then only the central cell is used to reconstruct the filtered field. The parallel efficiency of applying the inverse matrix method to the global- and local-domain filter was evaluated by the scalability on a distributed-memory parallel computer. The scale-selective performance of the filter was demonstrated on Earth topography. The usefulness of the filter as a hyper-viscosity for the vorticity equation was also demonstrated.
Statistical analysis and digital processing of the Mössbauer spectra
NASA Astrophysics Data System (ADS)
Prochazka, Roman; Tucek, Pavel; Tucek, Jiri; Marek, Jaroslav; Mashlan, Miroslav; Pechousek, Jiri
2010-02-01
This work is focused on using statistical methods and developing filtration procedures for signal processing in Mössbauer spectroscopy. Statistical tools for noise filtering in measured spectra are used in many scientific areas. The use of a purely statistical approach to the filtration of accumulated Mössbauer spectra is described. In Mössbauer spectroscopy, the noise can be considered a Poisson statistical process with a Gaussian distribution for high numbers of observations. This noise is a superposition of non-resonant photon counting noise, electronic noise (from the γ-ray detection and discrimination units), and effects of the velocity system quality, which can be characterized by velocity nonlinearities. The possibility of a noise-reducing process using a new design of statistical filter procedure is described. This mathematical procedure improves the signal-to-noise ratio and thus makes it easier to determine the hyperfine parameters of the given Mössbauer spectra. The filter procedure is based on a periodogram method that makes it possible to identify the statistically important components in the spectral domain. The significance level for these components is then feedback-controlled using the correlation coefficient test results. The estimation of the theoretical correlation coefficient level which corresponds to the spectrum resolution is performed. The correlation coefficient test is based on a comparison of the theoretical and experimental correlation coefficients given by the Spearman method. The correctness of this solution was analyzed by a series of statistical tests and confirmed by many spectra measured with increasing statistical quality for a given sample (absorber). The effect of this filter procedure depends on the signal-to-noise ratio, and the applicability of this method is subject to certain conditions.
NASA Astrophysics Data System (ADS)
Kim, Ji Hye; Ahn, Il Jun; Nam, Woo Hyun; Ra, Jong Beom
2015-02-01
Positron emission tomography (PET) images usually suffer from a noticeable amount of statistical noise. In order to reduce this noise, a post-filtering process is usually adopted. However, the performance of this approach is limited because the denoising process is mostly performed on the basis of the Gaussian random noise. It has been reported that in a PET image reconstructed by the expectation-maximization (EM), the noise variance of each voxel depends on its mean value, unlike in the case of Gaussian noise. In addition, we observe that the variance also varies with the spatial sensitivity distribution in a PET system, which reflects both the solid angle determined by a given scanner geometry and the attenuation information of a scanned object. Thus, if a post-filtering process based on the Gaussian random noise is applied to PET images without consideration of the noise characteristics along with the spatial sensitivity distribution, the spatially variant non-Gaussian noise cannot be reduced effectively. In the proposed framework, to effectively reduce the noise in PET images reconstructed by the 3-D ordinary Poisson ordered subset EM (3-D OP-OSEM), we first denormalize an image according to the sensitivity of each voxel so that the voxel mean value can represent its statistical properties reliably. Based on our observation that each noisy denormalized voxel has a linear relationship between the mean and variance, we try to convert this non-Gaussian noise image to a Gaussian noise image. We then apply a block matching 4-D algorithm that is optimized for noise reduction of the Gaussian noise image, and reconvert and renormalize the result to obtain a final denoised image. Using simulated phantom data and clinical patient data, we demonstrate that the proposed framework can effectively suppress the noise over the whole region of a PET image while minimizing degradation of the image resolution.
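The mean-variance linearization described above is closely related to classical variance-stabilizing transforms. As a simplified, hypothetical stand-in for the paper's denormalization-and-conversion step, the sketch below applies a generalized square-root (Anscombe-type) transform to data whose variance grows linearly with the mean; the coefficients a and b are assumptions for illustration, not values from the PET framework.

    import numpy as np

    def stabilize(x, a, b):
        # If Var(x) ≈ a*E(x) + b, then y = (2/a)*sqrt(a*x + b) has variance ≈ 1
        return (2.0 / a) * np.sqrt(np.maximum(a * x + b, 0.0))

    def unstabilize(y, a, b):
        # Simple algebraic inverse of the transform above (no bias correction)
        return ((a * y / 2.0) ** 2 - b) / a

    rng = np.random.default_rng(3)
    a, b = 1.0, 0.0                          # pure Poisson-like case as an example
    means = np.linspace(5, 50, 10)
    samples = [rng.poisson(m, 10000) for m in means]
    # Variance is approximately 1 after stabilization, regardless of the mean
    print([round(np.var(stabilize(s, a, b)), 2) for s in samples])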
Use of Whatman-41 filters in air quality sampling networks (with applications to elemental analysis)
NASA Technical Reports Server (NTRS)
Neustadter, H. E.; Sidik, S. M.; King, R. B.; Fordyce, J. S.; Burr, J. C.
1974-01-01
The operation of a 16-site parallel high volume air sampling network with glass fiber filters on one unit and Whatman-41 filters on the other is reported. The network data and data from several other experiments indicate that (1) sampler-to-sampler and filter-to-filter variabilities are small; (2) the hygroscopic affinity of Whatman-41 filters need not introduce errors; and (3) suspended particulate samples from glass fiber filters averaged slightly, but not statistically significantly, higher than those from Whatman-41 filters. The results obtained demonstrate the practicability of Whatman-41 filters for air quality monitoring and elemental analysis.
NASA Technical Reports Server (NTRS)
Park, K. C.; Belvin, W. Keith
1990-01-01
A general form for the first-order representation of the continuous second-order linear structural-dynamics equations is introduced to derive a corresponding form of first-order continuous Kalman filtering equations. Time integration of the resulting equations is carried out via a set of linear multistep integration formulas. It is shown that a judicious combined selection of computational paths and the undetermined matrices introduced in the general form of the first-order linear structural systems leads to a class of second-order discrete Kalman filtering equations involving only symmetric sparse N x N solution matrices.
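The standard first-order representation that the paper generalizes starts from the second-order system M q̈ + C q̇ + K q = f. One plain state-space rewrite (a single particular choice, not the general form with undetermined matrices introduced by the authors) is sketched below with made-up mass, damping and stiffness matrices.

    import numpy as np

    def first_order_form(M, C, K):
        # Return (A, B) such that d/dt [q; qdot] = A [q; qdot] + B f
        # for the second-order system M qddot + C qdot + K q = f
        n = M.shape[0]
        Minv = np.linalg.inv(M)
        A = np.block([[np.zeros((n, n)), np.eye(n)],
                      [-Minv @ K,        -Minv @ C]])
        B = np.vstack([np.zeros((n, n)), Minv])
        return A, B

    # Two-DOF illustrative example; the numerical values are arbitrary
    M = np.diag([1.0, 2.0])
    C = np.array([[0.1, 0.0], [0.0, 0.2]])
    K = np.array([[4.0, -1.0], [-1.0, 3.0]])
    A, B = first_order_form(M, C, K)
    print(np.linalg.eigvals(A))   # structural poles of the continuous-time system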
NASA Astrophysics Data System (ADS)
Ushenko, Yu A.
2012-11-01
The complex technique of concerted polarization-phase and spatial-frequency filtering of blood plasma laser images is suggested. The possibility of separately obtaining the coordinate distributions of phases of linearly and circularly birefringent protein networks of blood plasma is presented. The statistical (moments of the first to fourth orders) and scale self-similar (logarithmic dependences of power spectra) structure of phase maps of different types of birefringence of the blood plasma of two groups of patients (healthy donors and those suffering from rectal cancer) is investigated. The diagnostically sensitive parameters of a pathological change of the birefringence of blood plasma polycrystalline networks are determined. The effectiveness of this technique for detecting changes in birefringence in the smears of other biological fluids is shown for diagnosing the appearance of cholelithiasis (bile), operative differentiation of acute and gangrenous appendicitis (exudate), and differentiation of inflammatory diseases of joints (synovial fluid).
NASA Astrophysics Data System (ADS)
Drewery, J. O.; Storey, R.; Tanton, N. E.
1984-07-01
A video noise and film grain reducer is described which is based on a first-order recursive temporal filter. Filtering of moving detail is avoided by inhibiting recursion in response to the amount of motion in a picture. Motion detection is based on the point-by-point power of the picture difference signal coupled with a knowledge of the noise statistics. A control system measures the noise power and adjusts the working point of the motion detector accordingly. A field trial of a manual version of the equipment at Television Center indicated that a worthwhile improvement in the quality of noisy or grainy pictures received by the viewer could be obtained. Subsequent trials of the automated version confirmed that the improvement could be maintained. Commercial equipment based on the design is being manufactured and marketed by Pye T.V.T. under license. It is in regular use on both the BBC1 and BBC2 networks.
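A minimal digital sketch of a first-order recursive temporal filter with motion-inhibited recursion is given below. The motion detector here is just a threshold on the squared frame difference relative to an assumed noise power, a crude simplification of the measurement-and-control system described above; all thresholds and gains are illustrative.

    import numpy as np

    def temporal_noise_reduce(frames, k_still=0.2, noise_power=4.0, motion_factor=9.0):
        # frames: sequence of 2-D arrays. Recursive filter
        #   out = (1 - k)*previous_output + k*current_frame,
        # with k forced to 1 where motion is detected (recursion inhibited).
        prev = np.asarray(frames[0], dtype=float)
        out = [prev]
        for f in frames[1:]:
            f = np.asarray(f, dtype=float)
            diff_power = (f - prev) ** 2
            moving = diff_power > motion_factor * noise_power   # crude motion detector
            k = np.where(moving, 1.0, k_still)
            prev = (1.0 - k) * prev + k * f
            out.append(prev)
        return out

    rng = np.random.default_rng(4)
    clean = np.zeros((32, 32))
    noisy = [clean + rng.normal(0, 2, clean.shape) for _ in range(10)]
    # On a static scene the output noise is clearly reduced relative to the input
    print(round(np.std(noisy[-1]), 2), round(np.std(temporal_noise_reduce(noisy)[-1]), 2))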
A simple dynamic subgrid-scale model for LES of particle-laden turbulence
NASA Astrophysics Data System (ADS)
Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz
2017-04-01
In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solvers and grids, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.
Texture analysis with statistical methods for wheat ear extraction
NASA Astrophysics Data System (ADS)
Bakhouche, M.; Cointault, F.; Gouton, P.
2007-01-01
In the agronomic domain, the simplification of crop counting, necessary for yield prediction and agronomic studies, is an important project for technical institutes such as Arvalis. Although the main objective of our global project is to conceive a mobile robot for natural image acquisition directly in a field, Arvalis first proposed that we detect the number of wheat ears in images by image processing before counting them, which will allow us to obtain the first component of the yield. In this paper we compare different texture image segmentation techniques based on feature extraction by first- and higher-order statistical methods, which have been applied to our images. The extracted features are used for unsupervised pixel classification to obtain the different classes in the image. The K-means algorithm is implemented before the choice of a threshold to highlight the ears. Three methods have been tested in this feasibility study, with an average error of 6%. Although the evaluation of the detection quality is currently done visually, automatic evaluation algorithms are being implemented. Moreover, other higher-order statistical methods will be implemented in the future, jointly with methods based on spatio-frequential transforms and specific filtering.
Orthonormal filters for identification in active control systems
NASA Astrophysics Data System (ADS)
Mayer, Dirk
2015-12-01
Many active noise and vibration control systems require models of the control paths. When the controlled system changes slightly over time, adaptive digital filters for the identification of the models are useful. This paper aims at the investigation of a special class of adaptive digital filters: orthonormal filter banks possess the robust and simple adaptation of the widely applied finite impulse response (FIR) filters, but at a lower model order, which is important when considering implementation on embedded systems. However, the filter banks require prior knowledge about the resonance frequencies and damping of the structure. This knowledge can be supposed to be of limited precision, since in many practical systems, uncertainties in the structural parameters exist. In this work, a procedure using a number of training systems to find the fixed parameters for the filter banks is applied. The effect of uncertainties in the prior knowledge on the model error is examined both with a basic example and in an experiment. Furthermore, the possibilities to compensate for the imprecise prior knowledge by a higher filter order are investigated. Also comparisons with FIR filters are implemented in order to assess the possible advantages of the orthonormal filter banks. Numerical and experimental investigations show that significantly lower computational effort can be reached by the filter banks under certain conditions.
Non-linear Post Processing Image Enhancement
NASA Technical Reports Server (NTRS)
Hunt, Shawn; Lopez, Alex; Torres, Angel
1997-01-01
A non-linear filter for image post processing based on the feedforward Neural Network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean square non-linear filter, examples of the high-frequency recovery, and the statistical properties of the filter are given.
Respiratory-Induced Haemodynamic Changes: A Contributing Factor to IVC Filter Penetration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laborda, Alicia, E-mail: alaborda@unizar.es; Kuo, William T., E-mail: wkuo@stanford.edu; Ioakeim, Ignatios, E-mail: ignacio.ioakim@hotmail.es
2015-10-15
Purpose: The purpose of the study is to evaluate the influence of respiratory-induced vena caval hemodynamic changes on filter migration/penetration. Materials and Methods: After placement of either a Gunther Tulip or Celect IVC filter, 101 consecutive patients scheduled for filter retrieval were prospectively enrolled in this study. Pre-retrieval CT scans were used to assess filter complications and to calculate the cross-sectional area at three locations: at the level of filter strut fixation, 3 cm above and 3 cm below. A 3D finite element simulation was constructed on these data, and direct IVC pressure was recorded during filter retrieval. Cross-sectional areas and pressures of the vena cava were measured during neutral breathing and during the Valsalva maneuver, and identified filter complications were recorded. A statistical analysis of these variables was then performed. Results: During Valsalva maneuvers, a 60 % decrease of the IVC cross-sectional area and a fivefold increase in the IVC pressure were identified (p < 0.001). There was a statistically significant difference in the reduction of the cross-sectional area at the filter strut level (p < 0.001) in patients with filter penetration. Difficulty in filter retrieval was higher in penetrated or tilted filters (p < 0.001; p = 0.005). 3D computational models showed significant IVC deformation around the filter during the Valsalva maneuver. Conclusion: Caval morphology and hemodynamics are clearly affected by Valsalva maneuvers. A physiological reduction of IVC cross-sectional area is associated with higher risk of filter penetration, despite short dwell times. Physiologic data should be used to improve future filter designs to remain safely implanted over longer dwell times.
An Integrated approach to the Space Situational Awareness Problem
2016-12-15
data coming from the sensors. We developed particle-based Gaussian mixture filters that are immune to the "curse of dimensionality"/"particle depletion" problem inherent in particle filtering. This method maps the data assimilation/filtering problem into an unsupervised learning problem. Subject terms: Gaussian mixture filters; particle depletion; finite set statistics.
Filter for third order phase locked loops
NASA Technical Reports Server (NTRS)
Crow, R. B.; Tausworthe, R. C. (Inventor)
1973-01-01
Filters for third-order phase-locked loops are used in receivers to acquire and track carrier signals, particularly signals subject to high doppler-rate changes in frequency. A loop filter with an open-loop transfer function and a set of loop constants, with the damping factor set equal to unity, is provided.
A simple new filter for nonlinear high-dimensional data assimilation
NASA Astrophysics Data System (ADS)
Tödter, Julian; Kirchgessner, Paul; Ahrens, Bodo
2015-04-01
The ensemble Kalman filter (EnKF) and its deterministic variants, mostly square root filters such as the ensemble transform Kalman filter (ETKF), represent a popular alternative to variational data assimilation schemes and are applied in a wide range of operational and research activities. Their forecast step employs an ensemble integration that fully respects the nonlinear nature of the analyzed system. In the analysis step, they implicitly assume the prior state and observation errors to be Gaussian. Consequently, in nonlinear systems, the analysis mean and covariance are biased, and these filters remain suboptimal. In contrast, the fully nonlinear, non-Gaussian particle filter (PF) only relies on Bayes' theorem, which guarantees an exact asymptotic behavior, but because of the so-called curse of dimensionality it is exposed to weight collapse. This work shows how to obtain a new analysis ensemble whose mean and covariance exactly match the Bayesian estimates. This is achieved by a deterministic matrix square root transformation of the forecast ensemble, and subsequently a suitable random rotation that significantly contributes to filter stability while preserving the required second-order statistics. The forecast step remains as in the ETKF. The proposed algorithm, which is fairly easy to implement and computationally efficient, is referred to as the nonlinear ensemble transform filter (NETF). The properties and performance of the proposed algorithm are investigated via a set of Lorenz experiments. They indicate that such a filter formulation can increase the analysis quality, even for relatively small ensemble sizes, compared to other ensemble filters in nonlinear, non-Gaussian scenarios. Furthermore, localization enhances the potential applicability of this PF-inspired scheme in larger-dimensional systems. Finally, the novel algorithm is coupled to a large-scale ocean general circulation model. The NETF is stable, behaves reasonably and shows a good performance with a realistic ensemble size. The results confirm that, in principle, it can be applied successfully and as simply as the ETKF in high-dimensional problems without further modifications of the algorithm, even though it is only based on the particle weights. This proves that the suggested method constitutes a useful filter for nonlinear, high-dimensional data assimilation, and is able to overcome the curse of dimensionality even in deterministic systems.
The elimination of zero-order diffraction of 10.6 μm infrared digital holography
NASA Astrophysics Data System (ADS)
Liu, Ning; Yang, Chao
2017-05-01
A new method of eliminating the zero-order diffraction in infrared digital holography is proposed in this paper. In the reconstruction of digital holograms, the spatial sampling frequency of an infrared thermal imager, such as a microbolometer, cannot be compared to that of common visible CCD or CMOS devices. The infrared imager suffers from large pixel size and low spatial resolution, which cause the zero-order diffraction to severely affect the reconstruction process of digital holograms. The zero-order diffraction has very large energy and occupies the central region of the spectrum domain. In this paper, we design a new filtering strategy to overcome this problem. This filtering strategy contains two kinds of filtering process: a Gaussian low-frequency filter and a high-pass phase averaging filter. With a correct setting of the calculation parameters, this filtering strategy works effectively on the holograms and fully eliminates the zero-order diffraction, as well as the two crossover bars shown in the spectrum domain. A detailed explanation and discussion of the new method are given in this paper, and experimental results are also presented to demonstrate the performance of this method.
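The central-spectrum filtering idea can be illustrated with a simple frequency-domain mask: the sketch below suppresses the low-frequency region of a hologram's 2-D spectrum with a Gaussian-shaped high-pass window. This is a generic stand-in for the paper's combined Gaussian low-frequency and phase-averaging strategy, and the width parameter is an arbitrary assumption.

    import numpy as np

    def suppress_zero_order(hologram, sigma=6.0):
        # Attenuate the central (zero-order) region of the hologram spectrum
        # with a (1 - Gaussian) high-pass mask, then return the filtered hologram
        ny, nx = hologram.shape
        fy = np.fft.fftfreq(ny)[:, None] * ny       # frequency bin indices, fft layout
        fx = np.fft.fftfreq(nx)[None, :] * nx
        mask = 1.0 - np.exp(-(fx**2 + fy**2) / (2.0 * sigma**2))  # 0 at DC, ~1 elsewhere
        spectrum = np.fft.fft2(hologram)
        return np.real(np.fft.ifft2(spectrum * mask))

    rng = np.random.default_rng(5)
    holo = 100.0 + rng.normal(0, 1, (256, 256))     # strong DC term plus fringes/noise
    filtered = suppress_zero_order(holo)
    print(round(holo.mean(), 2), round(filtered.mean(), 2))   # zero-order (DC) term removed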
A Joint Optimization Criterion for Blind DS-CDMA Detection
NASA Astrophysics Data System (ADS)
Durán-Díaz, Iván; Cruces-Alvarez, Sergio A.
2006-12-01
This paper addresses the problem of the blind detection of a desired user in an asynchronous DS-CDMA communications system with multipath propagation channels. Starting from the inverse filter criterion introduced by Tugnait and Li in 2001, we propose to tackle the problem in the context of the blind signal extraction methods for ICA. In order to improve the performance of the detector, we present a criterion based on the joint optimization of several higher-order statistics of the outputs. An algorithm that optimizes the proposed criterion is described, and its improved performance and robustness with respect to the near-far problem are corroborated through simulations. Additionally, a simulation using measurements on a real software-radio platform at 5 GHz has also been performed.
Alić, Nikola; Papen, George; Saperstein, Robert; Milstein, Laurence; Fainman, Yeshaiahu
2005-06-13
Exact signal statistics for fiber-optic links containing a single optical pre-amplifier are calculated and applied to sequence estimation for electronic dispersion compensation. The performance is evaluated and compared with results based on the approximate chi-square statistics. We show that detection in existing systems based on exact statistics can be improved relative to using a chi-square distribution for realistic filter shapes. In contrast, for high-spectral efficiency systems the difference between the two approaches diminishes, and performance tends to be less dependent on the exact shape of the filter used.
Assimilating NOAA SST data into BSH operational circulation model for North and Baltic Seas
NASA Astrophysics Data System (ADS)
Losa, Svetlana; Schroeter, Jens; Nerger, Lars; Janjic, Tijana; Danilov, Sergey; Janssen, Frank
A data assimilation (DA) system is developed for the BSH operational circulation model in order to improve forecasts of current velocities, sea surface height, temperature and salinity in the North and Baltic Seas. The assimilated data are NOAA sea surface temperature (SST) data for the period 01.10.07 to 30.09.08. All data assimilation experiments are based on the implementation of one of the so-called statistical DA methods, the Singular Evolutive Interpolated Kalman (SEIK) filter, with different ways of prescribing the assumed model and data error statistics. Results of the experiments will be shown and compared against each other. Hydrographic data from MARNET stations and sea level at a series of tide gauges are used as independent information to validate the data assimilation system. Keywords: Operational Oceanography and forecasting
Using Multi-Objective Genetic Programming to Synthesize Stochastic Processes
NASA Astrophysics Data System (ADS)
Ross, Brian; Imada, Janine
Genetic programming is used to automatically construct stochastic processes written in the stochastic π-calculus. Grammar-guided genetic programming constrains search to useful process algebra structures. The time-series behaviour of a target process is denoted with a suitable selection of statistical feature tests. Feature tests can permit complex process behaviours to be effectively evaluated. However, they must be selected with care, in order to accurately characterize the desired process behaviour. Multi-objective evaluation is shown to be appropriate for this application, since it permits heterogeneous statistical feature tests to reside as independent objectives. Multiple undominated solutions can be saved and evaluated after a run, for determination of those that are most appropriate. Since there can be a vast number of candidate solutions, however, strategies for filtering and analyzing this set are required.
Divergence Free High Order Filter Methods for Multiscale Non-ideal MHD Flows
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern
2003-01-01
Low-dissipative high-order filter finite difference methods for long-time wave propagation of shock/turbulence/combustion compressible viscous MHD flows have been constructed. Several variants of the filter approach that cater to different flow types are proposed. These filters provide a natural and efficient way to minimize the numerical error in the divergence of the magnetic field (∇ · B), in the sense that no standard divergence cleaning is required. For certain 2-D MHD test problems, divergence-free preservation of the magnetic field by these filter schemes has been achieved.
NASA Astrophysics Data System (ADS)
Fangxiong, Chen; Min, Lin; Heping, Ma; Hailong, Jia; Yin, Shi; Forster, Dai
2009-08-01
An asymmetric MOSFET-C band-pass filter (BPF) with on-chip charge pump auto-tuning is presented. It is implemented in UMC (United Microelectronics Corporation) 0.18 μm CMOS process technology. The filter system with auto-tuning uses a master-slave technique for continuous tuning, in which the charge pump outputs 2.663 V, much higher than the power supply voltage, to improve the linearity of the filter. The main filter, with third-order low-pass and second-order high-pass properties, is an asymmetric band-pass filter with a bandwidth of 2.730-5.340 MHz. The in-band third-order harmonic input intercept point (IIP3) is 16.621 dBm, with 50 Ω as the source impedance. The input-referred noise is about 47.455 μVrms. The main filter dissipates 3.528 mW while the auto-tuning system dissipates 2.412 mW from a 1.8 V power supply. The filter with the auto-tuning system occupies 0.592 mm2, and it can be utilized in GPS (global positioning system) and Bluetooth systems.
Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation
NASA Astrophysics Data System (ADS)
Dwivedi, Shekhar
2009-02-01
Low-pass filters can affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing, which in turn leads to better quantification followed by correct diagnosis and accurate interpretation by the physician. This study aims at evaluating low-pass filters used with SPECT reconstruction algorithms. The criteria for evaluating the filters are based on estimating the azimuth and elevation angles of the SPECT-reconstructed cardiac orientation. The low-pass filters studied are Butterworth, Gaussian, Hamming, Hanning and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization) and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only Butterworth is used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume, and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that the behavior of the Hamming, Hanning and Parzen filters (used with FBP) with varying cutoff is similar for all the datasets. The Butterworth filter (cutoff > 0.4) behaves in a similar fashion for all the datasets using all the algorithms, whereas with OSEM for a cutoff < 0.4 it fails to generate the cardiac orientation due to oversmoothing, and it gives an unstable response with FBP and MLEM. This study of the effect of low-pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into the optimal selection of filter parameters.
NASA Technical Reports Server (NTRS)
Pan, Jianqiang
1992-01-01
Several important problems in the fields of signal processing and model identification are studied, including system structure identification, frequency response determination, high-order model reduction, high-resolution frequency analysis, and deconvolution filtering. Each of these topics involves a wide range of applications and has received considerable attention. Using Fourier-based sinusoidal modulating signals, it is shown that a discrete autoregressive model can be constructed for the least squares identification of continuous systems. Some identification algorithms are presented for frequency response determination of both SISO and MIMO systems using only transient data. Also, several new schemes for model reduction were developed. Based upon complex sinusoidal modulating signals, a parametric least squares algorithm for high-resolution frequency estimation is proposed. Numerical examples show that the proposed algorithm gives better performance than the usual methods. Also, the problem of deconvolution and parameter identification of a general noncausal, nonminimum-phase ARMA system driven by non-Gaussian stationary random processes was studied. Algorithms are introduced for inverse cumulant estimation, both in the frequency domain via FFT algorithms and in the time domain via the least squares algorithm.
Real-time vehicle noise cancellation techniques for gunshot acoustics
NASA Astrophysics Data System (ADS)
Ramos, Antonio L. L.; Holm, Sverre; Gudvangen, Sigmund; Otterlei, Ragnvald
2012-06-01
Acoustical sniper positioning systems rely on the detection and direction-of-arrival (DOA) estimation of the shockwave and the muzzle blast in order to provide an estimate of a potential sniper's location. Field tests have shown that detecting and estimating the DOA of the muzzle blast is a rather difficult task in the presence of background noise sources, e.g., vehicle noise, especially in long-range detection and absorbing terrains. In our previous work, presented in the 2011 edition of this conference, we highlighted the importance of improving the SNR of the gunshot signals prior to the detection and recognition stages, aiming at lowering the false alarm and miss-detection rates and, thereby, increasing the reliability of the system. This paper reports on real-time noise cancellation techniques, such as spectral subtraction and adaptive filtering, applied to gunshot signals. Our model assumes the background noise to be short-time stationary and uncorrelated with the impulsive gunshot signals. In practice, relatively long periods without signal occur and can be used to estimate the noise spectrum and its first- and second-order statistics, as required by the spectral subtraction and adaptive filtering techniques, respectively. The results presented in this work are supported by extensive simulations based on real data.
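A bare-bones magnitude spectral subtraction of the kind referred to above is sketched below. The frame length, overlap, spectral floor and synthetic signals are arbitrary illustration values, not the configuration used in the paper.

    import numpy as np

    def spectral_subtraction(x, noise, frame=512, hop=256, floor=0.05):
        # Subtract the average noise magnitude spectrum (estimated from a
        # signal-free segment `noise`) from each frame of x; overlap-add output.
        win = np.hanning(frame)
        noise_frames = [np.abs(np.fft.rfft(noise[i:i+frame] * win))
                        for i in range(0, len(noise) - frame, hop)]
        noise_mag = np.mean(noise_frames, axis=0)

        out = np.zeros(len(x))
        for i in range(0, len(x) - frame, hop):
            spec = np.fft.rfft(x[i:i+frame] * win)
            mag = np.abs(spec) - noise_mag
            mag = np.maximum(mag, floor * noise_mag)          # spectral floor
            cleaned = mag * np.exp(1j * np.angle(spec))       # keep the noisy phase
            out[i:i+frame] += np.fft.irfft(cleaned, n=frame) * win
        return out

    rng = np.random.default_rng(6)
    noise_ref = rng.normal(0, 1, 16000)                       # signal-free noise segment
    impulse = np.zeros(16000); impulse[8000:8050] = 5.0       # crude stand-in for a muzzle blast
    enhanced = spectral_subtraction(impulse + rng.normal(0, 1, 16000), noise_ref)
    print(round(float(np.max(np.abs(enhanced))), 2))          # impulsive event preserved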
A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation
Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao
2016-01-01
The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter in order to perform a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation, were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms. PMID:27999361
Optical ranked-order filtering using threshold decomposition
Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.
1990-01-01
A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
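Threshold decomposition, as used above, can be demonstrated digitally in a few lines: each threshold level is binarized, filtered with a linear neighborhood sum followed by a comparison, and the binary results are summed back together. The sketch below does this for a 1-D median filter on small integer data, as a software analogue of the optical system rather than a description of it.

    import numpy as np

    def median_by_threshold_decomposition(x, window=3, levels=256):
        # 1-D median filter computed via threshold decomposition:
        # binarize at every level, apply a linear box sum plus a majority threshold,
        # then sum the binary outputs to reconstruct the ranked-order result.
        x = np.asarray(x, dtype=int)
        pad = window // 2
        out = np.zeros_like(x)
        kernel = np.ones(window)
        for t in range(1, levels):
            b = (x >= t).astype(float)                       # threshold component
            s = np.convolve(np.pad(b, pad, mode="edge"), kernel, mode="valid")
            out += (s >= (window + 1) // 2).astype(int)      # majority vote = median of binary signal
        return out

    x = np.array([10, 12, 11, 200, 13, 12, 11], dtype=int)   # impulse at index 3
    print(median_by_threshold_decomposition(x))              # impulse removed by the median

Changing the comparison threshold from the majority value to 1 or to the full window size turns the same structure into a maximum or minimum filter, mirroring the flexibility of the optical architecture described above.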
Olsson, Anna; Arlig, Asa; Carlsson, Gudrun Alm; Gustafsson, Agnetha
2007-09-01
The image quality of single photon emission computed tomography (SPECT) depends on the reconstruction algorithm used. The purpose of the present study was to evaluate parameters in ordered subset expectation maximization (OSEM) and to compare systematically with filtered back-projection (FBP) for reconstruction of regional cerebral blood flow (rCBF) SPECT, incorporating attenuation and scatter correction. The evaluation was based on the trade-off between contrast recovery and statistical noise using different sizes of subsets, number of iterations and filter parameters. Monte Carlo simulated SPECT studies of a digital human brain phantom were used. The contrast recovery was calculated as measured contrast divided by true contrast. Statistical noise in the reconstructed images was calculated as the coefficient of variation in pixel values. A constant contrast level was reached above 195 equivalent maximum likelihood expectation maximization iterations. The choice of subset size was not crucial as long as there were ≥ 2 projections per subset. The OSEM reconstruction was found to give 5-14% higher contrast recovery than FBP for all clinically relevant noise levels in rCBF SPECT. The Butterworth filter, power 6, achieved the highest stable contrast recovery level at all clinically relevant noise levels. The cut-off frequency should be chosen according to the noise level accepted in the image. Trade-off plots are shown to be a practical way of deciding the number of iterations and subset size for the OSEM reconstruction and can be used for other examination types in nuclear medicine.
cyclostratigraphy, sequence stratigraphy and organic matter accumulation mechanism
NASA Astrophysics Data System (ADS)
Cong, F.; Li, J.
2016-12-01
The first member of the Maokou Formation of the Sichuan basin is composed of well-preserved carbonate ramp couplets of limestone and marlstone/shale. It acts as one of the potential shale gas source rocks and is suitable for time-series analysis. We conducted time-series analysis to identify high-frequency sequences, reconstruct the high-resolution sedimentation rate, estimate detailed primary productivity for the first time in the study intervals, and discuss the organic matter accumulation mechanism of the source rock within a sequence stratigraphic framework. Using the theory of cyclostratigraphy and sequence stratigraphy, the high-frequency sequences of one outcrop profile and one drilling well are identified. Two third-order sequences and eight fourth-order sequences are distinguished on the outcrop profile based on the cycle stacking patterns. For the drilling well, sequence boundaries and four system tracts are distinguished by "integrated prediction error filter analysis" (INPEFA) of gamma-ray logging data, and eight fourth-order sequences are identified by the 405 ka long-eccentricity curve in the depth domain, which is quantified and filtered by integrated analysis of MTM spectral analysis, evolutive harmonic analysis (EHA), evolutive average spectral misfit (eASM) and band-pass filtering. This suggests that high-frequency sequences correlate well with Milankovitch orbital signals recorded in the sediments, and that it is applicable to use cyclostratigraphy theory in dividing high-frequency (fourth- to sixth-order) sequence stratigraphy. The high-resolution sedimentation rate is reconstructed through the study interval by tracking the highly statistically significant short-eccentricity component (123 ka) revealed by EHA. Based on the sedimentation rate and measured TOC and density data, the burial flux, delivery flux and primary productivity of organic carbon were estimated. By integrating redox proxies, we can discuss the controls on organic matter accumulation by primary production and preservation under the high-resolution sequence stratigraphic framework. Results show that the high average organic carbon contents in the study interval are mainly attributed to high primary production. The results also show a good correlation between high organic carbon accumulation and intervals of transgression.
Combline designs improve mm-wave filter performance
NASA Astrophysics Data System (ADS)
Hey-Shipton, Gregory L.
1990-10-01
Combline filters with 2- to 75-percent bandwidths and orders up to 19 are discussed. They are realized as coupled rectangular coaxial transmission lines, since this type of transmission line is characterized by machinability and the wide variation in coupling coefficients that can be realized with rectangular bars. A broadband combline filter designed as a 19th-order, 0.01-dB equal-ripple Chebyshev type is presented, along with a third-order 0.001-dB equal-ripple Chebyshev filter with a 200-MHz bandwidth centered at 8.0 GHz. Interfaces to standard 50-ohm coaxial lines, as well as structures for waveguide interfaces are described, and focus is placed on a two-step impedance transformer matching a 538-ohm waveguide characteristic impedance to a 95-ohm filter terminal impedance.
Belavkin filter for mixture of quadrature and photon counting process with some control techniques
NASA Astrophysics Data System (ADS)
Garg, Naman; Parthasarathy, Harish; Upadhyay, D. K.
2018-03-01
The Belavkin filter for the H-P Schrödinger equation is derived when the measurement process consists of a mixture of quantum Brownian motions and conservation/Poisson process. Higher-order powers of the measurement noise differentials appear in the Belavkin dynamics. For simulation, we use a second-order truncation. Control of the Belavkin filtered state by infinitesimal unitary operators is achieved in order to reduce the noise effects in the Belavkin filter equation. This is carried out along the lines of Luc Bouten. Various optimization criteria for control are described like state tracking and Lindblad noise removal.
The Hurst Phenomenon in Error Estimates Related to Atmospheric Turbulence
NASA Astrophysics Data System (ADS)
Dias, Nelson Luís; Crivellaro, Bianca Luhm; Chamecki, Marcelo
2018-05-01
The Hurst phenomenon is a well-known feature of long-range persistence first observed in hydrological and geophysical time series by E. Hurst in the 1950s. It has also been found in several cases in turbulence time series measured in the wind tunnel, the atmosphere, and in rivers. Here, we conduct a systematic investigation of the value of the Hurst coefficient H in atmospheric surface-layer data, and its impact on the estimation of random errors. We show that usually H > 0.5, which implies the non-existence (in the statistical sense) of the integral time scale. Since the integral time scale is present in the Lumley-Panofsky equation for the estimation of random errors, this has important practical consequences. We estimated H in two principal ways: (1) with an extension of the recently proposed filtering method to estimate the random error (H_p), and (2) with the classical rescaled range introduced by Hurst (H_R). Other estimators were tried but were found less able to capture the statistical behaviour of the large scales of turbulence. Using data from three micrometeorological campaigns we found that both first- and second-order turbulence statistics display the Hurst phenomenon. Usually, H_R is larger than H_p for the same dataset, raising the possibility that one, or even both, of these estimators may be biased. For the relative error, we found that the errors estimated with the approach adopted here, which we call the relaxed filtering method and which takes into account the occurrence of the Hurst phenomenon, are larger than both the filtering method and the classical Lumley-Panofsky estimates. Finally, we found that there is no apparent relationship between H and the Obukhov stability parameter. The relative errors, however, do show stability dependence, particularly in the case of the error of the kinematic momentum flux in unstable conditions, and that of the kinematic sensible heat flux in stable conditions.
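For readers unfamiliar with the classical rescaled-range estimator H_R mentioned above, a compact sketch follows; the turbulence series is a white-noise placeholder (for which H should be close to 0.5), and the block sizes are arbitrary.

```python
# Sketch of the classical rescaled-range (R/S) estimate of the Hurst coefficient.
import numpy as np

def hurst_rs(x, min_block=16):
    x = np.asarray(x, float)
    n = x.size
    sizes, rs = [], []
    size = min_block
    while size <= n // 2:
        m = n // size
        blocks = x[:m * size].reshape(m, size)
        z = np.cumsum(blocks - blocks.mean(axis=1, keepdims=True), axis=1)
        r = z.max(axis=1) - z.min(axis=1)          # range of cumulative departures
        s = blocks.std(axis=1, ddof=1)             # block standard deviation
        rs.append(np.mean(r / s))
        sizes.append(size)
        size *= 2
    # H_R is the slope of log(R/S) versus log(block size)
    return np.polyfit(np.log(sizes), np.log(rs), 1)[0]

x = np.random.default_rng(2).normal(size=2**14)    # uncorrelated noise: expect H ~ 0.5
print(hurst_rs(x))
```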
NASA Astrophysics Data System (ADS)
Tao, Tong; Baoyong, Chi; Ziqiang, Wang; Ying, Zhang; Hanjun, Jiang; Zhihua, Wang
2010-05-01
A reconfigurable analog baseband circuit for WLAN, WCDMA, and Bluetooth in 0.35 μm CMOS is presented. The circuit consists of two variable gain amplifiers (VGA) in cascade and a Gm-C elliptic low-pass filter (LPF). The filter-order and the cut-off frequency of the LPF can be reconfigured to satisfy the requirements of various applications. In order to achieve the optimum power consumption, the bandwidth of the VGAs can also be dynamically reconfigured and some Gm cells can be cut off in the given application. Simulation results show that the analog baseband circuit consumes 16.8 mW for WLAN, 8.9 mW for WCDMA and only 6.5 mW for Bluetooth, all with a 3 V power supply. The analog baseband circuit could provide -10 to +40 dB variable gain, third-order low pass filtering with 1 MHz cut-off frequency for Bluetooth, fourth-order low pass filtering with 2.2 MHz cut-off frequency for WCDMA, and fifth-order low pass filtering with 11 MHz cut-off frequency for WLAN, respectively.
Adaptive Low Dissipative High Order Filter Methods for Multiscale MHD Flows
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern
2004-01-01
Adaptive low-dissipative high order filter finite difference methods for long-time wave propagation of shock/turbulence/combustion compressible viscous MHD flows have been constructed. Several variants of the filter approach that cater to different flow types are proposed. These filters provide a natural and efficient way for the minimization of the divergence of the magnetic field [divergence of B] numerical error in the sense that no standard divergence cleaning is required. For certain 2-D MHD test problems, divergence-free preservation of the magnetic fields of these filter schemes has been achieved.
NASA Astrophysics Data System (ADS)
Torabi, H.; Pariz, N.; Karimpour, A.
2016-02-01
This paper investigates fractional Kalman filters when a time delay is present in the observation signal of the discrete-time stochastic fractional-order state-space representation. After reviewing the common fractional Kalman filter, we derive a fractional Kalman filter for time-delay fractional systems; a detailed derivation is given. Fractional Kalman filters are used to estimate recursively the states of fractional-order state-space systems by minimizing the cost function when there is a constant time delay (d) in the observation signal. The problem is solved by converting the filtering problem into a usual d-step prediction problem for delay-free fractional systems.
Speckle noise reduction of 1-look SAR imagery
NASA Technical Reports Server (NTRS)
Nathan, Krishna S.; Curlander, John C.
1987-01-01
Speckle noise is inherent to synthetic aperture radar (SAR) imagery. Since the degradation of the image due to this noise results in uncertainties in the interpretation of the scene and in a loss of apparent resolution, it is desirable to filter the image to reduce this noise. In this paper, an adaptive algorithm based on the calculation of the local statistics around a pixel is applied to 1-look SAR imagery. The filter adapts to the nonstationarity of the image statistics since the size of the blocks is very small compared to that of the image. The performance of the filter is measured in terms of the equivalent number of looks (ENL) of the filtered image and the resulting resolution degradation. The results are compared to those obtained from different techniques applied to similar data. The local adaptive filter (LAF) significantly increases the ENL of the final image. The associated loss of resolution is also lower than that for other commonly used speckle reduction techniques.
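A local-statistics speckle filter in the same spirit as the adaptive algorithm described above can be sketched as follows; this is a generic Lee-type filter for multiplicative intensity speckle, not the paper's exact LAF, and the window size and number of looks are assumptions.

```python
# Sketch of a Lee-type local-statistics filter for multiplicative speckle noise.
import numpy as np
from scipy.ndimage import uniform_filter

def local_stats_filter(img, win=7, looks=1):
    """Adaptive smoothing driven by the local mean and variance around each pixel."""
    img = img.astype(float)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    var = np.maximum(mean_sq - mean**2, 0.0)
    noise_var = 1.0 / looks                       # speckle variance for 1-look intensity data
    # weight -> 0 in homogeneous areas (smooth heavily), -> 1 near edges (preserve detail)
    weight = np.maximum(var - noise_var * mean**2, 0.0) / np.maximum(var, 1e-12)
    return mean + weight * (img - mean)
```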
Hadwin, Paul J; Peterson, Sean D
2017-04-01
The Bayesian framework for parameter inference provides a basis from which subject-specific reduced-order vocal fold models can be generated. Previously, it has been shown that a particle filter technique is capable of producing estimates and associated credibility intervals of time-varying reduced-order vocal fold model parameters. However, the particle filter approach is difficult to implement and has a high computational cost, which can be barriers to clinical adoption. This work presents an alternative estimation strategy based upon Kalman filtering aimed at reducing the computational cost of subject-specific model development. The robustness of this approach to Gaussian and non-Gaussian noise is discussed. The extended Kalman filter (EKF) approach is found to perform very well in comparison with the particle filter technique at dramatically lower computational cost. Based upon the test cases explored, the EKF is comparable in terms of accuracy to the particle filter technique when more than 6000 particles are employed; if fewer particles are employed, the EKF actually performs better. For comparable levels of accuracy, the solution time is reduced by 2 orders of magnitude when employing the EKF. By virtue of the approximations used in the EKF, however, the credibility intervals tend to be slightly underpredicted.
Texture Feature Analysis for Different Resolution Level of Kidney Ultrasound Images
NASA Astrophysics Data System (ADS)
Kairuddin, Wan Nur Hafsha Wan; Mahmud, Wan Mahani Hafizah Wan
2017-08-01
Image feature extraction is a technique to identify the characteristics of an image. The objective of this work is to discover the texture features that best describe the tissue characteristics of a healthy kidney in ultrasound (US) images. Three ultrasound machines with different specifications are used in order to obtain images of different quality (different resolution). Initially, the acquired images are pre-processed to de-noise the speckle while preserving the pixels in the region of interest (ROI) for further extraction; a Gaussian low-pass filter is chosen as the filtering method in this work. 150 enhanced images are then segmented by creating a foreground and background of the image, where a mask is created to eliminate unwanted intensity values. Statistics-based texture feature methods are used, namely the Intensity Histogram (IH), the Gray-Level Co-occurrence Matrix (GLCM) and the Gray-Level Run-Length Matrix (GLRLM); these methods depend on the spatial distribution of intensity values or gray levels in the kidney region. Using one-way ANOVA in SPSS, the results indicated that three GLCM features (Contrast, Difference Variance and Inverse Difference Moment Normalized) are not statistically significantly different across image qualities, which suggests that these three features describe healthy kidney characteristics regardless of the ultrasound image quality.
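GLCM texture features of the kind named above can be computed with scikit-image; the sketch below uses a random stand-in ROI, and note that the functions are spelled `greycomatrix`/`greycoprops` in older scikit-image releases. The built-in "homogeneity" property is related to, but not identical with, the normalized inverse difference moment used in the paper.

```python
# Sketch of GLCM-based texture features (contrast, homogeneity) for a kidney ROI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

roi = (np.random.default_rng(3).random((64, 64)) * 255).astype(np.uint8)  # stand-in ROI
glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
contrast = graycoprops(glcm, "contrast").mean()
homogeneity = graycoprops(glcm, "homogeneity").mean()   # related to the inverse difference moment
print(contrast, homogeneity)
```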
Zhang, Zhiqing; Kuzmin, Nikolay V; Groot, Marie Louise; de Munck, Jan C
2017-06-01
The morphologies contained in 3D third harmonic generation (THG) images of human brain tissue can report on the pathological state of the tissue. However, the complexity of THG brain images makes the usage of modern image processing tools, especially those of image filtering, segmentation and validation, to extract this information challenging. We developed a salient edge-enhancing model of anisotropic diffusion for image filtering, based on higher order statistics. We split the intrinsic 3-phase segmentation problem into two 2-phase segmentation problems, each of which we solved with a dedicated model, active contour weighted by prior extreme. We applied the novel proposed algorithms to THG images of structurally normal ex-vivo human brain tissue, revealing key tissue components: brain cells, microvessels and neuropil, enabling statistical characterization of these components. Comprehensive comparison to manually delineated ground truth validated the proposed algorithms. Quantitative comparison to second harmonic generation/auto-fluorescence images, acquired simultaneously from the same tissue area, confirmed the correctness of the main THG features detected. The software and test datasets are available from the authors (z.zhang@vu.nl). Supplementary data are available at Bioinformatics online.
Optical ranked-order filtering using threshold decomposition
Allebach, J.P.; Ochoa, E.; Sweeney, D.W.
1987-10-09
A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
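The stacking property that makes threshold decomposition work can be demonstrated numerically: median filtering an integer signal equals thresholding it at every level, binary median filtering each level, and summing the results. The sketch below verifies this against SciPy's median filter on a synthetic 1-D signal (window size and value range are arbitrary choices).

```python
# Sketch of threshold-decomposition median filtering and its stacking property.
import numpy as np
from scipy.ndimage import median_filter, uniform_filter1d

rng = np.random.default_rng(4)
x = rng.integers(0, 16, size=200)          # few gray levels keeps the decomposition cheap
win = 5

stacked = np.zeros_like(x)
for t in range(1, int(x.max()) + 1):
    binary = (x >= t).astype(float)                          # threshold component at level t
    # binary median = majority vote over the window
    stacked += (uniform_filter1d(binary, win) > 0.5).astype(int)

# away from the boundaries the stacked result equals the direct median filter
assert np.array_equal(stacked[win:-win], median_filter(x, size=win)[win:-win])
```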
SAR Speckle Noise Reduction Using Wiener Filter
NASA Technical Reports Server (NTRS)
Joo, T. H.; Held, D. N.
1983-01-01
Synthetic aperture radar (SAR) images are degraded by speckle. A multiplicative speckle noise model for SAR images is presented. Using this model, a Wiener filter is derived by minimizing the mean-squared error using the known speckle statistics. Implementation of the Wiener filter is discussed and experimental results are presented. Finally, possible improvements to this method are explored.
An Automated Energy Detection Algorithm Based on Morphological Filter...
2018-01-01
ARL-TR-8270 ● JAN 2018 ● US Army Research Laboratory. ...collected data. These statistical techniques are under the area of descriptive statistics, which is a methodology to condense the data in quantitative ...
NASA Astrophysics Data System (ADS)
Zeng, Bangze; Zhu, Youpan; Li, Zemin; Hu, Dechao; Luo, Lin; Zhao, Deli; Huang, Juan
2014-11-01
Due to the low contrast, strong noise and unclear visual effect of infrared images, targets are very difficult to observe and identify. This paper presents an improved infrared image detail enhancement algorithm based on adaptive histogram statistical stretching and gradient filtering (AHSS-GF). Based on the fact that the human eye is very sensitive to edges and lines, the details and textures are extracted by gradient filtering. A new histogram is acquired by calculating the windowed sum of the original histogram over a fixed window. Using the minimum value as the cut-off point, histogram statistical stretching is carried out. After proper weights are assigned to the details and the background, the detail-enhanced result is obtained. The results indicate that image contrast can be improved and that details and textures can be enhanced effectively as well.
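A loose sketch of the two ingredients named above, gradient filtering for the detail layer and a statistical stretch of the background, is given below. It is not the authors' exact AHSS-GF pipeline: the Sobel gradient, percentile-based stretch limits and the weights `w_b`, `w_d` are all assumptions for illustration.

```python
# Loose sketch: gradient-based detail extraction plus histogram statistical stretching.
import numpy as np
from scipy import ndimage

def enhance(ir, w_b=1.0, w_d=2.0):
    ir = ir.astype(float)
    detail = np.hypot(ndimage.sobel(ir, 0), ndimage.sobel(ir, 1))   # edges and textures
    detail *= 255.0 / max(detail.max(), 1e-6)                        # normalize detail layer
    lo, hi = np.percentile(ir, [1, 99])                              # robust stretch limits
    background = np.clip((ir - lo) / max(hi - lo, 1e-6), 0, 1) * 255 # stretched background
    out = w_b * background + w_d * detail                            # weighted recombination
    return np.clip(out, 0, 255).astype(np.uint8)
```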
Color enhancement of landsat agricultural imagery: JPL LACIE image processing support task
NASA Technical Reports Server (NTRS)
Madura, D. P.; Soha, J. M.; Green, W. B.; Wherry, D. B.; Lewis, S. D.
1978-01-01
Color enhancement techniques were applied to LACIE LANDSAT segments to determine whether such enhancement can assist analysis in crop identification. The procedure involved increasing the color range by removing correlation between components. First, a principal component transformation was performed, followed by contrast enhancement to equalize component variances, followed by an inverse transformation to restore familiar color relationships. Filtering was applied to lower-order components to reduce color speckle in the enhanced products. The use of single-acquisition and multiple-acquisition statistics to control the enhancement was compared, and the effects of normalization were investigated. Evaluation is left to LACIE personnel.
Theory of Alike Selectivity in Biological Channels
NASA Technical Reports Server (NTRS)
Luchinsky, Dmitry G.; Gibby, Will A. T.; Kaufman, Igor Kh.; Eisenberg, Robert S.; McClintock, Peter V. E.
2016-01-01
We introduce a statistical mechanical model of the selectivity filter that accounts for the interaction between ions within the channel, and derive the Eisenman equation for filter selectivity directly from the condition of barrier-less conduction.
Kidney Disease Statistics for the United States
... that a person’s kidneys are damaged and cannot filter blood the way they should. This damage can ... per 1.73 m² ). Dialysis: Treatment to filter wastes and water from the blood. When their ...
NASA Astrophysics Data System (ADS)
Piretzidis, D.; Sra, G.; Sideris, M. G.
2016-12-01
This study explores new methods for identifying correlation errors in harmonic coefficients derived from monthly solutions of the Gravity Recovery and Climate Experiment (GRACE) satellite mission using pattern recognition and neural network algorithms. These correlation errors are evidenced in the differences between monthly solutions and can be suppressed using a de-correlation filter. In all studies so far, the implementation of the de-correlation filter starts from a specific minimum order (i.e., 11 for RL04 and 38 for RL05) until the maximum order of the monthly solution examined. This implementation method has two disadvantages, namely, the omission of filtering correlated coefficients of order less than the minimum order and the filtering of uncorrelated coefficients of order higher than the minimum order. In the first case, the filtered solution is not completely free of correlated errors, whereas the second case results in a monthly solution that suffers from loss of geophysical signal. In the present study, a new method of implementing the de-correlation filter is suggested, by identifying and filtering only the coefficients that show indications of high correlation. Several numerical and geometric properties of the harmonic coefficient series of all orders are examined. Extreme cases of both correlated and uncorrelated coefficients are selected, and their corresponding properties are used to train a two-layer feed-forward neural network. The objective of the neural network is to identify and quantify the correlation by providing the probability of an order of coefficients to be correlated. Results show good performance of the neural network, both in the validation stage of the training procedure and in the subsequent use of the trained network to classify independent coefficients. The neural network is also capable of identifying correlated coefficients even when a small number of training samples and neurons are used (e.g.,100 and 10, respectively).
Unenhanced third-generation dual-source chest CT using a tin filter for spectral shaping at 100kVp.
Haubenreisser, Holger; Meyer, Mathias; Sudarski, Sonja; Allmendinger, Thomas; Schoenberg, Stefan O; Henzler, Thomas
2015-08-01
To prospectively investigate image quality and radiation dose of 100kVp spectral shaping chest CT using a dedicated tin filter on a 3rd generation dual-source CT (DSCT) in comparison to standard 100kVp chest CT. Sixty patients referred for a non-contrast chest CT on a 3rd generation DSCT were prospectively included and examined at 100kVp with a dedicated tin filter. These patients were retrospectively matched with patients that were examined on a 2nd generation DSCT at 100kVp without a tin filter. Objective and subjective image quality was assessed in various anatomic regions and radiation dose was compared. Radiation dose was decreased by 90% using the tin filter (3.0 vs. 0.32 mSv). Soft tissue attenuation and image noise were not statistically different for the two examination techniques (p>0.05); however, image noise was found to be significantly higher in the trachea when using the additional tin filter (p=0.002). SNR was found to be statistically similar in pulmonary tissue, significantly lower when measured in air, and significantly higher in the aorta for the scans on the 3rd generation DSCT. Subjective image quality with regard to overall quality, image noise and sharpness was not statistically significantly different (p>0.05). 100kVp spectral shaping chest CT by means of a tube-based tin filter on a 3rd generation DSCT allows a 90% dose reduction when compared to 100kVp chest CT on a 2nd generation DSCT without spectral shaping.
Online Denoising Based on the Second-Order Adaptive Statistics Model.
Yi, Sheng-Lun; Jin, Xue-Bo; Su, Ting-Li; Tang, Zhen-Yun; Wang, Fa-Fa; Xiang, Na; Kong, Jian-Lei
2017-07-20
Online denoising is motivated by real-time applications in industrial processes, where the data must be usable soon after they are collected. Since the noise in a practical process is usually colored, it poses quite a challenge for denoising techniques. In this paper, a novel online denoising method is proposed to process practical measurement data with colored noise, where the characteristics of the colored noise are captured in the dynamic model via an adaptive parameter. The proposed method consists of two parts within a closed loop: the first estimates the system state based on the second-order adaptive statistics model, and the other updates the adaptive parameter in the model using the Yule-Walker algorithm. Specifically, the state estimation is implemented via the Kalman filter in a recursive way, so the online requirement is met. Experimental data from a reinforced concrete structure test were used to verify the effectiveness of the proposed method. The results show that the proposed method not only handles signals with colored noise, but also achieves a trade-off between efficiency and accuracy.
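The Yule-Walker step that adapts a second-order (AR(2)) noise model can be sketched compactly; the measurement series below is a white-noise placeholder, and the sample autocovariance estimator is one standard choice.

```python
# Sketch of Yule-Walker estimation of AR(2) parameters from a measurement series.
import numpy as np

def yule_walker_ar2(y):
    y = np.asarray(y, float) - np.mean(y)
    # sample autocovariances r0, r1, r2
    r = np.array([np.dot(y[:len(y) - k], y[k:]) / len(y) for k in range(3)])
    R = np.array([[r[0], r[1]],
                  [r[1], r[0]]])
    phi = np.linalg.solve(R, r[1:])          # AR coefficients [phi1, phi2]
    sigma2 = r[0] - phi @ r[1:]              # innovation (driving-noise) variance
    return phi, sigma2

y = np.random.default_rng(5).normal(size=1000)
print(yule_walker_ar2(y))
```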
Shilling Attacks Detection in Recommender Systems Based on Target Item Analysis
Zhou, Wei; Wen, Junhao; Koh, Yun Sing; Xiong, Qingyu; Gao, Min; Dobbie, Gillian; Alam, Shafiq
2015-01-01
Recommender systems are highly vulnerable to shilling attacks, both by individuals and groups. Attackers who introduce biased ratings in order to affect recommendations have been shown to negatively affect collaborative filtering (CF) algorithms. Previous research focuses only on the differences between genuine profiles and attack profiles, ignoring the group characteristics in attack profiles. In this paper, we study the use of statistical metrics to detect rating patterns of attackers and group characteristics in attack profiles. A further issue is that most existing detection methods are model-specific. Two metrics, Rating Deviation from Mean Agreement (RDMA) and Degree of Similarity with Top Neighbors (DegSim), are used for analyzing rating patterns between malicious profiles and genuine profiles in attack models. Building upon this, we also propose and evaluate a detection structure called RD-TIA for detecting shilling attacks in recommender systems using a statistical approach. In order to detect more complicated attack models, we propose a novel metric called DegSim' based on DegSim. The experimental results show that our detection model based on target item analysis is an effective approach for detecting shilling attacks. PMID:26222882
Bayesian analyses of seasonal runoff forecasts
NASA Astrophysics Data System (ADS)
Krzysztofowicz, R.; Reese, S.
1991-12-01
Forecasts of seasonal snowmelt runoff volume provide indispensable information for rational decision making by water project operators, irrigation district managers, and farmers in the western United States. Bayesian statistical models and communication frames have been researched in order to enhance the forecast information disseminated to the users, and to characterize forecast skill from the decision maker's point of view. Four products are presented: (i) a Bayesian Processor of Forecasts, which provides a statistical filter for calibrating the forecasts, and a procedure for estimating the posterior probability distribution of the seasonal runoff; (ii) the Bayesian Correlation Score, a new measure of forecast skill, which is related monotonically to the ex ante economic value of forecasts for decision making; (iii) a statistical predictor of monthly cumulative runoffs within the snowmelt season, conditional on the total seasonal runoff forecast; and (iv) a framing of the forecast message that conveys the uncertainty associated with the forecast estimates to the users. All analyses are illustrated with numerical examples of forecasts for six gauging stations from the period 1971-1988.
NASA Astrophysics Data System (ADS)
GE, J.; Dong, H.; Liu, H.; Luo, W.
2016-12-01
In extreme sea conditions and deep-sea detection, the towed Overhauser marine magnetic sensor is easily affected by the magnetic noise associated with ocean waves. We demonstrate the reduction of this magnetic noise by a Sage-Husa adaptive Kalman filter. Based on Weaver's model, we analyze in detail the induced magnetic field variations associated with different ocean depths, wave periods and amplitudes. Furthermore, we take advantage of the classic Kalman filter to reduce the magnetic noise and improve the signal-to-noise ratio of the magnetic anomaly data. In practical marine magnetic surveys, extreme sea conditions can change the prior statistics of the noise and may degrade the Kalman filtering estimate. To solve this problem, an improved Sage-Husa adaptive filtering algorithm is used to reduce the dependence on the prior statistics. In addition, we implemented a towed Overhauser marine magnetometer (Figure 1) to test the proposed method; it consists of a towfish, an Overhauser total field sensor, a console, and other condition monitoring sensors. Overall, comparisons of simulation experiments with and without the filter show that the power spectral density of the magnetic noise is reduced from 1 nT/√Hz to 0.1 nT/√Hz at 1 Hz. The contrast between the Sage-Husa filter and the classic Kalman filter (Figure 2) shows that the filtering accuracy and adaptive capacity are improved.
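One common scalar form of the Sage-Husa idea, recursively adapting the measurement-noise variance inside a Kalman filter with a fading factor, is sketched below. This is a textbook-style approximation under a random-walk signal model, not necessarily the exact recursion used in the survey system; the process noise q, initial r0 and fading factor b are assumptions.

```python
# Hedged sketch: scalar Kalman filter with Sage-Husa-style adaptation of the
# measurement-noise variance R (random-walk state model assumed).
import numpy as np

def sage_husa_1d(z, q=1e-4, r0=1.0, b=0.98):
    z = np.asarray(z, float)
    x, p, r = z[0], 1.0, r0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                                    # predict (x_k = x_{k-1} + w, var(w) = q)
        e = zk - x                                   # innovation
        d = (1 - b) / (1 - b ** (k + 1))             # fading weight
        r = max((1 - d) * r + d * (e * e - p), 1e-9) # adaptive measurement-noise estimate
        k_gain = p / (p + r)
        x = x + k_gain * e
        p = (1 - k_gain) * p
        out[k] = x
    return out

z = 5.0 + np.random.default_rng(9).normal(scale=0.5, size=500)   # stand-in noisy field reading
x_hat = sage_husa_1d(z)
```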
Spatial mode filters realized with multimode interference couplers
NASA Astrophysics Data System (ADS)
Leuthold, J.; Hess, R.; Eckner, J.; Besse, P. A.; Melchior, H.
1996-06-01
Spatial mode filters based on multimode interference couplers (MMI's) that offer the possibility of splitting off antisymmetric from symmetric modes are presented, and realizations of these filters in InGaAsP / InP are demonstrated. Measured suppression of the antisymmetric first-order modes at the output for the symmetric mode is better than 18 dB. Such MMI's are useful for monolithically integrating mode filters with all-optical devices, which are controlled through an antisymmetric first-order mode. The filtering out of optical control signals is necessary for cascading all-optical devices. Another application is the improvement of on-off ratios in optical switches.
NASA Astrophysics Data System (ADS)
Jia, Xiaodong; Zhao, Ming; Di, Yuan; Jin, Chao; Lee, Jay
2017-01-01
The Minimum Entropy Deconvolution (MED) filter, a non-parametric approach to impulsive signature detection, has been widely studied recently. Although the merits of the MED filter are manifold, this method tends to over-highlight the dominant peaks, and its performance becomes less stable when strong noise exists. In order to better understand the behavior of the MED filter, this study first investigates the mathematical fundamentals of the MED filter and then explains why the MED filter tends to over-highlight the dominant peaks. In order to pursue finer solutions for weak impulsive signature enhancement, the Convolutional Sparse Filter (CSF) is proposed in this work and its derivation is presented in detail. The superiority of the proposed CSF over the MED filter is validated with both simulated data and experimental data. The results demonstrate that the CSF is an effective method for impulsive signature enhancement that could be applied in rotating machines for incipient fault detection.
Herbst, Daniel P
2014-09-01
Micropore filters are used during extracorporeal circulation to prevent gaseous and solid particles from entering the patient's systemic circulation. Although these devices improve patient safety, limitations in current designs have prompted the development of a new concept in micropore filtration. A prototype of the new design was made using 40-μm filter screens and compared against four commercially available filters for performance in pressure loss and gross air handling. Pre- and postfilter bubble counts for 5- and 10-mL bolus injections in an ex vivo test circuit were recorded using a Doppler ultrasound bubble counter. Statistical analysis of results for bubble volume reduction between test filters was performed with one-way repeated-measures analysis of variance using Bonferroni post hoc tests. Changes in filter performance with changes in microbubble load were also assessed with dependent t tests using the 5- and 10-mL bolus injections as the paired sample for each filter. Significance was set at p < .05. All filters in the test group were comparable in pressure loss performance, showing a range of 26-33 mmHg at a flow rate of 6 L/min. In gross air-handling studies, the prototype showed improved bubble volume reduction, reaching statistical significance with three of the four commercial filters. All test filters showed decreased performance in bubble volume reduction when the microbubble load was increased. Findings from this research support the underpinning theories of a sequential arterial-line filter design and suggest that improvements in microbubble filtration may be possible using this technique.
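The paired comparison described above (5- versus 10-mL bolus for the same filter) corresponds to a dependent t-test, which can be run with SciPy as sketched below; the bubble-volume-reduction numbers are hypothetical and used only to show the call.

```python
# Sketch of the dependent (paired) t-test used to compare 5- and 10-mL bolus loads.
import numpy as np
from scipy import stats

reduction_5ml = np.array([0.92, 0.88, 0.90, 0.85, 0.91])    # hypothetical per-filter values
reduction_10ml = np.array([0.84, 0.80, 0.83, 0.78, 0.85])

t, p = stats.ttest_rel(reduction_5ml, reduction_10ml)
print(f"t = {t:.2f}, p = {p:.4f}, significant at p < .05: {p < 0.05}")
```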
Angland, P.; Haberberger, D.; Ivancic, S. T.; ...
2017-10-30
Here, a new method of analysis for angular filter refractometry images was developed to characterize laser-produced, long-scale-length plasmas using an annealing algorithm to iteratively converge upon a solution. Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison is optimized. The optimization and statistical uncertainty calculation are based on a minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5-10% in the region of interest.
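The optimization loop described above, annealing an 8-parameter profile until a χ² mismatch is minimized, can be sketched with SciPy's annealing optimizer. The forward model `synth_afr`, the measured image, the per-pixel uncertainty and the parameter bounds below are all hypothetical placeholders, not the authors' model.

```python
# Hedged sketch: simulated-annealing minimization of a chi-squared image mismatch.
import numpy as np
from scipy.optimize import dual_annealing

measured = np.random.default_rng(6).random((64, 64))   # stand-in AFR image
sigma = 0.05                                            # assumed per-pixel uncertainty

def synth_afr(params, shape=(64, 64)):
    """Hypothetical forward model: 8 parameters -> synthetic AFR image."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    a, b, c, d, e, f, g, h = params
    return (a * np.exp(-((xx - b) ** 2 + (yy - c) ** 2) / (2 * d ** 2))
            + e + f * xx + g * yy + h * xx * yy)

def chi2(params):
    return np.sum((synth_afr(params) - measured) ** 2) / sigma ** 2

bounds = [(0, 2), (0, 63), (0, 63), (1, 40),
          (-1, 1), (-0.1, 0.1), (-0.1, 0.1), (-0.01, 0.01)]
result = dual_annealing(chi2, bounds, maxiter=200, seed=7)
print(result.x, result.fun)
```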
Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics
Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter
2010-01-01
Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
Statistics based sampling for controller and estimator design
NASA Astrophysics Data System (ADS)
Tenne, Dirk
The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation is threefold and addresses the aforementioned topics: nonlinear estimation, target tracking and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher-order accuracy. The so-called unscented transformation has been extended to capture higher-order moments. Furthermore, higher-order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. Furthermore, the unscented transformation has been utilized to develop a three-dimensional geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three-dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant-velocity filter by utilizing the Covariance Intersection. This combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight-line maneuvers. The third part of this dissertation addresses the design of controllers that incorporate knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points calculated by the unscented transformation. This set of points is used to design robust controllers that minimize a statistical performance measure of the plant, consisting of a combination of the mean and variance, over the domain of uncertainty. The proposed technique is illustrated on three benchmark problems. The first relates to the design of prefilters for a linear and a nonlinear spring-mass-dashpot system, and the second applies a feedback controller to a hovering helicopter. Lastly, the statistically robust controller design is applied to a concurrent feed-forward/feedback controller structure for a high-speed, low-tension tape drive.
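For reference, the basic (second-order) unscented transformation that this work extends can be sketched as below: generate sigma points, push them through a nonlinearity, and recover the transformed mean and covariance. The example mapping (polar to Cartesian) and the kappa value are arbitrary illustrations.

```python
# Sketch of the basic unscented transformation (sigma points, weighted moments).
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)           # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])   # 2n + 1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])                 # propagate points through f
    y_mean = w @ y
    y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

m, P = np.array([1.0, 0.5]), np.diag([0.1, 0.2])
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])  # polar -> cartesian
print(unscented_transform(m, P, f, kappa=1.0))
```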
Machine Learning in the Presence of an Adversary: Attacking and Defending the SpamBayes Spam Filter
2008-05-20
Machine learning techniques are often used for decision making in security critical applications such as intrusion detection and spam filtering...filter. The defenses shown in this thesis are able to work against the attacks developed against SpamBayes and are sufficiently generic to be easily extended into other statistical machine learning algorithms.
Kalman filter for statistical monitoring of forest cover across sub-continental regions
Raymond L. Czaplewski
1991-01-01
The Kalman filter is a multivariate generalization of the composite estimator which recursively combines a current direct estimate with a past estimate that is updated for expected change over time with a prediction model. The Kalman filter can estimate proportions of different cover types for sub-continental regions each year. A random sample of high-resolution...
Statistical, Graphical, and Learning Methods for Sensing, Surveillance, and Navigation Systems
2016-06-28
harsh propagation environments. Conventional filtering techniques fail to provide satisfactory performance in many important nonlinear or non...Gaussian scenarios. In addition, there is a lack of a unified methodology for the design and analysis of different filtering techniques. To address...these problems, we have proposed a new filtering methodology called belief condensation (BC)
NASA Astrophysics Data System (ADS)
Chen, Chaochao; Vachtsevanos, George; Orchard, Marcos E.
2012-04-01
Machine prognosis can be considered as the generation of long-term predictions that describe the evolution in time of a fault indicator, with the purpose of estimating the remaining useful life (RUL) of a failing component/subsystem so that timely maintenance can be performed to avoid catastrophic failures. This paper proposes an integrated RUL prediction method using adaptive neuro-fuzzy inference systems (ANFIS) and high-order particle filtering, which forecasts the time evolution of the fault indicator and estimates the probability density function (pdf) of RUL. The ANFIS is trained and integrated in a high-order particle filter as a model describing the fault progression. The high-order particle filter is used to estimate the current state and carry out p-step-ahead predictions via a set of particles. These predictions are used to estimate the RUL pdf. The performance of the proposed method is evaluated via the real-world data from a seeded fault test for a UH-60 helicopter planetary gear plate. The results demonstrate that it outperforms both the conventional ANFIS predictor and the particle-filter-based predictor where the fault growth model is a first-order model that is trained via the ANFIS.
NASA Astrophysics Data System (ADS)
Elamien, Mohamed B.; Mahmoud, Soliman A.
2018-03-01
In this paper, a third-order elliptic lowpass filter is designed using a highly linear digitally programmable balanced OTA. The filter exhibits a cutoff frequency tuning range from 2.2 MHz to 7.1 MHz; thus, it covers the W-CDMA, UMTS and DVB-H standards. Programmability in the filter is achieved by using a digitally programmable operational transconductance amplifier (DPOTA). The DPOTA employs three linearization techniques: source degeneration, a double differential pair and adaptive biasing. Two current division networks (CDNs) are used to control the value of the transconductance. For the DPOTA, the third-order harmonic distortion (HD3) remains below -65 dB up to 0.4 V differential input voltage at a 1.2 V supply voltage. The DPOTA and the filter are designed and simulated in 90 nm CMOS technology with the LTspice simulator.
Spatio-Chromatic Adaptation via Higher-Order Canonical Correlation Analysis of Natural Images
Gutmann, Michael U.; Laparra, Valero; Hyvärinen, Aapo; Malo, Jesús
2014-01-01
Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis explains similarly chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and also to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation. PMID:24533049
Carriot, Jérome; Jamali, Mohsen; Cullen, Kathleen E; Chacron, Maurice J
2017-01-01
There is accumulating evidence that the brain's neural coding strategies are constrained by natural stimulus statistics. Here we investigated the statistics of the time varying envelope (i.e. a second-order stimulus attribute that is related to variance) of rotational and translational self-motion signals experienced by human subjects during everyday activities. We found that envelopes can reach large values across all six motion dimensions (~450 deg/s for rotations and ~4 G for translations). Unlike results obtained in other sensory modalities, the spectral power of envelope signals decreased slowly for low (< 2 Hz) and more sharply for high (>2 Hz) temporal frequencies and thus was not well-fit by a power law. We next compared the spectral properties of envelope signals resulting from active and passive self-motion, as well as those resulting from signals obtained when the subject is absent (i.e. external stimuli). Our data suggest that different mechanisms underlie deviation from scale invariance in rotational and translational self-motion envelopes. Specifically, active self-motion and filtering by the human body cause deviation from scale invariance primarily for translational and rotational envelope signals, respectively. Finally, we used well-established models in order to predict the responses of peripheral vestibular afferents to natural envelope stimuli. We found that irregular afferents responded more strongly to envelopes than their regular counterparts. Our findings have important consequences for understanding the coding strategies used by the vestibular system to process natural second-order self-motion signals.
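The envelope (second-order attribute) of a self-motion trace and its spectral power, of the kind analyzed above, can be estimated as sketched below; the angular-velocity trace, sampling rate and Hilbert-transform envelope are placeholder choices, not the study's recording pipeline.

```python
# Sketch: envelope of a self-motion signal via the Hilbert transform, and its PSD.
import numpy as np
from scipy.signal import hilbert, welch

fs = 100.0                                              # assumed sampling rate, Hz
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(8)
omega = rng.normal(scale=50, size=t.size) * (1 + 0.5 * np.sin(2 * np.pi * 0.3 * t))  # stand-in trace

envelope = np.abs(hilbert(omega - omega.mean()))        # time-varying envelope
f, pxx = welch(envelope - envelope.mean(), fs=fs, nperseg=4096)
# compare the decay of pxx below and above ~2 Hz, as discussed in the abstract
```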
Vila, Natalia; Siblini, Aya; Esposito, Evangelina; Bravo-Filho, Vasco; Zoroquiain, Pablo; Aldrees, Sultan; Logan, Patrick; Arias, Lluis; Burnier, Miguel N
2017-11-02
Light exposure, and more specifically the blue part of the spectrum, contributes to oxidative stress in age-related macular degeneration (AMD). The purpose of the study was to establish whether blue light filtering could modify proangiogenic signaling produced by retinal pigmented epithelial (RPE) cells under different conditions simulating risk factors for AMD. Three experiments were carried out in order to expose ARPE-19 cells to white light for 48 h with and without blue light-blocking filters (BLF) in different conditions. In each experiment one group was exposed to light with no BLF protection, a second group was exposed to light with BLF protection, and a control group was not exposed to light. The ARPE-19 cells used in each experiment prior to light exposure were cultured for 24 h as follows: Experiment 1) normoxia, Experiment 2) hypoxia, and Experiment 3) lutein-supplemented media in normoxia. The media of all groups were harvested after light exposure for sandwich ELISA-based assays to quantify 10 pro-angiogenic cytokines. A significant decrease in angiogenin secretion levels and a significant increase in bFGF were observed following light exposure, compared to dark conditions, in both normoxia and hypoxia conditions. With the addition of a blue light-blocking filter in normoxia, a significant increase in angiogenin levels was observed. Although statistical significance was not achieved, blue light filters reduce light-induced secretion of bFGF and VEGF to near-normal levels. This trend is also observed when ARPE-19 cells are grown under hypoxic conditions and when pre-treated with lutein prior to exposure to experimental conditions. Following light exposure, there is a decrease in angiogenin secretion by ARPE-19 cells, which was abrogated with a blue light-blocking filter. Our findings support the position that blue light filtering affects the secretion of angiogenic factors by retinal pigmented epithelial cells under normoxic, hypoxic, and lutein-pretreated conditions in a similar manner.
Attitude determination using an adaptive multiple model filtering Scheme
NASA Technical Reports Server (NTRS)
Lam, Quang; Ray, Surendra N.
1995-01-01
Attitude determination has been a permanent topic of active research and perhaps remains a lasting interest for spacecraft system designers. Its role is to provide a reference for controls such as pointing the directional antennas or solar panels, stabilizing the spacecraft, or maneuvering the spacecraft to a new orbit. The Least Squares Estimation (LSE) technique was utilized to provide attitude determination for Nimbus 6 and G. Despite its poor performance (in terms of estimation accuracy), LSE was considered an effective and practical approach to meet the urgent need and requirements back in the 70's. One reason for the poor performance of the LSE scheme is the lack of dynamic filtering or 'compensation'. In other words, the scheme is based totally on the measurements, and no attempt is made to model the dynamic equations of motion of the spacecraft. We propose an adaptive filtering approach which employs a bank of Kalman filters to perform robust attitude estimation. The proposed approach, whose architecture is depicted, is essentially based on the latest results on the interacting multiple model design framework to handle unknown system noise characteristics or statistics. The concept fundamentally employs a bank of Kalman filters or submodels; instead of using fixed values for the system noise statistics for each submodel (per operating condition) as the traditional multiple model approach does, we use an on-line dynamic system noise identifier to 'identify' the system noise level (statistics) and update the filter noise statistics using 'live' information from the sensor model. The advanced noise identifier, whose architecture is also shown, is implemented using an advanced system identifier. To ensure robust performance, the proposed advanced system identifier is further reinforced by a learning system, implemented (in the outer loop) using neural networks, to identify other unknown quantities such as spacecraft dynamics parameters, gyro biases, dynamic disturbances, or environment variations.
Identification of a Class of Filtered Poisson Processes.
1981-01-01
AD-A135 371, North Carolina Univ. at Chapel Hill, Dept. of Statistics; De Brucq, Denis; Gualtierotti, Antonio. ... A class of filtered Poisson processes is introduced: the amplitude has a law which is spherically invariant and the filter is real, linear and causal. It is shown ...
Parallelizable 3D statistical reconstruction for C-arm tomosynthesis system
NASA Astrophysics Data System (ADS)
Wang, Beilei; Barner, Kenneth; Lee, Denny
2005-04-01
Clinical diagnosis and security detection tasks increasingly require 3D information, which is difficult or impossible to obtain from 2D (two-dimensional) radiographs. As a 3D (three-dimensional) radiographic and non-destructive imaging technique, digital tomosynthesis is especially suited to cases where 3D information is required while a complete projection data set is not available. Nowadays, FBP (filtered back projection) is extensively used in industry for its speed and simplicity. However, it is hard to deal with situations where only a limited number of projections from constrained directions are available, or where the SNR (signal-to-noise ratio) of the projections is low. In order to deal with noise and take into account a priori information about the object, a statistical image reconstruction method is described based on the acquisition model of X-ray projections. We formulate a ML (maximum likelihood) function for this model and develop an ordered-subsets iterative algorithm to estimate the unknown attenuation of the object. Simulations show that satisfactory results can be obtained after 1 to 2 iterations, and after that there is no significant improvement in image quality. An adaptive Wiener filter is also applied to the reconstructed image to remove its noise. Some approximations to speed up the reconstruction computation are also considered. Applying this method to computer-generated projections of a revised Shepp phantom and to true projections from diagnostic radiographs of a patient's hand and from mammography images yields reconstructions of impressive quality. Parallel programming is also implemented and tested. The quality of the reconstructed object is preserved, while the computation time is reduced by almost a factor equal to the number of threads used.
CRISM Hyperspectral Data Filtering with Application to MSL Landing Site Selection
NASA Astrophysics Data System (ADS)
Seelos, F. P.; Parente, M.; Clark, T.; Morgan, F.; Barnouin-Jha, O. S.; McGovern, A.; Murchie, S. L.; Taylor, H.
2009-12-01
We report on the development and implementation of a custom filtering procedure for Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) IR hyperspectral data that is suitable for incorporation into the CRISM Reduced Data Record (RDR) calibration pipeline. Over the course of the Mars Reconnaissance Orbiter (MRO) Primary Science Phase (PSP) and the ongoing Extended Science Phase (ESP) CRISM has operated with an IR detector temperature between ~107 K and ~127 K. This ~20 K range in operational temperature has resulted in variable data quality, with observations acquired at higher detector temperatures exhibiting a marked increase in both systematic and stochastic noise. The CRISM filtering procedure consists of two main data processing capabilities. The primary systematic noise component in CRISM IR data appears as along track or column oriented striping. This is addressed by the robust derivation and application of an inter-column ratio correction frame. The correction frame is developed through the serial evaluation of band specific column ratio statistics and so does not compromise the spectral fidelity of the image cube. The dominant CRISM IR stochastic noise components appear as isolated data spikes or column oriented segments of variable length with erroneous data values. The non-systematic noise is identified and corrected through the application of an iterative-recursive kernel modeling procedure which employs a formal statistical outlier test as the iteration control and recursion termination criterion. This allows the filtering procedure to make a statistically supported determination between high frequency (spatial/spectral) signal and high frequency noise based on the information content of a given multidimensional data kernel. The governing statistical test also allows the kernel filtering procedure to be self regulating and adaptive to the intrinsic noise level in the data. The CRISM IR filtering procedure is scheduled to be incorporated into the next augmentation of the CRISM IR calibration (version 3). The filtering algorithm will be applied to the I/F data (IF) delivered to the Planetary Data System (PDS), but the radiance on sensor data (RA) will remain unfiltered. The development of CRISM hyperspectral analysis products in support of the Mars Science Laboratory (MSL) landing site selection process has motivated the advance of CRISM-specific data processing techniques. The quantitative results of the CRISM IR filtering procedure as applied to CRISM observations acquired in support of MSL landing site selection will be presented.
The performance of the spatiotemporal Kalman filter and LORETA in seizure onset localization.
Hamid, Laith; Sarabi, Masoud; Japaridze, Natia; Wiegand, Gert; Heute, Ulrich; Stephani, Ulrich; Galka, Andreas; Siniatchkin, Michael
2015-08-01
The assumption of spatial-smoothness is often used to solve the bioelectric inverse problem during electroencephalographic (EEG) source imaging, e.g., in low resolution electromagnetic tomography (LORETA). Since the EEG data show a temporal structure, the combination of the temporal-smoothness and the spatial-smoothness constraints may improve the solution of the EEG inverse problem. This study investigates the performance of the spatiotemporal Kalman filter (STKF) method, which is based on spatial and temporal smoothness, in the localization of a focal seizure's onset and compares its results to those of LORETA. The main finding of the study was that the STKF with an autoregressive model of order two significantly outperformed LORETA in the accuracy and consistency of the localization, provided that the source space consists of a whole-brain volumetric grid. In the future, these promising results will be confirmed using data from more patients and performing statistical analyses on the results. Furthermore, the effects of the temporal smoothness constraint will be studied using different types of focal seizures.
Cigarette characteristic and emission variations across high-, middle- and low-income countries.
O'Connor, R J; Wilkins, K J; Caruso, R V; Cummings, K M; Kozlowski, L T
2010-12-01
The public health burden of tobacco use is shifting to the developing world, and the tobacco industry may apply some of its successful marketing tactics, such as allaying health concerns with product modifications. This study used standard smoking machine tests to examine the extent to which the industry is introducing engineering features that reduce tar and nicotine to cigarettes sold in middle- and low-income countries. Multicountry observational study. Cigarettes from 10 different countries were purchased in 2005 and 2007 with low-, middle- and high-income countries identified using the World Bank's per capita gross national income metric. Physical measurements of each brand were tested, and tobacco moisture and weight, paper porosity, filter ventilation and pressure drop were analysed. Tar, nicotine and carbon monoxide emission levels were determined for each brand using International Organization for Standardization and Canadian Intensive methods. Statistical analyses were performed using Statistical Package for the Social Sciences. Among cigarette brands with filters, more brands were ventilated in high-income countries compared with middle- and low-income countries [χ(2)(4)=25.92, P<0.001]. Low-income brands differed from high- and middle-income brands in engineering features such as filter density, ventilation and paper porosity, while tobacco weight and density measures separated the middle- and high-income groups. Smoke emissions differed across income groups, but these differences were largely negated when one accounted for design features. This study showed that as a country's income level increases, cigarettes become more highly engineered and the emissions levels decrease. In order to reduce the burden of tobacco-related disease and further effective product regulation, health officials must understand cigarette design and function within and between countries. Copyright © 2010 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Three-dimensional seismic depth migration
NASA Astrophysics Data System (ADS)
Zhou, Hongbo
1998-12-01
One-pass 3-D modeling and migration for poststack seismic data may be implemented by replacing the traditional 45° one-way wave equation (a third-order partial differential equation) with a pair of second- and first-order partial differential equations. Except for an extra correction term, the resulting second-order equation has a form similar to Claerbout's 15° one-way wave equation, which is known to have a nearly circular horizontal impulse response. In this approach, there is no need to compensate for splitting errors. Numerical tests on synthetic data show that this algorithm has the desirable attributes of being second-order in accuracy and economical to solve. A modification of the Crank-Nicholson implementation maintains stability. Absorbing boundary conditions play an important role in one-way wave extrapolations by reducing reflections at grid edges. Clayton and Engquist's 2-D absorbing boundary conditions for one-way wave extrapolation by depth-stepping in the frequency domain are extended to 3-D using paraxial approximations of the scalar wave equation. Internal consistency is retained by incorporating the interior extrapolation equation with the absorbing boundary conditions. Numerical schemes are designed to make the proposed absorbing boundary conditions both mathematically correct and efficient with negligible extra cost. Synthetic examples illustrate the effectiveness of the algorithm for extrapolation with the 3-D 45° one-way wave equation. Frequency-space domain Butterworth and Chebyshev dip filters are implemented. By regrouping the product terms in the filter transfer function into summations, a cascaded (serial) Butterworth dip filter can be made parallel. A parallel Chebyshev dip filter can be similarly obtained, and has the same form as the Butterworth filter but different coefficients. One of the advantages of the Chebyshev filter is that it has a sharper transition zone than that of a Butterworth filter of the same order. Both filters are incorporated into 3-D one-way frequency-space depth migration for evanescent energy removal and for phase compensation of splitting errors; a single filter achieves both goals. Synthetic examples illustrate the behavior of the parallel filters. For a given order of filter, the cost of the Butterworth and Chebyshev filters is the same. A Chebyshev filter is more effective for phase compensation than the Butterworth filter of the same order, at the expense of some wavenumber-dependent amplitude ripples. An analytical formula for geometrical spreading is derived for a horizontally layered transversely isotropic medium with a vertical symmetry axis. Under this expression, geometrical spreading can be determined from only the anisotropic parameters in the first layer, the traveltime derivatives, and the source-receiver offset. An explicit, numerically feasible expression for geometrical spreading can be further obtained by considering some of the special cases of transverse isotropy, such as weak anisotropy or elliptic anisotropy. Therefore, with the techniques of non-hyperbolic moveout for transversely isotropic media, geometrical spreading can be calculated using picked traveltimes of primary P-wave reflections without having to know the actual parameters in the deeper subsurface; no ray tracing is needed.
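The sharper transition zone of a Chebyshev design of the same order can be illustrated with a hedged sketch using SciPy's standard digital filter design routines (this is generic filter-design code, not the author's frequency-space dip-filter implementation; the order, ripple and cutoff are assumptions):

    import numpy as np
    from scipy.signal import butter, cheby1, freqz

    order, cutoff = 6, 0.4                      # normalized cutoff (Nyquist = 1)
    b_b, a_b = butter(order, cutoff)            # Butterworth low-pass
    b_c, a_c = cheby1(order, 0.5, cutoff)       # Chebyshev I, 0.5 dB ripple

    w, h_b = freqz(b_b, a_b)
    _, h_c = freqz(b_c, a_c)

    # At the same order, the Chebyshev response falls off faster just past
    # the cutoff, at the cost of passband ripple.
    k = np.searchsorted(w / np.pi, 0.5)
    print(20 * np.log10(abs(h_b[k])), 20 * np.log10(abs(h_c[k])))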
NASA Technical Reports Server (NTRS)
Barry, R. K.; Satyapal, S.; Greenhouse, M. A.; Barclay, R.; Amato, D.; Arritt, B.; Brown, G.; Harvey, V.; Holt, C.; Kuhn, J.
2000-01-01
We discuss work in progress on a near-infrared tunable bandpass filter for the Goddard baseline wide field camera concept of the Next Generation Space Telescope (NGST) Integrated Science Instrument Module (ISIM). This filter, the Demonstration Unit for Low Order Cryogenic Etalon (DULCE), is designed to demonstrate a high efficiency scanning Fabry-Perot etalon operating in interference orders 1 - 4 at 30K with a high stability DSP based servo control system. DULCE is currently the only available tunable filter for lower order cryogenic operation in the near infrared. In this application, scanning etalons will illuminate the focal plane arrays with a single order of interference to enable wide field lower resolution hyperspectral imaging over a wide range of redshifts. We discuss why tunable filters are an important instrument component in future space-based observatories.
Nonlinear Attitude Filtering Methods
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Crassidis, John L.; Cheng, Yang
2005-01-01
This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST, the super-iterated extended Kalman filter, the interlaced extended Kalman filter, and the second-order Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A two-step approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, including particle filters and a Bayesian filter based on a non-Gaussian, finite-parameter probability density function on SO(3). Finally, the predictive filter, nonlinear observers and adaptive approaches are shown. The strengths and weaknesses of the various approaches are discussed.
DOA-informed source extraction in the presence of competing talkers and background noise
NASA Astrophysics Data System (ADS)
Taseska, Maja; Habets, Emanuël A. P.
2017-12-01
A desired speech signal in hands-free communication systems is often degraded by noise and interfering speech. Even though the number and locations of the interferers are often unknown in practice, it is justified to assume in certain applications that the direction-of-arrival (DOA) of the desired source is approximately known. Using the known DOA, fixed spatial filters such as the delay-and-sum beamformer can be steered to extract the desired source. However, it is well-known that fixed data-independent spatial filters do not provide sufficient reduction of directional interferers. Instead, the DOA information can be used to estimate the statistics of the desired and the undesired signals and to compute optimal data-dependent spatial filters. One way the DOA is exploited for optimal spatial filtering in the literature, is by designing DOA-based narrowband detectors to determine whether a desired or an undesired signal is dominant at each time-frequency (TF) bin. Subsequently, the statistics of the desired and the undesired signals can be estimated during the TF bins where the respective signal is dominant. In a similar manner, a Gaussian signal model-based detector which does not incorporate DOA information has been used in scenarios where the undesired signal consists of stationary background noise. However, when the undesired signal is non-stationary, resulting for example from interfering speakers, such a Gaussian signal model-based detector is unable to robustly distinguish desired from undesired speech. To this end, we propose a DOA model-based detector to determine the dominant source at each TF bin and estimate the desired and undesired signal statistics. We demonstrate that data-dependent spatial filters that use the statistics estimated by the proposed framework achieve very good undesired signal reduction, even when using only three microphones.
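A minimal sketch of the statistics-estimation idea described above: TF bins flagged as undesired-dominant update an undesired-signal spatial covariance matrix, from which a data-dependent (MVDR-like) spatial filter is computed. The code is illustrative NumPy only; the DOA model-based detector itself is not shown, and the steering vector, forgetting factor and microphone count are assumptions:

    import numpy as np

    def update_cov(Phi, y, alpha=0.95):
        # Recursive update of a spatial covariance matrix with the STFT vector y.
        return alpha * Phi + (1.0 - alpha) * np.outer(y, y.conj())

    def mvdr_weights(Phi_u, steer):
        # w = Phi_u^{-1} d / (d^H Phi_u^{-1} d), with d the desired-source steering vector.
        num = np.linalg.solve(Phi_u, steer)
        return num / (steer.conj() @ num)

    mics = 3
    Phi_u = np.eye(mics, dtype=complex) * 1e-3
    steer = np.ones(mics, dtype=complex)        # assumed desired-source steering vector

    for _ in range(200):                        # TF bins flagged as undesired-dominant
        y = (np.random.randn(mics) + 1j * np.random.randn(mics)) / np.sqrt(2)
        Phi_u = update_cov(Phi_u, y)

    w = mvdr_weights(Phi_u, steer)
    print(np.abs(w.conj() @ steer))             # distortionless response, approximately 1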
NASA Technical Reports Server (NTRS)
Takacs, L. L.; Kalnay, E.; Navon, I. M.
1985-01-01
A normal modes expansion technique is applied to perform high latitude filtering in the GLAS fourth order global shallow water model with orography. The maximum permissible time step in the solution code is controlled by the frequency of the fastest propagating mode, which can be a gravity wave. Numerical methods are defined for filtering the data to identify the number of gravity modes to be included in the computations in order to obtain the appropriate zonal wavenumbers. The performances of the model with and without the filter, and with a time tendency and a prognostic field filter are tested with simulations of the Northern Hemisphere winter. The normal modes expansion technique is shown to leave the Rossby modes intact and permit 3-5 day predictions, a range not possible with the other high-latitude filters.
Discrete filtering techniques applied to sequential GPS range measurements
NASA Technical Reports Server (NTRS)
Vangraas, Frank
1987-01-01
The basic navigation solution is described for position and velocity based on range and delta range (Doppler) measurements from NAVSTAR Global Positioning System satellites. The application of discrete filtering techniques is examined to reduce the white noise distortions on the sequential range measurements. A second order (position and velocity states) Kalman filter is implemented to obtain smoothed estimates of range by filtering the dynamics of the signal from each satellite separately. Test results using a simulated GPS receiver show a steady-state noise reduction, the input noise variance divided by the output noise variance, of a factor of four. Recommendations for further noise reduction based on higher order Kalman filters or additional delta range measurements are included.
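A two-state (range and range-rate) Kalman filter of the kind described can be sketched as follows; this is a generic constant-velocity filter with assumed noise levels and update interval, not the parameters of the cited study:

    import numpy as np

    dt = 1.0                                    # assumed measurement interval [s]
    F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])                  # only range is measured
    Q = np.diag([0.01, 0.1])                    # assumed process noise
    R = np.array([[25.0]])                      # assumed range noise variance

    x = np.array([[0.0], [0.0]])                # state: [range, range-rate]
    P = np.eye(2) * 1e3

    def kf_step(x, P, z):
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new range measurement z.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        return x, P

    for z in [100.0, 104.8, 110.1, 115.2]:      # toy range measurements
        x, P = kf_step(x, P, z)
    print(x.ravel())                            # smoothed range and range-rate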
NASA Technical Reports Server (NTRS)
1981-01-01
The application of statistical methods to recorded ozone measurements is described. A long-term depletion of ozone at the magnitudes predicted by the NAS would be harmful to most forms of life. Empirical prewhitening filters, whose derivation is independent of the underlying physical mechanisms, were analyzed. Statistical analysis provides a checks-and-balances function. Time series filtering separates variations into systematic and random parts, errors are uncorrelated, and significant phase-lag dependencies are identified. The use of time series modeling to enhance the capability of detecting trends is discussed.
2016-09-08
Accuracy Conserving (SIAC) filter when applied to nonuniform meshes; 2) theoretical and numerical demonstration of the 2k+1 order accuracy of the SIAC... establishing a more theoretical and numerical understanding of a computationally efficient scaling for the SIAC filter for nonuniform meshes [7]; 2... Li, "SIAC Filtering of DG Methods – Boundary and Nonuniform Mesh", International Conference on Spectral and Higher Order Methods (ICOSAHOM
2017-01-09
Introduction • Filtering schemes offer a less...dissipative alternative to the standard artificial dissipation operators when applied to high-order spatial/temporal schemes • Limiting Fact: Filters impart...systems require a preconditioned dual-time framework to be solved efficiently • Limiting Fact: Filtering cannot be applied only at the physical-time
Adaptive Numerical Dissipative Control in High Order Schemes for Multi-D Non-Ideal MHD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, B.
2004-01-01
The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that include viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free of numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multi-resolution wavelets (WAV) (for the above types of flow feature). These filter approaches also provide a natural and efficient way for the minimization of Div(B) numerical error. The filter scheme consists of spatially sixth order or higher non-dissipative spatial difference operators as the base scheme for the inviscid flux derivatives. If necessary, a small amount of high order linear dissipation is used to remove spurious high frequency oscillations. For example, an eighth-order centered linear dissipation (AD8) might be included in conjunction with a spatially sixth-order base scheme. The inviscid difference operator is applied twice for the viscous flux derivatives. After the completion of a full time step of the base scheme step, the solution is adaptively filtered by the product of a 'flow detector' and the 'nonlinear dissipative portion' of a high-resolution shock-capturing scheme. In addition, the scheme independent wavelet flow detector can be used in conjunction with spatially compact, spectral or spectral element type of base schemes. The ACM and wavelet filter schemes using the dissipative portion of a second-order shock-capturing scheme with sixth-order spatial central base scheme for both the inviscid and viscous MHD flux derivatives and a fourth-order Runge-Kutta method are denoted.
Spectroscopic imaging using acousto-optic tunable filters
NASA Astrophysics Data System (ADS)
Bouhifd, Mounir; Whelan, Maurice
2007-07-01
We report on novel hyper-spectral imaging filter-modules based on acousto-optic tuneable filters (AOTF). The AOTF functions as a full-field tuneable bandpass filter which offers fast continuous or random access tuning with high filtering efficiency. Due to the diffractive nature of the device, the unfiltered zero-order and the filtered first-order images are geometrically separated. The modules developed exploit this feature to simultaneously route both the transmitted white-light image and the filtered fluorescence image to two separate cameras. Incorporation of prisms in the optical paths and careful design of the relay optics in the filter module have overcome a number of aberrations inherent to imaging through AOTFs, leading to excellent spatial resolution. A number of practical uses of this technique, both for in vivo auto-fluorescence endoscopy and in vitro fluorescence microscopy were demonstrated. We describe the operational principle and design of recently improved prototype instruments for fluorescence-based diagnostics and demonstrate their performance by presenting challenging hyper-spectral fluorescence imaging applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baers, L.B.; Gutierrez, T.R.; Mendoza, R.A.
1993-08-01
The second (conventional variance or Campbell signal) ⟨x²⟩, the third ⟨x³⟩, and the modified fourth order ⟨x⁴⟩ − 3⟨x²⟩², etc., central signal moments associated with the amplified (K) and filtered currents [i₁, i₂, x = K·(i₂ − ⟨i₂⟩)] from two electrodes of an ex-core neutron sensitive fission detector have been measured versus the reactor power of the 1 MW TRIGA reactor in Mexico City. Two channels of a high speed (400 kHz) multiplexing data sampler and A/D converter with 12 bit resolution and a one-megaword buffer memory were used. The data were further retrieved into a PC, and estimates for auto- and cross-correlation moments up to the fifth order, coherence (⟨x₁x₂⟩/√(⟨x₁²⟩⟨x₂²⟩)), skewness (⟨x³⟩/(√⟨x²⟩)³), excess (⟨x⁴⟩/⟨x²⟩² − 3), etc., were calculated off-line. A five-mode operation of the detector was achieved, including the conventional counting rates and currents, in agreement with the theory and the authors' previous results with analogue techniques. The signals were proportional to the neutron flux and reactor power in some flux ranges. The suppression of background noise is improved and the lower limit of the measurement range is extended as the order of the moment is increased, in agreement with the theory. On the other hand, the statistical uncertainty is increased. At increasing flux levels it was statistically more difficult to obtain flux estimates based on the higher order (≥3) moments.
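As a brief generic sketch (not the original off-line analysis code), the higher-order central moments and the derived skewness and excess quantities named above can be estimated from a sampled signal as follows; the signal here is a placeholder:

    import numpy as np

    x = np.random.normal(size=100000)           # placeholder for the sampled current
    xc = x - x.mean()                           # centred signal

    m2 = np.mean(xc**2)                         # second central moment (variance)
    m3 = np.mean(xc**3)                         # third central moment
    m4 = np.mean(xc**4) - 3.0 * m2**2           # modified fourth-order moment

    skewness = m3 / m2**1.5
    excess   = np.mean(xc**4) / m2**2 - 3.0
    print(m2, m3, m4, skewness, excess)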
A high temperature superconductor notch filter for the Sardinia Radio Telescope
NASA Astrophysics Data System (ADS)
Bolli, Pietro; Cresci, Luca; Huang, Frederick; Mariotti, Sergio; Panella, Dario
2018-04-01
A High Temperature Superconductor filter operating in the C-band between 4200 and 5600 MHz has been developed for one of the radio astronomical receivers of the Sardinia Radio Telescope. The motivation was to attenuate an interference from a weather radar at 5640 MHz, whose power level exceeds the linear region of the first active stages of the receiver. A very sharp transition after the nominal maximum passband frequency is reached by combining a 6th order band-pass filter with a 6th order stop-band. This solution is competitive with an alternative layout based on a cascaded triplet filter. Three units of the filter have been measured with two different calibration approaches to investigate pros and cons of each, and data repeatability. The final performance figures of the filters are: ohmic losses of the order of 0.15-0.25 dB, matching better than -15 dB, and -30 dB attenuation at 5640 MHz. Finally, a more accurate model of the connection between external connector and microstrip shows a better agreement between simulations and experimental data.
Constrained State Estimation for Individual Localization in Wireless Body Sensor Networks
Feng, Xiaoxue; Snoussi, Hichem; Liang, Yan; Jiao, Lianmeng
2014-01-01
Wireless body sensor networks based on ultra-wideband radio have recently received much research attention due to their wide applications in health care, security, sports and entertainment. Accurate localization is a fundamental problem in realizing the development of such effective location-aware applications. In this paper the problem of constrained state estimation for individual localization in wireless body sensor networks is addressed. A priori knowledge about the geometry among the on-body nodes is incorporated as an additional constraint into the traditional filtering system. The analytical expression for state estimation with a linear constraint, which exploits the additional information, is derived. Furthermore, for a nonlinear constraint, first-order and second-order linearizations via Taylor series expansion are proposed to transform the nonlinear constraint to the linear case. Examples comparing the first-order and second-order nonlinear constrained filters based on the interacting multiple model extended Kalman filter (IMM-EKF) show that the second-order solution for higher order nonlinearity, as presented in this paper, outperforms the first-order solution, and that the constrained IMM-EKF obtains superior estimates compared with the IMM-EKF without the constraint. Another Brownian motion individual localization example also illustrates the effectiveness of constrained nonlinear iterative least squares (NILS), which achieves better filtering performance than NILS without the constraint. PMID:25390408
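A common form of the linear-equality-constraint step discussed above is the covariance-weighted projection of an unconstrained estimate x with covariance P onto the set {x : D x = d}. The sketch below uses this standard estimation-theory formula with illustrative variable names; it is not necessarily the exact expression derived in the paper:

    import numpy as np

    def project_onto_constraint(x, P, D, d):
        # Project the estimate onto {x : D x = d} using
        # x_c = x - P D^T (D P D^T)^{-1} (D x - d).
        S = D @ P @ D.T
        K = P @ D.T @ np.linalg.inv(S)
        return x - K @ (D @ x - d)

    x = np.array([1.2, 0.9, 2.1])               # unconstrained state estimate
    P = np.diag([0.5, 0.5, 1.0])                # its covariance
    D = np.array([[1.0, -1.0, 0.0]])            # constraint: x0 - x1 = 0
    d = np.array([0.0])
    print(project_onto_constraint(x, P, D, d))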
Influence of cone beam CT enhancement filters on diagnosis ability of longitudinal root fractures
Nascimento, M C C; Nejaim, Y; de Almeida, S M; Bóscolo, F N; Haiter-Neto, F; Sobrinho, L C
2014-01-01
Objectives: To determine whether cone beam CT (CBCT) enhancement filters influence the diagnosis of longitudinal root fractures. Methods: 40 extracted human posterior teeth were endodontically prepared, and fractures with no separation of fragments were made in 20 teeth of this sample. The teeth were placed in a dry mandible and scanned using a Classic i-CAT® CBCT device (Imaging Sciences International, Inc., Hatfield, PA). Evaluations were performed with and without CBCT filters (Sharpen Mild, Sharpen Super Mild, S9, Sharpen, Sharpen 3 × 3, Angio Sharpen Medium 5 × 5, Angio Sharpen High 5 × 5 and Shadow 3 × 3) by three oral radiologists. Inter- and intraobserver agreement was calculated by the kappa test. Accuracy, sensitivity, specificity and positive and negative predictive values were determined. The McNemar test was applied for agreement between all images vs the gold standard and original images vs images with filters (p < 0.05). Results: Means of intraobserver agreement ranged from good to excellent. The Angio Sharpen Medium 5 × 5 filter obtained the highest positive predictive value (80.0%) and specificity value (76.5%). The Angio Sharpen High 5 × 5 filter obtained the highest sensitivity (78.9%) and accuracy (77.5%) values. The negative predictive value was highest (82.9%) for the S9 filter. The McNemar test showed no statistically significant differences between images with and without CBCT filters (p > 0.05). Conclusions: Although no statistically significant differences were observed in the diagnosis of root fractures when using filters, these filters seem to improve diagnostic capacity for longitudinal root fractures. Further in vitro studies with endodontically treated teeth and research in vivo should be considered. PMID:24408819
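For reference, the diagnostic indices reported above follow directly from the 2 × 2 contingency counts against the gold standard; a plain sketch with made-up counts:

    # Hypothetical counts of fracture calls against the gold standard.
    tp, fp, fn, tn = 15, 4, 5, 16

    sensitivity = tp / (tp + fn)        # fraction of true fractures detected
    specificity = tn / (tn + fp)        # fraction of intact teeth called intact
    ppv = tp / (tp + fp)                # positive predictive value
    npv = tn / (tn + fn)                # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    print(sensitivity, specificity, ppv, npv, accuracy)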
NASA Astrophysics Data System (ADS)
Dietrich, Volker; Hartmann, Peter; Kerz, Franca
2015-03-01
Digital cameras are present everywhere in our daily life. Science, business and private life can hardly be imagined without digital images. The quality of an image is often rated by its color rendering. In order to obtain correct color recognition, a near-infrared cut (IRC) filter must be used to alter the sensitivity of the imaging sensor. Increasing requirements related to color balance and larger angles of incidence (AOI) have enforced the use of new materials, such as the BG6X series, which substitutes for interference-coated filters on D263 thin glass. Although the optical properties are the major design criteria, devices have to withstand numerous environmental conditions during use and manufacturing, e.g. temperature change, humidity and mechanical shock, as well as mechanical stress. The new materials show different behavior with respect to all these aspects; they are usually more sensitive to these requirements to a larger or smaller extent, and mechanical strength in particular differs. Reliable strength data are of major interest for mobile phone camera applications. Because the bending strength of a glass component depends not only on the material itself but mainly on the surface treatment and test conditions, a single number for the strength may be misleading if the conditions of the test and the samples are not described precisely. Therefore, Schott started investigations into the bending strength data of various IRC-filter materials. Different test methods were used to obtain statistically relevant data.
Xiao, Keke; Chen, Yun; Jiang, Xie; Zhou, Yan
2017-03-01
An investigation was conducted for 20 different types of sludge in order to identify the key organic compounds in extracellular polymeric substances (EPS) that are important in assessing variations of sludge filterability. The different types of sludge varied in initial total solids (TS) content, organic composition and pre-treatment methods. For instance, some of the sludges were pre-treated by acid, ultrasonic, thermal, alkaline, or advanced oxidation technique. The Pearson's correlation results showed significant correlations between sludge filterability and zeta potential, pH, dissolved organic carbon, protein and polysaccharide in soluble EPS (SB EPS), loosely bound EPS (LB EPS) and tightly bound EPS (TB EPS). The principal component analysis (PCA) method was used to further explore correlations between variables and similarities among EPS fractions of different types of sludge. Two principal components were extracted: principal component 1 accounted for 59.24% of total EPS variations, while principal component 2 accounted for 25.46% of total EPS variations. Dissolved organic carbon, protein and polysaccharide in LB EPS showed higher eigenvector projection values than the corresponding compounds in SB EPS and TB EPS in principal component 1. Further characterization of fractionized key organic compounds in LB EPS was conducted with size-exclusion chromatography-organic carbon detection-organic nitrogen detection (LC-OCD-OND). A numerical multiple linear regression model was established to describe relationship between organic compounds in LB EPS and sludge filterability. Copyright © 2016 Elsevier Ltd. All rights reserved.
Nonlinear Estimation With Sparse Temporal Measurements
2016-09-01
Kalman filter, the extended Kalman filter (EKF) and unscented Kalman filter (UKF) are commonly used in practical application. The Kalman filter is an...optimal estimator for linear systems; the EKF and UKF are sub-optimal approximations of the Kalman filter. The EKF uses a first-order Taylor series...propagated covariance is compared for similarity with a Monte Carlo propagation. The similarity of the covariance matrices is shown to predict filter
Restoration of MRI data for intensity non-uniformities using local high order intensity statistics
Hadjidemetriou, Stathis; Studholme, Colin; Mueller, Susanne; Weiner, Michael; Schuff, Norbert
2008-01-01
MRI at high magnetic fields (>3.0 T) is complicated by strong inhomogeneous radio-frequency fields, sometimes termed the “bias field”. These lead to non-biological intensity non-uniformities across the image. They can complicate further image analysis such as registration and tissue segmentation. Existing methods for intensity uniformity restoration have been optimized for 1.5 T, but they are less effective for 3.0 T MRI, and not at all satisfactory for higher fields. Also, many of the existing restoration algorithms require a brain template or use a prior atlas, which can restrict their practicalities. In this study an effective intensity uniformity restoration algorithm has been developed based on non-parametric statistics of high order local intensity co-occurrences. These statistics are restored with a non-stationary Wiener filter. The algorithm also assumes a smooth non-uniformity and is stable. It does not require a prior atlas and is robust to variations in anatomy. In geriatric brain imaging it is robust to variations such as enlarged ventricles and low contrast to noise ratio. The co-occurrence statistics improve robustness to whole head images with pronounced non-uniformities present in high field acquisitions. Its significantly improved performance and lower time requirements have been demonstrated by comparing it to the very commonly used N3 algorithm on BrainWeb MR simulator images as well as on real 4 T human head images. PMID:18621568
Finessing filter scarcity problem in face recognition via multi-fold filter convolution
NASA Astrophysics Data System (ADS)
Low, Cheng-Yaw; Teoh, Andrew Beng-Jin
2017-06-01
The deep convolutional neural networks for face recognition, from DeepFace to the recent FaceNet, demand a sufficiently large volume of filters for feature extraction, in addition to being deep. The shallow filter-bank approaches, e.g., the principal component analysis network (PCANet), binarized statistical image features (BSIF), and other analogous variants, suffer from the filter scarcity problem: not all of the available PCA and ICA filters are discriminative enough to abstract noise-free features. This paper extends our previous work on multi-fold filter convolution (ℳ-FFC), where the pre-learned PCA and ICA filter sets are exponentially diversified by ℳ folds to instantiate PCA, ICA, and PCA-ICA offspring. The experimental results reveal that the 2-FFC operation resolves the filter scarcity problem. The 2-FFC descriptors are also shown to be superior to those of PCANet, BSIF, and other face descriptors in terms of rank-1 identification rate (%).
Attitude Representations for Kalman Filtering
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Bauer, Frank H. (Technical Monitor)
2001-01-01
The four-component quaternion has the lowest dimensionality possible for a globally nonsingular attitude representation, it represents the attitude matrix as a homogeneous quadratic function, and its dynamic propagation equation is bilinear in the quaternion and the angular velocity. The quaternion is required to obey a unit norm constraint, though, so Kalman filters often employ a quaternion for the global attitude estimate and a three-component representation for small errors about the estimate. We consider these mixed attitude representations for both a first-order Extended Kalman filter and a second-order filter, as well for quaternion-norm-preserving attitude propagation.
Low-power Gm-C filter employing current-reuse differential difference amplifiers
Mincey, John S.; Briseno-Vidrios, Carlos; Silva-Martinez, Jose; ...
2016-08-10
This study deals with the design of low-power, high-performance, continuous-time filters. The proposed OTA architecture employs current-reuse differential difference amplifiers in order to produce more power-efficient Gm-C filter solutions. To demonstrate this, a 6th order low-pass Butterworth filter was designed in 0.18 μm CMOS achieving a 65-MHz -3-dB frequency, an in-band input-referred third-order intercept point of 12.0 dBm, and an input-referred noise density of 40 nV/√Hz, while only consuming 8.07 mW from a 1.8 V supply and occupying a total chip area of 0.21 mm², with a power consumption of only 1.19 mW per pole.
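The nominal magnitude response of a 6th-order Butterworth low-pass with the reported 65 MHz corner can be checked with a hedged transfer-function sketch (an idealized check using SciPy, not the transistor-level Gm-C design):

    import numpy as np
    from scipy.signal import butter, freqs

    # Analog 6th-order Butterworth low-pass with a 65 MHz -3 dB frequency.
    wc = 2 * np.pi * 65e6
    b, a = butter(6, wc, btype='low', analog=True)

    w, h = freqs(b, a, worN=np.array([wc]))
    print(20 * np.log10(abs(h[0])))     # approximately -3.01 dB at the corner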
Experiments with explicit filtering for LES using a finite-difference method
NASA Technical Reports Server (NTRS)
Lund, T. S.; Kaltenbach, H. J.
1995-01-01
The equations for large-eddy simulation (LES) are derived formally by applying a spatial filter to the Navier-Stokes equations. The filter width as well as the details of the filter shape are free parameters in LES, and these can be used both to control the effective resolution of the simulation and to establish the relative importance of different portions of the resolved spectrum. An analogous, but less well justified, approach to filtering is more or less universally used in conjunction with LES using finite-difference methods. In this approach, the finite support provided by the computational mesh as well as the wavenumber-dependent truncation errors associated with the finite-difference operators are assumed to define the filter operation. This approach has the advantage that it is also 'automatic' in the sense that no explicit filtering operations need to be performed. While it is certainly convenient to avoid the explicit filtering operation, there are some practical considerations associated with finite-difference methods that favor the use of an explicit filter. Foremost among these considerations is the issue of truncation error. All finite-difference approximations have an associated truncation error that increases with increasing wavenumber. These errors can be quite severe for the smallest resolved scales, and these errors will interfere with the dynamics of the small eddies if no corrective action is taken. Years of experience at CTR with a second-order finite-difference scheme for high Reynolds number LES have repeatedly indicated that truncation errors must be minimized in order to obtain acceptable simulation results. While the potential advantages of explicit filtering are rather clear, there is a significant cost associated with its implementation. In particular, explicit filtering reduces the effective resolution of the simulation compared with that afforded by the mesh. The resolution requirements for LES are usually set by the need to capture most of the energy-containing eddies, and if explicit filtering is used, the mesh must be enlarged so that these motions are passed by the filter. Given the high cost of explicit filtering, the following interesting question arises. Since the mesh must be expanded in order to perform the explicit filter, might it be better to take advantage of the increased resolution and simply perform an unfiltered simulation on the larger mesh? The cost of the two approaches is roughly the same, but the philosophy is rather different. In the filtered simulation, resolution is sacrificed in order to minimize the various forms of numerical error. In the unfiltered simulation, the errors are left intact, but they are concentrated at very small scales that could be dynamically unimportant from an LES perspective. Very little is known about this tradeoff and the objective of this work is to study this relationship in high Reynolds number channel flow simulations using a second-order finite-difference method.
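A minimal sketch of an explicit top-hat (box) filter of the kind contrasted above with the implicit mesh/truncation-error filter, applied to a 1-D resolved field; the field, grid size and filter width are illustrative assumptions, not the CTR channel-flow setup:

    import numpy as np

    def box_filter(u, width=3):
        # Explicit top-hat filter: local average over 'width' grid points,
        # applied with periodic wrap-around for simplicity.
        kernel = np.ones(width) / width
        return np.convolve(np.concatenate([u[-(width//2):], u, u[:width//2]]),
                           kernel, mode='valid')

    u = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False)) \
        + 0.2 * np.random.randn(64)
    u_bar = box_filter(u, width=3)
    print(u.shape, u_bar.shape)         # the filtered field keeps the grid size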
NASA Astrophysics Data System (ADS)
Kalyashova, Zoya N.
2017-11-01
A new approach to UV holographic filter manufacturing is offered, in which the filters are volume reflection holograms working in the UV region in the second Bragg diffraction order. The method is experimentally realized for a wavelength of 266 nm.
NASA Astrophysics Data System (ADS)
Li, Tianfang; Wang, Jing; Wen, Junhai; Li, Xiang; Lu, Hongbing; Hsieh, Jiang; Liang, Zhengrong
2004-05-01
To treat the noise in low-dose x-ray CT projection data more accurately, analysis of the noise properties of the data and development of a corresponding efficient noise treatment method are two major problems to be addressed. In order to obtain an accurate and realistic model to describe the x-ray CT system, we acquired thousands of repeated measurements on different phantoms at several fixed scan angles by a GE high-speed multi-slice spiral CT scanner. The collected data were calibrated and log-transformed by the sophisticated system software, which converts the detected photon energy into sinogram data that satisfies the Radon transform. From the analysis of these experimental data, a nonlinear relation between mean and variance for each datum of the sinogram was obtained. In this paper, we integrated this nonlinear relation into a penalized likelihood statistical framework for a SNR (signal-to-noise ratio) adaptive smoothing of noise in the sinogram. After the proposed preprocessing, the sinograms were reconstructed with unapodized FBP (filtered backprojection) method. The resulted images were evaluated quantitatively, in terms of noise uniformity and noise-resolution tradeoff, with comparison to other noise smoothing methods such as Hanning filter and Butterworth filter at different cutoff frequencies. Significant improvement on noise and resolution tradeoff and noise property was demonstrated.
Online sales: profit without question.
Bryant, J A; Cody, M J; Murphy, S T
2002-09-01
To examine the ease with which underage smokers can purchase cigarettes online using money orders and to evaluate the effectiveness of internet filtering programs in blocking access to internet cigarette vendors (ICVs). Four young people purchased 32 money orders using 32 different names to buy one carton of cigarettes for each named individual. Each money order was subsequently mailed to a different ICV in the USA. No age related information accompanied these online orders. Two internet filtering programs ("Bess" and filtertobacco.org) were tested for their relative efficacy in blocking access to ICV sites. Of the 32 orders placed, four orders never reached the intended ICV. Of the remaining 28 orders, 20 (71%) were filled despite a lack of age verification. Only four (14%) of the orders received were rejected because they lacked proof of age. "Bess" blocked access to 84% and filtertobacco.org to 94% of the ICV sites. Although underage smokers can easily purchase cigarettes online using money orders, access to these sites can be largely blocked if appropriate filtering devices are installed.
Estimation of road profile variability from measured vehicle responses
NASA Astrophysics Data System (ADS)
Fauriat, W.; Mattrand, C.; Gayton, N.; Beakou, A.; Cembrzynski, T.
2016-05-01
When assessing the statistical variability of fatigue loads acting throughout the life of a vehicle, the question of the variability of road roughness naturally arises, as both quantities are strongly related. For car manufacturers, gathering information on the environment in which vehicles evolve is a long and costly but necessary process to adapt their products to durability requirements. In the present paper, a data processing algorithm is proposed in order to estimate the road profiles covered by a given vehicle, from the dynamic responses measured on this vehicle. The algorithm based on Kalman filtering theory aims at solving a so-called inverse problem, in a stochastic framework. It is validated using experimental data obtained from simulations and real measurements. The proposed method is subsequently applied to extract valuable statistical information on road roughness from an existing load characterisation campaign carried out by Renault within one of its markets.
NASA Astrophysics Data System (ADS)
Borghesani, P.; Pennacchi, P.; Ricci, R.; Chatterton, S.
2013-10-01
Cyclostationary models for the diagnostic signals measured on faulty rotating machineries have proved to be successful in many laboratory tests and industrial applications. The squared envelope spectrum has been pointed out as the most efficient indicator for the assessment of second order cyclostationary symptoms of damages, which are typical, for instance, of rolling element bearing faults. In an attempt to foster the spread of rotating machinery diagnostics, the current trend in the field is to reach higher levels of automation of the condition monitoring systems. For this purpose, statistical tests for the presence of cyclostationarity have been proposed during the last years. The statistical thresholds proposed in the past for the identification of cyclostationary components have been obtained under the hypothesis of having a white noise signal when the component is healthy. This need, coupled with the non-white nature of the real signals implies the necessity of pre-whitening or filtering the signal in optimal narrow-bands, increasing the complexity of the algorithm and the risk of losing diagnostic information or introducing biases on the result. In this paper, the authors introduce an original analytical derivation of the statistical tests for cyclostationarity in the squared envelope spectrum, dropping the hypothesis of white noise from the beginning. The effect of first order and second order cyclostationary components on the distribution of the squared envelope spectrum will be quantified and the effectiveness of the newly proposed threshold verified, providing a sound theoretical basis and a practical starting point for efficient automated diagnostics of machine components such as rolling element bearings. The analytical results will be verified by means of numerical simulations and by using experimental vibration data of rolling element bearings.
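A brief generic sketch of computing the squared envelope spectrum referenced above from a vibration record via the analytic signal (this illustrates the indicator only, not the authors' statistical thresholds; the sampling rate and toy fault-like modulation are assumptions):

    import numpy as np
    from scipy.signal import hilbert

    fs = 20000.0                                  # assumed sampling rate [Hz]
    t = np.arange(0, 1.0, 1.0 / fs)
    # Toy signal: a 3 kHz carrier amplitude-modulated at 120 Hz (fault-like).
    x = (1 + 0.5 * np.cos(2 * np.pi * 120 * t)) * np.cos(2 * np.pi * 3000 * t)

    env2 = np.abs(hilbert(x))**2                  # squared envelope
    env2 -= env2.mean()                           # drop the DC component
    ses = np.abs(np.fft.rfft(env2)) / len(env2)   # squared envelope spectrum
    freqs = np.fft.rfftfreq(len(env2), 1.0 / fs)
    print(freqs[np.argmax(ses)])                  # peaks near 120 Hz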
LATTE Linking Acoustic Tests and Tagging Using Statistical Estimation
2015-09-30
the complexity of the model: (from simplest to most complex) Kalman filter, Markov chain Monte-Carlo (MCMC) and ABC. Many of these methods have been...using SMMs fitted using Kalman filters. Therefore, using the DTAG data, we can estimate the distributions associated with 2D horizontal displacement...speed (a key problem in the previous Kalman filter implementation). This new approach also allows the animal's horizontal movement direction to differ
Miyamoto, N; Ishikawa, M; Sutherland, K; Suzuki, R; Matsuura, T; Takao, S; Toramatsu, C; Nihongi, H; Shimizu, S; Onimaru, R; Umegaki, K; Shirato, H
2012-06-01
In the real-time tumor-tracking radiotherapy system, fiducial markers are detected by X-ray fluoroscopy. The fluoroscopic parameters should be kept as low as possible in order to reduce unnecessary imaging dose. However, the fiducial markers may not be recognized due to the effect of statistical noise in low dose imaging. Image processing is envisioned to be a solution to improve image quality and to maintain tracking accuracy. In this study, a recursive image filter adapted to target motion is proposed. A fluoroscopy system was used for the experiment. A spherical gold marker was used as a fiducial marker. About 450 fluoroscopic images of the marker were recorded. In order to mimic respiratory motion of the marker, the images were shifted sequentially. The tube voltage, current and exposure duration were fixed at 65 kV, 50 mA and 2.5 msec, respectively, as the low dose imaging condition. The tube current was 100 mA for high dose imaging. A pattern recognition score (PRS) ranging from 0 to 100 and the image registration error were investigated by performing template pattern matching on each sequential image. The results with and without image processing were compared. In low dose imaging, the image registration error and the PRS without the image processing were 2.15±1.21 pixel and 46.67±6.40, respectively. Those with the image processing were 1.48±0.82 pixel and 67.80±4.51, respectively. There was no significant difference in the image registration error and the PRS between the results of low dose imaging with the image processing and those of high dose imaging without the image processing. The results showed that the recursive filter was effective for maintaining marker tracking stability and accuracy in low dose fluoroscopy. © 2012 American Association of Physicists in Medicine.
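In the spirit of the motion-adapted recursive filter described above, a hedged sketch: each new frame is blended with the previous filtered frame after that frame has been shifted by an estimated marker displacement. This is purely illustrative; the abstract does not specify the actual adaptation or tracking logic, and the blending weight and toy motion are assumptions:

    import numpy as np

    def recursive_filter(frames, shifts, alpha=0.5):
        # frames: list of 2-D arrays; shifts: per-frame integer (dy, dx) estimates
        # of the marker motion between consecutive frames.
        y = frames[0].astype(float)
        out = [y]
        for frame, (dy, dx) in zip(frames[1:], shifts):
            y_shifted = np.roll(y, shift=(dy, dx), axis=(0, 1))  # align to motion
            y = alpha * frame + (1.0 - alpha) * y_shifted        # recursive blend
            out.append(y)
        return out

    frames = [np.random.poisson(20, (64, 64)).astype(float) for _ in range(5)]
    shifts = [(0, 1)] * 4                   # toy motion: 1 pixel/frame to the right
    filtered = recursive_filter(frames, shifts)
    print(filtered[-1].shape)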
NASA Astrophysics Data System (ADS)
Bindiya T., S.; Elias, Elizabeth
2015-01-01
In this paper, multiplier-less near-perfect reconstruction tree-structured filter banks are proposed. Filters with sharp transition width are preferred in filter banks in order to reduce the aliasing between adjacent channels. When sharp transition width filters are designed as conventional finite impulse response filters, the order of the filters will become very high leading to increased complexity. The frequency response masking (FRM) method is known to result in linear-phase sharp transition width filters with low complexity. It is found that the proposed design method, which is based on FRM, gives better results compared to the earlier reported results, in terms of the number of multipliers when sharp transition width filter banks are needed. To further reduce the complexity and power consumption, the tree-structured filter bank is made totally multiplier-less by converting the continuous filter bank coefficients to finite precision coefficients in the signed power of two space. This may lead to performance degradation and calls for the use of a suitable optimisation technique. In this paper, gravitational search algorithm is proposed to be used in the design of the multiplier-less tree-structured uniform as well as non-uniform filter banks. This design method results in uniform and non-uniform filter banks which are simple, alias-free, linear phase and multiplier-less and have sharp transition width.
Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian
2016-02-01
The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter--describing somatic integration--and the spike-history filter--accounting for spike-frequency adaptation--dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations.
A Computer Model of a Phase Lock Loop
NASA Technical Reports Server (NTRS)
Shelton, Ralph Paul
1973-01-01
A computer model is reported of a PLL (phase-lock loop), preceded by a bandpass filter, which is valid when the bandwidth of the bandpass filter is of the same order of magnitude as the natural frequency of the PLL. New results for the PLL natural frequency equal to the bandpass filter bandwidth are presented for a second order PLL operating with carrier plus noise as the input. However, it is shown that extensions to higher order loops, and to the case of a modulated carrier are straightforward. The new results presented give the cycle skipping rate of the PLL as a function of the input carrier to noise ratio when the PLL natural frequency is equal to the bandpass filter bandwidth. Preliminary results showing the variation of the output noise power and cycle skipping rates of the PLL as a function of the loop damping ratio for the PLL natural frequency equal to the bandpass filter bandwidth are also included.
Entropy-guided switching trimmed mean deviation-boosted anisotropic diffusion filter
NASA Astrophysics Data System (ADS)
Nnolim, Uche A.
2016-07-01
An effective anisotropic diffusion (AD) mean filter variant is proposed for filtering of salt-and-pepper impulse noise. The implemented filter is robust to impulse noise ranging from low to high density levels. The algorithm involves a switching scheme in addition to utilizing the unsymmetric trimmed mean/median deviation to filter image noise while greatly preserving image edges, regardless of impulse noise density (ND). It operates with threshold parameters selected manually or adaptively estimated from the image statistics. It is further combined with the partial differential equations (PDE)-based AD for edge preservation at high NDs to enhance the properties of the trimmed mean filter. Based on experimental results, the proposed filter easily and consistently outperforms the median filter and its other variants ranging from simple to complex filter structures, especially the known PDE-based variants. In addition, the switching scheme and threshold calculation enables the filter to avoid smoothing an uncorrupted image, and filtering is activated only when impulse noise is present. Ultimately, the particular properties of the filter make its combination with the AD algorithm a unique and powerful edge-preservation smoothing filter at high-impulse NDs.
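A minimal sketch of the switching idea described above for salt-and-pepper noise: only pixels flagged as impulses (extreme values) are replaced, here by the median of the non-corrupted neighbours. This is a simplified stand-in for the proposed trimmed mean/median deviation scheme, with an assumed window size and detection rule:

    import numpy as np

    def switching_median(img, lo=0, hi=255, win=1):
        out = img.astype(float).copy()
        noisy = (img == lo) | (img == hi)          # impulse detection by extremes
        rows, cols = np.nonzero(noisy)
        for r, c in zip(rows, cols):
            r0, r1 = max(r - win, 0), min(r + win + 1, img.shape[0])
            c0, c1 = max(c - win, 0), min(c + win + 1, img.shape[1])
            patch = img[r0:r1, c0:c1]
            good = patch[(patch != lo) & (patch != hi)]  # trim corrupted neighbours
            if good.size:                                # otherwise leave unchanged
                out[r, c] = np.median(good)
        return out

    img = np.full((32, 32), 128, dtype=np.uint8)
    mask = np.random.rand(32, 32) < 0.2
    img[mask] = np.random.choice([0, 255], mask.sum())
    print(np.abs(switching_median(img) - 128).mean())    # close to zero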
Statistical Analysis of speckle noise reduction techniques for echocardiographic Images
NASA Astrophysics Data System (ADS)
Saini, Kalpana; Dewal, M. L.; Rohit, Manojkumar
2011-12-01
Echocardiography is a safe, easy and fast technology for diagnosing cardiac diseases. As with other ultrasound images, these images also contain speckle noise. In some cases this speckle noise is useful, such as in motion detection, but in general noise removal is required for better analysis of the image and proper diagnosis. Different adaptive and anisotropic filters are included in the statistical analysis. Statistical parameters such as Signal-to-Noise Ratio (SNR), Peak Signal-to-Noise Ratio (PSNR), and Root Mean Square Error (RMSE) are calculated for performance measurement. Another important aspect is that blurring may occur during speckle noise removal, so the filter should preferably be able to preserve and enhance edges while removing noise.
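For clarity, the quality metrics listed above can be computed as in this brief sketch (8-bit images and a toy noisy image are assumed):

    import numpy as np

    def rmse(ref, test):
        return np.sqrt(np.mean((ref.astype(float) - test.astype(float))**2))

    def psnr(ref, test, peak=255.0):
        return 20.0 * np.log10(peak / rmse(ref, test))

    ref = np.random.randint(0, 256, (128, 128)).astype(np.uint8)
    test = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
    print(rmse(ref, test), psnr(ref, test))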
Occupational Injury and Illness Surveillance: Conceptual Filters Explain Underreporting
Azaroff, Lenore S.; Levenstein, Charles; Wegman, David H.
2002-01-01
Occupational health surveillance data are key to effective intervention. However, the US Bureau of Labor Statistics survey significantly underestimates the incidence of work-related injuries and illnesses. Researchers supplement these statistics with data from other systems not designed for surveillance. The authors apply the filter model of Webb et al. to underreporting by the Bureau of Labor Statistics, workers’ compensation wage-replacement documents, physician reporting systems, and medical records of treatment charged to workers’ compensation. Mechanisms are described for the loss of cases at successive steps of documentation. Empirical findings indicate that workers repeatedly risk adverse consequences for attempting to complete these steps, while systems for ensuring their completion are weak or absent. PMID:12197968
Second-order discrete Kalman filtering equations for control-structure interaction simulations
NASA Technical Reports Server (NTRS)
Park, K. C.; Belvin, W. Keith; Alvin, Kenneth F.
1991-01-01
A general form for the first-order representation of the continuous, second-order linear structural dynamics equations is introduced in order to derive a corresponding form of first-order Kalman filtering equations (KFE). Time integration of the resulting first-order KFE is carried out via a set of linear multistep integration formulas. It is shown that a judicious combined selection of computational paths and the undetermined matrices introduced in the general form of the first-order linear structural systems leads to a class of second-order discrete KFE involving only symmetric, N x N solution matrix.
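The conversion behind the first-order representation mentioned above can be sketched in generic structural-dynamics notation (the paper's general parameterized form admits other choices of the undetermined matrices):

    M\ddot{q} + D\dot{q} + Kq = f(t), \qquad
    x = \begin{bmatrix} q \\ \dot{q} \end{bmatrix}, \qquad
    \dot{x} = \begin{bmatrix} 0 & I \\ -M^{-1}K & -M^{-1}D \end{bmatrix} x
            + \begin{bmatrix} 0 \\ M^{-1} \end{bmatrix} f(t).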
Filtration effects on ball bearing life and condition in a contaminated lubricant
NASA Technical Reports Server (NTRS)
Loewenthal, S. H.; Moyer, D. W.
1978-01-01
Ball bearings were fatigue tested with a noncontaminated lubricant and with a contaminated lubricant under four levels of filtration. The test filters had absolute particle removal ratings of 3, 30, 49, and 105 microns. Aircraft turbine engine contaminants were injected into the filter's supply line at a constant rate of 125 milligrams per bearing hour. Bearing life and running track condition generally improved with finer filtration. The experimental lives of 3 and 30 micron filter bearings were statistically equivalent, approaching those obtained with the noncontaminated lubricant bearings. Compared to these bearings, the lives of the 49 micron bearings were statistically lower. The 105 micron bearings experienced gross wear. The degree of surface distress, weight loss, and probable failure mode were dependent on filtration level, with finer filtration being clearly beneficial.
Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, He
2016-11-20
Angular velocity information is a requisite for a spacecraft guidance, navigation, and control system. In this paper, an approach for angular velocity estimation based merely on star vector measurement with an improved current statistical model Kalman filter is proposed. High-precision angular velocity estimation can be achieved under dynamic conditions. The amount of calculation is also reduced compared to a Kalman filter. Different trajectories are simulated to test this approach, and experiments with real starry sky observation are implemented for further confirmation. The estimation accuracy is proved to be better than 10⁻⁴ rad/s under various conditions. Both the simulation and the experiment demonstrate that the described approach is effective and shows an excellent performance under both static and dynamic conditions.
Non-specific filtering of beta-distributed data.
Wang, Xinhui; Laird, Peter W; Hinoue, Toshinori; Groshen, Susan; Siegmund, Kimberly D
2014-06-19
Non-specific feature selection is a dimension reduction procedure performed prior to cluster analysis of high dimensional molecular data. Not all measured features are expected to show biological variation, so only the most varying are selected for analysis. In DNA methylation studies, DNA methylation is measured as a proportion, bounded between 0 and 1, with variance a function of the mean. Filtering on standard deviation biases the selection of probes to those with mean values near 0.5. We explore the effect this has on clustering, and develop alternate filter methods that utilize a variance stabilizing transformation for Beta distributed data and do not share this bias. We compared results for 11 different non-specific filters on eight Infinium HumanMethylation data sets, selected to span a variety of biological conditions. We found that for data sets having a small fraction of samples showing abnormal methylation of a subset of normally unmethylated CpGs, a characteristic of the CpG island methylator phenotype in cancer, a novel filter statistic that utilized a variance-stabilizing transformation for Beta distributed data outperformed the common filter of using standard deviation of the DNA methylation proportion, or its log-transformed M-value, in its ability to detect the cancer subtype in a cluster analysis. However, the standard deviation filter always performed among the best for distinguishing subgroups of normal tissue. The novel filter and standard deviation filter tended to favour features in different genome contexts; for the same data set, the novel filter always selected more features from CpG island promoters and the standard deviation filter always selected more features from non-CpG island intergenic regions. Interestingly, despite selecting largely non-overlapping sets of features, the two filters did find sample subsets that overlapped for some real data sets. We found two different filter statistics that tended to prioritize features with different characteristics, each performed well for identifying clusters of cancer and non-cancer tissue, and identifying a cancer CpG island hypermethylation phenotype. Since cluster analysis is for discovery, we would suggest trying both filters on any new data sets, evaluating the overlap of features selected and clusters discovered.
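The abstract does not spell out the novel filter statistic, but the classical variance-stabilizing transformation for proportions is the arcsine square root; the sketch below ranks features by standard deviation on the raw beta values, on M-values, or after that transform, purely as an illustration of the kind of non-specific filters being compared.

```python
import numpy as np

def top_features_by_sd(beta, k, transform=None):
    """Rank features (rows) of a features x samples matrix of methylation
    proportions by standard deviation, optionally after a transform.
    The arcsine-square-root transform is a classical variance stabilizer for
    proportions; the paper's exact filter statistic may differ."""
    x = np.asarray(beta, float)
    if transform == "vst":
        x = np.arcsin(np.sqrt(x))
    elif transform == "mvalue":
        eps = 1e-6
        x = np.log2((x + eps) / (1 - x + eps))
    sd = x.std(axis=1)
    return np.argsort(sd)[::-1][:k]     # indices of the k most variable features
```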
A Priori Analyses of Three Subgrid-Scale Models for One-Parameter Families of Filters
NASA Technical Reports Server (NTRS)
Pruett, C. David; Adams, Nikolaus A.
1998-01-01
The decay of isotropic turbulence in a compressible flow is examined by direct numerical simulation (DNS). A priori analyses of the DNS data are then performed to evaluate three subgrid-scale (SGS) models for large-eddy simulation (LES): a generalized Smagorinsky model (M1), a stress-similarity model (M2), and a gradient model (M3). The models exploit one-parameter second- or fourth-order filters of Pade type, which permit the cutoff wavenumber k_c to be tuned independently of the grid increment Δx. The modeled (M) and exact (E) SGS-stresses are compared component-wise by correlation coefficients of the form C(E,M) computed over the entire three-dimensional fields. In general, M1 correlates poorly against exact stresses (C < 0.2), M3 correlates moderately well (C ≈ 0.6), and M2 correlates remarkably well (0.8 < C < 1.0). Specifically, correlations C(E, M2) are high provided the grid and test filters are of the same order. Moreover, the highest correlations (C ≈ 1.0) result whenever the grid and test filters are identical (in both order and cutoff). Finally, present results reveal the exact SGS stresses obtained by grid filters of differing orders to be only moderately well correlated. Thus, in LES the model should not be specified independently of the filter.
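The component-wise correlation C(E, M) used above is simply the Pearson correlation between the exact and modeled stress fields flattened over the whole 3-D domain, e.g.:

```python
import numpy as np

def sgs_correlation(exact, model):
    """Correlation coefficient C(E, M) between exact and modeled SGS stress
    components, computed over the whole 3-D field (arrays of equal shape)."""
    e = np.asarray(exact, float).ravel()
    m = np.asarray(model, float).ravel()
    return np.corrcoef(e, m)[0, 1]
```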
NASA Astrophysics Data System (ADS)
Plaza Guingla, D. A.; Pauwels, V. R.; De Lannoy, G. J.; Matgen, P.; Giustarini, L.; De Keyser, R.
2012-12-01
The objective of this work is to analyze the improvement in the performance of the particle filter by including a resample-move step or by using a modified Gaussian particle filter. Specifically, the standard particle filter structure is altered by the inclusion of the Markov chain Monte Carlo move step. The second choice adopted in this study uses the moments of an ensemble Kalman filter analysis to define the importance density function within the Gaussian particle filter structure. Both variants of the standard particle filter are used in the assimilation of densely sampled discharge records into a conceptual rainfall-runoff model. In order to quantify the obtained improvement, discharge root mean square errors are compared for different particle filters, as well as for the ensemble Kalman filter. First, a synthetic experiment is carried out. The results indicate that the performance of the standard particle filter can be improved by the inclusion of the resample-move step, but its effectiveness is limited to situations with limited particle impoverishment. The results also show that the modified Gaussian particle filter outperforms the rest of the filters. Second, a real experiment is carried out in order to validate the findings from the synthetic experiment. The addition of the resample-move step does not show a considerable improvement due to performance limitations in the standard particle filter with real data. On the other hand, when an optimal importance density function is used in the Gaussian particle filter, the results show a considerably improved performance of the particle filter.
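As an illustration of the resample-move idea (not the authors' hydrological implementation), the sketch below performs systematic resampling followed by one random-walk Metropolis move per particle; `log_target`, the step size, and the random generator are user-supplied assumptions.

```python
import numpy as np

def resample_move(particles, weights, log_target, rng, step=0.1):
    """One generic resample-move step: systematic resampling followed by a
    single random-walk Metropolis move per particle.  `log_target(x)` must
    return the log of the filtering target density for each particle
    (vectorized over the particle axis); all names are illustrative."""
    n = len(particles)
    # systematic resampling according to the normalized weights
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    particles = particles[idx].copy()
    # Markov chain Monte Carlo move step to mitigate particle impoverishment
    proposal = particles + step * rng.standard_normal(particles.shape)
    log_alpha = log_target(proposal) - log_target(particles)
    accept = np.log(rng.random(n)) < log_alpha
    particles[accept] = proposal[accept]
    return particles, np.full(n, 1.0 / n)
```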
Hardware-efficient implementation of digital FIR filter using fast first-order moment algorithm
NASA Astrophysics Data System (ADS)
Cao, Li; Liu, Jianguo; Xiong, Jun; Zhang, Jing
2018-03-01
As the digital finite impulse response (FIR) filter can be transformed into the shift-add form of multiple small-sized first-order moments, based on the existing fast first-order moment algorithm, this paper presents a novel multiplier-less structure to calculate any number of sequential filtering results in parallel. The theoretical analysis on its hardware and time-complexities reveals that by appropriately setting the degree of parallelism and the decomposition factor of a fixed word width, the proposed structure may achieve better area-time efficiency than the existing two-dimensional (2-D) memoryless-based filter. To evaluate the performance concretely, the proposed designs for different taps, along with the existing 2-D memoryless-based filters, are synthesized by Synopsys Design Compiler with a 0.18-μm SMIC library. The comparisons show that the proposed design has less area-time complexity and power consumption when the number of filter taps is larger than 48.
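The underlying identity is that an inner product with quantized, nonnegative integer coefficients can be evaluated as a first-order moment of coefficient-value groups, which needs only additions plus one small moment computation. A toy software illustration of the reformulation (not the paper's hardware structure):

```python
import numpy as np

def fir_output_via_first_order_moment(x_window, h):
    """Compute y = sum_i h[i] * x[i] as a first-order moment
    y = sum_k k * S_k, where S_k sums the inputs whose (quantized,
    nonnegative integer) coefficient equals k."""
    h = np.asarray(h, dtype=int)
    x = np.asarray(x_window, dtype=float)
    M = h.max() + 1
    S = np.zeros(M)
    for hi, xi in zip(h, x):          # group inputs by coefficient value (adds only)
        S[hi] += xi
    return sum(k * S[k] for k in range(M))   # first-order moment of the groups

# sanity check against the direct inner product
rng = np.random.default_rng(0)
h = rng.integers(0, 16, size=8)
x = rng.standard_normal(8)
assert np.isclose(fir_output_via_first_order_moment(x, h), float(h @ x))
```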
Braile vena cava filter and Greenfield filter in terms of centralization.
de Godoy, José Maria Pereira; Menezes da Silva, Adinaldo A; Reis, Luis Fernando; Miquelin, Daniel; Torati, José Luis Simon
2013-01-01
The aim of this study was to evaluate complications experienced during implantation of the Braile Vena Cava filter (VCF) and the efficacy of the centralization mechanism of the filter. This retrospective cohort study evaluated all Braile Biomédica VCFs implanted from 2004 to 2009 in Hospital de Base Medicine School in São José do Rio Preto, Brazil. Of particular concern was the filter's symmetry during implantation and complications experienced during the procedure. All the angiographic examinations performed during the implantation of the filters were analyzed in respect to the following parameters: migration of the filter, non-opening or difficulties in the implantation and centralization of the filter. A total of 112 Braile VCFs were implanted and there were no reports of filter opening difficulties or in respect to migration. Asymmetry was observed in 1/112 (0.9%) cases. A statistically significant difference was seen on comparing historical data on decentralization of the Greenfield filter with the data of this study. The Braile Biomédico filter is an evolution of the Greenfield filter providing improved embolus capture and better implantation symmetry.
NASA Astrophysics Data System (ADS)
Shanmugavadivu, P.; Eliahim Jeevaraj, P. S.
2014-06-01
The Adaptive Iterated Functions Systems (AIFS) Filter presented in this paper has an outstanding potential to attenuate the fixed-value impulse noise in images. This filter has two distinct phases namely noise detection and noise correction which uses Measure of Statistics and Iterated Function Systems (IFS) respectively. The performance of AIFS filter is assessed by three metrics namely, Peak Signal-to-Noise Ratio (PSNR), Mean Structural Similarity Index Matrix (MSSIM) and Human Visual Perception (HVP). The quantitative measures PSNR and MSSIM endorse the merit of this filter in terms of degree of noise suppression and details/edge preservation respectively, in comparison with the high performing filters reported in the recent literature. The qualitative measure HVP confirms the noise suppression ability of the devised filter. This computationally simple noise filter broadly finds application wherein the images are highly degraded by fixed-value impulse noise.
Gregori, Josep; Villarreal, Laura; Sánchez, Alex; Baselga, José; Villanueva, Josep
2013-12-16
The microarray community has shown that the low reproducibility observed in gene expression-based biomarker discovery studies is partially due to relying solely on p-values to get the lists of differentially expressed genes. Their conclusions recommended complementing the p-value cutoff with the use of effect-size criteria. The aim of this work was to evaluate the influence of such an effect-size filter on spectral counting-based comparative proteomic analysis. The results proved that the filter increased the number of true positives and decreased the number of false positives and the false discovery rate of the dataset. These results were confirmed by simulation experiments where the effect size filter was used to evaluate systematically variable fractions of differentially expressed proteins. Our results suggest that relaxing the p-value cut-off followed by a post-test filter based on effect size and signal level thresholds can increase the reproducibility of statistical results obtained in comparative proteomic analysis. Based on our work, we recommend using a filter consisting of a minimum absolute log2 fold change of 0.8 and a minimum signal of 2-4 SpC on the most abundant condition for the general practice of comparative proteomics. The implementation of feature filtering approaches could improve proteomic biomarker discovery initiatives by increasing the reproducibility of the results obtained among independent laboratories and MS platforms. Quality control analysis of microarray-based gene expression studies pointed out that the low reproducibility observed in the lists of differentially expressed genes could be partially attributed to the fact that these lists are generated relying solely on p-values. Our study has established that the implementation of an effect size post-test filter improves the statistical results of spectral count-based quantitative proteomics. The results proved that the filter increased the number of true positives while decreasing the false positives and the false discovery rate of the datasets. The results presented here prove that a post-test filter applying reasonable effect size and signal level thresholds helps to increase the reproducibility of statistical results in comparative proteomic analysis. Furthermore, the implementation of feature filtering approaches could improve proteomic biomarker discovery initiatives by increasing the reproducibility of results obtained among independent laboratories and MS platforms. This article is part of a Special Issue entitled: Standardization and Quality Control in Proteomics. Copyright © 2013 Elsevier B.V. All rights reserved.
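A minimal sketch of the recommended post-test filter, assuming each protein row carries a p-value, a log2 fold change and spectral counts for the two conditions (field names are illustrative):

```python
def post_test_filter(rows, p_cut=0.05, min_abs_log2fc=0.8, min_spc=2.0):
    """Keep proteins that pass a (possibly relaxed) p-value cutoff AND an
    effect-size/signal filter: |log2 fold change| >= 0.8 and at least
    `min_spc` spectral counts in the more abundant condition.
    `rows` is an iterable of dicts with keys 'p', 'log2fc', 'spc_a', 'spc_b'."""
    kept = []
    for r in rows:
        signal = max(r['spc_a'], r['spc_b'])
        if r['p'] <= p_cut and abs(r['log2fc']) >= min_abs_log2fc and signal >= min_spc:
            kept.append(r)
    return kept
```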
NASA Astrophysics Data System (ADS)
Kim, Woo-Ju; Lee, Hak-Soon; Lee, Sang-Shin
2012-04-01
A compact silicon nitride grating coupler with flexible bandwidth was demonstrated taking advantage of a basic grating integrated with a serially connected multistage multimode interference (MMI) filter. The spectral response could be tailored by varying the order of the MMI filter, without affecting the basic grating structure. The dependence of the spectral response of the proposed device on the order of the MMI stage was thoroughly investigated. As regards the fabricated grating coupler with a four-stage MMI filter, the observed spectral bandwidth was efficiently altered from 53 to 21 nm in the ˜1550 nm spectral band.
Implementation of a Big Data Accessing and Processing Platform for Medical Records in Cloud.
Yang, Chao-Tung; Liu, Jung-Chun; Chen, Shuo-Tsung; Lu, Hsin-Wen
2017-08-18
Big Data analysis has become a key factor in being innovative and competitive. Along with population growth worldwide and the aging trend of the population in developed countries, the rate of national medical care usage has been increasing. Because individual medical data are usually scattered across different institutions and their data formats vary, integrating these continually growing data is challenging. In order to have scalable load capacity for these data platforms, we must build them on a good platform architecture. Some issues must be considered in order to use cloud computing to quickly integrate big medical data into a database for easy analysis, search, and filtering to obtain valuable information. This work builds a cloud storage system with HBase of Hadoop for storing and analyzing big data of medical records and improves the performance of importing data into the database. The data of medical records are stored in the HBase database platform for big data analysis. This system performs distributed processing of medical records data through Hadoop MapReduce programming and provides functions including keyword search, data filtering, and basic statistics for the HBase database. This system uses the Put with the single-threaded method and the CompleteBulkload mechanism to import medical data. From the experimental results, we find that when the file size is less than 300 MB, the Put with single-threaded method is used, and when the file size is larger than 300 MB, the CompleteBulkload mechanism is used to improve the performance of data import into the database. This system provides a web interface that allows users to search data, filter out meaningful information through the web, and analyze and convert data into suitable forms that will be helpful for medical staff and institutions.
Explicit filtering in large eddy simulation using a discontinuous Galerkin method
NASA Astrophysics Data System (ADS)
Brazell, Matthew J.
The discontinuous Galerkin (DG) method is a formulation of the finite element method (FEM). DG provides the ability for a high order of accuracy in complex geometries, and allows for highly efficient parallelization algorithms. These attributes make the DG method attractive for solving the Navier-Stokes equations for large eddy simulation (LES). The main goal of this work is to investigate the feasibility of adopting an explicit filter in the numerical solution of the Navier-Stokes equations with DG. Explicit filtering has been shown to increase the numerical stability of under-resolved simulations and is needed for LES with dynamic sub-grid scale (SGS) models. The explicit filter takes advantage of DG's framework where the solution is approximated using a polynomial basis where the higher modes of the solution correspond to a higher order polynomial basis. By removing high order modes, the filtered solution contains low order frequency content much like an explicit low pass filter. The explicit filter implementation is tested on a simple 1-D solver with an initial condition that has some similarity to turbulent flows. The explicit filter does restrict the resolution as well as remove accumulated energy in the higher modes from aliasing. However, the explicit filter is unable to remove numerical errors causing numerical dissipation. A second test case solves the 3-D Navier-Stokes equations of the Taylor-Green vortex flow (TGV). The TGV is useful for SGS model testing because it is initially laminar and transitions into a fully turbulent flow. The SGS models investigated include the constant coefficient Smagorinsky model, dynamic Smagorinsky model, and dynamic Heinz model. The constant coefficient Smagorinsky model is over-dissipative; this is generally not desirable, although it does add stability. The dynamic Smagorinsky model generally performs better, especially during the laminar-turbulent transition region as expected. The dynamic Heinz model, which is based on an improved model, handles the laminar-turbulent transition region well while also showing additional robustness.
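The modal cutoff filter described above amounts to zeroing the highest-order coefficients of the element-local polynomial expansion. A 1-D sketch using a Legendre modal basis (illustrative only, not the thesis code):

```python
import numpy as np
from numpy.polynomial import legendre

def modal_cutoff_filter(nodal_values, nodes, p_keep):
    """Project element-local nodal values onto a Legendre modal basis,
    zero every mode above degree `p_keep`, and return filtered nodal values.
    1-D sketch of an explicit modal low-pass filter for a DG solution."""
    p = len(nodes) - 1                                  # polynomial degree of the element
    coeffs = legendre.legfit(nodes, nodal_values, p)    # modal coefficients
    coeffs[p_keep + 1:] = 0.0                           # discard high-order modes
    return legendre.legval(nodes, coeffs)
```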
Chen, Jonathan H; Podchiyska, Tanya
2016-01-01
Objective: To answer a “grand challenge” in clinical decision support, the authors produced a recommender system that automatically data-mines inpatient decision support from electronic medical records (EMR), analogous to Netflix or Amazon.com’s product recommender. Materials and Methods: EMR data were extracted from 1 year of hospitalizations (>18K patients with >5.4M structured items including clinical orders, lab results, and diagnosis codes). Association statistics were counted for the ∼1.5K most common items to drive an order recommender. The authors assessed the recommender’s ability to predict hospital admission orders and outcomes based on initial encounter data from separate validation patients. Results: Compared to a reference benchmark of using the overall most common orders, the recommender using temporal relationships improves precision at 10 recommendations from 33% to 38% (P < 10⁻¹⁰) for hospital admission orders. Relative risk-based association methods improve inverse frequency weighted recall from 4% to 16% (P < 10⁻¹⁶). The framework yields a prediction receiver operating characteristic area under curve (c-statistic) of 0.84 for 30 day mortality, 0.84 for 1 week need for ICU life support, 0.80 for 1 week hospital discharge, and 0.68 for 30-day readmission. Discussion: Recommender results quantitatively improve on reference benchmarks and qualitatively appear clinically reasonable. The method assumes that aggregate decision making converges appropriately, but ongoing evaluation is necessary to discern common behaviors from “correct” ones. Conclusions: Collaborative filtering recommender algorithms generate clinical decision support that is predictive of real practice patterns and clinical outcomes. Incorporating temporal relationships improves accuracy. Different evaluation metrics satisfy different goals (predicting likely events vs. “interesting” suggestions). PMID:26198303
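A toy version of the association-counting core of such a recommender is sketched below: co-occurrence counts of item pairs within an encounter drive top-N suggestions. The real system additionally uses the ~1.5K most common items, temporal relationships and relative-risk statistics; all names here are illustrative.

```python
from collections import Counter, defaultdict

def build_cooccurrence(encounters):
    """encounters: list of ordered item lists (clinical orders, labs, codes)
    per patient.  Count how often item b appears after item a in the same
    encounter; a toy stand-in for the paper's association statistics."""
    counts = defaultdict(Counter)
    for items in encounters:
        for i, a in enumerate(items):
            for b in items[i + 1:]:
                if b != a:
                    counts[a][b] += 1
    return counts

def recommend(counts, current_items, top_n=10):
    """Score candidate items by summed co-occurrence with the items already
    present in the encounter and return the top_n suggestions."""
    scores = Counter()
    for a in current_items:
        scores.update(counts.get(a, Counter()))
    for item in current_items:
        scores.pop(item, None)
    return [item for item, _ in scores.most_common(top_n)]
```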
Initial Alignment of Large Azimuth Misalignment Angles in SINS Based on Adaptive UPF
Sun, Jin; Xu, Xiao-Su; Liu, Yi-Ting; Zhang, Tao; Li, Yao
2015-01-01
The case of large azimuth misalignment angles in a strapdown inertial navigation system (SINS) is analyzed, and a method of using the adaptive UPF for the initial alignment is proposed. The filter is based on the idea of a strong tracking filter; by introducing an attenuation memory factor to effectively strengthen the correction applied by the current residual information, it reduces, to a certain extent, the influence of model simplification and of uncertain noise statistical properties on the system; meanwhile, the UPF particle degradation phenomenon is better overcome. Finally, two kinds of non-linear filters, UPF and adaptive UPF, are adopted in the initial alignment of large azimuth misalignment angles in SINS, and the filtering effects of the two kinds of nonlinear filter on the initial alignment were compared by simulation and turntable experiments. The simulation and turntable experiment results show that the speed and precision of the initial alignment using the adaptive UPF for a large azimuth misalignment angle in SINS are improved to some extent, whether or not the statistical properties of the system noise are known. PMID:26334277
Second order Pseudo-gaussian shaper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beche, Jean-Francois
2002-11-22
The purpose of this document is to provide a calculus spreadsheet for the design of second-order pseudo-gaussian shapers. A very interesting reference is given by C.H. Mosher, "Pseudo-Gaussian Transfer Functions with Superlative Recovery", IEEE TNS Volume 23, p. 226-228 (1976). Fred Goulding and Don Landis have studied the structure of those filters and their implementation and this document will outline the calculation leading to the relation between the coefficients of the filter. The general equation of the second order pseudo-gaussian filter is: f(t) = P₀ · exp(−3kt) · sin²(kt). The parameter k is a normalization factor.
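The shaper response can be evaluated directly from this equation; the small sketch below also checks numerically that the peak occurs where tan(kt) = 2/3, which follows from setting the derivative of f(t) to zero.

```python
import numpy as np

def shaper_response(t, k, P0=1.0):
    """Impulse response of the second-order pseudo-gaussian shaper,
    f(t) = P0 * exp(-3*k*t) * sin(k*t)**2  (k is a normalization factor)."""
    t = np.asarray(t, float)
    return P0 * np.exp(-3.0 * k * t) * np.sin(k * t) ** 2

# the response peaks where tan(k*t) = 2/3, i.e. at t = arctan(2/3)/k
k = 1.0
t = np.linspace(0.0, 5.0, 200001)
t_peak = t[np.argmax(shaper_response(t, k))]
assert abs(t_peak - np.arctan(2.0 / 3.0) / k) < 1e-4
```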
Carriot, Jérome; Jamali, Mohsen; Cullen, Kathleen E.
2017-01-01
There is accumulating evidence that the brain’s neural coding strategies are constrained by natural stimulus statistics. Here we investigated the statistics of the time varying envelope (i.e. a second-order stimulus attribute that is related to variance) of rotational and translational self-motion signals experienced by human subjects during everyday activities. We found that envelopes can reach large values across all six motion dimensions (~450 deg/s for rotations and ~4 G for translations). Unlike results obtained in other sensory modalities, the spectral power of envelope signals decreased slowly for low (< 2 Hz) and more sharply for high (>2 Hz) temporal frequencies and thus was not well-fit by a power law. We next compared the spectral properties of envelope signals resulting from active and passive self-motion, as well as those resulting from signals obtained when the subject is absent (i.e. external stimuli). Our data suggest that different mechanisms underlie deviation from scale invariance in rotational and translational self-motion envelopes. Specifically, active self-motion and filtering by the human body cause deviation from scale invariance primarily for translational and rotational envelope signals, respectively. Finally, we used well-established models in order to predict the responses of peripheral vestibular afferents to natural envelope stimuli. We found that irregular afferents responded more strongly to envelopes than their regular counterparts. Our findings have important consequences for understanding the coding strategies used by the vestibular system to process natural second-order self-motion signals. PMID:28575032
On the effect of using the Shapiro filter to smooth winds on a sphere
NASA Technical Reports Server (NTRS)
Takacs, L. L.; Balgovind, R. C.
1984-01-01
Spatial differencing schemes which are neither enstrophy conserving nor implicitly damping require global filtering of short waves to eliminate the build-up of energy in the shortest wavelengths due to aliasing. Takacs and Balgovind (1983) have shown that filtering on a sphere with a latitude dependent damping function will cause spurious vorticity and divergence source terms to occur if care is not taken to ensure the irrotationality of the gradients of the stream function and velocity potential. Using a shallow water model with fourth-order energy-conserving spatial differencing, it is found that using a 16th-order Shapiro (1979) filter on the winds and heights to control nonlinear instability also creates spurious source terms when the winds are filtered in the meridional direction.
A miniature filter on a suspended substrate with a two-sided pattern of strip conductors
NASA Astrophysics Data System (ADS)
Belyaev, B. A.; Voloshin, A. S.; Bulavchuk, A. S.; Galeev, R. G.
2016-06-01
A miniature bandpass filter of new design with original stripline resonators on a suspended substrate has been studied. The proposed filters of third to sixth order are distinguished by their high frequency-selective properties and much smaller size in comparison to analogs. It is shown that a broad stopband extending above three times the central passband frequency is determined by weak coupling of resonators at resonances of the second and third modes. A prototype sixth-order filter with a central frequency of 1 GHz, manufactured on a ceramic substrate with dielectric permittivity ɛ = 80, has contour dimensions of 36.6 × 4.8 × 0.5 mm³. Parametric synthesis of the filter, based on electrodynamic 3D model simulations, showed quite good agreement with the results of measurements.
Exact reconstruction analysis/synthesis filter banks with time-varying filters
NASA Technical Reports Server (NTRS)
Arrowood, J. L., Jr.; Smith, M. J. T.
1993-01-01
This paper examines some of the analysis/synthesis issues associated with FIR time-varying filter banks where the filter bank coefficients are allowed to change in response to the input signal. Several issues are identified as being important in order to realize performance gains from time-varying filter banks in image coding applications. These issues relate to the behavior of the filters as transition from one set of filter banks to another occurs. Lattice structure formulations for the time varying filter bank problem are introduced and discussed in terms of their properties and transition characteristics.
Dynamic rain fade compensation techniques for the advanced communications technology satellite
NASA Technical Reports Server (NTRS)
Manning, Robert M.
1992-01-01
The dynamic and composite nature of propagation impairments that are incurred on earth-space communications links at frequencies in and above the 30/20 GHz Ka band necessitate the use of dynamic statistical identification and prediction processing of the fading signal in order to optimally estimate and predict the levels of each of the deleterious attenuation components. Such requirements are being met in NASA's Advanced Communications Technology Satellite (ACTS) project by the implementation of optimal processing schemes derived through the use of the ACTS Rain Attenuation Prediction Model and nonlinear Markov filtering theory. The ACTS Rain Attenuation Prediction Model discerns climatological variations on the order of 0.5 deg in latitude and longitude in the continental U.S. The time-dependent portion of the model gives precise availability predictions for the 'spot beam' links of ACTS. However, the structure of the dynamic portion of the model, which yields performance parameters such as fade duration probabilities, is isomorphic to the state-variable approach of stochastic control theory and is amenable to the design of such statistical fade processing schemes which can be made specific to the particular climatological location at which they are employed.
HOKF: High Order Kalman Filter for Epilepsy Forecasting Modeling.
Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee
2017-08-01
Epilepsy forecasting has been extensively studied using high-order time series obtained from scalp-recorded electroencephalography (EEG). An accurate seizure prediction system would not only help significantly improve patients' quality of life, but would also facilitate new therapeutic strategies to manage epilepsy. This paper thus proposes an improved Kalman Filter (KF) algorithm to mine seizure forecasts from neural activity by modeling three properties in the high-order EEG time series: noise, temporal smoothness, and tensor structure. The proposed High-Order Kalman Filter (HOKF) is an extension of the standard Kalman filter, for which higher-order modeling is limited. The efficient dynamics of the HOKF system preserve the tensor structure of the observations and latent states. As such, the proposed method offers two main advantages: (i) effectiveness, in that HOKF yields hidden variables that capture major evolving trends suitable to predict neural activity, even in the presence of missing values; and (ii) scalability, in that the wall clock time of the HOKF is linear with respect to the number of time-slices of the sequence. The HOKF algorithm is examined in terms of its effectiveness and scalability by conducting forecasting and scalability experiments with a real epilepsy EEG dataset. The results of the simulation demonstrate the superiority of the proposed method over the original Kalman Filter and other existing methods. Copyright © 2017 Elsevier B.V. All rights reserved.
Dense grid sibling frames with linear phase filters
NASA Astrophysics Data System (ADS)
Abdelnour, Farras
2013-09-01
We introduce new 5-band dyadic sibling frames with a dense time-frequency grid. Given a lowpass filter satisfying certain conditions, the remaining filters are obtained using spectral factorization. The analysis and synthesis filterbanks share the same lowpass and bandpass filters but have different and oversampled highpass filters. This leads to wavelets approximating shift-invariance. The filters are FIR, have linear phase, and the resulting wavelets have vanishing moments. The filters are designed using a spectral factorization method. The proposed method leads to smooth limit functions with higher approximation order, and computationally stable filterbanks.
Electric filter with movable belt electrode
Bergman, W.
1983-09-20
A method and apparatus for removing airborne contaminants entrained in a gas or airstream includes an electric filter characterized by a movable endless belt electrode, a grounded electrode, and a filter medium sandwiched therebetween. Inclusion of the movable, endless belt electrode provides the driving force for advancing the filter medium through the filter, and reduces frictional drag on the filter medium, thereby permitting a wide choice of filter medium materials. Additionally, the belt electrode includes a plurality of pleats in order to provide maximum surface area on which to collect airborne contaminants. 4 figs.
Electric filter with movable belt electrode
Bergman, Werner
1983-01-01
A method and apparatus for removing airborne contaminants entrained in a gas or airstream includes an electric filter characterized by a movable endless belt electrode, a grounded electrode, and a filter medium sandwiched therebetween. Inclusion of the movable, endless belt electrode provides the driving force for advancing the filter medium through the filter, and reduces frictional drag on the filter medium, thereby permitting a wide choice of filter medium materials. Additionally, the belt electrode includes a plurality of pleats in order to provide maximum surface area on which to collect airborne contaminants.
Backus, Sterling J [Erie, CO]; Kapteyn, Henry C [Boulder, CO]
2007-07-10
A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly for each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjusted offline in order to set the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arndt, T.E., Fluor Daniel Hanford
A previous evaluation documented in report WHC-SD-GN-RPT-30005, Rev. 0, titled "Evaluation on Self-Contained High Efficiency Particulate Filters," revealed that the SCHEPA filters do not have the required documentation to be in compliance with the design, testing, and fabrication standards required in ASME N-509, ASME N-510, and MIL-F-51068. These standards are required by DOE Order 6430.1A. Without this documentation, filter adequacy cannot be verified. The existing SCHEPA filters can be removed and replaced with new filters and filter housing which meet current codes and standards.
Simplification of the Kalman filter for meteorological data assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick P.
1991-01-01
The paper proposes a new statistical method of data assimilation that is based on a simplification of the Kalman filter equations. The forecast error covariance evolution is approximated simply by advecting the mass-error covariance field, deriving the remaining covariances geostrophically, and accounting for external model-error forcing only at the end of each forecast cycle. This greatly reduces the cost of computation of the forecast error covariance. In simulations with a linear, one-dimensional shallow-water model and data generated artificially, the performance of the simplified filter is compared with that of the Kalman filter and the optimal interpolation (OI) method. The simplified filter produces analyses that are nearly optimal, and represents a significant improvement over OI.
Braile Vena Cava Filter and Greenfield Filter in Terms of Centralization
de Godoy, José Maria Pereira; Menezes da Silva, Adinaldo A; Reis, Luis Fernando; Miquelin, Daniel; Torati, José Luis Simon
2013-01-01
The aim of this study was to evaluate complications experienced during implantation of the Braile Vena Cava filter (VCF) and the efficacy of the centralization mechanism of the filter. This retrospective cohort study evaluated all Braile Biomédica VCFs implanted from 2004 to 2009 in Hospital de Base Medicine School in São José do Rio Preto, Brazil. Of particular concern was the filter’s symmetry during implantation and complications experienced during the procedure. All the angiographic examinations performed during the implantation of the filters were analyzed in respect to the following parameters: migration of the filter, non-opening or difficulties in the implantation and centralization of the filter. A total of 112 Braile VCFs were implanted and there were no reports of filter opening difficulties or in respect to migration. Asymmetry was observed in 1/112 (0.9%) cases. A statistically significant difference was seen on comparing historical data on decentralization of the Greenfield filter with the data of this study. The Braile Biomédico filter is an evolution of the Greenfield filter providing improved embolus capture and better implantation symmetry. PMID:23459189
A very low noise, high accuracy, programmable voltage source for low frequency noise measurements.
Scandurra, Graziella; Giusi, Gino; Ciofi, Carmine
2014-04-01
In this paper an approach for designing a programmable, very low noise, high accuracy voltage source for biasing devices under test in low frequency noise measurements is proposed. The core of the system is a supercapacitor based two pole low pass filter used for filtering out the noise produced by a standard DA converter down to 100 mHz with an attenuation in excess of 40 dB. The high leakage current of the supercapacitors, however, introduces large DC errors that need to be compensated in order to obtain high accuracy as well as very low output noise. To this end, a proper circuit topology has been developed that makes it possible to considerably reduce the effect of the supercapacitor leakage current on the DC response of the system while maintaining a very low level of output noise. With a proper design an output noise as low as the equivalent input voltage noise of the OP27 operational amplifier, used as the output buffer of the system, can be obtained with DC accuracies better than 0.05% up to the maximum output of 8 V. The expected performances of the proposed voltage source have been confirmed both by means of SPICE simulations and by means of measurements on actual prototypes. Turn-on and stabilization times for the system are of the order of a few hundred seconds. These times are fully compatible with noise measurements down to 100 mHz, since measurement times of the order of several tens of minutes are required in any case in order to reduce the statistical error in the measured spectra down to an acceptable level.
Spacecraft Formation Control and Estimation Via Improved Relative Motion Dynamics
2017-03-30
statistical (e.g. batch least-squares or Extended Kalman Filter) estimator. In addition, the IROD approach can be applied to classical (ground-based ...) covariance. Test the viability of IROD solutions by injecting them into precise orbit determination schemes (e.g. various strains of Kalman filters)
40 CFR 60.386 - Test methods and procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
.... The sample volume for each run shall be at least 1.70 dscm (60 dscf). The sampling probe and filter... probe and filter temperature slightly above the effluent temperature (up to a maximum filter temperature of 121 °C (250 °F)) in order to prevent water condensation on the filter. (2) Method 9 and the...
Volcano plots in analyzing differential expressions with mRNA microarrays.
Li, Wentian
2012-12-01
A volcano plot displays unstandardized signal (e.g. log-fold-change) against noise-adjusted/standardized signal (e.g. t-statistic or -log10(p-value) from the t-test). We review the basic and interactive use of the volcano plot and its crucial role in understanding the regularized t-statistic. The joint filtering gene selection criterion based on regularized statistics has a curved discriminant line in the volcano plot, as compared to the two perpendicular lines for the "double filtering" criterion. This review attempts to provide a unifying framework for discussions on alternative measures of differential expression, improved methods for estimating variance, and visual display of a microarray analysis result. We also discuss the possibility of applying volcano plots to other fields beyond microarray.
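A hedged sketch of such a plot on simulated data is given below; the grey perpendicular lines illustrate the "double filtering" criterion and the dashed curve is one possible curved discriminant (a constant product of |log-fold-change| and -log10 p), used purely as an illustration since the exact curve depends on the regularized statistic chosen.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)
n_genes, n_per_group = 2000, 5
a = rng.normal(0.0, 1.0, (n_genes, n_per_group))   # log-scale expression, group A
b = rng.normal(0.0, 1.0, (n_genes, n_per_group))   # log-scale expression, group B
b[:100] += rng.normal(1.5, 0.5, (100, 1))          # 100 truly changed genes

log_fc = b.mean(axis=1) - a.mean(axis=1)           # log fold change
_, p = stats.ttest_ind(b, a, axis=1)

plt.scatter(log_fc, -np.log10(p), s=4, alpha=0.4)
# "double filtering": two perpendicular cutoff lines (illustrative thresholds)
plt.axvline(1.0, color='grey')
plt.axvline(-1.0, color='grey')
plt.axhline(-np.log10(0.01), color='grey')
# one possible curved discriminant: constant product |log_fc| * (-log10 p) = 2
x = np.linspace(0.1, 4.0, 400)
plt.plot(x, 2.0 / x, 'r--')
plt.plot(-x, 2.0 / x, 'r--')
plt.xlabel('log fold change')
plt.ylabel('-log10(p-value)')
plt.ylim(bottom=0)
plt.show()
```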
Xiao, Liang; Huang, De-sheng; Shen, Jing; Tong, Jia-jie
2012-01-01
To determine whether the introducer curving technique is useful in decreasing the degree of tilting of transfemoral Tulip filters. The study sample group consisted of 108 patients with deep vein thrombosis who were enrolled and planned to undergo thrombolysis, and who accepted the transfemoral Tulip filter insertion procedure. The patients were randomly divided into Group C and Group T. The introducer curving technique was adopted in Group T. The post-implantation filter tilting angle (ACF) was measured in an anteroposterior projection. The retrieval hook adhering to the vascular wall was measured via tangential cavogram during retrieval. The overall average ACF was 5.8 ± 4.14 degrees. In Group C, the average ACF was 7.1 ± 4.52 degrees. In Group T, the average ACF was 4.4 ± 3.20 degrees. The groups displayed a statistically significant difference (t = 3.573, p = 0.001) in ACF. Additionally, the difference of ACF between the left and right approaches turned out to be statistically significant (7.1 ± 4.59 vs. 5.1 ± 3.82, t = 2.301, p = 0.023). The proportion of severe tilt (ACF ≥ 10°) in Group T was significantly lower than that in Group C (9.3% vs. 24.1%, χ² = 4.267, p = 0.039). Between the groups, the difference in the rate of the retrieval hook adhering to the vascular wall was also statistically significant (2.9% vs. 24.2%, χ² = 5.030, p = 0.025). The introducer curving technique appears to minimize the incidence and extent of transfemoral Tulip filter tilting.
Retrievable Inferior vena cava filters in pregnancy: Risk versus benefit?
Crosby, David A; Ryan, Kevin; McEniff, Niall; Dicker, Patrick; Regan, Carmen; Lynch, Caoimhe; Byrne, Bridgette
2018-03-01
Venous thromboembolism remains one of the leading causes of maternal mortality in the developed world. Retrievable inferior vena cava (IVC) filters have a role in the prevention of lethal pulmonary emboli when anticoagulation is contraindicated or has failed [1]. It is unclear whether or not the physiological changes in pregnancy influence efficacy and complications of these devices. The decision to place an IVC filter in pregnancy is complex and there is limited information in terms of benefit and risk to the mother. The objective of this study was to determine the efficacy and safety of these devices in pregnancy and to compare these with rates reported in the general population. The aim of this study was to report three recent cases of retrievable IVC filter use in pregnant women in our department and to perform a systematic review of the literature to identify published cases of filters in pregnancy. The efficacy and complication rates of these devices in pregnancy were estimated and compared to rates reported in the general population in a recent review [2]. Fisher's exact test was used for statistical analysis. In addition to our three cases, 16 publications were identified with retrievable IVC filter use in 40 pregnant women resulting in a total of 43 cases. There was no pulmonary embolus in the pregnant group (0/43) compared to 57/6291 (0.9%) in the general population. Thrombosis of the filter (2.3% vs. 0.9%, p = 0.33) and perforation of the IVC (7.0% vs 4.4%, p = 0.44) were more common in pregnancy compared to the general population but the difference was not statistically significant. Failure to retrieve the filter is more likely to occur in pregnancy (26% vs. 11%, p = 0.006) but this did not correlate with the type of device (p = 0.61), duration of insertion (p = 0.58) or mode of delivery (p = 0.37). Data for retrievable IVC filters in pregnancy are limited and there may be a publication bias towards complicated cases. This study shows that the filter appears to protect against PE in pregnancy but the numbers are small. Complications such as filter thrombosis and IVC penetration appear to be higher in pregnancy but this difference is not statistically significant. It is not possible to retrieve the device in one out of every four pregnant women. This has implications in terms of long term risk of lower limb thrombosis and post thrombotic syndrome. The decision to use an IVC filter in pregnancy needs careful consideration by a multidisciplinary team. The benefit and risk assessment should be individualised and clearly outlined to the patient. Copyright © 2017 Elsevier B.V. All rights reserved.
Filtration effects on ball bearing life and condition in a contaminated lubricant
NASA Technical Reports Server (NTRS)
Loewenthal, S. H.; Moyer, D. W.
1978-01-01
Ball bearings were fatigue tested with a noncontaminated MIL-L-23699 lubricant and with a contaminated MIL-L-23699 lubricant under four levels of filtration. The test filters had absolute particle removal ratings of 3, 30, 49, and 105 microns. Aircraft turbine engine contaminants were injected into the filter's supply line at a constant rate of 125 milligrams per bearing hour. Bearing life and running track condition generally improved with finer filtration. The experimental lives of 3- and 30-micron filter bearings were statistically equivalent, approaching those obtained with the noncontaminated lubricant bearings. Compared to these bearings, the lives of the 49-micron bearings were statistically lower. The 105-micron bearings experienced gross wear. The degree of surface distress, weight loss, and probable failure mode were dependent on filtration level, with finer filtration being clearly beneficial.
Virtual microphone sensing through vibro-acoustic modelling and Kalman filtering
NASA Astrophysics Data System (ADS)
van de Walle, A.; Naets, F.; Desmet, W.
2018-05-01
This work proposes a virtual microphone methodology which enables full field acoustic measurements for vibro-acoustic systems. The methodology employs a Kalman filtering framework in order to combine a reduced high-fidelity vibro-acoustic model with a structural excitation measurement and a small set of real microphone measurements on the system under investigation. By employing model order reduction techniques, a high order finite element model can be converted into a much smaller model which preserves the desired accuracy and maintains the main physical properties of the original model. Due to the low order of the reduced-order model, it can be effectively employed in a Kalman filter. The proposed methodology is validated experimentally on a strongly coupled vibro-acoustic system. The virtual sensor vastly improves the accuracy with respect to regular forward simulation. The virtual sensor also makes it possible to recreate the full sound field of the system, which is very difficult or impossible to obtain through classical measurements.
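A generic sketch of the estimation loop is shown below, assuming the reduced-order model supplies discrete-time matrices A, B, a measured-microphone output map C_meas and a full-field output map C_full; this is a standard Kalman predict/update step, not the authors' specific implementation.

```python
import numpy as np

def kf_step(x, P, u, y_meas, A, B, C_meas, Q, R):
    """One predict/update step of a discrete Kalman filter on the reduced
    vibro-acoustic state.  A, B: reduced dynamics; C_meas: output map to the
    few real microphones; Q, R: process/measurement noise covariances."""
    # predict using the measured structural excitation u
    x = A @ x + B @ u
    P = A @ P @ A.T + Q
    # update with the real microphone measurements
    S = C_meas @ P @ C_meas.T + R
    K = P @ C_meas.T @ np.linalg.inv(S)
    x = x + K @ (y_meas - C_meas @ x)
    P = (np.eye(len(x)) - K @ C_meas) @ P
    return x, P

def virtual_microphones(x, C_full):
    """Reconstruct the full sound field (virtual microphones) from the
    estimated reduced state via the full output matrix of the ROM."""
    return C_full @ x
```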
Hidden Markov model tracking of continuous gravitational waves from young supernova remnants
NASA Astrophysics Data System (ADS)
Sun, L.; Melatos, A.; Suvorova, S.; Moran, W.; Evans, R. J.
2018-02-01
Searches for persistent gravitational radiation from nonpulsating neutron stars in young supernova remnants are computationally challenging because of rapid stellar braking. We describe a practical, efficient, semicoherent search based on a hidden Markov model tracking scheme, solved by the Viterbi algorithm, combined with a maximum likelihood matched filter, the F statistic. The scheme is well suited to analyzing data from advanced detectors like the Advanced Laser Interferometer Gravitational Wave Observatory (Advanced LIGO). It can track rapid phase evolution from secular stellar braking and stochastic timing noise torques simultaneously without searching second- and higher-order derivatives of the signal frequency, providing an economical alternative to stack-slide-based semicoherent algorithms. One implementation tracks the signal frequency alone. A second implementation tracks the signal frequency and its first time derivative. It improves the sensitivity by a factor of a few upon the first implementation, but the cost increases by 2 to 3 orders of magnitude.
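A toy version of the tracking scheme: hidden states are frequency bins, the frequency may stay or move one bin between coherent segments, and per-segment log-likelihoods (playing the role of the F-statistic) are combined by the Viterbi algorithm. The transition probabilities and emission model here are illustrative assumptions, not the search pipeline's values.

```python
import numpy as np

def viterbi_track(log_emission, log_p_stay=np.log(0.5), log_p_move=np.log(0.25)):
    """log_emission: (n_segments, n_bins) array of per-segment log-likelihoods
    (e.g. derived from a detection statistic).  The hidden frequency may stay
    in its bin or move to an adjacent bin between segments.  Returns the most
    probable bin path (Viterbi)."""
    n_seg, n_bins = log_emission.shape
    score = log_emission[0].copy()
    back = np.zeros((n_seg, n_bins), dtype=int)
    for t in range(1, n_seg):
        stay = score + log_p_stay
        left = np.full(n_bins, -np.inf);  left[1:] = score[:-1] + log_p_move
        right = np.full(n_bins, -np.inf); right[:-1] = score[1:] + log_p_move
        choices = np.vstack([stay, left, right])        # predecessor: same, j-1, j+1
        best = choices.argmax(axis=0)
        score = choices[best, np.arange(n_bins)] + log_emission[t]
        back[t] = np.where(best == 0, np.arange(n_bins),
                  np.where(best == 1, np.arange(n_bins) - 1, np.arange(n_bins) + 1))
    path = np.empty(n_seg, dtype=int)
    path[-1] = int(score.argmax())
    for t in range(n_seg - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path
```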
An empirical comparison of SPM preprocessing parameters to the analysis of fMRI data.
Della-Maggiore, Valeria; Chau, Wilkin; Peres-Neto, Pedro R; McIntosh, Anthony R
2002-09-01
We present the results from two sets of Monte Carlo simulations aimed at evaluating the robustness of some preprocessing parameters of SPM99 for the analysis of functional magnetic resonance imaging (fMRI). Statistical robustness was estimated by implementing parametric and nonparametric simulation approaches based on the images obtained from an event-related fMRI experiment. Simulated datasets were tested for combinations of the following parameters: basis function, global scaling, low-pass filter, high-pass filter and autoregressive modeling of serial autocorrelation. Based on single-subject SPM analysis, we derived the following conclusions that may serve as a guide for initial analysis of fMRI data using SPM99: (1) The canonical hemodynamic response function is a more reliable basis function to model the fMRI time series than HRF with time derivative. (2) Global scaling should be avoided since it may significantly decrease the power depending on the experimental design. (3) The use of a high-pass filter may be beneficial for event-related designs with fixed interstimulus intervals. (4) When dealing with fMRI time series with short interstimulus intervals (<8 s), the use of a first-order autoregressive model is recommended over a low-pass filter (HRF) because it reduces the risk of inferential bias while providing relatively good power. For datasets with interstimulus intervals longer than 8 seconds, temporal smoothing is not recommended since it decreases power. While the generalizability of our results may be limited, the methods we employed can be easily implemented by other scientists to determine the best parameter combination to analyze their data.
Analytically solvable chaotic oscillator based on a first-order filter.
Corron, Ned J; Cooper, Roy M; Blakely, Jonathan N
2016-02-01
A chaotic hybrid dynamical system is introduced and its analytic solution is derived. The system is described as an unstable first order filter subject to occasional switching of a set point according to a feedback rule. The system qualitatively differs from other recently studied solvable chaotic hybrid systems in that the timing of the switching is regulated by an external clock. The chaotic analytic solution is an optimal waveform for communications in noise when a resistor-capacitor-integrate-and-dump filter is used as a receiver. As such, these results provide evidence in support of a recent conjecture that the optimal communication waveform for any stable infinite-impulse response filter is chaotic.
Analytically solvable chaotic oscillator based on a first-order filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corron, Ned J.; Cooper, Roy M.; Blakely, Jonathan N.
2016-02-15
A chaotic hybrid dynamical system is introduced and its analytic solution is derived. The system is described as an unstable first order filter subject to occasional switching of a set point according to a feedback rule. The system qualitatively differs from other recently studied solvable chaotic hybrid systems in that the timing of the switching is regulated by an external clock. The chaotic analytic solution is an optimal waveform for communications in noise when a resistor-capacitor-integrate-and-dump filter is used as a receiver. As such, these results provide evidence in support of a recent conjecture that the optimal communication waveform for any stable infinite-impulse response filter is chaotic.
An Exploration of Trainer Filtering Approaches
NASA Technical Reports Server (NTRS)
Hester, Patrick; Tolk, Andreas; Gadi, Sandeep; Carver, Quinn; Roland, Philippe
2011-01-01
Simulator operators face a twofold entity management problem during Live-Virtual-Constructive (LVC) training events. They first must filter potentially hundreds of thousands of simulation entities in order to determine which elements are necessary for optimal trainee comprehension. Secondarily, they must manage the number of entities entering the simulation from those present in the object model in order to limit the computational burden on the simulation system and prevent unnecessary entities from entering the simulation. This paper focuses on the first filtering stage and describes a novel approach to entity filtering undertaken to maximize trainee awareness and learning. The feasibility of this novel approach is demonstrated on a case study and limitations to the proposed approach and future work are discussed.
Efficient composite broadband polarization retarders and polarization filters
NASA Astrophysics Data System (ADS)
Dimova, E.; Ivanov, S. S.; Popkirov, G.; Vitanov, N. V.
2014-12-01
A new type of broadband polarization half-wave retarder and narrowband polarization filters are described and experimentally tested. Both the retarders and the filters are designed as composite stacks of standard optical half-wave plates, each of them twisted at specific angles. The theoretical background of the proposed optical devices was obtained by analogy with the method of composite pulses, known from nuclear and quantum physics. We show that by combining two composite filters built from different numbers and types of waveplates, the transmission bandwidth is reduced from about 700 nm to about 10 nm. We experimentally demonstrate that this method can be applied to different types of waveplates (broadband, zero-order, multiple order, etc.).
A flexible curvilinear electromagnetic filter for direct current cathodic arc source.
Dai, Hua; Shen, Yao; Li, Liuhe; Li, Xiaoling; Cai, Xun; Chu, Paul K
2007-09-01
Widespread applications of direct current (dc) cathodic arc deposition are hampered by macroparticle (MP) contamination, although a cathodic arc offers many unique merits such as high ionization rate, high deposition rate, etc. In this work, a flexible curvilinear electromagnetic filter is described to eliminate MPs from a dc cathodic arc source. The filter, which has a relatively large size with a minor radius of about 85 mm, is suitable for large cathodes. The filter is open and so the MPs do not rebound inside the filter. The flexible design allows the ions to be transported from the cathode to the sample surface optimally. Our measurements with a saturated ion current probe show that the efficiency of this flexible filter reaches about 2.0% (aluminum cathode) when the filter current is about 250 A. The MP density measured from TiN films deposited using this filter is two to three orders of magnitude less than that from films deposited with a 90° duct magnetic filter and three to four orders of magnitude smaller than those deposited without a filter. Furthermore, our experiments reveal that the potential of the filter coil and the magnetic field on the surface of the cathode are two important factors affecting the efficacy of the filter. Different biasing potentials can enhance the efficiency by up to 12-fold, and a magnetic field at about 4.0 mT can improve it by a factor of 2 compared to 5.4 mT.
Filter Tuning Using the Chi-Squared Statistic
NASA Technical Reports Server (NTRS)
Lilly-Salkowski, Tyler
2017-01-01
The Goddard Space Flight Center (GSFC) Flight Dynamics Facility (FDF) performs orbit determination (OD) for the Aqua and Aura satellites. Both satellites are located in low Earth orbit (LEO), and are part of what is considered the A-Train satellite constellation. Both spacecraft are currently in the science phase of their respective missions. The FDF has recently been tasked with delivering definitive covariance for each satellite. The main source of orbit determination used for these missions is the Orbit Determination Toolkit developed by Analytical Graphics Inc. (AGI). This software uses an Extended Kalman Filter (EKF) to estimate the states of both spacecraft. The filter incorporates force modelling, ground station and space network measurements to determine spacecraft states. It also generates a covariance at each measurement. This covariance can be useful for evaluating the overall performance of the tracking data measurements and the filter itself. An accurate covariance is also useful for covariance propagation which is utilized in collision avoidance operations. It is also valuable when attempting to determine if the current orbital solution will meet mission requirements in the future. This paper examines the use of the Chi-square statistic as a means of evaluating filter performance. The Chi-square statistic is calculated to determine the realism of a covariance based on the prediction accuracy and the covariance values at a given point in time. Once calculated, it is the distribution of this statistic that provides insight on the accuracy of the covariance. For the EKF to correctly calculate the covariance, error models associated with tracking data measurements must be accurately tuned. Overestimating or underestimating these error values can have detrimental effects on the overall filter performance. The filter incorporates ground station measurements, which can be tuned based on the accuracy of the individual ground stations. It also includes measurements from the NASA space network (SN), which can be affected by the assumed accuracy of the TDRS satellite state at the time of the measurement. The force modelling in the EKF is also an important factor that affects the propagation accuracy and covariance sizing. The dominant force in the LEO orbit regime is the drag force caused by atmospheric drag. Accurate accounting of the drag force is especially important for the accuracy of the propagated state. The implementation of a box and wing model to improve drag estimation accuracy, and its overall effect on the covariance state is explored. The process of tuning the EKF for Aqua and Aura support is described, including examination of the measurement errors of available observation types (Doppler and range), and methods of dealing with potentially volatile atmospheric drag modeling. Predictive accuracy and the distribution of the Chi-square statistic, calculated based on the ODTK EKF solutions, are assessed versus accepted norms for the orbit regime.
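One common way to form such a Chi-square statistic for covariance realism is the normalized estimation error squared (NEES), computed against a precise reference solution; a hedged sketch (the paper's exact formulation may differ):

```python
import numpy as np
from scipy import stats

def nees(x_est, P, x_ref):
    """Normalized estimation error squared: chi-square distributed with
    dim(x) degrees of freedom when the filter covariance is realistic."""
    e = np.asarray(x_ref, float) - np.asarray(x_est, float)
    return float(e @ np.linalg.solve(P, e))

def covariance_realism(nees_samples, dof, alpha=0.05):
    """Check whether per-epoch NEES values are consistent with the
    chi-square(dof) distribution, via the fraction falling inside the
    central (1 - alpha) probability interval."""
    lo, hi = stats.chi2.ppf([alpha / 2, 1 - alpha / 2], dof)
    inside = np.mean([(lo <= s <= hi) for s in nees_samples])
    return inside, (lo, hi)
```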
Student understanding of first order RC filters
NASA Astrophysics Data System (ADS)
Coppens, Pieter; Van den Bossche, Johan; De Cock, Mieke
2017-12-01
A series of interviews with second-year electronics engineering students revealed several problems with understanding first-order RC filters. To explore how widespread these problems are, a questionnaire was administered to over 150 students in Belgium. One question asked students to rank the output voltage of a low-pass filter with an AC or DC input signal, while a second asked them to rank the output voltages of a high-pass filter with doubled or halved resistor and capacitor values. In addition to a discussion of the rankings and students' consistency, the results are compared to the most common reasoning patterns students used to explain their rankings. Despite lecture and laboratory instruction, students not only rarely recognize the circuits as filters, but also fail to correctly apply Kirchhoff's laws and Ohm's law to arrive at a correct answer.
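For reference, the standard first-order RC relations that underlie the two ranking tasks are summarized below; these are textbook results, not items taken from the questionnaire itself.

```latex
% Low-pass RC filter: the gain falls with frequency, so a DC input passes
% essentially unattenuated while a sufficiently fast AC input is suppressed.
\[
  \left|\frac{V_\mathrm{out}}{V_\mathrm{in}}\right|
  = \frac{1}{\sqrt{1+(\omega R C)^2}},
  \qquad
  f_c = \frac{1}{2\pi R C}.
\]
% High-pass RC filter: doubling both R and C lowers the cutoff frequency by
% a factor of four (f_c -> f_c/4); halving both raises it by the same factor.
```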
Low-loss lateral-extensional piezoelectric filters on ultrananocrystalline diamond.
Fatemi, Hediyeh; Abdolvand, Reza
2013-09-01
In this work, lateral-extensional thin-film piezoelectric-on-diamond (TPoD) filters with very low insertion loss (IL) values (<4 dB) are reported. Two different lateral-extensional modes of a resonant structure are coupled together to realize a two-pole filter. The filters of this work exhibit low IL values, with fractional bandwidths between 0.08% and 0.2%, and have a very small footprint. This paper reports the lowest IL in the literature for lateral-extensional thin-film piezoelectric filters with 50 Ω terminations in the GSM frequency band (~900 MHz). The narrow-band filters of this work are fabricated on three ultrananocrystalline diamond substrates to achieve higher frequencies without excessive reduction in the feature size. The paper also thoroughly studies the parameters that affect the performance of such filters, and the discussion is supported by statistical data collected from the fabricated wafers.
Improving immunization of programmable logic controllers using weighted median filters.
Paredes, José L; Díaz, Dhionel
2005-04-01
This paper addresses the problem of improving the immunization of programmable logic controllers (PLCs) to electromagnetic interference with impulsive characteristics. A filtering structure based on weighted median filters is proposed that requires no additional hardware and can be implemented in legacy PLCs. The filtering operation is implemented in the binary domain and removes impulsive noise present at the discrete input, thus adding robustness to PLCs. By modifying the sampling clock structure, two variants of the filter are obtained. Both structures exploit the cyclic nature of the PLC to form an N-sample observation window of the discrete input; a status change is therefore determined by the filter output over all N samples, which prevents a single impulse from affecting the PLC functionality. A comparative study of the different filters' performances, based on a statistical analysis, is presented.
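A minimal sketch of the idea, assuming a simple 0/1 input and an illustrative 5-sample window with hypothetical weights (not the weights or clocking scheme of the paper): the weighted median over the window suppresses an isolated impulse while a sustained status change still propagates to the output.

```python
def weighted_median(samples, weights):
    """Weighted median: sort the samples, accumulate their weights, and
    return the first sample at which the running sum reaches half the
    total weight."""
    pairs = sorted(zip(samples, weights))
    half = sum(weights) / 2.0
    acc = 0.0
    for value, w in pairs:
        acc += w
        if acc >= half:
            return value
    return pairs[-1][0]

def filter_binary_input(bits, weights):
    """Slide an N-sample window over a binary (0/1) input and replace each
    sample by the weighted median of its window, suppressing isolated
    impulses while preserving genuine status changes (with some delay)."""
    n = len(weights)
    out = []
    window = [bits[0]] * n               # pre-fill with the initial state
    for b in bits:
        window = window[1:] + [b]
        out.append(weighted_median(window, weights))
    return out

# Example: a single noise impulse in an otherwise constant input is removed,
# while the later sustained change to 1 eventually appears at the output.
print(filter_binary_input([0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1], [1, 1, 1, 2, 1]))
```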
Duirk, Stephen E; Bridenstine, David R; Leslie, Daniel C
2013-02-01
The transformation of two benzophenone UV filters (Oxybenzone and Dioxybenzone) was examined over the pH range 6-11 in the presence of excess aqueous chlorine. Under these conditions, both UV filters were rapidly transformed by aqueous chlorine just above circumneutral pH while transformation rates were significantly lower near the extremes of the pH range investigated. Observed first-order rate coefficients (k(obs)) were obtained at each pH for aqueous chlorine concentrations ranging from 10 to 75 μM. The k(obs) were used to determine the apparent second-order rate coefficient (k(app)) at each pH investigated as well as determine the reaction order of aqueous chlorine with each UV filter. The reaction of aqueous chlorine with either UV filter was found to be an overall second-order reaction, first-order with respect to each reactant. Assuming elemental stoichiometry described the reaction between aqueous chlorine and each UV filter, models were developed to determine intrinsic rate coefficients (k(int)) from the k(app) as a function of pH for both UV filters. The rate coefficients for the reaction of HOCl with 3-methoxyphenol moieties of oxybenzone (OXY) and dioxybenzone (DiOXY) were k(1,OxY) = 306 ± 81 M⁻¹s⁻¹ and k(1,DiOxY) = 154 ± 76 M⁻¹s⁻¹, respectively. The k(int) for the reaction of aqueous chlorine with the 3-methoxyphenolate forms were orders of magnitude greater than the un-ionized species, k(2,OxY) = 1.03(±0.52) × 10⁶ M⁻¹s⁻¹ and k(2_1,DiOxY) = 4.14(±0.68) × 10⁵ M⁻¹s⁻¹. Also, k(int) for the reaction of aqueous chlorine with the DiOXY ortho-substituted phenolate moiety was k(2_2,DiOxY) = 2.17(±0.30) × 10³ M⁻¹s⁻¹. Finally, chloroform formation potential for OXY and DiOXY was assessed over the pH range 6-10. While chloroform formation decreased as pH increased for OXY, chloroform formation increased as pH increased from 6 to 10 for DiOXY. Ultimate molar yields of chloroform per mole of UV filter were pH dependent; however, chloroform to UV filter molar yields at pH 8 were 0.221 CHCl₃/OXY and 0.212 CHCl₃/DiOXY. Copyright © 2012 Elsevier Ltd. All rights reserved.
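The kinetic treatment summarized in the abstract follows the standard excess-oxidant formulation; in hedged outline (with [HOCl]_T denoting the total aqueous chlorine concentration, notation assumed here):

```latex
% With aqueous chlorine in large excess, the loss of the UV filter is
% pseudo-first-order, and the observed rate coefficient scales linearly
% with the total chlorine concentration:
\[
  -\frac{d[\mathrm{UV}]}{dt} = k_\mathrm{app}\,[\mathrm{HOCl}]_T\,[\mathrm{UV}]
  \quad\Rightarrow\quad
  k_\mathrm{obs} = k_\mathrm{app}\,[\mathrm{HOCl}]_T .
\]
% The pH dependence of k_app then reflects the speciation of both chlorine
% (HOCl/OCl-) and the phenolic moieties (ArOH/ArO-), which is what allows
% intrinsic coefficients k_int for the individual species to be extracted.
```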
Microwave active filters based on coupled negative resistance method
NASA Astrophysics Data System (ADS)
Chang, Chi-Yang; Itoh, Tatsuo
1990-12-01
A novel coupled negative resistance method for building a microwave active bandpass filter is introduced. Based on this method, four microstrip line end-coupled filters were built. Two are fixed-frequency one-pole and two-pole filters, and two are tunable one-pole and two-pole filters. In order to broaden the bandwidth of the end-coupled filter, a modified end-coupled structure is proposed. Using the modified structure, an active filter with a bandwidth up to 7.5 percent was built. All of the filters show significant passband performance improvement. Specifically, the passband bandwidth was broadened by a factor of 5 to 20.
Adaptive spatial filtering for daytime satellite quantum key distribution
NASA Astrophysics Data System (ADS)
Gruneisen, Mark T.; Sickmiller, Brett A.; Flanagan, Michael B.; Black, James P.; Stoltenberg, Kurt E.; Duchane, Alexander W.
2014-11-01
The rate of secure key generation (SKG) in quantum key distribution (QKD) is adversely affected by optical noise and loss in the quantum channel. In a free-space atmospheric channel, the scattering of sunlight into the channel can lead to quantum bit error ratios (QBERs) sufficiently large to preclude SKG. Furthermore, atmospheric turbulence limits the degree to which spatial filtering can reduce sky noise without introducing signal losses. A system simulation quantifies the potential benefit of tracking and higher-order adaptive optics (AO) technologies to SKG rates in a daytime satellite engagement scenario. The simulations are performed assuming propagation from a low-Earth orbit (LEO) satellite to a terrestrial receiver that includes an AO system comprised of a Shack-Hartmann wave-front sensor (SHWFS) and a continuous-face-sheet deformable mirror (DM). The effects of atmospheric turbulence, tracking, and higher-order AO on the photon capture efficiency are simulated using statistical representations of turbulence and a time-domain waveoptics hardware emulator. Secure key generation rates are then calculated for the decoy state QKD protocol as a function of the receiver field of view (FOV) for various pointing angles. The results show that at FOVs smaller than previously considered, AO technologies can enhance SKG rates in daylight and even enable SKG where it would otherwise be prohibited as a consequence of either background optical noise or signal loss due to turbulence effects.
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-05-13
In this paper, a novel nonlinear smoothing framework, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy while taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted in order to compute the probability density function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the non-Gaussian proposal distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated based on real-world data collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS significantly improves vehicle state accuracy and outperforms the existing filtering and smoothing methods.
SoFoCles: feature filtering for microarray classification based on gene ontology.
Papachristoudis, Georgios; Diplaris, Sotiris; Mitkas, Pericles A
2010-02-01
Marker gene selection has been an important research topic in the classification analysis of gene expression data. Current methods try to reduce the "curse of dimensionality" by using statistical intra-feature set calculations, or classifiers that are based on the given dataset. In this paper, we present SoFoCles, an interactive tool that enables semantic feature filtering in microarray classification problems with the use of external, well-defined knowledge retrieved from the Gene Ontology. The notion of semantic similarity is used to derive genes that are involved in the same biological path during the microarray experiment, by enriching a feature set that has been initially produced with legacy methods. Among its other functionalities, SoFoCles offers a large repository of semantic similarity methods that are used in order to derive feature sets and marker genes. The structure and functionality of the tool are discussed in detail, as well as its ability to improve classification accuracy. Through experimental evaluation, SoFoCles is shown to outperform other classification schemes in terms of classification accuracy in two real datasets using different semantic similarity computation approaches.
Robust automatic line scratch detection in films.
Newson, Alasdair; Almansa, Andrés; Gousseau, Yann; Pérez, Patrick
2014-03-01
Line scratch detection in old films is a particularly challenging problem due to the variable spatiotemporal characteristics of this defect. Some of the main problems include sensitivity to noise and texture, and false detections due to thin vertical structures belonging to the scene. We propose a robust and automatic algorithm for frame-by-frame line scratch detection in old films, as well as a temporal algorithm for the filtering of false detections. In the frame-by-frame algorithm, we relax some of the hypotheses used in previous algorithms in order to detect a wider variety of scratches. This step's robustness and lack of external parameters are ensured by the combined use of an a contrario methodology and local statistical estimation. In this manner, over-detection in textured or cluttered areas is greatly reduced. The temporal filtering algorithm eliminates false detections due to thin vertical structures by exploiting the coherence of their motion with that of the underlying scene. Experiments demonstrate the ability of the resulting detection procedure to deal with difficult situations, in particular in the presence of noise, texture, and slanted or partial scratches. Comparisons show significant advantages over previous work.
Van Dijck, Gert; Van Hulle, Marc M.
2011-01-01
The damage caused by corrosion in chemical process installations can lead to unexpected plant shutdowns and the leakage of potentially toxic chemicals into the environment. When subjected to corrosion, structural changes in the material occur, leading to energy releases as acoustic waves. This acoustic activity can in turn be used for corrosion monitoring, and even for predicting the type of corrosion. Here we apply wavelet packet decomposition to extract features from acoustic emission signals. We then use the extracted wavelet packet coefficients for distinguishing between the most important types of corrosion processes in the chemical process industry: uniform corrosion, pitting and stress corrosion cracking. The local discriminant basis selection algorithm can be considered as a standard for the selection of the most discriminative wavelet coefficients. However, it does not take the statistical dependencies between wavelet coefficients into account. We show that, when these dependencies are ignored, a lower accuracy is obtained in predicting the corrosion type. We compare several mutual information filters to take these dependencies into account in order to arrive at a more accurate prediction. PMID:22163921
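A minimal sketch of wavelet-packet feature extraction of this kind, using the PyWavelets package; the wavelet family ('db4'), the decomposition depth, and the normalized node-energy features are illustrative choices, not the study's exact settings, and the mutual-information-based coefficient selection discussed above is not reproduced here.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_packet_energy_features(signal, wavelet="db4", level=4):
    """Decompose an acoustic-emission signal into a full wavelet packet
    tree and return the normalized energy of each terminal node, a simple
    feature vector for corrosion-type classification."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")   # terminal nodes, low -> high frequency
    energies = np.array([np.sum(node.data ** 2) for node in nodes])
    return energies / energies.sum()

# Example with a synthetic burst-like signal.
t = np.linspace(0, 1, 4096)
burst = np.exp(-200 * (t - 0.5) ** 2) * np.sin(2 * np.pi * 400 * t)
features = wavelet_packet_energy_features(burst)
print(features.shape)   # (16,) for a depth-4 decomposition
```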
Enhancement of event related potentials by iterative restoration algorithms
NASA Astrophysics Data System (ADS)
Pomalaza-Raez, Carlos A.; McGillem, Clare D.
1986-12-01
An iterative procedure for the restoration of event related potentials (ERP) is proposed and implemented. The method makes use of assumed or measured statistical information about latency variations in the individual ERP components. The signal model used for the restoration algorithm consists of a time-varying linear distortion and a positivity/negativity constraint. Additional preprocessing in the form of low-pass filtering is needed in order to mitigate the effects of additive noise. Numerical results obtained with real data show clearly the presence of enhanced and regenerated components in the restored ERP's. The procedure is easy to implement which makes it convenient when compared to other proposed techniques for the restoration of ERP signals.
Correction of defective pixels for medical and space imagers based on Ising Theory
NASA Astrophysics Data System (ADS)
Cohen, Eliahu; Shnitser, Moriel; Avraham, Tsvika; Hadar, Ofer
2014-09-01
We propose novel models for image restoration based on statistical physics. We investigate the affinity between these fields and describe a framework from which interesting denoising algorithms can be derived: Ising-like models and simulated annealing techniques. When combined with known predictors such as Median and LOCO-I, these models become even more effective. To further examine the proposed models, we apply them to two important problems: (i) digital cameras in space damaged by cosmic radiation; (ii) ultrasonic medical devices degraded by speckle noise. The results, as well as benchmarks and comparisons, suggest in most cases a significant gain in PSNR and SSIM in comparison to other filters.
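A minimal sketch of an Ising-like restoration step for a binary image, assuming illustrative coupling parameters and using iterated conditional modes rather than full simulated annealing for brevity; the combination with predictors such as Median or LOCO-I described above is not reproduced here.

```python
import numpy as np

def ising_icm_denoise(noisy, beta=2.0, eta=1.5, sweeps=10):
    """Denoise a binary image (values in {-1, +1}) by greedily maximizing an
    Ising-like posterior: beta couples each pixel to its 4 neighbours
    (smoothness prior) and eta couples it to the observed noisy value
    (data fidelity).  Iterated conditional modes (ICM) is used here in
    place of a full simulated-annealing schedule."""
    x = noisy.copy()
    h, w = x.shape
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                nb = 0
                if i > 0:     nb += x[i - 1, j]
                if i < h - 1: nb += x[i + 1, j]
                if j > 0:     nb += x[i, j - 1]
                if j < w - 1: nb += x[i, j + 1]
                # choose the sign that maximizes the local posterior
                x[i, j] = 1 if beta * nb + eta * noisy[i, j] >= 0 else -1
    return x

# Example: a flat +1 image with 10% of its pixels flipped by impulsive noise.
rng = np.random.default_rng(0)
clean = np.ones((64, 64), dtype=int)
noisy = np.where(rng.random(clean.shape) < 0.1, -clean, clean)
restored = ising_icm_denoise(noisy)
print((restored != clean).mean())   # residual error rate, typically ~0
```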
Rocking filters fabricated in birefringent photonic crystal fiber
NASA Astrophysics Data System (ADS)
Statkiewicz-Barabach, Gabriela; Anuszkiewicz, Alicja; Urbanczyk, Waclaw; Wojcik, Jan
2008-12-01
We demonstrate an efficient higher-order rocking filter, which resonantly couples polarization modes guided in birefringent photonic crystal fibers. The grating was inscribed in the birefringent fiber with two large holes adjacent to the core by periodic mechanical twisting and heating with an arc fusion splicer. Because in photonic crystal fibers the phase birefringence is very dispersive and increases with wavelength, phase matching between coupled modes can be obtained simultaneously at several wavelengths. In particular, we demonstrate that for the grating period Λ = 8 mm, resonant coupling can be obtained at three different wavelengths. The first-order coupling (-13 dB) is obtained for Λ = LB (LB being the polarization beat length); this condition is fulfilled at λ = 856 nm. The second-order coupling (-20 dB) is obtained for Λ = 2LB at λ = 1270 nm, and the third-order coupling (-17 dB) occurs for Λ = 3LB at λ = 1623 nm. The length of the filter was 9.6 cm, which corresponds to 13 periodic twists. We also present the results of sensitivity measurements of this filter to hydrostatic pressure and temperature.
SPA- STATISTICAL PACKAGE FOR TIME AND FREQUENCY DOMAIN ANALYSIS
NASA Technical Reports Server (NTRS)
Brownlow, J. D.
1994-01-01
The need for statistical analysis often arises when data is in the form of a time series. This type of data is usually a collection of numerical observations made at specified time intervals. Two kinds of analysis may be performed on the data. First, the time series may be treated as a set of independent observations using a time domain analysis to derive the usual statistical properties including the mean, variance, and distribution form. Secondly, the order and time intervals of the observations may be used in a frequency domain analysis to examine the time series for periodicities. In almost all practical applications, the collected data is actually a mixture of the desired signal and a noise signal which is collected over a finite time period with a finite precision. Therefore, any statistical calculations and analyses are actually estimates. The Spectrum Analysis (SPA) program was developed to perform a wide range of statistical estimation functions. SPA can provide the data analyst with a rigorous tool for performing time and frequency domain studies. In a time domain statistical analysis, the SPA program will compute the mean, variance, standard deviation, mean square, and root mean square. It also lists the data maximum, data minimum, and the number of observations included in the sample. In addition, a histogram of the time domain data is generated, a normal curve is fit to the histogram, and a goodness-of-fit test is performed. These time domain calculations may be performed on both raw and filtered data. For a frequency domain statistical analysis, the SPA program computes the power spectrum, cross spectrum, coherence, phase angle, amplitude ratio, and transfer function. The estimates of the frequency domain parameters may be smoothed with the use of Hann-Tukey, Hamming, Bartlett, or moving average windows. Various digital filters are available to isolate data frequency components. Frequency components with periods longer than the data collection interval are removed by least-squares detrending. As many as ten channels of data may be analyzed at one time. Both tabular and plotted output may be generated by the SPA program. This program is written in FORTRAN IV and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 142K (octal) of 60-bit words. This core requirement can be reduced by segmentation of the program. The SPA program was developed in 1978.
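A rough modern sketch of the kinds of estimates SPA produces, written in Python with NumPy/SciPy rather than the original FORTRAN IV; the window choice, segment length, and detrending shown here are illustrative.

```python
import numpy as np
from scipy.signal import welch, detrend

def time_domain_summary(x):
    """Basic time-domain statistics of a sampled record."""
    x = np.asarray(x, dtype=float)
    return {
        "mean": x.mean(),
        "variance": x.var(ddof=1),
        "std": x.std(ddof=1),
        "mean_square": np.mean(x ** 2),
        "rms": np.sqrt(np.mean(x ** 2)),
        "min": x.min(),
        "max": x.max(),
        "n": x.size,
    }

def power_spectrum(x, fs, window="hamming", nperseg=256):
    """Smoothed power-spectrum estimate after least-squares (linear)
    detrending, analogous to SPA's windowed frequency-domain analysis."""
    x = detrend(np.asarray(x, dtype=float), type="linear")
    return welch(x, fs=fs, window=window, nperseg=nperseg)

# Example: a 5 Hz tone in noise, sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 20, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)
print(time_domain_summary(x)["rms"])
f, Pxx = power_spectrum(x, fs)
print(f[np.argmax(Pxx)])   # should be close to 5 Hz
```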
Zhu, Wei; Wang, Wei; Yuan, Gannan
2016-06-01
In order to improve the tracking accuracy, model estimation accuracy and response speed of multiple-model maneuvering target tracking, the interacting multiple models fifth-degree cubature Kalman filter (IMM5CKF) is proposed in this paper. In the proposed algorithm, the interacting multiple models (IMM) algorithm processes all the models through a Markov chain to simultaneously enhance the model tracking accuracy of target tracking. A fifth-degree cubature Kalman filter (5CKF) then evaluates the surface integral by a higher-degree but deterministic odd-order spherical cubature rule to improve the tracking accuracy and the model switch sensitivity of the IMM algorithm. Finally, the simulation results demonstrate that the proposed algorithm exhibits quick and smooth switching when handling different maneuver models, and it also performs better than the interacting multiple models cubature Kalman filter (IMMCKF), the interacting multiple models unscented Kalman filter (IMMUKF), the 5CKF and the optimal mode transition matrix IMM (OMTM-IMM).
Femtosecond pulse inscription of a selective mode filter in large mode area fibers
NASA Astrophysics Data System (ADS)
Krämer, Ria G.; Voigtländer, Christian; Freier, Erik; Liem, Andreas; Thomas, Jens U.; Richter, Daniel; Schreiber, Thomas; Tünnermann, Andreas; Nolte, Stefan
2013-02-01
We present a selective mode filter inscribed with ultrashort pulses directly into a few mode large mode area (LMA) fiber. The mode filter consists of two refractive index modifications alongside the fiber core in the cladding. The refractive index modifications, which were of approximately the same order of magnitude as the refractive index difference between core and cladding have been inscribed by nonlinear absorption of femtosecond laser pulses (800 nm wavelength, 120 fs pulse duration). If light is guided in the core, it will interact with the inscribed modifications causing modes to be coupled out of the core. In order to characterize the mode filter, we used a femtosecond inscribed fiber Bragg grating (FBG), which acts as a wavelength and therefore mode selective element in the LMA fiber. Since each mode has different Bragg reflection wavelengths, an FBG in a multimode fiber will exhibit multiple Bragg reflection peaks. In our experiments, we first inscribed the FBG using the phase mask scanning technique. Then the mode filter was inscribed. The reflection spectrum of the FBG was measured in situ during the inscription process using a supercontinuum source. The reflectivities of the LP01 and LP11 modes show a dependency on the length of the mode filter. Two stages of the filter were obtained: one, in which the LP11 mode was reduced by 60% and one where the LP01 mode was reduced by 80%. The other mode respectively showed almost no losses. In conclusion, we could selectively filter either the fundamental or higher order modes.
Individual snag detection using neighborhood attribute filtered airborne lidar data
Brian M. Wing; Martin W. Ritchie; Kevin Boston; Warren B. Cohen; Michael J. Olsen
2015-01-01
The ability to estimate and monitor standing dead trees (snags) has been difficult due to their irregular and sparse distribution, often requiring intensive sampling methods to obtain statistically significant estimates. This study presents a new method for estimating and monitoring snags using neighborhood attribute filtered airborne discrete-return lidar data. The...
A baker's dozen of new particle flows for nonlinear filters, Bayesian decisions and transport
NASA Astrophysics Data System (ADS)
Daum, Fred; Huang, Jim
2015-05-01
We describe a baker's dozen of new particle flows to compute Bayes' rule for nonlinear filters, Bayesian decisions and learning as well as transport. Several of these new flows were inspired by transport theory, but others were inspired by physics or statistics or Markov chain Monte Carlo methods.
Reduced-Order Kalman Filtering for Processing Relative Measurements
NASA Technical Reports Server (NTRS)
Bayard, David S.
2008-01-01
A study in Kalman-filter theory has led to a method of processing relative measurements to estimate the current state of a physical system, using less computation than has previously been thought necessary. As used here, relative measurements signify measurements that yield information on the relationship between a later and an earlier state of the system. An important example of relative measurements arises in computer vision: Information on relative motion is extracted by comparing images taken at two different times. Relative measurements do not directly fit into standard Kalman filter theory, in which measurements are restricted to those indicative of only the current state of the system. One approach heretofore followed in utilizing relative measurements in Kalman filtering, denoted state augmentation, involves augmenting the state of the system at the earlier of two time instants and then propagating the state to the later time instant. While state augmentation is conceptually simple, it can also be computationally prohibitive because it doubles the number of states in the Kalman filter. When processing a relative measurement, if one were to follow the state-augmentation approach as practiced heretofore, one would find it necessary to propagate the full augmented state Kalman filter from the earlier time to the later time and then select out the reduced-order components. The main result of the study reported here is proof of a property called reduced-order equivalence (ROE). The main consequence of ROE is that it is not necessary to augment with the full state, but, rather, only the portion of the state that is explicitly used in the partial relative measurement. In other words, it suffices to select the reduced-order components first and then propagate the partial augmented state Kalman filter from the earlier time to the later time; the amount of computation needed to do this can be substantially less than that needed for propagating the full augmented state Kalman filter.
High Order Filter Methods for the Non-ideal Compressible MHD Equations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern
2003-01-01
The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain non-ideal MHD test cases, divergence free preservation of the magnetic fields has been achieved.
Divergence Free High Order Filter Methods for the Compressible MHD Equations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern
2003-01-01
The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain MHD test cases, divergence free preservation of the magnetic fields has been achieved.
Recursive Algorithms for Real-Time Digital CR-RCn Pulse Shaping
NASA Astrophysics Data System (ADS)
Nakhostin, M.
2011-10-01
This paper reports on recursive algorithms for real-time implementation of CR-(RC)n filters in digital nuclear spectroscopy systems. The algorithms are derived by calculating the Z-transfer function of the filters for filter orders up to n = 4. The performance of the filters is compared with that of the conventional digital trapezoidal filter using a noise generator which separately generates pure series, 1/f and parallel noise. The results of our study enable one to select the optimum digital filter for different noise and rate conditions.
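A hedged sketch of one common recursive realization of CR-(RC)^n shaping, using backward-difference coefficients; these are not necessarily the exact Z-transfer coefficients derived in the paper.

```python
import numpy as np

def cr_rc_n(x, tau, dt, n=2):
    """Recursive CR-(RC)^n shaping of a sampled pulse train.
    CR stage (high-pass): y[k] = a * (y[k-1] + x[k] - x[k-1]),  a = tau/(tau+dt)
    RC stage (low-pass):  y[k] = y[k-1] + b * (x[k] - y[k-1]),  b = dt/(tau+dt)
    The n RC stages are applied in cascade after the CR stage."""
    x = np.asarray(x, dtype=float)
    a = tau / (tau + dt)
    b = dt / (tau + dt)

    # CR (differentiator) stage
    y = np.zeros_like(x)
    for k in range(1, x.size):
        y[k] = a * (y[k - 1] + x[k] - x[k - 1])

    # n cascaded RC (integrator) stages
    for _ in range(n):
        z = np.zeros_like(y)
        for k in range(1, y.size):
            z[k] = z[k - 1] + b * (y[k] - z[k - 1])
        y = z
    return y

# Example: shaping a step-like preamplifier pulse into a semi-Gaussian.
dt, tau = 10e-9, 1e-6                      # 10 ns sampling, 1 us shaping time
step = np.r_[np.zeros(100), np.ones(2000)]
shaped = cr_rc_n(step, tau, dt, n=4)
print(shaped.max())
```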
EVALUATION OF FACTORS IN THE ELUTION OF HYDROCORTISONE FROM PAPER CHROMATOGRAMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganis, F.M.; Hendrickson, M.W.; Giunta, P.D.
An assessment was made of a number of variable factors which affect the recovery of hydrocortisone from eluted filter paper chromatographic fractions. Factors tested included time of elution, sample concentration, rinsing of eluting fractions and pre-washing of the filter paper. It was noted that a 50 mu g sample could be quantitatively recovered after a 15-minute elution time from a pre-washed filter paper fraction. The results were subjected to a statistical analysis and were found to be highly significant. (auth)
ACS/WFC Sky Flats from Frontier Fields Imaging
NASA Astrophysics Data System (ADS)
Mack, J.; Lucas, R. A.; Grogin, N. A.; Bohlin, R. C.; Koekemoer, A. M.
2018-04-01
Parallel imaging data from the HST Frontier Fields campaign (Lotz et al. 2017) have been used to compute sky flats for the ACS/WFC detector in order to verify the accuracy of the current set of flat field reference files. By masking sources and then co-adding many deep frames, the F606W and F814W filters accumulate enough combined background signal that the Poisson uncertainties are <1% per pixel. In these two filters, the sky flats show spatial residuals of 1% or less. These residuals are similar in shape to the WFC flat field 'donut' pattern, in which the detector quantum efficiency tracks the thickness of the two WFC chips. Observations of blue and red calibration standards measured at various positions on the detector (Bohlin et al. 2017) confirm the fidelity of the F814W flat, with aperture photometry consistent to 1% across the FOV, regardless of spectral type. At bluer wavelengths, the total sky background is substantially lower, and the F435W sky flat shows a combination of both flat errors and detector artifacts. Aperture photometry of the red standard star shows a maximum deviation of 1.4% across the array in this filter. Larger residuals of up to 2.5% are found for the blue standard, suggesting that the spatial sensitivity in F435W depends on spectral type.
Stochastic Short-term High-resolution Prediction of Solar Irradiance and Photovoltaic Power Output
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melin, Alexander M.; Olama, Mohammed M.; Dong, Jin
The increased penetration of solar photovoltaic (PV) energy sources into electric grids has increased the need for accurate modeling and prediction of solar irradiance and power production. Existing modeling and prediction techniques focus on long-term low-resolution prediction over minutes to years. This paper examines the stochastic modeling and short-term high-resolution prediction of solar irradiance and PV power output. We propose a stochastic state-space model to characterize the behaviors of solar irradiance and PV power output. This prediction model is suitable for the development of optimal power controllers for PV sources. A filter-based expectation-maximization and Kalman filtering mechanism is employed to estimate the parameters and states in the state-space model. The mechanism results in a finite-dimensional filter which uses only the first- and second-order statistics. The structure of the scheme contributes to a direct prediction of the solar irradiance and PV power output without any linearization process or simplifying assumptions of the signal's model. This enables the system to accurately predict small as well as large fluctuations of the solar signals. The mechanism is recursive, allowing the solar irradiance and PV power to be predicted online from measurements. The mechanism is tested using solar irradiance and PV power measurement data collected locally in our lab.
NASA Technical Reports Server (NTRS)
Ledsham, W. H.; Staelin, D. H.
1978-01-01
An extended Kalman-Bucy filter has been implemented for atmospheric temperature profile retrievals from observations made using the Scanned Microwave Spectrometer (SCAMS) instrument carried on the Nimbus 6 satellite. This filter has the advantage that it requires neither stationary statistics in the underlying processes nor linear production of the observed variables from the variables to be estimated. This extended Kalman-Bucy filter has yielded significant performance improvement relative to multiple regression retrieval methods. A multi-spot extended Kalman-Bucy filter has also been developed in which the temperature profiles at a number of scan angles in a scanning instrument are retrieved simultaneously. These multi-spot retrievals are shown to outperform the single-spot Kalman retrievals.
Modern Empirical Statistical Spectral Analysis.
1980-05-01
The Need for Anticoagulation Following Inferior Vena Cava Filter Placement: Systematic Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Charles E.; Prochazka, Allan
Purpose. To perform a systematic review to determine the effect of anticoagulation on the rates of venous thromboembolism (pulmonary embolus, deep venous thrombosis, inferior vena cava (IVC) filter thrombosis) following placement of an IVC filter. Methods. A comprehensive computerized literature search was performed to identify relevant articles. Data were abstracted by two reviewers. Studies were included if it could be determined whether or not subjects received anticoagulation following filter placement, and if follow-up data were presented. A meta-analysis of patients from all included studies was performed. A total of 14 articles were included in the final analysis, but the data from only nine articles could be used in the meta-analysis; five studies were excluded because they did not present raw data which could be analyzed in the meta-analysis. A total of 1,369 subjects were included in the final meta-analysis. Results. The summary odds ratio for the effect of anticoagulation on venous thromboembolism rates following filter deployment was 0.639 (95% CI 0.351 to 1.159, p = 0.141). There was significant heterogeneity in the results from different studies [Q statistic of 15.95 (p = 0.043)]. Following the meta-analysis, there was a trend toward decreased venous thromboembolism rates in patients with post-filter anticoagulation (12.3% vs. 15.8%), but the result failed to reach statistical significance. Conclusion. Inferior vena cava filters can be placed in patients who cannot receive concomitant anticoagulation without placing them at significantly higher risk of development of venous thromboembolism.
Rigatos, Gerasimos G; Rigatou, Efthymia G; Djida, Jean Daniel
2015-10-01
A method for early diagnosis of parametric changes in intracellular protein synthesis models (e.g. the p53 protein - mdm2 inhibitor model) is developed with the use of a nonlinear Kalman Filtering approach (Derivative-free nonlinear Kalman Filter) and of statistical change detection methods. The intracellular protein synthesis dynamic model is described by a set of coupled nonlinear differential equations. It is shown that such a dynamical system satisfies differential flatness properties, and this allows it to be transformed, through a change of variables (diffeomorphism), to the so-called linear canonical form. For the linearized equivalent of the dynamical system, state estimation can be performed using the Kalman Filter recursion. Moreover, by applying an inverse transformation based on the previous diffeomorphism, it also becomes possible to obtain estimates of the state variables of the initial nonlinear model. By comparing the output of the Kalman Filter (which is assumed to correspond to the undistorted dynamical model) with measurements obtained from the monitored protein synthesis system, a sequence of differences (residuals) is obtained. The statistical processing of the residuals with the use of χ2 change detection tests can provide indications, within specific confidence intervals, about parametric changes in the considered biological system and consequently about the appearance of specific diseases (e.g. malignancies).
Principal Component Analysis in the Spectral Analysis of the Dynamic Laser Speckle Patterns
NASA Astrophysics Data System (ADS)
Ribeiro, K. M.; Braga, R. A., Jr.; Horgan, G. W.; Ferreira, D. D.; Safadi, T.
2014-02-01
Dynamic laser speckle is a phenomenon in which the optical patterns formed by illuminating a changing surface with coherent light evolve in time. The dynamic change of speckle patterns caused by biological material is known as biospeckle. Usually, these evolving patterns of optical interference are analyzed by graphical or numerical methods; analysis in the frequency domain has also been an option, but it involves large computational requirements, which demands new approaches to filter the images in time. Principal component analysis (PCA) works with the statistical decorrelation of data and can be used as a data filter. In this context, the present work evaluated the PCA technique for filtering biospeckle image data in time, aiming at reduced computation time and improved robustness of the filtering. Sixty-four biospeckle images observed in time on a maize seed were used. The images were arranged in a data matrix and statistically decorrelated by the PCA technique, and the reconstructed signals were analyzed using the routine graphical and numerical methods for biospeckle analysis. Results showed the potential of the PCA tool for filtering dynamic laser speckle data, with the definition of markers of principal components related to the biological phenomena, and with the advantage of fast computational processing.
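A minimal sketch of PCA-based temporal filtering of an image stack using an SVD, with the number of retained components chosen arbitrarily; the principal-component markers and the graphical/numerical biospeckle analyses mentioned above are not reproduced here.

```python
import numpy as np

def pca_filter_stack(stack, n_components=3):
    """Temporally filter a biospeckle image stack (T, H, W) by projecting the
    T time samples of every pixel onto the leading principal components and
    reconstructing, which suppresses temporally decorrelated noise."""
    t, h, w = stack.shape
    data = stack.reshape(t, h * w).astype(float)
    mean = data.mean(axis=0, keepdims=True)
    centered = data - mean
    # SVD of the (time x pixels) matrix; rows of vt are principal components
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    s_trunc = np.zeros_like(s)
    s_trunc[:n_components] = s[:n_components]
    filtered = (u * s_trunc) @ vt + mean
    return filtered.reshape(t, h, w)

# Example: a 64-frame stack of 128x128 speckle images (random data here).
stack = np.random.rand(64, 128, 128)
filtered = pca_filter_stack(stack, n_components=5)
print(filtered.shape)   # (64, 128, 128)
```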
NASA Technical Reports Server (NTRS)
Abbey, Craig K.; Eckstein, Miguel P.
2002-01-01
We consider estimation and statistical hypothesis testing on classification images obtained from the two-alternative forced-choice experimental paradigm. We begin with a probabilistic model of task performance for simple forced-choice detection and discrimination tasks. Particular attention is paid to general linear filter models because these models lead to a direct interpretation of the classification image as an estimate of the filter weights. We then describe an estimation procedure for obtaining classification images from observer data. A number of statistical tests are presented for testing various hypotheses from classification images based on some more compact set of features derived from them. As an example of how the methods we describe can be used, we present a case study investigating detection of a Gaussian bump profile.
Optimal filtering and Bayesian detection for friction-based diagnostics in machines.
Ray, L R; Townsend, J R; Ramasubramanian, A
2001-01-01
Non-model-based diagnostic methods typically rely on measured signals that must be empirically related to process behavior or incipient faults. The difficulty in interpreting a signal that is indirectly related to the fundamental process behavior is significant. This paper presents an integrated non-model and model-based approach to detecting when process behavior varies from a proposed model. The method, which is based on nonlinear filtering combined with maximum likelihood hypothesis testing, is applicable to dynamic systems whose constitutive model is well known, and whose process inputs are poorly known. Here, the method is applied to friction estimation and diagnosis during motion control in a rotating machine. A nonlinear observer estimates friction torque in a machine from shaft angular position measurements and the known input voltage to the motor. The resulting friction torque estimate can be analyzed directly for statistical abnormalities, or it can be directly compared to friction torque outputs of an applicable friction process model in order to diagnose faults or model variations. Nonlinear estimation of friction torque provides a variable on which to apply diagnostic methods that is directly related to model variations or faults. The method is evaluated experimentally by its ability to detect normal load variations in a closed-loop controlled motor driven inertia with bearing friction and an artificially-induced external line contact. Results show an ability to detect statistically significant changes in friction characteristics induced by normal load variations over a wide range of underlying friction behaviors.
Engineering Filters for Reducing Spontaneous Emission in cQED
NASA Astrophysics Data System (ADS)
Bronn, Nicholas; Masluk, Nicholas; Srinivasan, Srikanth; Chow, Jerry; Abraham, David; Rothwell, Mary; Keefe, George; Gambetta, Jay; Steffen, Matthias; Lirakis, Chris
2014-03-01
Inserting a notch filter between a qubit and the external environment at the qubit frequency can significantly suppress spontaneous emission mediated by the cavity (the "Purcell effect"). In order to realize this filtering in multi-qubit architectures, where space comes at a premium, we will present a filter with minimal space requirements. We acknowledge support from IARPA under contract W911NF-10-1-0324.
Bounding filter - A simple solution to lack of exact a priori statistics.
NASA Technical Reports Server (NTRS)
Nahi, N. E.; Weiss, I. M.
1972-01-01
Wiener and Kalman-Bucy estimation problems assume that models describing the signal and noise stochastic processes are exactly known. When this modeling information, i.e., the signal and noise spectral densities for Wiener filter and the signal and noise dynamic system and disturbing noise representations for Kalman-Bucy filtering, is inexactly known, then the filter's performance is suboptimal and may even exhibit apparent divergence. In this paper a system is designed whereby the actual estimation error covariance is bounded by the covariance calculated by the estimator. Therefore, the estimator obtains a bound on the actual error covariance which is not available, and also prevents its apparent divergence.
Filter and Grid Resolution in DG-LES
NASA Astrophysics Data System (ADS)
Miao, Ling; Sammak, Shervin; Madnia, Cyrus K.; Givi, Peyman
2017-11-01
The discontinuous Galerkin (DG) methodology has proven very effective for large eddy simulation (LES) of turbulent flows. Two important parameters in DG-LES are the grid resolution (h) and the filter size (Δ). In most previous work, the filter size is usually set to be proportional to the grid spacing. In this work, the DG method is combined with a subgrid scale (SGS) closure which is equivalent to that of the filtered density function (FDF). The resulting hybrid scheme is particularly attractive because a larger portion of the resolved energy is captured as the order of spectral approximation increases. Different cases for LES of a three-dimensional temporally developing mixing layer are appraised and a systematic parametric study is conducted to investigate the effects of grid resolution, the filter width size, and the order of spectral discretization. Comparative assessments are also made via the use of high resolution direct numerical simulation (DNS) data.
NASA Astrophysics Data System (ADS)
Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar
2011-12-01
This paper extends the recently introduced variable step-size (VSS) approach to a family of adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics. Accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms, the filter coefficients are partially updated, which reduces the computational complexity. In the VSS-SR-APA, an optimal selection of input regressors is performed during the adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
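For orientation, a baseline NLMS system-identification loop is sketched below; in the VSS variants described above, the fixed scalar step size mu would be replaced by a time-varying (vector) step size derived by minimizing the MSD, a rule not reproduced here.

```python
import numpy as np

def nlms_identify(x, d, filter_len=16, mu=0.5, eps=1e-6):
    """Baseline NLMS system identification: the weight vector w is adapted
    so that w^T x_n tracks the desired signal d."""
    w = np.zeros(filter_len)
    x_buf = np.zeros(filter_len)
    errors = np.empty(len(d))
    for n in range(len(d)):
        x_buf = np.r_[x[n], x_buf[:-1]]          # most recent sample first
        y = w @ x_buf
        e = d[n] - y
        w += mu * e * x_buf / (x_buf @ x_buf + eps)
        errors[n] = e
    return w, errors

# Example: identify a random FIR channel from noisy observations.
rng = np.random.default_rng(1)
h_true = rng.standard_normal(16)
x = rng.standard_normal(5000)
d = np.convolve(x, h_true)[:5000] + 0.01 * rng.standard_normal(5000)
w, errors = nlms_identify(x, d)
print(np.linalg.norm(w - h_true))   # small misalignment after convergence
```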
NASA Astrophysics Data System (ADS)
Bianchi, Filippo; Thielmann, Marcel; de Arcangelis, Lucilla; Herrmann, Hans Jürgen
2018-01-01
Particle detachment bursts during the flow of suspensions through porous media are a phenomenon that can severely affect the efficiency of deep bed filters. Despite the relevance in several industrial fields, little is known about the statistical properties and the temporal organization of these events. We present experiments of suspensions of deionized water carrying quartz particles pushed with a peristaltic pump through a filter of glass beads measuring simultaneously the pressure drop, flux, and suspension solid fraction. We find that the burst size distribution scales consistently with a power law, suggesting that we are in the presence of a novel experimental realization of a self-organized critical system. Temporal correlations are present in the time series, like in other phenomena such as earthquakes or neuronal activity bursts, and also an analog to Omori's law can be shown. The understanding of burst statistics could provide novel insights in different fields, e.g., in the filter and petroleum industries.
Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude
NASA Technical Reports Server (NTRS)
Sedlak, J.
1994-01-01
Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.
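The core combination step of such a smoother can be sketched as an inverse-covariance-weighted average of independent forward and backward estimates; the minimal example below assumes Gaussian statistics and is not the specific smoother implementation used in the study.

```python
import numpy as np

def combine_forward_backward(x_f, P_f, x_b, P_b):
    """Optimally weighted average of a forward-filter estimate (x_f, P_f)
    and an independent backward-filter estimate (x_b, P_b).  With
    comparable covariances, the smoothed covariance is roughly half of
    either individual covariance."""
    P_f, P_b = np.atleast_2d(P_f), np.atleast_2d(P_b)
    info = np.linalg.inv(P_f) + np.linalg.inv(P_b)     # information matrices add
    P_s = np.linalg.inv(info)
    x_s = P_s @ (np.linalg.inv(P_f) @ np.atleast_1d(x_f)
                 + np.linalg.inv(P_b) @ np.atleast_1d(x_b))
    return x_s, P_s

# Example: two attitude-angle estimates with equal variances of 1 deg^2.
x_s, P_s = combine_forward_backward([10.2], [[1.0]], [9.8], [[1.0]])
print(x_s, P_s)   # [10.] and [[0.5]]
```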
NASA Technical Reports Server (NTRS)
Yeh, H.-G.; Nguyen, T. M.
1994-01-01
Design, modeling, analysis, and simulation of a phase-locked loop (PLL) with a digital loop filter are presented in this article. A TMS320C25 digital signal processor (DSP) is used to implement this digital loop filter. To maintain compatibility, the main design goal was to replace the analog loop filter of the Deep-Space Transponder (DST) receiver breadboard's analog PLL (APLL) with a digital loop filter without changing anything else. This replacement results in a hybrid digital PLL (HDPLL). Both the original APLL and the designed HDPLL are Type I second-order systems. The real-time performance of the HDPLL and the receiver is provided and evaluated.
Langford, Katherine H; Reid, Malcolm J; Fjeld, Eirik; Øxnevad, Sigurd; Thomas, Kevin V
2015-07-01
Eight organic UV filters and stabilizers were quantitatively determined in wastewater sludge and effluent, landfill leachate, sediments, and marine and freshwater biota. Crab, prawn and cod from Oslofjord, and perch, whitefish and burbot from Lake Mjøsa were selected in order to evaluate the potential for trophic accumulation. All of the cod livers analysed were contaminated with at least one UV filter, and a maximum concentration of almost 12 μg/g wet weight for octocrylene (OC) was measured in one individual. Eighty percent of the cod livers contained OC, and approximately 50% of cod liver and prawn samples contained benzophenone (BP3). Lower concentrations and detection frequencies were observed in freshwater species; the data of most interest are the four individual whitefish that contained both BP3 and ethylhexylmethoxycinnamate (EHMC), with maximum concentrations of almost 200 ng/g wet weight. The data show a difference in the loads of UV filters entering receiving water depending on the extent of wastewater treatment. Primary screening alone is insufficient for the removal of selected UV filters (BP3, Padimate, EHMC, OC, UV-234, UV-327, UV-328, UV-329). Likely due in part to the hydrophobic nature of the majority of the UV filters studied, particulate loading and organic carbon content appear to be related to concentrations of UV filters in landfill leachate, and an order of magnitude difference in these parameters correlates with an order of magnitude difference in the effluent concentrations of selected UV filters (Fig. 2). From the data, it is possible that under certain low-flow conditions selected organic UV filters may pose a risk to surface waters; under the present conditions the risk is low, although some UV filters will potentially accumulate through the trophic food chain. Copyright © 2015. Published by Elsevier Ltd.
Method for reducing pressure drop through filters, and filter exhibiting reduced pressure drop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sappok, Alexander; Wong, Victor
Methods for generating and applying coatings to filters with porous material in order to reduce large pressure drop increases as material accumulates in a filter, as well as the filter exhibiting reduced and/or more uniform pressure drop. The filter can be a diesel particulate trap for removing particulate matter such as soot from the exhaust of a diesel engine. Porous material such as ash is loaded on the surface of the substrate or filter walls, such as by coating, depositing, distributing or layering the porous material along the channel walls of the filter in an amount effective for minimizing or preventing depth filtration during use of the filter. Efficient filtration at acceptable flow rates is achieved.
Removing tidal-period variations from time-series data using low-pass digital filters
Walters, Roy A.; Heston, Cynthia
1982-01-01
Several low-pass digital filters are examined for their ability to remove tidal-period variations from a time series of water surface elevation for San Francisco Bay. The most efficient filter is the one which is applied to the Fourier coefficients of the transformed data, with the filtered data recovered through an inverse transform. The ability of the filters to remove the tidal components increases in the following order: 1) cosine-Lanczos filter; 2) cosine-Lanczos squared filter; 3) Godin filter; and 4) transform filter. The Godin filter is not sufficiently sharp to prevent severe attenuation of 2–3 day variations in surface elevation resulting from weather events.
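A minimal sketch of the "transform filter" idea, assuming an hourly record and an illustrative 30-hour cutoff: transform the record, zero the tidal-band Fourier coefficients, and invert.

```python
import numpy as np

def transform_lowpass(eta, dt_hours, cutoff_hours=30.0):
    """Remove tidal-period variability by zeroing Fourier coefficients at
    periods shorter than cutoff_hours and inverse transforming, i.e. the
    'transform filter' approach applied to a water-level record."""
    eta = np.asarray(eta, dtype=float)
    coeffs = np.fft.rfft(eta)
    freqs = np.fft.rfftfreq(eta.size, d=dt_hours)      # cycles per hour
    coeffs[freqs > 1.0 / cutoff_hours] = 0.0
    return np.fft.irfft(coeffs, n=eta.size)

# Example: hourly record = slow weather-band signal + M2 tide (12.42 h period).
t = np.arange(0, 30 * 24, 1.0)                          # 30 days, hourly samples
weather = 0.3 * np.sin(2 * np.pi * t / (4 * 24))        # 4-day oscillation
tide = 1.0 * np.sin(2 * np.pi * t / 12.42)
filtered = transform_lowpass(weather + tide, dt_hours=1.0)
print(np.abs(filtered - weather).max())                 # tidal signal strongly attenuated
```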
Deconvolution of time series in the laboratory
NASA Astrophysics Data System (ADS)
John, Thomas; Pietschmann, Dirk; Becker, Volker; Wagner, Christian
2016-10-01
In this study, we present two practical applications of the deconvolution of time series in Fourier space. First, we reconstruct a filtered input signal of sound cards that has been heavily distorted by a built-in high-pass filter using a software approach. Using deconvolution, we can partially bypass the filter and extend the dynamic frequency range by two orders of magnitude. Second, we construct required input signals for a mechanical shaker in order to obtain arbitrary acceleration waveforms, referred to as feedforward control. For both situations, experimental and theoretical approaches are discussed to determine the system-dependent frequency response. Moreover, for the shaker, we propose a simple feedback loop as an extension to the feedforward control in order to handle nonlinearities of the system.
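A hedged sketch of the Fourier-space deconvolution step, with a synthetic response function and a small regularization term standing in for the experimentally determined frequency response; this is not the authors' code.

```python
import numpy as np

def deconvolve_fourier(measured, response, eps=1e-6):
    """Recover the input signal from a measured output and the system's
    impulse response by division in Fourier space.  The small eps keeps the
    division stable where the frequency response is close to zero; for
    noisy data a larger value would be used."""
    n = len(measured)
    M = np.fft.rfft(measured, n)
    H = np.fft.rfft(response, n)
    X = M * np.conj(H) / (np.abs(H) ** 2 + eps)    # regularized inverse
    return np.fft.irfft(X, n)

# Example: undo a crude first-order high-pass response applied to a pulse.
n = 2048
t = np.arange(n)
x_true = np.exp(-0.5 * ((t - 500) / 40.0) ** 2)     # Gaussian input pulse
h = np.zeros(n); h[0] = 1.0; h[1] = -0.95           # high-pass-like kernel
y = np.convolve(x_true, h)[:n]                      # distorted measurement
x_rec = deconvolve_fourier(y, h)
print(np.max(np.abs(x_rec - x_true)))               # small reconstruction error
```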
Characterisation of optical filters for broadband UVA radiometer
NASA Astrophysics Data System (ADS)
Alves, Luciana C.; Coelho, Carla T.; Corrêa, Jaqueline S. P. M.; Menegotto, Thiago; Ferreira da Silva, Thiago; Aparecida de Souza, Muriel; Melo da Silva, Elisama; Simões de Lima, Maurício; Dornelles de Alvarenga, Ana Paula
2016-07-01
Optical filters were characterized in order to assess their suitability for use in a broadband UVA radiometer head for spectral irradiance measurements. The spectral transmittance, the angular dependence and the spatial uniformity of the spectral transmittance of the UVA optical filters were investigated. The temperature dependence of the transmittance was also studied.
Harmonic regression based multi-temporal cloud filtering algorithm for Landsat 8
NASA Astrophysics Data System (ADS)
Joshi, P.
2015-12-01
The Landsat data archive, though rich, has missing dates and periods owing to weather irregularities and inconsistent coverage. The satellite images are further subject to cloud cover, resulting in erroneous analysis and observation of ground features. In earlier studies, a change detection algorithm using statistical control charts on harmonic residuals of multi-temporal Landsat 5 data was shown to detect a few prominent remnant clouds [Brooks, Evan B., et al., 2014]. In this work we build on this harmonic regression approach to detect and filter clouds using a multi-temporal series of Landsat 8 images. First, we compute the harmonic coefficients by fitting the models to annual training data; the residuals from the fitted curve form a time series that is then subjected to Shewhart X-bar control charts, which signal deviations of cloudy points from the fitted multi-temporal Fourier curve. For a process with standard deviation σ, we found second- and third-order harmonic regression with an X-bar chart control limit Lσ in the range 0.5σ < Lσ < σ to be most efficient in detecting clouds. By implementing second-order harmonic regression with successive X-bar chart control limits of L and 0.5L on the NDVI, NDSI and haze-optimized transformation (HOT), and utilizing the seasonal physical properties of these parameters, we have designed a novel multi-temporal algorithm for filtering clouds from Landsat 8 images. The method is applied to Virginia and Alabama in Landsat 8 UTM zones 17 and 16, respectively. Our algorithm efficiently filters all types of cloud cover with an overall accuracy greater than 90%. As a result of the multi-temporal operation and the ability to recreate the multi-temporal database of images using only the coefficients of the Fourier regression, our algorithm is highly storage- and time-efficient. The results show good potential for this multi-temporal approach to cloud detection as a timely and targeted solution for the Landsat 8 research community, catering to the need for innovative processing solutions in the early stage of the satellite's mission.
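A simplified sketch of the fit-and-flag step for a single pixel's NDVI series, with an illustrative control-limit factor; the paper's combined use of NDVI, NDSI and HOT with successive control limits is not reproduced here.

```python
import numpy as np

def harmonic_design(doy, order=2, period=365.25):
    """Design matrix for a harmonic regression of the given order:
    an intercept plus sine/cosine pairs of the annual cycle and its harmonics."""
    cols = [np.ones_like(doy, dtype=float)]
    for k in range(1, order + 1):
        cols.append(np.sin(2 * np.pi * k * doy / period))
        cols.append(np.cos(2 * np.pi * k * doy / period))
    return np.column_stack(cols)

def flag_cloudy(doy, ndvi, order=2, limit_factor=0.5):
    """Fit the harmonic model by least squares and flag observations whose
    residual falls below -limit_factor * sigma (clouds typically depress NDVI)."""
    X = harmonic_design(np.asarray(doy, float), order)
    ndvi = np.asarray(ndvi, float)
    beta, *_ = np.linalg.lstsq(X, ndvi, rcond=None)
    residuals = ndvi - X @ beta
    sigma = residuals.std(ddof=X.shape[1])
    return residuals < -limit_factor * sigma

# Example: a smooth seasonal NDVI curve with two cloud-depressed samples.
doy = np.arange(1, 366, 16)
ndvi = 0.5 + 0.3 * np.sin(2 * np.pi * doy / 365.25)
ndvi[[5, 14]] -= 0.4
print(np.where(flag_cloudy(doy, ndvi))[0])   # should include indices 5 and 14
```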
Maximum a posteriori resampling of noisy, spatially correlated data
NASA Astrophysics Data System (ADS)
Goff, John A.; Jenkins, Chris; Calder, Brian
2006-08-01
In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application. We present here an alternative to filtering: a newly developed method for correcting noise in data by finding the "best" value given available information. The motivating rationale is that data points that are close to each other in space cannot differ by "too much," where "too much" is governed by the field covariance. Data with large uncertainties will frequently violate this condition and therefore ought to be corrected, or "resampled." Our solution for resampling is determined by the maximum of the a posteriori density function defined by the intersection of (1) the data error probability density function (pdf) and (2) the conditional pdf, determined by the geostatistical kriging algorithm applied to proximal data values. A maximum a posteriori solution can be computed sequentially going through all the data, but the solution depends on the order in which the data are examined. We approximate the global a posteriori solution by randomizing this order and taking the average. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum a posteriori resampling algorithm. The method is also applied to three marine geology/geophysics data examples, demonstrating the viability of the method for diverse applications: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) side-scan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckle noise. Compared to filtering, maximum a posteriori resampling provides an objective and optimal method for reducing noise, and better preservation of the statistical properties of the sampled field. The primary disadvantage is that maximum a posteriori resampling is a computationally expensive procedure.
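In the Gaussian special case, the combination of the data-error pdf and the kriging conditional pdf has a closed-form maximum: an inverse-variance-weighted average. The sketch below (with a hypothetical function name) illustrates that single-datum step only, not the sequential, order-randomized procedure described above.

```python
def map_resample(d, var_d, mu_k, var_k):
    """MAP 'resampled' value for a single noisy datum d with error variance
    var_d, given the kriging-predicted conditional mean mu_k and kriging
    variance var_k at the same location.  In the Gaussian special case the
    posterior maximum is the inverse-variance-weighted average."""
    w = var_k / (var_k + var_d)
    value = w * d + (1.0 - w) * mu_k
    variance = var_d * var_k / (var_d + var_k)
    return value, variance

# A precise datum (small var_d) is barely moved; an uncertain one is pulled
# strongly toward the value predicted from its neighbours.
print(map_resample(10.0, 0.1, 12.0, 1.0))   # stays near 10
print(map_resample(10.0, 5.0, 12.0, 1.0))   # moves toward 12
```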
Filtering Non-Linear Transfer Functions on Surfaces.
Heitz, Eric; Nowrouzezahrai, Derek; Poulin, Pierre; Neyret, Fabrice
2014-07-01
Applying non-linear transfer functions and look-up tables to procedural functions (such as noise), surface attributes, or even surface geometry are common strategies used to enhance visual detail. Their simplicity and ability to mimic a wide range of realistic appearances have led to their adoption in many rendering problems. As with any textured or geometric detail, proper filtering is needed to reduce aliasing when viewed across a range of distances, but accurate and efficient transfer function filtering remains an open problem for several reasons: transfer functions are complex and non-linear, especially when mapped through procedural noise and/or geometry-dependent functions, and the effects of perspective and masking further complicate the filtering over a pixel's footprint. We accurately solve this problem by computing and sampling from specialized filtering distributions on the fly, yielding very fast performance. We investigate the case where the transfer function to filter is a color map applied to (macroscale) surface textures (like noise), as well as color maps applied according to (microscale) geometric details. We introduce a novel representation of a (potentially modulated) color map's distribution over pixel footprints using Gaussian statistics and, in the more complex case of high-resolution color mapped microsurface details, our filtering is view- and light-dependent, and capable of correctly handling masking and occlusion effects. Our approach can be generalized to filter other physical-based rendering quantities. We propose an application to shading with irradiance environment maps over large terrains. Our framework is also compatible with the case of transfer functions used to warp surface geometry, as long as the transformations can be represented with Gaussian statistics, leading to proper view- and light-dependent filtering results. Our results match ground truth and our solution is well suited to real-time applications, requires only a few lines of shader code (provided in supplemental material, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TVCG.2013.102), is high performance, and has a negligible memory footprint.
Kara-Junior, Newton; Espindola, Rodrigo F; Gomes, Beatriz A F; Ventura, Bruna; Smadja, David; Santhiago, Marcony R
2011-12-01
To evaluate the possible side effects and potential protection 5 years after implantation of an intraocular lens (IOL) with a blue-light filter (yellow tinted). Ophthalmology Department, University of São Paulo, São Paulo, Brazil. Prospective randomized clinical study. Patients with bilateral visually significant cataract randomly received an ultraviolet (UV) and blue light-filtering IOL (Acrysof Natural SN60AT) in 1 eye and an acrylic UV light-filtering only IOL (Acrysof SA60AT) in the fellow eye. The primary outcome measures were contrast sensitivity, color vision, and macular findings 5 years after surgery. The study enrolled 60 eyes of 30 patients. There were no significant clinical or optical coherence tomography findings in terms of age-related macular degeneration in any eye. There were no statistically significant differences in central macular thickness between the 2 IOL groups (P=.712). There were also no significant between-group differences under photopic or scotopic conditions at any spatial frequency studied. No statistically significant differences in the color discrimination test were found between the 2 IOL groups (P=.674). After 5 years, there were no significant differences in color perception, scotopic contrast sensitivity, or photopic contrast sensitivity between the blue light-filtering (yellow-tinted) IOL and the IOL with a UV-light filter only (untinted). The potential advantage of the tinted IOL in providing protection to macular cells remains unclear. Copyright © 2011 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Maximising information recovery from rank-order codes
NASA Astrophysics Data System (ADS)
Sen, B.; Furber, S.
2007-04-01
The central nervous system encodes information in sequences of asynchronously generated voltage spikes, but the precise details of this encoding are not well understood. Thorpe proposed rank-order codes as an explanation of the observed speed of information processing in the human visual system. The work described in this paper is inspired by the performance of SpikeNET, a biologically inspired neural architecture using rank-order codes for information processing, and is based on the retinal model developed by VanRullen and Thorpe. This model mimics retinal information processing by passing an input image through a bank of Difference of Gaussian (DoG) filters and then encoding the resulting coefficients in rank-order. To test the effectiveness of this encoding in capturing the information content of an image, the rank-order representation is decoded to reconstruct an image that can be compared with the original. The reconstruction uses a look-up table to infer the filter coefficients from their rank in the encoded image. Since the DoG filters are approximately orthogonal functions, they are treated as their own inverses in the reconstruction process. We obtained a quantitative measure of the perceptually important information retained in the reconstructed image relative to the original using a slightly modified version of an objective metric proposed by Petrovic. It is observed that around 75% of the perceptually important information is retained in the reconstruction. In the present work we reconstruct the input using a pseudo-inverse of the DoG filter-bank with the aim of improving the reconstruction and thereby extracting more information from the rank-order encoded stimulus. We observe that there is an increase of 10 - 15% in the information retrieved from a reconstructed stimulus as a result of inverting the filter-bank.
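A minimal Python sketch of the encode/decode idea, assuming scipy is available; the DoG scales, the sign handling, and the look-up table built from the same image are illustrative simplifications of the VanRullen-Thorpe retinal model rather than the authors' implementation (in practice the look-up table is averaged over training images, and the paper's improvement comes from a pseudo-inverse of the filter bank):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bank(image, sigmas=(1, 2, 4, 8)):
    """Difference-of-Gaussian filter bank approximating retinal ganglion responses."""
    return np.stack([gaussian_filter(image, s) - gaussian_filter(image, 1.6 * s)
                     for s in sigmas])

def rank_order_encode(coeffs):
    """Keep only the rank of each coefficient (largest magnitude fires first)."""
    flat = np.abs(coeffs).ravel()
    order = np.argsort(-flat)          # firing order
    ranks = np.empty_like(order)
    ranks[order] = np.arange(order.size)
    return ranks.reshape(coeffs.shape), np.sign(coeffs)

def rank_order_decode(ranks, signs, lookup):
    """Reconstruct coefficient magnitudes from a rank -> magnitude look-up table."""
    return signs * lookup[ranks]

img = np.random.rand(32, 32)
coeffs = dog_bank(img)
ranks, signs = rank_order_encode(coeffs)
# Illustrative look-up table: magnitudes observed at each rank for this image
lookup = np.sort(np.abs(coeffs).ravel())[::-1]
approx = rank_order_decode(ranks, signs, lookup)
```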
A fractional-order accumulative regularization filter for force reconstruction
NASA Astrophysics Data System (ADS)
Wensong, Jiang; Zhongyu, Wang; Jing, Lv
2018-02-01
The ill-posed inverse problem of force reconstruction arises from the influence of noise on the measured responses and results in an inaccurate or non-unique solution. To overcome this ill-posedness, in this paper the transfer function of the reconstruction model is redefined by a Fractional-order Accumulative Regularization Filter (FARF). First, the measured responses with noise are refined by a fractional-order accumulation filter based on a dynamic data refresh strategy. Second, a transfer function, generated from the filtering results of the measured responses, is manipulated by an iterative Tikhonov regularization with a series of iterative Landweber filter factors. Third, the regularization parameter is optimized by Generalized Cross-Validation (GCV) to mitigate the ill-posedness of the force reconstruction model. A Dynamic Force Measurement System (DFMS) for force reconstruction is designed to illustrate the application advantages of the suggested FARF method. The experimental results show that the FARF method with r = 0.1 and α = 20, which has a PRE of 0.36% and an RE of 2.45%, is superior to other cases of the FARF method and to the traditional regularization methods for dynamic force reconstruction.
Neuro-inspired smart image sensor: analog Hmax implementation
NASA Astrophysics Data System (ADS)
Paindavoine, Michel; Dubois, Jérôme; Musa, Purnawarman
2015-03-01
The neuro-inspired vision approach, based on models from biology, reduces computational complexity. One of these models, the Hmax model, shows that the recognition of an object in the visual cortex mobilizes the V1, V2 and V4 areas. From the computational point of view, V1 corresponds to the area of directional filters (for example Sobel filters, Gabor filters or wavelet filters). This information is then processed in area V2 in order to obtain local maxima. This new information is then sent to an artificial neural network. This neural processing module corresponds to area V4 of the visual cortex and is intended to categorize objects present in the scene. In order to realize autonomous vision systems (consuming a few milliwatts) that embed such processing, we studied and realized, in 0.35 μm CMOS technology, prototypes of two image sensors that perform the V1 and V2 processing of the Hmax model.
NASA Astrophysics Data System (ADS)
He, Haizhen; Luo, Rongming; Hu, Zhenhua; Wen, Lei
2017-07-01
A current-mode field programmable analog array (FPAA) is presented in this paper. The proposed FPAA consists of 9 configurable analog blocks (CABs) which are based on current differencing transconductance amplifiers (CDTA) and trans-impedance amplifiers (TIA). The proposed CABs interconnect through global lines. These global lines contain bridge switches, which are used to reduce the parasitic capacitance effectively. High-order current-mode low-pass and band-pass filters with transmission zeros, based on the simulation of general passive RLC ladder prototypes, are proposed and mapped onto the FPAA structure in order to demonstrate the versatility of the FPAA. These filters exhibit good bandwidth performance; the filter cutoff frequency can be tuned from 1.2 MHz to 40 MHz. The proposed FPAA is simulated in a standard Chartered 0.18 μm CMOS process with a +/-1.2 V power supply to confirm the presented theory, and the results are in good agreement with the theoretical analysis.
Kalman filter for statistical monitoring of forest cover across sub-continental regions [Symposium
Raymond L. Czaplewski
1991-01-01
The Kalman filter is a generalization of the composite estimator. The univariate composite estimator combines two prior estimates of a population parameter with a weighted average, where the scalar weight is inversely proportional to the variances. The composite estimator is a minimum variance estimator that requires no distributional assumptions other than estimates of the...
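A minimal sketch of the univariate composite (inverse-variance weighted) estimate that the Kalman filter generalizes; the forest-cover numbers are hypothetical:

```python
def composite_estimate(x1, var1, x2, var2):
    """Minimum-variance combination of two independent estimates:
    weights are inversely proportional to the variances."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    est = (w1 * x1 + w2 * x2) / (w1 + w2)
    return est, 1.0 / (w1 + w2)

# e.g. an older inventory (x1) combined with a newer, noisier monitoring estimate (x2)
print(composite_estimate(120.0, 25.0, 132.0, 100.0))
```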
Kalman filter to update forest cover estimates
Raymond L. Czaplewski
1990-01-01
The Kalman filter is a statistical estimator that combines a time-series of independent estimates, using a prediction model that describes expected changes in the state of a system over time. An expensive inventory can be updated using model predictions that are adjusted with more recent, but less expensive and precise, monitoring data. The concepts of the Kalman...
NTilt as an improved enhanced tilt derivative filter for edge detection of potential field anomalies
NASA Astrophysics Data System (ADS)
Nasuti, Yasin; Nasuti, Aziz
2018-07-01
We develop a new phase-based filter, called NTilt, to enhance the edges of geological sources from potential-field data; it incorporates the vertical derivative of the analytical signal at different orders into the tilt derivative equation. This equalizes signals from sources buried at different depths. To evaluate the designed filter, we compared the results obtained from our filter with those from recently applied methods, testing against both synthetic data and measured data from the Finnmark region of northern Norway. The results demonstrate that the new filter permits better definition of the edges of causative anomalies and better highlights several anomalies that are either not shown or poorly defined by the tilt derivative and other methods. The proposed technique also improves the delineation of the actual edges of deep-seated anomalies compared with the tilt derivative and other methods. The NTilt filter provides more accurate and sharper edges, makes nearby anomalies more distinguishable, and avoids introducing additional false edges, reducing the ambiguity in potential-field interpretations. This filter thus appears promising for providing a better qualitative interpretation of gravity and magnetic data in comparison with the more commonly used filters.
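For reference, a sketch of the classical tilt derivative that NTilt builds on (NTilt replaces the numerator with higher-order vertical derivatives of the analytic signal); the wavenumber-domain vertical derivative and unit grid spacings are the usual assumptions for gridded data:

```python
import numpy as np

def tilt_derivative(field, dx=1.0, dy=1.0):
    """Classical tilt derivative of a gridded potential-field anomaly:
    arctan of the vertical derivative over the total horizontal gradient.
    The vertical derivative is computed in the wavenumber domain."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k = np.sqrt(KX**2 + KY**2)
    dz = np.real(np.fft.ifft2(np.fft.fft2(field) * k))   # first vertical derivative
    gy, gx = np.gradient(field, dy, dx)
    horiz = np.hypot(gx, gy)
    return np.arctan2(dz, horiz + 1e-12)
```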
Selected-node stochastic simulation algorithm
NASA Astrophysics Data System (ADS)
Duso, Lorenzo; Zechner, Christoph
2018-04-01
Stochastic simulations of biochemical networks are of vital importance for understanding complex dynamics in cells and tissues. However, existing methods to perform such simulations are associated with computational difficulties and addressing those remains a daunting challenge to the present. Here we introduce the selected-node stochastic simulation algorithm (snSSA), which allows us to exclusively simulate an arbitrary, selected subset of molecular species of a possibly large and complex reaction network. The algorithm is based on an analytical elimination of chemical species, thereby avoiding explicit simulation of the associated chemical events. These species are instead described continuously in terms of statistical moments derived from a stochastic filtering equation, resulting in a substantial speedup when compared to Gillespie's stochastic simulation algorithm (SSA). Moreover, we show that statistics obtained via snSSA profit from a variance reduction, which can significantly lower the number of Monte Carlo samples needed to achieve a certain performance. We demonstrate the algorithm using several biological case studies for which the simulation time could be reduced by orders of magnitude.
A new statistical model for subgrid dispersion in large eddy simulations of particle-laden flows
NASA Astrophysics Data System (ADS)
Muela, Jordi; Lehmkuhl, Oriol; Pérez-Segarra, Carles David; Oliva, Asensi
2016-09-01
Dispersed multiphase turbulent flows are present in many industrial and commercial applications such as internal combustion engines, turbofans, dispersion of contaminants, and steam turbines. Therefore, there is a clear interest in the development of models and numerical tools capable of performing detailed and reliable simulations of these kinds of flows. Large Eddy Simulation offers good accuracy and reliable results together with reasonable computational requirements, making it a very interesting method on which to build numerical tools for particle-laden turbulent flows. Nonetheless, in multiphase dispersed flows additional difficulties arise in LES, since the effect of the unresolved scales of the continuous phase on the dispersed phase is lost due to the filtering procedure. In order to solve this issue, a model able to reconstruct the subgrid velocity seen by the particles is required. In this work a new model for the reconstruction of the subgrid scale effects on the dispersed phase is presented and assessed. This methodology is based on the reconstruction of statistics via Probability Density Functions (PDFs).
Quantum Biometrics with Retinal Photon Counting
NASA Astrophysics Data System (ADS)
Loulakis, M.; Blatsios, G.; Vrettou, C. S.; Kominis, I. K.
2017-10-01
It is known that the eye's scotopic photodetectors, rhodopsin molecules, and their associated phototransduction mechanism leading to light perception, are efficient single-photon counters. We here use the photon-counting principles of human rod vision to propose a secure quantum biometric identification based on the quantum-statistical properties of retinal photon detection. The photon path along the human eye until its detection by rod cells is modeled as a filter having a specific transmission coefficient. Precisely determining its value from the photodetection statistics registered by the conscious observer is a quantum parameter estimation problem that leads to a quantum secure identification method. The probabilities for false-positive and false-negative identification of this biometric technique can readily approach 10^-10 and 10^-4, respectively. The security of the biometric method can be further quantified by the physics of quantum measurements. An impostor must be able to perform quantum thermometry and quantum magnetometry with energy resolution better than 10^-9 ℏ, in order to foil the device by noninvasively monitoring the biometric activity of a user.
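A toy numerical sketch of the underlying parameter-estimation idea, under the simplifying assumption that a pulse of N photons is perceived whenever at least one photon is transduced (the actual rod-vision model involves a detection threshold of several photons); alpha, N, and the trial count are illustrative:

```python
import numpy as np

def estimate_transmission(n_detected, n_trials, photons_per_pulse):
    """Maximum-likelihood estimate of the eye's end-to-end transmission alpha,
    assuming each dim pulse of N photons is 'seen' with probability
    p = 1 - exp(-alpha * N) (Poissonian detection, single-photon threshold)."""
    p_hat = n_detected / n_trials
    p_hat = np.clip(p_hat, 1e-6, 1 - 1e-6)
    return -np.log(1.0 - p_hat) / photons_per_pulse

# A legitimate user with alpha ~ 0.1 produces detection statistics an impostor
# would have to reproduce without knowing alpha.
rng = np.random.default_rng(1)
alpha_true, N, trials = 0.10, 30, 200
seen = rng.random(trials) < 1 - np.exp(-alpha_true * N)
print(estimate_transmission(seen.sum(), trials, N))
```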
Correia, Carlos M; Teixeira, Joel
2014-12-01
Computationally efficient wave-front reconstruction techniques for astronomical adaptive-optics (AO) systems have seen great development in the past decade. Algorithms developed in the spatial-frequency (Fourier) domain have gathered much attention, especially for high-contrast imaging systems. In this paper we present the Wiener filter (resulting in the maximization of the Strehl ratio) and further develop formulae for the anti-aliasing (AA) Wiener filter that optimally takes into account high-order wave-front terms folded in-band during the sensing (i.e., discrete sampling) process. We employ a continuous spatial-frequency representation for the forward measurement operators and derive the Wiener filter when aliasing is explicitly taken into account. We further investigate and compare to classical estimates using least-squares filters the reconstructed wave-front, measurement noise, and aliasing propagation coefficients as a function of the system order. Regarding high-contrast systems, we provide achievable performance results as a function of an ensemble of forward models for the Shack-Hartmann wave-front sensor (using sparse and nonsparse representations) and compute point-spread-function raw intensities. We find that for a 32×32 single-conjugated AOs system the aliasing propagation coefficient is roughly 60% of the least-squares filters, whereas the noise propagation is around 80%. Contrast improvements of factors of up to 2 are achievable across the field in the H band. For current and next-generation high-contrast imagers, despite better aliasing mitigation, AA Wiener filtering cannot be used as a standalone method and must therefore be used in combination with optical spatial filters deployed before image formation actually takes place.
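A generic spatial-frequency Wiener reconstructor of the form used here, without the anti-aliasing terms developed in the paper; the one-dimensional gradient-sensor model, the Kolmogorov-like phase PSD, and the noise level are placeholders:

```python
import numpy as np

def wiener_reconstructor(H, S_phi, S_noise):
    """Spatial-frequency Wiener filter W = S_phi H* / (|H|^2 S_phi + S_noise)
    for a linear WFS measurement model y_hat(k) = H(k) phi_hat(k) + n_hat(k)."""
    return S_phi * np.conj(H) / (np.abs(H) ** 2 * S_phi + S_noise)

# Toy 1-D example: gradient-like sensor H = i*k, Kolmogorov-like phase PSD ~ k^(-11/3)
k = np.fft.fftfreq(256, d=0.1)
k[0] = k[1]                       # avoid the DC singularity in the toy PSD
H = 1j * 2 * np.pi * k
S_phi = np.abs(k) ** (-11.0 / 3.0)
S_noise = 1e-2
W = wiener_reconstructor(H, S_phi, S_noise)
```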
Effect of high latitude filtering on NWP skill
NASA Technical Reports Server (NTRS)
Kalnay, E.; Takacs, L. L.; Hoffman, R. N.
1984-01-01
The high latitude filtering techniques commonly employed in global grid point models to eliminate the high frequency waves associated with the convergence of meridians can introduce serious distortions which ultimately affect the solution at all latitudes. Experiments completed so far with the 4 deg x 5 deg, 9-level GLAS Fourth Order Model indicate that the high latitude filter currently in operation affects its forecasting skill only minimally. In one case, however, the use of a pressure gradient filter significantly improved the forecast. Three-day forecasts with the pressure gradient and operational filters are compared, as are 5-day forecasts with no filter.
High-order noise filtering in nontrivial quantum logic gates.
Green, Todd; Uys, Hermann; Biercuk, Michael J
2012-07-13
Treating the effects of a time-dependent classical dephasing environment during quantum logic operations poses a theoretical challenge, as the application of noncommuting control operations gives rise to both dephasing and depolarization errors that must be accounted for in order to understand total average error rates. We develop a treatment based on effective Hamiltonian theory that allows us to efficiently model the effect of classical noise on nontrivial single-bit quantum logic operations composed of arbitrary control sequences. We present a general method to calculate the ensemble-averaged entanglement fidelity to arbitrary order in terms of noise filter functions, and provide explicit expressions to fourth order in the noise strength. In the weak noise limit we derive explicit filter functions for a broad class of piecewise-constant control sequences, and use them to study the performance of dynamically corrected gates, yielding good agreement with brute-force numerics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mizell, Steve A.; Shadel, Craig A.
Airborne particulates are collected at U.S. Department of Energy sites that exhibit radiological contamination on the soil surface to help assess the potential for wind to transport radionuclides from the contamination sites. Collecting these samples was originally accomplished by drawing air through a cellulose-fiber filter. These filters were replaced with glass-fiber filters in March 2011. Airborne particulates were collected side by side on the two filter materials between May 2013 and May 2014. Comparisons of the sample mass and the radioactivity determinations for the side-by-side samples were undertaken to determine if the change in the filter medium produced significant results. The differences in the results obtained using the two filter types were assessed visually by evaluating the time series and correlation plots and statistically by conducting a nonparametric matched-pair sign test. Generally, the glass-fiber filters collect larger samples of particulates and produce higher radioactivity values for the gross alpha, gross beta, and gamma spectroscopy analyses. However, the correlation between the radioanalytical results for the glass-fiber filters and the cellulose-fiber filters was not strong enough to generate a linear regression function to estimate the glass-fiber filter sample results from the cellulose-fiber filter sample results.
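A sketch of the nonparametric matched-pair sign test applied to side-by-side filter results (requires scipy >= 1.7 for binomtest; the weekly values are hypothetical):

```python
import numpy as np
from scipy.stats import binomtest

def matched_pair_sign_test(glass, cellulose):
    """Matched-pair sign test: under H0 the median difference is zero,
    so positive and negative differences are equally likely."""
    diff = np.asarray(glass) - np.asarray(cellulose)
    diff = diff[diff != 0]                     # ties carry no sign information
    n_pos = int((diff > 0).sum())
    return binomtest(n_pos, n=diff.size, p=0.5)

# Hypothetical weekly gross-beta results for side-by-side samplers
glass     = [1.9, 2.4, 2.1, 2.8, 2.2, 2.6, 2.0, 2.5]
cellulose = [1.6, 2.0, 2.2, 2.3, 1.9, 2.4, 1.8, 2.1]
print(matched_pair_sign_test(glass, cellulose))
```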
NASA Astrophysics Data System (ADS)
Li, Peng; Zong, Yichen; Zhang, Yingying; Yang, Mengmeng; Zhang, Rufan; Li, Shuiqing; Wei, Fei
2013-03-01
We fabricated depth-type hierarchical CNT/quartz fiber (QF) filters through in situ growth of CNTs upon quartz fiber (QF) filters using a floating catalyst chemical vapor deposition (CVD) method. The filter specific area of the CNT/QF filters is more than 12 times higher than that of the pristine QF filters. As a result, the penetration of sub-micron aerosols for CNT/QF filters is reduced by two orders of magnitude, which reaches the standard of high-efficiency particulate air (HEPA) filters. Simultaneously, due to the fluffy brush-like hierarchical structure of CNTs on QFs, the pore size of the hybrid filters only has a small increment. The pressure drop across the CNT/QF filters only increases about 50% with respect to that of the pristine QF filters, leading to an obvious increased quality factor of the CNT/QF filters. Scanning electron microscope images reveal that CNTs are very efficient in capturing sub-micron aerosols. Moreover, the CNT/QF filters show high water repellency, implying their superiority for applications in humid conditions.
A Game Theoretic Fault Detection Filter
NASA Technical Reports Server (NTRS)
Chung, Walter H.; Speyer, Jason L.
1995-01-01
The fault detection process is modelled as a disturbance attenuation problem. The solution to this problem is found via differential game theory, leading to an H(sub infinity) filter which bounds the transmission of all exogenous signals save the fault to be detected. For a general class of linear systems which includes some time-varying systems, it is shown that this transmission bound can be taken to zero by simultaneously bringing the sensor noise weighting to zero. Thus, in the limit, a complete transmission block can be achieved, making the game filter into a fault detection filter. When we specialize this result to time-invariant systems, it is found that the detection filter attained in the limit is identical to the well known Beard-Jones Fault Detection Filter. That is, all fault inputs other than the one to be detected (the "nuisance faults") are restricted to an invariant subspace which is unobservable to a projection on the output. For time-invariant systems, it is also shown that in the limit, the order of the state-space and the game filter can be reduced by factoring out the invariant subspace. The result is a lower dimensional filter which can observe only the fault to be detected. A reduced-order filter can also be generated for time-varying systems, though the computational overhead may be intensive. An example given at the end of the paper demonstrates the effectiveness of the filter as a tool for fault detection and identification.
Ultraviolet filters in stomatopod crustaceans: diversity, ecology and evolution.
Bok, Michael J; Porter, Megan L; Cronin, Thomas W
2015-07-01
Stomatopod crustaceans employ unique ultraviolet (UV) optical filters in order to tune the spectral sensitivities of their UV-sensitive photoreceptors. In the stomatopod species Neogonodactylus oerstedii, we previously found four filter types, produced by five distinct mycosporine-like amino acid pigments in the crystalline cones of their specialized midband ommatidial facets. This UV-spectral tuning array produces receptors with at least six distinct spectral sensitivities, despite expressing only two visual pigments. Here, we present a broad survey of these UV filters across the stomatopod order, examining their spectral absorption properties in 21 species from seven families in four superfamilies. We found that UV filters are present in three of the four superfamilies, and evolutionary character reconstruction implies that at least one class of UV filter was present in the ancestor of all modern stomatopods. Additionally, postlarval stomatopods were observed to produce the UV filters simultaneously alongside development of the adult eye. The absorbance properties of the filters are consistent within a species; however, between species we found a great deal of diversity, both in the number of filters and in their spectral absorbance characteristics. This diversity correlates with the habitat depth ranges of these species, suggesting that species living in shallow, UV-rich environments may tune their UV spectral sensitivities more aggressively. We also found additional, previously unrecognized UV filter types in the crystalline cones of the peripheral eye regions of some species, indicating the possibility for even greater stomatopod visual complexity than previously thought. © 2015. Published by The Company of Biologists Ltd.
A trait-based test for habitat filtering: Convex hull volume
Cornwell, W.K.; Schwilk, D.W.; Ackerly, D.D.
2006-01-01
Community assembly theory suggests that two processes affect the distribution of trait values within communities: competition and habitat filtering. Within a local community, competition leads to ecological differentiation of coexisting species, while habitat filtering reduces the spread of trait values, reflecting shared ecological tolerances. Many statistical tests for the effects of competition exist in the literature, but measures of habitat filtering are less well-developed. Here, we present convex hull volume, a construct from computational geometry, which provides an n-dimensional measure of the volume of trait space occupied by species in a community. Combined with ecological null models, this measure offers a useful test for habitat filtering. We use convex hull volume and a null model to analyze California woody-plant trait and community data. Our results show that observed plant communities occupy less trait space than expected from random assembly, a result consistent with habitat filtering. © 2006 by the Ecological Society of America.
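A minimal sketch of the convex hull volume test with a randomization null model, assuming scipy; the trait matrix and community are synthetic, and the simple random draws from the pool stand in for the study's specific null model:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_volume(traits):
    """Volume of the n-dimensional trait space occupied by a community."""
    return ConvexHull(np.asarray(traits)).volume

def null_hull_volumes(pool_traits, richness, n_rand=999, seed=0):
    """Null expectation: communities of the same richness drawn at random
    from the regional species pool."""
    rng = np.random.default_rng(seed)
    pool = np.asarray(pool_traits)
    return np.array([hull_volume(pool[rng.choice(len(pool), richness, replace=False)])
                     for _ in range(n_rand)])

# Habitat filtering is suggested when the observed volume falls in the
# lower tail of the null distribution.
rng = np.random.default_rng(1)
pool = rng.normal(size=(60, 3))           # 60 species, 3 traits (synthetic)
community = pool[:12]                      # hypothetical observed community
obs = hull_volume(community)
null = null_hull_volumes(pool, 12)
p_value = (np.sum(null <= obs) + 1) / (null.size + 1)
print(obs, p_value)
```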
A Maximum Entropy Method for Particle Filtering
NASA Astrophysics Data System (ADS)
Eyink, Gregory L.; Kim, Sangil
2006-06-01
Standard ensemble or particle filtering schemes do not properly represent states of low prior probability when the number of available samples is too small, as is often the case in practical applications. We introduce here a set of parametric resampling methods to solve this problem. Motivated by a general H-theorem for relative entropy, we construct parametric models for the filter distributions as maximum-entropy/minimum-information models consistent with moments of the particle ensemble. When the prior distributions are modeled as mixtures of Gaussians, our method naturally generalizes the ensemble Kalman filter to systems with highly non-Gaussian statistics. We apply the new particle filters presented here to two simple test cases: a one-dimensional diffusion process in a double-well potential and the three-dimensional chaotic dynamical system of Lorenz.
Performance Evaluation of Two Different Industrial Foam Filters with LiMCA II Data
NASA Astrophysics Data System (ADS)
Syvertsen, Martin; Bao, Sarina
2015-04-01
Plant-scale filtration experiments with molten aluminum have been carried out with two different types of 10 × 10 × 2 in, 30 ppi ceramic foam filters. The filters were produced in the same production line where the only difference was the composition of the ceramic slurry used for the filter production. The inclusion contents in the aluminum melt before and after the filters were measured with two constantly running liquid metal cleanliness analyzer (LiMCA) II units. Three methods for analyzing the recorded data are presented. A significant difference in the filtration performance as function of time was found when settling of inclusions in the melt was taken into account. Statistical treatment of the time dependent LiMCA II data was performed.
Daryasafar, Navid; Baghbani, Somaye; Moghaddasi, Mohammad Naser; Sadeghzade, Ramezanali
2014-01-01
We design a broadband band-pass filter with a notch band that uses coupled transmission lines in its structure, based on new models of coupled transmission lines. To develop and present the new model, previous models are first simulated in the ADS program. Then, by modifying their equations and consequently the basic parameters of these models, the optimization of and dependencies among these parameters, as well as their frequency responses, are examined, and the results of these changes are brought together to design a new filter.
Kumar, M; Mishra, S K
2017-01-01
Clinical magnetic resonance imaging (MRI) images may become corrupted by a mixture of different types of noise, such as Rician, Gaussian, and impulse noise. Most of the available filtering algorithms are noise specific, linear, and non-adaptive. There is a need to develop a nonlinear adaptive filter that adapts itself to the requirement and can be effectively applied to suppress mixed noise from different MRI images. In view of this, a novel nonlinear neural network based adaptive filter, the functional link artificial neural network (FLANN), whose weights are trained by a recently developed derivative-free meta-heuristic technique, teaching-learning based optimization (TLBO), is proposed and implemented. The performance of the proposed filter is compared with five other adaptive filters and analyzed by considering quantitative metrics and evaluating a nonparametric statistical test. The convergence curve and computational time are also included for investigating the efficiency of the proposed as well as the competitive filters. The simulation outcomes show that the proposed filter outperforms the other adaptive filters. The proposed filter can be hybridized with other evolutionary techniques and utilized for removing different noise and artifacts from other medical images more competently.
The TileCal Online Energy Estimation for the Next LHC Operation Period
NASA Astrophysics Data System (ADS)
Sotto-Maior Peralva, B.; ATLAS Collaboration
2015-05-01
The ATLAS Tile Calorimeter (TileCal) is the detector used in the reconstruction of hadrons, jets and missing transverse energy from the proton-proton collisions at the Large Hadron Collider (LHC). It covers the central part of the ATLAS detector (|η| < 1.6). The energy deposited by the particles is read out by approximately 5,000 cells, with double readout channels. The signal provided by the readout electronics for each channel is digitized at 40 MHz and its amplitude is estimated by an optimal filtering algorithm, which expects a single signal with a well-defined shape. However, the LHC luminosity is expected to increase, leading to pile-up that deforms the signal of interest. Due to limited resources, the current hardware setup, which is based on Digital Signal Processors (DSP), does not allow the implementation of sophisticated energy estimation methods that deal with the pile-up. Therefore, the technique to be employed for online energy estimation in TileCal for the next LHC operation period must be based on fast filters such as the Optimal Filter (OF) and the Matched Filter (MF). Both the OF and MF methods envisage the use of the background second order statistics in their design, more precisely the covariance matrix. However, the identity matrix has been used to describe this quantity. Although this approximation can be valid for low luminosity LHC, it leads to biased estimators under pile-up conditions. Since most of the TileCal cells present low occupancy, the pile-up, which is often modeled by a non-Gaussian distribution, can be seen as outlier events. Consequently, the classical covariance matrix estimation does not describe correctly the second order statistics of the background for the majority of the events, as this approach is very sensitive to outliers. As a result, the OF (or MF) coefficients are miscalculated, leading to a larger variance and a biased energy estimator. This work evaluates the usage of a robust covariance estimator, namely the Minimum Covariance Determinant (MCD) algorithm, to be applied in the OF design. The goal of the MCD estimator is to find a number of observations whose classical covariance matrix has the lowest determinant. Hence, this procedure avoids taking into account low likelihood events to describe the background. It is worth mentioning that the background covariance matrix as well as the OF coefficients for each TileCal channel are computed offline and stored for both online and offline use. In order to evaluate the impact of the MCD estimator on the performance of the OF, simulated data sets were used. Different average numbers of interactions per bunch crossing and bunch spacings were tested. The results show that the estimation of the background covariance matrix through MCD improves significantly the final energy resolution with respect to the identity matrix which is currently used. Particularly, for high occupancy cells, the final energy resolution is improved by more than 20%. Moreover, the use of the classical covariance matrix degrades the energy resolution for the majority of TileCal cells.
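A sketch of the idea using scikit-learn's MinCovDet as the MCD estimator; the pulse shape, sample counts, and pile-up model are invented for illustration, and the weights shown solve only a simplified unit-gain amplitude constraint rather than the full OF design with pedestal and timing constraints:

```python
import numpy as np
from sklearn.covariance import MinCovDet

def of_weights(pulse_shape, cov):
    """Amplitude weights minimizing the noise variance w^T C w subject to
    unit response to the reference pulse: w = C^-1 g / (g^T C^-1 g)."""
    g = np.asarray(pulse_shape)
    cinv_g = np.linalg.solve(cov, g)
    return cinv_g / (g @ cinv_g)

# Background samples for one channel (7 digitized samples per event);
# the robust MCD covariance down-weights pile-up-like outlier events.
rng = np.random.default_rng(0)
background = rng.normal(0, 1.5, size=(5000, 7))
outliers = rng.integers(0, 5000, 250)
background[outliers, 3:6] += rng.exponential(8, size=(250, 3))   # out-of-time pile-up
cov_robust = MinCovDet(random_state=0).fit(background).covariance_
g = np.array([0.0, 0.07, 0.56, 1.0, 0.56, 0.15, 0.02])           # illustrative pulse shape
w = of_weights(g, cov_robust)
amplitude_estimate = background[0] @ w                            # energy estimate for one event
```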
Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza
2015-01-01
To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find out the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method using the median filter to estimate background illumination showed the lowest coefficients of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same level of accuracy. The contrast limited adaptive histogram equalization technique has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, namely the dividing method using the median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation.
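A minimal CLAHE sketch with OpenCV applied to the green channel of a fundus image (the clip limit, tile grid, and file name are placeholders):

```python
import cv2

def enhance_green_channel(path, clip_limit=2.0, tile_grid=(8, 8)):
    """Contrast-limited adaptive histogram equalization (CLAHE) on the green
    channel of a color fundus image, the channel usually used for vessel
    segmentation."""
    bgr = cv2.imread(path)
    green = bgr[:, :, 1]
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(green)

# enhanced = enhance_green_channel("fundus.png")
```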
Al-Herrawy, Ahmad Z; Gad, Mahmoud A
2017-01-01
The aim of this study was to compare slow and rapid sand filters for the removal of free-living amoebae (FLAs) during drinking water treatment. Overall, 48 water samples were collected from two drinking water treatment plants having two different filtration systems (slow and rapid sand filters), from the inlet and outlet of each plant. Water samples were collected from Fayoum Drinking Water and Wastewater Holding Company, Egypt, during the year 2015. They were processed for detection of FLAs using non-nutrient agar (NNA). The isolates of FLAs were microscopically identified to the genus level based on morphologic criteria and molecularly confirmed by PCR using genus-specific primers. The percentage of removal of FLAs through the different treatment processes reached its highest rate in the station using slow sand filters (83%), while the removal by the rapid sand filter system was 71.4%. Statistically, there was no significant difference (P = 0.55) in the removal of FLAs between the two drinking water treatment systems. Statistically, seasons had no significant effect on the prevalence of FLAs in the two drinking water treatment plants. Morphological identification of the isolated FLAs showed the presence of 3 genera, namely Acanthamoeba, Naegleria, and Vermamoeba (Hartmannella), confirmed by PCR. The appearance of FLAs, especially pathogenic amoebae, in completely treated drinking water may pose a potential health threat, although there is no statistical difference between the two examined drinking water filtration systems.
Gordon, J.D.; Schroder, L.J.; Morden-Moore, A. L.; Bowersox, V.C.
1995-01-01
Separate experiments by the U.S. Geological Survey (USGS) and the Illinois State Water Survey Central Analytical Laboratory (CAL) independently assessed the stability of hydrogen ion and specific conductance in filtered wet-deposition samples stored at ambient temperatures. The USGS experiment represented a test of sample stability under a diverse range of conditions, whereas the CAL experiment was a controlled test of sample stability. In the experiment by the USGS, a statistically significant (α = 0.05) relation between [H+] and time was found for the composited filtered, natural, wet-deposition solution when all reported values are included in the analysis. However, if two outlying pH values most likely representing measurement error are excluded from the analysis, the change in [H+] over time was not statistically significant. In the experiment by the CAL, randomly selected samples were reanalyzed between July 1984 and February 1991. The original analysis and reanalysis pairs revealed that [H+] differences, although very small, were statistically different from zero, whereas specific-conductance differences were not. Nevertheless, the results of the CAL reanalysis project indicate there appears to be no consistent, chemically significant degradation in sample integrity with regard to [H+] and specific conductance while samples are stored at room temperature at the CAL. Based on the results of the CAL and USGS studies, short-term (45-60 day) stability of [H+] and specific conductance in natural filtered wet-deposition samples that are shipped and stored unchilled at ambient temperatures was satisfactory.
The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis
NASA Astrophysics Data System (ADS)
Xu, X.; Tong, S.; Wang, L.
2017-12-01
Multiple suppression is a difficult problem in seismic data processing. The traditional technology for multiple attenuation is based on the principle of minimum output energy of the seismic signal; this criterion is based on second-order statistics and cannot achieve multiple attenuation when the primaries and multiples are non-orthogonal. In order to solve this problem, we combine the feedback iteration method based on the wave equation with an improved independent component analysis (ICA) based on higher-order statistics to suppress the multiples. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, in order to match the predicted multiples to the true multiples in amplitude and phase, we design an expanded pseudo multi-channel matching filtering method to obtain a more accurate matching result. Finally, we apply an improved fast ICA algorithm, based on the maximum non-Gaussianity criterion for the output signal, to the matched multiples and obtain a better separation of the primaries and the multiples. The advantage of our method is that no prior information is needed to predict the multiples, and a better separation result is obtained. The method has been applied to several synthetic datasets generated by the finite-difference modeling technique and to the Sigsbee2B model multiple data; the primaries and multiples are non-orthogonal in these models. The experiments show that after three to four iterations we obtain satisfactory multiple predictions. Using our matching method and fast ICA adaptive multiple subtraction, we can effectively preserve the primary energy in the seismic records while suppressing the free-surface multiples, especially those related to the middle and deep sections.
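A toy sketch of the adaptive subtraction step using scikit-learn's FastICA; the synthetic trace, the imperfectly matched prediction, and the component-selection rule (pick the source least correlated with the predicted multiple) are illustrative, and ICA outputs are recovered only up to scale and sign:

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_multiple_subtraction(recorded, predicted_multiple):
    """Separate primaries from multiples by running FastICA on the recorded
    trace and the (matched) predicted multiple; the component least
    correlated with the prediction is taken as the primary estimate."""
    X = np.column_stack([recorded, predicted_multiple])
    sources = FastICA(n_components=2, random_state=0).fit_transform(X)
    corr = [abs(np.corrcoef(s, predicted_multiple)[0, 1]) for s in sources.T]
    return sources[:, int(np.argmin(corr))]

# Synthetic trace: a primary wavelet plus a non-orthogonal multiple
t = np.linspace(0, 1, 1000)
primary = np.exp(-((t - 0.3) / 0.01) ** 2)
multiple = 0.6 * np.exp(-((t - 0.62) / 0.012) ** 2)
recorded = primary + multiple
predicted = 0.9 * np.roll(multiple, 3)        # imperfectly matched prediction
primary_est = ica_multiple_subtraction(recorded, predicted)
```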
Recursive inverse kinematics for robot arms via Kalman filtering and Bryson-Frazier smoothing
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1987-01-01
This paper applies linear filtering and smoothing theory to solve recursively the inverse kinematics problem for serial multilink manipulators. This problem is to find a set of joint angles that achieve a prescribed tip position and/or orientation. A widely applicable numerical search solution is presented. The approach finds the minimum of a generalized distance between the desired and the actual manipulator tip position and/or orientation. Both a first-order steepest-descent gradient search and a second-order Newton-Raphson search are developed. The optimal relaxation factor required for the steepest descent method is computed recursively using an outward/inward procedure similar to those used typically for recursive inverse dynamics calculations. The second-order search requires evaluation of a gradient and an approximate Hessian. A Gauss-Markov approach is used to approximate the Hessian matrix in terms of products of first-order derivatives. This matrix is inverted recursively using a two-stage process of inward Kalman filtering followed by outward smoothing. This two-stage process is analogous to that recently developed by the author to solve by means of spatial filtering and smoothing the forward dynamics problem for serial manipulators.
Glass cylindrical filter for electrolysis cell
NASA Astrophysics Data System (ADS)
Abe, Shinichi; Akiyama, Fuminori
1992-09-01
Some electrolysis processes require separation of the electrolytic solution by a filter between the two electrodes in order to prevent products from reacting secondarily at the other electrode. These filters are usually made of a glass filter or ion exchange membrane, and they are fixed to the electrolysis cell or cover one electrode. This report presents a detachable glass cylindrical filter for electrolytic reactions. The glass cylindrical filter was made from glass filter powder placed in a mold and heated at 800 C for 18 minutes. Using this filter, electrolytic reduction of carbon dioxide was performed in 0 C hot water with benzoin. This reaction produces aqueous oil from carbon dioxide and water. The products were compared with and without the filter; although the yield did not differ between the two reaction systems, the products obtained without the filter contained more highly polymerized oil than those obtained with the filter. This suggests that the aqueous oil was produced at the cathode and polymerized at the anode.
A robust nonlinear filter for image restoration.
Koivunen, V
1995-01-01
A class of nonlinear regression filters based on robust estimation theory is introduced. The goal of the filtering is to recover a high-quality image from degraded observations. Models for desired image structures and contaminating processes are employed, but deviations from strict assumptions are allowed since the assumptions on signal and noise are typically only approximately true. The robustness of filters is usually addressed only in a distributional sense, i.e., the actual error distribution deviates from the nominal one. In this paper, the robustness is considered in a broad sense since the outliers may also be due to inappropriate signal model, or there may be more than one statistical population present in the processing window, causing biased estimates. Two filtering algorithms minimizing a least trimmed squares criterion are provided. The design of the filters is simple since no scale parameters or context-dependent threshold values are required. Experimental results using both real and simulated data are presented. The filters effectively attenuate both impulsive and nonimpulsive noise while recovering the signal structure and preserving interesting details.
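A sketch of a least-trimmed-squares location filter of the kind described (constant signal model in each window); the window size and trim fraction are illustrative, and the implementation uses the fact that the LTS location estimate is the mean of the contiguous block of h order statistics with minimal within-block sum of squares:

```python
import numpy as np

def lts_filter(image, window=5, trim_fraction=0.5):
    """Sliding-window least-trimmed-squares location estimate: in each window,
    fit a constant by keeping the h samples with the smallest squared residuals."""
    pad = window // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.empty_like(image, dtype=float)
    h = max(1, int(trim_fraction * window * window))
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            vals = np.sort(padded[i:i + window, j:j + window].ravel())
            best, best_ss = vals[:h].mean(), np.inf
            # Scan contiguous blocks of h order statistics for the minimal
            # within-block sum of squares; its mean is the LTS estimate.
            for k in range(vals.size - h + 1):
                block = vals[k:k + h]
                ss = ((block - block.mean()) ** 2).sum()
                if ss < best_ss:
                    best, best_ss = block.mean(), ss
            out[i, j] = best
    return out

noisy = np.random.default_rng(0).normal(size=(32, 32))
noisy[::7, ::5] += 8.0                       # impulsive outliers
restored = lts_filter(noisy)
```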
Thin film filter lifetesting results in the extreme ultraviolet
NASA Technical Reports Server (NTRS)
Vedder, P. W.; Vallerga, J. V.; Gibson, J. L.; Stock, J.; Siegmund, O. H. W.
1993-01-01
We present the results of the thin film filter lifetesting program conducted as part of the NASA Extreme Ultraviolet Explorer (EUVE) satellite mission. This lifetesting program is designed to monitor changes in the transmission and mechanical properties of the EUVE filters over the lifetime of the mission (fabrication, assembly, launch and operation). Witness test filters were fabricated from thin film foils identical to those used in the flight filters. The witness filters have been examined and calibrated periodically over the past seven years. The filters have been examined for evidence of pinholing, mechanical degradation, and oxidation. Absolute transmissions of the flight and witness filters have been measured in the extreme ultraviolet (EUV) over six orders of magnitude at numerous wavelengths using the Berkeley EUV Calibration Facility.
NASA Astrophysics Data System (ADS)
Pan, M.-Ch.; Chu, W.-Ch.; Le, Duc-Do
2016-12-01
The paper presents an alternative Vold-Kalman filter order tracking (VKF_OT) method, i.e. an adaptive angular-velocity VKF_OT technique, to extract and characterize order components in an adaptive manner for the condition monitoring and fault diagnosis of rotary machinery. The order/spectral waveforms to be tracked can be recursively solved by using a Kalman filter based on the one-step state prediction. The paper comprises the theoretical derivation of the computation scheme, the numerical implementation, and a parameter investigation. Comparisons of the adaptive VKF_OT scheme with two other schemes are performed by processing synthetic signals of designated order components. Processing parameters such as the weighting factor and the correlation matrix of the process noise, and data conditions such as the sampling frequency, which influence the tracking behavior, are explored. The merits of the proposed scheme, such as its adaptive processing nature and computational efficiency, are addressed, although the computation was performed under off-line conditions. The proposed scheme can simultaneously extract multiple spectral components, and effectively decouple close and crossing orders associated with multi-axial reference rotating speeds.
Predicting water filter and bottled water use in Appalachia: a community-scale case study.
Levêque, Jonas G; Burns, Robert C
2017-06-01
A questionnaire survey was conducted in order to assess residents' perceptions of water quality for drinking and recreational purposes in a mid-sized city in northcentral West Virginia. Two logistic regression analyses were conducted in order to investigate the factors that influence bottle use and filter use. Results show that 37% of respondents primarily use bottled water and that 58% use a household filter when drinking from the tap. Respondents with lower levels of environmental concern, education levels, and lower organoleptic perceptions were most likely to perceive health risks from tap water consumption, and were most likely to use bottled water. Income, age, and organoleptic perceptions were predictors of water filter use among respondents. Clean water for recreational purposes was not found to be significant with either of these models. Our results demonstrate that bottle use and filter use are explained differently. We argue that more education and better communication about local tap water quality would decrease the use of bottled water. We demonstrate that household filters could be used as an alternative to bottled water.
Brazilian academic search filter: application to the scientific literature on physical activity.
Sanz-Valero, Javier; Ferreira, Marcos Santos; Castiel, Luis David; Wanden-Berghe, Carmina; Guilam, Maria Cristina Rodrigues
2010-10-01
To develop a search filter in order to retrieve scientific publications on physical activity from Brazilian academic institutions. The academic search filter consisted of the descriptor "exercise" associated through the term AND, to the names of the respective academic institutions, which were connected by the term OR. The MEDLINE search was performed with PubMed on 11/16/2008. The institutions were selected according to the classification from the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for interuniversity agreements. A total of 407 references were retrieved, corresponding to about 0.9% of all articles about physical activity and 0.5% of the Brazilian academic publications indexed in MEDLINE on the search date. When compared with the manual search undertaken, the search filter (descriptor + institutional filter) showed a sensitivity of 99% and a specificity of 100%. The institutional search filter showed high sensitivity and specificity, and is applicable to other areas of knowledge in health sciences. It is desirable that every Brazilian academic institution establish its "standard name/brand" in order to efficiently retrieve their scientific literature.
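An illustrative reconstruction of the filter's boolean structure; the field tags and the (abbreviated) institution list are assumptions, not the authors' exact strings:

```python
# Illustrative construction of the search string (institution list abbreviated
# and hypothetical; the study used the full CAPES list of Brazilian institutions).
institutions = [
    "Universidade de Sao Paulo[Affiliation]",
    "Universidade Federal do Rio de Janeiro[Affiliation]",
    "Universidade Estadual de Campinas[Affiliation]",
]
query = '"exercise"[MeSH Terms] AND (' + " OR ".join(institutions) + ")"
print(query)
```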
The Effect of Pulse Shaping QPSK on Bandwidth Efficiency
NASA Technical Reports Server (NTRS)
Purba, Josua Bisuk Mubyarto; Horan, Shelia
1997-01-01
This research investigates the effect of pulse shaping QPSK on bandwidth efficiency over a non-linear channel. The investigation includes software simulations and a hardware implementation. Three kinds of filters have been investigated as pulse shaping filters: the 5th order Butterworth filter, the 3rd order Bessel filter, and the square root raised cosine filter with roll-off factors (alpha) of 0.25, 0.5 and 1. Two different high power amplifiers, a Traveling Wave Tube Amplifier (TWTA) and a Solid State Power Amplifier (SSPA), have been investigated in the hardware implementation. A significant improvement in bandwidth utilization (rho) for the filtered data compared to unfiltered data through the non-linear channel is shown in the results. This method promises strong performance gains in a bandlimited channel when compared to unfiltered systems. This work was conducted at NMSU in the Center for Space Telemetering and Telecommunications Systems in the Klipsch School of Electrical and Computer Engineering and is supported by a grant from the National Aeronautics and Space Administration (NASA), NAG5-1491.
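A sketch of two of the pulse-shaping filters using scipy.signal; the cutoff-to-symbol-rate factor, sampling rate, and NRZ test stream are assumptions, and the square-root raised cosine case is omitted:

```python
import numpy as np
from scipy import signal

def butterworth_shaper(symbol_rate, fs, order=5, bt=1.0):
    """5th-order Butterworth low-pass used as a baseband pulse-shaping filter;
    the cutoff is set relative to the symbol rate (bt is an assumed factor)."""
    return signal.butter(order, bt * symbol_rate / (fs / 2), btype="low")

def bessel_shaper(symbol_rate, fs, order=3, bt=1.0):
    """3rd-order Bessel low-pass shaper (nearly linear phase in the passband)."""
    return signal.bessel(order, bt * symbol_rate / (fs / 2), btype="low")

# Shape a random NRZ I-channel bit stream before a nonlinear amplifier model
fs, rs = 8.0, 1.0                       # 8 samples per symbol
bits = np.random.default_rng(0).integers(0, 2, 256) * 2 - 1
waveform = np.repeat(bits, int(fs / rs)).astype(float)
b, a = butterworth_shaper(rs, fs)
shaped = signal.lfilter(b, a, waveform)
```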
On higher order discrete phase-locked loops.
NASA Technical Reports Server (NTRS)
Gill, G. S.; Gupta, S. C.
1972-01-01
An exact mathematical model is developed for a discrete loop of a general order particularly suitable for digital computation. The deterministic response of the loop to the phase step and the frequency step is investigated. The design of the digital filter for the second-order loop is considered. Use is made of the incremental phase plane to study the phase error behavior of the loop. The model of the noisy loop is derived and the optimization of the loop filter for minimum mean-square error is considered.
NASA Astrophysics Data System (ADS)
Jeffrey, N.; Abdalla, F. B.; Lahav, O.; Lanusse, F.; Starck, J.-L.; Leonard, A.; Kirk, D.; Chang, C.; Baxter, E.; Kacprzak, T.; Seitz, S.; Vikram, V.; Whiteway, L.; Abbott, T. M. C.; Allam, S.; Avila, S.; Bertin, E.; Brooks, D.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Castander, F. J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Davis, C.; De Vicente, J.; Desai, S.; Doel, P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; Hartley, W. G.; Honscheid, K.; Hoyle, B.; James, D. J.; Jarvis, M.; Kuehn, K.; Lima, M.; Lin, H.; March, M.; Melchior, P.; Menanteau, F.; Miquel, R.; Plazas, A. A.; Reil, K.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Walker, A. R.
2018-05-01
Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion, not accounting for survey masks or noise. The Wiener filter is well-motivated for Gaussian density fields in a Bayesian framework. GLIMPSE uses sparsity, aiming to reconstruct non-linearities in the density field. We compare these methods with several tests using public Dark Energy Survey (DES) Science Verification (SV) data and realistic DES simulations. The Wiener filter and GLIMPSE offer substantial improvements over smoothed KS with a range of metrics. Both the Wiener filter and GLIMPSE convergence reconstructions show a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations (a standard data vector when inferring cosmology from peak statistics); the maximum signal-to-noise of these peak statistics is increased by a factor of 3.5 for the Wiener filter and 9 for GLIMPSE. With simulations we measure the reconstruction of the harmonic phases; the phase residuals' concentration is improved 17% by GLIMPSE and 18% by the Wiener filter. The correlation between reconstructions from data and foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE.
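A minimal flat-sky Kaiser-Squires inversion sketch (the direct, mask- and noise-agnostic method the paper compares against); the pixel scale is taken as unity and only the E-mode convergence is returned:

```python
import numpy as np

def kaiser_squires(gamma1, gamma2):
    """Flat-sky Kaiser-Squires inversion: convergence kappa from shear maps via
    kappa_hat = ((k1^2 - k2^2) g1_hat + 2 k1 k2 g2_hat) / k^2 (E-mode part)."""
    ny, nx = gamma1.shape
    k1 = np.fft.fftfreq(nx)[None, :]
    k2 = np.fft.fftfreq(ny)[:, None]
    ksq = k1**2 + k2**2
    ksq[0, 0] = 1.0                          # avoid division by zero at k = 0
    g1_hat = np.fft.fft2(gamma1)
    g2_hat = np.fft.fft2(gamma2)
    kappa_hat = ((k1**2 - k2**2) * g1_hat + 2 * k1 * k2 * g2_hat) / ksq
    kappa_hat[0, 0] = 0.0                    # mean convergence is unconstrained
    return np.real(np.fft.ifft2(kappa_hat))
```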
Design Characteristics and Tobacco Metal Concentrations in Filtered Cigars.
Caruso, Rosalie V; O'Connor, Richard J; Travers, Mark J; Delnevo, Cristine D; Stephens, W Edryd
2015-11-01
While U.S. cigarette consumption has declined, cigar use has steadily increased, for reasons including price compared to cigarettes and the availability of filtered varieties resembling cigarettes, and flavors that have been banned in cigarettes (excluding menthol). Little published data exists on the design characteristics of such cigars. A variety of filtered cigar brands were tested for design characteristics such as whole cigar weight, ventilation, and per-cigar tobacco weight. Cigar sticks were then sent to the University of St. Andrews for metal concentration testing of As, Pb, Cr, Ni, and Cd. Large and small cigars were statistically different between cigar weight (p ≤ .001), per-cigar tobacco weight (p = .001), rod diameter (p = .006), and filter diameter (p = .012). The differences in mean ventilation (overall mean = 19.6%, min. = 0.84%, max. = 57.6%) across filtered cigar brands were found to be statistically significant (p = .031), and can be compared to the ventilation of the average of 2013 U.S. Marlboro Red, Gold, and Silver packs at 29% ventilation. There were no significant differences for metal concentrations between cigar types (p = .650), with Pb and As levels being similar to U.S. 2009 cigarette concentrations, Cd cigar levels being slightly higher, and Cr and Ni levels much lower than cigarette levels. With cigar use rising, and filtered cigars displaying substantial similarities to filtered cigarettes, more research on product characteristics is warranted. Future plans include testing tobacco alkaloid and more observation of cigar weight for tax bracket purposes. © The Author 2015. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Computational tools for multi-linked flexible structures
NASA Technical Reports Server (NTRS)
Lee, Gordon K. F.; Brubaker, Thomas A.; Shults, James R.
1990-01-01
A software module which designs and tests controllers and filters in Kalman Estimator form, based on a polynomial state-space model is discussed. The user-friendly program employs an interactive graphics approach to simplify the design process. A variety of input methods are provided to test the effectiveness of the estimator. Utilities are provided which address important issues in filter design such as graphical analysis, statistical analysis, and calculation time. The program also provides the user with the ability to save filter parameters, inputs, and outputs for future use.
Predicting Wind Noise Inside Porous Dome Filters for Infrasound Sensing on Mars
NASA Astrophysics Data System (ADS)
Pitre, Kevin M.
The study described in this thesis aims to assess the effects of wind-generated noise on potential infrasound measurements on future Mars missions. Infrasonic sensing on Mars is being considered as a means to probe large-scale atmospheric dynamics and thermal balance, and also to infer bolide impact statistics. In this study, a preliminary framework is developed for predicting the contributions of the principal wind noise mechanisms to the signal detected by a sensor placed inside a hemispherical porous dome on the Martian surface. The method involves calculating the pressure power density spectra in the infrasonic range generated by turbulent interactions and filtered by dome-shaped filters of varying porosities. Knowing the overall noise power spectrum will allow it to be subtracted from raw signals of interest and aid in the development of infrasound sensors for the Martian environment. In order to make these power spectral predictions, the study utilizes the Martian Climate Database (MCD) global circulation model, developed by Laboratoire de Meteorologie Dynamique in Paris, France. Velocity profiles are generated and used in semi-empirical von Karman functions along with equations describing the physical turbulent interactions. With these, turbulent interactions in the free atmosphere above the Martian surface are described. For interactions of turbulence with the porous filter, semi-empirical formulations are adapted to the Martian parameters generated by the MCD and plotted alongside contributions in the free atmosphere outside and inside the dome to obtain the total wind noise contribution from turbulence. In conclusion, the plots of power spectral densities versus frequency are analyzed to determine which filter porosity would provide the best wind-noise suppression when measured at the center of the dome. The study shows that the 55% (0.02 to 5 Hz) and 80% (6 to 20 Hz) porosities perform best of the five porosities tested.
Advanced data assimilation in strongly nonlinear dynamical systems
NASA Technical Reports Server (NTRS)
Miller, Robert N.; Ghil, Michael; Gauthiez, Francois
1994-01-01
Advanced data assimilation methods are applied to simple but highly nonlinear problems. The dynamical systems studied here are the stochastically forced double well and the Lorenz model. In both systems, linear approximation of the dynamics about the critical points near which regime transitions occur is not always sufficient to track their occurrence or nonoccurrence. Straightforward application of the extended Kalman filter yields mixed results. The ability of the extended Kalman filter to track transitions of the double-well system from one stable critical point to the other depends on the frequency and accuracy of the observations relative to the mean-square amplitude of the stochastic forcing. The ability of the filter to track the chaotic trajectories of the Lorenz model is limited to short times, as is the ability of strong-constraint variational methods. Examples are given to illustrate the difficulties involved, and qualitative explanations for these difficulties are provided. Three generalizations of the extended Kalman filter are described. The first is based on inspection of the innovation sequence, that is, the successive differences between observations and forecasts; it works very well for the double-well problem. The second, an extension to fourth-order moments, yields excellent results for the Lorenz model but will be unwieldy when applied to models with high-dimensional state spaces. A third, more practical method--based on an empirical statistical model derived from a Monte Carlo simulation--is formulated, and shown to work very well. Weak-constraint methods can be made to perform satisfactorily in the context of these simple models, but such methods do not seem to generalize easily to practical models of the atmosphere and ocean. In particular, it is shown that the equations derived in the weak variational formulation are difficult to solve conveniently for large systems.
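As a minimal illustration of the extended Kalman filter applied to the Lorenz model discussed above (a sketch only: Euler time stepping, observation of the first component, and the noise settings are assumptions, not the configuration used in the study):

    import numpy as np

    def lorenz_step(x, dt, s=10.0, r=28.0, b=8.0/3.0):
        dx = np.array([s*(x[1]-x[0]), x[0]*(r-x[2])-x[1], x[0]*x[1]-b*x[2]])
        return x + dt*dx

    def lorenz_jacobian(x, dt, s=10.0, r=28.0, b=8.0/3.0):
        J = np.array([[-s,      s,    0.0],
                      [r-x[2], -1.0, -x[0]],
                      [x[1],    x[0], -b]])
        return np.eye(3) + dt*J              # tangent linear model of the Euler step

    def ekf_lorenz(obs, x0, P0, dt, Q, R):
        """Extended Kalman filter for Lorenz-63, observing only the first state component.
        obs: sequence of scalar observations; Q: 3x3 process noise; R: scalar observation variance."""
        H = np.array([[1.0, 0.0, 0.0]])
        x, P, est = x0.copy(), P0.copy(), []
        for y in obs:
            # forecast step
            F = lorenz_jacobian(x, dt)
            x = lorenz_step(x, dt)
            P = F @ P @ F.T + Q
            # analysis step with the scalar observation y
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ (np.atleast_1d(y) - H @ x)).ravel()
            P = (np.eye(3) - K @ H) @ P
            est.append(x.copy())
        return np.array(est)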
A semiempirical linear model of indirect, flat-panel x-ray detectors.
Huang, Shih-Ying; Yang, Kai; Abbey, Craig K; Boone, John M
2012-04-01
It is important to understand signal and noise transfer in the indirect, flat-panel x-ray detector when developing and optimizing imaging systems. For optimization where simulating images is necessary, this study introduces a semiempirical model to simulate projection images with user-defined x-ray fluence interaction. The signal and noise transfer in the indirect, flat-panel x-ray detectors is characterized by statistics consistent with energy-integration of x-ray photons. For an incident x-ray spectrum, x-ray photons are attenuated and absorbed in the x-ray scintillator to produce light photons, which are coupled to photodiodes for signal readout. The signal mean and variance are linearly related to the energy-integrated x-ray spectrum by empirically determined factors. With the known first- and second-order statistics, images can be simulated by incorporating multipixel signal statistics and the modulation transfer function of the imaging system. To estimate the semiempirical input to this model, 500 projection images (using an indirect, flat-panel x-ray detector in the breast CT system) were acquired with 50-100 kilovolt (kV) x-ray spectra filtered with 0.1-mm tin (Sn), 0.2-mm copper (Cu), 1.5-mm aluminum (Al), or 0.05-mm silver (Ag). The signal mean and variance of each detector element and the noise power spectra (NPS) were calculated and incorporated into this model for accuracy. Additionally, the modulation transfer function of the detector system was physically measured and incorporated in the image simulation steps. For validation purposes, simulated and measured projection images of air scans were compared using 40 kV∕0.1-mm Sn, 65 kV∕0.2-mm Cu, 85 kV∕1.5-mm Al, and 95 kV∕0.05-mm Ag. The linear relationship between the measured signal statistics and the energy-integrated x-ray spectrum was confirmed and incorporated into the model. The signal mean and variance factors were linearly related to kV for each filter material (r(2) of signal mean to kV: 0.91, 0.93, 0.86, and 0.99 for 0.1-mm Sn, 0.2-mm Cu, 1.5-mm Al, and 0.05-mm Ag, respectively; r(2) of signal variance to kV: 0.99 for all four filters). The comparison of the signal and noise (mean, variance, and NPS) between the simulated and measured air scan images suggested that this model was reasonable in predicting accurate signal statistics of air scan images using absolute percent error. Overall, the model was found to be accurate in estimating signal statistics and spatial correlation between the detector elements of the images acquired with indirect, flat-panel x-ray detectors. The semiempirical linear model of the indirect, flat-panel x-ray detectors was described and validated with images of air scans. The model was found to be a useful tool in understanding the signal and noise transfer within indirect, flat-panel x-ray detector systems.
Approximate bandpass and frequency response models of the difference of Gaussian filter
NASA Astrophysics Data System (ADS)
Birch, Philip; Mitra, Bhargav; Bangalore, Nagachetan M.; Rehman, Saad; Young, Rupert; Chatwin, Chris
2010-12-01
The Difference of Gaussian (DOG) filter is widely used in optics and image processing as, among other things, an edge detection and correlation filter. It has important biological applications and appears to be part of the mammalian vision system. In this paper we analyse the filter and provide details of the full width half maximum, bandwidth and frequency response in order to aid the full characterisation of its performance.
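For the record above, the relevant quantities follow directly from the closed form of the DOG frequency response; the sketch below (with an assumed standard-deviation ratio) evaluates the response and reads off the full width at half maximum numerically:

    import numpy as np

    def dog_frequency_response(f, sigma1, sigma2):
        """|H(f)| of a DOG filter h(x) = G(x; sigma1) - G(x; sigma2) with sigma2 > sigma1.
        The Fourier transform of a unit-area Gaussian of width sigma is exp(-2*pi^2*sigma^2*f^2)."""
        return np.exp(-2*np.pi**2*sigma1**2*f**2) - np.exp(-2*np.pi**2*sigma2**2*f**2)

    # Example: locate the peak response and the full width at half maximum numerically.
    f = np.linspace(0.0, 2.0, 20001)
    H = dog_frequency_response(f, sigma1=1.0, sigma2=1.6)   # 1.6 ratio is an assumed example value
    f_peak = f[np.argmax(H)]
    band = f[H >= H.max() / 2.0]
    fwhm = band.max() - band.min()
    print(f"peak at {f_peak:.3f} cycles/unit, FWHM = {fwhm:.3f}")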
NASA Astrophysics Data System (ADS)
Oladyshkin, S.; Schroeder, P.; Class, H.; Nowak, W.
2013-12-01
Predicting underground carbon dioxide (CO2) storage represents a challenging problem in a complex dynamic system. Due to a lack of information about reservoir parameters, quantification of uncertainties may become the dominant question in risk assessment. Calibration on past observed data from pilot-scale test injection can improve the predictive power of the involved geological, flow, and transport models. The current work performs history matching to pressure time series from a pilot storage site operated in Europe, maintained during an injection period. Simulation of compressible two-phase flow and transport (CO2/brine) in the considered site is computationally very demanding, requiring about 12 days of CPU time for an individual model run. For that reason, brute-force approaches for calibration are not feasible. In the current work, we explore an advanced framework for history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. The aPC [1] offers a drastic but accurate stochastic model reduction. Unlike many previous chaos expansions, it can handle arbitrary probability distribution shapes of uncertain parameters, and can therefore handle directly the statistical information appearing during the matching procedure. In our study we keep the spatial heterogeneity suggested by geophysical methods, but consider uncertainty in the magnitude of permeability through zone-wise permeability multipliers. We capture the dependence of model output on these multipliers with the expansion-based reduced model. Next, we combined the aPC with Bootstrap filtering (a brute-force but fully accurate Bayesian updating mechanism) in order to perform the matching. In comparison to (Ensemble) Kalman Filters, our method accounts for higher-order statistical moments and for the non-linearity of both the forward model and the inversion, and thus allows a rigorous quantification of calibrated model uncertainty. The usually high computational costs of accurate filtering thus become manageable within our suggested aPC-based calibration framework. However, the power of aPC-based Bayesian updating strongly depends on the accuracy of prior information. In the current study, the prior assumptions on the model parameters were not satisfactory and strongly underestimated the reservoir pressure. Thus, the aPC-based response surface used in Bootstrap filtering is fitted to a distant and poorly chosen region within the parameter space. Thanks to the iterative procedure suggested in [2], we overcome this drawback at small computational cost. The iteration successively improves the accuracy of the expansion around the current estimate of the posterior distribution. The final result is a calibrated model of the site that can be used for further studies, with an excellent match to the data. References [1] Oladyshkin S. and Nowak W. Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. Reliability Engineering and System Safety, 106:179-190, 2012. [2] Oladyshkin S., Class H., Nowak W. Bayesian updating via Bootstrap filtering combined with data-driven polynomial chaos expansions: methodology and application to history matching for carbon dioxide storage in geological formations. Computational Geosciences, 17 (4), 671-687, 2013.
Zooming in on vibronic structure by lowest-value projection reconstructed 4D coherent spectroscopy
NASA Astrophysics Data System (ADS)
Harel, Elad
2018-05-01
A fundamental goal of chemical physics is an understanding of microscopic interactions in liquids at and away from equilibrium. In principle, this microscopic information is accessible by high-order and high-dimensionality nonlinear optical measurements. Unfortunately, the time required to execute such experiments increases exponentially with the dimensionality, while the signal decreases exponentially with the order of the nonlinearity. Recently, we demonstrated a non-uniform acquisition method based on radial sampling of the time-domain signal [W. O. Hutson et al., J. Phys. Chem. Lett. 9, 1034 (2018)]. The four-dimensional spectrum was then reconstructed by filtered back-projection using an inverse Radon transform. Here, we demonstrate an alternative reconstruction method based on the statistical analysis of different back-projected spectra which results in a dramatic increase in sensitivity and at least a 100-fold increase in dynamic range compared to conventional uniform sampling and Fourier reconstruction. These results demonstrate that alternative sampling and reconstruction methods enable applications of increasingly high-order and high-dimensionality methods toward deeper insights into the vibronic structure of liquids.
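For context on the filtered back-projection step mentioned above, a generic two-dimensional sketch using the inverse Radon transform is shown below (illustrative of conventional FBP only; it does not reproduce the radial sampling, the four-dimensional geometry, or the lowest-value projection statistics of the paper):

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon

    # Forward project a test image at a set of angles, then reconstruct by
    # filtered back-projection (ramp-filtered inverse Radon transform).
    image = shepp_logan_phantom()
    theta = np.linspace(0.0, 180.0, 60, endpoint=False)   # 60 projection angles
    sinogram = radon(image, theta=theta)
    reconstruction = iradon(sinogram, theta=theta)        # default ramp filter
    rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
    print(f"RMS reconstruction error: {rms_error:.4f}")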
NASA Astrophysics Data System (ADS)
Gong, W.; Meyer, F. J.
2013-12-01
It is well known that spatio-temporal tropospheric phase signatures complicate the detection and interpretation of small-magnitude deformation signals and previously unstudied motion fields. Several advanced time-series InSAR techniques were developed in the last decade that make assumptions about the stochastic properties of the signal components in interferometric phases to reduce atmospheric delay effects on surface deformation estimates. However, their need for large datasets to successfully separate the different phase contributions limits their performance if data is scarce and irregularly sampled. SAR data coverage is limited for many areas affected by geophysical deformation, either due to their low priority in mission programming, unfavorable ground coverage conditions, or turbulent seasonal weather effects. In this paper, we present new adaptive atmospheric phase filtering algorithms that are specifically designed to reconstruct surface deformation signals from atmosphere-affected and irregularly sampled InSAR time series. The filters take advantage of auxiliary atmospheric delay information that is extracted from various sources, e.g. atmospheric weather models. They are embedded into a model-free Persistent Scatterer Interferometry (PSI) approach that was selected to accommodate non-linear deformation patterns that are often observed near volcanoes and earthquake zones. Two types of adaptive phase filters were developed that operate in the time dimension and separate atmosphere from deformation based on their different temporal correlation properties. Both filter types use the fact that atmospheric models can reliably predict the spatial statistics and signal power of atmospheric phase delay fields in order to automatically optimize the filter's shape parameters. In essence, both filter types attempt to maximize the linear correlation between the a-priori and the extracted atmospheric phase information. Topography-related phase components, orbit errors and the master atmospheric delays are first removed in a pre-processing step before the atmospheric filters are applied. The first adaptive filter type uses a filter kernel of Gaussian shape and adaptively adjusts the width (defined in days) of this filter until the correlation of extracted and modeled atmospheric signal power is maximized. If atmospheric properties vary along the time series, this approach leads to filter settings that are adapted to best reproduce atmospheric conditions at a certain observation epoch. Despite the superior performance of this first filter design, its Gaussian shape imposes non-physical relative weights onto acquisitions that ignore the known atmospheric noise in the data. Hence, in our second approach we use atmospheric a-priori information to adaptively define the full shape of the atmospheric filter. For this process, we use a so-called normalized convolution (NC) approach that is often used in image reconstruction. Several NC designs will be presented in this paper and studied for relative performance. A cross-validation of all developed algorithms was done using both synthetic and real data. This validation showed that the designed filters outperform conventional filtering methods and are particularly useful for regions with limited data coverage or without a prior model of the deformation field.
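A minimal sketch of the first (Gaussian-kernel) filter idea, under assumed variable names and a simple grid search over the kernel width in days (an illustration of the adaptive principle, not the authors' implementation):

    import numpy as np

    def gaussian_lowpass(t_days, values, width_days):
        """Low-pass a (possibly irregularly sampled) time series with a normalized Gaussian kernel."""
        out = np.empty_like(values, dtype=float)
        for i, t0 in enumerate(t_days):
            w = np.exp(-0.5 * ((t_days - t0) / width_days) ** 2)
            out[i] = np.sum(w * values) / np.sum(w)
        return out

    def adaptive_width(t_days, phase, modeled_atmo, widths=np.arange(5, 200, 5)):
        """Pick the kernel width that maximizes correlation between the extracted
        high-pass residual (treated as atmosphere) and the modeled atmospheric delay."""
        best_w, best_rho = float(widths[0]), -np.inf
        for w in widths:
            deformation = gaussian_lowpass(t_days, phase, w)   # temporally correlated part
            atmo = phase - deformation                         # temporally uncorrelated part
            rho = np.corrcoef(atmo, modeled_atmo)[0, 1]
            if rho > best_rho:
                best_w, best_rho = float(w), rho
        return best_w, best_rho

The grid search could be replaced by any optimizer; the essential point is that the kernel width is chosen by agreement with the modeled atmospheric delay rather than fixed a priori.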
NASA Astrophysics Data System (ADS)
Kleinherenbrink, Marcel; Riva, Riccardo; Sun, Yu
2016-11-01
In this study, for the first time, an attempt is made to close the sea level budget on a sub-basin scale in terms of trend and amplitude of the annual cycle. We also compare the residual time series after removing the trend, the semiannual and the annual signals. To obtain errors for altimetry and Argo, full variance-covariance matrices are computed using correlation functions and their errors are fully propagated. For altimetry, we apply a geographically dependent intermission bias [Ablain et al.(2015)], which leads to differences in trends up to 0.8 mm yr-1. Since Argo float measurements are non-homogeneously spaced, steric sea levels are first objectively interpolated onto a grid before averaging. For the Gravity Recovery And Climate Experiment (GRACE), gravity fields full variance-covariance matrices are used to propagate errors and statistically filter the gravity fields. We use four different filtered gravity field solutions and determine which post-processing strategy is best for budget closure. As a reference, the standard 96 degree Dense Decorrelation Kernel-5 (DDK5)-filtered Center for Space Research (CSR) solution is used to compute the mass component (MC). A comparison is made with two anisotropic Wiener-filtered CSR solutions up to degree and order 60 and 96 and a Wiener-filtered 90 degree ITSG solution. Budgets are computed for 10 polygons in the North Atlantic Ocean, defined in a way that the error on the trend of the MC plus steric sea level remains within 1 mm yr-1. Using the anisotropic Wiener filter on CSR gravity fields expanded up to spherical harmonic degree 96, it is possible to close the sea level budget in 9 of 10 sub-basins in terms of trend. Wiener-filtered Institute of Theoretical geodesy and Satellite Geodesy (ITSG) and the standard DDK5-filtered CSR solutions also close the trend budget if a glacial isostatic adjustment (GIA) correction error of 10-20 % is applied; however, the performance of the DDK5-filtered solution strongly depends on the orientation of the polygon due to residual striping. In 7 of 10 sub-basins, the budget of the annual cycle is closed, using the DDK5-filtered CSR or the Wiener-filtered ITSG solutions. The Wiener-filtered 60 and 96 degree CSR solutions, in combination with Argo, lack amplitude and suffer from what appears to be hydrological leakage in the Amazon and Sahel regions. After reducing the trend, the semiannual and the annual signals, 24-53 % of the residual variance in altimetry-derived sea level time series is explained by the combination of Argo steric sea levels and the Wiener-filtered ITSG MC. Based on this, we believe that the best overall solution for the MC of the sub-basin-scale budgets is the Wiener-filtered ITSG gravity fields. The interannual variability is primarily a steric signal in the North Atlantic Ocean, so for this the choice of filter and gravity field solution is not really significant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borghesi, Giulio; Bellan, Josette, E-mail: josette.bellan@jpl.nasa.gov; Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California 91109-8099
2015-03-15
A Direct Numerical Simulation (DNS) database was created representing mixing of species under high-pressure conditions. The configuration considered is that of a temporally evolving mixing layer. The database was examined and analyzed for the purpose of modeling some of the unclosed terms that appear in the Large Eddy Simulation (LES) equations. Several metrics are used to understand the LES modeling requirements. First, a statistical analysis of the DNS-database large-scale flow structures was performed to provide a metric for probing the accuracy of the proposed LES models, as the flow fields obtained from accurate LESs should contain structures of morphology statistically similar to those observed in the filtered-and-coarsened DNS (FC-DNS) fields. To characterize the morphology of the large-scale structures, the Minkowski functionals of the iso-surfaces were evaluated for two different fields: the second invariant of the rate of deformation tensor and the irreversible entropy production rate. To remove the presence of the small flow scales, both of these fields were computed using the FC-DNS solutions. It was found that the large-scale structures of the irreversible entropy production rate exhibit higher morphological complexity than those of the second invariant of the rate of deformation tensor, indicating that the burden of modeling will be on recovering the thermodynamic fields. Second, to evaluate the physical effects which must be modeled at the subfilter scale, an a priori analysis was conducted. This a priori analysis, conducted in the coarse-grid LES regime, revealed that standard closures for the filtered pressure, the filtered heat flux, and the filtered species mass fluxes, in which a filtered function of a variable is equal to the function of the filtered variable, may no longer be valid for the high-pressure flows considered in this study. The terms requiring modeling are the filtered pressure, the filtered heat flux, the filtered pressure work, and the filtered species mass fluxes. Improved models were developed based on a scale-similarity approach and were found to perform considerably better than the classical ones. These improved models were also assessed in an a posteriori study. Different combinations of the standard models and the improved ones were tested. At the relatively small Reynolds numbers achievable in DNS and at the relatively small filter widths used here, the standard models for the filtered pressure, the filtered heat flux, and the filtered species fluxes were found to yield accurate results for the morphology of the large-scale structures present in the flow. Analysis of the temporal evolution of several volume-averaged quantities representative of the mixing layer growth, and of the cross-stream variation of homogeneous-plane averages and second-order correlations, as well as of visualizations, indicated that the models performed equivalently for the conditions of the simulations. The expectation is that at the much larger Reynolds numbers and much larger filter widths used in practical applications, the improved models will have much more accurate performance than the standard ones.
NASA Astrophysics Data System (ADS)
Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain
2017-10-01
We present a method to improve the accuracy of velocity measurements for fluid flow or particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need of filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
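The central idea of the record above can be illustrated with a small sketch (assumed variable names, one velocity component, and a simple linear fit; not the authors' code): position noise that is uncorrelated between frames inflates the apparent velocity variance by roughly 2*sigma_x^2/dt^2, so evaluating the statistic for several time steps and extrapolating in 1/dt^2 removes the noise contribution.

    import numpy as np

    def noise_corrected_variance(positions, dt_frames, frame_rate):
        """Estimate the true velocity variance by fitting
        var_measured(dt) = var_true + 2*sigma_x^2 / dt^2
        over several time steps and keeping the intercept.
        positions: 1-D array of particle positions sampled at frame_rate (Hz).
        dt_frames: iterable of frame separations to use, e.g. [1, 2, 3, 4]."""
        dts, variances = [], []
        for n in dt_frames:
            dt = n / frame_rate
            v = (positions[n:] - positions[:-n]) / dt    # finite-difference velocities
            dts.append(dt)
            variances.append(np.var(v))
        x = 1.0 / np.asarray(dts) ** 2
        slope, intercept = np.polyfit(x, variances, 1)   # linear fit in 1/dt^2
        return intercept, slope / 2.0                    # (true variance, position-noise variance)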
Imaging Multi-Order Fabry-Perot Spectrometer (IMOFPS) for spaceborne measurements of CO
NASA Astrophysics Data System (ADS)
Johnson, Brian R.; Kampe, Thomas U.; Cook, William B.; Miecznik, Grzegorz; Novelli, Paul C.; Snell, Hilary E.; Turner-Valle, Jennifer A.
2003-11-01
An instrument concept for an Imaging Multi-Order Fabry-Perot Spectrometer (IMOFPS) has been developed for measuring tropospheric carbon monoxide (CO) from space. The concept is based upon a correlation technique similar in nature to multi-order Fabry-Perot (FP) interferometer or gas filter radiometer techniques, which simultaneously measure atmospheric emission from several infrared vibration-rotation lines of CO. Correlation techniques provide a multiplex advantage for increased throughput, high spectral resolution and selectivity necessary for profiling tropospheric CO. Use of unconventional multilayer interference filter designs leads to improvement in CO spectral line correlation compared with the traditional FP multi-order technique, approaching the theoretical performance of gas filter correlation radiometry. In this implementation, however, the gas cell is replaced with a simple, robust solid interference filter. In addition to measuring CO, the correlation filter technique can be applied to measurements of other important gases such as carbon dioxide, nitrous oxide and methane. Imaging the scene onto a 2-D detector array enables a limited range of spectral sampling owing to the field-angle dependence of the filter transmission function. An innovative anamorphic optical system provides a relatively large instrument field-of-view for imaging along the orthogonal direction across the detector array. An important advantage of the IMOFPS concept is that it is a small, low mass and high spectral resolution spectrometer having no moving parts. A small, correlation spectrometer like IMOFPS would be well suited for global observations of CO2, CO, and CH4 from low Earth or regional observations from Geostationary orbit. A prototype instrument is in development for flight demonstration on an airborne platform with potential applications to atmospheric chemistry, wild fire and biomass burning, and chemical dispersion monitoring.
NASA Astrophysics Data System (ADS)
Jeong, Jeong-Won; Kim, Tae-Seong; Shin, Dae-Chul; Do, Synho; Marmarelis, Vasilis Z.
2004-04-01
Recently it was shown that soft tissue can be differentiated with spectral unmixing and detection methods that utilize multi-band information obtained from a High-Resolution Ultrasonic Transmission Tomography (HUTT) system. In this study, we focus on tissue differentiation using the spectral target detection method based on Constrained Energy Minimization (CEM). We have developed a new tissue differentiation method called "CEM filter bank". Statistical inference on the output of each CEM filter of a filter bank is used to make a decision based on the maximum statistical significance rather than the magnitude of each CEM filter output. We validate this method through 3-D inter/intra-phantom soft tissue classification where target profiles obtained from an arbitrary single slice are used for differentiation in multiple tomographic slices. Also spectral coherence between target and object profiles of an identical tissue at different slices and phantoms is evaluated by conventional cross-correlation analysis. The performance of the proposed classifier is assessed using Receiver Operating Characteristic (ROC) analysis. Finally we apply our method to classify tiny structures inside a beef kidney such as Styrofoam balls (~1mm), chicken tissue (~5mm), and vessel-duct structures.
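For reference, the Constrained Energy Minimization filter underlying the proposed filter bank has a standard closed form; the sketch below uses assumed variable names and omits the statistical-inference step applied to the filter-bank outputs:

    import numpy as np

    def cem_filter(X, d, eps=1e-6):
        """Constrained Energy Minimization: w = R^-1 d / (d^T R^-1 d),
        where R is the sample correlation matrix of the multi-band data.
        X: array of shape (n_bands, n_pixels); d: target spectral signature of shape (n_bands,)."""
        R = X @ X.T / X.shape[1]                 # band-by-band sample correlation matrix
        R += eps * np.eye(R.shape[0])            # small diagonal loading for numerical stability
        Rinv_d = np.linalg.solve(R, d)
        return Rinv_d / (d @ Rinv_d)             # filter weights, with w^T d = 1

    # The detector output for every pixel is then simply w^T x:
    # scores = cem_filter(X, d) @ X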
NASA Astrophysics Data System (ADS)
Chevallier, Frédéric; Broquet, Grégoire; Pierangelo, Clémence; Crisp, David
2017-07-01
The column-average dry air-mole fraction of carbon dioxide in the atmosphere (XCO2) is measured by scattered satellite soundings such as those from the Orbiting Carbon Observatory (OCO-2). We show that global continuous maps of XCO2 (corresponding to level 3 of the satellite data) at daily or coarser temporal resolution can be inferred from these data with a Kalman filter built on a model of persistence. Our application of this approach to 2 years of OCO-2 retrievals indicates that the filter provides better information than a climatology of XCO2 at both daily and monthly scales. Provided that the assigned observation uncertainty statistics are tuned in each grid cell of the XCO2 maps with an objective method (based on consistency diagnostics), the errors predicted by the filter at daily and monthly scales represent the true error statistics reasonably well, except for a bias in the high latitudes of the winter hemisphere and a lack of resolution (i.e., a too small discrimination skill) of the predicted error standard deviations. Due to the sparse satellite sampling, the broad-scale patterns of XCO2 described by the filter seem to lag behind the real signals by a few weeks. Finally, the filter offers interesting insights into the quality of the retrievals, both in terms of random and systematic errors.
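A per-grid-cell sketch of a Kalman filter built on a persistence model, as described above (illustrative only; the process and observation variances and the handling of multiple soundings per cell and per day are assumptions):

    import numpy as np

    def persistence_kalman(obs, obs_var, x0, p0, q):
        """Scalar Kalman filter with a persistence (random-walk) state model,
        x_t = x_{t-1} + w_t, applied to one grid cell of an XCO2 map.
        obs: daily observations (np.nan where the cell was not sampled).
        obs_var: observation error variances, same shape as obs.
        q: process-noise variance describing how fast XCO2 may drift per day."""
        x, p = x0, p0
        analysis, spread = [], []
        for y, r in zip(obs, obs_var):
            p = p + q                          # forecast: persistence keeps x, uncertainty grows
            if not np.isnan(y):                # update only where a retrieval exists
                k = p / (p + r)
                x = x + k * (y - x)
                p = (1.0 - k) * p
            analysis.append(x)
            spread.append(np.sqrt(p))
        return np.array(analysis), np.array(spread)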
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreno-Bote, Ruben; Parga, Nestor; Center for Theoretical Neuroscience, Center for Neurobiology and Behavior, Columbia University, New York 10032-2695
2006-01-20
An analytical description of the response properties of simple but realistic neuron models in the presence of noise is still lacking. We determine completely up to the second order the firing statistics of a single and a pair of leaky integrate-and-fire neurons receiving some common slowly filtered white noise. In particular, the auto- and cross-correlation functions of the output spike trains of pairs of cells are obtained from an improvement of the adiabatic approximation introduced previously by Moreno-Bote and Parga [Phys. Rev. Lett. 92, 028102 (2004)]. These two functions define the firing variability and firing synchronization between neurons, and are of much importance for understanding neuron communication.
Studies of EGRET sources with a novel image restoration technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tajima, Hiroyasu; Cohen-Tanugi, Johann; Kamae, Tuneyoshi
2007-07-12
We have developed an image restoration technique based on the Richardson-Lucy algorithm optimized for GLAST-LAT image analysis. Our algorithm is original since it utilizes the PSF (point spread function) that is calculated for each event. This is critical for EGRET and GLAST-LAT image analysis since the PSF depends on the energy and angle of incident gamma-rays and varies by more than one order of magnitude. EGRET and GLAST-LAT image analysis also faces Poisson noise due to low photon statistics. Our technique incorporates wavelet filtering to minimize noise effects. We present studies of EGRET sources using this novel image restoration technique for possible identification of extended gamma-ray sources.
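For context, the core Richardson-Lucy iteration (matched to Poisson statistics) is compact. The sketch below assumes a single shift-invariant PSF, whereas the technique described above applies a per-event PSF and adds wavelet filtering of the noise:

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
        """Richardson-Lucy deconvolution for a Poisson-noise image with a fixed PSF."""
        estimate = np.full_like(image, image.mean(), dtype=float)
        psf_mirror = psf[::-1, ::-1]
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / (blurred + eps)          # data / model, the multiplicative correction
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate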
Clustering analysis for muon tomography data elaboration in the Muon Portal project
NASA Astrophysics Data System (ADS)
Bandieramonte, M.; Antonuccio-Delogu, V.; Becciani, U.; Costa, A.; La Rocca, P.; Massimino, P.; Petta, C.; Pistagna, C.; Riggi, F.; Riggi, S.; Sciacca, E.; Vitello, F.
2015-05-01
Clustering analysis is a multivariate data analysis technique that gathers statistical data units into groups, in order to minimize the logical distance within each group and to maximize the distance between different groups. In these proceedings, the authors present a novel approach to muon tomography data analysis based on clustering algorithms. As a case study we present the Muon Portal project, which aims to build and operate a dedicated particle detector for the inspection of harbor containers to hinder the smuggling of nuclear materials. Clustering techniques, working directly on scattering points, help to detect the presence of suspicious items inside the container, acting, as will be shown, as a filter for a preliminary analysis of the data.
In-network processing of joins in wireless sensor networks.
Kang, Hyunchul
2013-03-11
The join or correlated filtering of sensor readings is one of the fundamental query operations in wireless sensor networks (WSNs). Although the join in centralized or distributed databases is a well-researched problem, join processing in WSNs has quite different characteristics and is much more difficult to perform due to the lack of statistics on sensor readings and the resource constraints of sensor nodes. Since data transmission is orders of magnitude more costly than processing at a sensor node, in-network processing of joins is essential. In this paper, the state-of-the-art techniques for join implementation in WSNs are surveyed. The requirements and challenges, join types, and components of join implementation are described. The open issues for further research are identified.
Digital Filters for Digital Phase-locked Loops
NASA Technical Reports Server (NTRS)
Simon, M.; Mileant, A.
1985-01-01
An s/z hybrid model for a general phase locked loop is proposed. The impact of the loop filter on the stability, gain margin, noise equivalent bandwidth, steady state error and time response is investigated. A specific digital filter is selected which maximizes the overall gain margin of the loop. This filter can have any desired number of integrators. Three integrators are sufficient in order to track a phase jerk with zero steady state error at loop update instants. This filter has one zero near z = 1.0 for each integrator. The total number of poles of the filter is equal to the number of integrators plus two.
Compact Focal Plane Assembly for Planetary Science
NASA Technical Reports Server (NTRS)
Brown, Ari; Aslam, Shahid; Huang, Wei-Chung; Steptoe-Jackson, Rosalind
2013-01-01
A compact radiometric focal plane assembly (FPA) has been designed in which the filters are individually co-registered over compact thermopile pixels. This allows for construction of an ultralightweight and compact radiometric instrument. The FPA also incorporates micromachined baffles in order to mitigate crosstalk and low-pass filter windows in order to eliminate high-frequency radiation. Compact metal mesh bandpass filters were fabricated for the far infrared (FIR) spectral range (17 to 100 microns), a game-changing technology for future planetary FIR instruments. This fabrication approach allows the dimensions of individual metal mesh filters to be tailored with better than 10- micron precision. In contrast, conventional compact filters employed in recent missions and in near-term instruments consist of large filter sheets manually cut into much smaller pieces, which is a much less precise and much more labor-intensive, expensive, and difficult process. Filter performance was validated by integrating them with thermopile arrays. Demonstration of the FPA will require the integration of two technologies. The first technology is compact, lightweight, robust against cryogenic thermal cycling, and radiation-hard micromachined bandpass filters. They consist of a copper mesh supported on a deep reactive ion-etched silicon frame. This design architecture is advantageous when constructing a lightweight and compact instrument because (1) the frame acts like a jig and facilitates filter integration with the FPA, (2) the frame can be designed so as to maximize the FPA field of view, (3) the frame can be simultaneously used as a baffle for mitigating crosstalk, and (4) micron-scale alignment features can be patterned so as to permit high-precision filter stacking and, consequently, increase the filter bandwidth and sharpen the out-of-band rolloff. The second technology consists of leveraging, from another project, compact and lightweight Bi0.87Sb0.13/Sb arrayed thermopiles. These detectors consist of 30-layer thermopiles deposited in series upon a silicon nitride membrane. At 300 K, the thermopile arrays are highly linear over many orders of magnitude of incident IR power, and have a reported specific detectivity that exceeds the requirements imposed on future mission concepts. The bandpass filter array board is integrated with a thermopile array board by mounting both boards on a machined aluminum jig.
A decoupled approach to filter design for stochastic systems
NASA Astrophysics Data System (ADS)
Barbata, A.; Zasadzinski, M.; Ali, H. Souley; Messaoud, H.
2016-08-01
This paper presents a new theorem to guarantee the almost sure exponential stability for a class of stochastic triangular systems by studying only the stability of each diagonal subsystem. This result makes it possible to solve the filtering problem of stochastic systems with multiplicative noises by using the almost sure exponential stability concept. Two kinds of observers are treated: the full-order and reduced-order cases.
Printed Graphene Derivative Circuits as Passive Electrical Filters
Sinar, Dogan
2018-01-01
The objective of this study is to inkjet print resistor-capacitor (RC) low pass electrical filters, using a novel water-based cellulose graphene ink, and compare the voltage-frequency and transient behavior to equivalent circuits constructed from discrete passive components. The synthesized non-toxic graphene-carboxymethyl cellulose (G-CMC) ink is deposited on mechanically flexible polyimide substrates using a customized printer that dispenses functionalized aqueous solutions. The design of the printed first-order and second-order low-pass RC filters incorporates resistive traces and interdigitated capacitors. Low pass filter characteristics, such as time constant, cut-off frequency and roll-off rate, are determined for comparative analysis. Experiments demonstrate that for low frequency applications (<100 kHz) the printed graphene derivative circuits performed as well as the circuits constructed from discrete resistors and capacitors for both low pass filter and RC integrator applications. The impact of mechanical stress due to bending on the electrical performance of the flexible printed circuits is also investigated. PMID:29473890
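For reference, the first-order low-pass quantities compared in the study above follow from textbook RC relations; the sketch below uses assumed example component values, not the printed circuits' measured ones:

    import numpy as np

    def rc_lowpass_response(f_hz, r_ohm, c_farad):
        """Magnitude response |H(f)| = 1 / sqrt(1 + (2*pi*f*R*C)^2) of a first-order RC low-pass."""
        w = 2 * np.pi * f_hz
        return 1.0 / np.sqrt(1.0 + (w * r_ohm * c_farad) ** 2)

    r, c = 10e3, 1e-9                 # assumed example: 10 kOhm resistive trace, 1 nF interdigitated capacitor
    tau = r * c                       # time constant
    f_c = 1.0 / (2 * np.pi * tau)     # -3 dB cut-off frequency (about 15.9 kHz here)
    print(f"tau = {tau*1e6:.2f} us, f_c = {f_c/1e3:.1f} kHz")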
Spitzer Instrument Pointing Frame (IPF) Kalman Filter Algorithm
NASA Technical Reports Server (NTRS)
Bayard, David S.; Kang, Bryan H.
2004-01-01
This paper discusses the Spitzer Instrument Pointing Frame (IPF) Kalman Filter algorithm. The IPF Kalman filter is a high-order square-root iterated linearized Kalman filter, which is parametrized for calibrating the Spitzer Space Telescope focal plane and aligning the science instrument arrays with respect to the telescope boresight. The most stringent calibration requirement specifies knowledge of certain instrument pointing frames to an accuracy of 0.1 arcseconds, per-axis, 1-sigma relative to the Telescope Pointing Frame. In order to achieve this level of accuracy, the filter carries 37 states to estimate desired parameters while also correcting for expected systematic errors due to: (1) optical distortions, (2) scanning mirror scale-factor and misalignment, (3) frame alignment variations due to thermomechanical distortion, and (4) gyro bias and bias-drift in all axes. The resulting estimated pointing frames and calibration parameters are essential for supporting on-board precision pointing capability, in addition to end-to-end 'pixels on the sky' ground pointing reconstruction efforts.
Wang, Rui; Li, Yanxiao; Sun, Hui; Chen, Zengqiang
2017-11-01
Modern civil aircraft use pressurized, air-ventilated cabins with limited space. In order to monitor multiple contaminants and overcome the hypersensitivity of a single sensor, the paper constructs an output-correction integrated sensor configuration using sensors based on different measurement principles, after comparing it with two other configurations. The proposed configuration works as a node in a distributed wireless sensor network for contaminant monitoring. Corresponding measurement error models for the integrated sensors are also proposed; a Kalman consensus filter is used to estimate states and perform data fusion in order to correct the single-sensor measurement results. The paper develops a sufficient proof of the stability of the Kalman consensus filter in the presence of system and observation noises, and compares the mean estimation and mean consensus errors of the Kalman consensus filter with those of a local Kalman filter. Numerical examples show the effectiveness of the algorithm.
NASA Astrophysics Data System (ADS)
Ruchkinova, O.; Shchuckin, I.
2017-06-01
It is shown that phytofilters are an environmentally friendly solution to the problem of treating surface runoff from urbanized territories and that they meet present-day requirements for surface-drainage treatment systems. Their main limitation is the difficulty of operating them at low temperatures. A technology and an installation were developed that provide year-round treatment of surface runoff and its storage. The optimal composition of the filtering medium (peat, zeolite and sand, in percent by volume) was established experimentally to provide the required hydraulic characteristics. The sorption and ion-exchange capacities of the composite filtering medium of ordered composition were determined under dynamic conditions. The dependences of the outlet concentrations of oil products and heavy metals on temperature were estimated for filtration through this composite medium. The treatment efficiency of the phytofiltration installation was evaluated, and the influence of the plants (embryophytes) on the regeneration process and the capacity of the filtering medium was established; swamp iris, reed mace and reed grass are recommended. A phytofilter design methodology was developed, and the economic effect of using the phytofiltration technology was calculated in comparison with conventional block-modular treatment installations.
NASA Astrophysics Data System (ADS)
Belyaev, B. A.; Serzhantov, A. M.; Bal'va, Ya. F.; Leksikov, An. A.; Galeev, R. G.
2015-05-01
A microstrip bandpass filter of new design based on original resonators with an interdigital structure of conductors has been studied. The proposed filters of third to sixth order are distinguished for their high frequency-selective properties and much smaller size than analogs. It is established that a broad stop band, extending up to a sixfold central bandpass frequency, is determined by low unloaded Q of higher resonance mode and weak coupling of resonators in the pass band. It is shown for the first time that, as the spacing of interdigital stripe conductors decreases, the Q of higher resonance mode monotonically drops, while the Q value for the first operating mode remains high. A prototype fourth-order filter with a central frequency of 0.9 GHz manufactured on a ceramic substrate with dielectric permittivity ɛ = 80 has microstrip topology dimensions of 9.5 × 4.6 × 1 mm3. The electrodynamic 3D model simulations of the filter characteristics agree well with the results of measurements.
75 FR 16732 - Action Affecting Export Privileges; Aqua-Loop Cooling Towers, Co.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-02
... Regulations by facilitating or coordinating the export of approximately 174 rolls of hog hair filter media... about September 28, 2004, Aqua-Loop ordered or financed approximately 174 rolls of hog hair filter media... coordinating the export of approximately 185 rolls of hog hair filter media, part number HHB6O 130 and valued...
A Novel Kalman Filter for Human Motion Tracking With an Inertial-Based Dynamic Inclinometer.
Ligorio, Gabriele; Sabatini, Angelo M
2015-08-01
Design and development of a linear Kalman filter to create an inertial-based inclinometer targeted to dynamic conditions of motion. The estimation of the body attitude (i.e., the inclination with respect to the vertical) was treated as a source separation problem to discriminate the gravity and the body acceleration from the specific force measured by a triaxial accelerometer. The sensor fusion between triaxial gyroscope and triaxial accelerometer data was performed using a linear Kalman filter. Wrist-worn inertial measurement unit data from ten participants were acquired while performing two dynamic tasks: 60-s sequence of seven manual activities and 90 s of walking at natural speed. Stereophotogrammetric data were used as a reference. A statistical analysis was performed to assess the significance of the accuracy improvement over state-of-the-art approaches. The proposed method achieved, on an average, a root mean square attitude error of 3.6° and 1.8° in manual activities and locomotion tasks (respectively). The statistical analysis showed that, when compared to few competing methods, the proposed method improved the attitude estimation accuracy. A novel Kalman filter for inertial-based attitude estimation was presented in this study. A significant accuracy improvement was achieved over state-of-the-art approaches, due to a filter design that better matched the basic optimality assumptions of Kalman filtering. Human motion tracking is the main application field of the proposed method. Accurately discriminating the two components present in the triaxial accelerometer signal is well suited for studying both the rotational and the linear body kinematics.
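A simplified one-axis sketch of the sensor-fusion idea described above (gyroscope-propagated angle corrected by the accelerometer-derived inclination); this is a generic linear Kalman filter with assumed noise settings, not the authors' source-separation formulation:

    import numpy as np

    def tilt_kalman(gyro_rate, accel_angle, dt, q_angle=1e-4, q_bias=1e-6, r_acc=1e-2):
        """One-axis inclination filter. State: [angle, gyro bias].
        gyro_rate: angular rate samples (rad/s); accel_angle: inclination from the accelerometer (rad)."""
        x = np.zeros(2)                              # [angle, bias]
        P = np.eye(2)
        F = np.array([[1.0, -dt], [0.0, 1.0]])
        B = np.array([dt, 0.0])
        H = np.array([[1.0, 0.0]])
        Q = np.diag([q_angle, q_bias])
        out = []
        for w, a in zip(gyro_rate, accel_angle):
            # predict: integrate the bias-corrected gyro rate
            x = F @ x + B * w
            P = F @ P @ F.T + Q
            # update with the accelerometer-derived inclination
            S = H @ P @ H.T + r_acc
            K = (P @ H.T) / S
            x = x + (K * (a - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
            out.append(x[0])
        return np.array(out)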
Lawson, Richard S; White, Duncan; Cade, Sarah C; Hall, David O; Kenny, Bob; Knight, Andy; Livieratos, Lefteris; Nijran, Kuldip
2013-08-01
The Nuclear Medicine Software Quality Group of the Institute of Physics and Engineering in Medicine has conducted an audit to compare the ways in which different manufacturers implement the filters used in single-photon emission computed tomography. The aim of the audit was to identify differences between manufacturers' implementations of the same filter and to find means for converting parameters between systems. Computer-generated data representing projection images of an ideal test object were processed using seven different commercial nuclear medicine systems. Images were reconstructed using filtered back projection and a Butterworth filter with three different cutoff frequencies and three different orders. The audit found large variations between the frequency-response curves of what were ostensibly the same filters on different systems. The differences were greater than could be explained simply by different Butterworth formulae. Measured cutoff frequencies varied between 40 and 180% of that expected. There was also occasional confusion with respect to frequency units. The audit concluded that the practical implementation of filtering, such as the size of the kernel, has a profound effect on the results, producing large differences between systems. Nevertheless, this work shows how users can quantify the frequency response of their own systems so that it will be possible to compare two systems in order to find filter parameters on each that produce equivalent results. These findings will also make it easier for users to replicate filters similar to other published results, even if they are using a different computer system.
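The kind of discrepancy the audit measured can be seen directly from the two Butterworth conventions in common use; which formula a given system applies is exactly what had to be established empirically, so the sketch below is illustrative rather than vendor-specific:

    import numpy as np

    def butterworth_amplitude(f, f_c, order):
        """Butterworth amplitude response as commonly applied in SPECT reconstruction."""
        return 1.0 / np.sqrt(1.0 + (f / f_c) ** (2 * order))

    def butterworth_power_style(f, f_c, order):
        """Alternative convention (no square root) used by some systems; at the nominal
        cut-off it falls to 0.5 rather than 1/sqrt(2), so the effective cut-off differs."""
        return 1.0 / (1.0 + (f / f_c) ** (2 * order))

    f = np.linspace(0.0, 1.0, 501)          # frequency in cycles/pixel (Nyquist = 0.5)
    h1 = butterworth_amplitude(f, f_c=0.25, order=5)
    h2 = butterworth_power_style(f, f_c=0.25, order=5)
    # The frequency at which each curve drops to one half differs between the two conventions:
    print(f[np.argmin(np.abs(h1 - 0.5))], f[np.argmin(np.abs(h2 - 0.5))])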
The influence of sub-grid scale motions on particle collision in homogeneous isotropic turbulence
NASA Astrophysics Data System (ADS)
Xiong, Yan; Li, Jing; Liu, Zhaohui; Zheng, Chuguang
2018-02-01
The absence of sub-grid scale (SGS) motions leads to severe errors in particle pair dynamics, which represents a great challenge to the large eddy simulation of particle-laden turbulent flow. In order to address this issue, data from direct numerical simulation (DNS) of homogeneous isotropic turbulence coupled with Lagrangian particle tracking are used as a benchmark to evaluate the corresponding results of filtered DNS (FDNS). It is found that the filtering process in FDNS will lead to a non-monotonic variation of the particle collision statistics, including radial distribution function, radial relative velocity, and the collision kernel. The peak of radial distribution function shifts to the large-inertia region due to the lack of SGS motions, and the analysis of the local flow-structure characteristic variable at particle position indicates that the most effective interaction scale between particles and fluid eddies is increased in FDNS. Moreover, this scale shifting has an obvious effect on the odd-order moments of the probability density function of radial relative velocity, i.e. the skewness, which exhibits a strong correlation to the variance of radial distribution function in FDNS. As a whole, the radial distribution function, together with radial relative velocity, can compensate the SGS effects for the collision kernel in FDNS when the Stokes number based on the Kolmogorov time scale is greater than 3.0. However, it still leaves considerable errors for St_k < 3.0.
Curve fitting air sample filter decay curves to estimate transuranic content.
Hayes, Robert B; Chiou, Hung Cheng
2004-01-01
By testing industry standard techniques for radon progeny evaluation on air sample filters, a new technique is developed to evaluate transuranic activity on air filters by curve fitting the decay curves. The industry method modified here is simply the use of filter activity measurements at different times to estimate the air concentrations of radon progeny. The primary modification was to not look for specific radon progeny values but rather transuranic activity. By using a method that will provide reasonably conservative estimates of the transuranic activity present on a filter, some credit for the decay curve shape can then be taken. By carrying out rigorous statistical analysis of the curve fits to over 65 samples having no transuranic activity taken over a 10-mo period, an optimization of the fitting function and quality tests for this purpose was attained.
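A sketch of the curve-fitting idea described above (assumed count model, decay constants and parameter names; the published method's conservative bounding and quality tests are not reproduced): the filter count rate is modelled as decaying radon and thoron progeny plus an effectively constant long-lived component attributed to transuranics, and the constant term is estimated by least squares.

    import numpy as np
    from scipy.optimize import curve_fit

    def filter_count_rate(t_min, a_radon, lam_radon, a_thoron, lam_thoron, c_tru):
        """Gross count rate vs. time after sampling: two decaying progeny groups
        plus a constant term attributed to long-lived (transuranic) activity."""
        return (a_radon * np.exp(-lam_radon * t_min)
                + a_thoron * np.exp(-lam_thoron * t_min)
                + c_tru)

    def fit_transuranic(t_min, counts):
        """Fit the decay curve and return the constant component with its 1-sigma uncertainty."""
        p0 = [counts[0], 0.02, 0.1 * counts[0], 0.001, 0.0]   # rough initial guesses (per-minute decay constants)
        popt, pcov = curve_fit(filter_count_rate, t_min, counts, p0=p0, bounds=(0, np.inf))
        return popt[-1], np.sqrt(pcov[-1, -1])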
NASA Technical Reports Server (NTRS)
Whitmore, S. A.
1985-01-01
The dynamics model and data sources used to perform air-data reconstruction are discussed, as well as the Kalman filter. The need for adaptive determination of the noise statistics of the process is indicated. The filter innovations are presented as a means of developing the adaptive criterion, which is based on the true mean and covariance of the filter innovations. A method for the numerical approximation of the mean and covariance of the filter innovations is presented. The algorithm as developed is applied to air-data reconstruction for the space shuttle, and data obtained from the third landing are presented. To verify the performance of the adaptive algorithm, the reconstruction is also performed using a constant covariance Kalman filter. The results of the reconstructions are compared, and the adaptive algorithm exhibits better performance.
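A compact sketch of the innovation-based adaptation idea described above (assumed window length and notation; the flight implementation is more elaborate): the sample covariance of recent innovations is compared with the covariance the filter predicts, and the measurement-noise estimate is adjusted accordingly.

    import numpy as np

    def adapt_measurement_noise(innovations, HPHt_history, window=50):
        """Estimate the measurement-noise covariance R from the last `window` innovations.
        innovations: list/array of innovation vectors nu_k = y_k - H x_k(-).
        HPHt_history: matching list of predicted innovation covariances H P_k(-) H^T.
        Uses E[nu nu^T] ~= H P H^T + R, so R ~= sample covariance - mean(H P H^T)."""
        nu = np.asarray(innovations[-window:])
        sample_cov = nu.T @ nu / nu.shape[0]
        predicted = np.mean(np.asarray(HPHt_history[-window:]), axis=0)
        R_est = sample_cov - predicted
        # crude guard against negative values caused by sampling error
        return np.maximum(R_est, 0.0)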
A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1976-01-01
The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
2017-04-01
complementary fusion: a fourth-order Butterworth filter was used to high-pass the ocelli signal and low-pass the optic flow. The normalized cutoff frequency had to be kept ... information introduced by luminance change. The high-frequency cutoff was added to reject the flickering noise for indoor usage. The filtered signals from the ... function of the low-pass filter is to attenuate high-frequency noise. The final band-pass filter transfer function is in Eq. 2.
Angle only tracking with particle flow filters
NASA Astrophysics Data System (ADS)
Daum, Fred; Huang, Jim
2011-09-01
We show the results of numerical experiments for tracking ballistic missiles using only angle measurements. We compare the performance of an extended Kalman filter with a new nonlinear filter using particle flow to compute Bayes' rule. For certain difficult geometries, the particle flow filter is an order of magnitude more accurate than the EKF. Angle only tracking is of interest in several different sensors; for example, passive optics and radars in which range and Doppler data are spoiled by jamming.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berland, Kristian; Song, Xin; Carvalho, Patricia A.
Energy filtering has been suggested by many authors as a means to improve thermoelectric properties. The idea is to filter away low-energy charge carriers in order to increase the Seebeck coefficient without compromising electronic conductivity. This concept was investigated in the present paper for a specific material (ZnSb) by a combination of first-principles atomic-scale calculations, Boltzmann transport theory, and experimental studies of the same system. The potential of filtering in this material was first quantified, and it was found, as an example, that the power factor could be enhanced by an order of magnitude when the filter barrier height was 0.5 eV. Measured values of the Hall carrier concentration in bulk ZnSb were then used to calibrate the transport calculations, and nanostructured ZnSb with average grain size around 70 nm was processed to achieve filtering as suggested previously in the literature. Various scattering mechanisms were employed in the transport calculations and compared with the measured transport properties in nanostructured ZnSb as a function of temperature. Reasonable correspondence between theory and experiment could be achieved when a combination of constant lifetime scattering and energy filtering with a 0.25 eV barrier was employed. However, the difference between bulk and nanostructured samples was not sufficient to justify the introduction of an energy filtering mechanism. The reasons for this and possibilities to achieve filtering were discussed in the paper.
Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C
2011-01-01
Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
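A minimal numerical sketch of training a linear decoding filter with a minimum error entropy (MEE) criterion, using a Gaussian-kernel (Parzen) estimate of the error information potential and comparing it with the analytic least-squares (Wiener) solution; the synthetic data, kernel-width rule, step size, and batch size are assumptions, and the FPGA mapping discussed in the abstract is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy multi-input decoding problem standing in for neural spike counts (assumed data)
N, p = 500, 8
X = rng.poisson(3.0, size=(N, p)).astype(float)
w_true = rng.normal(size=p)
d = X @ w_true + rng.standard_t(df=3, size=N)             # heavy-tailed (non-Gaussian) errors

def mee_train(X, d, mu=0.2, epochs=500, batch=64):
    """Stochastic gradient ascent on the quadratic information potential of the errors."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        idx = rng.choice(len(d), size=batch, replace=False)
        Xb, db = X[idx], d[idx]
        e = db - Xb @ w
        sigma = max(1.06 * e.std() * batch ** (-0.2), 1e-3)  # Silverman-like kernel width (assumed rule)
        de = e[:, None] - e[None, :]                         # pairwise error differences
        G = np.exp(-de ** 2 / (2 * sigma ** 2))              # Parzen (Gaussian) kernel values
        dX = Xb[:, None, :] - Xb[None, :, :]                 # pairwise input differences
        grad = ((G * de)[:, :, None] * dX).sum(axis=(0, 1))  # gradient of the information potential
        w += mu * grad / (batch ** 2 * sigma ** 2)
    return w

w_mee = mee_train(X, d)
w_mse = np.linalg.lstsq(X, d, rcond=None)[0]                 # analytic (Wiener/MSE) solution
print("mean weight error, MEE:", np.round(np.abs(w_mee - w_true).mean(), 3))
print("mean weight error, MSE:", np.round(np.abs(w_mse - w_true).mean(), 3))
```

The double pairwise loops over errors and inputs are exactly the independent computational blocks the abstract refers to: each kernel term can be evaluated in parallel, which is what makes the cost function attractive for an FPGA implementation despite its quadratic complexity.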
Linear Phase Sharp Transition BPF to Detect Noninvasive Maternal and Fetal Heart Rate.
Marchon, Niyan; Naik, Gourish; Pai, K R
2018-01-01
Fetal heart rate (FHR) can be monitored using either direct fetal scalp electrode recording (invasive) or an indirect noninvasive technique. Weeks before delivery, the invasive method poses a risk to the fetus, while the latter (noninvasive) approach can still provide accurate fetal ECG (FECG) information that helps assess fetal well-being. Our technique employs a variable-order linear phase sharp transition (LPST) FIR band-pass filter, which shows improved stopband attenuation at higher filter orders. The fetal frequency fiduciary edges form the band edges of the filter, characterized by varying amounts of overlap with the maternal ECG (MECG) spectrum. The band with the minimum maternal spectrum overlap was found to be optimal, with no power line interference and the maximum number of fetal heart beats detected. The improved filtering is reflected in the enhanced performance of the fetal QRS detector (FQRS). The fetal heart rate obtained using our algorithm also improved and is in close agreement with the true reference (i.e., invasive fetal scalp ECG). The performance parameters of the FQRS detector, such as sensitivity (Se), positive predictive value (PPV), and accuracy (F1), were found to improve even at lower filter orders. The same technique was extended to a maternal QRS detector (MQRS) and found to yield satisfactory maternal heart rate (MHR) results.
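A small sketch of the general idea of a variable-order, linear-phase FIR band-pass for fetal QRS enhancement, using a windowed-sinc design from scipy as a stand-in for the LPST design; the sampling rate and band edges are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.signal import firwin, freqz

fs = 1000.0                      # assumed abdominal ECG sampling rate (Hz)
band = (18.0, 45.0)              # assumed fetal QRS band edges (Hz)

def fetal_bpf(numtaps):
    """Linear-phase FIR band-pass; stopband behaviour improves with filter order."""
    return firwin(numtaps, band, pass_zero=False, fs=fs, window="hamming")

for numtaps in (101, 401):       # compare a low and a high filter order
    h = fetal_bpf(numtaps)
    w, H = freqz(h, worN=4096, fs=fs)
    stop = 20 * np.log10(np.abs(H[w < 5.0]).max() + 1e-12)
    print(f"order {numtaps - 1}: worst gain below 5 Hz = {stop:.1f} dB")

# applying the filter to an abdominal record x would be: y = scipy.signal.lfilter(h, 1.0, x)
```

Because the filter is symmetric FIR, the group delay is a constant (numtaps - 1)/2 samples, so the fetal and maternal QRS timing used by the detectors is preserved after a simple shift.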
On-line implementation of nonlinear parameter estimation for the Space Shuttle main engine
NASA Technical Reports Server (NTRS)
Buckland, Julia H.; Musgrave, Jeffrey L.; Walker, Bruce K.
1992-01-01
We investigate the performance of a nonlinear estimation scheme applied to the estimation of several parameters in a performance model of the Space Shuttle Main Engine. The nonlinear estimator is based upon the extended Kalman filter which has been augmented to provide estimates of several key performance variables. The estimated parameters are directly related to the efficiency of both the low pressure and high pressure fuel turbopumps. Decreases in the parameter estimates may be interpreted as degradations in turbine and/or pump efficiencies which can be useful measures for an online health monitoring algorithm. This paper extends previous work which has focused on off-line parameter estimation by investigating the filter's on-line potential from a computational standpoint. In addition, we examine the robustness of the algorithm to unmodeled dynamics. The filter uses a reduced-order model of the engine that includes only fuel-side dynamics. The on-line results produced during this study are comparable to off-line results generated previously. The results show that the parameter estimates are sensitive to dynamics not included in the filter model. Off-line results using an extended Kalman filter with a full order engine model to address the robustness problems of the reduced-order model are also presented.
A general transfer-function approach to noise filtering in open-loop quantum control
NASA Astrophysics Data System (ADS)
Viola, Lorenza
2015-03-01
Hamiltonian engineering via unitary open-loop quantum control provides a versatile and experimentally validated framework for manipulating a broad class of non-Markovian open quantum systems of interest, with applications ranging from dynamical decoupling and dynamically corrected quantum gates, to noise spectroscopy and quantum simulation. In this context, transfer-function techniques directly motivated by control engineering have proved invaluable for obtaining a transparent picture of the controlled dynamics in the frequency domain and for quantitatively analyzing performance. In this talk, I will show how to identify a computationally tractable set of "fundamental filter functions," out of which arbitrary filter functions may be assembled up to arbitrarily high order in principle. Besides avoiding the infinite recursive hierarchy of filter functions that arises in general control scenarios, this fundamental set suffices to characterize the error suppression capabilities of the control protocol in both the time and frequency domain. I will show, in particular, how the resulting notion of "filtering order" reveals conceptually distinct, albeit complementary, features of the controlled dynamics as compared to the "cancellation order," traditionally defined in the Magnus sense. Implications for current quantum control experiments will be discussed. Work supported by the U.S. Army Research Office under Contract No. W911NF-14-1-0682.
Cassista, Julianne; Payne-Gagnon, Julie; Martel, Brigitte; Gagnon, Marie-Pierre
2014-01-01
The manipulation of glass ampoules involves risk of particle contamination of parenteral medication, and the use of filter needles has often been recommended in order to reduce the number of particles in these solutions. This study aims to develop a theory-based intervention to increase nurse intention to use filter needles according to clinical guideline recommendations produced by a large university medical centre in Quebec (Canada). Using the Intervention Mapping framework, we first identified the psychosocial determinants of nurse intention to use filter needles according to these recommendations. Second, we developed and implemented an intervention targeting nurses from five care units in order to increase their intention to adhere to recommendations on the use of filter needles. We also assessed nurse satisfaction with the intervention. In total, 270 nurses received the intervention and 169 completed the posttest questionnaire. The two determinants of intention, that is, attitude and perceived behavioral control, were significantly higher after the intervention, but only perceived behavioral control remained a predictor of intention. In general, nurses were highly satisfied with the intervention. This study provides support for the use of Intervention Mapping to develop, implement, and evaluate theory-based interventions in order to improve healthcare professional adherence to clinical recommendations.
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L. H.; Hu, C. L.; Yan, W. S.; Pennec, Yan; Hu, N.
2018-03-01
For the elastic SV (transverse) waves in metals, a high-quality narrow passband filter that consists of aligned parallel thin plates with small gaps is designed. In order to obtain good performance, the thin plates should be made of materials with a smaller mass density and Young's modulus, such as polymethylmethacrylate (PMMA), than the base material in which the elastic SV waves propagate. Both the theoretical model and the full numerical simulation show that the transmission spectrum of the designed filter demonstrates several peaks with flawless transmission within the 0-20 kHz frequency range. The peaks can be readily tuned by manipulating the geometrical parameters of the plates. Therefore, the current design works well for both low and high frequencies with a controllable size. Even for low frequencies on the order of kilohertz, the size of this filter can still be limited to the order of centimeters, which significantly benefits real applications. The investigation also finds that the same filter is valid when using different metals, and the reason behind this is explained theoretically. Additionally, the effect of bonding conditions of interfaces between thin plates and the base material is investigated using a spring model.
On the Relation Between Facular Bright Points and the Magnetic Field
NASA Astrophysics Data System (ADS)
Berger, Thomas; Shine, Richard; Tarbell, Theodore; Title, Alan; Scharmer, Goran
1994-12-01
Multi-spectral images of magnetic structures in the solar photosphere are presented. The images were obtained in the summers of 1993 and 1994 at the Swedish Solar Telescope on La Palma using the tunable birefringent Solar Optical Universal Polarimeter (SOUP filter), a 10 Angstroms wide interference filter tuned to 4304 Angstroms in the band head of the CH radical (the Fraunhofer G-band), and a 3 Angstroms wide interference filter centered on the Ca II K absorption line. Three large-format CCD cameras with shuttered exposures on the order of 10 msec and frame rates of up to 7 frames per second were used to create time series of both quiet and active region evolution. The full field of view is 60 × 80 arcseconds (44 × 58 Mm). With the best seeing, structures as small as 0.22 arcseconds (160 km) in diameter are clearly resolved. Post-processing of the images results in rigid coalignment of the image sets to an accuracy comparable to the spatial resolution. Facular bright points with mean diameters of 0.35 arcseconds (250 km) and elongated filaments with lengths on the order of arcseconds (10^3 km) are imaged with contrast values of up to 60% by the G-band filter. Overlay of these images on cotemporal Fe I 6302 Angstroms magnetograms and Ca II K images reveals that the bright points occur, without exception, on sites of magnetic flux through the photosphere. However, instances of concentrated and diffuse magnetic flux and Ca II K emission without associated bright points are common, leading to the conclusion that the presence of magnetic flux is a necessary but not sufficient condition for the occurrence of resolvable facular bright points. Comparison of the G-band and continuum images shows a complex relation between structures in the two bandwidths: bright points exceeding 350 km in extent correspond to distinct bright structures in the continuum; smaller bright points show no clear relation to continuum structures. Size and contrast statistical cross-comparisons compiled from measurements of over two thousand bright point structures are presented. Preliminary analysis of the time evolution of bright points in the G-band reveals that the dominant mode of bright point evolution is fission of larger structures into smaller ones and fusion of small structures into conglomerate structures. The characteristic time scale for the fission/fusion process is on the order of minutes.
Properties of the Residual Stress of the Temporally Filtered Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Pruett, C. D.; Gatski, T. B.; Grosch, C. E.; Thacker, W. D.
2002-01-01
The development of a unifying framework among direct numerical simulations, large-eddy simulations, and statistically averaged formulations of the Navier-Stokes equations is of current interest. Toward that goal, the properties of the residual (subgrid-scale) stress of the temporally filtered Navier-Stokes equations are carefully examined. Causal time-domain filters, parameterized by a temporal filter width Δ with 0 < Δ < ∞, are considered. For several reasons, the differential forms of such filters are preferred to their corresponding integral forms; among these, storage requirements for differential forms are typically much less than for integral forms and, for some filters, are independent of Δ. The behavior of the residual stress in the limits of both vanishing and infinite filter widths is examined. It is shown analytically that, in the limit Δ → 0, the residual stress vanishes, in which case the Navier-Stokes equations are recovered from the temporally filtered equations. Alternately, in the limit Δ → ∞, the residual stress is equivalent to the long-time averaged stress, and the Reynolds-averaged Navier-Stokes equations are recovered from the temporally filtered equations. The predicted behavior at the asymptotic limits of filter width is further validated by numerical simulations of the temporally filtered forced, viscous Burgers equation. Finally, finite filter widths are also considered, and a priori analyses of temporal similarity and temporal approximate deconvolution models of the residual stress are conducted.
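A minimal sketch of the differential form of a causal exponential time filter, dū/dt = (u − ū)/Δ, integrated for several filter widths to illustrate the two limits discussed above; the test signal and the chosen values of Δ are assumptions.

```python
import numpy as np

def temporal_filter(u, dt, Delta):
    """Differential form of a causal exponential time filter: d(ubar)/dt = (u - ubar)/Delta."""
    ubar = np.empty_like(u)
    ubar[0] = u[0]
    for n in range(1, len(u)):
        ubar[n] = ubar[n - 1] + dt * (u[n - 1] - ubar[n - 1]) / Delta
    return ubar

dt = 1e-3
t = np.arange(0, 20, dt)
u = np.sin(2 * np.pi * t) + 0.3 * np.sin(40 * np.pi * t)   # slow "resolved" + fast "residual" part

for Delta in (1e-2, 1e0, 1e2):
    residual = u - temporal_filter(u, dt, Delta)
    print(f"Delta = {Delta:7.2f}   rms of residual field = {residual.std():.3f}")
# small Delta: the residual shrinks toward zero (unfiltered equations recovered);
# large Delta: ubar tends to the long-time mean and the residual carries all fluctuations (RANS-like limit)
```

Note that only the current value of ū must be stored to advance the filter, which is the storage advantage of the differential form mentioned in the abstract.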
Spencer, Richard G
2010-09-01
A type of "matched filter" (MF), used extensively in the processing of one-dimensional spectra, is defined by multiplication of a free-induction decay (FID) by a decaying exponential with the same time constant as that of the FID. This maximizes, in a sense to be defined, the signal-to-noise ratio (SNR) in the spectrum obtained after Fourier transformation. However, a different entity also known as the matched filter was introduced by van Vleck in the context of pulse detection in the 1940s and has become widely integrated into signal processing practice. These two types of matched filters appear to be quite distinct. In the NMR case, the "filter", that is, the exponential multiplication, is defined by the characteristics of, and applied to, a time domain signal in order to achieve improved SNR in the spectral domain. In signal processing, the filter is defined by the characteristics of a signal in the spectral domain, and applied in order to improve the SNR in the temporal (pulse) domain. We reconcile these two distinct implementations of the matched filter, demonstrating that the NMR "matched filter" is a special case of the matched filter more rigorously defined in the signal processing literature. In addition, two limitations in the use of the MF are highlighted. First, application of the MF distorts resonance ratios as defined by amplitudes, although not as defined by areas. Second, the MF maximizes SNR with respect to resonance amplitude, while intensities are often more appropriately defined by areas. Maximizing the SNR with respect to area requires a somewhat different approach to matched filtering.
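A small numerical illustration of the NMR-style matched filter: the noisy FID is multiplied by a decaying exponential with the FID's own time constant before Fourier transformation; the time constant, noise level, and peak-SNR definition are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, T2, f0 = 1000.0, 0.1, 50.0             # assumed sampling rate, decay constant (s), frequency (Hz)
t = np.arange(0, 1.0, 1 / fs)
fid = np.exp(-t / T2) * np.cos(2 * np.pi * f0 * t) + 0.2 * rng.standard_normal(t.size)

def peak_snr(x):
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    noise = spec[freqs > 200.0].std()      # region assumed to contain only noise
    return spec.max() / noise

print("no filter      :", round(peak_snr(fid), 1))
print("matched filter :", round(peak_snr(fid * np.exp(-t / T2)), 1))
# the exponential multiplication raises the peak SNR but also broadens the line,
# distorting amplitude-defined resonance ratios while leaving area-defined ratios intact
```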
Chen, Jonathan H; Podchiyska, Tanya; Altman, Russ B
2016-03-01
To answer a "grand challenge" in clinical decision support, the authors produced a recommender system that automatically data-mines inpatient decision support from electronic medical records (EMR), analogous to Netflix or Amazon.com's product recommender. EMR data were extracted from 1 year of hospitalizations (>18K patients with >5.4M structured items including clinical orders, lab results, and diagnosis codes). Association statistics were counted for the ∼1.5K most common items to drive an order recommender. The authors assessed the recommender's ability to predict hospital admission orders and outcomes based on initial encounter data from separate validation patients. Compared to a reference benchmark of using the overall most common orders, the recommender using temporal relationships improves precision at 10 recommendations from 33% to 38% (P < 10^-10) for hospital admission orders. Relative risk-based association methods improve inverse frequency weighted recall from 4% to 16% (P < 10^-16). The framework yields a prediction receiver operating characteristic area under curve (c-statistic) of 0.84 for 30 day mortality, 0.84 for 1 week need for ICU life support, 0.80 for 1 week hospital discharge, and 0.68 for 30-day readmission. Recommender results quantitatively improve on reference benchmarks and qualitatively appear clinically reasonable. The method assumes that aggregate decision making converges appropriately, but ongoing evaluation is necessary to discern common behaviors from "correct" ones. Collaborative filtering recommender algorithms generate clinical decision support that is predictive of real practice patterns and clinical outcomes. Incorporating temporal relationships improves accuracy. Different evaluation metrics satisfy different goals (predicting likely events vs. "interesting" suggestions). Published by Oxford University Press on behalf of the American Medical Informatics Association 2015. This work is written by US Government employees and is in the public domain in the US.
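A minimal sketch of the co-occurrence counting that underlies such an order recommender: item-pair counts are turned into an association score and the top-scoring items are suggested given the initial orders. The toy encounters and the use of a simple lift-style score (rather than the paper's temporal and relative-risk statistics) are assumptions.

```python
from collections import Counter
from itertools import combinations

# toy encounters: each is the set of structured items recorded for one patient (assumed data)
encounters = [
    {"cbc", "chest_xray", "ceftriaxone", "blood_culture"},
    {"cbc", "troponin", "ecg", "aspirin"},
    {"cbc", "chest_xray", "azithromycin", "blood_culture"},
    {"cbc", "ecg", "troponin", "heparin"},
]

item_count = Counter()
pair_count = Counter()
for items in encounters:
    item_count.update(items)
    pair_count.update(frozenset(p) for p in combinations(sorted(items), 2))

def recommend(seed_items, k=3):
    """Rank candidate items by a lift-like association score with the seed items."""
    n = len(encounters)
    scores = Counter()
    for cand in item_count:
        if cand in seed_items:
            continue
        for s in seed_items:
            joint = pair_count[frozenset((s, cand))] / n
            expected = (item_count[s] / n) * (item_count[cand] / n)
            scores[cand] += joint / expected if expected else 0.0
    return scores.most_common(k)

print(recommend({"cbc", "chest_xray"}))
```

Restricting the joint counts to items that follow the seed items in time, as the paper does, changes only the counting step; the scoring and ranking structure stays the same.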
NASA Astrophysics Data System (ADS)
Falocchi, Marco; Giovannini, Lorenzo; Franceschi, Massimiliano de; Zardi, Dino
2018-05-01
We present a refinement of the recursive digital filter proposed by McMillen (Boundary-Layer Meteorol 43:231-245, 1988), for separating surface-layer turbulence from low-frequency fluctuations affecting the mean flow, especially over complex terrain. In fact, a straightforward application of the filter causes both an amplitude attenuation and a forward phase shift in the filtered signal. As a consequence turbulence fluctuations, evaluated as the difference between the original series and the filtered one, as well as higher-order moments calculated from them, may be affected by serious inaccuracies. The new algorithm (i) produces a rigorous zero-phase filter, (ii) restores the amplitude of the low-frequency signal, and (iii) corrects all filter-induced signal distortions.
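The paper's correction scheme is not reproduced here; as a point of comparison, a common offline way to obtain a zero-phase low-frequency estimate is to run a first-order recursive filter forward and then backward, which cancels the phase shift (though it squares, rather than restores, the magnitude response). A sketch, with the filter time constant and test signal as assumptions:

```python
import numpy as np
from scipy.signal import lfilter, filtfilt

fs = 20.0                                    # assumed sonic-anemometer sampling rate (Hz)
tau = 50.0                                   # assumed mean-flow time constant (s)
alpha = np.exp(-1.0 / (fs * tau))
b, a = [1.0 - alpha], [1.0, -alpha]          # first-order recursive low-pass (McMillen-type)

rng = np.random.default_rng(2)
t = np.arange(0, 7200, 1 / fs)
mean_flow = 0.5 * np.sin(2 * np.pi * t / 1800)          # slow modulation of the mean wind
w = mean_flow + 0.3 * rng.standard_normal(t.size)       # plus turbulence-like fluctuations

trend_one_pass = lfilter(b, a, w)     # single forward pass: amplitude attenuation and a time lag
trend_zero_phase = filtfilt(b, a, w)  # forward-backward pass: zero phase (magnitude squared,
                                      # not restored; the paper's algorithm also corrects amplitude)

for name, trend in [("one-pass", trend_one_pass), ("zero-phase", trend_zero_phase)]:
    err = trend - mean_flow
    print(f"{name:10s} rms error against the true slow signal: {np.sqrt(np.mean(err ** 2)):.3f}")
```

The turbulence fluctuations would then be taken as w minus the trend, which is where the phase and amplitude errors of the one-pass filter propagate into higher-order moments.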
Photonic crystal ring resonator based optical filters for photonic integrated circuits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, S., E-mail: mail2robinson@gmail.com
In this paper, two-dimensional (2D) Photonic Crystal Ring Resonator (PCRR) based optical filters, namely an Add Drop Filter, a Bandpass Filter, and a Bandstop Filter, are designed for Photonic Integrated Circuits (PICs). The normalized output response of the filters is obtained using the 2D Finite Difference Time Domain (FDTD) method, and the band diagram of the periodic and non-periodic structure is obtained by the Plane Wave Expansion (PWE) method. The size of the device is minimized from a scale of a few tens of millimeters to the order of micrometers. The overall size of the filters is around 11.4 μm × 11.4 μm, which is highly suitable for photonic integrated circuits.
On controlling nonlinear dissipation in high order filter methods for ideal and non-ideal MHD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjogreen, B.
2004-01-01
The newly developed adaptive numerical dissipation control in spatially high order filter schemes for the compressible Euler and Navier-Stokes equations has been recently extended to the ideal and non-ideal magnetohydrodynamics (MHD) equations. These filter schemes are applicable to complex unsteady MHD high-speed shock/shear/turbulence problems. They also provide a natural and efficient way for the minimization of Div(B) numerical error. The adaptive numerical dissipation mechanism consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free from numerical dissipation contamination. The numerical dissipation considered consists of high order linear dissipation for the suppression of high frequency oscillation and the nonlinear dissipative portion of high-resolution shock-capturing methods for discontinuity capturing. The applicable nonlinear dissipative portion of high-resolution shock-capturing methods is very general. The objective of this paper is to investigate the performance of three commonly used types of nonlinear numerical dissipation for both the ideal and non-ideal MHD.
Isaksen, Jonas; Leber, Remo; Schmid, Ramun; Schmid, Hans-Jakob; Generali, Gianluca; Abächerli, Roger
2017-02-01
The first-order high-pass filter (AC coupling) has previously been shown to affect the ECG for higher cut-off frequencies. We seek to find a systematic deviation in computer measurements of the electrocardiogram when AC coupling with a 0.05 Hz first-order high-pass filter is used. The standard 12-lead electrocardiogram from 1248 patients and the automated measurements of their DC- and AC-coupled versions were used. We expect a large unipolar QRS-complex to produce a deviation in the opposite direction in the ST-segment. We found a strong correlation between the QRS integral and the offset throughout the ST-segment. The coefficient for J amplitude deviation was found to be -0.277 µV/(µV⋅s). Potentially dangerous alterations to the diagnostically important ST-segment were found. Medical professionals and software developers for electrocardiogram interpretation programs should be aware of such high-pass filter effects since they could be misinterpreted as pathophysiology or some pathophysiology could be masked by these effects. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
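A small sketch of the effect: a first-order 0.05 Hz high-pass filter applied to a synthetic train of unipolar QRS-like pulses produces an offset in the "ST segment" opposite in sign to the QRS deflection. The synthetic waveform, sampling rate, and sampling point are assumptions, and the numbers are not meant to reproduce the paper's coefficient.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 500.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)

# crude unipolar "QRS" triangles, 100 ms wide, 1 mV tall, once per second (assumed shape)
for beat in np.arange(0.5, 10, 1.0):
    m = np.abs(t - beat) < 0.05
    ecg[m] = 1.0 - np.abs(t[m] - beat) / 0.05            # mV

b, a = butter(1, 0.05, btype="highpass", fs=fs)          # first-order 0.05 Hz AC coupling
ecg_ac = lfilter(b, a, ecg)

# sample the "ST segment" ~120 ms after a late QRS peak (the DC-coupled value there is 0 mV)
idx = int((8.5 + 0.12) * fs)
qrs_area = 0.5 * 1.0 * 0.1                               # triangle area in mV*s
print(f"ST-level shift: {1e3 * ecg_ac[idx]:.1f} uV for a QRS integral of {qrs_area:.2f} mV*s")
```

The shift is negative for a positive QRS complex, matching the opposite-direction deviation described above, and it scales with the QRS integral.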
NASA Astrophysics Data System (ADS)
Kim, Seong-woo; Park, Young-cheol; Seo, Young-soo; Youn, Dae Hee
2014-12-01
In this paper, we propose a high-order lattice adaptive notch filter (LANF) that can robustly track multiple sinusoids. Unlike the conventional cascade structure, the proposed high-order LANF has robust tracking characteristics regardless of the frequencies of reference sinusoids and initial notch frequencies. The proposed high-order LANF is applied to a narrowband adaptive noise cancellation (ANC) to mitigate the effect of the broadband disturbance in the reference signal. By utilizing the gradient adaptive lattice (GAL) ANC algorithm and approximately combining it with the proposed high-order LANF, a computationally efficient narrowband ANC system is obtained. Experimental results demonstrate the robustness of the proposed high-order LANF and the effectiveness of the obtained narrowband ANC system.
NASA Astrophysics Data System (ADS)
Zhong, Ke; Lei, Xia; Li, Shaoqian
2013-12-01
A statistics-based intercarrier interference (ICI) mitigation algorithm is proposed for orthogonal frequency division multiplexing systems in the presence of both nonstationary and stationary phase noise. By utilizing the statistics of the phase noise, which can be obtained from measurements or data sheets, a Wiener filter preprocessing algorithm for ICI mitigation is proposed. The proposed algorithm can be regarded as a performance-improving technique for previous approaches to phase noise cancelation. Simulation results show that the proposed algorithm can effectively mitigate ICI and lower the error floor, and therefore significantly improve the performance of previous phase noise cancelation schemes, especially in the presence of severe phase noise.
NASA Technical Reports Server (NTRS)
Stahl, H. Philip (Inventor); Walker, Chanda Bartlett (Inventor)
2006-01-01
An achromatic shearing phase sensor generates an image indicative of at least one measure of alignment between two segments of a segmented telescope's mirrors. An optical grating receives at least a portion of irradiance originating at the segmented telescope in the form of a collimated beam and diffracts the collimated beam into a plurality of diffraction orders. Focusing optics separate and focus the diffraction orders. Filtering optics then filter the diffraction orders to generate a resultant set of modified diffraction orders. Imaging optics combine portions of the resultant set of diffraction orders to generate an interference pattern that is ultimately imaged by an imager.
Optimal causal filtering for 1/fα-type noise in single-electrode EEG signals.
Paris, Alan; Atia, George; Vosoughi, Azadeh; Berman, Stephen A
2016-08-01
Understanding the mode of generation and the statistical structure of neurological noise is one of the central problems of biomedical signal processing. We have developed a broad class of abstract biological noise sources we call hidden simplicial tissues. In the simplest cases, such tissue emits what we have named generalized van der Ziel-McWhorter (GVZM) noise which has a roughly 1/fα spectral roll-off. Our previous work focused on the statistical structure of GVZM frequency spectra. However, causality of processing operations (i.e., dependence only on the past) is an essential requirement for real-time applications to seizure detection and brain-computer interfacing. In this paper we outline the theoretical background for optimal causal time-domain filtering of deterministic signals embedded in GVZM noise. We present some of our early findings concerning the optimal filtering of EEG signals for the detection of steady-state visual evoked potential (SSVEP) responses and indicate the next steps in our ongoing research.
Recursive least squares estimation and its application to shallow trench isolation
NASA Astrophysics Data System (ADS)
Wang, Jin; Qin, S. Joe; Bode, Christopher A.; Purdy, Matthew A.
2003-06-01
In recent years, run-to-run (R2R) control technology has received tremendous interest in semiconductor manufacturing. One class of widely used run-to-run controllers is based on exponentially weighted moving average (EWMA) statistics to estimate process deviations. Using an EWMA filter to smooth the control action on a linear process has been shown to provide good results in a number of applications. However, for a process with severe drifts, the EWMA controller is insufficient even when large weights are used. This problem becomes more severe when there is measurement delay, which is almost inevitable in the semiconductor industry. In order to control drifting processes, a predictor-corrector controller (PCC) and a double-EWMA controller have been developed. Chen and Guo (2001) show that both the PCC and the double-EWMA controller are in effect Integral-double-Integral (I-II) controllers, which are able to control drifting processes. However, since the offset is often within the noise of the process, the second integrator can actually cause jittering. Besides, tuning the second filter is not as intuitive as tuning a single EWMA filter. In this work, we look at an alternative approach, recursive least squares (RLS), to estimate and control the drifting process. EWMA and double-EWMA are shown to be the least squares estimates for the locally constant mean model and the locally constant linear trend model, respectively. Recursive least squares with an exponential forgetting factor is then applied to the shallow trench isolation etch process to predict the future etch rate. The etch process, which is a critical process in flash memory manufacturing, is known to suffer from significant etch rate drift due to chamber seasoning. In order to handle the metrology delay, we propose a new time update scheme. RLS with the new time update method gives very good results. The estimation error variance is smaller than that from EWMA, and the mean square error decreases by more than 10% compared to that from EWMA.
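A minimal sketch contrasting an EWMA estimate with recursive least squares using an exponential forgetting factor on a drifting process; the drift rate, noise level, and tuning constants are assumptions, and the metrology-delay handling proposed in the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
runs = np.arange(300)
etch_rate = 100.0 - 0.2 * runs + rng.normal(0, 0.5, runs.size)   # drifting process (arbitrary units)

# EWMA estimate of a locally constant mean
lam, ewma = 0.3, etch_rate[0]
ewma_err = []
for y in etch_rate:
    ewma_err.append(y - ewma)                      # one-step prediction error
    ewma = lam * y + (1 - lam) * ewma

# RLS with exponential forgetting, fitting a local linear trend y ~ theta0 + theta1 * k
ff = 0.95
theta = np.array([etch_rate[0], 0.0])
P = np.eye(2) * 100.0
rls_err = []
for k, y in enumerate(etch_rate):
    phi = np.array([1.0, float(k)])
    rls_err.append(y - phi @ theta)                # one-step prediction error
    K = P @ phi / (ff + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi) @ P) / ff

print("EWMA prediction MSE:", round(np.mean(np.square(ewma_err[20:])), 3))
print("RLS  prediction MSE:", round(np.mean(np.square(rls_err[20:])), 3))
```

Because the RLS regressor includes a trend term, its prediction does not lag a steadily drifting etch rate the way a single EWMA does.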
Mass Conservation and Positivity Preservation with Ensemble-type Kalman Filter Algorithms
NASA Technical Reports Server (NTRS)
Janjic, Tijana; McLaughlin, Dennis B.; Cohn, Stephen E.; Verlaan, Martin
2013-01-01
Maintaining conservative physical laws numerically has long been recognized as being important in the development of numerical weather prediction (NWP) models. In the broader context of data assimilation, concerted efforts to maintain conservation laws numerically and to understand the significance of doing so have begun only recently. In order to enforce physically based conservation laws of total mass and positivity in the ensemble Kalman filter, we incorporate constraints to ensure that the filter ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. We show that the analysis steps of the ensemble transform Kalman filter (ETKF) and ensemble Kalman filter (EnKF) algorithms can conserve the mass integral, but do not preserve positivity. Further, if localization is applied or if negative values are simply set to zero, then the total mass is not conserved either. In order to ensure mass conservation, a projection matrix that corrects for localization effects is constructed. In order to maintain both mass conservation and positivity preservation through the analysis step, we construct a data assimilation algorithm based on quadratic programming and ensemble Kalman filtering. Mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate constraints. Some simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. The results show clear improvements in both analyses and forecasts, particularly in the presence of localized features. Behavior of the algorithm is also tested in the presence of model error.
Ecosystem responses to climate change at a Low Arctic and a High Arctic long-term research site
John E. Hobbie; Gaius R. Shaver; Edward B. Rastetter; Jessica E. Cherry; Scott J. Goetz; Kevin C. Guay; William A. Gould; George W. Kling
2017-01-01
Long-term measurements of ecological effects of warming are often not statistically significant because of annual variability or signal noise. These are reduced in indicators that filter or reduce the noise around the signal and allow effects of climate warming to emerge. In this way, certain indicators act as medium pass filters integrating the signal over years-to-...
High order filtering methods for approximating hyperbolic systems of conservation laws
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1991-01-01
The essentially nonoscillatory (ENO) schemes, while potentially useful in the computation of discontinuous solutions of hyperbolic conservation-law systems, are computationally costly relative to simple central-difference methods. A filtering technique is presented which employs central differencing of arbitrarily high-order accuracy except where a local test detects the presence of spurious oscillations and calls upon the full ENO apparatus to remove them. A factor-of-three speedup is thus obtained over the full-ENO method for a wide range of problems, with high-order accuracy in regions of smooth flow.
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-01-01
In this paper, a novel nonlinear framework of smoothing method, non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student’s t-distribution is adopted in order to compute the probability distribution function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed-delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated based on the real-world data, which is collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS has significant improvement on the vehicle state accuracy and outperforms the existing filtering and smoothing methods. PMID:27187405
Pérez Zaballos, María Teresa; Ramos de Miguel, Ángel; Pérez Plasencia, Daniel; Zaballos González, María Luisa; Ramos Macías, Ángel
2015-12-01
To evaluate 1) if air traffic controllers (ATC) perform better than non-air traffic controllers in an open-set speech-in-noise test because of their experience with radio communications, and 2) if high-frequency information (>8000 Hz) substantially improves speech-in-noise perception across populations. The control group comprised 28 normal-hearing subjects, and the target group comprised 48 ATCs aged between 19 and 55 years who were native Spanish speakers. The hearing-in-noise abilities of the two groups were characterized under two signal conditions: 1) speech tokens and white noise sampled at 44.1 kHz (unfiltered condition) and 2) speech tokens plus white noise, each passed through a 4th order Butterworth filter with 70 and 8000 Hz low and high cutoffs (filtered condition). These tests were performed at signal-to-noise ratios of +5, 0, and -5 dB SNR. The ATCs outperformed the control group in all conditions. The differences were statistically significant in all cases, and the largest difference was observed under the most difficult conditions (-5 dB SNR). Overall, scores were higher when high-frequency components were not suppressed for both groups, although statistically significant differences were not observed for the control group at 0 dB SNR. The results indicate that ATCs are more capable of identifying speech in noise. This may be due to the effect of their training. On the other hand, performance seems to decrease when the high frequency components of speech are removed, regardless of training.
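A small sketch of the band-limited ("filtered") condition: a synthetic speech-like token plus white noise at the three SNRs, passed through a Butterworth band-pass with 70 Hz and 8000 Hz edges. The stand-in "speech" signal and the second-order-section implementation are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
rng = np.random.default_rng(4)
t = np.arange(0, 1.0, 1 / fs)

# stand-in "speech token": a few harmonics of a 150 Hz voice-like fundamental (assumed)
speech = sum(np.sin(2 * np.pi * 150 * k * t) / k for k in range(1, 20))
speech = speech / np.abs(speech).max()

# Butterworth band limiting with 70 Hz and 8000 Hz edges (order 4 per edge)
sos = butter(4, [70, 8000], btype="bandpass", fs=fs, output="sos")

for snr_db in (+5, 0, -5):
    noise = rng.standard_normal(speech.size)
    noise *= np.sqrt(speech.var() / noise.var()) * 10 ** (-snr_db / 20)
    speech_f, noise_f = sosfilt(sos, speech), sosfilt(sos, noise)
    snr_out = 10 * np.log10(speech_f.var() / noise_f.var())
    print(f"input SNR {snr_db:+d} dB -> SNR after 70-8000 Hz band limiting: {snr_out:+.1f} dB")
```

Because the broadband noise extends above 8 kHz while the token's energy lies in band, the band limiting changes the effective SNR slightly, which is one reason the filtered and unfiltered conditions are not directly interchangeable.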
Advanced signal processing based on support vector regression for lidar applications
NASA Astrophysics Data System (ADS)
Gelfusa, M.; Murari, A.; Malizia, A.; Lungaroni, M.; Peluso, E.; Parracino, S.; Talebzadeh, S.; Vega, J.; Gaudio, P.
2015-10-01
The LIDAR technique has recently found many applications in atmospheric physics and remote sensing. One of the main issues in the deployment of systems based on LIDAR is the filtering of the backscattered signal to alleviate the problems generated by noise. Improvement in the signal-to-noise ratio is typically achieved by averaging a rather large number (on the order of hundreds) of successive laser pulses. This approach can be effective but presents significant limitations. First of all, it places great stress on the laser source, particularly in the case of systems for automatic monitoring of large areas for long periods. Secondly, this solution can become difficult to implement in applications characterised by rapid variations of the atmosphere, for example in the case of pollutant emissions, or by abrupt changes in the noise. In this contribution, a new method for the software filtering and denoising of LIDAR signals is presented. The technique is based on support vector regression. The proposed new method is insensitive to the statistics of the noise and is therefore fully general and quite robust. The developed numerical tool has been systematically compared with the most powerful techniques available, using both synthetic and experimental data. Its performance has been tested for various statistical distributions of the noise and also for other disturbances of the acquired signal such as outliers. The competitive advantages of the proposed method are fully documented. The potential of the proposed approach to widen the capability of the LIDAR technique, particularly in the detection of widespread smoke, is discussed in detail.
Development of a Sigma-2 Receptor affinity filter through a Monte Carlo based QSAR analysis.
Rescifina, Antonio; Floresta, Giuseppe; Marrazzo, Agostino; Parenti, Carmela; Prezzavento, Orazio; Nastasi, Giovanni; Dichiara, Maria; Amata, Emanuele
2017-08-30
For the first time in the sigma-2 (σ2) receptor field, a quantitative structure-activity relationship (QSAR) model has been built using pKi values of the whole set of known selective σ2 receptor ligands (548 compounds), taken from the Sigma-2 Receptor Selective Ligands Database (S2RSLDB) (http://www.researchdsf.unict.it/S2RSLDB/), through the Monte Carlo technique and employing the software CORAL. The model has been developed using a large and structurally diverse set of compounds, allowing for prediction of the endpoint (σ2 receptor pKi) across different populations of chemical compounds. The statistical quality reached suggests that the model for pKi determination is robust and possesses satisfactory predictive potential. The statistical quality is high for both visible and invisible sets. The screening of FDA-approved drugs, external to our dataset, suggested that sixteen compounds might be repositioned as σ2 receptor ligands (predicted pKi ≥ 8). A literature check showed that six of these compounds have already been tested for affinity at the σ2 receptor and, of these, two (Flunarizine and Terbinafine) have shown an experimental σ2 receptor pKi > 7. This suggests that this QSAR model may be used as a focused screening filter to prospectively find or repurpose new drugs with high affinity for the σ2 receptor, overall allowing for an enhanced hit rate with respect to random screening. Copyright © 2017 Elsevier B.V. All rights reserved.
Study of the use of a nonlinear, rate limited, filter on pilot control signals
NASA Technical Reports Server (NTRS)
Adams, J. J.
1977-01-01
The use of a filter on the pilot's control output could improve the performance of the pilot-aircraft system. What is needed is a filter with a sharp high frequency cut-off, no resonance peak, and a minimum of lag at low frequencies. The present investigation studies the usefulness of a nonlinear, rate-limited filter in performing the needed function. The nonlinear filter is compared with a linear, first-order filter, and no filter. An analytical study using pilot models and a simulation study using experienced test pilots were performed. The results showed that the nonlinear filter does promote quick, steady maneuvering. It is shown that the nonlinear filter attenuates the high frequency remnant and adds less phase lag to the low frequency signal than does the linear filter. It is also shown that the rate limit in the nonlinear filter can be set to be too restrictive, causing an unstable pilot-aircraft system response.
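A small sketch contrasting a rate-limited nonlinear filter with a linear first-order lag on a pilot-like control signal (quick step maneuvers plus high-frequency remnant); the rate limit, lag time constant, and test signal are assumptions.

```python
import numpy as np

dt = 0.01
t = np.arange(0, 10, dt)
rng = np.random.default_rng(5)
command = np.sign(np.sin(0.4 * np.pi * t))              # quick step maneuvers
stick = command + 0.15 * rng.standard_normal(t.size)    # plus high-frequency remnant

def first_order_lag(x, tau):
    """Linear first-order lag filter integrated with Euler steps."""
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = y[n - 1] + dt * (x[n - 1] - y[n - 1]) / tau
    return y

def rate_limited(x, rate):
    """Nonlinear filter: the output follows the input but its slew rate is capped."""
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = y[n - 1] + np.clip(x[n] - y[n - 1], -rate * dt, rate * dt)
    return y

lin = first_order_lag(stick, tau=0.5)
nonlin = rate_limited(stick, rate=4.0)     # too small a rate limit would make the response sluggish
for name, y in [("first-order lag", lin), ("rate-limited", nonlin)]:
    print(f"{name:16s} rms tracking error = {np.sqrt(np.mean((y - command) ** 2)):.3f}")
```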
Simulation for noise cancellation using LMS adaptive filter
NASA Astrophysics Data System (ADS)
Lee, Jia-Haw; Ooi, Lu-Ean; Ko, Ying-Hao; Teoh, Choe-Yung
2017-06-01
In this paper, the fundamental noise-cancellation algorithm, the least mean square (LMS) algorithm, is studied and enhanced with an adaptive filter. A simulation of noise cancellation using the LMS adaptive filter algorithm is developed. The noise-corrupted speech signal and the engine noise signal are used as inputs for the LMS adaptive filter algorithm. The filtered signal is compared to the original noise-free speech signal in order to highlight the level of attenuation of the noise signal. The result shows that the noise signal is successfully canceled by the developed adaptive filter. The difference between the noise-free speech signal and the filtered signal is calculated, and the outcome implies that the filtered signal approaches the noise-free speech signal as the adaptive filtering proceeds. The frequency range of the noise successfully canceled by the LMS adaptive filter algorithm is determined by performing a Fast Fourier Transform (FFT) on the signals. The LMS adaptive filter algorithm shows significant noise cancellation in the lower frequency range.
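A minimal sketch of LMS adaptive noise cancellation with a reference noise input; the filter length, step size, and synthetic signals stand in for the speech and engine-noise recordings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
fs, dur = 8000, 2.0
t = np.arange(0, dur, 1 / fs)

speech = np.sin(2 * np.pi * 440 * t) * np.sin(2 * np.pi * 3 * t)      # stand-in speech signal
ref_noise = rng.standard_normal(t.size)                               # reference "engine" noise
path = np.array([0.6, -0.3, 0.2, 0.1])                                # unknown acoustic path (assumed)
corrupted = speech + np.convolve(ref_noise, path, mode="full")[: t.size]

def lms(primary, reference, order=16, mu=0.01):
    w = np.zeros(order)
    out = np.zeros_like(primary)
    for n in range(order, len(primary)):
        x = reference[n - order + 1 : n + 1][::-1]   # most recent reference samples
        y = w @ x                                     # estimate of the noise in the primary input
        e = primary[n] - y                            # error = cleaned (speech) estimate
        w += 2 * mu * e * x                           # LMS weight update
        out[n] = e
    return out

cleaned = lms(corrupted, ref_noise)
print("residual noise power before:", round(np.mean((corrupted - speech) ** 2), 4))
print("residual noise power after :", round(np.mean((cleaned[fs:] - speech[fs:]) ** 2), 4))
```

The first second of output is discarded in the comparison to let the weights converge; comparing spectra of the residuals before and after (via an FFT, as in the abstract) shows where in frequency the cancellation is most effective.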
Computer simulation results of attitude estimation of earth orbiting satellites
NASA Technical Reports Server (NTRS)
Kou, S. R.
1976-01-01
Computer simulation results of attitude estimation of Earth-orbiting satellites (including Space Telescope) subjected to environmental disturbances and noises are presented. A decomposed linear recursive filter and a Kalman filter were used as estimation tools. Six programs were developed for this simulation; all were written in the BASIC language and were run on HP 9830A and HP 9866A computers. Simulation results show that a decomposed linear recursive filter is accurate in estimation and fast in response time. Furthermore, for higher order systems, this filter has computational advantages (i.e., smaller integration and roundoff errors) over a Kalman filter.
Study of different filtering techniques applied to spectra from airborne gamma spectrometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilhelm, Emilien; Gutierrez, Sebastien; Reboli, Anne
2015-07-01
One of the features of spectra obtained by airborne gamma spectrometry is low counting statistics due to the short acquisition time (1 s) and the large source-detector distance (40 m). This leads to considerable uncertainty in radionuclide identification and in the determination of their respective activities with the windows method recommended by the IAEA, especially for low-level radioactivity. The present work compares the results obtained with different filters in terms of the errors of the filtered spectra, both with the window method and over the whole gamma energy range. The results are used to determine which filtering technique is the most suitable in combination with some method for total stripping of the spectrum. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, S; Vedantham, S; Karellas, A
Purpose: In digital breast tomosynthesis (DBT) systems capable of digital mammography (DM), Al filters are used during DBT and K-edge filters during DM. The potential for standardizing the x-ray filters with Al, instead of K-edge filters, was investigated with the intent to reduce exposure duration and to promote a simpler system design. Methods: Analytical computations of the half-value thickness (HVT) and the photon fluence per mAs (photons/mm^2/mAs) for K-edge filters (50 µm Rh; 50 µm Ag) were compared with Al filters of varying thickness. Two strategies for matching the HVT from K-edge and Al filtered spectra were investigated: varying the kVp for fixed Al thickness, or varying the Al thickness at matched kVp. For both strategies, Al filters were an order of magnitude thicker than K-edge filters. Hence, Monte Carlo simulations were conducted with the GEANT4 toolkit to determine if the scatter-to-primary ratio (SPR) and the point spread function of scatter (scatter PSF) differed between Al and K-edge filters. Results: Results show the potential for replacing currently used K-edge filters with Al. For fixed Al thickness (700 µm), changes of ±1 kVp and +(1–3) kVp matched the HVT of the Rh and Ag filtered spectra, respectively. At matched kVp, Al thicknesses in the ranges of 650–750 µm and 750–860 µm matched the HVT from the Rh and Ag filtered spectra. Photon fluence/mAs with Al filters was 1.5–2.5 times higher, depending on kVp and Al thickness, compared to K-edge filters. Although the Al thickness was an order of magnitude greater than that of the K-edge filters, neither the SPR nor the scatter PSF differed from those of the K-edge filters. Conclusion: The use of Al filters for digital mammography is potentially feasible. The increased fluence/mAs with Al could decrease exposure duration for the combined DBT+DM exam and simplify system design. The effect of the x-ray spectrum change due to Al filtration on radiation dose, signal, noise, contrast, and related metrics is being investigated. Funding support: Supported in part by NIH R21CA176470 and R01CA195512. The contents are solely the responsibility of the authors and do not reflect the official views of the NIH or NCI.
Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza
2015-01-01
To investigate the effect of preprocessing techniques including contrast enhancement and illumination correction on retinal image quality, a comparative study was carried out. We studied and implemented a few illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method, using the median filter to estimate background illumination, showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy. The contrast limited adaptive histogram equalization technique has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, namely the dividing method using the median filter to estimate the background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation. PMID:25709940
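A small sketch of the dividing method (estimating background illumination with a large median filter and dividing it out) followed by CLAHE, with the coefficient of variation computed as in the comparison above; the synthetic fundus-like image, kernel size, and clip limit are assumptions, and scipy/scikit-image stand in for the authors' implementation.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage import exposure

rng = np.random.default_rng(7)
yy, xx = np.mgrid[0:256, 0:256]

# synthetic green channel: thin dark "vessels" on a disc with uneven illumination (assumed)
vessels = (np.sin(xx / 6.0) > 0.97).astype(float) * 0.4
illumination = np.exp(-((xx - 90) ** 2 + (yy - 110) ** 2) / (2 * 120.0 ** 2))   # off-centre vignetting
green = np.clip(illumination * (0.7 - vessels) + 0.02 * rng.standard_normal(yy.shape), 0, 1)

background = median_filter(green, size=31) + 1e-6       # large-kernel illumination estimate
ratio = green / background                              # "dividing" illumination correction
corrected = np.clip(ratio / ratio.max(), 0, 1)

enhanced = exposure.equalize_adapthist(corrected, clip_limit=0.01)   # CLAHE contrast enhancement

def coefficient_of_variation(img):
    return img.std() / img.mean()

for name, img in [("raw green", green), ("divided", corrected), ("CLAHE", enhanced)]:
    print(f"{name:10s} coefficient of variation = {coefficient_of_variation(img):.3f}")
```

The median kernel must be large relative to the vessel width so that vessels do not leak into the background estimate; otherwise the division suppresses the very structures the segmentation is meant to find.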
Anabtawi, Nijad; Ferzli, Rony; Harmanani, Haidar M.
2017-01-01
This paper presents a step-down, switched-mode power converter for use in multi-standard envelope tracking radio frequency power amplifiers (RFPA). The converter is based on a programmable-order sigma-delta modulator that can be configured to operate with either 1st, 2nd, 3rd, or 4th order loop filters, eliminating the need for a bulky passive output filter. Output ripple, sideband noise, and spectral emission requirements of different wireless standards can be met by configuring the modulator's filter order and the converter's sampling frequency. The proposed converter is entirely digital and is implemented in a 14 nm bulk CMOS process for post-layout verification. For an input voltage of 3.3 V, the converter's output can be regulated to any voltage level from 0.5 V to 2.5 V, at a nominal switching frequency of 150 MHz. It achieves a maximum efficiency of 94% at 1.5 W output power. PMID:28919657
The Power Plant Operating Data Based on Real-time Digital Filtration Technology
NASA Astrophysics Data System (ADS)
Zhao, Ning; Chen, Ya-mi; Wang, Hui-jie
2018-03-01
Real-time monitoring of thermal power plant data is the basis for accurate analysis of thermal economy and accurate reconstruction of the operating state. Because noise interference is inevitable, the real-time monitoring data must be filtered to obtain accurate information on the operating data of the plant's units and equipment. A real-time filtering algorithm cannot use future data to correct the current data, which imposes many constraints compared with traditional filtering algorithms. The first-order lag filtering method and the weighted recursive average filtering method can be used for real-time filtering. This paper analyzes the characteristics of the two filtering methods and their application to real-time processing of simulated data and of thermal power plant operating data. The analysis revealed that the weighted recursive average filtering method achieved very good results when applied to both the simulated and the real-time plant data.
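A small sketch of the two real-time methods discussed: a first-order lag filter and a weighted recursive average over recent samples, applied to a noisy step in a monitored plant quantity; the smoothing constant, weights, and test data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
true = np.concatenate([np.full(200, 300.0), np.full(200, 320.0)])    # e.g. a steam-temperature step
meas = true + rng.normal(0, 2.0, true.size)                           # noisy plant measurement

def first_order_lag(x, a=0.1):
    """y[n] = a*x[n] + (1-a)*y[n-1]; uses only current and past samples (real-time capable)."""
    y = np.empty_like(x)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = a * x[n] + (1 - a) * y[n - 1]
    return y

def weighted_recursive_average(x, weights=(4, 3, 2, 1)):
    """Weighted average over a sliding window of recent samples, newest weighted most."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    y = np.copy(x)
    for n in range(len(w) - 1, len(x)):
        y[n] = w @ x[n - len(w) + 1 : n + 1][::-1]
    return y

for name, y in [("first-order lag", first_order_lag(meas)),
                ("weighted average", weighted_recursive_average(meas))]:
    print(f"{name:17s} rms error = {np.sqrt(np.mean((y - true) ** 2)):.2f}")
```

The trade-off is visible in the printed errors: the lag filter smooths the noise more strongly but responds more slowly to the step, while the short weighted average follows changes quickly at the cost of residual noise.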
Nuclear counting filter based on a centered Skellam test and a double exponential smoothing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coulon, Romain; Kondrasovs, Vladimir; Dumazert, Jonathan
2015-07-01
Online nuclear counting represents a challenge due to the stochastic nature of radioactivity. The count data have to be filtered in order to provide a precise and accurate estimation of the count rate, with a response time compatible with the application in view. An innovative filter addressing this issue is presented in this paper. It is a nonlinear filter based on a Centered Skellam Test (CST), giving a local maximum likelihood estimation of the signal under a Poisson distribution assumption. This nonlinear approach allows the counting signal to be smoothed while maintaining a fast response when abrupt activity changes occur. The filter has been improved by the implementation of Brown's double exponential smoothing (BES). The filter has been validated and compared to other state-of-the-art smoothing filters. The CST-BES filter shows a significant improvement compared to all tested smoothing filters. (authors)
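A small sketch of the Brown's double exponential smoothing (BES) stage; the Centered Skellam Test that gates the smoothing in the paper is not reproduced, and the smoothing constant and count-rate profile are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
rate = np.concatenate([np.full(300, 20.0), np.linspace(20, 200, 100), np.full(200, 200.0)])
counts = rng.poisson(rate)                        # counts per acquisition interval

def brown_des(x, alpha=0.1):
    """Brown's double exponential smoothing: level (and trend) from two chained EWMA passes."""
    s1 = s2 = float(x[0])
    est = np.empty(len(x))
    for n, xn in enumerate(x):
        s1 = alpha * xn + (1 - alpha) * s1        # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2        # second smoothing
        level = 2 * s1 - s2                       # current count-rate estimate
        trend = alpha / (1 - alpha) * (s1 - s2)   # slope; level + h*trend gives an h-step forecast
        est[n] = level
    return est

est = brown_des(counts)
print("rms error, raw counts  :", round(np.sqrt(np.mean((counts - rate) ** 2)), 2))
print("rms error, BES estimate:", round(np.sqrt(np.mean((est - rate) ** 2)), 2))
```

Unlike a single EWMA, the double smoothing does not lag behind the linearly ramping activity; the paper's CST additionally resets the smoother when a statistically significant jump is detected.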
Effect of high latitude filtering on NWP skill
NASA Technical Reports Server (NTRS)
Kalnay, E.; Hoffman, R.; Takacs, L. L.
1983-01-01
An assessment is made of the extent to which polar filtering may seriously affect the skill of latitude-longitude NWP models, such as the U.S. Navy's NOGAPS, or the GLAS fourth-order model. The limited experiments which have been completed to date with the 4 x 5-deg, 9-level version of the latter model indicate that the high latitude filter currently in operation affects its forecasting skill very little, with only one exception in which the use of the PG filter significantly improved forecasting.
Jammed-array wideband sawtooth filter.
Tan, Zhongwei; Wang, Chao; Goda, Keisuke; Malik, Omer; Jalali, Bahram
2011-11-21
We present an all-optical passive low-cost spectral filter that exhibits a high-resolution periodic sawtooth spectral pattern without the need for active optoelectronic components. The principle of the filter is the partial masking of a phased array of virtual light sources with multiply jammed diffraction orders. We utilize the filter's periodic linear map between frequency and intensity to demonstrate fast sensitive interrogation of fiber Bragg grating sensor arrays and ultrahigh-frequency electrical sawtooth waveform generation. © 2011 Optical Society of America
NASA Astrophysics Data System (ADS)
Eliazar, Iddo I.; Shlesinger, Michael F.
2012-01-01
We introduce and explore a Stochastic Flow Cascade (SFC) model: A general statistical model for the unidirectional flow through a tandem array of heterogeneous filters. Examples include the flow of: (i) liquid through heterogeneous porous layers; (ii) shocks through tandem shot noise systems; (iii) signals through tandem communication filters. The SFC model combines together the Langevin equation, convolution filters and moving averages, and Poissonian randomizations. A comprehensive analysis of the SFC model is carried out, yielding closed-form results. Lévy laws are shown to universally emerge from the SFC model, and characterize both heavy tailed retention times (Noah effect) and long-ranged correlations (Joseph effect).
Study of LCL filter performance for inverter fed grid connected system
NASA Astrophysics Data System (ADS)
Thamizh Thentral, T. M.; Geetha, A.; Subramani, C.
2018-04-01
The extensive use of power electronic converters in grid-connected systems leads to critical injected harmonics. Hence the use of a filter becomes significant in the present scenario. Higher-order passive filters are mostly preferred in this application because of their reduced cost and size. This paper focuses on the design of an LCL filter for the reduction of injected harmonics. The LCL filter is chosen for its smaller inductor size and better ripple-component attenuation compared with other conventional filters. This work is simulated on the MATLAB platform and the results support the objectives mentioned above. Also, the simulation results are verified with an implemented hardware model.
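A small sketch of the grid-side current response of an undamped LCL filter (inverter voltage to grid current, assuming an ideal grid), showing the resonance and the -60 dB/decade high-frequency roll-off that attenuates switching ripple; the component values and switching frequency are assumptions, not the paper's design.

```python
import numpy as np

# assumed component values for a low-voltage grid-tied inverter
L1, L2, Cf = 2.0e-3, 0.5e-3, 10e-6          # inverter-side L (H), grid-side L (H), filter C (F)

f = np.logspace(1, 5, 400)
s = 2j * np.pi * f
H = 1.0 / (s ** 3 * L1 * L2 * Cf + s * (L1 + L2))    # grid current / inverter voltage, ideal grid

f_res = np.sqrt((L1 + L2) / (L1 * L2 * Cf)) / (2 * np.pi)
f_sw = 10e3                                           # assumed switching frequency (Hz)
gain_sw_db = 20 * np.log10(np.abs(H[np.argmin(np.abs(f - f_sw))]))

print(f"resonance frequency ~ {f_res:.0f} Hz")
print(f"gain at the {f_sw/1e3:.0f} kHz switching frequency ~ {gain_sw_db:.1f} dB (A per V)")
# above the resonance the magnitude falls at -60 dB/decade, versus -20 dB/decade for a single L filter
```

A practical design would also add passive or active damping around the resonance, which this undamped sketch deliberately leaves out.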
NASA Technical Reports Server (NTRS)
Snow, Frank; Harman, Richard; Garrick, Joseph
1988-01-01
The Gamma Ray Observatory (GRO) spacecraft needs highly accurate attitude knowledge to achieve its mission objectives. Utilizing the fixed-head star trackers (FHSTs) for observations and gyroscopes for attitude propagation, the discrete Kalman filter processes the attitude data to obtain an onboard accuracy of 86 arc seconds (3 sigma). A combination of linear analysis and simulations using the GRO Software Simulator (GROSS) is employed to investigate the stability of the Kalman filter and the effects of corrupted observations (misalignment, noise), incomplete dynamic modeling, and nonlinear errors on the Kalman filter. In the simulations, on-board attitude is compared with true attitude, the sensitivity of attitude error to model errors is graphed, and a statistical analysis is performed on the residuals of the Kalman filter. In this paper, the modeling and sensor errors that degrade the Kalman filter solution beyond mission requirements are studied, and methods are offered to identify the source of these errors.
Super-resolution pupil filtering for visual performance enhancement using adaptive optics
NASA Astrophysics Data System (ADS)
Zhao, Lina; Dai, Yun; Zhao, Junlei; Zhou, Xiaojun
2018-05-01
Ocular aberration correction can significantly improve visual function of the human eye. However, even under ideal aberration correction conditions, pupil diffraction restricts the resolution of retinal images. Pupil filtering is a simple super-resolution (SR) method that can overcome this diffraction barrier. In this study, a 145-element piezoelectric deformable mirror was used as a pupil phase filter because of its programmability and high fitting accuracy. Continuous phase-only filters were designed based on Zernike polynomial series and fitted through closed-loop adaptive optics. SR results were validated using double-pass point spread function images. Contrast sensitivity was further assessed to verify the SR effect on visual function. An F-test was conducted for nested models to statistically compare different CSFs. These results indicated that the CSFs for the proposed SR filter were significantly higher than those for diffraction-limited correction alone (p < 0.05). As such, the proposed filter design could provide useful guidance for supernormal vision optical correction of the human eye.
Tatari, Karolina; Musovic, Sanin; Gülay, Arda; Dechesne, Arnaud; Albrechtsen, Hans-Jørgen; Smets, Barth F
2017-12-15
We investigated the density and distribution of total bacteria, canonical Ammonia Oxidizing Bacteria (AOB) (Nitrosomonas plus Nitrosospira), Ammonia Oxidizing Archaea (AOA), as well as Nitrobacter and Nitrospira in rapid sand filters used for groundwater treatment. To investigate the spatial distribution of these guilds, filter material was sampled at four drinking water treatment plants (DWTPs) in parallel filters of the pre- and after-filtration stages at different locations and depths. The target guilds were quantified by qPCR targeting 16S rRNA and amoA genes. Total bacterial densities (ignoring 16S rRNA gene copy number variation) were high and ranged from 10^9 to 10^10 per gram (10^15 to 10^16 per m^3) of filter material. All examined guilds, except AOA, were stratified at only one of the four DWTPs. Densities varied spatially within filter (intra-filter variation) at two of the DWTPs and in parallel filters (inter-filter variation) at one of the DWTPs. Variation analysis revealed random sampling as the most efficient strategy to yield accurate mean density estimates, with collection of at least 7 samples suggested to obtain an acceptable (below half order of magnitude) density precision. Nitrospira was consistently the most dominant guild (5-10% of total community), and was generally up to 4 orders of magnitude more abundant than Nitrobacter and up to 2 orders of magnitude more abundant than canonical AOBs. These results, supplemented with further analysis of the previously reported diversity of Nitrospira in the studied DWTPs based on 16S rRNA and nxrB gene phylogeny (Gülay et al., 2016; Palomo et al., 2016), indicate that the high Nitrospira abundance is due to their comammox (complete ammonia oxidation) physiology. AOA densities were lower than AOB densities, except in the highly stratified filters, where they were of similar abundance. In conclusion, rapid sand filters are microbially dense, with varying degrees of spatial heterogeneity, which requires replicate sampling for a sufficiently precise determination of total microbial community and specific population densities. A consistently high Nitrospira to bacterial and archaeal AOB density ratio suggests that non-canonical pathways for nitrification may dominate the examined RSFs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Stochastic Integration H∞ Filter for Rapid Transfer Alignment of INS.
Zhou, Dapeng; Guo, Lei
2017-11-18
The performance of an inertial navigation system (INS) operated on a moving base greatly depends on the accuracy of rapid transfer alignment (RTA). However, in practice, the coexistence of large initial attitude errors and uncertain observation noise statistics poses a great challenge for the estimation accuracy of misalignment angles. This study aims to develop a novel robust nonlinear filter, namely the stochastic integration H∞ filter (SIH∞F), for improving both the accuracy and robustness of RTA. In this new nonlinear H∞ filter, the stochastic spherical-radial integration rule is incorporated with the framework of the derivative-free H∞ filter for the first time, and the resulting SIH∞F simultaneously attenuates the negative effect in estimations caused by significant nonlinearity and large uncertainty. Comparisons between the SIH∞F and previously well-known methodologies are carried out by means of numerical simulation and a van test. The results demonstrate that the newly-proposed method outperforms the cubature H∞ filter. Moreover, the SIH∞F inherits the benefit of the traditional stochastic integration filter, but with more robustness in the presence of uncertainty.
Color filter array pattern identification using variance of color difference image
NASA Astrophysics Data System (ADS)
Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu
2017-07-01
A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel uses only one color, since the image sensor can measure only one color per pixel. Therefore, empty pixels are filled using an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics. If the image is modified by manipulation or forgery, the color filter array pattern is altered. This pattern change can be a clue for image forgery detection. However, most forgery detection algorithms have the disadvantage of assuming that the color filter array pattern is known. We present a method for identifying the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, the color difference block is constructed to emphasize the difference between the original pixel and the interpolated pixel. The variance measure of the color difference image is proposed as a means of estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern. Compared with conventional methods, our method provides superior performance.
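As a rough illustration of the variance-of-color-difference idea summarized above, the sketch below scores the four candidate Bayer layouts by comparing the high-frequency variance of the pixels each layout would treat as sensed versus interpolated. The scoring rule and the 3x3 local-mean window are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# 2x2 cell layouts (0=R, 1=G, 2=B) for the four candidate Bayer patterns.
LAYOUTS = {
    "RGGB": np.array([[0, 1], [1, 2]]),
    "GRBG": np.array([[1, 0], [2, 1]]),
    "GBRG": np.array([[1, 2], [0, 1]]),
    "BGGR": np.array([[2, 1], [1, 0]]),
}

def cfa_pattern_score(rgb, layout, win=3):
    """Score one candidate layout: pixels a layout treats as sensed should
    keep more high-frequency variance than demosaiced (interpolated) ones.
    `rgb` is a demosaiced H x W x 3 array."""
    h, w, _ = rgb.shape
    rows, cols = np.indices((h, w))
    score = 0.0
    for c in range(3):
        # Remove the local mean to suppress the scene background.
        residual = rgb[:, :, c] - uniform_filter(rgb[:, :, c], size=win)
        sensed = layout[rows % 2, cols % 2] == c
        score += residual[sensed].var() - residual[~sensed].var()
    return score

def identify_cfa_pattern(rgb):
    rgb = rgb.astype(np.float64)
    scores = {name: cfa_pattern_score(rgb, lay) for name, lay in LAYOUTS.items()}
    return max(scores, key=scores.get), scores
```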
Johnson, Christopher M; Pate, Mariah B; Postma, Gregory N
2018-04-01
Standard KTP (potassium titanyl phosphate) laser wavelength-specific protective eyewear often impairs visualization of tissue changes during laser treatment. This sometimes necessitates eyewear removal to evaluate tissue effects, which wastes time and poses safety concerns. The objective was to determine if "virtual" or "electronic" chromoendoscopy filters, as found on some endoscopy platforms, could alleviate the restricted visualization inherent to protective eyewear. A KTP laser was applied to porcine laryngeal tissue and recorded via video laryngoscopy with 1 optical (Olympus Narrow Band Imaging) and 8 digital (Pentax Medical I-scan) chromoendoscopy filters. Videos were viewed by 11 otolaryngologists wearing protective eyewear. Using a discrete visual analog scale, they rated each filter on its ability to improve visualization. No filter impaired visualization; 5 of 9 improved visualization. Based on statistical significance, the number of positive responses, and the lack of negative responses, narrow band imaging and the I-scan tone enhancement filter for leukoplakia performed best. These filters could shorten procedure time and improve safety; therefore, further clinical evaluation is warranted.
Temporal processing and adaptation in the songbird auditory forebrain.
Nagel, Katherine I; Doupe, Allison J
2006-09-21
Songbird auditory neurons must encode the dynamics of natural sounds at many volumes. We investigated how neural coding depends on the distribution of stimulus intensities. Using reverse-correlation, we modeled responses to amplitude-modulated sounds as the output of a linear filter and a nonlinear gain function, then asked how filters and nonlinearities depend on the stimulus mean and variance. Filter shape depended strongly on mean amplitude (volume): at low mean, most neurons integrated sound over many milliseconds, while at high mean, neurons responded more to local changes in amplitude. Increasing the variance (contrast) of amplitude modulations had less effect on filter shape but decreased the gain of firing in most cells. Both filter and gain changes occurred rapidly after a change in statistics, suggesting that they represent nonlinearities in processing. These changes may permit neurons to signal effectively over a wider dynamic range and are reminiscent of findings in other sensory systems.
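The linear-filter-plus-static-nonlinearity characterization described above is commonly estimated by reverse correlation. The toy sketch below simulates an amplitude-envelope stimulus and Poisson spike counts, recovers the filter as a spike-triggered average, and bins the filtered stimulus to trace the gain function; the stimulus, filter shape, and nonlinearity are hypothetical stand-ins, not the recorded songbird data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical amplitude-envelope stimulus and simulated spike counts.
T, L = 20000, 40                                  # samples, filter length (lags)
stim = rng.normal(size=T)
true_filter = np.exp(-np.arange(L) / 8.0) * np.sin(np.arange(L) / 3.0)
drive = np.convolve(stim, true_filter, mode="full")[:T]
rate = np.log1p(np.exp(drive - 1.0))              # softplus output nonlinearity
spikes = rng.poisson(rate * 0.1)

# Reverse correlation: spike-triggered average of the preceding stimulus.
lags = np.arange(L)
X = np.stack([np.roll(stim, k) for k in lags], axis=1)   # stimulus history rows
X[:L] = 0.0                                              # discard wrap-around rows
sta = X.T @ spikes / spikes.sum()                        # linear-filter estimate

# Static nonlinearity: mean spike count as a function of the filtered stimulus.
proj = X @ sta
bins = np.quantile(proj, np.linspace(0, 1, 11))
which = np.digitize(proj, bins[1:-1])
gain_curve = [spikes[which == b].mean() for b in range(10)]
```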
Separation of man-made and natural patterns in high-altitude imagery of agricultural areas
NASA Technical Reports Server (NTRS)
Samulon, A. S.
1975-01-01
A nonstationary linear digital filter is designed and implemented which extracts the natural features from high-altitude imagery of agricultural areas. Essentially, from an original image a new image is created which displays information related to soil properties, drainage patterns, crop disease, and other natural phenomena, and contains no information about crop type or row spacing. A model is developed to express the recorded brightness in a narrow-band image in terms of man-made and natural contributions and which describes statistically the spatial properties of each. The form of the minimum mean-square error linear filter for estimation of the natural component of the scene is derived and a suboptimal filter is implemented. Nonstationarity of the two-dimensional random processes contained in the model requires a unique technique for deriving the optimum filter. Finally, the filter depends on knowledge of field boundaries. An algorithm for boundary location is proposed, discussed, and implemented.
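Under the simplifying assumptions of stationarity and uncorrelated components with known power spectra, the minimum mean-square error linear estimator mentioned above reduces to a frequency-domain Wiener filter. The sketch below shows only that reduced form; the paper's filter is nonstationary and uses field-boundary information that is not modeled here.

```python
import numpy as np

def wiener_estimate(image, psd_natural, psd_manmade):
    """Frequency-domain MMSE (Wiener) estimate of the 'natural' component,
    assuming the natural and man-made components are uncorrelated and
    stationary with known power spectral densities (arrays matching the
    image's FFT grid) -- a simplification of the paper's nonstationary filter."""
    H = psd_natural / (psd_natural + psd_manmade + 1e-12)   # Wiener gain
    return np.real(np.fft.ifft2(H * np.fft.fft2(image)))
```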
Pharmacy students' test-taking motivation-effort on a low-stakes standardized test.
Waskiewicz, Rhonda A
2011-04-11
To measure third-year pharmacy students' level of motivation while completing the Pharmacy Curriculum Outcomes Assessment (PCOA) administered as a low-stakes test to better understand use of the PCOA as a measure of student content knowledge. Student motivation was manipulated through an incentive (ie, personal letter from the dean) and a process of statistical motivation filtering. Data were analyzed to determine any differences between the experimental and control groups in PCOA test performance, motivation to perform well, and test performance after filtering for low motivation-effort. Incentivizing students diminished the need for filtering PCOA scores for low effort. Where filtering was used, performance scores improved, providing a more realistic measure of aggregate student performance. To ensure that PCOA scores are an accurate reflection of student knowledge, incentivizing and/or filtering for low motivation-effort among pharmacy students should be considered fundamental best practice when the PCOA is administered as a low-stakes test.
Factors Affecting Hemodialysis Adequacy in Cohort of Iranian Patient with End Stage Renal Disease.
Shahdadi, Hosein; Balouchi, Abbas; Sepehri, Zahra; Rafiemanesh, Hosein; Magbri, Awad; Keikhaie, Fereshteh; Shahakzehi, Ahmad; Sarjou, Azizullah Arbabi
2016-08-01
There are many factors that can affect dialysis adequacy, such as the type of vascular access, filter type, device used, and the dose and route of erythropoiesis-stimulating agents (ESA) used. The aim of this study was to investigate factors affecting hemodialysis adequacy in a cohort of Iranian patients with end stage renal disease (ESRD). This is a cross-sectional study conducted on 133 hemodialysis patients referred to two dialysis units in Sistan-Baluchistan province in the cities of Zabol and Iranshahr, Iran. We examined the effects of the type of vascular access, the filter type, the device used, and the dose, route of delivery, and type of ESA used on hemodialysis adequacy. Dialysis adequacy was calculated using the Kt/V formula, and a two-part questionnaire was used that covered demographic data as well as access type, filter type, device used for hemodialysis (HD), type of Eprex injection, route of administration, blood group, and hemoglobin response to ESA. The data were analyzed using the SPSS v16 statistical software with descriptive statistics, the Mann-Whitney test, and multiple regression where applicable. The calculated dialysis adequacy ranged from 0.28 to 2.39. Of the patients, 76.7% were dialyzed via an arteriovenous fistula (AVF) and 23.3% used central venous catheters (CVC). There was no statistically significant difference in dialysis adequacy by vascular access type, device used for HD (Fresenius and B. Braun), or filter used for HD (p > 0.05). However, a significant difference was observed between dialysis adequacy and Eprex injection route and patients' time of dialysis (p < 0.05). Subcutaneous ESA (Eprex) injection and dialysis shift (being dialyzed in the morning) can have a positive impact on dialysis adequacy. Patients should be educated on the fact that the type of device used for HD and the vascular access used have no significant effects on dialysis adequacy.
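The abstract does not state which Kt/V formulation was applied. For illustration, the sketch below implements one widely used single-pool choice, the second-generation Daugirdas equation; the variable names and the worked numbers are assumptions for the example only.

```python
import math

def single_pool_ktv(bun_pre, bun_post, session_hours, uf_liters, post_weight_kg):
    """Second-generation Daugirdas single-pool Kt/V:
    Kt/V = -ln(R - 0.008*t) + (4 - 3.5*R) * UF / W,
    where R is the post/pre BUN ratio, t the session length in hours,
    UF the ultrafiltration volume (L) and W the post-dialysis weight (kg).
    One common formulation; the study does not specify which one it used."""
    r = bun_post / bun_pre
    return (-math.log(r - 0.008 * session_hours)
            + (4.0 - 3.5 * r) * uf_liters / post_weight_kg)

# Example: pre-BUN 70, post-BUN 25 mg/dL, 4 h session, 2 L ultrafiltration, 70 kg.
print(round(single_pool_ktv(70, 25, 4, 2, 70), 2))   # about 1.2
```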
GPS vertical axis performance enhancement for helicopter precision landing approach
NASA Technical Reports Server (NTRS)
Denaro, Robert P.; Beser, Jacques
1986-01-01
Several areas were investigated for improving vertical accuracy for a rotorcraft using the differential Global Positioning System (GPS) during a landing approach. Continuous delta-ranging was studied, and the potential improvement achieved by estimating acceleration was evaluated by comparing the performance of several filters on a constant-acceleration turn and a rough landing profile: a position-velocity (PV) filter, a position-velocity-constant acceleration (PVAC) filter, and a position-velocity-turning acceleration (PVAT) filter. In overall statistics, the PVAC filter was found to be the most efficient, with the more complex PVAT filter performing equally well. Vertical performance was not significantly different among the filters. Satellite selection algorithms based on vertical errors only (vertical dilution of precision, or VDOP) and on even-weighted cross-track and vertical errors (XVDOP) were tested. The inclusion of an altimeter was studied by modifying the PVAC filter to include a baro bias estimate; improved vertical accuracy during degraded DOP conditions resulted. Flight test results for raw differential positioning, excluding filter effects, indicated that the differential mode significantly improved overall navigation accuracy. A landing glidepath steering algorithm was devised which exploits the flexibility of GPS in determining precise relative position. A method for propagating the steering command over the GPS update interval was implemented.
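For readers unfamiliar with the filter structures compared above, the sketch below shows one predict/update cycle of the simplest of them, a single-axis position-velocity (PV) Kalman filter with a white-noise acceleration model. The noise settings are placeholders and the code is a generic textbook form, not the flight filter.

```python
import numpy as np

def pv_kalman_step(x, P, z, dt, q=1.0, r=4.0):
    """One predict/update cycle of a 1-axis position-velocity (PV) Kalman
    filter with a white-noise acceleration model; a toy sketch of the
    simplest filter structure compared in the study."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                 # constant-velocity model
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])                   # process noise
    H = np.array([[1.0, 0.0]])                            # position measurement
    x, P = F @ x, F @ P @ F.T + Q                         # predict
    S = H @ P @ H.T + r                                   # innovation covariance
    K = P @ H.T / S                                       # Kalman gain
    x = x + (K * (z - H @ x)).ravel()                     # update state
    P = (np.eye(2) - K @ H) @ P                           # update covariance
    return x, P
```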
Correlation of Spatially Filtered Dynamic Speckles in Distance Measurement Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Semenov, Dmitry V.; Nippolainen, Ervin; Kamshilin, Alexei A.
2008-04-15
In this paper, statistical properties of spatially filtered dynamic speckles are considered. This phenomenon has not yet been studied in sufficient detail, although spatial filtering is an important instrument for speckle velocity measurements. In the case of spatial filtering, speckle velocity information is derived from the modulation frequency of the filtered light power, which is measured by a photodetector. A typical photodetector output is a narrow-band random noise signal that includes non-informative intervals. Therefore, reasonably precise frequency measurement requires averaging, and averaging in turn assumes uncorrelated samples. However, in the course of this research we found that correlation is a typical property not only of dynamic speckle patterns but also of spatially filtered speckles. With spatial filtering, the correlation is observed as a response of measurements applied to the same part of the object surface, or when several adjacent photodetectors are used simultaneously. The observed correlations cannot be explained using the properties of unfiltered dynamic speckles alone. As we demonstrate, the subject of this paper is important not only from a purely theoretical point of view but also from the point of view of applied speckle metrology. For example, using a single spatial filter and an array of photodetectors can greatly improve the accuracy of speckle velocity measurements.
Damping system for torsion modes of mirror isolation filters in TAMA300
NASA Astrophysics Data System (ADS)
Arase, Y.; Takahashi, R.; Arai, K.; Tatsumi, D.; Fukushima, M.; Yamazaki, T.; Fujimoto, Masa-Katsu; Agatsuma, K.; Nakagawa, N.
2008-07-01
The seismic attenuation system (SAS) in TAMA300 consists of a three-legged inverted pendulum and mirror isolation filters in order to provide a high level of seismic isolation. However, the mirror isolation filters have torsion modes with long decay times, which disturb interferometer operation for about half an hour if they get excited. In order to damp the torsion modes of the filters, we constructed a digital damping system using reflective photosensors with a large linear range. This system was installed on all four SASs. By damping the target torsion modes, the effective quality factors of the torsion modes are reduced to less than 10 or to an unmeasurable level. This system is expected to reduce the inoperative period caused by torsion mode excitation, and thus will contribute to improving the duty time of the gravitational wave detector.
A collaborative filtering recommendation algorithm based on weighted SimRank and social trust
NASA Astrophysics Data System (ADS)
Su, Chang; Zhang, Butao
2017-05-01
Collaborative filtering is one of the most widely used recommendation technologies, but the data sparsity and cold start problems of collaborative filtering algorithms are difficult to solve effectively. In order to alleviate the data sparsity problem, firstly, a weighted improved SimRank algorithm is proposed to compute the rating similarity between users in the rating data set. The improved SimRank can find more nearest neighbors for target users according to the propagation of rating similarity. Then, we build a trust network and introduce the calculation of trust degree on the trust relationship data set. Finally, we combine rating similarity and trust into a comprehensive similarity in order to find more appropriate nearest neighbors for the target user. Experimental results show that the proposed algorithm effectively improves the recommendation precision of the collaborative filtering algorithm.
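A minimal sketch of the final fusion step described above is given below: a rating-based similarity matrix and a trust matrix are blended with a mixing weight, and the blended similarity drives a k-nearest-neighbor rating prediction. The mixing weight alpha and the neighborhood size are hypothetical; the weighted-SimRank and trust-degree computations themselves are abstracted away.

```python
import numpy as np

def combined_similarity(rating_sim, trust, alpha=0.6):
    """Blend a rating-based similarity matrix with a trust matrix.
    `alpha` is a hypothetical mixing weight; the paper's weighted SimRank
    and trust-degree computations are not reproduced here."""
    return alpha * rating_sim + (1.0 - alpha) * trust

def predict_rating(ratings, sim, user, item, k=10):
    """Predict one rating from the k most similar neighbors that rated the item.
    `ratings` is a users x items array with 0 marking missing entries."""
    rated = np.flatnonzero(ratings[:, item])
    rated = rated[rated != user]
    if rated.size == 0:
        return ratings[ratings > 0].mean()                # global-mean fallback
    neighbors = rated[np.argsort(sim[user, rated])[::-1][:k]]
    w = sim[user, neighbors]
    if w.sum() <= 0:
        return ratings[ratings > 0].mean()
    return float(w @ ratings[neighbors, item] / w.sum())
```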
Alexandropoulou, Ioanna G; Konstantinidis, Theocharis G; Parasidis, Theodoros A; Nikolaidis, Christos; Panopoulou, Maria; Constantinidis, Theodoros C
2013-12-01
Recent findings have identified professional drivers as being at an increased risk of Legionnaires' disease. Our hypothesis was that used car cabin air filters represent a reservoir of Legionella bacteria, and thus a potential pathway for contamination. We analysed used cabin air filters from various types of car. The filters were analysed by culture and by molecular methods. Our findings indicated that almost a third of air filters were colonized with Legionella pneumophila. Here, we present the first finding of Legionella spp. in used car cabin air filters. Further investigations are needed in order to confirm this exposure pathway. The presence of Legionella bacteria in used cabin air filters may have been an unknown source of infection until now.
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e., the Kalman filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors, which are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied to the estimation and propagation of a highly eccentric orbit, and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
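The propagate/weight/resample cycle of a bootstrap particle filter, which underlies the approach described above, is illustrated in the sketch below on a standard one-dimensional nonlinear toy model; it is not the orbit-propagation implementation, and all model constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_pf(zs, n_particles=2000, q=0.5, r=1.0):
    """Generic bootstrap (SIR) particle filter on a toy 1-D nonlinear model;
    only the propagate / weight / resample structure mirrors the paper."""
    x = rng.normal(0.0, 1.0, n_particles)                 # initial particle cloud
    estimates = []
    for k, z in enumerate(zs):
        # Propagate each particle through the nonlinear dynamics plus noise.
        x = (0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * k)
             + rng.normal(0.0, np.sqrt(q), n_particles))
        # Weight particles by the measurement likelihood.
        w = np.exp(-0.5 * (z - x**2 / 20) ** 2 / r) + 1e-300
        w /= w.sum()
        estimates.append(np.sum(w * x))                   # posterior-mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)   # resample
        x = x[idx]
    return np.array(estimates)
```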
NASA Astrophysics Data System (ADS)
Nagashima, Keisuke; Tsubouchi, Masaaki; Ochi, Yoshihiro; Maruyama, Momoko
2018-03-01
We have proposed an improved contact grating device for generating terahertz waves efficiently and have succeeded in developing the device with a very high diffraction efficiency and a wide spectral width. This device has a bi-angular filter and a Fabry-Perot-type structure, which are composed of dielectric multilayers. The bi-angular filter is designed to reflect the 0th-order wave and transmit the -1st-order diffraction wave. Numerical calculations indicate that the new device has a maximum diffraction efficiency over 99% and a spectral width of approximately 20 nm. We measured a high efficiency of 90% over a broad spectral range using a fabricated device.
An accurate nonlinear stochastic model for MEMS-based inertial sensor error with wavelet networks
NASA Astrophysics Data System (ADS)
El-Diasty, Mohammed; El-Rabbany, Ahmed; Pagiatakis, Spiros
2007-12-01
The integration of the Global Positioning System (GPS) with an Inertial Navigation System (INS) has been widely used in many applications for positioning and orientation purposes. Traditionally, random walk (RW), Gauss-Markov (GM), and autoregressive (AR) processes have been used to develop the stochastic model in classical Kalman filters. The main disadvantage of the classical Kalman filter is the potentially unstable linearization of the nonlinear dynamic system. Consequently, a nonlinear stochastic model is not optimal in derivative-based filters due to the expected linearization error. With a derivativeless filter such as the unscented Kalman filter or the divided difference filter, the filtering process of a complicated, highly nonlinear dynamic system is possible without linearization error. This paper develops a novel nonlinear stochastic model for inertial sensor error using a wavelet network (WN). A wavelet network is a highly nonlinear model, which has recently been introduced as a powerful tool for modelling and prediction. Static and kinematic data sets are collected using a MEMS-based IMU (DQI-100) to develop the stochastic model in the static mode and then implement it in the kinematic mode. The derivativeless filtering method using GM, AR, and the proposed WN-based processes is used to validate the new model. It is shown that the first-order WN-based nonlinear stochastic model gives superior positioning results to the first-order GM and AR models, with an overall improvement of 30% when 30- and 60-second GPS outages are introduced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzen, S.; Moussy, J.-B., E-mail: jean-baptiste.moussy@cea.fr; Wei, P.
2014-05-05
NiFe2O4(111) ultrathin films (3–5 nm) have been grown by oxygen-assisted molecular beam epitaxy and integrated as effective spin-filter barriers. Structural and magnetic characterizations have been performed in order to investigate the presence of defects that could limit the spin filtering efficiency. These analyses have revealed the full strain relaxation of the layers with a cationic order in agreement with the inverse spinel structure, but also the presence of antiphase boundaries. A spin-polarization up to +25% has been directly measured by the Meservey-Tedrow technique in Pt(111)/NiFe2O4(111)/γ-Al2O3(111)/Al tunnel junctions. The unexpected positive sign and relatively small value of the spin-polarization are discussed, in comparison with predictions and previous indirect tunnelling magnetoresistance measurements.
Chau, Destiny F; Vasilopoulos, Terrie; Schoepf, Miriam; Zhang, Christina; Fahy, Brenda G
2016-09-01
Complex surgical and critically ill pediatric patients rely on syringe infusion pumps for precise delivery of IV medications. Low flow rates and in-line IV filter use may affect drug delivery. To determine the effects of an in-line filter to remove air and/or contaminants on syringe pump performance at low flow rates, we compared the measured rates with the programmed flow rates with and without in-line IV filters. Standardized IV infusion assemblies with and without IV filters (filter and control groups) attached to a 10-mL syringe were primed and then loaded onto a syringe pump and connected to a 16-gauge, 16-cm single-lumen catheter. The catheter was suspended in a normal saline fluid column to simulate the back pressure from central venous circulation. The delivered infusate was measured by gravimetric methods at predetermined time intervals, and flow rate was calculated. Experimental trials for initial programmed rates of 1.0, 0.8, 0.6, and 0.4 mL/h were performed in control and filter groups. For each trial, the flow rate was changed to double the initial flow rate and was then returned to the initial flow rate to analyze pump performance for titration of rates often required during medication administration. These conditions (initial rate, doubling of initial rate, and return to initial rate) were analyzed separately for steady-state flow rate and time to steady state, whereas their average was used for percent deviation analysis. Differences between control and filter groups were assessed using Student t tests with adjustment for multiplicity (using n = 3 replications per group). Mean time from 0 to initial flow (startup delay) was <1 minute in both groups with no statistical difference between groups (P = 1.0). The average time to reach steady-state flow after infusion startup or rate changes was not statistically different between the groups (range, 0.8-5.5 minutes), for any flow rate or part of the trial (initial rate, doubling of initial rate, and return to initial rate), although the study was underpowered to detect small time differences. Overall, the mean steady-state flow rate for each trial was below the programmed flow rate with negative mean percent deviations for each trial. In the 1.0-mL/h initial rate trial, the steady-state flow rate attained was lower in the filter than the control group for the initial rate (P = 0.04) and doubling of initial rate (P = 0.04) with a trend during the return to initial rate (P = 0.06), although this same effect was not observed when doubling the initial rate trials of 0.8 or 0.6 mL/h or any other rate trials compared with the control group. With low flow rates used in complex surgical and pediatric critically ill patients, the addition of IV filters did not confer statistically significant changes in startup delay, flow variability, or time to reach steady-state flow of medications administered by syringe infusion pumps. The overall flow rate was lower than programmed flow rate with or without a filter.
Directional bilateral filters for smoothing fluorescence microscopy images
NASA Astrophysics Data System (ADS)
Venkatesh, Manasij; Mohan, Kavya; Seelamantula, Chandra Sekhar
2015-08-01
Images obtained through fluorescence microscopy at low numerical aperture (NA) are noisy and have poor resolution. Images of specimens such as F-actin filaments obtained using confocal or widefield fluorescence microscopes contain directional information, and it is important that an image smoothing or filtering technique preserve the directionality. F-actin filaments are widely studied in pathology because abnormalities in actin dynamics play a key role in the diagnosis of cancer, cardiac diseases, vascular diseases, myofibrillar myopathies, neurological disorders, etc. We develop the directional bilateral filter as a means of filtering out the noise in the image without significantly altering the directionality of the F-actin filaments. The bilateral filter is anisotropic to start with, but we add an additional degree of anisotropy by employing an oriented domain kernel for smoothing. The orientation is locally adapted using a structure tensor, and the parameters of the bilateral filter are optimized within the framework of statistical risk minimization. We show that the directional bilateral filter has better denoising performance than the traditional Gaussian bilateral filter and other denoising techniques such as SURE-LET, non-local means, and guided image filtering at various noise levels in terms of peak signal-to-noise ratio (PSNR). We also show quantitative improvements in low NA images of F-actin filaments.
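A minimal sketch of an oriented (directional) bilateral filter is given below: the spatial kernel is an anisotropic Gaussian elongated along a given angle, combined with the usual range kernel. In the paper the orientation is adapted locally from a structure tensor and the parameters are risk-optimized; here a single global angle and fixed parameters keep the example short.

```python
import numpy as np

def oriented_bilateral_filter(img, theta, sigma_u=3.0, sigma_v=1.0,
                              sigma_r=0.1, radius=5):
    """Bilateral filter with an anisotropic spatial kernel elongated along
    angle `theta` (radians). sigma_r assumes intensities roughly in [0, 1];
    in the paper the orientation is adapted locally, not fixed globally."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    u = xs * np.cos(theta) + ys * np.sin(theta)           # along-orientation axis
    v = -xs * np.sin(theta) + ys * np.cos(theta)          # across-orientation axis
    domain = np.exp(-0.5 * ((u / sigma_u) ** 2 + (v / sigma_v) ** 2))
    out = np.zeros_like(img, dtype=np.float64)
    pad = np.pad(img.astype(np.float64), radius, mode="reflect")
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_kernel = np.exp(-0.5 * ((patch - img[i, j]) / sigma_r) ** 2)
            weights = domain * rng_kernel
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```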
Brasel, T L; Douglas, D R; Wilson, S C; Straus, D C
2005-01-01
Highly respirable particles (diameter, <1 microm) constitute the majority of particulate matter found in indoor air. It is hypothesized that these particles serve as carriers for toxic compounds, specifically the compounds produced by molds in water-damaged buildings. The presence of airborne Stachybotrys chartarum trichothecene mycotoxins on particles smaller than conidia (e.g., fungal fragments) was therefore investigated. Cellulose ceiling tiles with confluent Stachybotrys growth were placed in gas-drying containers through which filtered air was passed. Exiting particulates were collected by using a series of polycarbonate membrane filters with decreasing pore sizes. Scanning electron microscopy was employed to determine the presence of conidia on the filters. A competitive enzyme-linked immunosorbent assay (ELISA) specific for macrocyclic trichothecenes was used to analyze filter extracts. Cross-reactivity to various mycotoxins was examined to confirm the specificity. Statistically significant (P < 0.05) ELISA binding was observed primarily for macrocyclic trichothecenes at concentrations of 50 and 5 ng/ml and 500 pg/ml (58.4 to 83.5% inhibition). Of the remaining toxins tested, only verrucarol and diacetylverrucarol (nonmacrocyclic trichothecenes) demonstrated significant binding (18.2 and 51.7% inhibition, respectively) and then only at high concentrations. The results showed that extracts from conidium-free filters demonstrated statistically significant (P < 0.05) antibody binding that increased with sampling time (38.4 to 71.9% inhibition, representing a range of 0.5 to 4.0 ng/ml). High-performance liquid chromatography analysis suggested the presence of satratoxin H in conidium-free filter extracts. These data show that S. chartarum trichothecene mycotoxins can become airborne in association with intact conidia or smaller particles. These findings may have important implications for indoor air quality assessment.
Reconstructing Spectral Scenes Using Statistical Estimation to Enhance Space Situational Awareness
2006-12-01
simultaneously spatially and spectrally deblur the images collected from ASIS. The algorithms are based on proven estimation theories and do not ... collected with any system using a filtering technology known as Electronic Tunable Filters (ETFs). Previous methods to deblur spectral images collected ... spectrally deblurring than the previously investigated methods. This algorithm expands on a method used for increasing the spectral resolution in gamma-ray
NASA Astrophysics Data System (ADS)
Liu, Chanjuan; van Netten, Jaap J.; Klein, Marvin E.; van Baal, Jeff G.; Bus, Sicco A.; van der Heijden, Ferdi
2013-12-01
Early detection of (pre-)signs of ulceration on a diabetic foot is valuable for clinical practice. Hyperspectral imaging is a promising technique for detection and classification of such (pre-)signs. However, the number of spectral bands should be limited to avoid overfitting, which is critical for pixel classification with hyperspectral image data. The goal was to design a detector/classifier based on spectral imaging (SI) with a small number of optical bandpass filters. The performance and stability of the design were also investigated. The selection of the bandpass filters boils down to a feature selection problem. A dataset was built, containing reflectance spectra of 227 skin spots from 64 patients, measured with a spectrometer. Each skin spot was annotated manually by clinicians as "healthy" or as a specific (pre-)sign of ulceration. Statistical analysis of the data set showed that the number of required filters is between 3 and 7, depending on additional constraints on the filter set. The stability analysis revealed that shot noise was the most critical factor affecting the classification performance. It indicated that this impact could be avoided in future SI systems with a camera sensor whose saturation level is higher than 10^6, or by post-image processing.
Harmonic distortion in microwave photonic filters.
Rius, Manuel; Mora, José; Bolea, Mario; Capmany, José
2012-04-09
We present a theoretical and experimental analysis of nonlinear microwave photonic filters. Going beyond the conventional low-modulation-index condition commonly used to neglect high-order terms, we analyze the harmonic distortion involved in microwave photonic structures with periodic and non-periodic frequency responses. We show that it is possible to design microwave photonic filters with reduced harmonic distortion and high linearity even under large-signal operation.
Spectral imagery with an acousto-optic tunable filter
NASA Technical Reports Server (NTRS)
Smith, W. Hayden; Schempp, W. V.; Conner, C. P.; Katzka, P.
1987-01-01
A spectral imager for astronomy and aeronomy has been fabricated using collinear or non-collinear acousto-optic tunable filters (AOTFs). The AOTF provides high transparency, rapid tunability over a wide wavelength range, a capability of varying the bandwidth by more than an order of magnitude, high etendue, and linearly polarized output. Some typical observational applications of acousto-optic tunable filters used in several configurations at astronomical telescopes are demonstrated.
Attitude Error Representations for Kalman Filtering
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Bauer, Frank H. (Technical Monitor)
2002-01-01
The quaternion has the lowest dimensionality possible for a globally nonsingular attitude representation. The quaternion must obey a unit norm constraint, though, which has led to the development of an extended Kalman filter using a quaternion for the global attitude estimate and a three-component representation for attitude errors. We consider various attitude error representations for this Multiplicative Extended Kalman Filter and its second-order extension.
Using volcano plots and regularized-chi statistics in genetic association studies.
Li, Wentian; Freudenberg, Jan; Suh, Young Ju; Yang, Yaning
2014-02-01
Labor intensive experiments are typically required to identify the causal disease variants from a list of disease associated variants in the genome. For designing such experiments, candidate variants are ranked by their strength of genetic association with the disease. However, the two commonly used measures of genetic association, the odds-ratio (OR) and p-value may rank variants in different order. To integrate these two measures into a single analysis, here we transfer the volcano plot methodology from gene expression analysis to genetic association studies. In its original setting, volcano plots are scatter plots of fold-change and t-test statistic (or -log of the p-value), with the latter being more sensitive to sample size. In genetic association studies, the OR and Pearson's chi-square statistic (or equivalently its square root, chi; or the standardized log(OR)) can be analogously used in a volcano plot, allowing for their visual inspection. Moreover, the geometric interpretation of these plots leads to an intuitive method for filtering results by a combination of both OR and chi-square statistic, which we term "regularized-chi". This method selects associated markers by a smooth curve in the volcano plot instead of the right-angled lines which corresponds to independent cutoffs for OR and chi-square statistic. The regularized-chi incorporates relatively more signals from variants with lower minor-allele-frequencies than chi-square test statistic. As rare variants tend to have stronger functional effects, regularized-chi is better suited to the task of prioritization of candidate genes. Copyright © 2013 Elsevier Ltd. All rights reserved.
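The abstract does not give the functional form of the regularized-chi score, so the sketch below only illustrates the general idea: compute a per-variant log odds ratio and chi statistic from 2x2 allele counts, then rank variants by a smooth combination of the two so that selection follows a curve rather than right-angled cutoffs in the volcano plot. The particular combination used here is a hypothetical stand-in.

```python
import numpy as np

def volcano_stats(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """Per-variant log odds ratio and Pearson chi statistic from 2x2 allele
    counts (vectorized; a 0.5 continuity correction in the OR avoids zeros)."""
    a, b = np.asarray(case_alt, float), np.asarray(case_ref, float)
    c, d = np.asarray(ctrl_alt, float), np.asarray(ctrl_ref, float)
    log_or = np.log(((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5)))
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return log_or, np.sqrt(chi2)

def regularized_chi_like(log_or, chi, lam=1.0):
    """A smooth combination of effect size and test statistic that selects
    variants along a curve in the volcano plot -- a stand-in only, since the
    exact form of the paper's regularized-chi is not given in the abstract."""
    return chi * np.abs(log_or) / (np.abs(log_or) + lam)
```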
Research on the Ordered Mesoporous Silica for Tobacco Harm Reduction
NASA Astrophysics Data System (ADS)
Wang, Y.; Y Li, Z.; Ding, J. X.; Hu, Z. J.; Liu, Z.; Zhou, G.; Huang, T. H.
2017-12-01
To reduce tobacco harm, this paper prepared an ordered mesoporous silica using the triblock copolymer Pluronic P123 as a template. The material was characterized by X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM) and nitrogen adsorption/desorption. This ordered mesoporous silica was then added into the cigarette filter in order to study its effect on the cigarette harm index. The results show that SBA-15 exhibited a granular morphology with an ordered arrangement of tubular pores in a 2-D hexagonal structure. The application of SBA-15 in the cigarette filter can selectively reduce harmful components in cigarette smoke such as crotonaldehyde, hydrogen cyanide, benzopyrene and tar. The synthesized SBA-15 could thus appreciably reduce the cigarette harm index.
On Applications of Pyramid Doubly Joint Bilateral Filtering in Dense Disparity Propagation
NASA Astrophysics Data System (ADS)
Abadpour, Arash
2014-06-01
Stereopsis is the basis for numerous tasks in machine vision, robotics, and 3D data acquisition and processing. In order for the subsequent algorithms to function properly, it is important that an affordable method exists that, given a pair of images taken by two cameras, can produce a representation of disparity or depth. This topic has been an active research field since the early days of work on image processing problems, and a rich literature is available on it. Joint bilateral filters have recently been proposed as a more affordable alternative to anisotropic diffusion. This class of image operators utilizes correlation between multiple modalities for purposes such as interpolation and upscaling. In this work, we develop the application of bilateral filtering for converting a large set of sparse disparity measurements into a dense disparity map. This paper develops novel methods for utilizing bilateral filters in joint, pyramid, and doubly joint settings, for purposes including missing value estimation and upscaling. We utilize images of natural and man-made scenes in order to exhibit the possibilities offered through the use of pyramid doubly joint bilateral filtering for stereopsis.
Vargas-Meléndez, Leandro; Boada, Beatriz L; Boada, María Jesús L; Gauchía, Antonio; Díaz, Vicente
2016-08-31
This article presents a novel estimator based on sensor fusion, which combines the Neural Network (NN) with a Kalman filter in order to estimate the vehicle roll angle. The NN estimates a "pseudo-roll angle" through variables that are easily measured from Inertial Measurement Unit (IMU) sensors. An IMU is a device that is commonly used for vehicle motion detection, and its cost has decreased during recent years. The pseudo-roll angle is introduced in the Kalman filter in order to filter noise and minimize the variance of the norm and maximum errors' estimation. The NN has been trained for J-turn maneuvers, double lane change maneuvers and lane change maneuvers at different speeds and road friction coefficients. The proposed method takes into account the vehicle non-linearities, thus yielding good roll angle estimation. Finally, the proposed estimator has been compared with one that uses the suspension deflections to obtain the pseudo-roll angle. Experimental results show the effectiveness of the proposed NN and Kalman filter-based estimator.
Optimally Distributed Kalman Filtering with Data-Driven Communication †
Dormann, Katharina
2018-01-01
For multisensor data fusion, distributed state estimation techniques that enable a local processing of sensor data are the means of choice in order to minimize storage and communication costs. In particular, a distributed implementation of the optimal Kalman filter has recently been developed. A significant disadvantage of this algorithm is that the fusion center needs access to each node so as to compute a consistent state estimate, which requires full communication each time an estimate is requested. In this article, different extensions of the optimally distributed Kalman filter are proposed that employ data-driven transmission schemes in order to reduce communication expenses. As a first relaxation of the full-rate communication scheme, it can be shown that each node only has to transmit every second time step without endangering consistency of the fusion result. Also, two data-driven algorithms are introduced that even allow for lower transmission rates, and bounds are derived to guarantee consistent fusion results. Simulations demonstrate that the data-driven distributed filtering schemes can outperform a centralized Kalman filter that requires each measurement to be sent to the center node. PMID:29596392
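One simple data-driven (event-triggered) transmission rule consistent with the idea above is sketched below: a node transmits its local estimate only when it has drifted sufficiently from the last transmitted one. The threshold, the Mahalanobis-style trigger, and the class interface are illustrative assumptions; the paper's specific schemes and their consistency guarantees are not reproduced.

```python
import numpy as np

class DataDrivenSender:
    """Event-triggered transmission sketch: a node sends its local estimate
    only when it has drifted from the last transmitted estimate by more than
    `threshold` (a Mahalanobis-like distance under the local covariance P).
    This illustrates the general data-driven idea only."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.last_sent = None

    def maybe_send(self, x, P):
        if self.last_sent is None:
            self.last_sent = x.copy()
            return x                                      # first estimate always sent
        d = x - self.last_sent
        if float(d @ np.linalg.solve(P, d)) > self.threshold:
            self.last_sent = x.copy()
            return x                                      # significant change: transmit
        return None                                       # otherwise stay silent
```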
NASA Astrophysics Data System (ADS)
Salem, Mohamed Shaker; Abdelaleem, Asmaa Mohamed; El-Gamal, Abear Abdullah; Amin, Mohamed
2017-01-01
One-dimensional silicon-based photonic crystals are formed by the electrochemical anodization of silicon substrates in hydrofluoric acid-based solution using an appropriate current density profile. In order to create a multi-band optical filter, two fabrication approaches are compared and discussed. The first approach utilizes a current profile composed of a linear combination of sinusoidal current waveforms having different frequencies. The individual frequency of the waveform maps to a characteristic stop band in the reflectance spectrum. The stopbands of the optical filter created by the second approach, on the other hand, are controlled by stacking multiple porous silicon rugate multilayers having different fabrication conditions. The morphology of the resulting optical filters is tuned by controlling the electrolyte composition and the type of the silicon substrate. The reduction of sidelobes arising from the interference in the multilayers is observed by applying an index matching current profile to the anodizing current waveform. In order to stabilize the resulting optical filters against natural oxidation, atomic layer deposition of silicon dioxide on the pore wall is employed.
A heuristic statistical stopping rule for iterative reconstruction in emission tomography.
Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D
2013-01-01
We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded a significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction, with a controlled computation time.
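For context, the sketch below shows the standard MLEM multiplicative update with a simple log-likelihood-based stopping heuristic in place of the paper's statistical rule; the tolerance and iteration cap are placeholders.

```python
import numpy as np

def mlem(A, y, n_iter_max=200, tol=1e-4):
    """MLEM reconstruction with update x <- x / (A^T 1) * A^T (y / (A x)),
    stopped when the relative change of the Poisson log-likelihood falls
    below `tol` -- a simple placeholder, not the paper's heuristic rule.
    A is a nonnegative system matrix, y the measured counts."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])                      # sensitivity image
    prev_ll = -np.inf
    for _ in range(n_iter_max):
        proj = A @ x + 1e-12
        x *= (A.T @ (y / proj)) / (sens + 1e-12)          # multiplicative update
        proj = A @ x + 1e-12
        ll = np.sum(y * np.log(proj) - proj)              # Poisson log-likelihood
        if abs(ll - prev_ll) < tol * abs(ll):
            break                                         # heuristic stopping point
        prev_ll = ll
    return x
```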
A Statistical Analysis of IrisCode and Its Security Implications.
Kong, Adams Wai-Kin
2015-03-01
IrisCode has been used to gather iris data for 430 million people. Because of the huge impact of IrisCode, it is vital that it is completely understood. This paper first studies the relationship between bit probabilities and a mean of iris images (The mean of iris images is defined as the average of independent iris images.) and then uses the Chi-square statistic, the correlation coefficient and a resampling algorithm to detect statistical dependence between bits. The results show that the statistical dependence forms a graph with a sparse and structural adjacency matrix. A comparison of this graph with a graph whose edges are defined by the inner product of the Gabor filters that produce IrisCodes shows that partial statistical dependence is induced by the filters and propagates through the graph. Using this statistical information, the security risk associated with two patented template protection schemes that have been deployed in commercial systems for producing application-specific IrisCodes is analyzed. To retain high identification speed, they use the same key to lock all IrisCodes in a database. The belief has been that if the key is not compromised, the IrisCodes are secure. This study shows that even without the key, application-specific IrisCodes can be unlocked and that the key can be obtained through the statistical dependence detected.
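The pairwise dependence detection described above can be illustrated with a plain chi-square test of independence between bit positions, thresholded to build a sparse adjacency matrix, as sketched below; the 0.001 critical value is an illustrative choice, and the paper's correlation-coefficient and resampling analyses are not reproduced.

```python
import numpy as np

def bit_dependence_graph(codes, threshold=10.83):
    """Pairwise chi-square test of independence between bit positions of a
    set of binary codes (rows = codes, columns = bits). The default threshold
    is roughly the 0.001 critical value of chi-square with 1 dof."""
    codes = np.asarray(codes)
    n, m = codes.shape
    adjacency = np.zeros((m, m), dtype=bool)
    for i in range(m):
        for j in range(i + 1, m):
            bi, bj = codes[:, i], codes[:, j]
            a = float(np.sum((bi == 1) & (bj == 1)))
            b = float(np.sum((bi == 1) & (bj == 0)))
            c = float(np.sum((bi == 0) & (bj == 1)))
            d = float(np.sum((bi == 0) & (bj == 0)))
            denom = (a + b) * (c + d) * (a + c) * (b + d)
            if denom == 0:
                continue                                  # degenerate margin: skip
            chi2 = n * (a * d - b * c) ** 2 / denom
            adjacency[i, j] = adjacency[j, i] = chi2 > threshold
    return adjacency
```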
Radon anomalies: When are they possible to be detected?
NASA Astrophysics Data System (ADS)
Passarelli, Luigi; Woith, Heiko; Seyis, Cemil; Nikkhoo, Mehdi; Donner, Reik
2017-04-01
Records of the radon noble gas in different environments such as soil, air, groundwater, rock, caves, and tunnels typically display cyclic variations, including diurnal (S1), semidiurnal (S2) and seasonal components. But there are also cases where these cycles are absent. Interestingly, radon emission can also be affected by transient processes, which inhibit or enhance the radon carrying process at the surface. This results in transient changes in the radon emission rate, which are superimposed on the low and high frequency cycles. The complexity of the spectral content of the radon time-series makes any statistical analysis aimed at understanding the physical driving processes a challenging task. In the past decades there have been several attempts to relate changes in radon emission rate with physical triggering processes such as earthquake occurrence. One of the problems in this type of investigation is to objectively detect anomalies in the radon time-series. In the present work, we propose a simple and objective statistical method for detecting changes in the radon emission rate time-series. The method uses non-parametric statistical tests (e.g., Kolmogorov-Smirnov) to compare empirical distributions of radon emission rate obtained by sequentially applying moving time windows to the time-series. The statistical test indicates whether two empirical distributions of data originate from the same distribution at a desired significance level. We test the algorithm on synthetic data in order to explore the sensitivity of the statistical test to the sample size. We then apply the test to six radon emission rate recordings from stations located around the Marmara Sea obtained within the MARsite project (MARsite has received funding from the European Union's Seventh Programme for research, technological development and demonstration under grant agreement No 308417). We conclude that the test performs relatively well at identifying transient changes in the radon emission rate, but the results are strongly dependent on the length of the time window and/or the type of frequency filtering. More importantly, when the raw time-series contain cyclic components (e.g., seasonal or diurnal variations), the search for anomalies related to transients becomes meaningless. We conclude that an objective identification of transient changes can be performed only after filtering the raw time-series for the physically meaningful frequency content.
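A minimal version of the windowed two-sample comparison described above can be sketched as follows: slide two adjacent windows over a pre-filtered radon series and flag window boundaries where a Kolmogorov-Smirnov test rejects equality of the two empirical distributions. Window length, step, and significance level are illustrative choices.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_change_scan(series, window=144, step=24, alpha=0.01):
    """Slide two adjacent windows over a (pre-filtered) radon emission series
    and flag times where a two-sample Kolmogorov-Smirnov test rejects the
    hypothesis that both windows come from the same distribution."""
    series = np.asarray(series, dtype=float)
    flags = []
    for start in range(0, len(series) - 2 * window, step):
        left = series[start:start + window]
        right = series[start + window:start + 2 * window]
        stat, p = ks_2samp(left, right)
        if p < alpha:
            flags.append((start + window, stat, p))       # candidate transient change
    return flags
```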