An approach of point cloud denoising based on improved bilateral filtering
NASA Astrophysics Data System (ADS)
Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin
2018-04-01
An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm applied to depth images. First, the platform moves flexibly and offers a convenient control interface. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed and applied to depth images obtained by the Kinect sensor. The results show improved noise removal compared with standard bilateral filtering. Offline, the color images and the processed depth images are used to build point clouds. Finally, experimental results demonstrate that the method reduces the depth-image processing time and improves the quality of the resulting point cloud.
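The bilateral filter that LBF builds on weights each neighbor by both spatial closeness and depth similarity. A minimal pure-Python sketch on a toy depth map; the radius and the two sigmas are illustrative assumptions, not the paper's parameters:

```python
import math

def bilateral_filter(depth, radius=1, sigma_s=1.0, sigma_r=10.0):
    """Naive bilateral filter for a 2-D depth map given as a list of lists."""
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # weight = spatial closeness * depth similarity
                        w_s = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        w_r = math.exp(-(depth[ny][nx] - depth[y][x]) ** 2
                                       / (2 * sigma_r ** 2))
                        num += w_s * w_r * depth[ny][nx]
                        den += w_s * w_r
            out[y][x] = num / den
    return out

# Flat 100 mm depth plane with one noisy spike at the centre:
# the spike is pulled toward its neighbours, flat regions are untouched.
noisy = [[100.0] * 5 for _ in range(5)]
noisy[2][2] = 110.0
smoothed = bilateral_filter(noisy)
```

A *local* variant, as the name LBF suggests, would presumably run this kernel only over selected neighborhoods rather than the whole image; the per-pixel denoising behavior is the same.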
Filtering with Marked Point Process Observations via Poisson Chaos Expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun Wei, E-mail: wsun@mathstat.concordia.ca; Zeng Yong, E-mail: zengy@umkc.edu; Zhang Shu, E-mail: zhangshuisme@hotmail.com
2013-06-15
We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high-frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, in particular relaxing the boundedness condition on the stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those involving the observations, and assigns the former off-line.
Multi-star processing and gyro filtering for the video inertial pointing system
NASA Technical Reports Server (NTRS)
Murphy, J. P.
1976-01-01
The video inertial pointing (VIP) system is being developed to satisfy the acquisition and pointing requirements of astronomical telescopes. The VIP system uses a single video sensor to provide star position information that can be used to generate three-axis pointing error signals (multi-star processing) and for input to a cathode ray tube (CRT) display of the star field. The pointing error signals are used to update the telescope's gyro stabilization system (gyro filtering). The CRT display facilitates target acquisition and positioning of the telescope by a remote operator. Linearized small-angle equations are used for the multi-star processing, and a consideration of error performance and singularities leads to star pair location restrictions and equation selection criteria. A discrete steady-state Kalman filter which uses the integration of the gyros is developed and analyzed. The filter includes unit time delays representing asynchronous operations of the VIP microprocessor and video sensor. A digital simulation of a typical gyro-stabilized gimbal is developed and used to validate the approach to the gyro filtering.
Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger
2013-01-01
A common approach to high-accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State-of-the-art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible. This work presents algorithmic modifications to reduce the computational demands of a two-step minimum-order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured using an Xsens MTx inertial sensor are used. Changes in computational latency and orientation estimation accuracy due to the proposed algorithmic modifications and fixed-point number representation are evaluated in detail on a variety of processing platforms, enabling on-board processing on wearable sensor platforms.
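Exploring the bit-width of a fixed-point filter amounts to quantizing coefficients and state to a chosen number of fractional bits and measuring the deviation from the floating-point reference. A toy sketch with a first-order low-pass stage (the Q-formats and the filter itself are illustrative assumptions, not the paper's two-step Kalman filter):

```python
def to_fixed(x, frac_bits):
    """Round x to the nearest value representable with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def lowpass(samples, alpha, frac_bits=None):
    """First-order IIR y[n] = y[n-1] + alpha*(x[n] - y[n-1]);
    optionally quantize coefficient and state to fixed point."""
    if frac_bits is not None:
        alpha = to_fixed(alpha, frac_bits)
    y, out = 0.0, []
    for x in samples:
        y = y + alpha * (x - y)
        if frac_bits is not None:
            y = to_fixed(y, frac_bits)      # state rounding after each update
        out.append(y)
    return out

step = [1.0] * 100                          # unit-step input
ref  = lowpass(step, alpha=0.1)             # floating-point reference
q15  = lowpass(step, alpha=0.1, frac_bits=15)   # Q15-like precision
q4   = lowpass(step, alpha=0.1, frac_bits=4)    # far too few fractional bits
err15 = max(abs(a - b) for a, b in zip(ref, q15))
err4  = max(abs(a - b) for a, b in zip(ref, q4))
```

Sweeping `frac_bits` and plotting the worst-case error against an accuracy budget is the kind of bit-width exploration the abstract describes, just on a much simpler filter; note how the 4-bit version stalls in a dead band below the true steady state.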
Filtering Photogrammetric Point Clouds Using Standard LIDAR Filters Towards DTM Generation
NASA Astrophysics Data System (ADS)
Zhang, Z.; Gerke, M.; Vosselman, G.; Yang, M. Y.
2018-05-01
Digital Terrain Models (DTMs) can be generated from point clouds acquired by laser scanning or photogrammetric dense matching. During the last two decades, much effort has been devoted to developing robust filtering algorithms for airborne laser scanning (ALS) data. With the quality of point clouds from dense image matching (DIM) steadily improving, the research question that arises is whether standard Lidar filters can be used to filter photogrammetric point clouds as well. Experiments are implemented to filter two dense matching point clouds with different noise levels. Results show that the standard Lidar filter is robust to random noise. However, artefacts and blunders often appear in the DIM points due to low contrast or poor texture in the images, and filtering is erroneous in these locations. Filtering DIM points pre-processed by a ranking filter yields a higher Type II error (i.e. non-ground points labelled as ground points) but a much lower Type I error (i.e. bare ground points labelled as non-ground points). Finally, the potential DTM accuracy that can be achieved with DIM points is evaluated. Two DIM point clouds derived by Pix4Dmapper and SURE are compared. On grassland, dense matching generates points above the true terrain surface, which results in incorrectly elevated DTMs. The application of the ranking filter leads to a reduced bias in the DTM height, but a slightly increased noise level.
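The Type I and Type II error rates quoted above follow directly from comparing reference and predicted labels. A small sketch (the 'g'/'n' label convention is an assumption for illustration):

```python
def filtering_errors(reference, predicted):
    """Type I: ground points labelled non-ground; Type II: non-ground labelled ground.
    Labels: 'g' = ground, 'n' = non-ground."""
    ground = [(r, p) for r, p in zip(reference, predicted) if r == 'g']
    nonground = [(r, p) for r, p in zip(reference, predicted) if r == 'n']
    type1 = sum(1 for r, p in ground if p == 'n') / len(ground)
    type2 = sum(1 for r, p in nonground if p == 'g') / len(nonground)
    total = sum(1 for r, p in zip(reference, predicted) if r != p) / len(reference)
    return type1, type2, total

ref  = ['g', 'g', 'g', 'g', 'n', 'n', 'n', 'n']
pred = ['g', 'g', 'g', 'n', 'n', 'n', 'g', 'g']
t1, t2, tot = filtering_errors(ref, pred)
```

Here one of four ground points is mislabelled (Type I = 0.25) and two of four non-ground points are accepted as ground (Type II = 0.5), illustrating the trade-off the abstract reports for the ranking-filter pre-processing.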
EMG prediction from Motor Cortical Recordings via a Non-Negative Point Process Filter
Nazarpour, Kianoush; Ethier, Christian; Paninski, Liam; Rebesco, James M.; Miall, R. Chris; Miller, Lee E.
2012-01-01
A constrained point process filtering mechanism for prediction of electromyogram (EMG) signals from multi-channel neural spike recordings is proposed here. Filters from the Kalman family are inherently sub-optimal in dealing with non-Gaussian observations, or a state evolution that deviates from the Gaussianity assumption. To address these limitations, we modeled the non-Gaussian neural spike train observations by using a generalized linear model (GLM) that encapsulates covariates of neural activity, including the neurons' own spiking history, concurrent ensemble activity, and extrinsic covariates (EMG signals). In order to predict the envelopes of EMGs, we reformulated the Kalman filter (KF) in an optimization framework and utilized a non-negativity constraint. This structure characterizes the non-linear correspondence between neural activity and EMG signals reasonably well. The EMGs were recorded from twelve forearm and hand muscles of a behaving monkey during a grip-force task. For the case of limited training data, the constrained point process filter improved the prediction accuracy when compared to a conventional Wiener cascade filter (a linear causal filter followed by a static non-linearity) for different bin sizes and delays between input spikes and EMG output. For longer training data sets, results of the proposed filter and those of the Wiener cascade filter were comparable. PMID:21659018
Cubature/ Unscented/ Sigma Point Kalman Filtering with Angular Measurement Models
2015-07-06
Cubature/Unscented/Sigma Point Kalman Filtering with Angular Measurement Models. David Frederic Crouse, Naval Research Laboratory, 4555 Overlook Ave... measurement and process non-linearities, such as the cubature Kalman filter, can perform extremely poorly in many applications involving angular... Kalman filtering is a realization of the best linear unbiased estimator (BLUE) that evaluates certain integrals for expected values using different forms
Spectral analysis and filtering techniques in digital spatial data processing
Pan, Jeng-Jong
1989-01-01
A filter toolbox has been developed at the EROS Data Center, US Geological Survey, for retrieving or removing specified frequency information from two-dimensional digital spatial data. This filter toolbox provides capabilities to compute the power spectrum of a given dataset and to design various filters in the frequency domain. Three types of filters are available in the toolbox: point filter, line filter, and area filter. Both the point and line filters employ Gaussian-type notch filters, and the area filter includes capabilities for high-pass, band-pass, low-pass, and wedge filtering. These filters are applied to analyze satellite multispectral scanner data, airborne visible and infrared imaging spectrometer (AVIRIS) data, gravity data, and digital elevation model (DEM) data. -from Author
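A Gaussian-type notch multiplies the spectrum by 1 - exp(-d^2 / 2*sigma^2), where d is the distance to the notch centre, passing everything except a narrow band. A 1-D pure-Python sketch (the toolbox works on 2-D data; the signal, target bin and bandwidth here are illustrative):

```python
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def gaussian_notch(X, k0, sigma):
    """Attenuate bin k0 (and its conjugate mirror) with a Gaussian notch."""
    N = len(X)
    out = []
    for k, Xk in enumerate(X):
        d = min(abs(k - k0), abs(k - (N - k0)))   # distance to notch or its mirror
        out.append(Xk * (1.0 - math.exp(-(d * d) / (2 * sigma ** 2))))
    return out

N = 64
# slow trend (bin 1) plus an unwanted tone at bin 8
x = [math.sin(2 * math.pi * n / N) + 0.5 * math.sin(2 * math.pi * 8 * n / N)
     for n in range(N)]
Y = gaussian_notch(dft(x), k0=8, sigma=1.0)
y = idft(Y)   # the trend survives, the tone is removed
```

The same transfer function applied along a line of 2-D frequency bins gives the line filter; applied around a single 2-D bin, the point filter.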
An analysis of neural receptive field plasticity by point process adaptive filtering
Brown, Emery N.; Nguyen, David P.; Frank, Loren M.; Wilson, Matthew A.; Solo, Victor
2001-01-01
Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale. Adaptive signal processing is an established engineering paradigm for estimating time-varying system parameters from experimental measurements. We present an adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity. We derive an instantaneous steepest descent algorithm by using as the criterion function the instantaneous log likelihood of a point process spike train model. We apply the point process adaptive filter algorithm in a study of spatial (place) receptive field properties of simulated and actual spike train data from rat CA1 hippocampal neurons. A stability analysis of the algorithm is sketched in the Appendix. The adaptive algorithm can update the place field parameter estimates on a millisecond time scale. It reliably tracked the migration, changes in scale, and changes in maximum firing rate characteristic of hippocampal place fields in a rat running on a linear track. Point process adaptive filtering offers an analytic method for studying the dynamics of neural receptive fields. PMID:11593043
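For a discretized point process with spike indicator dN and rate lambda, the instantaneous log likelihood of one bin is l = dN*log(lambda*dt) - lambda*dt, so instantaneous steepest ascent updates the rate by eps*(dN/lambda - dt). A toy sketch tracking a slowly drifting Poisson rate (the constant-rate model, step size and rates are illustrative assumptions, not the paper's place-field model):

```python
import random

random.seed(0)
dt = 0.001          # bin width (s)
eps = 1.0           # steepest-ascent step size
lam_hat = 5.0       # initial rate estimate (spikes/s)
estimates = []
for k in range(200_000):                     # 200 s of simulated time
    t = k * dt
    lam_true = 10.0 + 5.0 * t / 200.0        # true rate drifts from 10 to 15
    dN = 1 if random.random() < lam_true * dt else 0   # Bernoulli spike in this bin
    # gradient of the instantaneous log likelihood dN*log(lam*dt) - lam*dt
    lam_hat += eps * (dN / lam_hat - dt)
    estimates.append(lam_hat)

avg_late = sum(estimates[-20_000:]) / 20_000   # average over the last 20 s
```

The estimate is nudged upward at every spike and decays between spikes, so its equilibrium is the true rate; this bin-by-bin update is what lets such filters track parameter drift on a millisecond time scale.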
A generalized adaptive mathematical morphological filter for LIDAR data
NASA Astrophysics Data System (ADS)
Cui, Zheng
Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous LIDAR point dataset using a filter, because the DTM is created by interpolating ground points. As one of the widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying the LIDAR data at the point level, a linear computational complexity, and preservation of the geometric shapes of terrain features. The filter works well in an urban setting with a gentle slope and a mixture of vegetation and buildings. However, the PM filter often incorrectly removes ground measurements in topographically high areas while removing large non-ground objects, because it uses a constant slope threshold, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes of topographic slopes and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points removed incorrectly by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average, compared with the method used by the popular commercial software TerraScan.
The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to effectively and efficiently identify ground measurements for the complex terrains in a large LIDAR data set. The GAPM filter is highly automatic and requires little human input. Therefore, it can significantly reduce the effort of manually processing voluminous LIDAR measurements.
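The PM filter at the heart of GAPM applies morphological openings with progressively larger windows and flags points that rise above the opened surface by more than a window-dependent threshold. A 1-D pure-Python sketch (window sizes and thresholds are illustrative constants; GAPM's contribution is precisely to adapt such thresholds to the terrain):

```python
def erode(z, w):
    r = w // 2
    return [min(z[max(0, i - r): i + r + 1]) for i in range(len(z))]

def dilate(z, w):
    r = w // 2
    return [max(z[max(0, i - r): i + r + 1]) for i in range(len(z))]

def progressive_morphological_filter(z, windows=(3, 5, 9), dh=(0.5, 1.0, 2.0)):
    """Flag points rising above the opened surface by more than the
    window's elevation-difference threshold as non-ground."""
    ground = [True] * len(z)
    surface = list(z)
    for w, th in zip(windows, dh):
        opened = dilate(erode(surface, w), w)   # morphological opening
        for i in range(len(z)):
            if ground[i] and surface[i] - opened[i] > th:
                ground[i] = False               # removed as non-ground
        surface = opened
    return ground

# flat terrain (0 m) with a narrow 1.5 m object and a wide 4 m building
z = [0.0] * 6 + [1.5, 1.5] + [0.0] * 4 + [4.0] * 5 + [0.0] * 5
labels = progressive_morphological_filter(z)
```

The small window removes the narrow object, the large window removes the wide building, and the flat terrain survives every pass; with a constant slope threshold, a genuine hilltop would be clipped by the large window in the same way, which is the "cut-off" error the abstract describes.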
NASA Technical Reports Server (NTRS)
West, M. E.
1992-01-01
A real-time estimation filter which reduces sensitivity to system variations and reduces the amount of preflight computation is developed for the instrument pointing subsystem (IPS). The IPS is a three-axis stabilized platform developed to point various astronomical observation instruments aboard the shuttle. Currently, the IPS utilizes a linearized Kalman filter (LKF), with premission-defined gains, to compensate for system drifts and accumulated attitude errors. Since the a priori gains are generated for an expected system, variations result in a suboptimal estimation process. This report compares the performance of three real-time estimation filters with the current LKF implementation. An extended Kalman filter and a second-order Kalman filter are developed to account for the system nonlinearities, while a linear Kalman filter implementation assumes that the nonlinearities are negligible. The performance of each of the four estimation filters is compared with respect to accuracy, stability, settling time, robustness, and computational requirements. It is shown that, for the current IPS pointing requirements, the linear Kalman filter provides improved robustness over the LKF with lower computational requirements than the two real-time nonlinear estimation filters.
Meyer, Brian K; Vargas, Diego
2006-01-01
The following study was conducted to determine the effect of different preservatives commonly used in the biopharmaceutical industry on the product-specific bubble point of sterilizing-grade filters when used to filter product processed with different types of tubing. The preservatives tested were 0.25% phenol, m-cresol, and benzyl alcohol. The tubing tested was Sani-Pure (platinum-cured silicone tubing), Versilic (peroxide-cured silicone tubing), C-Flex, Pharmed, and Cole-Parmer (BioPharm silicone tubing). The product-specific bubble point values of sterilizing-grade filters were measured after the recirculation of product through the filter and tubing of different types of materials for a total contact time of 15 h. When silicone tubing was used, the post-recirculation product-specific bubble point was suppressed on average 13 psig when compared to the pre-recirculation product-specific bubble point. Suppression was also observed with C-Flex, but to a much lesser extent than with silicone tubing. Suppression was not observed with Pharmed or BioPharm tubing. Alcohol extractions performed on the filters that experienced suppressed bubble points, followed by Fourier transform infrared spectroscopy analysis, indicated the filters contained poly(dimethylsiloxane). Direct addition of poly(dimethylsiloxane) to solutions filtered through sterilizing-grade filters suppressed the filter bubble points when tested for integrity. Silicone oils most likely reduced the surface tension of the pores in the membrane, resulting in the ability of air (or nitrogen) to pass more freely through the membrane, causing suppressed bubble point test values. The results of these studies indicate that the product-specific bubble point of a filter determined with only product may not reflect the true bubble point for preservative-containing products that are recirculated or contacted with certain tubing for 15 h or greater.
In addition, tubing material placed in contact with products containing preservatives should be evaluated for impact to the product-specific bubble point when being utilized with sterilizing-grade filters.
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Litt, Jonathan S.
2010-01-01
This paper presents an algorithm that automatically identifies and extracts steady-state engine operating points from engine flight data. It calculates the mean and standard deviation of selected parameters contained in the incoming flight data stream. If the standard deviation of the data falls below defined constraints, the engine is assumed to be at a steady-state operating point, and the mean measurement data at that point are archived for subsequent condition monitoring purposes. The fundamental design of the steady-state data filter is completely generic and applicable to any dynamic system. Additional domain-specific logic constraints are applied to reduce data outliers and variance within the collected steady-state data. The filter is designed for on-line real-time processing of streaming data as opposed to post-processing of the data in batch mode. Results of applying the steady-state data filter to recorded helicopter engine flight data are shown, demonstrating its utility for engine condition monitoring applications.
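The filter's core test, a rolling mean and standard deviation with a threshold on the latter, can be sketched as follows (window length and threshold are illustrative, not the paper's constraints):

```python
from collections import deque
from statistics import mean, stdev

def steady_state_points(stream, window=10, max_std=0.5):
    """Yield the windowed mean each time the rolling standard deviation
    of the monitored parameter falls below max_std."""
    buf = deque(maxlen=window)
    for sample in stream:
        buf.append(sample)
        if len(buf) == window and stdev(buf) < max_std:
            yield mean(buf)

# a transient ramp followed by a steady plateau around 50
data = [float(v) for v in range(0, 50, 5)]
data += [50.0, 50.1, 49.9, 50.0, 50.2, 49.8, 50.0, 50.1, 49.9, 50.0]
points = list(steady_state_points(data))
```

A real implementation would archive one operating point per steady dwell rather than one per qualifying window, and would add the domain-specific outlier logic the abstract mentions; this sketch only shows the detection test.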
NASA Astrophysics Data System (ADS)
Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng
2014-06-01
The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of the terrain features. Classical filtering algorithms rely on the cautious tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to overcome the sensitivity of the parameters, in this study we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance with the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure, to self-adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and it gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Using the progressive densification strategy, regularization and self-adaption, both performance improvement and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by ISPRS, the ASF performs best among the compared filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.
Fast ground filtering for TLS data via Scanline Density Analysis
NASA Astrophysics Data System (ADS)
Che, Erzhuo; Olsen, Michael J.
2017-07-01
Terrestrial Laser Scanning (TLS) efficiently collects 3D information based on lidar (light detection and ranging) technology. TLS has been widely used in topographic mapping, engineering surveying, forestry, industrial facilities, cultural heritage, and so on. Ground filtering is a common procedure in lidar data processing, which separates the point cloud data into ground points and non-ground points. Effective ground filtering is helpful for subsequent procedures such as segmentation, classification, and modeling. Numerous ground filtering algorithms have been developed for Airborne Laser Scanning (ALS) data. However, many of these are error prone when applied to TLS data because of its different angle of view and highly variable resolution. Further, many ground filtering techniques are limited in application within challenging topography and have difficulty coping with objects such as short vegetation and steep slopes. Lastly, due to the large size of point cloud data, operations such as data traversal, multiple iterations, and neighbor searching significantly affect computational efficiency. In order to overcome these challenges, we present an efficient ground filtering method for TLS data via a Scanline Density Analysis, which is very fast because it exploits the grid structure used to store TLS data. The process first separates the ground candidates, density features, and unidentified points based on an analysis of point density within each scanline. Second, a region growth using the scan pattern is performed to cluster the ground candidates and further refine the ground points (clusters). In the experiments, the effectiveness, parameter robustness, and efficiency of the proposed method are demonstrated with datasets collected from an urban scene and a natural scene, respectively.
Optical ranked-order filtering using threshold decomposition
Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.
1990-01-01
A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
Optical ranked-order filtering using threshold decomposition
Allebach, J.P.; Ochoa, E.; Sweeney, D.W.
1987-10-09
A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed. 3 figs.
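Threshold decomposition, as described in both records above, turns a ranked-order operation into per-level binary filtering: binarize at each grey level, apply a linear moving-sum filter, threshold point-wise, and stack the results. A 1-D pure-Python sketch of median filtering this way (signal and window length are illustrative):

```python
def median_by_threshold_decomposition(x, w=3):
    """Median-filter a non-negative integer signal via threshold decomposition:
    binarize at each level, apply a linear moving sum, threshold point-wise
    (majority vote), then stack the binary results."""
    r = w // 2
    out = [0] * len(x)
    for t in range(1, max(x) + 1):
        b = [1 if v >= t else 0 for v in x]        # binary component at level t
        for i in range(len(x)):
            window = b[max(0, i - r): i + r + 1]
            s = sum(window)                        # linear, space-invariant step
            if s * 2 > len(window):                # point-wise threshold comparison
                out[i] += 1                        # recombination by stacking
    return out

x = [3, 3, 9, 3, 3, 5, 5, 5, 1, 1]
filtered = median_by_threshold_decomposition(x)    # the spike at 9 is removed
```

Changing only the point-wise threshold (all ones versus at least one) turns the same linear stage into minimum or maximum filtering, which is why one architecture can serve all three operations, as both records note.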
Noise removal in extended depth of field microscope images through nonlinear signal processing.
Zahreddine, Ramzi N; Cormack, Robert H; Cogswell, Carol J
2013-04-01
Extended depth of field (EDF) microscopy, achieved through computational optics, allows for real-time 3D imaging of live cell dynamics. EDF is achieved through a combination of point spread function engineering and digital image processing. A linear Wiener filter has been conventionally used to deconvolve the image, but it suffers from high frequency noise amplification and processing artifacts. A nonlinear processing scheme is proposed which extends the depth of field while minimizing background noise. The nonlinear filter is generated via a training algorithm and an iterative optimizer. Biological microscope images processed with the nonlinear filter show a significant improvement in image quality and signal-to-noise ratio over the conventional linear filter.
Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model
NASA Astrophysics Data System (ADS)
Zhu, Ningning; Jia, Yonghong; Luo, Lun
2016-06-01
The large number of bolts and screws attached to the subway shield ring plates, along with the many metal stents and electrical equipment accessories mounted on the tunnel walls, cause the laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data are first projected onto a horizontal plane, and a searching algorithm is given to extract the edge points of both sides, which are further used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally and then fitted as a smooth elliptic cylindrical surface by means of iteration. This processing enables the automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the elliptic cylindrical model based method effectively filters out the non-points and meets the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of all-around deformation of tunnel sections in routine subway operation and maintenance.
Surface Fitting Filtering of LIDAR Point Cloud with Waveform Information
NASA Astrophysics Data System (ADS)
Xing, S.; Li, P.; Xu, Q.; Wang, D.; Li, P.
2017-09-01
Full-waveform LiDAR is an active technology of photogrammetry and remote sensing. It provides more detailed information about objects along the path of a laser pulse than discrete-return topographic LiDAR. High-quality point cloud and waveform information can be obtained by waveform decomposition, which can contribute to accurate filtering. A surface fitting filtering method using waveform information is proposed to exploit this advantage. Firstly, the discrete point cloud and waveform parameters are resolved by globally convergent Levenberg-Marquardt decomposition. Secondly, the ground seed points are selected, and the abnormal ones among them are detected using waveform parameters and robust estimation. Thirdly, the terrain surface is fitted and the height difference threshold is determined in consideration of window size and mean square error. Finally, the points are classified gradually as the window size rises; the filtering process finishes when the window size exceeds a threshold. Waveform data in urban, farmland and mountain areas from "WATER (Watershed Allied Telemetry Experimental Research)" are selected for the experiments. Results show that, compared with the traditional method, the accuracy of point cloud filtering is further improved, and the proposed method is highly practical.
A Voxel-Based Filtering Algorithm for Mobile LIDAR Data
NASA Astrophysics Data System (ADS)
Qin, H.; Guan, G.; Yu, Y.; Zhong, L.
2018-04-01
This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points are first partitioned in the xy-plane into a set of two-dimensional (2-D) blocks of a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward-growing process is performed to roughly separate terrain from non-terrain points with global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed in analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
NASA Astrophysics Data System (ADS)
Tan, Xiangli; Yang, Jungang; Deng, Xinpu
2018-04-01
In the process of geometric correction of remote sensing images, a large number of redundant control points may occasionally result in low correction accuracy. In order to solve this problem, a control point filtering algorithm based on RANdom SAmple Consensus (RANSAC) is proposed. The basic idea of the RANSAC algorithm is to use the smallest possible data set to estimate the model parameters and then enlarge this set with consistent data points. In this paper, unlike traditional methods of geometric correction using Ground Control Points (GCPs), simulation experiments are carried out to correct remote sensing images using visible stars as control points. In addition, the accuracy of geometric correction without Star Control Point (SCP) optimization is also shown. The experimental results show that the SCP filtering method based on the RANSAC algorithm considerably improves the accuracy of remote sensing image correction.
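The RANSAC loop described above can be sketched on a stand-in problem: screening 2-D points against a minimal line model (the model, tolerance and iteration count are illustrative; the paper applies the same idea to star control points in a correction model):

```python
import random

def ransac_line(points, n_iter=200, tol=0.5, seed=1):
    """Keep the largest consensus set for a line y = a*x + b fitted
    repeatedly from minimal two-point samples."""
    rng = random.Random(seed)
    best = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                        # degenerate minimal sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best):
            best = inliers
    return best

# eight consistent points on y = 2x + 1 plus two gross outliers
pts = [(x, 2 * x + 1) for x in range(8)] + [(2.0, 30.0), (5.0, -20.0)]
kept = ransac_line(pts)                     # the two outliers are rejected
```

The consensus set returned here plays the role of the filtered control points: redundant or inconsistent points never enter the final model fit.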
NASA Astrophysics Data System (ADS)
Leyva, R.; Artillan, P.; Cabal, C.; Estibals, B.; Alonso, C.
2011-04-01
The article studies the dynamic performance of a family of maximum power point tracking (MPPT) circuits used for photovoltaic generation. It revisits the sinusoidal extremum seeking control (ESC) technique, which can be considered a particular subgroup of the Perturb and Observe algorithms. The sinusoidal ESC technique consists of adding a small sinusoidal disturbance to the input and processing the perturbed output to drive the operating point to its maximum. The output processing involves a synchronous multiplication and a filtering stage. The filter instance determines the dynamic performance of the MPPT based on the sinusoidal ESC principle. The approach uses the well-known root-locus method to give insight into the damping degree and settling time of the maximum-seeking waveforms. The article shows the transient waveforms for three different filter instances to illustrate the approach. Finally, an experimental prototype corroborates the dynamic analysis.
40 CFR 63.600 - Applicability.
Code of Federal Regulations, 2014 CFR
2014-07-01
... wet-process phosphoric acid process line: reactors, filters, evaporators, and hot wells; (2) Each... following emission points which are components of a superphosphoric acid process line: evaporators, hot...
Distributed processing of a GPS receiver network for a regional ionosphere map
NASA Astrophysics Data System (ADS)
Choi, Kwang Ho; Hoo Lim, Joon; Yoo, Won Jae; Lee, Hyung Keun
2018-01-01
This paper proposes a distributed processing method applicable to GPS receivers in a network to generate a regional ionosphere map accurately and reliably. For accuracy, the proposed method is operated by multiple local Kalman filters and Kriging estimators. Each local Kalman filter is applied to a dual-frequency receiver to estimate the receiver’s differential code bias and vertical ionospheric delays (VIDs) at different ionospheric pierce points. The Kriging estimator selects and combines several VID estimates provided by the local Kalman filters to generate the VID estimate at each ionospheric grid point. For reliability, the proposed method uses receiver fault detectors and satellite fault detectors. Each receiver fault detector compares the VID estimates of the same local area provided by different local Kalman filters. Each satellite fault detector compares the VID estimate of each local area with that projected from the other local areas. Compared with the traditional centralized processing method, the proposed method is advantageous in that it considerably reduces the computational burden of each single Kalman filter and enables flexible fault detection, isolation, and reconfiguration capability. To evaluate the performance of the proposed method, several experiments with field collected measurements were performed.
Applicability Analysis of Cloth Simulation Filtering Algorithm for Mobile LIDAR Point Cloud
NASA Astrophysics Data System (ADS)
Cai, S.; Zhang, W.; Qi, J.; Wan, P.; Shao, J.; Shen, A.
2018-04-01
Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. Cloth simulation filtering (CSF) algorithm, which based on a physical process, has been validated to be an accurate, automatic and easy-to-use algorithm for airborne LiDAR point cloud. As a new technique of three-dimensional data collection, the mobile laser scanning (MLS) has been gradually applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling and forest inventory and management. Compared with airborne LiDAR point cloud, there are some different features (such as point density feature, distribution feature and complexity feature) for mobile LiDAR point cloud. Some filtering algorithms for airborne LiDAR data were directly used in mobile LiDAR point cloud, but it did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm for mobile LiDAR point cloud. Three samples with different shape of the terrain are selected to test the performance of this algorithm, which respectively yields total errors of 0.44 %, 0.77 % and1.20 %. Additionally, large area dataset is also tested to further validate the effectiveness of this algorithm, and results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, this algorithm is efficient and reliable for mobile LiDAR point cloud.
Robust cubature Kalman filter for GNSS/INS with missing observations and colored measurement noise.
Cui, Bingbo; Chen, Xiyuan; Tang, Xihua; Huang, Haoqian; Liu, Xiao
2018-01-01
In order to improve the accuracy of GNSS/INS working in GNSS-denied environments, a robust cubature Kalman filter (RCKF) is developed by considering colored measurement noise and missing observations. First, an improved cubature Kalman filter (CKF) is derived by considering colored measurement noise, where the time-differencing approach is applied to yield new observations. Then, after analyzing the disadvantages of existing methods, the measurement augmentation in processing colored noise is translated into processing the uncertainties of the CKF, and a new sigma-point update framework is utilized to account for the bounded model uncertainties. By reusing the diffused sigma points and the approximation residual in the prediction stage of the CKF, the RCKF is developed and its error performance is analyzed theoretically. Results of numerical experiments and field tests reveal that the RCKF is more robust than the CKF and the extended Kalman filter (EKF), and compared with the EKF, the heading error of a land vehicle is reduced by about 72.4%. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
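The time-differencing idea for colored measurement noise can be demonstrated in isolation: if the noise is AR(1), e_k = psi * e_{k-1} + w_k with white w_k, then differencing consecutive observations as z_k - psi * z_{k-1} leaves only the white component. The coefficient psi and the sample sizes below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
psi, n = 0.9, 20000
w = rng.normal(0.0, 1.0, n)
e = np.zeros(n)
for k in range(1, n):                 # simulate colored (AR(1)) measurement noise
    e[k] = psi * e[k - 1] + w[k]
d = e[1:] - psi * e[:-1]              # time-differenced noise (equals w[1:])

def lag1(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

r_colored, r_white = lag1(e), lag1(d)
```

The differenced sequence has near-zero lag-1 autocorrelation, so a Kalman-type filter fed with the differenced observations again sees (approximately) white measurement noise.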
NASA Astrophysics Data System (ADS)
Gleason, M. J.; Pitlick, J.; Buttenfield, B. P.
2011-12-01
Terrestrial laser scanning (TLS) represents a new and particularly effective remote sensing technique for investigating geomorphologic processes. Unfortunately, TLS data are commonly characterized by extremely large volume, heterogeneous point distribution, and erroneous measurements, raising challenges for applied researchers. To facilitate efficient and accurate use of TLS in geomorphology, and to improve accessibility for TLS processing in commercial software environments, we are developing a filtering method for raw TLS data to: eliminate data redundancy; produce a more uniformly spaced dataset; remove erroneous measurements; and maintain the ability of the TLS dataset to accurately model terrain. Our method conducts local aggregation of raw TLS data using a 3-D search algorithm based on the geometrical expression of expected random errors in the data. This approach accounts for the estimated accuracy and precision limitations of the instruments and procedures used in data collection, thereby allowing for identification and removal of potential erroneous measurements prior to data aggregation. Initial tests of the proposed technique on a sample TLS point cloud required a modest processing time of approximately 100 minutes to reduce dataset volume by over 90 percent (from 12,380,074 to 1,145,705 points). Preliminary analysis of the filtered point cloud revealed substantial improvement in homogeneity of point distribution and minimal degradation of derived terrain models. We will test the method on two independent TLS datasets collected in consecutive years along a non-vegetated reach of the North Fork Toutle River in Washington. We will evaluate the tool using various quantitative, qualitative, and statistical methods. The crux of this evaluation will include a bootstrapping analysis to test the ability of the filtered datasets to model the terrain at roughly the same accuracy as the raw datasets.
Multiview 3D sensing and analysis for high quality point cloud reconstruction
NASA Astrophysics Data System (ADS)
Satnik, Andrej; Izquierdo, Ebroul; Orjesek, Richard
2018-04-01
Multiview 3D reconstruction techniques enable digital reconstruction of 3D objects from the real world by fusing different viewpoints of the same object into a single 3D representation. This process is by no means trivial and the acquisition of high quality point cloud representations of dynamic 3D objects is still an open problem. In this paper, an approach for high fidelity 3D point cloud generation using low cost 3D sensing hardware is presented. The proposed approach runs in an efficient low-cost hardware setting based on several Kinect v2 scanners connected to a single PC. It performs autocalibration and runs in real-time exploiting an efficient composition of several filtering methods including Radius Outlier Removal (ROR), Weighted Median filter (WM) and Weighted Inter-Frame Average filtering (WIFA). The performance of the proposed method has been demonstrated through efficient acquisition of dense 3D point clouds of moving objects.
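Of the filters named above, Radius Outlier Removal (ROR) is the simplest to illustrate: a point is kept only if it has at least `min_pts` neighbours within radius `r`. The radius, count threshold, and toy point cloud below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def radius_outlier_removal(cloud, r=0.5, min_pts=2):
    """Keep points that have at least `min_pts` neighbours within radius `r`."""
    d = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=2)
    n_neigh = (d < r).sum(axis=1) - 1        # exclude the point itself
    return cloud[n_neigh >= min_pts]

cloud = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
                  [5.0, 5.0, 5.0]])          # last point is an isolated outlier
filtered = radius_outlier_removal(cloud)
```

ROR targets exactly the sparse, floating artefacts that depth sensors such as the Kinect v2 produce near silhouettes, which is why it is typically applied before temporal filters like the weighted inter-frame average.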
NASA Astrophysics Data System (ADS)
Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu
2000-12-01
New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems and computer peripherals for document capture. A one-chip imaging system, in which the image sensor has a fully digital interface, can bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of mosaic pixels or wide stripes, makes images more realistic and colorful. One could say that the color filter makes life more colorful. What is a color filter? A color filter blocks all of the incident light except the color whose wavelength and transmittance match those of the filter itself. The color filter process consists of coating and patterning green, red and blue (or cyan, magenta and yellow) mosaic resists onto the matched pixels of the image sensing array. From the signal captured at each pixel, the image of the environment can be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes the future of the color filter bright. Although it poses challenges, developing the color filter process is well worthwhile. We provide the best service in terms of shorter cycle time, excellent color quality, and high and stable yield. The key issues of the advanced color process that have to be solved and implemented are planarization and micro-lens technology. Many key points of color filter process technology that must be considered are also described in this paper.
Structural Information Detection Based Filter for GF-3 SAR Images
NASA Astrophysics Data System (ADS)
Sun, Z.; Song, Y.
2018-04-01
The GF-3 satellite, with its high resolution, large swath, multiple imaging modes, long service life and other characteristics, can achieve all-weather, all-day monitoring of global land and ocean. It has become the highest-resolution C-band multi-polarized synthetic aperture radar (SAR) satellite system in the world. However, due to the coherent imaging system, speckle appears in GF-3 SAR images and seriously hinders their understanding and interpretation. Therefore, the processing of SAR images faces big challenges owing to the presence of speckle. The high-resolution SAR images produced by the GF-3 satellite are rich in information and have obvious feature structures such as points, edges and lines. Traditional filters such as the Lee filter and the Gamma MAP filter are not appropriate for GF-3 SAR images since they ignore the structural information of the images. In this paper, a structural information detection based filter is constructed, successively including point-target detection in the smallest window, an adaptive windowing method based on regional characteristics, and selection of the most homogeneous sub-window. Despeckling experiments on GF-3 SAR images demonstrate that, compared with the traditional filters, the proposed structural information detection based filter can well preserve points, edges and lines while smoothing the speckle more sufficiently.
NASA Astrophysics Data System (ADS)
Cura, Rémi; Perret, Julien; Paparoditis, Nicolas
2017-05-01
In addition to more traditional geographical data such as images (rasters) and vectors, point cloud data are becoming increasingly available. Such data are appreciated for their precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points. This system is specifically designed to cover the basic needs of point cloud users: fast loading, compressed storage, powerful patch and point filtering, easy data access and exporting, and integrated processing. Moreover, the proposed system fully integrates metadata (like sensor position) and can conjointly use point clouds with other geospatial data, such as images, vectors, topology and other point clouds. Point cloud (parallel) processing can be done in-base with fast prototyping capabilities. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the proposed system with several billion points obtained from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate loading speeds in the ~50 million pts/h per process range, transparent-to-the-user compression ratios of 2:1 to 4:1 or better, patch filtering in the 0.1 to 1 s range, and output in the 0.1 million pts/s per process range, along with classical processing methods, such as object detection.
Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering
NASA Astrophysics Data System (ADS)
Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.
2016-06-01
This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manually editing the images' radiometry (captured at shallow depths) and of selecting parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m, respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and the geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (at 10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.
NASA Technical Reports Server (NTRS)
Keel, Byron M.
1989-01-01
An optimum adaptive clutter rejection filter for use with airborne Doppler weather radar is presented. The radar system is being designed to operate at low altitudes for the detection of windshear in an airport terminal area where ground clutter returns may mask the weather return. The coefficients of the adaptive clutter rejection filter are obtained using a complex form of a square root normalized recursive least squares lattice estimation algorithm which models the clutter return data as an autoregressive process. The normalized lattice structure implementation of the adaptive modeling process for determining the filter coefficients assures that the resulting coefficients will yield a stable filter and offers the possibility of fixed-point implementation. A 10th-order FIR clutter rejection filter indexed by geographical location is designed through autoregressive modeling of simulated clutter data. Filtered data, containing simulated dry microburst and clutter return, are analyzed using pulse-pair estimation techniques. To measure the ability of the clutter rejection filters to remove the clutter, results are compared to pulse-pair estimates of windspeed within a simulated dry microburst without clutter. In the filter evaluation process, post-filtered pulse-pair width estimates and power levels are also used to measure the effectiveness of the filters. The results support the use of an adaptive clutter rejection filter for reducing the clutter induced bias in pulse-pair estimates of windspeed.
A graph signal filtering-based approach for detection of different edge types on airborne lidar data
NASA Astrophysics Data System (ADS)
Bayram, Eda; Vural, Elif; Alatan, Aydin
2017-10-01
Airborne Laser Scanning is a well-known remote sensing technology, which provides a dense and highly accurate, yet unorganized point cloud of the earth's surface. During the last decade, extracting information from the data generated by airborne LiDAR systems has been addressed by many studies in geo-spatial analysis and urban monitoring applications. However, the processing of LiDAR point clouds is challenging due to their irregular structure and 3D geometry. In this study, we propose a novel framework for the detection of the boundaries of an object or scene captured by LiDAR. Our approach is motivated by edge detection techniques in vision research and is established on graph signal filtering, which is an exciting and promising field of signal processing for irregular data types. Due to the convenient applicability of graph signal processing tools on unstructured point clouds, we achieve the detection of the edge points directly on 3D data by using a graph representation that is constructed exclusively to meet the requirements of the application. Moreover, considering the elevation data as the (graph) signal, we leverage the aerial characteristic of the airborne LiDAR data. The proposed method can be employed both for discovering the jump edges in a segmentation problem and for exploring the crease edges on a LiDAR object in a reconstruction/modeling problem, by only adjusting the filter characteristics.
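The elevation-as-graph-signal idea can be sketched on a toy 1-D transect of LiDAR points: build a k-nearest-neighbour graph on the point positions, apply the graph Laplacian (a high-pass graph filter) to the elevation signal, and flag points with a large response as jump-edge candidates. The point layout, k, and the thresholding are assumptions for the sketch, not the paper's construction.

```python
import numpy as np

x = np.arange(10, dtype=float)        # point positions along the transect
z = np.where(x < 5, 0.0, 1.0)         # elevation: a jump edge between points 4 and 5

k = 2
d = np.abs(x[:, None] - x[None, :])
np.fill_diagonal(d, np.inf)
A = np.zeros((10, 10))
for i in range(10):                   # symmetrised k-NN adjacency
    for j in np.argsort(d[i])[:k]:
        A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A        # combinatorial graph Laplacian
score = np.abs(L @ z)                 # high-pass response of the elevation signal
edge_points = np.argsort(score)[-2:]  # strongest responses flag the jump edge
```

The Laplacian response is zero wherever the elevation is locally smooth on the graph and large only at the discontinuity, which is the essence of detecting jump edges directly on the unstructured point cloud; crease edges would instead need a filter sensitive to changes in slope.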
NASA Astrophysics Data System (ADS)
Brakensiek, Nickolas L.; Kidd, Brian; Mesawich, Michael; Stevens, Don, Jr.; Gotlinsky, Barry
2003-06-01
A design of experiments (DOE) was implemented to show the effects of various point-of-use filters on the coat process. The DOE takes into account the filter media, pore size, and pumping parameters such as dispense pressure, time, and spin speed. The coating was executed on a TEL Mark 8 coat track with an IDI M450 pump and PALL 16-stack Falcon filters. A KLA 2112 set at 0.69 μm pixel size was used to scan the wafers to detect and identify the defects. The process found to maintain a low-defect DUV42P coating, irrespective of the filter or pore size, is a high start pressure, low end pressure, low dispense time, and high dispense speed. The IDI M450 pump can compensate for bubble-type defects by venting them out of the filter before they reach the dispense line, and its variable dispense rate allows the material in the dispense line to slow down at the end of dispense and not create microbubbles in the dispense line or tip. Also, the differential pressure sensor will alarm if the pressure differential across the filter increases over a user-determined setpoint. The pleat design allows more surface area in the same footprint to reduce the differential pressure across the filter and transport defects to the vent tube. The correct low-defect coating process will maximize the advantage of reducing filter pore size or changing the filter media.
On selecting satellite conjunction filter parameters
NASA Astrophysics Data System (ADS)
Alfano, Salvatore; Finkleman, David
2014-06-01
This paper extends concepts of signal detection theory to predict the performance of conjunction screening techniques and to guide the selection of keepout and screening thresholds. The most efficient way to identify satellites likely to collide is to employ filters to identify orbiting pairs that should not come close enough over a prescribed time period to be considered hazardous. Such pairings can then be eliminated from further computation to accelerate overall processing time. Approximations inherent in filtering techniques include screening using only unperturbed Newtonian two-body astrodynamics and uncertainties in orbit elements. Therefore, every filtering process is vulnerable to including objects that are not threats and excluding some that are threats: Type I and Type II errors. The approach in this paper guides selection of the best operating point for the filters suited to a user's tolerance for false alarms and unwarned threats. We demonstrate the approach using three archetypal filters with an initial three-day span, select filter parameters based on performance, and then test those parameters using eight historical snapshots of the space catalog. This work provides a mechanism for selecting filter parameters, but the choices depend on the circumstances.
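The Type I / Type II trade-off behind threshold selection can be illustrated with a toy signal-detection experiment: sample predicted miss distances for true threats and non-threats from two assumed distributions, then sweep a keepout threshold and tally false-alarm and missed-threat rates. The distributions and thresholds below are purely illustrative, not the paper's filters or catalog data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical predicted miss distances (km) for threats and non-threats
threats = rng.normal(2.0, 1.0, 10000).clip(min=0.0)
non_threats = rng.normal(20.0, 6.0, 10000).clip(min=0.0)

def rates(threshold):
    """Type I (false alarm) and Type II (missed threat) rates at a keepout threshold."""
    false_alarm = np.mean(non_threats < threshold)   # non-threat flagged as hazardous
    missed = np.mean(threats >= threshold)           # threat screened out
    return false_alarm, missed

fa_5, miss_5 = rates(5.0)
fa_10, miss_10 = rates(10.0)
```

Widening the keepout threshold misses fewer threats at the cost of more false alarms; the "best operating point" the paper refers to is the threshold matching the user's relative tolerance for the two error types.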
Are local filters blind to provenance? Ant seed predation suppresses exotic plants more than natives
Dean E. Pearson; Nadia S. Icasatti; Jose L. Hierro; Benjamin J. Bird
2014-01-01
The question of whether species' origins influence invasion outcomes has been a point of substantial debate in invasion ecology. Theoretically, colonization outcomes can be predicted based on how species' traits interact with community filters, a process presumably blind to species' origins. Yet, exotic plant introductions commonly result in monospecific...
On-board attitude determination for the Explorer Platform satellite
NASA Technical Reports Server (NTRS)
Jayaraman, C.; Class, B.
1992-01-01
This paper describes the attitude determination algorithm for the Explorer Platform satellite. The algorithm, which is baselined on the Landsat code, is a six-element linear quadratic state estimation processor, in the form of a Kalman filter augmented by an adaptive filter process. Improvements to the original Landsat algorithm were required to meet mission pointing requirements. These consisted of a more efficient sensor processing algorithm and the addition of an adaptive filter which acts as a check on the Kalman filter during satellite slew maneuvers. A 1750A processor will be flown on board the satellite for the first time as a coprocessor (COP) in addition to the NASA Standard Spacecraft Computer. The attitude determination algorithm, which will be resident in the COP's memory, will make full use of its improved processing capabilities to meet mission requirements. Additional benefits were gained by writing the attitude determination code in Ada.
Spin Filtering in Storage Rings
NASA Astrophysics Data System (ADS)
Nikolaev, N. N.; Pavlov, F. F.
The spin filtering in storage rings is based on a multiple passage of a stored beam through a polarized internal gas target. Apart from the polarization by the spin-dependent transmission, a unique geometrical feature of interaction with the target in such a filtering process, pointed out by H.O. Meyer [1], is a scattering of stored particles within the beam. A rotation of the spin in the scattering process affects the polarization buildup. We derive here a quantum-mechanical evolution equation for the spin-density matrix of a stored beam which incorporates the scattering within the beam. We show how the interplay of the transmission and scattering within the beam changes from polarized electrons to polarized protons in the atomic target. After discussions of the FILTEX results on the filtering of stored protons [2], we comment on the strategy of spin filtering of antiprotons for the PAX experiment at GSI FAIR [3].
Filter Tuning Using the Chi-Squared Statistic
NASA Technical Reports Server (NTRS)
Lilly-Salkowski, Tyler B.
2017-01-01
This paper examines the use of the Chi-squared statistic as a means of evaluating filter performance. The goal of the process is to characterize the filter performance in the metric of covariance realism. The Chi-squared statistic is calculated to determine the realism of a covariance based on the prediction accuracy and the covariance values at a given point in time. Once calculated, it is the distribution of this statistic that provides insight into the accuracy of the covariance. The process of tuning an Extended Kalman Filter (EKF) for Aqua and Aura support is described, including examination of the measurement errors of available observation types and methods of dealing with potentially volatile atmospheric drag modeling. Predictive accuracy and the distribution of the Chi-squared statistic, calculated from EKF solutions, are assessed.
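The realism check rests on a standard fact: for a prediction error e ~ N(0, P_true), the statistic e^T P^{-1} e computed with the filter's reported covariance P follows a chi-squared distribution with n degrees of freedom when P matches P_true, so its sample mean should sit near n. A minimal sketch, with an illustrative covariance rather than any EKF output:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 3, 5000
P_true = np.diag([1.0, 4.0, 0.25])     # hypothetical true error covariance
P_inv = np.linalg.inv(P_true)          # filter covariance assumed realistic

# Simulated prediction errors and their chi-squared statistics
errors = rng.multivariate_normal(np.zeros(n), P_true, trials)
stats = np.einsum('ij,jk,ik->i', errors, P_inv, errors)
mean_stat = stats.mean()               # should be close to n = 3
```

A mean well above n indicates an overconfident (too small) reported covariance, and a mean well below n an inflated one; tuning the EKF amounts to adjusting noise models until the statistic's distribution matches the chi-squared reference.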
Decomposition of cellulose by ultrasonic welding in water
NASA Astrophysics Data System (ADS)
Nomura, Shinfuku; Miyagawa, Seiya; Mukasa, Shinobu; Toyota, Hiromichi
2016-07-01
The use of ultrasonic welding in water to decompose cellulose placed in water was examined experimentally. Filter paper was used as the decomposition material, with a 19.5 kHz horn-type transducer adopted as the ultrasonic welding power source. The frictional heat at the point where the surface of the tip of the ultrasonic horn contacts the filter paper decomposes the cellulose in the filter paper into 5-hydroxymethylfurfural (5-HMF), furfural, and oligosaccharides through the hydrolysis and thermolysis that occur in the welding process.
A P-band SAR interference filter
NASA Technical Reports Server (NTRS)
Taylor, Victor B.
1992-01-01
The synthetic aperture radar (SAR) interference filter is an adaptive filter designed to reduce the effects of interference while minimizing the introduction of undesirable side effects. The author examines the adaptive spectral filter and the improvement in processed SAR imagery using this filter for Jet Propulsion Laboratory Airborne SAR (JPL AIRSAR) data. The quality of these improvements is determined through several data fidelity criteria, such as point-target impulse response, equivalent number of looks, SNR, and polarization signatures. These parameters are used to characterize two data sets, both before and after filtering. The first data set consists of data with the interference present in the original signal, and the second set consists of clean data which has been coherently injected with interference acquired from another scene.
Image quality enhancement for skin cancer optical diagnostics
NASA Astrophysics Data System (ADS)
Bliznuks, Dmitrijs; Kuzmina, Ilona; Bolocko, Katrina; Lihachev, Alexey
2017-12-01
The research presents an image quality analysis and enhancement proposals in the biophotonics area. The sources of image problems are reviewed and analyzed. The problems with the greatest impact in the biophotonics area are analyzed in terms of a specific biophotonic task: skin cancer diagnostics. The results point out that the main problem for skin cancer analysis is uneven skin illumination. Since it is often not possible to prevent illumination problems, the paper proposes an image post-processing algorithm: low-frequency filtering. Practical results show an improvement in diagnostic results after using the proposed filter. Moreover, the filter does not reduce the quality of diagnostic results for images without illumination defects. The current filtering algorithm requires empirical tuning of the filter parameters. Further work is needed to test the algorithm in other biophotonic applications and to propose automatic filter parameter selection.
Effects of dispense equipment sequence on process start-up defects
NASA Astrophysics Data System (ADS)
Brakensiek, Nick; Sevegney, Michael
2013-03-01
Photofluid dispense systems within coater/developer tools have been designed with the intent to minimize cost of ownership to the end user. Waste and defect minimization, dispense quality and repeatability, and ease of use are all desired characteristics. One notable change within commercially available systems is the sequence in which process fluid encounters dispense pump and filtration elements. Traditionally, systems adopted a pump-first sequence, where fluid is "pushed through" a point-of-use filter just prior to dispensing on the wafer. Recently, systems configured in a pump-last scheme have become available, where fluid is "pulled through" the filter, into the pump, and then is subsequently dispensed. The present work constitutes a comparative evaluation of the two equipment sequences with regard to the aforementioned characteristics that impact cost of ownership. Additionally, removal rating and surface chemistry (i.e., hydrophilicity) of the point-of-use filter are varied in order to evaluate their influence on system start-up and defects.
Experimental comparison of point-of-use filters for drinking water ultrafiltration.
Totaro, M; Valentini, P; Casini, B; Miccoli, M; Costa, A L; Baggiani, A
2017-06-01
Waterborne pathogens such as Pseudomonas spp. and Legionella spp. may persist in hospital water networks despite chemical disinfection. Point-of-use filtration represents a physical control measure that can be applied in high-risk areas to contain the exposure to such pathogens. New technologies have enabled an extension of filters' lifetimes and have made available faucet hollow-fibre filters for water ultrafiltration. The aim was to compare point-of-use filters applied to cold water within their period of validity. Faucet hollow-fibre filters (filter A), shower hollow-fibre filters (filter B) and faucet membrane filters (filter C) were contaminated in two different sets of tests with standard bacterial strains (Pseudomonas aeruginosa DSM 939 and Brevundimonas diminuta ATCC 19146) and installed at points-of-use. Every day, 100 L of water was flushed from each faucet. Before and after flushing, 250 mL of water was collected and analysed for microbiology. Filter C showed a high capacity for microbial retention; filter B released only low Brevundimonas spp. counts; filter A showed poor retention of both micro-organisms. Hollow-fibre filters did not show good micro-organism retention. All point-of-use filters require appropriate maintenance of structural parameters to ensure their efficiency. Copyright © 2016 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
Real-time object tracking based on scale-invariant features employing bio-inspired hardware.
Yasukawa, Shinsuke; Okuno, Hirotsugu; Ishii, Kazuo; Yagi, Tetsuya
2016-09-01
We developed a vision sensor system that performs a scale-invariant feature transform (SIFT) in real time. To apply the SIFT algorithm efficiently, we focus on a two-fold process performed by the visual system: whole-image parallel filtering and frequency-band parallel processing. The vision sensor system comprises an active pixel sensor, a metal-oxide semiconductor (MOS)-based resistive network, a field-programmable gate array (FPGA), and a digital computer. We employed the MOS-based resistive network for instantaneous spatial filtering and a configurable filter size. The FPGA is used to pipeline process the frequency-band signals. The proposed system was evaluated by tracking the feature points detected on an object in a video. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Liwei; Liu, Xinggao; Zhang, Zeyin
2017-02-01
An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.
NASA Astrophysics Data System (ADS)
Ji, S.; Yuan, X.
2016-06-01
A generic probabilistic model, under the fundamental Bayes' rule and the Markov assumption, is introduced to integrate the process of mobile platform localization with optical sensors. Based on it, three relatively independent solutions, bundle adjustment, Kalman filtering and particle filtering, are deduced under different additional restrictions. We want to prove that, first, Kalman filtering may be a better initial-value supplier for bundle adjustment than traditional relative orientation in irregular strips and networks or when tie-point extraction fails. Second, in highly noisy conditions, particle filtering can act as a bridge for gap binding when a large number of gross errors cause a Kalman filtering or a bundle adjustment to fail. Third, both filtering methods, which help reduce error propagation and eliminate gross errors, guarantee a global and static bundle adjustment, which requires the strictest initial values and control conditions. The main innovation concerns the integrated processing of stochastic errors and gross errors in sensor observations, and the integration of the three most used solutions, bundle adjustment, Kalman filtering and particle filtering, into a generic probabilistic localization model. Tests in noisy and restricted situations are designed and examined to verify these claims.
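The robustness of particle filtering to gross errors comes largely from its resampling step, which duplicates high-weight particles and discards low-weight ones. A minimal systematic-resampling sketch, with illustrative weights and a fixed offset in place of the usual random draw:

```python
import numpy as np

def systematic_resample(weights, n_out, u0):
    """Systematic resampling: u0 in [0, 1) fixes the evenly spaced probe positions."""
    positions = (u0 + np.arange(n_out)) / n_out
    cumulative = np.cumsum(weights)
    return np.searchsorted(cumulative, positions)   # resampled particle indices

idx = systematic_resample(np.array([0.5, 0.3, 0.2]), n_out=10, u0=0.5)
counts = np.bincount(idx, minlength=3)              # copies per original particle
```

Each particle receives a number of copies proportional to its weight (here 5, 3 and 2 out of 10), so particles corresponding to gross-error hypotheses are eliminated while the filter's population size stays fixed.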
Fuzzy adaptive interacting multiple model nonlinear filter for integrated navigation sensor fusion.
Tseng, Chien-Hao; Chang, Chih-Wen; Jwo, Dah-Jing
2011-01-01
In this paper, the application of the fuzzy interacting multiple model unscented Kalman filter (FUZZY-IMMUKF) approach to integrated navigation processing for the maneuvering vehicle is presented. The unscented Kalman filter (UKF) employs a set of sigma points through deterministic sampling, such that a linearization process is not necessary, and therefore the errors caused by linearization as in the traditional extended Kalman filter (EKF) can be avoided. The nonlinear filters naturally suffer, to some extent, the same problem as the EKF for which the uncertainty of the process noise and measurement noise will degrade the performance. As a structural adaptation (model switching) mechanism, the interacting multiple model (IMM), which describes a set of switching models, can be utilized for determining the adequate value of process noise covariance. The fuzzy logic adaptive system (FLAS) is employed to determine the lower and upper bounds of the system noise through the fuzzy inference system (FIS). The resulting sensor fusion strategy can efficiently deal with the nonlinear problem for the vehicle navigation. The proposed FUZZY-IMMUKF algorithm shows remarkable improvement in the navigation estimation accuracy as compared to the relatively conventional approaches such as the UKF and IMMUKF.
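The deterministic sigma-point sampling that lets the UKF avoid linearization can be shown in isolation: 2n+1 points and weights are constructed so that their weighted mean and covariance reproduce the original statistics exactly. The (alpha, beta, kappa) scaling convention is the common one, and the numerical values are illustrative.

```python
import numpy as np

def sigma_points(x, P, alpha=1e-1, beta=2.0, kappa=0.0):
    """Generate 2n+1 UKF sigma points with mean and covariance weights."""
    n = len(x)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)          # matrix square root
    X = np.vstack([x, x + S.T, x - S.T])           # (2n+1, n) sigma points
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1 - alpha ** 2 + beta)
    return X, Wm, Wc

x = np.array([1.0, -2.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
X, Wm, Wc = sigma_points(x, P)
mean = Wm @ X                                      # recovers x
diff = X - mean
cov = (Wc[:, None] * diff).T @ diff                # recovers P
```

In a full UKF step, each sigma point is propagated through the nonlinear dynamics or measurement function and the same weights recombine the transformed points, capturing the posterior mean and covariance without computing Jacobians as the EKF does.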
The Ensemble Kalman filter: a signal processing perspective
NASA Astrophysics Data System (ADS)
Roth, Michael; Hendeby, Gustaf; Fritsche, Carsten; Gustafsson, Fredrik
2017-12-01
The ensemble Kalman filter (EnKF) is a Monte Carlo-based implementation of the Kalman filter (KF) for extremely high-dimensional, possibly nonlinear, and non-Gaussian state estimation problems. Its ability to handle state dimensions on the order of millions has made the EnKF a popular algorithm in different geoscientific disciplines. Despite a similarly vital need for scalable algorithms in signal processing, e.g., to make sense of the ever-increasing amount of sensor data, the EnKF is hardly discussed in our field. This self-contained review is aimed at signal processing researchers and provides all the knowledge needed to get started with the EnKF. The algorithm is derived in a KF framework, without the often encountered geoscientific terminology. Algorithmic challenges and required extensions of the EnKF are discussed, as well as relations to sigma-point KFs and particle filters. The relevant EnKF literature is summarized in an extensive survey, and unique simulation examples, including popular benchmark problems, complement the theory with practical insights. The signal processing perspective highlights new directions of research and facilitates the exchange of potentially beneficial ideas, both for the EnKF and for high-dimensional nonlinear and non-Gaussian filtering in general.
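As a starting point, a minimal stochastic-EnKF analysis step can be written in a few lines of NumPy. This is a generic textbook sketch, not code from the review; the toy dimensions, observation model, and noise levels are assumptions, and a real EnKF would avoid forming the full state covariance explicitly:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, y, H, R):
    """Stochastic EnKF analysis step with perturbed observations.

    X -- forecast ensemble, shape (n, N): N state vectors of dimension n
    y -- observation vector, shape (m,)
    H -- linear observation matrix, shape (m, n)
    R -- observation noise covariance, shape (m, m)
    """
    n, N = X.shape
    # Ensemble anomalies around the ensemble mean.
    A = X - X.mean(axis=1, keepdims=True)
    # Sample covariance from the ensemble (kept explicit for clarity;
    # large-n implementations apply H to the anomalies instead).
    P = A @ A.T / (N - 1)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    # Perturbed observations: one independent noise draw per member.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)

# Toy example: 2-state system, scalar observation of the first state.
N = 500
X = rng.normal(0.0, 1.0, size=(2, N))
H = np.array([[1.0, 0.0]])
R = np.array([[0.01]])
Xa = enkf_update(X, np.array([2.0]), H, R)
```

After the update, the ensemble mean of the observed state moves close to the measurement and the ensemble spread contracts, which is exactly the behavior the review derives in the KF framework.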
Jacobson, Robert B.; Parsley, Michael J.; Annis, Mandy L.; Colvin, Michael E.; Welker, Timothy L.; James, Daniel A.
2016-01-20
The initial set of candidate hypotheses provides a useful starting point for quantitative modeling and adaptive management of the river and species. We anticipate that hypotheses will change from the set of working management hypotheses as adaptive management progresses. Importantly, hypotheses that have been filtered out of our multistep process are not permanently discarded: these filtered hypotheses are archived, and if the existing hypotheses prove inadequate to explain observed population dynamics, new hypotheses can be created or filtered hypotheses can be reinstated.
Data analysis using scale-space filtering and Bayesian probabilistic reasoning
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak; Kutulakos, Kiriakos; Robinson, Peter
1991-01-01
This paper describes a program for the analysis of output curves from a Differential Thermal Analyzer (DTA). The program first extracts probabilistic qualitative features from a DTA curve of a soil sample, and then uses Bayesian probabilistic reasoning to infer the minerals in the soil. The qualifier module employs a simple and efficient extension of scale-space filtering suitable for handling DTA data. We have observed that points can vanish from contours in the scale-space image when filtering operations are not highly accurate. To handle the problem of vanishing points, perceptual organization heuristics are used to group the points into lines. Next, these lines are grouped into contours using additional heuristics. Probabilities are associated with these contours using domain-specific correlations. A Bayes tree classifier processes the probabilistic features to infer the presence of different minerals in the soil. Experiments show that the algorithm that uses domain-specific correlations to infer qualitative features outperforms a domain-independent algorithm that does not.
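The scale-space property the qualifier module relies on, that coarser Gaussian smoothing removes extrema so contours vanish as scale increases, can be illustrated with a short sketch. The synthetic curve and scales below are invented for illustration and are not the paper's DTA pipeline:

```python
import numpy as np

def gaussian_kernel(sigma):
    """Discrete, normalized Gaussian smoothing kernel."""
    radius = int(4 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def count_extrema(signal):
    """Count local maxima and minima via sign changes of the derivative."""
    d = np.diff(signal)
    return int(np.sum(np.sign(d[:-1]) * np.sign(d[1:]) < 0))

rng = np.random.default_rng(1)
curve = np.cumsum(rng.normal(size=2000))   # noisy synthetic "DTA curve"
scales = [1.0, 4.0, 16.0]
extrema = [count_extrema(np.convolve(curve, gaussian_kernel(s), mode="same"))
           for s in scales]
```

Tracking which extrema survive from fine to coarse scales is what builds the contours of the scale-space image; the vanishing-point problem the paper describes arises when this tracking is disrupted by inaccurate filtering.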
Desensitized Optimal Filtering and Sensor Fusion Toolkit
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.
2015-01-01
Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions, as well as Monte Carlo analysis capability, is included to enable statistical performance evaluations.
Comparison of Sigma-Point and Extended Kalman Filters on a Realistic Orbit Determination Scenario
NASA Technical Reports Server (NTRS)
Gaebler, John; Hur-Diaz, Sun; Carpenter, Russell
2010-01-01
Sigma-point filters have received a lot of attention in recent years as a better alternative to extended Kalman filters for highly nonlinear problems. In this paper, we compare the performance of the additive divided difference sigma-point filter to the extended Kalman filter when applied to orbit determination of a realistic operational scenario based on the Interstellar Boundary Explorer mission. For the scenario studied, both filters provided equivalent results. The performance of each is discussed in detail.
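For readers unfamiliar with sigma-point methods, the core construction is the unscented transform: a deterministic set of 2n+1 points that exactly reproduces a given mean and covariance. The sketch below is the generic textbook form, not the additive divided difference filter used in the paper; the state, covariance, and kappa value are illustrative assumptions:

```python
import numpy as np

def sigma_points(x, P, kappa=1.0):
    """Generate the 2n+1 sigma points and weights of the unscented transform.

    x -- mean, shape (n,);  P -- covariance, shape (n, n)
    """
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)   # matrix square root (columns)
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

x = np.array([1.0, -1.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
pts, w = sigma_points(x, P, kappa=1.0)

# The weighted sample moments recover the mean and covariance exactly.
mean = w @ pts
cov = (pts - mean).T @ np.diag(w) @ (pts - mean)
```

In a filter, each sigma point is propagated through the nonlinear dynamics or measurement function and the weighted moments of the transformed points replace the Jacobian-based linearization of the EKF.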
Neuro-inspired smart image sensor: analog Hmax implementation
NASA Astrophysics Data System (ADS)
Paindavoine, Michel; Dubois, Jérôme; Musa, Purnawarman
2015-03-01
The neuro-inspired vision approach, based on models from biology, reduces computational complexity. One of these models, the Hmax model, shows that the recognition of an object in the visual cortex mobilizes areas V1, V2, and V4. From the computational point of view, V1 corresponds to the stage of directional filters (for example Sobel, Gabor, or wavelet filters). This information is then processed in area V2 in order to obtain local maxima. The resulting information is sent to an artificial neural network; this neural processing module corresponds to area V4 of the visual cortex and is intended to categorize objects present in the scene. In order to realize autonomous vision systems (consuming a few milliwatts) with such processing built in, we studied and realized prototypes of two image sensors in 0.35 μm CMOS technology that implement the V1 and V2 processing of the Hmax model.
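A software analogue of the V1/V2 stages can be sketched as directional filtering followed by local maximum pooling. The Sobel kernels, synthetic image, and pooling size below are illustrative assumptions, not the analog sensor design:

```python
import numpy as np

def conv2(img, k):
    """Naive 'valid' 2D cross-correlation (no kernel flip); fine for small kernels."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def local_max_pool(resp, size=2):
    """V2-style local maximum over non-overlapping windows."""
    H, W = resp.shape
    H2, W2 = H // size, W // size
    r = resp[:H2 * size, :W2 * size].reshape(H2, size, W2, size)
    return r.max(axis=(1, 3))

# V1 stage: directional (Sobel) filters on a synthetic vertical-edge image.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
sobel_y = sobel_x.T
img = np.zeros((16, 16))
img[:, 8:] = 1.0
v1_x = np.abs(conv2(img, sobel_x))
v1_y = np.abs(conv2(img, sobel_y))

# V2 stage: pool the strongest directional response locally.
v2 = local_max_pool(np.maximum(v1_x, v1_y))
```

The vertical edge excites only the horizontal-gradient filter, and the pooled map keeps that response at coarser resolution, which is the compressed representation forwarded to the V4-like classifier stage.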
NASA Astrophysics Data System (ADS)
Xie, Yiwei; Geng, Zihan; Zhuang, Leimeng; Burla, Maurizio; Taddei, Caterina; Hoekman, Marcel; Leinse, Arne; Roeloffzen, Chris G. H.; Boller, Klaus-J.; Lowery, Arthur J.
2017-12-01
Integrated optical signal processors have been identified as a powerful engine for optical processing of microwave signals. They enable wideband and stable signal processing operations on miniaturized chips with ultimate control precision. As a promising application, such processors enable photonic implementations of reconfigurable radio frequency (RF) filters with wide design flexibility, large bandwidth, and high frequency selectivity. This is a key technology for photonic-assisted RF front ends and opens a path to overcoming the bandwidth limitation of current digital electronics. Here, the recent progress of integrated optical signal processors for implementing such RF filters is reviewed. We highlight the use of a low-loss, high-index-contrast stoichiometric silicon nitride waveguide, which promises to serve as a practical material platform for realizing high-performance optical signal processors and points toward photonic RF filters with digital signal processing (DSP)-level flexibility, hundreds-of-GHz bandwidth, MHz-band frequency selectivity, and full system integration on a chip scale.
Wegmann, Markus; Michen, Benjamin; Luxbacher, Thomas; Fritsch, Johannes; Graule, Thomas
2008-03-01
The purpose of this study was to test the feasibility of modifying commercial microporous ceramic bacteria filters to promote adsorption of viruses. The internal surface of the filter medium was coated with ZrO2 nanopowder via dip-coating and heat treatment in order to impart a filter surface charge opposite to that of the target viruses. Streaming potential measurements revealed a shift in the isoelectric point from pH <3 to between pH 5.5 and 9. While the base filter elements generally exhibited only 75% retention with respect to MS2 bacteriophages, the modified elements achieved a 7-log removal (99.99999%) of these virus-like particles. The coating process also increased the specific surface area of the filters from approximately 2 m²/g to between 12.5 and 25.5 m²/g, thereby also potentially increasing their adsorption capacity. The results demonstrate that, given more development effort, the chosen manufacturing process has the potential to yield effective virus filters with throughputs superior to those of current virus filtration techniques.
ASIC For Complex Fixed-Point Arithmetic
NASA Technical Reports Server (NTRS)
Petilli, Stephen G.; Grimm, Michael J.; Olson, Erlend M.
1995-01-01
Application-specific integrated circuit (ASIC) performs 24-bit, fixed-point arithmetic operations on arrays of complex-valued input data. High-performance, wide-band arithmetic logic unit (ALU) designed for use in computing fast Fourier transforms (FFTs) and for performing digital filtering functions. Other applications include general computations involved in analysis of spectra and digital signal processing.
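The kind of arithmetic such an ASIC performs can be modeled in software. The sketch below assumes a Q1.23 format (sign bit plus 23 fractional bits) with saturation, which is one plausible convention; the article does not specify the exact number format:

```python
FRAC_BITS = 23              # assumed Q1.23: sign bit + 23 fractional bits
SCALE = 1 << FRAC_BITS
MIN24, MAX24 = -(1 << 23), (1 << 23) - 1

def to_fix(x):
    """Quantize a float in [-1, 1) to 24-bit fixed point, with saturation."""
    return max(MIN24, min(MAX24, int(round(x * SCALE))))

def fix_mul(a, b):
    """Fixed-point multiply: full-precision product, then rescale and saturate."""
    return max(MIN24, min(MAX24, (a * b) >> FRAC_BITS))

def cmul_fix(ar, ai, br, bi):
    """Complex multiply (ar + j*ai)(br + j*bi) in 24-bit fixed point,
    the butterfly core of an FFT or FIR filtering stage."""
    re = fix_mul(ar, br) - fix_mul(ai, bi)
    im = fix_mul(ar, bi) + fix_mul(ai, br)
    return re, im

# 0.5 * (0.25 + 0.25j) = 0.125 + 0.125j
re, im = cmul_fix(to_fix(0.5), to_fix(0.0), to_fix(0.25), to_fix(0.25))
```

A hardware ALU would additionally define rounding and overflow behavior per stage; the saturating convention above is only one common choice.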
Warburton, William K.; Momayezi, Michael
2006-06-20
A method and apparatus for processing the step-like output signals (primary signals) generated by non-ideal, for example nominally single-pole ("N-1P"), devices. An exemplary method includes: creating a set of secondary signals by directing the primary signal along a plurality of signal paths to a signal summation point; summing the secondary signals reaching the signal summation point after propagating along the signal paths to provide a summed signal; performing a filtering or delaying operation in at least one of said signal paths so that the secondary signals reaching said summation point have a defined time correlation with respect to one another; applying a set of weighting coefficients to the secondary signals propagating along said signal paths; and performing a capturing operation after any filtering or delaying operations so as to provide a weighted signal sum value as a measure of the integrated area QgT of the input signal.
Application of square-root filtering for spacecraft attitude control
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Schmidt, S. F.; Goka, T.
1978-01-01
Suitable digital algorithms are developed and tested for providing on-board precision attitude estimation and pointing control for potential use in the Landsat-D spacecraft. These algorithms provide pointing accuracy of better than 0.01 deg. To obtain the necessary precision with efficient software, a six-state-variable square-root Kalman filter combines two star tracker measurements to update attitude estimates obtained from processing three gyro outputs. The validity of the estimation and control algorithms is established, and the sensitivity of their performance to various error sources and software parameters is investigated by detailed digital simulation. Spacecraft computer memory, cycle time, and accuracy requirements are estimated.
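The numerical advantage of square-root filtering is that the square root of the covariance, rather than the covariance itself, is propagated, preserving symmetry and positive definiteness in limited-precision flight software. Below is a minimal sketch of one classical scalar-measurement square-root update (Potter's form, a generic textbook variant, not necessarily the Landsat-D implementation; the state and measurement values are invented):

```python
import numpy as np

def potter_update(x, S, z, H, R):
    """Potter square-root measurement update for a scalar observation.

    x -- state estimate (n,);  S -- covariance square root, P = S @ S.T
    z -- scalar measurement;   H -- observation row (n,);  R -- scalar variance
    """
    phi = S.T @ H                      # n-vector
    alpha = float(phi @ phi + R)       # innovation variance
    K = S @ phi / alpha                # Kalman gain
    gamma = 1.0 / (1.0 + np.sqrt(R / alpha))
    # Update the square root directly; P is never formed.
    S_new = S - gamma * np.outer(S @ phi, phi) / alpha
    x_new = x + K * (z - H @ x)
    return x_new, S_new

x = np.zeros(2)
S = np.linalg.cholesky(np.array([[1.0, 0.2], [0.2, 0.5]]))
H = np.array([1.0, 0.0])
x1, S1 = potter_update(x, S, z=1.0, H=H, R=0.1)
```

Reconstructing P = S·Sᵀ after the update reproduces the conventional Kalman covariance update, but the factored form cannot lose positive definiteness to round-off.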
NASA Technical Reports Server (NTRS)
Zeiler, Thomas A.; Pototzky, Anthony S.
1989-01-01
A theoretical basis and example calculations are given that demonstrate the relationship between the Matched Filter Theory approach to the calculation of time-correlated gust loads and Phased Design Load Analysis in common use in the aerospace industry. The relationship depends upon the duality between Matched Filter Theory and Random Process Theory and upon the fact that Random Process Theory is used in Phased Design Loads Analysis in determining an equiprobable loads design ellipse. Extensive background information describing the relevant points of Phased Design Loads Analysis, calculating time-correlated gust loads with Matched Filter Theory, and the duality between Matched Filter Theory and Random Process Theory is given. It is then shown that the time histories of two time-correlated gust load responses, determined using the Matched Filter Theory approach, can be plotted as parametric functions of time and that the resulting plot, when superposed upon the design ellipse corresponding to the two loads, is tangent to the ellipse. The question is raised of whether or not it is possible for a parametric load plot to extend outside the associated design ellipse. If it is possible, then the use of the equiprobable loads design ellipse will not be a conservative design practice in some circumstances.
Are Local Filters Blind to Provenance? Ant Seed Predation Suppresses Exotic Plants More than Natives
Pearson, Dean E.; Icasatti, Nadia S.; Hierro, Jose L.; Bird, Benjamin J.
2014-01-01
The question of whether species’ origins influence invasion outcomes has been a point of substantial debate in invasion ecology. Theoretically, colonization outcomes can be predicted based on how species’ traits interact with community filters, a process presumably blind to species’ origins. Yet, exotic plant introductions commonly result in monospecific plant densities not commonly seen in native assemblages, suggesting that exotic species may respond to community filters differently than natives. Here, we tested whether exotic and native species differed in their responses to a local community filter by examining how ant seed predation affected recruitment of eighteen native and exotic plant species in central Argentina. Ant seed predation proved to be an important local filter that strongly suppressed plant recruitment, but ants suppressed exotic recruitment far more than natives (89% of exotic species vs. 22% of natives). Seed size predicted ant impacts on recruitment independent of origins, with ant preference for smaller seeds resulting in smaller seeded plant species being heavily suppressed. The disproportionate effects of provenance arose because exotics had generally smaller seeds than natives. Exotics also exhibited greater emergence and earlier peak emergence than natives in the absence of ants. However, when ants had access to seeds, these potential advantages of exotics were negated due to the filtering bias against exotics. The differences in traits we observed between exotics and natives suggest that higher-order introduction filters or regional processes preselected for certain exotic traits that then interacted with the local seed predation filter. Our results suggest that the interactions between local filters and species traits can predict invasion outcomes, but understanding the role of provenance will require quantifying filtering processes at multiple hierarchical scales and evaluating interactions between filters. PMID:25099535
Bacterial community structure in the drinking water microbiome is governed by filtration processes.
Pinto, Ameet J; Xi, Chuanwu; Raskin, Lutgarde
2012-08-21
The bacterial community structure of a drinking water microbiome was characterized over three seasons using 16S rRNA gene based pyrosequencing of samples obtained from source water (a mix of a groundwater and a surface water), different points in a drinking water plant operated to treat this source water, and in the associated drinking water distribution system. Even though the source water was shown to seed the drinking water microbiome, treatment process operations limit the source water's influence on the distribution system bacterial community. Rather, in this plant, filtration by dual media rapid sand filters played a primary role in shaping the distribution system bacterial community over seasonal time scales as the filters harbored a stable bacterial community that seeded the water treatment processes past filtration. Bacterial taxa that colonized the filter and sloughed off in the filter effluent were able to persist in the distribution system despite disinfection of finished water by chloramination and filter backwashing with chloraminated backwash water. Thus, filter colonization presents a possible ecological survival strategy for bacterial communities in drinking water systems, which presents an opportunity to control the drinking water microbiome by manipulating the filter microbial community. Grouping bacterial taxa based on their association with the filter helped to elucidate relationships between the abundance of bacterial groups and water quality parameters and showed that pH was the strongest regulator of the bacterial community in the sampled drinking water system.
ARTSN: An Automated Real-Time Spacecraft Navigation System
NASA Technical Reports Server (NTRS)
Burkhart, P. Daniel; Pollmeier, Vincent M.
1996-01-01
As part of the Deep Space Network (DSN) advanced technology program, an effort is underway to design a filter to automate the deep space navigation process. The automated real-time spacecraft navigation (ARTSN) filter task is based on a prototype consisting of a FORTRAN77 package operating on an HP-9000/700 workstation running HP-UX 9.05. This will be converted to C and maintained as the operational version. The processing tasks required are: (1) read a measurement, (2) integrate the spacecraft state to the current measurement time, (3) compute the observable based on the integrated state, and (4) incorporate the measurement information into the state using an extended Kalman filter. This filter processes radiometric data collected by the DSN. The dynamic (force) models currently include point-mass gravitational terms for all planets, the Sun, and the Moon; solar radiation pressure; finite maneuvers; and attitude maintenance activity modeled quadratically. In addition, observable errors due to the troposphere are included. Further data types and force and observable models will be included to enhance the accuracy of the models and the capability of the package. The heart of the ARTSN is a currently available continuous-discrete extended Kalman filter. Simulated data used to test the implementation at various stages of development and the results from processing actual mission data are presented.
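The four processing tasks listed above form a standard extended Kalman filter cycle. The toy sketch below, a 2D state with a range-from-origin observable, is invented for illustration and is far simpler than ARTSN's radiometric and force models:

```python
import numpy as np

def ekf_step(x, P, z, F, Q, h, H_jac, R):
    """One EKF cycle: propagate the state, then incorporate measurement z."""
    # (2) integrate (here: linear propagation) to the measurement time
    x = F @ x
    P = F @ P @ F.T + Q
    # (3) compute the predicted observable and its Jacobian
    z_pred = h(x)
    H = H_jac(x)
    # (4) incorporate the measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - z_pred)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy analogue: 2D position state, range measurement from the origin.
h = lambda x: np.array([np.hypot(x[0], x[1])])
H_jac = lambda x: np.array([[x[0], x[1]]]) / np.hypot(x[0], x[1])
F = np.eye(2)
Q = 0.01 * np.eye(2)
R = np.array([[0.01]])
x, P = np.array([3.0, 4.0]), np.eye(2)

# (1) "read" one range measurement of 5.5 and process it.
x, P = ekf_step(x, P, np.array([5.5]), F, Q, h, H_jac, R)
```

In ARTSN the propagation step is a numerical integration of the full force models and the observables are DSN radiometric quantities, but the cycle structure is identical.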
NASA Astrophysics Data System (ADS)
Ruffio, Jean-Baptiste; Macintosh, Bruce; Wang, Jason J.; Pueyo, Laurent; Nielsen, Eric L.; De Rosa, Robert J.; Czekala, Ian; Marley, Mark S.; Arriaga, Pauline; Bailey, Vanessa P.; Barman, Travis; Bulger, Joanna; Chilcote, Jeffrey; Cotten, Tara; Doyon, Rene; Duchêne, Gaspard; Fitzgerald, Michael P.; Follette, Katherine B.; Gerard, Benjamin L.; Goodsell, Stephen J.; Graham, James R.; Greenbaum, Alexandra Z.; Hibon, Pascale; Hung, Li-Wei; Ingraham, Patrick; Kalas, Paul; Konopacky, Quinn; Larkin, James E.; Maire, Jérôme; Marchis, Franck; Marois, Christian; Metchev, Stanimir; Millar-Blanchaer, Maxwell A.; Morzinski, Katie M.; Oppenheimer, Rebecca; Palmer, David; Patience, Jennifer; Perrin, Marshall; Poyneer, Lisa; Rajan, Abhijith; Rameau, Julien; Rantakyrö, Fredrik T.; Savransky, Dmitry; Schneider, Adam C.; Sivaramakrishnan, Anand; Song, Inseok; Soummer, Remi; Thomas, Sandrine; Wallace, J. Kent; Ward-Duong, Kimberly; Wiktorowicz, Sloane; Wolff, Schuyler
2017-06-01
We present a new matched-filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar point-spread function (PSF) is first subtracted using a Karhunen-Loève image processing (KLIP) algorithm with angular and spectral differential imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched-filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectra template and KLIP reduction aggressiveness to maximize the signal-to-noise ratio (S/N) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanets with minimal S/N loss. We also developed a complete pipeline for the automated detection of point-source candidates, the calculation of receiver operating characteristics (ROC), contrast curves based on false positives, and completeness contours. We process in a uniform manner more than 330 data sets from the Gemini Planet Imager Exoplanet Survey and assess GPI's typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at a survey scale accounting for both planet completeness and false-positive rate. We show that the new forward model matched filter allows the detection of 50% fainter objects than a conventional cross-correlation technique with a Gaussian PSF template for the same false-positive rate.
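Stripped of the KLIP forward model, the core matched-filter operation is cross-correlation of the data with a unit-energy template, normalized by the noise level. The 1D toy sketch below, with an invented Gaussian "planet" in white noise, is only an illustration of that principle, not the GPI pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

def matched_filter_snr(data, template):
    """Cross-correlate data with a unit-energy template; return an S/N map.
    The noise level is crudely estimated from the data itself."""
    t = template / np.linalg.norm(template)
    corr = np.correlate(data, t, mode="same")
    return corr / np.std(data)

# Planted Gaussian 'planet' PSF (sigma = 3 pixels) in unit white noise.
n = 1000
x = np.arange(n)
psf = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)
data = rng.normal(0.0, 1.0, n) + 5.0 * psf

template = np.exp(-0.5 * (np.arange(-15, 16) / 3.0) ** 2)
snr = matched_filter_snr(data, template)
peak = int(np.argmax(snr))
```

The matched filter concentrates the source energy into one S/N peak at the source position; the paper's contribution is making the template account for how KLIP distorts that source signature.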
Polski, J M; Kimzey, S; Percival, R W; Grosso, L E
1998-01-01
AIM: To provide a more efficient method for isolating DNA from peripheral blood for use in diagnostic DNA mutation analysis. METHODS: The use of blood impregnated filter paper and Chelex-100 in DNA isolation was evaluated and compared with standard DNA isolation techniques. RESULTS: In polymerase chain reaction (PCR) based assays of five point mutations, identical results were obtained with DNA isolated routinely from peripheral blood and isolated using the filter paper and Chelex-100 method. CONCLUSION: In the clinical setting, this method provides a useful alternative to conventional DNA isolation. It is easily implemented and inexpensive, and provides sufficient, stable DNA for multiple assays. The potential for specimen contamination is reduced because most of the steps are performed in a single microcentrifuge tube. In addition, this method provides for easy storage and transport of samples from the point of acquisition. PMID:9893748
Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I
2009-01-01
Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with an average relative size difference of 5% and -5% for the LoG and template-based methods respectively.
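The multi-scale LoG idea can be illustrated in one dimension: the scale-normalized LoG response to a Gaussian blob of width s peaks near sigma = sqrt(2)·s, so the winning scale estimates size while the position of the response peak estimates location. The synthetic signal and scales below are invented for illustration; the paper works on 3D CT data:

```python
import numpy as np

def norm_log_kernel(sigma):
    """Scale-normalized 1-D Laplacian-of-Gaussian kernel, sign-flipped so
    that bright blobs give positive responses."""
    radius = int(4 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    log = (x**2 / sigma**4 - 1.0 / sigma**2) * g
    return -sigma**2 * log

def best_scale(signal, scales):
    """Return (scale, position) of the strongest normalized LoG response."""
    best = (-np.inf, None, None)
    for s in scales:
        resp = np.convolve(signal, norm_log_kernel(s), mode="same")
        i = int(np.argmax(resp))
        if resp[i] > best[0]:
            best = (resp[i], s, i)
    return best[1], best[2]

x = np.arange(401)
blob = np.exp(-(x - 200.0) ** 2 / (2 * 4.0 ** 2))   # "nodule" of width 4
scale, pos = best_scale(blob, scales=[1.5, 5.7, 20.0])
```

In the paper's method, candidates are local maxima of such responses near the seed point, which is why the estimated location is largely insensitive to where the seed is placed.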
NASA Astrophysics Data System (ADS)
Quaranta, Giorgio; Basset, Guillaume; Benes, Zdenek; Martin, Olivier J. F.; Gallinet, Benjamin
2018-01-01
Resonant waveguide gratings (RWGs) are thin-film structures in which coupled modes interfere with the diffracted incoming wave and produce strong angular and spectral filtering. The combination of two finite-length, impedance-matched RWGs creates a passive beam-steering element that is compatible with up-scalable fabrication processes. Here, we propose a design method to create large patterns of such elements able to filter, steer, and focus the light from one point source to another. The method uses the geometry of ellipsoidal mirrors to choose a system of confocal prolate spheroids whose two focal points are the source point and the observation point; this yields the proper orientation and position of each RWG element of the pattern, such that the phase is constructively preserved at the observation point. The design techniques presented here could be implemented in a variety of systems where large-scale patterns are needed, such as optical security, multifocal or monochromatic lenses, biosensors, and see-through optical combiners for near-eye displays.
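The geometric property this design relies on is that every point on a prolate spheroid with foci at the source and observation points gives the same total path length, so elements placed on one spheroid preserve the phase constructively. A quick numerical check with arbitrary illustrative coordinates:

```python
import numpy as np

# Two focal points: source S and observation point O (illustrative positions).
S = np.array([0.0, 0.0, 0.0])
O = np.array([10.0, 0.0, 0.0])

def path_length(P):
    """Geometric path from the source to P to the observation point."""
    return np.linalg.norm(P - S) + np.linalg.norm(O - P)

# Sample points on the prolate spheroid |PS| + |PO| = 2a with a = 7.
a = 7.0
c = np.linalg.norm(O - S) / 2.0           # half focal distance
b = np.sqrt(a**2 - c**2)                  # semi-minor axis
center = (S + O) / 2.0
t = np.linspace(0, 2 * np.pi, 50)
pts = np.stack([center[0] + a * np.cos(t),
                b * np.sin(t),
                np.zeros_like(t)], axis=1)
lengths = np.array([path_length(P) for P in pts])
```

All sampled path lengths equal 2a, so any RWG element positioned on this surface (and oriented along its local normal geometry) contributes in phase at the observation point; choosing among the confocal family of spheroids sets the design wavelength.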
HEPA Filter Disposal Write-Up 10/19/16
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loll, C.
Process knowledge (PK) collection on HEPA filters is handled via the same process as other waste streams at LLNL. The field technician or characterization point of contact creates an information gathering document (IGD) in the IGD database, with input provided from the generator, and submits it for electronic approval. This document is essentially a waste generation profile, detailing the physical, chemical, and radiological characteristics and hazards of a waste stream. It will typically contain a general, but sometimes detailed, description of the work processes which generated the waste. It will contain PK as well as radiological and industrial hygiene analytical swipe results, and any other analytical or supporting knowledge related to characterization. The IGD goes through an electronic approval process to formalize the characterization and to ensure the waste has an appropriate disposal path. The waste generator is responsible for providing initial process knowledge information, and approves the IGD before it is routed to chemical and radiological waste characterization professionals. This is the standard characterization process for LLNL-generated HEPA filters.
Filter Feeding, Chaotic Filtration, and a Blinking Stokeslet
NASA Astrophysics Data System (ADS)
Blake, J. R.; Otto, S. R.; Blake, D. A.
The filtering mechanisms in bivalve molluscs, such as the mussel Mytilus edulis, and in sessile organisms, such as Vorticella or Stentor, involve complex fluid mechanical phenomena. In the former example, three different sets of cilia serving different functions are involved in the process, whereas in the sessile organisms the flexibility and contractile nature of the stalk may play an important role in increasing the filtering efficiency of the organisms. In both cases, beating microscopic cilia are the ``engines'' driving the fluid motion, so the fluid mechanics will be dominated entirely by viscous forces. A fluid mechanical model is developed for the filtering mechanism in mussels that enables estimates to be made of the pressure drop through the gill filaments due to (i) the latero-frontal filtering cilia, (ii) the lateral (pumping) cilia, and (iii) the non-ciliated zone of the ventral end of the filament. The velocity profile across the filaments indicates that a backflow can occur in the centre of the channel, leading to the formation of two ``standing'' eddies which may drive particles towards the mucus-laden short cilia, the third set of cilia. Filter feeding in the sessile organisms is modelled by a point force above a rigid boundary. The point force periodically changes its point of application according to a given protocol (a blinking stokeslet). The resulting fluid field is illustrated via Poincaré sections and particle dispersion, showing the potential for a much improved filtering efficiency. Returning to filter feeding in bivalve molluscs, this concept is extended to a pair of blinking stokeslets above a rigid boundary to give insight into possible mechanisms for movement of food particles onto the short mucus-bearing cilia. The appendix contains a Latin and English version of an ``Ode of Achievement'' in celebration of Sir James Lighthill's contributions to mathematics and fluid mechanics.
NASA Astrophysics Data System (ADS)
Wang, Deng-wei; Zhang, Tian-xu; Shi, Wen-jun; Wei, Long-sheng; Wang, Xiao-ping; Ao, Guo-qing
2009-07-01
Infrared images of sea backgrounds are notorious for their low signal-to-noise ratio, so target recognition in such images with traditional methods is very difficult. In this paper, we present a novel target recognition method based on the integration of a visual attention computational model and a conventional approach (selective filtering and segmentation). The two distinct image processing techniques are combined in a manner that exploits the strengths of both. The visual attention algorithm searches for salient regions automatically, represents them by a set of winner points, and marks the salient regions as circles centered at these winner points. This provides a priori knowledge for the filtering and segmentation process. Based on each winner point, we construct a rectangular region to facilitate filtering and segmentation; a labeling operation is then added selectively as required. Using the labeled information, we obtain the position of the region of interest from the final segmentation result, label the centroid on the corresponding original image, and thus localize the target. The processing time depends on the salient regions rather than the image size, and is therefore greatly reduced. The method is applied to the recognition of several kinds of real infrared images, and the experimental results demonstrate the effectiveness of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.
2018-04-01
Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold tuning, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The algorithm rests on the assumption that a point cloud can be seen as a mixture of Gaussian models, so that separating ground points from non-ground points can be recast as separating the components of a Gaussian mixture. Expectation-maximization (EM) is applied to perform the separation: EM computes maximum likelihood estimates of the mixture parameters, and with the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is labelled with the component of larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired with the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. For quantitative evaluation, this paper adopted the dataset provided by the ISPRS; the proposed algorithm obtains a 4.48% total error, which is lower than most of the eight classical filtering algorithms reported by the ISPRS.
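The EM separation described above can be sketched as a two-component 1D Gaussian mixture over point heights. This is a simplification: the paper's mixture model and its intensity-based refinement are richer, and the heights below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

def em_two_gaussians(z, iters=50):
    """EM for a two-component 1-D Gaussian mixture (ground vs. objects)."""
    mu = np.array([z.min(), z.max()])       # crude initialization
    var = np.array([z.var(), z.var()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        pdf = w * np.exp(-(z[:, None] - mu) ** 2 / (2 * var)) \
              / np.sqrt(2 * np.pi * var)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: maximum-likelihood re-estimates of the mixture parameters.
        nk = r.sum(axis=0)
        mu = (r * z[:, None]).sum(axis=0) / nk
        var = (r * (z[:, None] - mu) ** 2).sum(axis=0) / nk
        w = nk / len(z)
    return mu, var, w, r

# Synthetic heights: ground near 0 m, vegetation/buildings near 8 m.
z = np.concatenate([rng.normal(0.0, 0.3, 700), rng.normal(8.0, 1.0, 300)])
mu, var, w, resp = em_two_gaussians(z)
ground = resp.argmax(axis=1) == 0           # label by larger likelihood
```

No threshold appears anywhere: the labelling falls out of the estimated mixture, which is the sense in which the proposed method is threshold-free.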
On the Relation Between Facular Bright Points and the Magnetic Field
NASA Astrophysics Data System (ADS)
Berger, Thomas; Shine, Richard; Tarbell, Theodore; Title, Alan; Scharmer, Goran
1994-12-01
Multi-spectral images of magnetic structures in the solar photosphere are presented. The images were obtained in the summers of 1993 and 1994 at the Swedish Solar Telescope on La Palma using the tunable birefringent Solar Optical Universal Polarimeter (SOUP) filter, a 10 Angstrom wide interference filter tuned to 4304 Angstroms in the band head of the CH radical (the Fraunhofer G-band), and a 3 Angstrom wide interference filter centered on the Ca II K absorption line. Three large-format CCD cameras with shuttered exposures on the order of 10 msec and frame rates of up to 7 frames per second were used to create time series of both quiet and active region evolution. The full field of view is 60 × 80 arcseconds (44 × 58 Mm). With the best seeing, structures as small as 0.22 arcseconds (160 km) in diameter are clearly resolved. Post-processing of the images results in rigid coalignment of the image sets to an accuracy comparable to the spatial resolution. Facular bright points with mean diameters of 0.35 arcseconds (250 km) and elongated filaments with lengths on the order of arcseconds (10^3 km) are imaged with contrast values of up to 60 % in the G-band filter. Overlay of these images on contemporal Fe I 6302 Angstrom magnetograms and Ca II K images reveals that the bright points occur, without exception, on sites of magnetic flux through the photosphere. However, instances of concentrated and diffuse magnetic flux and Ca II K emission without associated bright points are common, leading to the conclusion that the presence of magnetic flux is a necessary but not sufficient condition for the occurrence of resolvable facular bright points. Comparison of the G-band and continuum images shows a complex relation between structures in the two bandwidths: bright points exceeding 350 km in extent correspond to distinct bright structures in the continuum; smaller bright points show no clear relation to continuum structures.
Size and contrast statistical cross-comparisons compiled from measurements of over two thousand bright point structures are presented. Preliminary analysis of the time evolution of bright points in the G-band reveals that the dominant mode of bright point evolution is fission of larger structures into smaller ones and fusion of small structures into conglomerates. The characteristic time scale of the fission/fusion process is on the order of minutes.
Spitzer Instrument Pointing Frame (IPF) Kalman Filter Algorithm
NASA Technical Reports Server (NTRS)
Bayard, David S.; Kang, Bryan H.
2004-01-01
This paper discusses the Spitzer Instrument Pointing Frame (IPF) Kalman Filter algorithm. The IPF Kalman filter is a high-order square-root iterated linearized Kalman filter, which is parametrized for calibrating the Spitzer Space Telescope focal plane and aligning the science instrument arrays with respect to the telescope boresight. The most stringent calibration requirement specifies knowledge of certain instrument pointing frames to an accuracy of 0.1 arcseconds, per-axis, 1-sigma relative to the Telescope Pointing Frame. In order to achieve this level of accuracy, the filter carries 37 states to estimate desired parameters while also correcting for expected systematic errors due to: (1) optical distortions, (2) scanning mirror scale-factor and misalignment, (3) frame alignment variations due to thermomechanical distortion, and (4) gyro bias and bias-drift in all axes. The resulting estimated pointing frames and calibration parameters are essential for supporting on-board precision pointing capability, in addition to end-to-end 'pixels on the sky' ground pointing reconstruction efforts.
Terrain Extraction by Integrating Terrestrial Laser Scanner Data and Spectral Information
NASA Astrophysics Data System (ADS)
Lau, C. L.; Halim, S.; Zulkepli, M.; Azwan, A. M.; Tang, W. L.; Chong, A. K.
2015-10-01
The extraction of true terrain points from unstructured laser point cloud data is an important process for producing an accurate digital terrain model (DTM). However, most spatial filtering methods use only geometrical data to discriminate terrain points from non-terrain points. Point cloud filtering can also be improved by using the spectral information available from some scanners. The objective of this study is therefore to investigate the effectiveness of using the three channels (red, green and blue) of the colour image captured by the built-in digital camera available on some Terrestrial Laser Scanners (TLS) for terrain extraction. The data acquisition was conducted at a mini replica landscape at Universiti Teknologi Malaysia (UTM), Skudai campus, using a Leica ScanStation C10. Spectral information of the coloured point clouds from selected sample classes was extracted for spectral analysis, and coloured points falling within the corresponding preset spectral thresholds were identified as belonging to that specific feature class. This terrain extraction process was implemented in Matlab. The results demonstrate that a passive image of higher spectral resolution is required to improve the output, because the low quality of the colour images captured by the sensor leads to low separability in spectral reflectance. In conclusion, this study shows that spectral information can be used as a parameter for terrain extraction.
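The spectral-thresholding step reduces to a per-channel window test on each coloured point. The threshold values below are illustrative assumptions, not the study's calibrated ones:

```python
# Hedged sketch of preset spectral thresholding: keep the points whose RGB
# values fall inside a per-channel window for the target class.
import numpy as np

def spectral_mask(rgb, lo, hi):
    """rgb: (N, 3) colours of the point cloud; lo/hi: per-channel bounds."""
    rgb = np.asarray(rgb)
    return np.all((rgb >= lo) & (rgb <= hi), axis=1)

# toy cloud: brownish "soil" points and green "vegetation" points
cloud_rgb = np.array([[120, 90, 60], [130, 100, 70], [40, 160, 50], [35, 170, 60]])
soil = spectral_mask(cloud_rgb, lo=[100, 70, 40], hi=[160, 120, 90])
print(soil)  # [ True  True False False]
```

The same mask, applied per class, partitions the coloured cloud into terrain and non-terrain candidates before any geometric filtering.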
Method and apparatus for filtering visual documents
NASA Technical Reports Server (NTRS)
Rorvig, Mark E. (Inventor); Shelton, Robert O. (Inventor)
1993-01-01
A method and apparatus for producing an abstract or condensed version of a visual document is presented. The frames comprising the visual document are first sampled to reduce the number of frames required for processing. The frames are then subjected to a structural decomposition process that reduces all the information in each frame to a set of values. These values are in turn normalized and further combined to produce a single information content value per frame. The information content values are then compared to a selected distribution cutoff point, which selects the values at the tails of a normal distribution and thus filters key frames from their surrounding frames. The value for each frame is then compared with the value from the previous frame, and the frame is stored only if the values differ significantly. The method filters or compresses a visual document with a reduction in digital storage at ratios of up to 700:1 or more, depending on the content of the visual document being filtered.
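The selection logic can be sketched roughly as follows; the per-frame "information content" values and both cutoffs are invented for illustration, not taken from the patent:

```python
# Hedged sketch: keep frames whose information value lies in the tails of the
# distribution, then drop a kept frame if it is too similar to the previous one.
from statistics import mean, stdev

def key_frames(values, z_cut=1.0, min_delta=0.05):
    m, s = mean(values), stdev(values)
    kept, prev = [], None
    for i, v in enumerate(values):
        if abs(v - m) < z_cut * s:       # not in a tail -> not a key frame
            continue
        if prev is not None and abs(v - prev) < min_delta:
            continue                      # too close to the last stored frame
        kept.append(i)
        prev = v
    return kept

vals = [0.50, 0.51, 0.49, 0.95, 0.96, 0.50, 0.05, 0.52]
print(key_frames(vals))  # indices of the retained key frames
```

Frames 3 and 4 both lie in the upper tail, but only the first survives the similarity test against its predecessor, illustrating the two-stage filtering.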
Design of High Quality Chemical XOR Gates with Noise Reduction.
Wood, Mackenna L; Domanskyi, Sergii; Privman, Vladimir
2017-07-05
We describe a chemical XOR gate design that realizes a gate-response function with filtering properties. The gate-response function is flat (has small gradients) at and in the vicinity of all four binary-input logic points, resulting in analog noise suppression. The gate functioning involves a cross-reaction of the inputs, represented by pairs of chemicals, that produces a practically zero output when both are present in nearly equal amounts. This cross-reaction processing step is also designed to provide filtering at low output intensities by canceling out the inputs when one of them has low intensity compared with the other. The remaining inputs, which were not reacted away, are processed to produce the output XOR signal by chemical steps that provide filtering at large output signal intensities. We analyze the tradeoff entailed by filtering, which involves loss of signal intensity, and also discuss practical aspects of realizing such XOR gates. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Point process analysis of noise in early invertebrate vision
Vinnicombe, Glenn
2017-01-01
Noise is a prevalent and sometimes even dominant aspect of many biological processes. While many natural systems have adapted to attenuate or even usefully integrate noise, the variability it introduces often still delimits the achievable precision across biological functions. This is particularly so for visual phototransduction, the process responsible for converting photons of light into usable electrical signals (quantum bumps). Here, randomness of both the photon inputs (regarded as extrinsic noise) and the conversion process (intrinsic noise) are seen as two distinct, independent and significant limitations on visual reliability. Past research has attempted to quantify the relative effects of these noise sources by using approximate methods that do not fully account for the discrete, point process and time ordered nature of the problem. As a result the conclusions drawn from these different approaches have led to inconsistent expositions of phototransduction noise performance. This paper provides a fresh and complete analysis of the relative impact of intrinsic and extrinsic noise in invertebrate phototransduction using minimum mean squared error reconstruction techniques based on Bayesian point process (Snyder) filters. An integrate-fire based algorithm is developed to reliably estimate photon times from quantum bumps and Snyder filters are then used to causally estimate random light intensities both at the front and back end of the phototransduction cascade. Comparison of these estimates reveals that the dominant noise source transitions from extrinsic to intrinsic as light intensity increases. By extending the filtering techniques to account for delays, it is further found that among the intrinsic noise components, which include bump latency (mean delay and jitter) and shape (amplitude and width) variance, it is the mean delay that is critical to noise performance. 
As the timeliness of visual information is important for real-time action, this delay could potentially limit the speed at which invertebrates can respond to stimuli. Consequently, if one wants to increase visual fidelity, reducing the photoconversion lag is much more important than improving the regularity of the electrical signal. PMID:29077703
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruffio, Jean-Baptiste; Macintosh, Bruce; Nielsen, Eric L.
We present a new matched-filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar point-spread function (PSF) is first subtracted using a Karhunen-Loève image processing (KLIP) algorithm with angular and spectral differential imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched-filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectral template and KLIP reduction aggressiveness to maximize the signal-to-noise ratio (S/N) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanet with minimal S/N loss. We also developed a complete pipeline for the automated detection of point-source candidates, the calculation of receiver operating characteristics (ROC), contrast curves based on false positives, and completeness contours. We process in a uniform manner more than 330 data sets from the Gemini Planet Imager Exoplanet Survey and assess GPI's typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at survey scale, accounting for both planet completeness and false-positive rate. We show that the new forward-model matched filter allows the detection of 50% fainter objects than a conventional cross-correlation technique with a Gaussian PSF template for the same false-positive rate.
Fast Image Restoration for Spatially Varying Defocus Blur of Imaging Sensor
Cheong, Hejin; Chae, Eunjung; Lee, Eunsung; Jo, Gwanghyun; Paik, Joonki
2015-01-01
This paper presents a fast adaptive image restoration method for removing spatially varying out-of-focus blur of a general imaging sensor. After estimating the parameters of the space-variant point-spread function (PSF) using the derivative in each uniformly blurred region, the proposed method performs spatially adaptive image restoration by selecting the optimal restoration filter according to the estimated blur parameters. Each restoration filter is implemented as a combination of multiple FIR filters, which guarantees fast image restoration without the need for iterative or recursive processing. Experimental results show that the proposed method outperforms existing space-invariant restoration methods in terms of both objective and subjective performance measures. The proposed algorithm can be employed in a wide range of image restoration applications, such as mobile imaging devices, robot vision, and satellite image processing. PMID:25569760
NASA Astrophysics Data System (ADS)
Liao, Yuxi; She, Xiwei; Wang, Yiwen; Zhang, Shaomin; Zhang, Qiaosheng; Zheng, Xiaoxiang; Principe, Jose C.
2015-12-01
Objective. The representation of movement in the motor cortex (M1) has been widely studied in brain-machine interfaces (BMIs). The electromyogram (EMG) has greater bandwidth than conventional kinematic variables (such as position and velocity), and is functionally related to the discharge of cortical neurons. As the stochastic information of EMG is derived from the explicit spike time structure, point process (PP) methods are a good solution for decoding EMG directly from neural spike trains. Previous studies usually assume linear or exponential tuning curves between neural firing and EMG, which may not be true. Approach. In our analysis, we estimate the tuning curves in a data-driven way and find both traditional functional-excitatory and functional-inhibitory neurons, which are widely distributed across the rat motor cortex. To accurately decode EMG envelopes from M1 neural spike trains, the Monte Carlo point process (MCPP) method is implemented based on these nonlinear tuning properties. Main results. Better reconstruction of EMG signals is shown on baseline and extreme high peaks, as our method better preserves the nonlinearity of the neural tuning during decoding. The MCPP improves the prediction accuracy (normalized mean squared error) by 57% and 66% on average compared with the adaptive point process filter using linear and exponential tuning curves, respectively, for all 112 data segments across six rats. Compared to a Wiener filter using spike rates with an optimal window size of 50 ms, MCPP decoding of EMG from a point process improves the normalized mean squared error (NMSE) by 59% on average. Significance. These results suggest that neural tuning changes constantly during task execution and that, therefore, spike timing methodologies and estimation of appropriate tuning curves are needed for better EMG decoding in motor BMIs.
Contrast Invariant Interest Point Detection by Zero-Norm LoG Filter.
Zhenwei Miao; Xudong Jiang; Kim-Hui Yap
2016-01-01
The Laplacian of Gaussian (LoG) filter is widely used in interest point detection. However, low-contrast image structures, though stable and significant, are often submerged by high-contrast ones in the response image of the LoG filter, and hence are difficult to detect. To solve this problem, we derive a generalized LoG filter and propose a zero-norm LoG filter. The response of the zero-norm LoG filter is proportional to the weighted number of bright/dark pixels in a local region, which makes the filter invariant to image contrast. Based on the zero-norm LoG filter, we develop an interest point detector to extract local structures from images. Compared with contrast-dependent detectors, such as the popular scale invariant feature transform detector, the proposed detector is robust to illumination changes and abrupt variations in images. Experiments on benchmark databases demonstrate the superior performance of the proposed zero-norm LoG detector in terms of the repeatability and matching score of the detected points as well as the image recognition rate under different conditions.
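One plausible reading of the contrast-invariance idea can be sketched as follows. This is an assumed interpretation, not the authors' exact formulation: the raw intensity differences that drive a LoG-type response are replaced by their signs, so the response becomes a Gaussian-weighted count of brighter/darker neighbours and no longer scales with contrast.

```python
# Hedged sketch of a sign-based (zero-norm-like) response: a dim blob and a
# bright blob of the same shape produce the same response at their centres.
import numpy as np

def sign_log_response(img, sigma=1.0, radius=3):
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))  # Gaussian neighbourhood weights
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(radius, H - radius):
        for x in range(radius, W - radius):
            patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            # weighted count of brighter (+1) vs darker (-1) pixels, not raw contrast
            out[y, x] = np.sum(w * np.sign(patch - img[y, x]))
    return out

img = np.zeros((15, 15))
img[3:6, 3:6] = 10.0     # low-contrast blob
img[9:12, 9:12] = 200.0  # high-contrast blob
r = sign_log_response(img)
print(round(r[4, 4], 3), round(r[10, 10], 3))  # identical centre responses
```

A conventional LoG response at the two centres would differ by a factor of twenty; the sign-quantized response does not, which is the behaviour the detector exploits.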
NASA Astrophysics Data System (ADS)
Ma, Hongchao; Cai, Zhan; Zhang, Liang
2018-01-01
This paper discusses airborne light detection and ranging (LiDAR) point cloud filtering (a binary classification problem) from the machine learning point of view. We compared three supervised classifiers for point cloud filtering, namely Adaptive Boosting, support vector machine, and random forest (RF). Nineteen features were generated from the raw LiDAR point cloud based on height and other geometric information within a given neighborhood. The test datasets issued by the International Society for Photogrammetry and Remote Sensing (ISPRS) were used to evaluate the performance of the three filtering algorithms; RF showed the best results, with an average total error of 5.50%. The paper also tentatively explores the application of transfer learning theory to point cloud filtering, which to the authors' knowledge has not previously been introduced into the LiDAR field. We filtered three datasets from real projects carried out in China with RF models constructed by learning from the 15 ISPRS datasets and then transferred with little to no change of the parameters. Reliable results were achieved, especially in the rural area (overall accuracy of 95.64%), indicating the feasibility of model transfer in the context of point cloud filtering for both easy automation and acceptable accuracy.
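The abstract does not list the nineteen features, but height-based neighbourhood features of the following kind are typical in LiDAR filtering; the three computed here are assumptions for illustration:

```python
# Hedged sketch of per-point neighbourhood features that could feed a
# supervised filter: height above local minimum/mean and local roughness.
import numpy as np

def height_features(xyz, radius=2.0):
    xyz = np.asarray(xyz, float)
    feats = []
    for p in xyz:
        d = np.linalg.norm(xyz[:, :2] - p[:2], axis=1)   # horizontal distance
        z = xyz[d <= radius, 2]                           # neighbour heights
        feats.append([p[2] - z.min(),   # height above local minimum
                      p[2] - z.mean(),  # height above local mean
                      z.std()])         # local roughness
    return np.array(feats)

pts = np.array([[0, 0, 0.0], [1, 0, 0.1], [0, 1, 0.05], [1, 1, 5.0]])  # last point on a roof
f = height_features(pts)
print(f[3, 0])  # the roof point sits 5.0 m above the local minimum
```

A classifier such as RF would then be trained on arrays like `f` with per-point ground/non-ground labels, which is the binary classification framing the paper adopts.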
NASA Astrophysics Data System (ADS)
Fangxiong, Chen; Min, Lin; Heping, Ma; Hailong, Jia; Yin, Shi; Forster, Dai
2009-08-01
An asymmetric MOSFET-C band-pass filter (BPF) with on-chip charge pump auto-tuning is presented. It is implemented in UMC (United Microelectronics Corporation) 0.18 μm CMOS process technology. The filter system with auto-tuning uses a master-slave technique for continuous tuning, in which the charge pump outputs 2.663 V, much higher than the power supply voltage, to improve the linearity of the filter. The main filter, with third-order low-pass and second-order high-pass properties, is an asymmetric band-pass filter with a bandwidth of 2.730-5.340 MHz. The in-band third-order input intercept point (IIP3) is 16.621 dBm with 50 Ω as the source impedance. The input-referred noise is about 47.455 μVrms. The main filter dissipates 3.528 mW while the auto-tuning system dissipates 2.412 mW from a 1.8 V power supply. The filter with the auto-tuning system occupies 0.592 mm2 and can be utilized in GPS (global positioning system) and Bluetooth systems.
Franklin, Robert G; Adams, Reginald B; Steiner, Troy G; Zebrowitz, Leslie A
2018-05-14
In three studies, we investigated whether the angularity and roundness present in faces contribute to the perception of angry and joyful expressions, respectively. In Study 1 we found that angry expressions naturally contain more inward-pointing lines, whereas joyful expressions contain more outward-pointing lines. Then, using image-processing techniques in Studies 2 and 3, we filtered images to contain only inward-pointing or outward-pointing lines as a way to approximate angularity and roundness. We found that filtering images to be more angular increased how threatening and angry a neutral face was rated, increased how intense angry expressions were rated, and enhanced the recognition of anger. Conversely, filtering images to be rounder increased how warm and joyful a neutral face was rated, increased the rated intensity of joyful expressions, and enhanced recognition of joy. Together these findings show that angularity and roundness play a direct role in the recognition of angry and joyful expressions. Given evidence that angularity and roundness may play a biological role in indicating threat and safety in the environment, this suggests that they represent primitive facial cues used to signal threat-anger and warmth-joy pairings. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Kalman Filters for UXO Detection: Real-Time Feedback and Small Target Detection
2012-05-01
last two decades. Accomplishments reported from both hardware and software points of view have moved the research focus from simple laboratory tests...quality data, which in turn require good positioning of the sensors atop the UXOs. The data collection protocol is currently based on a two-stage process...Note that this result is merely an illustration of the convergence of the Kalman filter. In practice, the linear part can be directly inverted for if
An adaptive technique for estimating the atmospheric density profile during the AE mission
NASA Technical Reports Server (NTRS)
Argentiero, P.
1973-01-01
A technique is presented for processing accelerometer data obtained during the AE missions in order to estimate the atmospheric density profile. A minimum variance, adaptive filter is utilized. The trajectory of the probe and the probe parameters are placed in a consider mode, where their estimates are not improved but their associated uncertainties are permitted to influence filter behavior. Simulations indicate that the technique is effective in estimating a density profile to within a few percentage points.
NASA Technical Reports Server (NTRS)
Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil
2011-01-01
Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of the knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with a white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when the trackers are used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors: (1) an extended Kalman filter (EKF) augmented with Markov states, and (2) an unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.
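Idea (1) can be illustrated with a minimal scalar example; the models and numbers below are assumptions for illustration, not the mission's actual filter. The state is augmented with a first-order Gauss-Markov state m that absorbs the slowly varying star-tracker systematic error, so the measurement is y = theta + m + white noise.

```python
# Hedged sketch: linear KF with a Markov (colored-noise) augmentation state.
import numpy as np

rng = np.random.default_rng(1)
phi, sw, sv, theta = 0.9, 0.05, 0.01, 1.0     # Markov decay, process/meas. noise, truth

A = np.array([[1.0, 0.0], [0.0, phi]])        # [attitude, Markov error] dynamics
Q = np.diag([1e-10, sw**2])
H = np.array([[1.0, 1.0]])                    # the ST sees attitude + systematic error
R = np.array([[sv**2]])

x = np.zeros(2)
P = np.eye(2)
m = 0.0
for _ in range(5000):
    m = phi * m + rng.normal(0, sw)           # simulate the systematic error
    y = theta + m + rng.normal(0, sv)         # simulated ST measurement
    x = A @ x                                 # predict
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + (K @ (y - H @ x)).ravel()         # update
    P = (np.eye(2) - K @ H) @ P

print(abs(x[0] - theta))  # attitude error, far below the ~0.1 systematic error level
```

Because the filter models the systematic error's correlation structure instead of treating it as white noise, the attitude estimate converges despite the biased measurements.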
Unified dead-time compensation structure for SISO processes with multiple dead times.
Normey-Rico, Julio E; Flesch, Rodolfo C C; Santos, Tito L M
2014-11-01
This paper proposes a dead-time compensation structure for processes with multiple dead times. The controller is based on the filtered Smith predictor (FSP) dead-time compensator structure and is able to control stable, integrating, and unstable processes with multiple input/output dead times. An equivalent model of the process is first computed in order to define the predictor structure. Using this equivalent model, the primary controller and the predictor filter are tuned to obtain an internally stable closed-loop system that also meets closed-loop specifications for set-point tracking, disturbance rejection, and robustness. Simulation case studies illustrate the good properties of the proposed approach. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Infrared image background modeling based on improved Susan filtering
NASA Astrophysics Data System (ADS)
Yuehua, Xia
2018-02-01
When the SUSAN filter is used to model the background of an infrared image, its Gaussian kernel lacks directional filtering ability; edge information is therefore poorly preserved, leaving many edge singular points in the difference image and increasing the difficulty of target detection. To solve these problems, this paper introduces anisotropy into the algorithm and replaces the Gaussian kernel in the SUSAN operator with an anisotropic Gaussian filter. First, an anisotropic gradient operator computes the horizontal and vertical gradients at each image point to determine the direction of the filter's long axis. Second, the local area of the point and the neighborhood smoothness are used to calculate the variances along the long and short axes. Next, the first-order norm of the difference between the local gray levels and their mean determines the threshold of the SUSAN filter. Finally, the constructed SUSAN filter is convolved with the image to obtain the background image, and the difference between the background image and the original image is computed. Background modeling quality is evaluated by mean squared error (MSE), structural similarity (SSIM) and local signal-to-noise ratio gain (GSNR). Compared with the traditional filtering algorithm, the improved SUSAN filter achieves better background modeling: edge information in the image is preserved effectively, dim small targets are clearly enhanced in the difference image, and the false alarm rate is greatly reduced.
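The anisotropic kernel that replaces the isotropic Gaussian can be built as follows. The construction itself is standard; the paper's specific rules for choosing the angle and variances from local gradients and smoothness are only summarised in the text above, so fixed values are used here for illustration:

```python
# Hedged sketch: an oriented (anisotropic) Gaussian kernel with separate
# variances along its long and short axes.
import numpy as np

def aniso_gauss_kernel(theta, sigma_long, sigma_short, radius=5):
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # rotate coordinates so u lies along the filter's long axis
    u = xs * np.cos(theta) + ys * np.sin(theta)
    v = -xs * np.sin(theta) + ys * np.cos(theta)
    k = np.exp(-(u**2 / (2 * sigma_long**2) + v**2 / (2 * sigma_short**2)))
    return k / k.sum()  # normalise so smoothing preserves mean intensity

k = aniso_gauss_kernel(theta=0.0, sigma_long=3.0, sigma_short=1.0)
print(k.sum().round(6), k[5, 8] > k[8, 5])  # normalised; elongated along x for theta=0
```

Convolving the image with a kernel whose long axis follows the local edge direction smooths along edges rather than across them, which is why the edge information survives the background estimate.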
An Indoor Slam Method Based on Kinect and Multi-Feature Extended Information Filter
NASA Astrophysics Data System (ADS)
Chang, M.; Kang, Z.
2017-09-01
Based on the ORB-SLAM framework, this paper computes the transformation parameters between adjacent Kinect image frames from ORB keypoints, from which the a priori information matrix and information vector are calculated, realizing the motion update of a multi-feature extended information filter. From the point cloud formed by the depth image, the ICP algorithm extracts point features of the scene and builds an observation model while calculating the a posteriori information matrix and information vector, weakening the influence of error accumulation in the positioning process. The ORB-SLAM framework is thus applied to achieve real-time autonomous positioning in an unknown indoor environment. Finally, Lidar data of the scene were collected in order to evaluate the positioning accuracy of the proposed method.
Multiple point least squares equalization in a room
NASA Technical Reports Server (NTRS)
Elliott, S. J.; Nelson, P. A.
1988-01-01
Equalization filters designed to minimize the mean square error between a delayed version of the original electrical signal and the equalized response at a point in a room have previously been investigated. In general, such a strategy degrades the response at positions in a room away from the equalization point. A method is presented for designing an equalization filter by adjusting the filter coefficients to minimize the sum of the squares of the errors between the equalized responses at multiple points in the room and delayed versions of the original, electrical signal. Such an equalization filter can give a more uniform frequency response over a greater volume of the enclosure than can the single point equalizer above. Computer simulation results are presented of equalizing the frequency responses from a loudspeaker to various typical ear positions, in a room with dimensions and acoustic damping typical of a car interior, using the two approaches outlined above. Adaptive filter algorithms, which can automatically adjust the coefficients of a digital equalization filter to achieve this minimization, will also be discussed.
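The multiple-point design reduces to one joint least-squares problem: stack the convolution matrices of the responses at all listening points and solve for a single filter. A minimal sketch with toy room responses (real responses and filter lengths would be far longer):

```python
# Hedged sketch of multi-point least-squares equalisation: one FIR filter w
# minimising the summed squared error against a delayed impulse at every point.
import numpy as np

def conv_matrix(h, n_taps):
    """Matrix C such that C @ w == np.convolve(h, w) for len(w) == n_taps."""
    C = np.zeros((len(h) + n_taps - 1, n_taps))
    for j in range(n_taps):
        C[j:j + len(h), j] = h
    return C

def multipoint_equalizer(responses, n_taps, delay):
    blocks = [conv_matrix(h, n_taps) for h in responses]
    C = np.vstack(blocks)
    d = np.zeros(C.shape[0])
    n_out = blocks[0].shape[0]
    for m in range(len(responses)):
        d[m * n_out + delay] = 1.0       # target: delayed unit impulse at every point
    w, *_ = np.linalg.lstsq(C, d, rcond=None)
    return w

h1 = np.array([1.0, 0.5, 0.2])   # toy response at ear position 1
h2 = np.array([0.9, 0.4, 0.3])   # toy response at ear position 2
w = multipoint_equalizer([h1, h2], n_taps=16, delay=4)
e1 = np.convolve(h1, w)
e2 = np.convolve(h2, w)
print(round(e1[4], 2), round(e2[4], 2))  # both equalised responses peak near the delay
```

With a single-point design the fit at the chosen point would be nearly exact but could degrade elsewhere; the stacked formulation trades a small error at each point for uniformity across all of them, which is the paper's argument.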
Porous filtering media comparison through wet and dry sampling of fixed bed gasification products
NASA Astrophysics Data System (ADS)
Allesina, G.; Pedrazzi, S.; Montermini, L.; Giorgini, L.; Bortolani, G.; Tartarini, P.
2014-11-01
The syngas produced by fixed bed gasifiers contains large quantities of particulate and tars. This issue, together with its high temperature, prevents its direct exploitation without a proper cleaning and cooling process. In fact, when syngas produced by gasification is used in an internal combustion (IC) engine, the higher the content of tars and particulate, the higher the risk of damaging the engine; if these compounds are not properly removed, the engine may fail to run. One way to avoid engine failures is to intensify the maintenance schedule, but the resulting stops reduce the system's profitability. A cleaner syngas brings not only higher generator performance but also fewer pollutants into the atmosphere. When it is not possible to act on the gasification reactions themselves, the filter plays the most important role in safeguarding the engine. This work is aimed at developing and comparing different porous filters for biomass gasifier power plants. A drum filter was developed and tested with different filtering media available on the market. As a starting point, the filter was implemented in a 10 kW Power Pallet gasifier produced by the California-based company "ALL Power Labs". The original filter medium was replaced with different porous biomasses, such as woodchips and corn cobs; finally, a synthetic zeolite medium was tested and compared with the biological media used previously. The Tar Sampling Protocol (TSP) and a modified "dry" method using silica gel were applied to evaluate the tar, particulate and water content of the syngas after filtration. The advantages and disadvantages of each filtering medium are reported and discussed.
Modeling human faces with multi-image photogrammetry
NASA Astrophysics Data System (ADS)
D'Apuzzo, Nicola
2002-03-01
Modeling and measurement of the human face have been growing in importance for various purposes. Laser scanning, coded light range digitizers, image-based approaches and digital stereo photogrammetry are the methods currently employed in medical applications, computer animation, video surveillance, teleconferencing and virtual reality to produce three-dimensional computer models of the human face. The requirements differ with the application; ours are primarily high measurement accuracy and automation of the process. The method presented in this paper is based on multi-image photogrammetry; the equipment, the method and the results achieved with this technique are depicted here. The process is composed of five steps: acquisition of multi-images, calibration of the system, establishment of corresponding points in the images, computation of their 3-D coordinates, and generation of a surface model. The images captured by five CCD cameras arranged in front of the subject are digitized by a frame grabber. The complete system is calibrated using a reference object with coded target points, which can be measured fully automatically. To facilitate the establishment of correspondences in the images, texture in the form of random patterns can be projected onto the face from two directions. The multi-image matching process, based on a geometrically constrained least squares matching algorithm, produces a dense set of corresponding points in the five images. Neighborhood filters are then applied to the matching results to remove errors. After filtering the data, the three-dimensional coordinates of the matched points are computed by forward intersection using the results of the calibration process; the achieved mean accuracy is about 0.2 mm in the sagittal direction and about 0.1 mm in the lateral direction. The last step of data processing is the generation of a surface model from the point cloud and the application of smoothing filters.
Moreover, a color texture image can be draped over the model to achieve a photorealistic visualization. The advantage of the presented method over laser scanning and coded light range digitizers is the acquisition of the source data in a fraction of a second, allowing the measurement of human faces with higher accuracy and the possibility to measure dynamic events like the speech of a person.
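The forward-intersection step can be sketched with synthetic cameras; the calibrated five-camera rig is replaced here by two toy projection matrices. Each camera contributes two linear equations in the unknown 3-D point, solved jointly by least squares:

```python
# Hedged sketch of forward intersection (DLT-style triangulation) from
# matched image points and known projection matrices.
import numpy as np

def forward_intersection(Ps, uvs):
    """Ps: list of 3x4 projection matrices; uvs: matching (u, v) image points."""
    rows = []
    for P, (u, v) in zip(Ps, uvs):
        rows.append(u * P[2] - P[0])   # u * (p3 . X) = p1 . X
        rows.append(v * P[2] - P[1])   # v * (p3 . X) = p2 . X
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)        # least-squares null vector of A
    X = Vt[-1]
    return X[:3] / X[3]                # dehomogenise

# two synthetic cameras observing one point
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # shifted 1 unit in x

def proj(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

Xtrue = np.array([0.3, -0.2, 4.0])
X = forward_intersection([P1, P2], [proj(P1, Xtrue), proj(P2, Xtrue)])
print(np.round(X, 6))  # recovers [0.3, -0.2, 4.0]
```

With the five-camera rig of the paper, the same system simply gains six more rows per point, which is what drives the sub-millimetre accuracy quoted above.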
Comparison of weighting techniques for acoustic full waveform inversion
NASA Astrophysics Data System (ADS)
Jeong, Gangwon; Hwang, Jongha; Min, Dong-Joo
2017-12-01
To reconstruct long-wavelength structures in full waveform inversion (FWI), wavefield-damping and weighting techniques have been used to synthesize and emphasize low-frequency data components in frequency-domain FWI. However, these methods have some weak points. The application of the wavefield-damping method to filtered data fails to synthesize reliable low-frequency data, and the optimization formula obtained by introducing the weighting technique is not theoretically complete, because it is not directly derived from the objective function. In this study, we address these weak points and show how to overcome them. We demonstrate that source estimation in FWI using damped wavefields fails when the data used in the FWI process do not satisfy the causality condition. This phenomenon occurs when a non-causal filter is applied to the data. We overcome this limitation by designing a causal filter. We also modify the conventional weighting technique so that its optimization formula is directly derived from the objective function, retaining its original characteristic of emphasizing the low-frequency data components. Numerical results show that the newly designed causal filter enables the recovery of long-wavelength structures using low-frequency data components synthesized by damping wavefields in frequency-domain FWI, and that the proposed weighting technique enhances the inversion results.
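As a rough illustration of the damping idea, a minimal numpy sketch: multiplying a causal trace by a decaying exponential (Laplace damping) before transforming to the frequency domain redistributes spectral energy toward low frequencies. The toy trace, delay, sample interval and damping constant are illustrative assumptions, not values from the paper.

```python
import numpy as np

dt = 0.004                                  # sample interval (s), illustrative
t = np.arange(512) * dt
f0 = 25.0                                   # dominant frequency of a toy Ricker wavelet
tau = t - 0.05                              # small causal delay
trace = (1 - 2 * (np.pi * f0 * tau) ** 2) * np.exp(-((np.pi * f0 * tau) ** 2))

sigma = 30.0                                # damping constant (1/s), illustrative
damped = trace * np.exp(-sigma * t)         # Laplace-damped wavefield

freqs = np.fft.rfftfreq(t.size, dt)
low = freqs < 5.0
def low_frac(s):
    spec = np.abs(np.fft.rfft(s))
    return spec[low].sum() / spec.sum()

# the damped trace's low-frequency share is typically larger
print(low_frac(trace), low_frac(damped))
```

Causality matters here: for t >= 0 the damping factor only attenuates, which is why a non-causal filter (leaking energy to t < 0) breaks the scheme.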
Image Motion Detection And Estimation: The Modified Spatio-Temporal Gradient Scheme
NASA Astrophysics Data System (ADS)
Hsin, Cheng-Ho; Inigo, Rafael M.
1990-03-01
The detection and estimation of motion generally involve computing a velocity field of time-varying images. A new, modified spatio-temporal gradient scheme for determining motion is proposed, derived using gradient methods and properties of biological vision. A set of general constraints is proposed to derive motion constraint equations. The constraints are that the second directional derivatives of image intensity at an edge point in the smoothed image will be constant at times t and t+L. This scheme basically has two stages: spatio-temporal filtering, and velocity estimation. Initially, image sequences are processed by a set of oriented spatio-temporal filters which are designed using a Gaussian derivative model. The velocity is then estimated for these filtered image sequences based on the gradient approach. From a computational standpoint, this scheme offers at least three advantages over current methods. The greatest advantage of the modified spatio-temporal gradient scheme over traditional ones is that an infinite number of motion constraint equations are derived instead of only one. Therefore, it solves the aperture problem without requiring any additional assumptions and is simply a local process. The second advantage is that, because of the spatio-temporal filtering, the direct computation of image gradients (discrete derivatives) is avoided, so the error in gradient measurement is reduced significantly. The third advantage is that during motion detection and estimation, image features (edges) are produced concurrently with motion information. The reliable range of detected velocity is determined by the parameters of the oriented spatio-temporal filters.
Knowing the velocity sensitivity of a single motion detection channel, a multiple-channel mechanism for estimating image velocity, seldom addressed by other motion schemes in machine vision, can be constructed by appropriately choosing and combining different sets of parameters. By applying this mechanism, a wide range of velocities can be detected. The scheme has been tested on both synthetic and real images, and the simulation results are very satisfactory.
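A gradient-based velocity estimate of the kind underlying such schemes can be sketched as a pooled least-squares solve of the motion constraint equation; this is a generic local gradient method on a synthetic translating blob, not the paper's oriented filter bank.

```python
import numpy as np

def estimate_velocity(f0, f1):
    """Least-squares velocity of a translating patch from two frames.

    Pools the motion constraint fx*u + fy*v + ft = 0 over the patch,
    which resolves the aperture problem when gradients vary in direction.
    """
    fx = np.gradient(f0, axis=1)          # spatial derivative in x
    fy = np.gradient(f0, axis=0)          # spatial derivative in y
    ft = f1 - f0                          # temporal derivative
    A = np.stack([fx.ravel(), fy.ravel()], axis=1)
    b = -ft.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# synthetic frames: a smooth blob translating by (1, 0) pixels/frame
y, x = np.mgrid[0:64, 0:64]
blob = lambda cx, cy: np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 50.0)
u, v = estimate_velocity(blob(30, 32), blob(31, 32))
print(round(u, 2), round(v, 2))   # close to (1, 0)
```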
Initial flight results of the TRMM Kalman filter
NASA Technical Reports Server (NTRS)
Andrews, Stephen F.; Morgenstern, Wendy M.
1998-01-01
The Tropical Rainfall Measuring Mission (TRMM) spacecraft is a nadir pointing spacecraft that nominally controls attitude based on the Earth Sensor Assembly (ESA) output. After a potential single point failure in the ESA was identified, the contingency attitude determination method chosen to backup the ESA-based system was a sixth-order extended Kalman filter that uses magnetometer and digital sun sensor measurements. A brief description of the TRMM Kalman filter will be given, including some implementation issues and algorithm heritage. Operational aspects of the Kalman filter and some failure detection and correction will be described. The Kalman filter was tested in a sun pointing attitude and in a nadir pointing attitude during the in-orbit checkout period, and results from those tests will be presented. This paper will describe some lessons learned from the experience of the TRMM team.
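A generic Kalman measurement update of the sort underlying such attitude filters can be sketched as follows; the two-state toy model and all numbers are illustrative assumptions, unrelated to the actual TRMM flight software.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """One Kalman measurement update (generic; illustrative of the
    magnetometer / sun-sensor updates described, not TRMM flight code)."""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)           # state correction
    P = (np.eye(len(x)) - K @ H) @ P  # covariance update
    return x, P

# toy 2-state example: [angle, rate]; the sensor observes the angle only
x = np.zeros(2)
P = np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.01]])                # measurement noise variance
for z in [0.52, 0.49, 0.51]:          # noisy angle measurements (rad)
    x, P = kf_update(x, P, np.array([z]), H, R)
print(x[0])   # converges toward ~0.5 rad as measurements accumulate
```

An extended Kalman filter such as TRMM's adds a nonlinear propagation step and linearizes H about the current estimate; the update structure is the same.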
NASA Astrophysics Data System (ADS)
Xiong, L.; Wang, G.; Wessel, P.
2017-12-01
Terrestrial laser scanning (TLS), also known as ground-based Light Detection and Ranging (LiDAR), has been frequently applied to build bare-earth digital elevation models (DEMs) for high-accuracy geomorphology studies. The point clouds acquired from TLS often achieve a spatial resolution at fingerprint (e.g., 3 cm × 3 cm) to handprint (e.g., 10 cm × 10 cm) level. A downsampling process has to be applied to decimate the massive point clouds and obtain portable DEMs. It is well known that downsampling can result in aliasing, which causes different signal components to become indistinguishable when the signal is reconstructed from datasets with a lower sampling rate. Conventional DEMs are mainly the results of upsampling of sparse elevation measurements from land surveying, satellite remote sensing, and aerial photography. As a consequence, the effects of aliasing have not been fully investigated in the open literature on DEMs. This study aims to investigate the spatial aliasing problem and implement an anti-aliasing procedure for regridding dense TLS data. The TLS data collected in the beach and dune area near Freeport, Texas in the summer of 2015 are used for this study. The core idea of the anti-aliasing procedure is to apply a low-pass spatial filter before downsampling. This article describes the successful use of a fourth-order Butterworth low-pass spatial filter employed in the Generic Mapping Tools (GMT) software package as an anti-aliasing filter. The filter can be applied as an isotropic filter with a single cutoff wavelength or as an anisotropic filter with different cutoff wavelengths in the X and Y directions. The cutoff wavelength for the isotropic filter is recommended to be three times the grid size of the target DEM.
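The anti-aliasing idea can be sketched with a wavenumber-domain fourth-order Butterworth response in numpy; this is a stand-in for the GMT implementation, and the grid values, spacing and cutoff are illustrative assumptions.

```python
import numpy as np

def butterworth_lowpass(grid, dx, cutoff_wavelength, order=4):
    """Fourth-order Butterworth low-pass filter applied in the
    wavenumber domain (a numpy stand-in for GMT's spatial filter)."""
    ny, nx = grid.shape
    kx = np.fft.fftfreq(nx, d=dx)           # cycles per unit distance
    ky = np.fft.fftfreq(ny, d=dx)
    k = np.hypot(*np.meshgrid(kx, ky))      # radial wavenumber
    kc = 1.0 / cutoff_wavelength            # cutoff wavenumber
    response = 1.0 / np.sqrt(1.0 + (k / kc) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * response))

# dense 3 cm grid decimated to a 30 cm DEM;
# cutoff = 3 * target grid size, per the recommendation above
dense = np.random.default_rng(0).normal(size=(300, 300))
smooth = butterworth_lowpass(dense, dx=0.03, cutoff_wavelength=3 * 0.30)
dem = smooth[::10, ::10]                    # downsample after anti-aliasing
print(dem.shape)   # (30, 30)
```

An anisotropic variant would evaluate the response with separate `kc` values for the kx and ky axes.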
Angular displacement measuring device
NASA Technical Reports Server (NTRS)
Seegmiller, H. Lee B. (Inventor)
1992-01-01
A system for measuring the angular displacement of a point of interest on a structure, such as an aircraft model within a wind tunnel, includes a source of polarized light located at the point of interest. A remote detector arrangement detects the orientation of the plane of the polarized light received from the source and compares this orientation with the initial orientation to determine the amount or rate of angular displacement of the point of interest. The detector arrangement comprises a rotating polarizing filter and a dual filter and light detector unit. The latter unit comprises an inner aligned filter and photodetector assembly, which is disposed relative to the periphery of the polarizer so as to receive polarized light passing through the polarizing filter, and an outer aligned filter and photodetector assembly, which receives the polarized light directly, i.e., without passing through the polarizing filter. The purpose of the unit is to compensate for the effects of dust, fog and the like. A polarization-preserving optical fiber conducts polarized light from a remote laser source to the point of interest.
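The underlying measurement principle follows Malus's law: the intensity behind the rotating analyzer is modulated at twice the analyzer angle, and the phase of that second harmonic encodes the source's polarization plane. A short sketch with an ideal, noise-free signal and an assumed test angle:

```python
import numpy as np

# Malus's law: I = I0 * cos^2(theta_p - theta_a)
theta_p = np.deg2rad(20.0)                          # unknown plane (test value)
theta_a = np.linspace(0, 2 * np.pi, 1000, endpoint=False)  # analyzer angle
I = np.cos(theta_p - theta_a) ** 2                  # ideal detector signal

# demodulate the 2nd harmonic; its phase is -2 * theta_p
c = np.mean(I * np.exp(-2j * theta_a))
theta_est = -0.5 * np.angle(c)
print(np.rad2deg(theta_est))   # ≈ 20.0
```

In the hardware described, the second (direct) photodetector normalizes out intensity fluctuations from dust or fog, which would otherwise bias the demodulated amplitude (the phase, and hence the angle, is already insensitive to a pure intensity scale factor).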
Position Estimation Using Image Derivative
NASA Technical Reports Server (NTRS)
Mortari, Daniele; deDilectis, Francesco; Zanetti, Renato
2015-01-01
This paper describes an image processing algorithm for processing Moon and/or Earth images. The theory presented is based on the fact that Moon hard-edge points are characterized by the highest values of the image derivative. Outliers are eliminated by two sequential filters. The Moon's center and radius are then estimated by nonlinear least squares using circular sigmoid functions. The proposed image processing has been applied and validated using real and synthetic Moon images.
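The circle-estimation step can be illustrated with an algebraic least-squares (Kåsa) circle fit to edge points; this linear fit is a simple stand-in for the paper's nonlinear fit with circular sigmoid functions, and the test data are synthetic.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares (Kasa) circle fit to edge points.
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) linearly."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx2, cy2, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = cx2 / 2.0, cy2 / 2.0
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# noisy points on a circle of centre (3, -2), radius 5
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 200)
x = 3 + 5 * np.cos(t) + rng.normal(0, 0.05, t.size)
y = -2 + 5 * np.sin(t) + rng.normal(0, 0.05, t.size)
print(fit_circle(x, y))   # ≈ (3, -2, 5)
```

A nonlinear refinement (as in the paper) would start from this estimate and minimize geometric residuals instead of algebraic ones.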
NASA Astrophysics Data System (ADS)
Koch, R.; May, S.; Nüchter, A.
2017-02-01
3D laser scanners are favoured sensors for mapping in mobile service robotics in indoor and outdoor applications, since they deliver precise measurements over a wide scanning range. The resulting maps are detailed because they have a high resolution. Based on these maps, robots navigate through rough terrain and fulfil advanced manipulation and inspection tasks. In the case of specular reflective and transparent objects, e.g., mirrors, windows, and shiny metals, the laser measurements get corrupted. Depending on the type of object and the incident angle of the incoming laser beam, three results are possible: a measurement point on the object plane, a measurement behind the object plane, and a measurement of a reflected object. It is important to detect such situations to be able to handle these corrupted points. This paper describes why it is difficult to distinguish between specular reflective and transparent surfaces. It presents a 3D-Reflection-Pre-Filter approach to identify specular reflective and transparent objects in point clouds of a multi-echo laser scanner. Furthermore, it filters point clouds from the influence of such objects and extracts the object properties for further investigation. Reflective objects are identified using an Iterative-Closest-Point algorithm. Object surfaces and points behind surfaces are masked according to their location. Finally, the processed point cloud is forwarded to a mapping module, and the object surface corners and the type of the surface are broadcast. Four experiments demonstrate the usability of the 3D-Reflection-Pre-Filter: the first in an empty room containing a mirror, the second in a stairway containing a glass door, the third in an empty room containing two mirrors, and the fourth in an office room containing a mirror. This paper demonstrates that for single scans the detection of specular reflective and transparent objects in 3D is possible.
Detection is more reliable in 3D than in 2D. Nevertheless, collecting the data of multiple scans and post-filtering them once the object has been passed should be pursued; future work therefore concentrates on implementing a post-filter module. A further aim is to improve the discrimination between specular reflective and transparent objects.
An improved three-dimension reconstruction method based on guided filter and Delaunay
NASA Astrophysics Data System (ADS)
Liu, Yilin; Su, Xiu; Liang, Haitao; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong
2018-01-01
Binocular stereo vision is becoming a research hotspot in the area of image processing. Building on the traditional adaptive-weight stereo matching algorithm, we improve the cost volume by averaging the AD (absolute difference) of the RGB color channels and adding the x-derivative of the grayscale image. We then use a guided filter in the cost aggregation step and a weighted median filter for post-processing to address the edge problem. In order to obtain locations in real space, we combine the depth information with the camera calibration to project each pixel in the 2D image to a 3D coordinate matrix. We add the concept of projection to the region-growing algorithm for surface reconstruction: all points are projected onto a 2D plane along the normals of the cloud, and the results are returned to 3D space according to the connection relationships among the points in the 2D plane. For the triangulation in the 2D plane, we use the Delaunay algorithm because it yields optimal mesh quality. We configure OpenCV and PCL on Visual Studio for testing, and the experimental results show that the proposed algorithm has higher computational accuracy of disparity and can reproduce the details of the real mesh model.
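The improved cost term can be sketched as a truncated blend of the mean absolute colour difference and the gradient difference; the weights and truncation values below are illustrative assumptions, and the guided-filter aggregation step is omitted.

```python
import numpy as np

def matching_cost(left, right, d, alpha=0.5):
    """Per-pixel cost at disparity d: mean AD over RGB blended with the
    absolute difference of grayscale x-derivatives (truncated)."""
    shifted = np.roll(right, d, axis=1)
    ad = np.abs(left - shifted).mean(axis=2)          # mean AD over RGB
    gray_l = left.mean(axis=2)
    gray_r = shifted.mean(axis=2)
    grad = np.abs(np.gradient(gray_l, axis=1) - np.gradient(gray_r, axis=1))
    return alpha * np.minimum(ad, 0.1) + (1 - alpha) * np.minimum(grad, 0.05)

left = np.random.default_rng(2).random((40, 60, 3))
right = np.roll(left, -4, axis=1)        # synthetic pair, true disparity = 4
costs = np.stack([matching_cost(left, right, d) for d in range(8)])
disparity = costs.argmin(axis=0)         # winner-take-all (before any
                                         # guided-filter aggregation)
print(np.bincount(disparity.ravel()).argmax())   # 4
```

In the full method, each disparity slice of `costs` would be smoothed with the guided filter before the arg-min, which is what preserves edges during aggregation.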
Automatic x-ray image contrast enhancement based on parameter auto-optimization.
Qiu, Jianfeng; Harold Li, H; Zhang, Tiezhi; Ma, Fangfang; Yang, Deshan
2017-11-01
Insufficient image contrast in radiation therapy daily setup x-ray images could negatively affect accurate patient treatment setup. We developed a method to perform automatic and user-independent contrast enhancement on 2D kilovoltage (kV) and megavoltage (MV) x-ray images. The goal was to provide tissue contrast optimized for each treatment site in order to support accurate patient daily treatment setup and the subsequent offline review. The proposed method processes the 2D x-ray images with an optimized image processing filter chain, which consists of a noise reduction filter and a high-pass filter followed by a contrast limited adaptive histogram equalization (CLAHE) filter. The most important innovation is to optimize the image processing parameters automatically so as to determine the required image contrast settings per disease site and imaging modality. Three major parameters controlling the image processing chain, i.e., the Gaussian smoothing weighting factor for the high-pass filter, and the block size and clip-limiting parameter for the CLAHE filter, were determined automatically using an interior-point constrained optimization algorithm. Fifty-two kV and MV x-ray images were included in this study. The results were manually evaluated and ranked with scores from 1 (worst, unacceptable) to 5 (significantly better than adequate and visually praiseworthy) by physicians and physicists. The average scores for the images processed by the proposed method, CLAHE alone, and the best window-level adjustment were 3.92, 2.83, and 2.27, respectively. The percentages of processed images that received a score of 5 were 48%, 29%, and 18%, respectively. The proposed method is able to outperform the standard image contrast adjustment procedures that are currently used in commercial clinical systems.
When the proposed method is implemented in the clinical systems as an automatic image processing filter, it could be useful for allowing quicker and potentially more accurate treatment setup and facilitating the subsequent offline review and verification. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
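The clip-limit idea behind the CLAHE stage can be sketched with a global (single-tile) clip-limited histogram equalization in numpy; real CLAHE equalizes per block and interpolates between blocks, and the parameter values here are illustrative.

```python
import numpy as np

def clip_limited_equalize(img, clip_limit=0.01, nbins=256):
    """Global clip-limited histogram equalization: a simplified,
    single-tile sketch of the CLAHE stage. clip_limit is the maximum
    fraction of pixels allowed in one histogram bin."""
    hist, edges = np.histogram(img, bins=nbins, range=(0.0, 1.0))
    limit = max(1, int(clip_limit * img.size))
    excess = np.sum(np.maximum(hist - limit, 0))
    hist = np.minimum(hist, limit) + excess // nbins   # redistribute excess
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    idx = np.clip(np.digitize(img, edges[1:-1]), 0, nbins - 1)
    return cdf[idx]           # map each pixel through the clipped CDF

# low-contrast test image: values squeezed into [0.4, 0.6]
img = 0.4 + 0.2 * np.random.default_rng(3).random((64, 64))
out = clip_limited_equalize(img)
print(round(out.max() - out.min(), 2))   # wider than the input's 0.2 spread
```

Clipping the histogram bounds the slope of the mapping, which is what limits noise amplification relative to plain histogram equalization; the paper's contribution is choosing this clip limit (and the block size) automatically per site and modality.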
Tondera, Katharina; Koenen, Stefan; Pinnekamp, Johannes
2013-01-01
A main source of surface water pollution in Western Europe stems from combined sewer overflow. One of the few technologies available to reduce this pollution is the retention soil filter. In this research project, we evaluated the cleaning efficiency of retention soil filters measuring the concentration ratio of standard wastewater parameters and bacteria according to factors limiting efficiency, such as long dry phases or phases of long-lasting retention. Furthermore, we conducted an initial investigation on how well retention soil filters reduce certain micropollutants on large-scale plants. There was little precipitation during the 1-year sampling phase, which led to fewer samples than expected. Nevertheless, we could verify how efficiently retention soil filters clean total suspended solids. Our results show that retention soil filters are not only able to eliminate bacteria, but also to retain some of the micropollutants investigated here. As the filters were able to reduce diclofenac, bisphenol A and metoprolol by a median rate of almost 75%, we think that further investigations should be made into the reduction processes in the filter. At this point, a higher accuracy in the results could be achieved by conducting bench-scale experiments.
Wójcik, Paweł; Adamowski, Janusz
2017-01-01
The spin filtering effect in a bilayer nanowire with a quantum point contact is investigated theoretically. We demonstrate a new mechanism of spin filtering based on the lateral inter-subband spin-orbit coupling, which has been reported to be strong for bilayer nanowires. The proposed spin filtering effect is explained as the joint effect of Landau-Zener intersubband transitions, caused by the hybridization of states with opposite spin due to the lateral Rashba SO interaction, and the confinement of carriers in the quantum point contact region. PMID:28358141
1979-12-13
Eq. (4.86) shows that the output of the chirp filter is evaluated at a point of stationary phase. References recoverable from this fragment: "Parametric Interactions in Delay-Line Devices", IEEE Transactions on Microwave Theory and Techniques, vol. MTT-21, no. 4, April 1973; Martin, T. A., "The IMCON Pulse Compression Filter and its Applications", IEEE Transactions on Microwave Theory and Techniques, vol. 21.
Spectrum of classes of point emitters of electromagnetic wave fields.
Castañeda, Román
2016-09-01
The spectrum of classes of point emitters has been introduced as a numerical tool suitable for the design, analysis, and synthesis of non-paraxial optical fields in arbitrary states of spatial coherence. In this paper, the polarization state of planar electromagnetic wave fields is included in the spectrum of classes, thus increasing its modeling capabilities. In this context, optical processing is realized as a filtering on the spectrum of classes of point emitters, performed by the complex degree of spatial coherence and the two-point correlation of polarization, which could be implemented dynamically by using programmable optical devices.
Integration of multi-sensor data to measure soil surface changes
NASA Astrophysics Data System (ADS)
Eltner, Anette; Schneider, Danilo
2016-04-01
Digital elevation models (DEM) of high resolution and accuracy covering a suitably sized area of interest can be a promising approach to help understand the processes of soil erosion. Thereby, the plot under investigation should remain undisturbed. The fragile marl landscape in Andalusia (Spain) is especially prone to soil detachment and transport with unique sediment connectivity characteristics due to the soil properties and climatic conditions. A 600 m² field plot is established and monitored during three field campaigns (Sep. 2013, Nov. 2013 and Feb. 2014). Unmanned aerial vehicle (UAV) photogrammetry and terrestrial laser scanning (TLS) are suitable tools to generate high resolution topography data that describe soil surface changes at large field plots. The advantages of both methods are thus utilised in a synergetic manner. On the one hand, TLS data are assumed to show more consistent error behaviour than DEMs derived from overlapping UAV images. Therefore, global errors (e.g. dome effect) and local errors (e.g. DEM blunders due to erroneous image matching) within the UAV data are assessed with the DEMs produced by TLS. Furthermore, TLS point clouds allow for fast and reliable filtering of vegetation spots, which is not as straightforward within the UAV data due to known image matching problems in areas displaying plant cover. On the other hand, systematic DEM errors linked to TLS are detected and possibly corrected utilising the DEMs reconstructed from overlapping UAV images. Furthermore, TLS point clouds are filtered corresponding to the degree of point quality, which is estimated from parameters of the scan geometry (i.e. incidence angle and footprint size). This is especially relevant for this study because the area of interest is located at gentle hillslopes that are prone to soil erosion.
Thus, the view of the scanning device onto the surface results in an adverse angle, which is only slightly improved by the use of a 4 m high tripod. Surface roughness is considered as a further parameter to evaluate the TLS point quality. The filtering tool allows for choosing each data point either from the TLS or UAV data corresponding to the data acquisition geometry and surface properties. The filtered points are merged into one point cloud, which is finally processed to reduce remaining data noise. DEM analysis reveals a continuous decrease of soil surface roughness after tillage, the reappearance of former wheel tracks and local patterns of erosion as well as accumulation.
An improved algorithm of laser spot center detection in strong noise background
NASA Astrophysics Data System (ADS)
Zhang, Le; Wang, Qianqian; Cui, Xutai; Zhao, Yu; Peng, Zhong
2018-01-01
Laser spot center detection is required in many applications. Common algorithms for laser spot center detection, such as the centroid and Hough transform methods, have poor anti-interference ability and low detection accuracy under strong background noise. In this paper, median filtering is first used to remove noise while preserving the edge details of the image. Second, the laser spot image is binarized to extract the target from the background. Morphological filtering is then performed to eliminate noise points inside and outside the spot. Finally, the edge of the pre-processed spot image is extracted and the laser spot center is obtained using the circle fitting method. Building on the circle fitting algorithm, the improved algorithm adds median filtering, morphological filtering and other processing steps. Theoretical analysis and experimental verification show that this method effectively filters background noise, which enhances the anti-interference ability of laser spot center detection and also improves the detection accuracy.
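The described pipeline can be sketched compactly with scipy; the kernel sizes, threshold, and synthetic spot below are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def spot_center(img):
    """Median filter -> binarization -> morphological opening -> edge
    extraction -> algebraic circle fit, mirroring the described steps."""
    den = ndimage.median_filter(img, size=3)            # remove impulse noise
    mask = den > 0.5 * den.max()                        # binarization
    mask = ndimage.binary_opening(mask, iterations=2)   # kill small specks
    edge = mask & ~ndimage.binary_erosion(mask)         # one-pixel edge ring
    y, x = np.nonzero(edge)
    A = np.column_stack([x, y, np.ones_like(x)]).astype(float)
    b = x.astype(float) ** 2 + y.astype(float) ** 2
    (cx2, cy2, _), *_ = np.linalg.lstsq(A, b, rcond=None)  # Kasa circle fit
    return cx2 / 2.0, cy2 / 2.0

# synthetic Gaussian spot at (48, 32) corrupted by salt noise
yy, xx = np.mgrid[0:96, 0:96]
img = np.exp(-((xx - 48) ** 2 + (yy - 32) ** 2) / (2 * 10.0 ** 2))
rng = np.random.default_rng(4)
img[rng.random(img.shape) < 0.02] = 1.0                 # 2% salt noise
cx, cy = spot_center(img)
print(round(cx), round(cy))   # ≈ (48, 32)
```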
Filtering Airborne LIDAR Data by AN Improved Morphological Method Based on Multi-Gradient Analysis
NASA Astrophysics Data System (ADS)
Li, Y.
2013-05-01
The technology of airborne Light Detection And Ranging (LIDAR) is capable of acquiring dense and accurate 3D geospatial data. Although many related efforts have been made in the last few years, LIDAR data filtering is still a challenging task, especially for areas with high relief or hybrid geographic features. In order to address bare-ground extraction from LIDAR point clouds of complex landscapes, this paper proposes a novel morphological filtering algorithm based on multi-gradient analysis and the characteristics of LIDAR data distribution. First, the point cloud is organized by an index mesh. Then, the multi-gradient of each point is calculated using the morphological method, and objects are removed gradually by iteratively applying an improved opening operation, constrained by the multi-gradient, to selected points. Fifteen sample datasets provided by ISPRS Working Group III/3 are employed to test the proposed filtering algorithm. These samples include environments that may cause filtering difficulty. Experimental results show that the proposed filtering algorithm adapts well to various scenes, including urban and rural areas. Omission error, commission error and total error can be simultaneously kept within a relatively small interval. The algorithm efficiently removes object points while preserving ground points to a great degree.
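The baseline behaviour of morphological ground filtering can be sketched with a single fixed-window grayscale opening on synthetic 1D terrain; the paper's method is iterative and multi-gradient-constrained, so this shows only the underlying primitive.

```python
import numpy as np
from scipy import ndimage

# Grayscale opening removes features narrower than the structuring
# window while keeping the ground surface, so the height above the
# opened surface flags object points (all values are synthetic).
x = np.linspace(0, 100, 500)
ground = 2.0 * np.sin(x / 15.0)              # gently rolling terrain
z = ground.copy()
z[200:215] += 8.0                            # a building, 15 samples wide
z[330:333] += 5.0                            # a tree, 3 samples wide

opened = ndimage.grey_opening(z, size=31)    # window wider than the objects
nonground = (z - opened) > 0.5               # height-above-opening test
print(nonground[200:215].all(), nonground[:150].any())   # True False
```

The classic weakness motivating the multi-gradient refinement is visible here: a single window size either clips terrain (too large) or keeps big buildings (too small), which iterative, gradient-constrained openings avoid.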
Kalman Filter for Calibrating a Telescope Focal Plane
NASA Technical Reports Server (NTRS)
Kang, Bryan; Bayard, David
2006-01-01
The instrument-pointing frame (IPF) Kalman filter, and an algorithm that implements this filter, have been devised for calibrating the focal plane of a telescope. As used here, calibration signifies, more specifically, a combination of measurements and calculations directed toward ensuring accuracy in aiming the telescope and determining the locations of objects imaged in various arrays of photodetectors in instruments located on the focal plane. The IPF Kalman filter was originally intended for application to a spaceborne infrared astronomical telescope, but can also be applied to other spaceborne and ground-based telescopes. In the traditional approach to calibration of a telescope, (1) one team of experts concentrates on estimating parameters (e.g., pointing alignments and gyroscope drifts) that are classified as being of primarily an engineering nature, (2) another team of experts concentrates on estimating calibration parameters (e.g., plate scales and optical distortions) that are classified as being primarily of a scientific nature, and (3) the two teams repeatedly exchange data in an iterative process in which each team refines its estimates with the help of the data provided by the other team. This iterative process is inefficient and uneconomical because it is time-consuming and entails the maintenance of two survey teams and the development of computer programs specific to the requirements of each team. Moreover, theoretical analysis reveals that the engineering/science iterative approach is not optimal in that it does not yield the best estimates of focal-plane parameters and, depending on the application, may not even enable convergence toward a set of estimates.
Robust extrema features for time-series data analysis.
Vemulapalli, Pramod K; Monga, Vishal; Brennan, Sean N
2013-06-01
The extraction of robust features for comparing and analyzing time series is a fundamentally important problem. Research efforts in this area encompass dimensionality reduction using popular signal analysis tools such as the discrete Fourier and wavelet transforms, various distance metrics, and the extraction of interest points from time series. Recently, extrema features for analysis of time-series data have assumed increasing significance because of their natural robustness under a variety of practical distortions, their economy of representation, and their computational benefits. Invariably, the process of encoding extrema features is preceded by filtering of the time series with an intuitively motivated filter (e.g., for smoothing), and subsequent thresholding to identify robust extrema. We define the properties of robustness, uniqueness, and cardinality as a means to identify the design choices available in each step of the feature generation process. Unlike existing methods, which utilize filters "inspired" from either domain knowledge or intuition, we explicitly optimize the filter based on training time series to optimize robustness of the extracted extrema features. We demonstrate further that the underlying filter optimization problem reduces to an eigenvalue problem and has a tractable solution. An encoding technique that enhances control over cardinality and uniqueness is also presented. Experimental results obtained for the problem of time series subsequence matching establish the merits of the proposed algorithm.
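The filter-then-threshold extrema encoding can be sketched as follows; the moving-average kernel is an intuitive stand-in for the paper's optimized filter (which is learned from training series via an eigenvalue problem), and the thresholds are illustrative.

```python
import numpy as np

def robust_extrema(series, kernel, order, threshold):
    """Smooth the series with `kernel`, then keep samples that are the
    extremum of their +/- `order` neighbourhood and exceed `threshold`
    in magnitude; returns indices into `series`."""
    k = np.asarray(kernel, dtype=float)
    s = np.convolve(series, k / k.sum(), mode="same")
    idx = []
    for i in range(order, s.size - order):
        w = s[i - order:i + order + 1]
        if s[i] >= w.max() and s[i] > threshold:
            idx.append(i)                    # robust maximum
        elif s[i] <= w.min() and s[i] < -threshold:
            idx.append(i)                    # robust minimum
    return np.array(idx)

t = np.linspace(0, 4 * np.pi, 400)
noisy = np.sin(t) + 0.05 * np.random.default_rng(5).normal(size=t.size)
idx = robust_extrema(noisy, np.ones(9), order=20, threshold=0.5)
print(len(idx))   # the sine's two peaks and two troughs
```

The three design choices the paper formalizes map directly onto the arguments: `kernel` (robustness), `order` (uniqueness), and `threshold` (cardinality).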
A combinatorial filtering method for magnetotelluric time-series based on Hilbert-Huang transform
NASA Astrophysics Data System (ADS)
Cai, Jianhua
2014-11-01
Magnetotelluric (MT) time-series are often contaminated with noise from natural or man-made processes. A substantial improvement is possible when the time-series are presented as clean as possible for further processing. A combinatorial method is described for filtering of MT time-series based on the Hilbert-Huang transform that requires a minimum of human intervention and leaves good data sections unchanged. Good data sections are preserved because after empirical mode decomposition the data are analysed through hierarchies, morphological filtering, adaptive threshold and multi-point smoothing, allowing separation of noise from signals. The combinatorial method can be carried out without any assumption about the data distribution. Simulated data and the real measured MT time-series from three different regions, with noise caused by baseline drift, high frequency noise and power-line contribution, are processed to demonstrate the application of the proposed method. Results highlight the ability of the combinatorial method to pick out useful signals, and the noise is suppressed greatly so that their deleterious influence is eliminated for the MT transfer function estimation.
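Two of the combinatorial steps, morphological filtering of baseline drift and multi-point smoothing, can be sketched in numpy/scipy; the EMD decomposition and adaptive thresholding are omitted, and all signal parameters are synthetic.

```python
import numpy as np
from scipy import ndimage

# An opening-closing average estimates the slowly varying baseline,
# which is subtracted; a short moving average ("multi-point smoothing")
# then suppresses high-frequency noise.
t = np.linspace(0, 10, 2000)
signal = np.sin(2 * np.pi * 2 * t)                 # 2 Hz MT-like signal
drift = 0.3 * t                                    # baseline drift
noise = 0.1 * np.random.default_rng(6).normal(size=t.size)
x = signal + drift + noise

w = 401                                            # window spans several periods
baseline = 0.5 * (ndimage.grey_opening(x, size=w) +
                  ndimage.grey_closing(x, size=w))
detrended = x - baseline                           # drift removed
smoothed = np.convolve(detrended, np.ones(9) / 9.0, mode="same")
print(round(np.corrcoef(smoothed, signal)[0, 1], 2))   # close to 1
```

Averaging the opening and the closing cancels their opposite (lower/upper envelope) biases, so the estimated baseline tracks the drift rather than either envelope.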
... the following measures: Use a point-of-use filter. Consider using point-of-use (personal use, end-of-tap, under-sink) filters that remove particles one micrometer or less in ...
High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering
NASA Technical Reports Server (NTRS)
Maly, K.
1998-01-01
Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during its execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and could be distributed in various locations in the applications environment which complicates the management decisions process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end- points management applications such as debugging and reactive control tools to improve the application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its Instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system which is a large-scale distributed system for collaborative distance learning. 
The filtering mechanism is an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance-learning application to obtain debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work makes a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance, and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss limitations of existing event filtering mechanisms and outline how our architecture will improve key aspects of event filtering.
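The subscription-and-filtering idea at the core of such an architecture can be sketched as a predicate-based event dispatcher (a hypothetical minimal sketch; the class and method names are illustrative, and the real architecture distributes this filtering across monitored components):

```python
class EventFilter:
    """Minimal subscription-based event filter sketch: consumers register
    predicates, and only matching events are forwarded to them."""

    def __init__(self):
        self.subscriptions = []  # (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        """Register a callback for events satisfying the predicate."""
        self.subscriptions.append((predicate, callback))

    def publish(self, event):
        """Offer an event to every subscription; deliver only matches."""
        delivered = 0
        for predicate, callback in self.subscriptions:
            if predicate(event):
                callback(event)
                delivered += 1
        return delivered
```

Consumers register only for events they care about, so uninteresting event traffic is dropped at the filter rather than forwarded, which is the intrusiveness-reduction argument made above.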
Ghost suppression in image restoration filtering
NASA Technical Reports Server (NTRS)
Riemer, T. E.; Mcgillem, C. D.
1975-01-01
An optimum image restoration filter is described in which provision is made to constrain the spatial extent of the restoration function, the noise level of the filter output, and the rate of falloff of the composite system point-spread function away from the origin. Experimental results show that sidelobes on the composite system point-spread function produce ghosts in the restored image near discontinuities in intensity level. By redetermining the filter using a penalty function that is zero over the main lobe of the composite point-spread function of the optimum filter and nonzero where the point-spread function departs from a smoothly decaying function in the sidelobe region, a great reduction in sidelobe level is obtained. Almost no loss in resolving power of the composite system results from this procedure. By iteratively carrying out the same procedure, even further reductions in sidelobe level are obtained. Examples of original and iterated restoration functions are shown along with their effects on a test image.
A 640-MHz 32-megachannel real-time polyphase-FFT spectrum analyzer
NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Garyantes, M. F.; Grimm, M. J.; Charny, B.
1991-01-01
A polyphase fast Fourier transform (FFT) spectrum analyzer being designed for NASA's Search for Extraterrestrial Intelligence (SETI) Sky Survey at the Jet Propulsion Laboratory is described. By replacing the time-domain multiplicative window preprocessing with polyphase filter processing, much of the processing loss of windowed FFTs can be eliminated. Polyphase coefficient memory costs are minimized by effective use of run-length compression. Finite-word-length effects are analyzed, producing a balanced system with 8-bit inputs, 16-bit fixed-point polyphase arithmetic, and 24-bit fixed-point FFT arithmetic. Fixed-point renormalization midway through the computation is naturally accommodated by the proposed matrix FFT algorithm. Simulation results validate the finite-word-length arithmetic analysis and the renormalization technique.
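The window-replacement idea can be illustrated with a floating-point polyphase channelizer (a simplified NumPy sketch; the prototype-filter design, branch count, and all parameters here are illustrative, not those of the 32-megachannel fixed-point hardware described above):

```python
import numpy as np

def polyphase_fft_channelizer(x, num_channels, taps_per_branch=8):
    """Split x into num_channels sub-bands with a polyphase filterbank
    followed by an FFT, instead of a multiplicative window + FFT."""
    m, t = num_channels, taps_per_branch
    # Windowed-sinc prototype low-pass filter, one polyphase branch per channel.
    proto = np.sinc(np.arange(m * t) / m - t / 2) * np.hamming(m * t)
    branches = proto.reshape(t, m)
    nblocks = len(x) // m - t + 1
    out = np.empty((nblocks, m), dtype=complex)
    for b in range(nblocks):
        block = x[b * m:(b + t) * m].reshape(t, m)
        # Weighted sum down each polyphase branch, then one FFT per frame.
        out[b] = np.fft.fft((block * branches).sum(axis=0))
    return out
```

Each output row is one FFT frame whose effective bin response follows the prototype filter rather than a multiplicative window, which is the source of the reduced processing loss.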
Time-Domain Filtering for Spatial Large-Eddy Simulation
NASA Technical Reports Server (NTRS)
Pruett, C. David
1997-01-01
An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
Passive thermo-optic feedback for robust athermal photonic systems
Rakich, Peter T.; Watts, Michael R.; Nielson, Gregory N.
2015-06-23
Thermal control devices, photonic systems and methods of stabilizing a temperature of a photonic system are provided. A thermal control device thermally coupled to a substrate includes a waveguide for receiving light, an absorption element optically coupled to the waveguide for converting the received light to heat and an optical filter. The optical filter is optically coupled to the waveguide and thermally coupled to the absorption element. An operating point of the optical filter is tuned responsive to the heat from the absorption element. When the operating point is less than a predetermined temperature, the received light is passed to the absorption element via the optical filter. When the operating point is greater than or equal to the predetermined temperature, the received light is transmitted out of the thermal control device via the optical filter, without being passed to the absorption element.
NASA Astrophysics Data System (ADS)
Zaldívar Huerta, Ignacio E.; Pérez Montaña, Diego F.; Nava, Pablo Hernández; Juárez, Alejandro García; Asomoza, Jorge Rodríguez; Leal Cruz, Ana L.
2013-12-01
We experimentally demonstrate the use of an electro-optical transmission system for the distribution of video over long-haul point-to-point optical links using a microwave photonic filter in the frequency range 0.01-10 GHz. The frequency response of the microwave photonic filter consists of four band-pass windows centered at frequencies that can be tailored as a function of the free spectral range of the optical source, the chromatic dispersion parameter of the optical fiber used, and the length of the optical link. In particular, the filtering effect is obtained by the interaction of an externally modulated multimode laser diode emitting at 1.5 μm with a length of dispersive optical fiber. The filtered microwave signals are used as electrical carriers to transmit a TV signal over long-haul point-to-point optical links. Transmission of TV signals coded on the microwave band-pass windows located at 4.62, 6.86, 4.0, and 6.0 GHz is achieved over optical links of 25.25 km and 28.25 km, respectively. Practical applications of this approach lie in the field of FTTH access networks for the distribution of services such as video, voice, and data.
Sustainable colloidal-silver-impregnated ceramic filter for point-of-use water treatment.
Oyanedel-Craver, Vinka A; Smith, James A
2008-02-01
Cylindrical colloidal-silver-impregnated ceramic filters for household (point-of-use) water treatment were manufactured and tested for performance in the laboratory with respect to flow rate and bacteria transport. Filters were manufactured by combining clay-rich soil with water, grog (previously fired clay), and flour, pressing them into cylinders, and firing them at 900 °C for 8 h. The pore-size distribution of the resulting ceramic filters was quantified by mercury porosimetry. Colloidal silver was applied to filters in different quantities and ways (dipping and painting). Filters were also tested without any colloidal-silver application. Hydraulic conductivity of the filters was quantified using changing-head permeability tests. [3H]H2O water was used as a conservative tracer to quantify advection velocities and the coefficient of hydrodynamic dispersion. Escherichia coli (E. coli) was used to quantify bacterial transport through the filters. Hydraulic conductivity and pore-size distribution varied with filter composition; hydraulic conductivities were on the order of 10^-5 cm/s and more than 50% of the pores for each filter had diameters ranging from 0.02 to 15 μm. The filters removed between 97.8% and 100% of the applied bacteria; colloidal-silver treatments improved filter performance, presumably by deactivation of bacteria. The quantity of colloidal silver applied per filter was more important to bacteria removal than the method of application. Silver concentrations in effluent filter water were initially greater than 0.1 mg/L, but dropped below this value after 200 min of continuous operation. These results indicate that colloidal-silver-impregnated ceramic filters, which can be made using primarily local materials and labor, show promise as an effective and sustainable point-of-use water treatment technology for the world's poorest communities.
Indoor A* Pathfinding Through an Octree Representation of a Point Cloud
NASA Astrophysics Data System (ADS)
Rodenberg, O. B. P. M.; Verbree, E.; Zlatanova, S.
2016-10-01
There is a growing demand for 3D indoor pathfinding applications. Pathfinding methods researched in the field of robotics during the last decades of the 20th century focused on 2D navigation. Nowadays we would like to be able to help people navigate inside buildings, or to send a drone inside a building when entry is too dangerous for people. What these examples have in common is that an object with a certain geometry needs to find an optimal collision-free path between a start and a goal point. This paper presents a new workflow for pathfinding through an octree representation of a point cloud. We apply the following steps: 1) the point cloud is processed so that it fits best in an octree; 2) during octree generation the interior empty nodes are filtered and further processed; 3) for each interior empty node the distance to the closest occupied node directly under it is computed; 4) a network graph is computed for all empty nodes; 5) the A* pathfinding algorithm is conducted. This workflow takes into account the connectivity of each node to all possible neighbours (face, edge, and vertex, and all sizes). In addition, a collision-avoidance system is pre-processed in two steps: first, the clearance of each empty node is computed, and then the maximal crossing value between two empty neighbouring nodes is computed. The clearance is used to select interior empty nodes of appropriate size, and the maximal crossing value is used to filter the network graph. Finally, both these datasets are used in A* pathfinding.
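Step 5 can be sketched with a generic A* search (a minimal sketch: the `neighbors` function stands in for the filtered octree adjacency graph described above, and `heuristic` for a distance estimate such as Euclidean distance between node centers):

```python
import heapq

def a_star(neighbors, start, goal, heuristic):
    """Generic A*: `neighbors(node)` yields (next_node, cost) pairs.
    Returns the lowest-cost path from start to goal, or None."""
    open_set = [(heuristic(start, goal), 0.0, start)]
    best = {start: 0.0}   # cheapest known cost to each node
    parent = {}
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node == goal:
            path = [node]           # walk parents back to the start
            while node in parent:
                node = parent[node]
                path.append(node)
            return path[::-1]
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                parent[nxt] = node
                heapq.heappush(open_set, (ng + heuristic(nxt, goal), ng, nxt))
    return None
```

With an admissible heuristic (never overestimating the remaining cost), the first time the goal is popped the path is optimal; clearance filtering simply removes edges from `neighbors` before the search runs.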
Design and implementation of a hybrid sub-band acoustic echo canceller (AEC)
NASA Astrophysics Data System (ADS)
Bai, Mingsian R.; Yang, Cheng-Ken; Hur, Ker-Nan
2009-04-01
An efficient method is presented for implementing an acoustic echo canceller (AEC) that makes use of a hybrid sub-band approach. The hybrid system comprises a fixed processor and an adaptive filter in each sub-band. The AEC aims at reducing the echo resulting from acoustic feedback in loudspeaker-enclosure-microphone (LEM) systems such as teleconferencing and hands-free systems. In order to cancel the acoustic echo efficiently, various processing architectures, including fixed filters, hybrid processors, and sub-band structures, are investigated. A double-talk detector is incorporated into the proposed AEC to prevent the adaptive filter from diverging in double-talk situations. A de-correlation filter is also used alongside sub-band processing to enhance the performance and efficiency of the AEC. All algorithms are implemented and verified on the platform of a fixed-point digital signal processor (DSP). The AECs are evaluated in terms of cancellation performance and computational complexity. In addition, listening tests are conducted to assess the subjective performance of the AECs. From the results, the proposed hybrid sub-band AEC was found to be the most effective among all methods in terms of echo reduction and timbral quality.
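The adaptive filter at the heart of an AEC is conventionally a normalized-LMS FIR filter; a single-band, floating-point sketch (illustrative tap count and step size; the implementation described above runs in fixed point on a DSP with sub-band decomposition, double-talk detection, and de-correlation filtering around it):

```python
import numpy as np

def nlms_echo_cancel(far_end, mic, taps=64, mu=0.5, eps=1e-8):
    """Adaptive FIR echo canceller sketch: estimate the echo path from
    the far-end (loudspeaker) signal and subtract it from the mic."""
    w = np.zeros(taps)        # adaptive filter weights
    buf = np.zeros(taps)      # recent far-end samples, newest first
    err = np.zeros(len(mic))  # echo-cancelled output
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = far_end[n]
        y = w @ buf                               # echo estimate
        e = mic[n] - y                            # residual after cancellation
        w += mu * e * buf / (buf @ buf + eps)     # normalized LMS update
        err[n] = e
    return err, w
```

The normalization by the input power makes the step size largely independent of the far-end signal level, which is why NLMS is the usual choice for speech.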
High accuracy position method based on computer vision and error analysis
NASA Astrophysics Data System (ADS)
Chen, Shihao; Shi, Zhongke
2003-09-01
High-accuracy positioning is becoming a research hotspot in the field of automatic control, and positioning is one of the most researched tasks in vision systems, so we address object locating with image-processing methods. This paper describes a new high-accuracy positioning method based on a vision system. In the proposed method, an edge-detection filter is designed for a given running condition. The filter contains two main parts: an image-processing module, which implements edge detection and consists of multi-level self-adaptive threshold segmentation, edge detection, and edge filtering; and an object-locating module, which reports the location of each object with high accuracy and is made up of median filtering and curve fitting. This paper gives an error analysis of the method to establish the feasibility of vision-based position detection. Finally, to verify the availability of the method, an example of positioning a worktable using the proposed method is given at the end of the paper. Results show that the method can accurately detect the position of the measured object and identify the object's attitude.
Data Processing and Quality Evaluation of a Boat-Based Mobile Laser Scanning System
Vaaja, Matti; Kukko, Antero; Kaartinen, Harri; Kurkela, Matti; Kasvi, Elina; Flener, Claude; Hyyppä, Hannu; Hyyppä, Juha; Järvelä, Juha; Alho, Petteri
2013-01-01
Mobile mapping systems (MMSs) are used for mapping topographic and urban features which are difficult and time consuming to measure with other instruments. The benefits of MMSs include efficient data collection and versatile usability. This paper investigates the data processing steps and quality of a boat-based mobile mapping system (BoMMS) data for generating terrain and vegetation points in a river environment. Our aim in data processing was to filter noise points, detect shorelines as well as points below water surface and conduct ground point classification. Previous studies of BoMMS have investigated elevation accuracies and usability in detection of fluvial erosion and deposition areas. The new findings concerning BoMMS data are that the improved data processing approach allows for identification of multipath reflections and shoreline delineation. We demonstrate the possibility to measure bathymetry data in shallow (0–1 m) and clear water. Furthermore, we evaluate for the first time the accuracy of the BoMMS ground points classification compared to manually classified data. We also demonstrate the spatial variations of the ground point density and assess elevation and vertical accuracies of the BoMMS data. PMID:24048340
Gaussian Process Kalman Filter for Focal Plane Wavefront Correction and Exoplanet Signal Extraction
NASA Astrophysics Data System (ADS)
Sun, He; Kasdin, N. Jeremy
2018-01-01
Currently, the ultimate limitation of space-based coronagraphy is the ability to subtract the residual PSF after wavefront correction to reveal the planet. Called reference difference imaging (RDI), the technique consists of conducting wavefront control to collect the reference point spread function (PSF) by observing a bright star, and then extracting target planet signals by subtracting a weighted sum of reference PSFs. Unfortunately, this technique is inherently inefficient because it spends a significant fraction of the observing time on the reference star rather than on the target star with the planet. Recent progress in model-based wavefront estimation suggests an alternative approach. A Kalman filter can be used to estimate the stellar PSF for correction by the wavefront control system while simultaneously estimating the planet signal. Without observing the reference star, the (extended) Kalman filter directly utilizes the wavefront correction data and combines the time-series observations and model predictions to estimate the stellar PSF and planet signals. Because wavefront correction is used during the entire observation with no slewing, the system has inherently better stability. In this poster we show our results aimed at further improving Kalman filter estimation accuracy by including not only temporal correlations but also spatial correlations among neighboring pixels in the images. This technique is known as a Gaussian process Kalman filter (GPKF). We also demonstrate the advantages of using a Kalman filter rather than RDI by simulating a real space exoplanet detection mission.
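The sequential estimation machinery underlying this approach can be illustrated with the standard linear Kalman recursion (textbook form only; the estimator described above is an extended / Gaussian-process variant with spatial correlations):

```python
import numpy as np

def kalman_filter(zs, F, H, Q, R, x0, P0):
    """Standard linear Kalman filter: predict with state-transition F,
    update with each measurement z through observation matrix H."""
    x, P = np.array(x0, float), np.array(P0, float)
    estimates = []
    for z in zs:
        # Predict: propagate state and covariance through the model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: blend prediction with the measurement via the gain K.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

The same predict/update cycle applies whether the "state" is a spacecraft trajectory or, as above, electric-field estimates per focal-plane pixel; the variants differ only in how the model and correlations are built.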
GEOS 3 data processing for the recovery of geoid undulations and gravity anomalies
NASA Technical Reports Server (NTRS)
Rapp, R. H.
1979-01-01
The paper discusses the analysis of GEOS 3 altimeter data for the determination of geoid heights and point and mean gravity anomalies. Methods are presented for determining the mean anomalies and mean undulations from the GEOS 3 altimeter data available by the end of September 1977 without having a complete set of precise orbits. The editing of the data is extensive to remove questionable data, although no filtering of the data is carried out. An adjustment process is carried out to eliminate orbit error and altimeter bias. Representative point anomaly values are computed to investigate anomaly behavior across the Bonin Trench and over the Patton seamounts.
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1976-01-01
A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.
[Continuum based fast Fourier transform processing of infrared spectrum].
Liu, Qing-Jie; Lin, Qi-Zhong; Wang, Qin-Jun; Li, Hui; Li, Shuai
2009-12-01
To recognize ground objects from infrared spectra, removal of high-frequency noise is one of the most important phases in spectral feature analysis and extraction. A new method for infrared spectrum preprocessing is given, combining continuum processing with the fast Fourier transform (CFFT). The continuum is first removed from the noise-polluted infrared spectrum to standardize the spectra. The spectrum is then transformed into the frequency domain (FD) with the fast Fourier transform (FFT), separating noise from target information. After eliminating the noise with a low-pass filter, the filtered FD spectrum is transformed back into the time domain (TD) with the inverse fast Fourier transform. Finally the continuum is restored, and the filtered infrared spectrum is obtained. An experiment was performed on a USGS chlorite spectrum polluted with two kinds of simulated white noise to validate the filtering ability of CFFT, by contrast with a five-point cubic smoothing function (CFFP) in the time domain and traditional FFT filtering in the frequency domain. A single pass of CFFP has a limited filtering effect, so it must be run for many passes, consuming more time, to achieve a better result. As for conventional FFT filtering, the Gibbs phenomenon greatly degrades the result at edge bands, owing to the special character of rock and mineral spectra, while it works well at middle bands. The mean squared error of CFFT is 0.000012336 with a cut-off frequency of 150, while that of FFT is 0.000061074 with a cut-off frequency of 150 and that of CFFP is 0.000022963 with 150 passes. Moreover, the filtering result of CFFT can be improved by adjusting the filter cut-off frequency, with little effect on run time. The CFFT method overcomes the Gibbs problem of FFT in spectrum filtering, and is more convenient, dependable, and effective than traditional TD filtering methods.
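The CFFT pipeline (continuum removal, FFT low-pass, inverse FFT, continuum restoration) can be sketched end-to-end (a simplified sketch: a straight line through the end points stands in for the convex-hull continuum normally used in spectroscopy, and the cutoff index is illustrative):

```python
import numpy as np

def continuum_fft_filter(spectrum, cutoff):
    """CFFT-style smoothing sketch: divide out a smooth continuum,
    low-pass the residual in the Fourier domain, then restore it."""
    x = np.arange(len(spectrum))
    # Crude continuum: straight line through the end points
    # (real continuum removal fits a convex hull over the spectrum).
    continuum = np.interp(x, [0, len(spectrum) - 1],
                          [spectrum[0], spectrum[-1]])
    ratio = spectrum / continuum          # continuum-removed spectrum
    freq = np.fft.rfft(ratio)
    freq[cutoff:] = 0                     # ideal low-pass filter
    smoothed = np.fft.irfft(freq, n=len(spectrum))
    return smoothed * continuum           # restore the continuum
```

Removing the continuum first flattens the band edges, which is what suppresses the Gibbs ringing that a plain FFT low-pass produces on sloping rock and mineral spectra.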
Optimum constrained image restoration filters
NASA Technical Reports Server (NTRS)
Riemer, T. E.; Mcgillem, C. D.
1974-01-01
The filter was developed in Hilbert space by minimizing the radius of gyration of the overall or composite system point-spread function subject to constraints on the radius of gyration of the restoration filter point-spread function, the total noise power in the restored image, and the shape of the composite system frequency spectrum. An iterative technique is introduced which alters the shape of the optimum composite system point-spread function, producing a suboptimal restoration filter which suppresses undesirable secondary oscillations. Finally this technique is applied to multispectral scanner data obtained from the Earth Resources Technology Satellite to provide resolution enhancement. An experimental approach to the problems involving estimation of the effective scanner aperture and matching the ERTS data to available restoration functions is presented.
Joint classification and contour extraction of large 3D point clouds
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2017-08-01
We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several million points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and for handling strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows both the definition of an expressive feature set and the extraction of topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems: point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and a small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with >10^9 points.
NASA Astrophysics Data System (ADS)
Guo, C.; Tong, X.; Liu, S.; Liu, S.; Lu, X.; Chen, P.; Jin, Y.; Xie, H.
2017-07-01
Determining the attitude of a satellite at the time of imaging and then establishing the mathematical relationship between image points and ground points is essential in high-resolution remote sensing image mapping. A star tracker is insensitive to high-frequency attitude variation because of measurement noise and satellite jitter, but it can determine low-frequency attitude motion with high accuracy. A gyro, as a short-term reference for the satellite's attitude, is sensitive to high-frequency attitude change, but because of gyro drift and integration error its attitude-determination error increases with time. Based on the complementary noise-frequency characteristics of the two kinds of attitude sensors, this paper proposes an on-orbit attitude estimation method for star sensors and gyros based on a Complementary Filter (CF) and an Unscented Kalman Filter (UKF). In this study, the principle and implementation of the proposed method are described. First, gyro attitude quaternions are acquired from the attitude kinematics equation. An attitude information fusion method is then introduced, which applies high-pass filtering to the gyro and low-pass filtering to the star tracker. Second, the attitude fusion data from the CF are introduced as the observed values of the UKF system in the measurement-update step. The accuracy and effectiveness of the method are validated on simulated sensor attitude data. The results indicate that the proposed method can suppress the gyro drift and the measurement noise of the attitude sensors, improving the accuracy of attitude determination significantly compared with the simulated on-orbit attitude and with the attitude estimation results of a UKF defined by the same simulation parameters.
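The complementary-filter stage can be illustrated for a single axis (a scalar sketch with an illustrative time constant; the method described above operates on quaternions and feeds the fused attitude into a UKF):

```python
import numpy as np

def complementary_filter(gyro_rate, star_angle, dt, tau):
    """Fuse a gyro rate (good at high frequency, drifts at low frequency)
    with star-tracker angles (noisy but drift-free). Single-axis sketch."""
    alpha = tau / (tau + dt)   # crossover frequency set by time constant tau
    est = np.empty(len(gyro_rate))
    angle = star_angle[0]
    for k in range(len(gyro_rate)):
        # High-pass the integrated gyro, low-pass the star tracker:
        # both paths share the same first-order crossover, so their
        # transfer functions sum to unity (hence "complementary").
        angle = alpha * (angle + gyro_rate[k] * dt) \
                + (1 - alpha) * star_angle[k]
        est[k] = angle
    return est
```

Below the crossover the star tracker dominates (suppressing gyro drift); above it the integrated gyro dominates (suppressing star-tracker noise and jitter).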
Wijeisnghe, Ruchire Eranga Henry; Cho, Nam Hyun; Park, Kibeom; Shin, Yongseung; Kim, Jeehyun
2013-12-01
In this study, we demonstrate an enhanced spectral calibration method for 1.3 μm spectral-domain optical coherence tomography (SD-OCT). The wavelength-filter-based calibration simplifies the SD-OCT system, and the axial resolution and overall speed of the OCT system are dramatically improved as well. An externally connected wavelength filter is utilized to obtain the mapping between wavenumber and pixel position. During the calibration process the wavelength filter is placed after a broadband source, connected through an optical circulator. The filtered spectrum, with a narrow line width of 0.5 nm, is detected using a line-scan camera. The method does not require a filter or a software recalibration algorithm for imaging, as it simply resamples the OCT signal from the detector array without employing rescaling or interpolation methods. One of the main drawbacks of SD-OCT, the broadening of the point spread function (PSF) with increasing imaging depth, can be compensated by increasing the wavenumber-linearization order. The sensitivity of our system was measured at 99.8 dB at an imaging depth of 2.1 mm, compared with the uncompensated case.
NASA Technical Reports Server (NTRS)
Medelius, Pedro J. (Inventor); Simpson, Howard J. (Inventor)
1999-01-01
A cable tester is described for low-frequency testing of a cable for faults. The tester allows testing a cable beyond a point where a signal conditioner is installed, minimizing the number of connections that have to be disconnected. A magnetic pickup coil is described for detecting a test signal injected into the cable. A narrow bandpass filter is described for improving detection of the test signal. The bandpass filter reduces noise so that the high-gain amplifier provided for detecting the test signal is not completely saturated by noise. To further increase the accuracy of the cable tester, processing gain is achieved by comparing the signal from the amplifier with at least one reference signal emulating the low-frequency input signal injected into the cable. Different processing techniques for evaluating a detected signal are described.
NASA Astrophysics Data System (ADS)
Ibey, Bennett; Subramanian, Hariharan; Ericson, Nance; Xu, Weijian; Wilson, Mark; Cote, Gerard L.
2005-03-01
A blood perfusion and oxygenation sensor has been developed for in situ monitoring of transplanted organs. In processing in situ data, motion artifacts due to increased perfusion can create invalid oxygen saturation values. In order to remove the unwanted artifacts from the pulsatile signal, adaptive filtering was employed using a third wavelength source centered at 810 nm as a reference signal. The 810 nm source resides approximately at the isosbestic point of the hemoglobin absorption curve, where the absorbance of light is nearly equal for oxygenated and deoxygenated hemoglobin. Using an autocorrelation-based algorithm, oxygen saturation values can be obtained without the need for large sampling data sets, allowing near real-time processing. This technique has been shown to be more reliable than traditional techniques and has been proven to adequately improve the measurement of oxygenation values in varying perfusion states.
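The reference-channel adaptive filtering can be sketched with a plain LMS update, with the 810 nm channel playing the role of the noise reference (illustrative tap count and step size; the sensor's actual processing chain is not specified at this level of detail):

```python
import numpy as np

def reference_lms(primary, reference, taps=8, mu=0.01):
    """Adaptive artifact removal sketch: LMS removes whatever part of the
    primary signal is correlated with the reference (artifact) channel."""
    w = np.zeros(taps)
    buf = np.zeros(taps)     # recent reference samples, newest first
    out = np.empty(len(primary))
    for n in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        artifact = w @ buf            # artifact estimate from the reference
        out[n] = primary[n] - artifact
        w += mu * out[n] * buf        # plain LMS weight update
    return out
```

Because the isosbestic reference carries the motion artifact but (ideally) no oxygenation-dependent signal, the filter cancels only the artifact and leaves the pulsatile signal of interest in the output.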
Estimating Thruster Impulses From IMU and Doppler Data
NASA Technical Reports Server (NTRS)
Lisano, Michael E.; Kruizinga, Gerhard L.
2009-01-01
A computer program implements a thrust impulse measurement (TIM) filter, which processes data on changes in velocity and attitude of a spacecraft to estimate the small impulsive forces and torques exerted by the thrusters of the spacecraft reaction control system (RCS). The velocity-change data are obtained from line-of-sight velocity data from Doppler measurements made from the Earth. The attitude-change data are telemetered from an inertial measurement unit (IMU) aboard the spacecraft. The TIM filter estimates the three-axis thrust vector for each RCS thruster, thereby enabling reduction of cumulative navigation error attributable to inaccurate prediction of thrust vectors. The filter has been augmented with a simple mathematical model to compensate for large temperature fluctuations in the spacecraft thruster catalyst bed in order to estimate thrust more accurately at deadbanding cold-firing levels. Also, rigorous consider-covariance estimation is applied in the TIM to account for the expected uncertainty in the moment of inertia and the location of the center of gravity of the spacecraft. The TIM filter was built with, and depends upon, a sigma-point consider-filter algorithm implemented in a Python-language computer program.
NASA Technical Reports Server (NTRS)
Lisano, Michael E.
2007-01-01
Recent literature in applied estimation theory reflects growing interest in the sigma-point (also called unscented) formulation for optimal sequential state estimation, often describing performance comparisons with extended Kalman filters as applied to specific dynamical problems [c.f. 1, 2, 3]. Favorable attributes of sigma-point filters include a lower expected error for nonlinear, even non-differentiable, dynamical systems, and a straightforward formulation that does not require derivation or implementation of any partial-derivative Jacobian matrices. These attributes are particularly attractive, e.g. in terms of enabling simplified code architecture and streamlined testing, in the formulation of estimators for nonlinear spaceflight mechanics systems, such as filter software onboard deep-space robotic spacecraft. As presented in [4], the Sigma-Point Consider Filter (SPCF) algorithm extends the sigma-point filter algorithm to the problem of consider covariance analysis. Considering parameters in a dynamical system, while estimating its state, provides an upper bound on the estimated state covariance, which is viewed as a conservative approach to designing estimators for problems of general guidance, navigation and control. This is because, whether a parameter in the system model is observable or not, error in the knowledge of the value of a non-estimated parameter will increase the actual uncertainty of the estimated state of the system beyond the level formally indicated by the covariance of an estimator that neglects errors or uncertainty in that parameter. The equations for SPCF covariance evolution are obtained in a fashion similar to the derivation approach taken with standard (i.e. linearized or extended) consider-parameterized Kalman filters (c.f. [5]).
While in [4] the SPCF and the linear-theory consider filter (LTCF) were applied to an illustrative linear-dynamics/linear-measurement problem, the present work examines the SPCF as applied to nonlinear sequential consider covariance analysis, i.e. in the presence of nonlinear dynamics and nonlinear measurements. A simple SPCF for orbit determination, exemplifying an algorithm hosted in the guidance, navigation and control (GN&C) computer processor of a hypothetical robotic spacecraft, was implemented and compared with an identically parameterized (standard) extended, consider-parameterized Kalman filter. The onboard filtering scenario examined is a hypothetical spacecraft orbit about a small natural body with imperfectly known mass. The formulations, relative complexities, and performances of the filters are compared and discussed.
Yoon, Yeojoon; Jung, Youmi; Kwon, Minhwan; Cho, Eunha; Kang, Joon-Wun
2013-01-01
The effects of various electrodes and of prefiltration on minimizing disinfection byproducts (DBPs) in electrochemical water disinfection were evaluated. The target microorganism, Escherichia coli O157:H7, was effectively inactivated even when a solar-charged storage battery was used to power the electrolysis process. The extent of microbial inactivation decreased with lower water temperature and higher pH in the free chlorine disinfection system. The RuO2/Ti electrode was the most efficient because it produced the lowest concentration of chlorate and the highest generation of free chlorine. Prefiltration using a ceramic filter inhibited the formation of halogenated DBPs because it removed DBP precursors. For safe point-of-use water treatment, the use of a hybrid prefiltration stage with the electrolysis system is strongly recommended to reduce risks from DBPs. The system is particularly suited to use in developing regions. PMID:24381482
Particulate removal processes and hydraulics of porous gravel media filters
NASA Astrophysics Data System (ADS)
Minto, J. M.; Phoenix, V. R.; Dorea, C. C.; Haynes, H.; Sloan, W. T.
2013-12-01
Sustainable urban Drainage Systems (SuDS) are rapidly gaining acceptance as a low-cost tool for treating urban runoff pollutants close to the source. Road runoff water in particular requires treatment due to the presence of high levels of suspended particles and of heavy metals adsorbed to these particles. The aim of this research is to elucidate the particle removal processes that occur within gravel filters, which have so far been considered 'black-box' systems. Based on these findings, a better understanding will be attained of what influences gravel filter removal efficiency and how this changes throughout their design life, leading to a more rational design of this useful technology. This has been achieved by tying together three disparate research elements: tracer residence time distribution curves of filters during clogging; 3D magnetic resonance imaging (MRI) of clogging filters; and computational fluid dynamics (CFD) modelling of complex filter pore networks. This research relates column-average changes in particle removal efficiency and tracer residence time distributions (RTDs) due to clogging with non-invasive measurement of the spatial variability in particle deposition. The CFD modelling provides a link between observed deposition patterns, flow velocities and wall shear stresses, as well as explanations for the change in RTD with clogging and its effect on particle transport. Results show that, as a filter clogs, particles take a longer, more tortuous path through the filter. This is offset by a reduction in filter volume, resulting in higher flow velocities and more rapid particle transport. Higher velocities result in higher shear stresses and in the development of preferential pathways in which the velocity exceeds the deposition threshold, so that the overall efficiency of the filter decreases. Initial pore geometry is linked to the pattern of deposition and the subsequent formation of preferential pathways.
These results shed light on the 'black-box' internal clogging processes of gravel filters and are a considerable improvement on the inflow/outflow data most often available for monitoring removal efficiency and clogging.
Figure: sub-section of the MRI-derived geometry showing gravel (grey), pore space (blue) and deposited particles (red), (1) prior to clogging and (2) after clogging. The pore-network skeleton (green) provided a reference point for comparing pore-diameter change with clogging.
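The RTD comparison above rests on moment statistics of the measured tracer breakthrough curve; a minimal sketch (hypothetical helper, trapezoid-rule integration assumed) of how the mean residence time and variance are extracted from tracer data:

```python
import numpy as np

def rtd_moments(t, c):
    """Mean residence time and variance from a tracer breakthrough curve c(t)."""
    trap = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))  # trapezoid rule
    E = c / trap(c)                  # normalized RTD E(t), integrates to 1
    t_mean = trap(t * E)             # first moment: mean residence time
    var = trap((t - t_mean) ** 2 * E)
    return t_mean, var
```

A longer mean residence time indicates a more tortuous path, while a larger variance indicates more dispersion, e.g. from developing preferential pathways.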
Micro- and Nano- Porous Adsorptive Materials for Removal of Contaminants from Water at Point-of-Use
NASA Astrophysics Data System (ADS)
Yakub, Ismaiel
Water is food, a basic human need and a fundamental human right, yet hundreds of millions of people around the world do not have access to clean drinking water. As a result, about 5000 people die each day from preventable waterborne diseases. This dissertation presents the results of experimental and theoretical studies on three different types of porous materials that were developed for the removal of contaminants from water at the point of use (household level). First, three compositionally distinct porous ceramic water filters (CWFs) were made from a mixture of redart clay and sieved woodchips and processed into a frustum shape. The filters were tested for their flow characteristics and bacteria filtration efficiencies. Since the CWFs are made from brittle materials and may fail during processing, transportation and usage, the mechanical and physical properties of the porous clays were characterized and used in modeling designed to provide new insights for the design of filter geometries. The mechanical/physical properties that were characterized include compressive strength, flexural strength, fracture toughness and resistance-curve behavior, keeping in mind the anisotropic nature of the filter structure. The measured flow characteristics and mechanical/physical properties were then related to the underlying porosity and characteristic pore size. In an effort to quantify the adhesive interactions associated with filtration phenomena, atomic force microscopy (AFM) was used to measure the adhesion between bi-material pairs that are relevant to point-of-use ceramic water filters. The force microscopy measurements of pull-off force and adhesion energy were used to rank the adhesive interactions. Similarly, the adsorption of fluoride to hydroxyapatite-doped redart clay was studied using composites of redart clay and hydroxyapatite (C-HA).
The removal of fluoride from water was explored by carrying out adsorption experiments on C-HA adsorbents with different ratios of clay to hydroxyapatite (and sintered at different temperatures). The overall adsorption was controlled using water with varying fluoride concentrations and adsorbent-adsorbate contact times. Prototype frustum-shaped C-HA filters were then fabricated and shown to remove both fluoride and E. coli bacteria from water. Finally, "buckyweb", a foam comprising carbon nanotubes and graphene, was made via thermal ablation of graphite and tested for its defluoridation capacity. Defluoridation was studied in terms of fluoride concentration, contact time and pH. The structure and adsorption characteristics of buckyweb foams were elucidated via energy dispersive X-ray spectroscopy, transmission electron microscopy and scanning transmission electron microscopy. The implications of the results were then explored for potential applications in water filtration.
Mullett, Mark; Fornarelli, Roberta; Ralph, David
2014-01-01
Two nanofiltration membranes, a Dow NF 270 polyamide thin film and a TriSep TS 80 polyamide thin film, were investigated for their retention of ionic species when filtering mine-influenced water streams at a range of acidic pH values. The functional iso-electric points of the membranes, characterized by changes in retention over a small pH range, were examined by filtering solutions of sodium sulphate. Both membranes showed changes in retention at pH 3, suggesting a zero net charge on the membranes at this pH. Copper mine drainage and synthetic solutions of mine-influenced water were filtered using the same membranes. These solutions were characterized by pH values between 2 and 5, thus crossing the iso-electric point of both membranes. Retention of cations was maximized when the feed solution pH was less than the iso-electric point of the membrane. Under these conditions, the membrane has a net positive charge, reducing the transmission rate of cations. From the recoveries of a range of cations, the suitability of nanofiltration was discussed relative to compliance with mine water discharge criteria and the recovery of valuable commodity metals. The nanofiltration process was demonstrated to offer advantages in metal recovery from mine waste streams, while also enabling discharge criteria for filtrate disposal to be met. PMID:24957170
A trust region approach with multivariate Padé model for optimal circuit design
NASA Astrophysics Data System (ADS)
Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.
2017-11-01
Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using on the order of n data points, where n is the number of parameters. The model is updated over a sequence of trust regions. This model avoids the slow convergence of linear models, which use n+1 interpolation points, while retaining features of quadratic models, which need (n+1)(n+2)/2 interpolation data points. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. The minimax solution provides a suitable initial point from which to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.
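The trust-region mechanics described above (accept or reject a model step, then grow or shrink the region) can be sketched as follows. This is a generic illustration, not the article's Padé-model solve: the crude random-sampling step solver, the acceptance threshold `eta`, and the growth/shrink factors are all illustrative stand-ins.

```python
import numpy as np

def trust_region_step(f, model, x, radius, eta=0.1):
    """One trust-region iteration: minimize the surrogate `model(x, s)`
    (which predicts f(x + s)) over the ball of the given radius, then
    accept/reject the step and update the radius from the agreement ratio."""
    rng = np.random.default_rng(0)            # fixed seed: deterministic sketch
    cands = rng.normal(size=(64, len(x)))     # crude sampled model minimizer
    cands *= (radius * rng.uniform(0, 1, size=(64, 1))
              / np.linalg.norm(cands, axis=1, keepdims=True))
    s = cands[np.argmin([model(x, c) for c in cands])]
    predicted = f(x) - model(x, s)            # reduction the model promises
    actual = f(x) - f(x + s)                  # reduction actually achieved
    rho = actual / predicted if predicted > 0 else -1.0
    if rho >= eta:                            # good agreement: accept, expand
        return x + s, min(2.0 * radius, 10.0)
    return x, 0.5 * radius                    # poor agreement: reject, shrink
```

With a perfect surrogate every step is accepted and the iterate descends monotonically; a poor surrogate shrinks the region until the model is locally trustworthy.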
Robust curb detection with fusion of 3D-Lidar and camera data.
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-05-21
Curb detection is an essential component of Autonomous Land Vehicles (ALV), and is especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method that exploits 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, using multi-scale normal patterns based on the curb's geometric properties, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov chain that models the consistency of curb points by exploiting the curb's continuity, so that the optimal curb path linking the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter outliers, parameterize the curbs and assign confidence scores to the detected curbs. Extensive evaluations clearly show that our proposed method detects curbs with strong robustness at real-time speed for both static and dynamic scenes.
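The dynamic-programming step over row-wise curb candidates can be sketched as follows; the per-row score map, the linear continuity penalty, and the helper name are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def optimal_curb_path(score, smooth=1.0):
    """Dynamic programming over a per-row curb-likelihood map `score`
    (rows x cols): pick one column per row maximizing the total score
    minus a continuity penalty on column jumps between adjacent rows."""
    rows, cols = score.shape
    dp = score[0].copy()
    back = np.zeros((rows, cols), dtype=int)
    col_idx = np.arange(cols)
    for r in range(1, rows):
        # trans[j, k] = value of arriving at column j from column k
        trans = dp[None, :] - smooth * np.abs(col_idx[:, None] - col_idx[None, :])
        back[r] = np.argmax(trans, axis=1)
        dp = score[r] + trans[col_idx, back[r]]
    path = np.zeros(rows, dtype=int)
    path[-1] = int(np.argmax(dp))
    for r in range(rows - 1, 0, -1):          # backtrack the optimal path
        path[r - 1] = back[r, path[r]]
    return path
```

The penalty encodes the curb's continuity prior: an isolated high-scoring outlier is skipped when following it would cost more in jumps than it gains in score.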
Space moving target detection and tracking method in complex background
NASA Astrophysics Data System (ADS)
Lv, Ping-Yue; Sun, Sheng-Li; Lin, Chang-Qing; Liu, Gao-Rui
2018-06-01
The background seen by space-borne detectors in a real space-based environment is extremely complex and the signal-to-clutter ratio is very low (SCR ≈ 1), which increases the difficulty of detecting space moving targets. In order to solve this problem, an algorithm combining background suppression based on a two-dimensional least mean square filter (TDLMS) with target enhancement based on neighborhood gray-scale difference (GSD) is proposed in this paper. The latter can filter out most of the residual background clutter left by the former, such as cloud edges. Through this procedure, both global and local SCR obtain substantial improvement, indicating that the target has been greatly enhanced. After removing the detector's inherent clutter regions through connected-domain processing, the image contains only the target point and isolated noise, and the isolated noise can be filtered out effectively through multi-frame association. The proposed algorithm has been compared with state-of-the-art algorithms for moving target detection and tracking tasks. The experimental results show that its performance is the best in terms of SCR gain, background suppression factor (BSF) and detection results.
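The TDLMS background-suppression idea can be illustrated with a minimal sketch; a normalized LMS update, the window size, and the step size are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def tdlms_background(img, k=2, mu=0.05):
    """Two-dimensional (normalized) LMS filter: a (2k+1)x(2k+1) weight
    window, center excluded, is adapted pixel by pixel to predict each
    pixel from its neighborhood. The prediction tracks the smooth
    background, so the residual img - background enhances small targets."""
    h, w = img.shape
    win = 2 * k + 1
    weights = np.ones((win, win)) / (win * win - 1)
    weights[k, k] = 0.0                       # never use the pixel itself
    bg = np.zeros_like(img, dtype=float)
    for i in range(k, h - k):
        for j in range(k, w - k):
            patch = img[i - k:i + k + 1, j - k:j + k + 1].astype(float)
            pred = float(np.sum(weights * patch))
            err = img[i, j] - pred            # prediction error drives adaptation
            weights += mu * err * patch / (np.sum(patch * patch) + 1e-8)
            weights[k, k] = 0.0
            bg[i, j] = pred
    return img - bg                           # background-suppressed residual
```

A point target cannot be predicted from its neighbors, so it survives in the residual, while slowly varying background is cancelled.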
Microencapsulation and Electrostatic Processing Device
NASA Technical Reports Server (NTRS)
Morrison, Dennis R. (Inventor); Mosier, Benjamin (Inventor); Cassanto, John M. (Inventor)
2001-01-01
A microencapsulation and electrostatic processing (MEP) device is provided for forming microcapsules. In one embodiment, the device comprises a chamber having a filter which separates a first region in the chamber from a second region in the chamber. An aqueous solution is introduced into the first region through an inlet port, and a hydrocarbon/ polymer solution is introduced into the second region through another inlet port. The filter acts to stabilize the interface and suppress mixing between the two immiscible solutions as they are being introduced into their respective regions. After the solutions have been introduced and have become quiescent, the interface is gently separated from the filter. At this point, spontaneous formation of microcapsules at the interface may begin to occur, or some fluid motion may be provided to induce microcapsule formation. In any case, the fluid shear force at the interface is limited to less than 100 dynes/sq cm. This low-shear approach to microcapsule formation yields microcapsules with good sphericity and desirable size distribution. The MEP device is also capable of downstream processing of microcapsules, including rinsing, re-suspension in tertiary fluids, electrostatic deposition of ancillary coatings, and free-fluid electrophoretic separation of charged microcapsules.
Ferrer-Mileo, V; Guede-Fernandez, F; Fernandez-Chimeno, M; Ramos-Castro, J; Garcia-Gonzalez, M A
2015-08-01
This work compares several fiducial points for detecting the arrival of a new pulse in a photoplethysmographic signal acquired using the built-in camera of smartphones or a photoplethysmograph. An optimization of the signal preprocessing stage has also been carried out. Finally, we characterize the error produced when the best cutoff frequencies and fiducial point are used for smartphones and the photoplethysmograph, and assess whether the smartphone error can reasonably be explained by variations in pulse transit time. The results reveal that the peak of the first derivative and the minimum of the second derivative of the pulse wave have the lowest error. Moreover, for these points, high-pass filtering the signal with a cutoff between 0.1 and 0.8 Hz and low-pass filtering at around 2.7 Hz or 3.5 Hz are the best choices of cutoff frequency. Finally, the error in smartphones is slightly higher than in a photoplethysmograph.
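The two best-performing fiducial points can be located with a minimal sketch; it assumes a single band-limited PPG beat has already been extracted (the study's 0.1-0.8 Hz high-pass and 2.7-3.5 Hz low-pass preprocessing is not reproduced here), and the helper name is hypothetical:

```python
import numpy as np

def fiducial_points(pulse, fs):
    """Return sample indices of the two lowest-error fiducial points
    reported in the study: the maximum of the first derivative (steepest
    upslope) and the minimum of the second derivative of the pulse wave."""
    d1 = np.gradient(pulse) * fs     # first derivative of the beat (units/s)
    d2 = np.gradient(d1) * fs        # second derivative
    return int(np.argmax(d1)), int(np.argmin(d2))
```

On a smooth bell-shaped test beat, the first point falls on the rising edge and the second near the apex, consistent with their use as pulse-arrival markers.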
Micromechanical Signal Processors
NASA Astrophysics Data System (ADS)
Nguyen, Clark Tu-Cuong
Completely monolithic high-Q micromechanical signal processors constructed of polycrystalline silicon and integrated with CMOS electronics are described. The signal processors implemented include an oscillator, a bandpass filter, and a mixer + filter, all of which are components commonly required for up- and down-conversion in communication transmitters and receivers, and all of which take full advantage of the high Q of micromechanical resonators. Each signal processor is designed, fabricated, then studied with particular attention to the performance consequences associated with miniaturization of the high-Q element. The fabrication technology which realizes these components merges planar integrated-circuit CMOS technologies with those of polysilicon surface micromachining. The technologies are merged in a modular fashion, where the CMOS is processed in the first module, the microstructures in a following separate module, and at no point in the process sequence are steps from each module intermixed. Although the advantages of such modularity include flexibility in accommodating new module technologies, the developed process constrained the CMOS metallization to a high-temperature refractory metal (tungsten metallization with TiSi2 contact barriers) and constrained the micromachining process to long-term temperatures below 835 °C. Rapid-thermal annealing (RTA) was used to relieve residual stress in the mechanical structures. To reduce the complexity involved with developing this merged process, capacitively transduced resonators are utilized. High-Q single-resonator and spring-coupled micromechanical resonator filters are also investigated, with particular attention to noise performance, bandwidth control, and termination design. The noise in micromechanical filters is found to be fairly high due to poor electromechanical coupling on the micro-scale with present-day technologies.
Solutions to this high series resistance problem are suggested, including smaller electrode-to-resonator gaps to increase the coupling capacitance. Active Q-control techniques are demonstrated which control the bandwidth of micromechanical filters and simulate filter terminations with little passband distortion. Noise analysis shows that these active techniques are relatively quiet when compared with other resistive techniques. Modulation techniques are investigated whereby a single resonator or a filter constructed from several such resonators can provide both a mixing and a filtering function, or a filtering and amplitude modulation function. These techniques center around the placement of a carrier signal on the micromechanical resonator. Finally, micro oven stabilization is investigated in an attempt to null the temperature coefficient of a polysilicon micromechanical resonator. Here, surface micromachining procedures are utilized to fabricate a polysilicon resonator on a microplatform--two levels of suspension--equipped with heater and temperature sensing resistors, which are then imbedded in a feedback loop to control the platform (and resonator) temperature. (Abstract shortened by UMI.).
Method of making a continuous ceramic fiber composite hot gas filter
Hill, Charles A.; Wagner, Richard A.; Komoroski, Ronald G.; Gunter, Greg A.; Barringer, Eric A.; Goettler, Richard W.
1999-01-01
A ceramic fiber composite structure, particularly suitable for use as a hot-gas-cleanup ceramic fiber composite filter, and a method of making the same from ceramic composite material are disclosed; the structure provides increased strength and toughness in high-temperature environments. The ceramic fiber composite structure or filter is made by a process in which a continuous ceramic fiber is intimately surrounded by discontinuous chopped ceramic fibers during manufacture to produce a ceramic fiber composite preform, which is then bonded using various ceramic binders. The ceramic fiber composite preform is then fired to create a bond phase at the fiber contact points. Parameters such as fiber tension, spacing, and the relative proportions of the continuous ceramic fiber and chopped ceramic fibers can be varied as the continuous ceramic fiber and chopped ceramic fiber are simultaneously formed on the porous vacuum mandrel to obtain a desired distribution of the continuous ceramic fiber and the chopped ceramic fiber in the ceramic fiber composite structure or filter.
Giardia and Drinking Water from Private Wells
... boiling water is using a point-of-use filter. Not all home water filters remove Giardia. Filters that are designed to remove the parasite should ... learn more, visit CDC’s A Guide to Water Filters page. As you consider ways to disinfect your ...
A design of real time image capturing and processing system using Texas Instrument's processor
NASA Astrophysics Data System (ADS)
Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng
2007-09-01
In this work, we developed and implemented an image capturing and processing system equipped with the capability of capturing images from an input video in real time. The input video can come from a PC, a video camcorder or a DVD player. We developed two modes of operation for the system. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and is displayed on the PC. In the second mode, the currently captured image from the video camcorder (or DVD player) is processed on the board but displayed on an LCD monitor. The major difference between our system and existing conventional systems is that the image-processing functions are performed on the board instead of the PC (so that the functions can be used for further developments on the board). The user can control the operations of the board through the Graphical User Interface (GUI) provided on the PC. In order to have smooth image data transfer between the PC and the board, we employed Real Time Data Transfer (RTDX) technology to create a link between them. For image processing, we developed three main groups of functions: (1) point processing; (2) filtering; and (3) 'others'. Point processing includes rotation, negation and mirroring. The filtering category provides median, adaptive, smoothing and sharpening filters in the time domain. The 'others' category provides auto-contrast adjustment, edge detection, segmentation and sepia color; these functions either add an effect to the image or enhance it. We developed and implemented our system using the C/C# programming languages on the TMS320DM642 (DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. The demonstration showed that our system is adequate for real-time image capturing.
Our system can be applied to applications such as medical imaging, video surveillance, etc.
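The point-processing and filtering function groups can be illustrated with a minimal sketch; the original runs in C/C# on the DM642, so this Python/NumPy version with hypothetical function names is illustrative only:

```python
import numpy as np

def negate(img):
    """Point processing: negative of an 8-bit image (per-pixel operation)."""
    return 255 - img

def median_filter(img, k=1):
    """Filtering: median over a (2k+1)x(2k+1) window; borders left unchanged."""
    h, w = img.shape
    out = img.copy()
    for i in range(k, h - k):
        for j in range(k, w - k):
            out[i, j] = np.median(img[i - k:i + k + 1, j - k:j + k + 1])
    return out
```

Point operations touch each pixel independently, while filters consult a neighborhood; the median filter in particular removes isolated impulse ("salt") noise without blurring edges the way a mean filter would.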
NASA Astrophysics Data System (ADS)
Patrón, Verónica A.; Álvarez Borrego, Josué; Coronel Beltrán, Ángel
2015-09-01
Eye tracking has many useful applications that range from biometrics to face recognition and human-computer interaction. The analysis of the characteristics of the eyes has become one of the methods for locating the eyes and tracking the point of gaze. Characteristics such as the contrast between the iris and the sclera, the shape, and the distribution of colors and dark/light zones in the area are the starting point for these analyses. In this work, the focus is on the contrast between the iris and the sclera, performing a correlation in the frequency domain. The images are acquired with an ordinary camera, with which images of thirty-one volunteers were taken. The reference image shows the subject looking at a point directly ahead, at a 0° angle. Sequences of images are then taken with the subject looking at different angles. These images are processed in MATLAB, obtaining the maximum correlation peak for each image using two different filters. Each filter was analyzed, and the one giving the best performance in terms of the utility of the data was selected; the results are displayed in graphs that show the decay of the correlation peak as the eye moves progressively through different angles. These data will be used to obtain a mathematical model or function that establishes a relationship between the angle of vision (AOV) and the maximum correlation peak (MCP). This model will be tested using input images from other subjects not contained in the initial database, making it possible to predict the angle of vision from the maximum correlation peak data.
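The frequency-domain correlation step can be sketched as follows; a classical matched filter is shown for illustration (the study compared two filter types, not specified here), and normalizing by the autocorrelation peak is an assumption of this sketch:

```python
import numpy as np

def correlation_peak(reference, image):
    """Frequency-domain cross-correlation: multiply the reference spectrum
    by the conjugate test spectrum, invert, and take the maximum peak,
    normalized so an identical image scores 1.0."""
    F = np.fft.fft2(reference)
    G = np.fft.fft2(image)
    corr = np.fft.ifft2(F * np.conj(G))     # classical matched filter
    peak = np.abs(corr).max()               # maximum correlation peak (MCP)
    auto = np.abs(np.fft.ifft2(F * np.conj(F))).max()
    return peak / auto
```

As the gaze angle increases, the eye region resembles the 0° reference less and the normalized peak decays, which is the quantity the AOV-versus-MCP model is fitted to.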
Speckle Filtering of GF-3 Polarimetric SAR Data with Joint Restriction Principle.
Xie, Jinwei; Li, Zhenfang; Zhou, Chaowei; Fang, Yuyuan; Zhang, Qingjun
2018-05-12
Polarimetric SAR (PolSAR) scattering characteristics of imagery are always obtained from second-order moment estimation of multi-polarization data, that is, the estimation of covariance or coherency matrices. Due to the extra paths that signals reflected from separate scatterers within a resolution cell have to travel, speckle noise always exists in SAR images and has a severe impact on the scattering performance, especially in single-look complex images. In order to achieve high accuracy in estimating covariance or coherency matrices, three aspects are taken into consideration: (1) the edges and texture of the scene should remain distinct after speckle filtering; (2) the pixels averaged should share the statistical characteristics of the object pixel; and (3) the polarimetric scattering signature should be preserved, in addition to speckle reduction. In this paper, a joint restriction principle is proposed to meet these requirements. Three different restriction principles are introduced into the speckle-filtering processing. First, a new template, which is more suitable for point or line targets, is designed to ensure morphological consistency. Then, the extended sigma filter is used to restrict the pixels in the aforementioned template to an identical statistical characteristic. Finally, a polarimetric similarity factor is applied to the same pixels to guarantee similar polarimetric features among the candidate pixels. This processing procedure is named speckle filtering with a joint restriction principle, and the approach is applied to GF-3 polarimetric SAR data acquired over San Francisco, CA, USA. Its effectiveness in keeping image sharpness and preserving the scattering mechanism, as well as in reducing speckle, is validated by comparison with boxcar filters and the refined Lee filter.
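The sigma-filter restriction can be illustrated with a simplified Lee-style sketch on a scalar amplitude image; the window size, the multiplicative sigma range, and the minimum-pixel threshold are illustrative assumptions, and the actual method applies joint restrictions to full polarimetric covariance data:

```python
import numpy as np

def sigma_filter(img, k=2, sigma_v=0.25, nmin=4):
    """Lee-style sigma filter sketch for a single-look amplitude image:
    average only the window pixels lying within a two-sigma multiplicative
    range of the center value; keep the center value if too few qualify."""
    h, w = img.shape
    out = img.astype(float).copy()
    lo, hi = 1.0 - 2.0 * sigma_v, 1.0 + 2.0 * sigma_v
    for i in range(k, h - k):
        for j in range(k, w - k):
            c = img[i, j]
            patch = img[i - k:i + k + 1, j - k:j + k + 1].astype(float)
            sel = patch[(patch >= lo * c) & (patch <= hi * c)]
            if sel.size >= nmin:              # restrict to statistically similar pixels
                out[i, j] = sel.mean()
    return out
```

Because pixels from a radiometrically different region fall outside the sigma range, speckle is averaged down within homogeneous areas while sharp edges pass through unchanged.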
Distribution of trace elements in a modified and grain refined aluminium-silicon hypoeutectic alloy.
Faraji, M; Katgerman, L
2010-08-01
The influence of modifier and grain refiner on the nucleation process of a commercial hypoeutectic Al-Si foundry alloy (A356) was investigated using optical microscopy, scanning electron microscopy (SEM) and electron probe microanalysis (EPMA). Filtering was used to improve the casting quality; however, it compromised the modification of silicon. The effect of filtering on strontium loss was also studied using the afore-mentioned techniques. EPMA was used to trace the modifying and grain-refining agents inside the matrix and eutectic Si, to help in understanding the mechanisms of nucleation and modification in this alloy. Using EPMA, the negative interaction of Sr and Al3TiB was closely examined. In the modified structure, it was found that the maximum of the Sr concentration was in line with the silicon peak; however, in the case of just 0.1 wt% added Ti, the peak of the Ti concentration was not in line with the aluminium peak (but it was close to the Si peak). Furthermore, EPMA results showed that using a filter during the casting process lowered the strontium content, although it produced a cleaner melt. (c) 2010 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nave, S.E.
Recent advances in fiber optics, diode lasers, CCD detectors, dielectric and holographic optical filters, grating spectrometers, and chemometric data analysis have greatly simplified Raman spectroscopy. In order to make a rugged fiber optic Raman probe for solids/slurries like those at Savannah River, we have designed a probe that eliminates as many optical elements and surfaces as possible. The diffuse reflectance probe tip is modified for Raman scattering by installing thin dielectric in-line filters. The effects of each filter are shown for the NaNO3 Raman spectrum. By using diode laser excitation at 780 nm, fluorescence is greatly reduced, and excellent spectra may be obtained from organic solids. At SRS, fiber optic Raman probes are being developed for in situ chemical mapping of radioactive waste storage tanks. Radiation darkening of silica fiber optics is negligible beyond 700 nm. Corrosion resistance is being evaluated. Analysis of process gas (off-gas from SRS processes) is investigated in some detail: hydrogen in nitrogen with NO2 interference. Other applications and the advantages of the method are pointed out briefly.
Mandal, A K; Paramkusam, Bala Ramudu; Sinha, O P
2018-04-01
Though the majority of research on fly ash has proved its worth as a construction material, the utility of bottom ash remains questionable because it is generated during the pulverized combustion process. The bottom ash produced during the fluidized bed combustion (FBC) process is attracting more attention due to the novelty of this coal combustion technology. However, to establish its suitability as a construction material, it is necessary to characterize it thoroughly from both geotechnical and mineralogical points of view. To fulfil these objectives, the present study mainly aims at characterizing FBC bottom ash and comparing it with pulverized coal combustion (PCC) bottom ash collected from the same origin of coal. The suitability of FBC bottom ash as a dike filter material, in contrast to PCC bottom ash, for replacing traditional filter material such as sand was also studied. The suitability criteria for utilization of both bottom ash and river sand as filter materials on pond ash as a base material were evaluated, and both river sand and FBC bottom ash were found to be satisfactory. The study shows that FBC bottom ash is a better geo-material than PCC bottom ash, and it can be highly recommended as an alternative filter material for constructing ash dikes in place of conventional sand.
Carmena, Jose M.
2016-01-01
Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. 
Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter initialization. Finally, the architecture extended control to tasks beyond those used for CLDA training. These results have significant implications towards the development of clinically-viable neuroprosthetics. PMID:27035820
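The spike-event-based processing described above can be illustrated with a minimal one-dimensional point-process filter sketch, using a standard Gaussian-approximation update with a log-linear Poisson rate model and random-walk state dynamics; all parameter values are hypothetical, and this is far simpler than the OFC-based multi-neuron architecture of the study:

```python
import numpy as np

def point_process_decode(spikes, dt, mu, beta, x0, q=1e-5):
    """Recursively estimate a 1-D state x_t from binary spike events,
    with conditional intensity lambda(x) = exp(mu + beta * x) and a
    random-walk state model; mean x and variance p update at every bin."""
    x, p = x0, 1.0
    xs = []
    for n in spikes:                     # n is 0/1: one observation per dt bin
        p = p + q                        # predict under random-walk dynamics
        lam = np.exp(mu + beta * x) * dt # expected spike count this bin
        # Gaussian-approximation posterior update from the point-process
        # observation: every bin (spike or not) carries information
        p = 1.0 / (1.0 / p + beta ** 2 * lam)
        x = x + p * beta * (n - lam)
        xs.append(x)
    return np.array(xs)
```

Because the state and its uncertainty are revised at every bin, adaptation happens on the spike-event time-scale rather than waiting for a batch of trials.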
Geng, Zihan; Xie, Yiwei; Zhuang, Leimeng; Burla, Maurizio; Hoekman, Marcel; Roeloffzen, Chris G H; Lowery, Arthur J
2017-10-30
We report a photonic integrated circuit implementation of an optical clock multiplier, or equivalently an optical frequency comb filter. The circuit comprises a novel topology of a ring-resonator-assisted asymmetrical Mach-Zehnder interferometer in a Sagnac loop, providing a reconfigurable comb filter with sub-GHz selectivity and low complexity. A proof-of-concept device is fabricated in a high-index-contrast stoichiometric silicon nitride (Si3N4/SiO2) waveguide, featuring low loss, small size, and large bandwidth. In the experiment, we show a very narrow passband for filters of this kind, i.e. a -3-dB bandwidth of 0.6 GHz and a -20-dB passband of 1.2 GHz at a frequency interval of 12.5 GHz. As an application example, this particular filter shape enables successful demonstrations of five-fold repetition rate multiplication of optical clock signals, i.e. from 2.5 Gpulses/s to 12.5 Gpulses/s and from 10 Gpulses/s to 50 Gpulses/s. This work addresses comb spectrum processing on an integrated platform, pointing towards a device-compact solution for optical clock multipliers (frequency comb filters) which have diverse applications ranging from photonic-based RF spectrum scanners and photonic radars to GHz-granularity WDM switches and LIDARs.
NASA Astrophysics Data System (ADS)
Kim, Jae Wook
2013-05-01
This paper proposes a novel systematic approach for the parallelization of pentadiagonal compact finite-difference schemes and filters based on domain decomposition. The proposed approach allows a pentadiagonal banded matrix system to be split into quasi-disjoint subsystems by using a linear-algebraic transformation technique. As a result the inversion of pentadiagonal matrices can be implemented within each subdomain in an independent manner subject to a conventional halo-exchange process. The proposed matrix transformation leads to new subdomain boundary (SB) compact schemes and filters that require three halo terms to exchange with neighboring subdomains. The internode communication overhead in the present approach is equivalent to that of standard explicit schemes and filters based on seven-point discretization stencils. The new SB compact schemes and filters demand additional arithmetic operations compared to the original serial ones. However, it is shown that the additional cost becomes sufficiently low by choosing optimal sizes of their discretization stencils. Compared to earlier published results, the proposed SB compact schemes and filters successfully reduce parallelization artifacts arising from subdomain boundaries to a level sufficiently negligible for sophisticated aeroacoustic simulations without degrading parallel efficiency. The overall performance and parallel efficiency of the proposed approach are demonstrated by stringent benchmark tests.
Improving Fermi Orbit Determination and Prediction in an Uncertain Atmospheric Drag Environment
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Newman, Clark P.; Slojkowski, Steven E.; Carpenter, J. Russell
2014-01-01
Orbit determination and prediction of the Fermi Gamma-ray Space Telescope trajectory is strongly impacted by the unpredictability and variability of atmospheric density and the spacecraft's ballistic coefficient. Operationally, Global Positioning System point solutions are processed with an extended Kalman filter for orbit determination, and predictions are generated for conjunction assessment with secondary objects. When these predictions are compared to Joint Space Operations Center radar-based solutions, the close approach distance between the two predictions can greatly differ ahead of the conjunction. This work explores strategies for improving prediction accuracy and helps to explain the prediction disparities. Namely, a tuning analysis is performed to determine atmospheric drag modeling and filter parameters that can improve orbit determination as well as prediction accuracy. A 45% improvement in three-day prediction accuracy is realized by tuning the ballistic coefficient and atmospheric density stochastic models, measurement frequency, and other modeling and filter parameters.
Small convolution kernels for high-fidelity image restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1991-01-01
An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
NASA Astrophysics Data System (ADS)
Rochette, D.; Clain, S.; André, P.; Bussière, W.; Gentils, F.
2007-05-01
Medium voltage (MV) cells have to respect standards (for example IEC ones (IEC TC 17C 2003 IEC 62271-200 High Voltage Switchgear and Controlgear—Part 200 1st edn)) that define security levels against internal arc faults such as an accidental electrical arc occurring in the apparatus. New protection filters based on porous materials are developed to provide better energy absorption properties and a higher protection level for people. To study the filter behaviour during a major electrical accident, a two-dimensional model is proposed. The main point is the use of a dedicated numerical scheme for a non-conservative hyperbolic problem. We present a numerical simulation of the process during the first 0.2 s when the safety valve bursts and we compare the numerical results with tests carried out in a high power test laboratory on real electrical apparatus.
Clusterless Decoding of Position From Multiunit Activity Using A Marked Point Process Filter
Deng, Xinyi; Liu, Daniel F.; Kay, Kenneth; Frank, Loren M.; Eden, Uri T.
2016-01-01
Point process filters have been applied successfully to decode neural signals and track neural dynamics. Traditionally, these methods assume that multiunit spiking activity has already been correctly spike-sorted. As a result, these methods are not appropriate for situations where sorting cannot be performed with high precision such as real-time decoding for brain-computer interfaces. As the unsupervised spike-sorting problem remains unsolved, we took an alternative approach that takes advantage of recent insights about clusterless decoding. Here we present a new point process decoding algorithm that does not require multiunit signals to be sorted into individual units. We use the theory of marked point processes to construct a function that characterizes the relationship between a covariate of interest (in this case, the location of a rat on a track) and features of the spike waveforms. In our example, we use tetrode recordings, and the marks represent a four-dimensional vector of the maximum amplitudes of the spike waveform on each of the four electrodes. In general, the marks may represent any features of the spike waveform. We then use Bayes’ rule to estimate spatial location from hippocampal neural activity. We validate our approach with a simulation study and with experimental data recorded in the hippocampus of a rat moving through a linear environment. Our decoding algorithm accurately reconstructs the rat’s position from unsorted multiunit spiking activity. We then compare the quality of our decoding algorithm to that of a traditional spike-sorting and decoding algorithm. Our analyses show that the proposed decoding algorithm performs equivalently or better than algorithms based on sorted single-unit activity. These results provide a path toward accurate real-time decoding of spiking patterns that could be used to carry out content-specific manipulations of population activity in hippocampus or elsewhere in the brain. PMID:25973549
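The clusterless decoding idea described above can be illustrated with a minimal sketch: discretize position, estimate the joint mark-position intensity from encoding spikes with Gaussian kernels, and apply Bayes' rule for one unsorted spike. The encoding spikes, kernel bandwidths, and a one-dimensional mark are all illustrative assumptions, not the paper's tetrode setup (which uses four-dimensional amplitude marks).

```python
import math

# Hypothetical encoding data: (position_cm, mark_amplitude_uV) per spike.
encoding_spikes = [(10.0, 80.0), (12.0, 85.0), (50.0, 120.0), (52.0, 118.0)]

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def decode_posterior(mark, positions, pos_bw=3.0, mark_bw=5.0):
    """Posterior over discretized positions for one unsorted spike with
    waveform mark `mark`, via a kernel estimate of the joint mark intensity."""
    like = []
    for x in positions:
        # lambda(x, mark): kernel density over encoding spikes (clusterless likelihood)
        lam = sum(gaussian(x, px, pos_bw) * gaussian(mark, pm, mark_bw)
                  for px, pm in encoding_spikes)
        like.append(lam)
    z = sum(like) or 1.0
    return [v / z for v in like]  # uniform prior assumed

positions = [float(p) for p in range(0, 101, 2)]
post = decode_posterior(mark=119.0, positions=positions)
best = positions[post.index(max(post))]
```

A mark of 119 uV resembles the spikes recorded near 50-52 cm, so the posterior peaks there without any spike sorting having been performed.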
Terrain shape estimation from optical flow, using Kalman filtering
NASA Astrophysics Data System (ADS)
Hoff, William A.; Sklair, Cheryl W.
1990-01-01
As one moves through a static environment, the visual world as projected on the retina seems to flow past. This apparent motion, called optical flow, can be an important source of depth perception for autonomous robots. An important application is in planetary exploration -the landing vehicle must find a safe landing site in rugged terrain, and an autonomous rover must be able to navigate safely through this terrain. In this paper, we describe a solution to this problem. Image edge points are tracked between frames of a motion sequence, and the range to the points is calculated from the displacement of the edge points and the known motion of the camera. Kalman filtering is used to incrementally improve the range estimates to those points, and provide an estimate of the uncertainty in each range. Errors in camera motion and image point measurement can also be modelled with Kalman filtering. A surface is then interpolated to these points, providing a complete map from which hazards such as steeply sloping areas can be detected. Using the method of extended Kalman filtering, our approach allows arbitrary camera motion. Preliminary results of an implementation are presented, and show that the resulting range accuracy is on the order of 1-2% of the range.
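The incremental range refinement described above reduces, per tracked point, to a scalar Kalman update: each frame contributes a noisy range measurement, and the filter fuses it with the running estimate while shrinking the uncertainty. The measurement values and variances below are illustrative assumptions.

```python
def kalman_update(r_est, var_est, z, var_z):
    """One scalar Kalman update of a point's range estimate r_est (variance
    var_est) with a new range measurement z (variance var_z)."""
    k = var_est / (var_est + var_z)       # Kalman gain
    r_new = r_est + k * (z - r_est)       # blend estimate toward measurement
    var_new = (1.0 - k) * var_est         # uncertainty always decreases
    return r_new, var_new

# Hypothetical per-frame range measurements (meters) to one edge point.
r, var = 100.0, 25.0   # initial estimate and uncertainty
for z in [98.0, 101.5, 99.2, 100.4]:
    r, var = kalman_update(r, var, z, var_z=4.0)
```

After a few frames the variance falls well below the single-measurement variance, which is how the method achieves range accuracy of a few percent from individually noisy displacements.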
Hydraulic modeling of clay ceramic water filters for point-of-use water treatment.
Schweitzer, Ryan W; Cunningham, Jeffrey A; Mihelcic, James R
2013-01-02
The acceptability of ceramic filters for point-of-use water treatment depends not only on the quality of the filtered water, but also on the quantity of water the filters can produce. This paper presents two mathematical models for the hydraulic performance of ceramic water filters under typical usage. A model is developed for two common filter geometries: paraboloid- and frustum-shaped. Both models are calibrated and evaluated by comparison to experimental data. The hydraulic models are able to predict the following parameters as functions of time: water level in the filter (h), instantaneous volumetric flow rate of filtrate (Q), and cumulative volume of water produced (V). The models' utility is demonstrated by applying them to estimate how the volume of water produced depends on factors such as the filter shape and the frequency of filling. Both models predict that the volume of water produced can be increased by about 45% if users refill the filter three times per day versus only once per day. Also, the models predict that filter geometry affects the volume of water produced: for two filters with equal volume, equal wall thickness, and equal hydraulic conductivity, a filter that is tall and thin will produce as much as 25% more water than one which is shallow and wide. We suggest that the models can be used as tools to help optimize filter performance.
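The paraboloid-filter hydraulics can be sketched with a lumped model: take the filtrate flow as proportional to the water head and integrate the falling water level forward in time. The geometry relation, conductance, and all numeric parameters below are assumptions for illustration, not the paper's calibrated model.

```python
import math

# Assumed paraboloid geometry: r(h)^2 = c*h, so free-surface area A(h) = pi*c*h.
# Filtrate flux taken as proportional to head: Q = k*h (lumped Darcy-type term).
c, k = 0.02, 1.0e-7        # geometry [m] and conductance [m^2/s], assumed
h, dt, t_end = 0.25, 10.0, 6 * 3600.0   # initial level [m], step [s], horizon [s]

produced, t = 0.0, 0.0
while t < t_end and h > 1e-4:
    Q = k * h                      # instantaneous filtrate flow [m^3/s]
    A = math.pi * c * h            # free-surface area [m^2]
    h = max(h - Q / A * dt, 0.0)   # falling water level (explicit Euler)
    produced += Q * dt             # cumulative volume V(t)
    t += dt
```

Running such a model between refills is exactly how one can compare filling schedules (e.g. three refills per day versus one) or shapes, since `produced` accumulates the volume V as a function of time.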
Filtering device. [removing electromagnetic noise from voice communication signals
NASA Technical Reports Server (NTRS)
Edwards, T. R.; Zeanah, H. W. (Inventor)
1976-01-01
An electrical filter for removing noise from a voice communications signal is reported; seven sample values of the signal are obtained continuously, updated, and subjected to filtering. Filtering is accomplished by summing pairs of samples spaced symmetrically about a mid-point sample, multiplying each pair sum by a selected filter constant, and summing the resulting products to provide a filtered version of the original signal.
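The balanced-pair scheme described above is a symmetric (linear-phase) 7-tap FIR filter computed with only four multiplies. A minimal sketch, with illustrative filter constants rather than the patent's values:

```python
def filter_sample(window, constants):
    """Filter the center of a 7-sample window by summing symmetric pairs
    about the mid-point sample and weighting the mid-point and each pair
    by a filter constant. `constants` = [c0, c1, c2, c3] weight the
    mid-point and the pairs at offsets 1, 2, 3 (values assumed)."""
    assert len(window) == 7
    mid = window[3]
    pairs = [window[3 - i] + window[3 + i] for i in (1, 2, 3)]
    return constants[0] * mid + sum(c * p for c, p in zip(constants[1:], pairs))

# Illustrative low-pass constants chosen so that c0 + 2*(c1+c2+c3) = 1
# (unity DC gain); a constant input therefore passes through unchanged.
c = [0.25, 0.2, 0.125, 0.05]
out = filter_sample([1.0] * 7, c)
```

Exploiting the symmetry halves the multiplier count, which is why the pairs are added before multiplication.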
NASA Astrophysics Data System (ADS)
Kalirai, Jason
2009-07-01
This proposal obtains the photometric zero points in 53 of the 62 UVIS/WFC3 filters: the 18 broad-band filters, 8 medium-band filters, 16 narrow-band filters, and 11 of the 20 quad filters {those being used in cycle 17}. The observations will be primarily obtained by observing the hot DA white dwarf standards GD153 and G191-B2B. A redder secondary standard, P330E, will be observed in a subset of the filters to provide color corrections. Repeat observations in 16 of the most widely used cycle 17 filters will be obtained once per month for the first three months, and then once every second month for the duration of cycle 17, alternating and depending on target availability. These observations will enable monitoring of the stability of the photometric system. Photometric transformation equations will be calculated by comparing the photometry of stars in two globular clusters, 47 Tuc and NGC 2419, to previous measurements with other telescopes/instruments.
Real-time optical multiple object recognition and tracking system and method
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Liu, Hua-Kuang (Inventor)
1990-01-01
System for optically recognizing and tracking a plurality of objects within a field of vision. Laser (46) produces a coherent beam (48). Beam splitter (24) splits the beam into object (26) and reference (28) beams. Beam expanders (50) and collimators (52) transform the beams (26, 28) into coherent collimated light beams (26', 28'). A two-dimensional SLM (54), disposed in the object beam (26'), modulates the object beam with optical information as a function of signals from a first camera (16) which develops X and Y signals reflecting the contents of its field of vision. A hololens (38), positioned in the object beam (26') subsequent to the modulator (54), focuses the object beam at a plurality of focal points (42). A planar transparency-forming film (32), disposed with the focal points on an exposable surface, forms a multiple position interference filter (62) upon exposure of the surface and development processing of the film (32). A reflector (53) directing the reference beam (28') onto the film (32), exposes the surface, with images focused by the hololens (38), to form interference patterns on the surface. There is apparatus (16', 64) for sensing and indicating light passage through respective ones of the positions of the filter (62), whereby recognition of objects corresponding to respective ones of the positions of the filter (62) is effected. For tracking, apparatus (64) focuses light passing through the filter (62) onto a matrix of CCD's in a second camera (16') to form a two-dimensional display of the recognized objects.
VizieR Online Data Catalog: PACS photometry of FIR faint stars (Klaas+, 2018)
NASA Astrophysics Data System (ADS)
Klaas, U.; Balog, Z.; Nielbock, M.; Mueller, T. G.; Linz, H.; Kiss, Cs.
2018-01-01
70, 100 and 160um photometry of FIR faint stars from PACS scan map and chop/nod measurements. For scan maps also the photometry of the combined scan and cross-scan maps (at 160um there are usually two scan and cross-scan maps each as complements to the 70 and 100um maps) is given. Note: Not all stars have measured fluxes in all three filters. Scan maps: The main observing mode was the point-source mini-scan-map mode; selected scan map parameters are given in column mparam. An outline of the data processing using the high-pass filter (HPF) method is presented in Balog et al. (2014ExA....37..129B). Processing proceeded from Herschel Science Archive SPG v13.1.0 level 1 products with HIPE version 15 build 165 for 70 and 100um maps and from Herschel Science Archive SPG v14.2.0 level 1 products with HIPE version 15 build 1480 for 160um maps. Fluxes faper were obtained by aperture photometry with aperture radii of 5.6, 6.8 and 10.7 arcsec for the 70, 100 and 160um filter, respectively. Noise per pixel sigpix was determined with the histogram method, described in this paper, for coverage values greater than or equal to 0.5*maximum coverage. The number of map pixels (1.1, 1.4, and 2.1 arcsec pixel size, respectively) inside the photometric aperture is Naper = 81.42, 74.12, and 81.56, respectively. The corresponding correction factors for correlated noise are fcorr = 3.13, 2.76, and 4.12, respectively. The noise for the photometric aperture is calculated as sig_aper=sqrt(Naper)*fcorr*sigpix. Signal-to-noise ratios are determined as S/N=faper/sigaper. Aperture-correction factors to derive the total flux are caper = 1.61, 1.56 and 1.56 for the 70, 100 and 160um filter, respectively. Applied colour-correction factors for a 5000K black-body SED are cc = 1.016, 1.033, and 1.074 for the 70, 100, and 160um filter, respectively. The final stellar flux is derived as fstar=faper*caper/cc. 
Maximum and minimum FWHM of the star PSF are determined by an elliptical fit of the intensity profile. Chop/nod observations: The chop/nod point-source mode is described in this paper. An outline of the data processing is presented in Nielbock et al. (2013ExA....36..631N). Processing proceeded from Herschel Science Archive SPG v11.1.0 level 1 products with HIPE version 13 build 2768. Gyro correction was applied for most of the cases to improve the pointing reconstruction performance. Fluxes faper were obtained by aperture photometry with aperture radii of 5.6, 6.8 and 10.7 arcsec for the 70, 100 and 160um filter, respectively. Noise per pixel sigpix was determined with the histogram method, described in this paper, for coverage values greater than or equal to 0.5*maximum coverage. The number of map pixels (1.1, 1.4, and 2.1 arcsec pixel size, respectively) inside the photometric aperture is Naper = 81.42, 74.12, and 81.56, respectively. The corresponding correction factors for correlated noise are fcorr = 6.33, 4.22, and 7.81, respectively. The noise for the photometric aperture is calculated as sigaper=sqrt(Naper)*fcorr*sigpix. Signal-to-noise ratios are determined as S/N=faper/sigaper. Aperture-correction factors to derive the total flux are caper = 1.61, 1.56 and 1.56 for the 70, 100 and 160um filter, respectively. Applied colour-correction factors for a 5000K black-body SED are cc = 1.016, 1.033, and 1.074 for the 70, 100, and 160um filter, respectively. Maximum and minimum FWHM of the star PSF are determined by an elliptical fit of the intensity profile. (7 data files).
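The photometric recipe in the record above can be written directly as code: sig_aper = sqrt(Naper)*fcorr*sigpix, S/N = faper/sig_aper, and fstar = faper*caper/cc. The constants below are the 70 um scan-map values quoted in the record; the input flux faper and pixel noise sigpix are illustrative assumptions.

```python
import math

def pacs_flux_and_noise(faper, sigpix, naper, fcorr, caper, cc):
    """Aperture noise, signal-to-noise ratio, and final stellar flux
    following the recipe in the catalogue description."""
    sig_aper = math.sqrt(naper) * fcorr * sigpix   # correlated-noise-corrected
    snr = faper / sig_aper
    fstar = faper * caper / cc                     # aperture + colour correction
    return sig_aper, snr, fstar

# 70 um scan-map constants from the record; faper [Jy] and sigpix assumed.
sig, snr, fstar = pacs_flux_and_noise(faper=0.050, sigpix=1.0e-4,
                                      naper=81.42, fcorr=3.13,
                                      caper=1.61, cc=1.016)
```

Swapping in the 100 um or 160 um constants (Naper = 74.12 or 81.56, etc.) reproduces the other bands' photometry in the same way.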
Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation
NASA Technical Reports Server (NTRS)
Woodard , Stanley E.; Nagchaudhuri, Abhijit
1998-01-01
This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrumental pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
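The on-orbit system identification step described above, tuning transversal filter weights to a transfer function with the LMS rule, can be sketched as follows. The plant coefficients, step size, and input signal are illustrative assumptions, not the UARS model.

```python
import random

def lms(x, d, n_taps, mu):
    """Least-mean-square adaptation of transversal filter weights so that
    the filtered input x tracks the desired signal d (system identification,
    as used here to estimate a servo-to-error-sensor transfer function)."""
    w = [0.0] * n_taps
    for n in range(n_taps - 1, len(x)):
        tap = x[n - n_taps + 1:n + 1][::-1]          # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, tap))   # filter output
        e = d[n] - y                                 # error signal
        w = [wi + mu * e * xi for wi, xi in zip(w, tap)]  # stochastic-gradient update
    return w

# Identify a hypothetical 3-tap plant from its own noise-free response.
random.seed(0)
plant = [0.5, -0.3, 0.1]
x = [random.uniform(-1, 1) for _ in range(2000)]
d = [sum(plant[k] * x[n - k] for k in range(3) if n - k >= 0)
     for n in range(len(x))]
w = lms(x, d, n_taps=3, mu=0.1)
```

The Filtered-X variant mentioned in the abstract uses the same update, but first filters the reference input through this identified transfer-function estimate before forming the weight update.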
40 CFR 86.1434 - Equipment preparation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... New Gasoline-Fueled Otto-Cycle Light-Duty Vehicles and New Gasoline-Fueled Otto-Cycle Light-Duty... the device(s) for removing water from the exhaust sample and the sample filter(s). Remove any water from the water trap(s). Clean and replace the filter(s) as necessary. (c) Set the zero and span points...
Code of Federal Regulations, 2011 CFR
2011-07-01
... tissue, filter, non-woven, and paperboard from purchased pulp subcategory. 430.120 Section 430.120... PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Tissue, Filter, Non-Woven, and Paperboard From Purchased Pulp Subcategory § 430.120 Applicability; description of the tissue, filter, non-woven, and...
Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area
NASA Astrophysics Data System (ADS)
Min, Li; Xin, Yang; Liyang, Xiong
2016-06-01
Shoulder line is a significant terrain line in the hilly area of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because point cloud vegetation-removal methods differ between P-N terrains, shoulder line extraction is an imperative preprocessing step. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) ground points are selected using a grid filter in order to remove most noisy points; (ii) based on a DEM interpolated from these ground points, slope is mapped and classified into two classes (P-N terrains) using the Natural Breaks classification method; (iii) the common boundary between the two slope classes is extracted as a shoulder line candidate; (iv) the filter grid size is adjusted and steps i-iii repeated until the shoulder line candidate matches its real location; (v) the shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County, Shaanxi Province, China. A total of 600 million points were acquired over the 0.23 km2 test area using a Riegl VZ400 3D laser scanner in August 2014. Owing to limited computing performance, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for optimizing the filter grid size. The experimental results show that the optimal filter grid size varies across sample areas, and that a power-function relation exists between filter grid size and point density. The optimal grid size was determined from this relation, and the shoulder lines of all 60 blocks were then extracted. Compared with manual interpretation results, the overall accuracy reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.
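Step (i) of the workflow, the grid filter, can be sketched as keeping the lowest point per grid cell as a ground candidate, which discards most vegetation and noise points above the terrain. The cell size and sample points below are illustrative assumptions; the paper tunes the grid size against point density.

```python
def grid_filter(points, cell):
    """Keep the lowest point in each (x, y) grid cell as a ground candidate.
    `points` are (x, y, z) tuples; `cell` is the filter grid size in meters."""
    lowest = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))   # which cell the point falls in
        if key not in lowest or z < lowest[key][2]:
            lowest[key] = (x, y, z)              # retain the lowest z per cell
    return list(lowest.values())

pts = [(0.2, 0.3, 101.0),   # vegetation return, same cell as next point
       (0.4, 0.1, 100.2),   # ground return, lower, so it is kept
       (1.5, 0.2, 103.7)]   # only point in its cell
ground = grid_filter(pts, cell=1.0)
```

A DEM interpolated from these ground candidates then feeds the slope classification in step (ii).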
Real-time implementation of camera positioning algorithm based on FPGA & SOPC
NASA Astrophysics Data System (ADS)
Yang, Mingcao; Qiu, Yuehong
2014-09-01
In recent years, advances in positioning algorithms and FPGAs have made real-time, rapid, and accurate camera positioning feasible. Through an in-depth study of embedded hardware and dual-camera positioning systems, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC system, which enables real-time positioning of marker points in space. The completed work includes: (1) a CMOS sensor, driven by FPGA hardware, extracts the pixels of three target objects; visible-light LEDs serve as the target points of the instrument. (2) Prior to extraction of the feature point coordinates, the image is median-filtered to suppress noise introduced by the physical properties of the platform. (3) Marker point coordinates are extracted by the FPGA hardware circuit: a new iterative threshold selection method segments the image, the binary image is labelled, and the coordinates of the feature points are computed by the center-of-gravity method. (4) Direct linear transformation (DLT) and epipolar constraints are applied to reconstruct the three-dimensional space coordinates from the planar-array CMOS system. The SOPC system-on-a-chip is used, taking advantage of its dual-core computing to run the matching and coordinate operations separately, thus increasing processing speed.
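The center-of-gravity method used for the feature point coordinates is simply the mean row/column index of the segmented marker blob, which yields sub-pixel coordinates. A minimal sketch on a hypothetical binary image:

```python
def centroid(binary):
    """Centre-of-gravity sub-pixel coordinate (row, col) of a segmented
    marker blob: average the indices of all foreground pixels."""
    pts = [(r, c) for r, row in enumerate(binary)
                  for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

# Hypothetical 4x4 binary image after iterative-threshold segmentation.
img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
rc = centroid(img)
```

In hardware this reduces to three accumulators (sum of rows, sum of columns, pixel count) updated as pixels stream out of the segmentation stage, which is why the method suits an FPGA pipeline.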
NASA Astrophysics Data System (ADS)
Gong, K.; Fritsch, D.
2018-05-01
Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized, publicly available multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivates us to explore methods that can generate accurate digital surface models from a large number of high resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a preprocessing step, we filter all the possible image pairs according to the incidence angle and capture date. With the selected image pairs, the relative bias-compensated model is applied for relative orientation. After generation of the epipolar image pairs, dense image matching and triangulation, the 3D point clouds and DSMs are acquired. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model. We apply the median filter to generate the fused point cloud and DSM. By comparing with the reference LiDAR DSM, the accuracy, completeness and robustness are evaluated. The results show that the point cloud reconstructs the surface with small structures, and that the fused DSM generated by our pipeline is accurate and robust.
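The median-filter fusion step at the end of the pipeline can be sketched as a per-cell median over the aligned per-pair DSMs, which is robust to a gross outlier from any single stereo pair. The tiny grids below are illustrative assumptions, with None marking matching holes:

```python
import statistics

def fuse_dsms(dsms):
    """Per-cell median fusion of aligned DSMs. `dsms` is a list of equally
    sized 2D height grids; None marks a cell with no matched height."""
    rows, cols = len(dsms[0]), len(dsms[0][0])
    fused = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [d[r][c] for d in dsms if d[r][c] is not None]
            if vals:
                fused[r][c] = statistics.median(vals)  # robust to outlier pairs
    return fused

# Three 1x2 DSMs; one pair produced a gross outlier (30.0) and one a hole.
dsms = [[[10.0, 12.0]], [[10.2, None]], [[30.0, 12.4]]]
fused = fuse_dsms(dsms)
```

The median discards the 30.0 outlier where three pairs agree, and falls back to the mean of two values where one pair has a hole.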
Robust Curb Detection with Fusion of 3D-Lidar and Camera Data
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-01-01
Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method that exploits 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, using multi-scale normal patterns based on the curb's geometric properties, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov chain that models the consistency of curb points by exploiting the curb's continuity, so that the optimal curb path linking the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and assign confidence scores to the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364
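The Markov-chain linkage step can be sketched as a Viterbi-style dynamic program: each image row contributes candidate curb columns with detection scores, and transitions between rows pay a penalty proportional to the column jump, encoding the curb's continuity. Candidates, scores, and the penalty weight below are illustrative assumptions:

```python
def best_curb_path_score(rows, jump_penalty=1.0):
    """Dynamic-programming linkage of per-row curb candidates into the most
    consistent path. Each row is a list of (column, score) candidates;
    consecutive rows pay jump_penalty * |column difference|.
    Returns the best achievable total path score."""
    prev = list(rows[0])                       # (col, best score ending here)
    for row in rows[1:]:
        cur = []
        for col, score in row:
            best = max(p_score - jump_penalty * abs(col - p_col)
                       for p_col, p_score in prev)
            cur.append((col, best + score))
        prev = cur
    # backtracking of the actual column sequence omitted for brevity
    return max(score for _, score in prev)

rows = [[(5, 2.0), (9, 2.5)],   # row 1: two candidates, the far one scores higher
        [(5, 2.0), (9, 0.5)],
        [(6, 2.0)]]
score = best_curb_path_score(rows)
```

Note that the globally best path prefers the smooth column-5/6 track over the locally higher-scoring candidate at column 9, which is exactly the behavior the continuity model is meant to produce.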
Artificial neural network (ANN)-based prediction of depth filter loading capacity for filter sizing.
Agarwal, Harshit; Rathore, Anurag S; Hadpe, Sandeep Ramesh; Alva, Solomon J
2016-11-01
This article presents an application of artificial neural network (ANN) modelling towards prediction of depth filter loading capacity for clarification of a monoclonal antibody (mAb) product during commercial manufacturing. The effect of operating parameters on filter loading capacity was evaluated based on the analysis of change in the differential pressure (DP) as a function of time. The proposed ANN model uses inlet stream properties (feed turbidity, feed cell count, feed cell viability), flux, and time to predict the corresponding DP. The ANN contained a single hidden layer with ten neurons and a single output layer, and employed a sigmoidal activation function. This network was trained with 174 training points, 37 validation points, and 37 test points. Further, a pressure cut-off of 1.1 bar was used for sizing the filter area required under each operating condition. The modelling results showed that there was excellent agreement between the predicted and experimental data, with a regression coefficient (R²) of 0.98. The developed ANN model was used for performing variable depth filter sizing for different clarification lots. Monte-Carlo simulation was performed to estimate the cost savings from using different filter areas for different clarification lots rather than using the same filter area. A 10% saving in cost of goods was obtained for this operation. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1436-1443, 2016. © 2016 American Institute of Chemical Engineers.
Supervised Outlier Detection in Large-Scale Mvs Point Clouds for 3d City Modeling Applications
NASA Astrophysics Data System (ADS)
Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.
2018-05-01
We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results. Although most outliers can be identified correctly (high recall), many inliers are erroneously removed (low precision), too. This aggravates object 3D reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roof, ground, vegetation, fields, and water). Performance of learned filtering is evaluated on several large SfM point clouds of cities. The results confirm our underlying assumption that discriminatively learning inlier-outlier distributions improves precision over global heuristics by up to ≈12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈10 percentage points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition
NASA Technical Reports Server (NTRS)
Zheng, Jason Xin; Nguyen, Kayla; He, Yutao
2010-01-01
Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.
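The thread-decomposition idea (each decimated output sample is one independent finite convolution, rather than a structural polyphase split of the filter) can be illustrated in a few lines. This is a behavioural sketch only, not the paper's FPGA implementation:

```python
# Each output of a decimate-by-M FIR is one finite convolution over its
# own input window, so outputs can be computed as independent "threads".

def naive_decimate_fir(x, taps, M):
    """Reference: filter at the full rate, then keep every M-th sample."""
    y = []
    for n in range(len(x)):
        y.append(sum(taps[k] * x[n - k] for k in range(len(taps)) if n - k >= 0))
    return y[::M]

def thread_decomposed(x, taps, M):
    """One 'thread' per output sample: compute only the convolutions that
    survive decimation, at the low output rate."""
    out = []
    for j in range(0, len(x), M):  # j indexes one output thread each
        out.append(sum(taps[k] * x[j - k] for k in range(len(taps)) if j - k >= 0))
    return out
```

Both functions produce identical outputs; the thread view simply exposes the concurrency that an FPGA implementation can exploit.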
Uncertainty estimation and multi sensor fusion for kinematic laser tracker measurements
NASA Astrophysics Data System (ADS)
Ulrich, Thomas
2013-08-01
Laser trackers are widely used to measure kinematic tasks such as tracking robot movements. Common methods to evaluate the uncertainty in kinematic measurements include approximations specified by the manufacturers, various analytical adjustment methods, and the Kalman filter. In this paper a new, real-time technique is proposed which estimates the 4D-path (3D position + time) uncertainty of an arbitrary path in space. A hybrid system estimator is applied in conjunction with the kinematic measurement model. This method can be applied to processes that include various types of kinematic behaviour: constant velocity, variable acceleration, or variable turn rates. The new approach is compared with the Kalman filter and a manufacturer's approximations. The comparison was made using data obtained by tracking an industrial robot's tool centre point with a Leica AT901 laser tracker and a Leica LTD500 laser tracker. It shows that the new approach is more appropriate for analysing kinematic processes than the Kalman filter, as it reduces overshoots and decreases the estimated variance. In comparison with the manufacturer's approximations, the new approach accounts for kinematic behaviour with an improved description of the real measurement process and a reduction in estimated variance. This approach is therefore well suited to the analysis of kinematic processes with unknown changes in kinematic behaviour, as well as to fusion among laser trackers.
NASA Astrophysics Data System (ADS)
Tamboli, Prakash Kumar; Duttagupta, Siddhartha P.; Roy, Kallol
2017-06-01
We introduce a sequential importance sampling particle filter (PF)-based multisensor multivariate nonlinear estimator for estimating the in-core neutron flux distribution of a pressurized heavy water reactor core. Many critical applications such as reactor protection and control rely upon neutron flux information, and thus their reliability is of utmost importance. The point kinetic model based on neutron transport conveniently explains the dynamics of a nuclear reactor. The neutron flux in a large, loosely coupled core is sensed by multiple sensors measuring point fluxes at various locations inside the reactor core. The flux values are coupled to each other through the diffusion equation, and this coupling provides redundancy in the information. It is shown that multiple independent measurements of the localized flux can be fused together to enhance the estimation accuracy to a great extent. We also propose a sensor anomaly handling feature in the multisensor PF that maintains the estimation process even when a sensor is faulty or generates anomalous data.
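The multisensor fusion step of a sequential importance sampling PF amounts to weighting each particle by the joint likelihood of all sensor readings. The sketch below uses a generic scalar random-walk state with Gaussian sensor noise as an assumed stand-in for the reactor model; noise levels and dimensions are illustrative:

```python
import math
import random

def particle_filter(observations_per_step, n_particles=400, q=0.1, r=0.2, seed=7):
    """Bootstrap (sequential importance resampling) particle filter for a
    scalar state observed by several independent sensors per step.
    observations_per_step: list of per-step sensor-reading lists."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for obs in observations_per_step:
        # propagate particles with random-walk dynamics
        particles = [p + rng.gauss(0.0, q) for p in particles]
        # weight by the joint Gaussian likelihood of ALL sensors (fusion)
        weights = [math.exp(sum(-(z - p) ** 2 / (2 * r * r) for z in obs))
                   for p in particles]
        s = sum(weights) or 1.0  # guard against total underflow
        weights = [w / s for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # multinomial resampling
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

Fusing three sensors per step narrows the posterior roughly three times faster than a single sensor would, which is the redundancy benefit the abstract describes.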
Spiking Neural Network Decoder for Brain-Machine Interfaces.
Dethier, Julie; Gilja, Vikash; Nuyujukian, Paul; Elassaad, Shauki A; Shenoy, Krishna V; Boahen, Kwabena
2011-01-01
We used a spiking neural network (SNN) to decode neural data recorded from a 96-electrode array in premotor/motor cortex while a rhesus monkey performed a point-to-point reaching arm movement task. We mapped a Kalman-filter neural prosthetic decode algorithm developed to predict the arm's velocity onto the SNN using the Neural Engineering Framework and simulated it using Nengo, a freely available software package. A 20,000-neuron network matched the standard decoder's prediction to within 0.03% (normalized by maximum arm velocity). A 1,600-neuron version of this network was within 0.27% and ran in real time on a 3 GHz PC. These results demonstrate that an SNN can implement a statistical signal processing algorithm widely used as the decoder in high-performance neural prostheses (the Kalman filter) and achieve similar results with just a few thousand neurons. Hardware SNN implementations (neuromorphic chips) may offer power savings, essential for realizing fully implantable cortically controlled prostheses.
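The reference decoder the SNN approximates is a standard Kalman filter. As a sketch of that baseline, here is a minimal 1D constant-velocity Kalman filter that estimates velocity from position observations; the real decoder is 2D and fit to neural data, so this is an assumed simplification, not the paper's model:

```python
def kalman_cv(zs, dt=0.1, q=1e-3, r=1e-2):
    """1D constant-velocity Kalman filter. State x = (position, velocity),
    observations zs are noisy positions (H = [1, 0]). Returns per-step
    (position, velocity) estimates."""
    x = [0.0, 0.0]
    P = [[1.0, 0.0], [0.0, 1.0]]
    out = []
    for z in zs:
        # predict: x <- F x,  P <- F P F^T + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # update with position measurement z
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        y = z - x[0]
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append((x[0], x[1]))
    return out
```

Fed a clean position ramp, the velocity estimate converges to the true slope, which is the quantity the prosthetic decoder drives the cursor with.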
A TV Camera System Which Extracts Feature Points For Non-Contact Eye Movement Detection
NASA Astrophysics Data System (ADS)
Tomono, Akira; Iida, Muneo; Kobayashi, Yukio
1990-04-01
This paper proposes a highly efficient camera system which extracts, irrespective of background, feature points such as the pupil, the corneal reflection image, and dot-marks pasted on a human face in order to detect human eye movement by image processing. Two eye movement detection methods are suggested: one utilizing face orientation as well as pupil position, the other utilizing the pupil and corneal reflection images. A method of extracting these feature points using LEDs as illumination devices and a new TV camera system designed to record eye movement are proposed. Two kinds of infra-red LEDs are used. These LEDs are set up a short distance apart and emit polarized light of different wavelengths. One light source beams from near the optical axis of the lens and the other is some distance from the optical axis. The LEDs are operated in synchronization with the camera. The camera includes 3 CCD image pick-up sensors and a prism system with 2 boundary layers. Incident rays are separated into 2 wavelengths by the first boundary layer of the prism. One set of rays forms an image on CCD-3. The other set is split by the half-mirror layer of the prism and forms one image including the regularly reflected component, by placing a polarizing filter in front of CCD-1, and another image not including that component, by placing no polarizing filter in front of CCD-2. Thus, three images with different reflection characteristics are obtained by the three CCDs. Through experiment, it is shown that two kinds of subtraction operations between the three images output from the CCDs accentuate three kinds of feature points: the pupil and corneal reflection images and the dot-marks. Since the S/N ratio of the subtracted image is extremely high, the thresholding process is simple and allows reducing the intensity of the infra-red illumination. A high-speed image processing apparatus using this camera system is described. Real-time processing of the subtraction, thresholding, and gravity position calculation of the feature points is possible.
RF tomography of metallic objects in free space: preliminary results
NASA Astrophysics Data System (ADS)
Li, Jia; Ewing, Robert L.; Berdanier, Charles; Baker, Christopher
2015-05-01
RF tomography has great potential in defense and homeland security applications. A distributed sensing research facility is under development at the Air Force Research Laboratory. To develop an RF tomographic imaging system for the facility, preliminary experiments have been performed in an indoor range with 12 radar sensors distributed on a circle of 3 m radius. Ultra-wideband pulses are used to illuminate single and multiple metallic targets. The echoes received by the distributed sensors were processed and combined for tomographic reconstruction. The traditional matched filter algorithm and the truncated singular value decomposition (SVD) algorithm are compared in terms of their complexity, accuracy, and suitability for distributed processing. A new algorithm is proposed for shape reconstruction, which jointly estimates the object boundary and the scatter points on the waveform's propagation path. The results show that the new algorithm allows accurate reconstruction of object shape, which is not available through the matched filter and truncated SVD algorithms.
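The matched filter baseline mentioned here reduces, per sensor, to cross-correlating the received signal with the known transmitted pulse and taking the peak lag as the echo delay (hence range). A minimal sketch of that single-sensor step, with an illustrative pulse rather than a real UWB waveform:

```python
def matched_filter_delay(rx, pulse):
    """Estimate echo delay (in samples) as the lag that maximizes the
    cross-correlation of the received signal with the known pulse."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(rx) - len(pulse) + 1):
        val = sum(p * rx[lag + i] for i, p in enumerate(pulse))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag
```

Tomographic reconstruction then combines such delay (or correlation) profiles from all 12 sensors by backprojection onto the imaging grid.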
Oxygen transfer in a full-depth biological aerated filter.
Stenstrom, Michael K; Rosso, Diego; Melcer, Henryk; Appleton, Ron; Occiano, Victor; Langworthy, Alan; Wong, Pete
2008-07-01
The City of San Diego, California, evaluated the performance capabilities of biological aerated filters (BAFs) at the Point Loma Wastewater Treatment Plant. The City conducted a 1-year pilot-plant evaluation of BAF technology supplied by two BAF manufacturers. This paper reports on the first independent oxygen-transfer test of BAFs at full depth using the offgas method. The tests showed process-water oxygen-transfer efficiencies of 1.6 to 5.8%/m (0.5 to 1.8%/ft) and 3.9 to 7.9%/m (1.2 to 2.4%/ft) for the two different pilot plants, at their nominal design conditions. Mass balances using chemical oxygen demand and dissolved organic carbon corroborated the transfer rates. The rates are higher than expected from fine-pore diffusers under similar process conditions and depths, and under clean-water conditions for the same column; they are mostly attributed to extended bubble retention time resulting from interactions with the media and biofilm.
Remote sensing of Gulf Stream using GEOS-3 radar altimeter
NASA Technical Reports Server (NTRS)
Leitao, C. D.; Huang, N. E.; Parra, C. G.
1978-01-01
Radar altimeter measurements from the GEOS-3 satellite to the ocean surface indicated the presence of the expected geostrophic height differences across the Gulf Stream. Dynamic sea surface heights were found by editing and filtering the raw sea surface heights and then referencing these processed data to a 5 minute x 5 minute geoid. Any trend between the processed data and the geoid was removed by subtracting out a linear fit to the residuals in the open ocean. The mean current velocity of 107 ± 29 cm/sec calculated from the dynamic heights for all orbits corresponded with velocities obtained from hydrographic methods. Also, dynamic topographic maps were produced for August, September, and October 1975. Results pointed out limitations in the accuracy of the geoid, height anomaly deterioration due to filtering, and the lack of dense time and space distribution of measurements.
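The trend-removal step described above (subtracting a linear fit to the open-ocean residuals) is ordinary least-squares detrending. A minimal sketch, with made-up sample values:

```python
def detrend_linear(xs, ys):
    """Fit y = a + b*x by least squares and return the residuals
    ys - (a + b*xs): the 'remove any trend' step of the processing."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    return [y - (a + b * x) for x, y in zip(xs, ys)]
```

Applied to the altimeter heights minus the geoid, whatever linear bias remains between the two surfaces is removed and only the dynamic topography signal survives.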
Space trajectory calculation based on G-sensor
NASA Astrophysics Data System (ADS)
Xu, Biya; Zhan, Yinwei; Shao, Yang
2017-08-01
At present, most research in the field of human body posture recognition uses cameras or portable acceleration sensors to collect data, without making full use of the mobile phones around us. In this paper, the G-sensor built into a mobile phone is used to collect data. After processing the data with a moving average filter and acceleration integration, the three-dimensional spatial coordinates of joint points can be obtained accurately.
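The two processing steps named in the abstract, moving-average smoothing and double integration of acceleration, can be sketched as follows (one axis only; window length and trapezoidal integration are assumptions, since the paper does not specify them here):

```python
def moving_average(xs, w=3):
    """Causal moving-average smoother over a window of up to w samples."""
    out = []
    for i in range(len(xs)):
        win = xs[max(0, i - w + 1): i + 1]
        out.append(sum(win) / len(win))
    return out

def integrate_twice(acc, dt):
    """Trapezoidal integration: acceleration -> velocity -> position."""
    vel, pos = [0.0], [0.0]
    for i in range(1, len(acc)):
        vel.append(vel[-1] + 0.5 * (acc[i] + acc[i - 1]) * dt)
        pos.append(pos[-1] + 0.5 * (vel[i] + vel[i - 1]) * dt)
    return pos
```

Running all three phone axes through this pipeline yields the 3D trajectory; in practice drift from sensor bias accumulates quadratically through the double integral, which is why the smoothing step matters.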
Study regarding the spline interpolation accuracy of the experimentally acquired data
NASA Astrophysics Data System (ADS)
Oanta, Emil M.; Danisor, Alin; Tamas, Razvan
2016-12-01
Experimental data processing is an issue that must be solved in almost all domains of science. In engineering we usually have a large amount of data and we try to extract the useful signal which is relevant for the phenomenon under investigation. The criteria used to consider some points more relevant than others may take into consideration various conditions which may be either phenomenon-dependent or general. The paper presents some ideas and tests regarding the identification of the best set of criteria used to filter the initial set of points in order to extract a subset which best fits the approximated function. If the function has regions where it is either constant or slowly varying, fewer discretization points may be used. This leads to a simpler solution for processing the experimental data, while keeping the accuracy within reasonably good limits.
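One concrete instance of such a filtering criterion is to drop samples from flat or slowly varying regions and keep those where the curvature (second difference) is significant. This is an illustrative criterion of the kind the paper discusses, not the paper's own:

```python
def select_points(xs, ys, tol=1e-3):
    """Keep the endpoints plus interior points whose absolute second
    difference exceeds `tol`; samples in flat/slow regions are dropped
    before spline interpolation."""
    keep = [0]
    for i in range(1, len(xs) - 1):
        second_diff = abs(ys[i - 1] - 2 * ys[i] + ys[i + 1])
        if second_diff > tol:
            keep.append(i)
    keep.append(len(xs) - 1)
    return keep
```

A spline fitted through only the kept indices then reproduces the signal with far fewer knots wherever the data are nearly linear.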
Bacterial degradation of styrene in waste gases using a peat filter.
Arnold, M; Reittu, A; von Wright, A; Martikainen, P J; Suihko, M L
1997-12-01
A biofiltration process was developed for styrene-containing off-gases using peat as filter material. The average styrene reduction ratio after 190 days of operation was 70% (max. 98%) and the mean styrene elimination capacity was 12 g m⁻³ h⁻¹ (max. 30 g m⁻³ h⁻¹). Efficient styrene degradation required addition of nutrients to the peat, adjustment of the pH to a neutral level and efficient control of the humidity. Maintenance of the water balance was easier in a down-flow than in an up-flow process, the former consequently resulting in much better filtration efficiency. The optimum operation temperature was around 23 °C, but the styrene removal was still satisfactory at 12 °C. Seven different bacterial isolates belonging to the genera Tsukamurella, Pseudomonas, Sphingomonas, Xanthomonas and an unidentified genus in the gamma group of the Proteobacteria isolated from the microflora of active peat filter material were capable of styrene degradation. The isolates differed in their capacity to decompose styrene to carbon dioxide and assimilate it to biomass. No toxic intermediate degradation products of styrene were detected in the filter outlet gas or in growing cultures of isolated bacteria. The use of these isolates in industrial biofilters is beneficial at low styrene concentrations and is safe from both the environmental and public health points of view.
Comparison of sand-based water filters for point-of-use arsenic removal in China.
Smith, Kate; Li, Zhenyu; Chen, Bohan; Liang, Honggang; Zhang, Xinyi; Xu, Ruifei; Li, Zhilin; Dai, Huanfang; Wei, Caijie; Liu, Shuming
2017-02-01
Contamination of groundwater wells by arsenic is a major problem in China. This study compared arsenic removal efficiency of five sand-based point-of-use filters with the aim of selecting the most effective filter for use in a village in Shanxi province, where the main groundwater source had arsenic concentration >200 μg/L. A biosand filter, two arsenic biosand filters, a SONO-style filter and a version of the biosand filter with nails embedded in the sand were tested. The biosand filter with embedded nails was the most consistent and effective under the study conditions, likely due to increased contact time between water and nails and sustained corrosion. Effluent arsenic was below China's standard of 50 μg/L for more than six months after construction. The removal rate averaged 92% and was never below 86%. In comparison, arsenic removal for the nail-free biosand filter was never higher than 53% and declined with time. The arsenic biosand filter, in which nails sit in a diffuser basin above the sand, performed better but effluent arsenic almost always exceeded the standard. This highlights the positive impact on arsenic removal of embedding nails within the top layer of biosand filter sand and the promise of this low-cost filtration method for rural areas affected by arsenic contamination.
Inferior Vena Cava Filter Limb Fracture with Embolization to the Right Ventricle.
Jackson, Bradley S; Sepula, Mykel; Marx, Jared T; Cannon, Chad M
2017-08-01
Inferior vena cava (IVC) filter and filter limb embolization is a known phenomenon, with a prevalence of up to 25% for certain filter types. Most commonly, the site of embolization is to the heart. Point-of-care ultrasound is an easily accessible imaging modality that should be utilized when considering IVC filter complications. A 28-year-old woman with a history of metastatic sarcoma and IVC filter placement for deep venous thrombosis presented to the Emergency Department (ED) for chest pain. Chest radiography was reviewed and originally thought to have no abnormalities. Chest computed tomography angiography was negative for filling defects or foreign bodies. A possible foreign body in the heart was noted by a radiologist's over-read of the original chest radiograph. An echocardiogram done by Cardiology was negative for foreign bodies or other abnormalities. Next, an emergency physician performed a bedside echocardiogram, with focused attention to the right side of the heart. An echogenic foreign body was visualized in the right ventricle. The patient was subsequently taken to the cardiac catheterization laboratory, where fluoroscopic visualization of a limb wire of an IVC filter within the right ventricle was obtained. That foreign body was subsequently removed successfully, along with removal of the broken IVC filter. WHY SHOULD AN EMERGENCY PHYSICIAN BE AWARE OF THIS?: This case report highlights the utility of point-of-care ultrasound in the work-up of a patient with an embolized IVC filter wire. Chest pain patients frequently receive point-of-care echocardiography in the ED, and these ultrasound findings should be recognized and used to guide further treatment and consultation.
A floating-point digital receiver for MRI.
Hoenninger, John C; Crooks, Lawrence E; Arakawa, Mitsuaki
2002-07-01
A magnetic resonance imaging (MRI) system requires the highest possible signal fidelity and stability for clinical applications. Quadrature analog receivers have problems with channel matching, dc offset, and analog-to-digital linearity. Fixed-point digital receivers (DRs) reduce all of these problems. We have demonstrated that a floating-point DR using large (order 124 to 512) FIR low-pass filters also overcomes these problems, automatically provides long word length, and has low latency between signals. A preloaded table of finite impulse response (FIR) filter coefficients provides fast switching between one of 129 different one-stage and two-stage multirate FIR low-pass filters with bandwidths between 4 kHz and 125 kHz. This design has been implemented on a dual-channel circuit board for a commercial MRI system.
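Each entry in the receiver's preloaded coefficient table is a low-pass FIR design. A common way to generate such taps is the windowed-sinc method; the sketch below uses a Hamming window and illustrative numbers, and is a generic design sketch, not the receiver's actual coefficients:

```python
import math

def lowpass_taps(cutoff_hz, fs_hz, order):
    """Hamming-windowed-sinc low-pass FIR design, normalized to unity DC
    gain: one candidate entry of a preloaded filter coefficient table."""
    m = order - 1
    fc = cutoff_hz / fs_hz          # normalized cutoff (cycles/sample)
    taps = []
    for n in range(order):
        k = n - m / 2
        # ideal low-pass impulse response (sinc), centered at m/2
        h = 2 * fc if k == 0 else math.sin(2 * math.pi * fc * k) / (math.pi * k)
        # Hamming window to control sidelobes
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)
        taps.append(h * w)
    g = sum(taps)                   # normalize so DC gain is exactly 1
    return [t / g for t in taps]
```

The symmetric taps give the linear phase (minimal group-delay distortion) that the abstract's application depends on; switching bandwidths is then just swapping which precomputed list is loaded.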
Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin
2013-11-13
A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results.
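The fingerprinting core described here, matching an on-line RSS vector against stored reference-point fingerprints by normalized cross correlation, can be sketched in a few lines. This uses the basic NCC (which the paper says FNCC matches in accuracy) and made-up RSS values:

```python
import math

def ncc(a, b):
    """Normalized cross correlation between two equal-length RSS vectors."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def locate(online_rss, radio_map):
    """Return the reference point whose fingerprint correlates best with
    the on-line RSS sample (a basic-NCC stand-in for the paper's FNCC)."""
    return max(radio_map, key=lambda rp: ncc(online_rss, radio_map[rp]))
```

The Kalman/map filter of the paper would then post-process the sequence of `locate` outputs, rejecting predictions that cross walls in the indoor map.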
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, Douglas C.; Wang, Huamin; French, Richard
2014-08-14
Hot-vapor filtered bio-oils were produced from two different biomass feedstocks, oak and switchgrass, and the oils were evaluated in hydroprocessing tests for production of liquid hydrocarbon products. Hot-vapor filtering reduced bio-oil yields and increased gas yields. The yields of fuel carbon as bio-oil were reduced by ten percentage points by hot-vapor filtering for both feedstocks. The unfiltered bio-oils were evaluated alongside the filtered bio-oils using a fixed-bed catalytic hydrotreating test. These tests showed good processing results using a two-stage catalytic hydroprocessing strategy. Equal-sized catalyst beds, a sulfided Ru on carbon catalyst bed operated at 220°C and a sulfided CoMo on alumina catalyst bed operated at 400°C, were used with the entire reactor at 100 atm operating pressure. The products from the four tests were similar. The light oil phase product was fully hydrotreated so that nitrogen and sulfur were below the level of detection, while the residual oxygen ranged from 0.3 to 2.0%. The density of the products varied from 0.80 g/ml up to 0.86 g/ml over the period of the test with a correlated change of the hydrogen to carbon atomic ratio from 1.79 down to 1.57, suggesting some loss of catalyst activity through the test. These tests provided the data needed to assess the suite of liquid fuel products from the process and the activity of the catalyst in relationship to the existing catalyst lifetime barrier for the technology.
Correlation of Spatially Filtered Dynamic Speckles in Distance Measurement Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Semenov, Dmitry V.; Nippolainen, Ervin; Kamshilin, Alexei A.
2008-04-15
In this paper the statistical properties of spatially filtered dynamic speckles are considered. This phenomenon has not been sufficiently studied yet, although spatial filtering is an important instrument for speckle velocity measurements. In spatial filtering, speckle velocity information is derived from the modulation frequency of the filtered light power, which is measured by a photodetector. A typical photodetector output is a narrow-band random noise signal which includes non-informative intervals; a reasonably precise frequency measurement therefore requires averaging, and averaging in turn assumes uncorrelated samples. However, in the course of this research we found that correlation is a typical property not only of dynamic speckle patterns but also of spatially filtered speckles. With spatial filtering, the correlation is observed as a response of measurements applied to the same part of the object surface, or when several adjacent photodetectors are used simultaneously. The observed correlations cannot be explained using the properties of unfiltered dynamic speckles alone. As we demonstrate, the subject of this paper is important not only from a purely theoretical point of view but also from that of applied speckle metrology; e.g., using a single spatial filter and an array of photodetectors can greatly improve the accuracy of speckle velocity measurements.
A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data
NASA Astrophysics Data System (ADS)
Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei
2013-08-01
We present a multiresolution hierarchical classification (MHC) algorithm for differentiating ground from non-ground LiDAR point clouds based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with a simultaneous increase of cell resolution and residual threshold from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using thin plate splines (TPS) until no new ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark data, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were used to compare the performance of MHC with those of 17 other published filtering methods. Results indicate that MHC, with an average total error of 4.11% and an average Cohen's kappa coefficient of 86.27%, performs better than all the other filtering methods.
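The surface-based filtering loop at one MHC level can be caricatured in 1D: seed a surface from per-cell minima, label points whose residual from the surface is below a threshold as ground, and refresh the surface from the current ground set. This toy uses a single resolution level and cell-wise means instead of the paper's three-level TPS interpolation, so it is a structural sketch only:

```python
def ground_filter_1d(xs, zs, cell=5.0, threshold=0.3, iterations=3):
    """Toy 1D surface-based ground filter. Returns the set of indices
    classified as ground."""
    # seed surface: minimum elevation per cell
    cells = {}
    for x, z in zip(xs, zs):
        c = int(x // cell)
        if c not in cells or z < cells[c]:
            cells[c] = z

    def surface(x):
        c = int(x // cell)
        return cells.get(c, min(cells.values()))

    ground = set()
    for _ in range(iterations):
        # classify: small residual from the surface -> ground
        ground = {i for i, (x, z) in enumerate(zip(xs, zs))
                  if z - surface(x) < threshold}
        # refresh the surface from the current ground points (cell means)
        agg = {}
        for i in ground:
            agg.setdefault(int(xs[i] // cell), []).append(zs[i])
        cells = {c: sum(v) / len(v) for c, v in agg.items()}
    return ground
```

MHC repeats this inner loop at three resolutions, tightening the cell size and relaxing the residual threshold so that terrain detail is recovered without readmitting vegetation and building points.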
Interactions between motion and form processing in the human visual system.
Mather, George; Pavan, Andrea; Bellacosa Marotti, Rosilari; Campana, Gianluca; Casco, Clara
2013-01-01
The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing in the well-known Gestalt principle of common fate; texture elements which share a common motion property are grouped into a single contour or texture region. However, recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depends on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by "motion-streaks" influence motion processing; motion sensitivity, apparent direction and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus, form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS.
Technical Note: Kinect V2 surface filtering during gantry motion for radiotherapy applications.
Nazir, Souha; Rihana, Sandy; Visvikis, Dimitris; Fayad, Hadi
2018-04-01
In radiotherapy, the Kinect V2 camera has recently received a lot of attention for many clinical applications, including patient positioning, respiratory motion tracking, and collision detection during the radiotherapy delivery phase. However, issues associated with such applications are related to reflections from some materials and surfaces, which generate an offset in depth measurements, especially during gantry motion. This phenomenon appears in particular when the collimator surface is observed by the camera, resulting in erroneous depth measurements, not only in the Kinect surfaces themselves but also as a large peak when extracting a 1D respiratory signal from these data. In this paper, we propose filtering techniques to reduce the noise effect in the Kinect-based 1D respiratory signal, using a trend removal filter, and in the associated 2D surfaces, using a temporal median filter. The filtering process was validated using a phantom to simulate a patient undergoing radiotherapy treatment while providing ground truth. Our results indicate a better correlation between the reference respiratory signal and its corresponding filtered signal (correlation coefficient of 0.76) than with the nonfiltered signal (correlation coefficient of 0.13). Furthermore, surface filtering results show a decrease in the mean square distance error (85%) between the reference and the measured point clouds. This work shows significant noise compensation and surface restitution after surface filtering and therefore a potential use of the Kinect V2 camera for different radiotherapy applications, such as respiratory tracking and collision detection.
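A temporal median filter of the kind proposed for the 2D surfaces replaces each pixel's depth with the median of its recent values, which suppresses transient reflection spikes while preserving steady motion. A minimal sketch on flattened depth frames (window length and data are illustrative):

```python
import statistics

def temporal_median(frames, window=3):
    """Per-pixel temporal median over a sliding window of depth frames.
    `frames` is a list of equal-length flat lists of depth values;
    returns the filtered frame sequence."""
    filtered = []
    for t in range(len(frames)):
        lo = max(0, t - window + 1)
        stack = frames[lo:t + 1]                     # last `window` frames
        filtered.append([statistics.median(col) for col in zip(*stack)])
    return filtered
```

A one-frame reflection spike in any pixel is voted out by its temporal neighbours, which is exactly the gantry-motion artefact the abstract describes removing before the 1D respiratory signal is extracted.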
Virus removal in ceramic depth filters based on diatomaceous earth.
Michen, Benjamin; Meder, Fabian; Rust, Annette; Fritsch, Johannes; Aneziris, Christos; Graule, Thomas
2012-01-17
Ceramic filter candles, based on the natural material diatomaceous earth, are widely used to purify water at the point of use. Although such depth filters are known to improve drinking water quality by removing human-pathogenic protozoa and bacteria, their removal of viruses has rarely been investigated. These filters have relatively large pore diameters compared to the physical dimensions of viruses. However, viruses may be retained by adsorption mechanisms due to intermolecular and surface forces. Here, we use three types of bacteriophages to investigate their removal during filtration and batch experiments conducted at different pH values and ionic strengths. Theoretical models based on DLVO theory are applied in order to verify the experimental results and assess the surface forces involved in the adsorptive process, by calculating the interaction energies between the filter surface and the viruses. For two small, spherically shaped viruses (MS2 and PhiX174), these filters showed no significant removal. In the case of phage PhiX174, where attractive interactions were expected due to electrostatic attraction of oppositely charged surfaces, only little adsorption was observed in the presence of divalent ions. Thus, we postulate the existence of an additional repulsive force between PhiX174 and the filter surface. It is hypothesized that this additional energy barrier originates either from the phage's specific knobs that protrude from the viral capsid, enabling steric interactions, or from hydration forces between the two hydrophilic interfaces of virus and filter. However, a larger, tailed bacteriophage of the family Siphoviridae was removed by 2 to 3 log, which is explained by postulating hydrophobic interactions.
Electronic filters, signal conversion apparatus, hearing aids and methods
NASA Technical Reports Server (NTRS)
Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)
1994-01-01
An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.
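The key property exploited by such logarithmic filters can be illustrated in a few lines: adding stored log-domain parameters to a log-domain signal is equivalent to multiplying gains in the linear domain, so a cascade of multiplies becomes a cascade of adds. This is only a numerical sketch of the principle, not the patented circuit:

```python
import math

def linear_cascade(x, gains):
    """Pass an amplitude through each gain stage in the linear domain."""
    y = x
    for g in gains:
        y *= g
    return y

def log_cascade(log_x, log_gains):
    """The same cascade in the log domain: stage parameters are simply
    added to the signal, replacing multiplications with additions."""
    return log_x + sum(log_gains)
```

Taking the logarithm of the linear result recovers exactly the log-domain sum, which is why the filter stages above can store parameters and combine them with the signal by addition alone.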
Saturated Monoglyceride Polymorphism and Gel Formation of Biodiesel Blends
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chupka, Gina; Fouts, Lisa; McCormick, Robert
Crystallization or gel formation of normal paraffins in diesel fuel under cold weather conditions, leading to fuel filter clogging, is a common problem. Cold weather operability of biodiesel (B100) and its blends with diesel fuel presents additional complexity because of the presence of saturated monoglycerides (SMGs) and other relatively polar species. Currently, the cloud point measurement (a measure of when the first component crystallizes out of solution) is used to define the lowest temperature at which the fuel can be used without causing cold weather issues. While filter plugging issues have declined, intermittent unexpected problems above the cloud point still remain for biodiesel blends. A fundamental understanding of how minor components in biodiesel crystallize, gel, and transform is needed in order to prevent these unexpected issues. We have found that SMGs, a low-level impurity present in B100 from the production process, can crystallize out of solution and undergo a solvent-mediated polymorphic phase transformation to a more stable, less soluble form. This causes them to persist at temperatures above the cloud point once they have come out of solution. Additionally, we have found that SMGs can cause other more soluble, lower-melting-point minor components in the B100 to co-crystallize and come out of solution. Monoolein, another minor component from the production process, is an unsaturated monoglyceride with a much lower melting point and higher solubility than SMGs. It is able to form a co-crystal with the SMGs and is found together with them on plugged filters we have analyzed in our laboratory. An observation of isolated crystals in the lab led us to believe that the SMGs may also be forming a gel-like network with components of the B100 and diesel fuel. During filtration experiments, we have noted that in some cases a solid layer of crystals forms and blocks the filter completely, while in other cases this does not occur.
Because SMGs are polar and can form layered networks once a sufficient amount of crystals has come out of solution, we recently began investigating the ability of SMGs to form a gel network with fuel components, as well as with other minor polar components in the fuel, in order to obtain a fundamental understanding of the mechanism of formation. It is well established that this type of phenomenon occurs in sub-sea pipelines, where a chief crystallizing component begins to crystallize out of solution. Once a sufficient amount of crystals exists, a volume-spanning network of solid crystals can trap liquid crude oil and form a solid-like gel network. We are investigating whether this type of phenomenon can occur with SMGs and both fatty acid methyl esters from the B100 and normal paraffins from diesel fuel. Additionally, SMGs are well known to incorporate water into their layered crystal structure; water is often used to stabilize less stable polymorphic forms of SMGs, so water was another minor component of interest. Also of interest is glycerin, which has been found on clogged filters in our laboratory.
Schulz, Maria; Gerber, Alexander; Groneberg, David A
2016-04-16
Environmental tobacco smoke (ETS) is associated with human morbidity and mortality, particularly chronic obstructive pulmonary disease (COPD) and lung cancer. Although direct DNA damage is a leading pathomechanism in active smokers, passive smoking is enough to induce bronchial asthma, especially in children. Particulate matter (PM) demonstrably plays an important role in this ETS-associated human morbidity, constituting a surrogate parameter for ETS exposure. Using an Automatic Environmental Tobacco Smoke Emitter (AETSE) and an in-house developed, non-standard smoking regime, we imitated the smoking process of human smokers to demonstrate the significance of passive smoking. Mean concentration (C(mean)) and area under the curve (AUC) of particulate matter (PM2.5) emitted by 3R4F reference cigarettes and the popular filter-tipped and non-filter brand cigarettes "Roth-Händle" were measured and compared. The cigarettes were not conditioned prior to smoking. The measurements were tested for Gaussian distribution and significant differences. C(mean) PM2.5 was 3911 µg/m³ for the 3R4F reference cigarette, 3831 µg/m³ for the filter-tipped Roth-Händle, and 2053 µg/m³ for the non-filter Roth-Händle. AUC PM2.5 was 1,647,006 µg/m³·s for the 3R4F reference cigarette, 1,608,000 µg/m³·s for the filter-tipped Roth-Händle, and 858,891 µg/m³·s for the non-filter Roth-Händle. The filter-tipped cigarettes (the 3R4F reference cigarette and the filter-tipped Roth-Händle) emitted significantly more PM2.5 than the non-filter Roth-Händle. Considering the harmful potential of PM, our findings indicate that filter-tipped cigarettes are not a less harmful alternative for passive smokers. Tobacco taxation should be reconsidered and non-smoking legislation enforced.
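The two summary statistics reported above, mean concentration and AUC, can be computed from a sampled PM2.5 time series as follows; the trapezoidal integration is an assumption, since the abstract does not state how the AUC was evaluated:

```python
import numpy as np

def pm_summary(conc, dt):
    """Mean concentration (µg/m³) and area under the curve (µg/m³·s)
    for a PM2.5 series sampled every `dt` seconds, with the AUC taken
    as a trapezoidal-rule integral over the measurement period."""
    c = np.asarray(conc, dtype=float)
    auc = float(((c[:-1] + c[1:]) / 2.0 * dt).sum())
    return float(c.mean()), auc
```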
Visualization of flow during cleaning process on a liquid nanofibrous filter
NASA Astrophysics Data System (ADS)
Bílek, P.
2017-10-01
This paper deals with the visualization of flow during the cleaning process on a nanofibrous filter. Cleaning is a very important part of the filtration process, extending the lifetime of the filter and improving its filtration properties. Cleaning is carried out on flat-sheet filters, where particles deposited on the filter surface form a filtration cake. The cleaning process dislodges the deposited cake, which is released from the membrane surface into the retentate flow; the blocked pores in the filter are opened again and the hydrodynamic properties are restored. The presented optical method makes it possible to observe the flow behaviour in a thin laser sheet on the inlet side of a tested filter during the cleaning process. The local concentration of solid particles can be estimated, yielding new information about the cleaning process. The article describes the cleaning process on nanofibrous membranes for waste water treatment. The hydrodynamic data were compared with images of the cleaning process.
Non-linear Post Processing Image Enhancement
NASA Technical Reports Server (NTRS)
Hunt, Shawn; Lopez, Alex; Torres, Angel
1997-01-01
A non-linear filter for image post-processing, based on a feedforward neural network topology, is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post-processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean-square non-linear filter, examples of the high-frequency recovery, and the statistical properties of the filter are given.
NASA Astrophysics Data System (ADS)
Martín Furones, Angel; Anquela Julián, Ana Belén; Dimas-Pages, Alejandro; Cos-Gayón, Fernando
2017-08-01
Precise point positioning (PPP) is a well-established Global Navigation Satellite System (GNSS) technique that only requires information from the receiver (or rover) to obtain high-precision position coordinates. This is a very interesting and promising technique because it eliminates the need for a reference station near the rover receiver, or for a network of reference stations, thus reducing the cost of a GNSS survey. From a computational perspective, there are two ways to solve the system of observation equations produced by static PPP: either in a single step (so-called batch adjustment) or with a sequential adjustment/filter. The results of each should be the same if both are well implemented. However, if a sequential solution is needed (that is, not only the final coordinates, but also those at previous GNSS epochs, as in convergence studies), finding a batch solution becomes very time-consuming, owing to the matrix inversions that accumulate with each consecutive epoch. This is not a problem for the filter solution, which uses information computed in the previous epoch to obtain the solution for the current epoch. Filter implementations, however, need extra treatment of user dynamics and of parameter state variations between observation epochs, with appropriate stochastic updates of parameter variances from epoch to epoch. These filtering considerations are not needed in batch adjustment, which makes it attractive. The main objective of this research is to significantly reduce the computation time required to obtain sequential results using batch adjustment. The new method we implemented in the adjustment process led to a mean reduction in computation time of 45%.
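The batch-versus-sequential trade-off can be sketched with ordinary least squares: accumulating the normal equations epoch by epoch yields the same final solution as a one-shot batch adjustment, while making the intermediate per-epoch solutions cheap. This toy example is illustrative only and omits the stochastic modelling discussed above:

```python
import numpy as np

def batch_lsq(A, y):
    """Batch adjustment: solve the whole system in one step."""
    return np.linalg.lstsq(A, y, rcond=None)[0]

def sequential_lsq(A, y):
    """Sequential (information-accumulation) solution: fold each epoch
    into the normal matrix N and vector b, so a per-epoch solution only
    costs one small solve instead of re-forming the full system."""
    n = A.shape[1]
    N = np.zeros((n, n))   # accumulated normal matrix
    b = np.zeros(n)
    solutions = []
    for a_i, y_i in zip(A, y):
        N += np.outer(a_i, a_i)
        b += a_i * y_i
        if np.linalg.matrix_rank(N) == n:   # solvable once observable
            solutions.append(np.linalg.solve(N, b))
    return solutions
```

The last sequential solution coincides with the batch solution, which is the equivalence the paper exploits when accelerating sequential results via batch adjustment.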
NASA Astrophysics Data System (ADS)
Chockalingam, Letchumanan
2005-01-01
LANDSAT data for the Gunung Ledang region of Malaysia are used to map certain hydrogeological features. To map these features, image-processing tools such as contrast enhancement and edge detection are employed. The advantages of these techniques over other methods are evaluated, and their validity in properly isolating features of hydrogeological interest is discussed. Because these techniques exploit the spectral aspects of the images, they have several limitations in meeting the objectives. To address these limitations, a morphological transformation, which considers the structural rather than the spectral aspects of the image, is applied, providing a comparison between results derived from spectral-based and structure-based filtering techniques.
Hepa filter dissolution process
Brewer, Ken N.; Murphy, James A.
1994-01-01
A process for the dissolution of spent high efficiency particulate air (HEPA) filters, followed by combining the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternative to a prior method of acid leaching the spent filters, which is an inefficient way of treating spent HEPA filters for disposal.
Nriagu, Jerome; Xi, Chuanwu; Siddique, Azhar; Vincent, Annette; Shomar, Basem
2018-05-29
Deteriorating water quality from aging infrastructure, growing threat of pollution from industrialization and urbanization, and increasing awareness about waterborne diseases are among the factors driving the surge in worldwide use of point-of-entry (POE) and point-of-use (POU) filters. Any adverse influence of such consumer point-of-use systems on quality of water at the tap remains poorly understood, however. We determined the chemical and microbiological changes in municipal water from the point of entry into the household plumbing system until it leaves from the tap in houses equipped with filters. We show that POE/POU devices can induce significant deterioration of the quality of tap water by functioning as traps and reservoirs for sludge, scale, rust, algae or slime deposits which promote microbial growth and biofilm formation in the household water distribution system. With changes in water pressure and physical or chemical disturbance of the plumbing system, the microorganisms and contaminants may be flushed into the tap water. Such changes in quality of household water carry a potential health risk which calls for some introspection in widespread deployment of POE/POU filters in water distribution systems.
Optical restoration of images blurred by atmospheric turbulence using optimum filter theory.
Horner, J L
1970-01-01
The results of optimum filtering from communications theory have been applied to an image restoration problem. Photographic film imagery, degraded by long-term artificial atmospheric turbulence, has been restored by spatial filters placed in the Fourier transform plane. The time-averaged point spread function was measured and used in designing the filters. Both the simple inverse filter and the optimum least-mean-square filters were used in the restoration experiments. The superiority of the latter is conclusively demonstrated. An optical analog processor was used for the restoration.
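The two restoration filters compared above can be written down directly in the Fourier domain. The sketch below assumes a scalar noise-to-signal power ratio for the least-mean-square (Wiener) filter; the paper's filters were of course optical, realized as transparencies in the transform plane:

```python
import numpy as np

def inverse_filter(G, H, eps=1e-12):
    """Naive inverse filter: divide the degraded spectrum G by the
    transfer function H, amplifying noise wherever |H| is small."""
    return G / (H + eps)

def wiener_filter(G, H, nsr):
    """Least-mean-square (Wiener) restoration with a scalar
    noise-to-signal power ratio `nsr`; reduces to the inverse
    filter in the noiseless limit nsr = 0."""
    return np.conj(H) * G / (np.abs(H) ** 2 + nsr)
```

In the noiseless case the Wiener filter recovers the original exactly; with noise present, a positive `nsr` suppresses the frequencies where the measured point spread function has little energy, which is the source of its superiority over the simple inverse filter.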
Using bench scale U removal capacity data with bone char, a preliminary point-of-use filter was developed using theoretical calculations. The design specifications were completed for the filter, and the manufacturing of the preliminary filter is currently underway. Through ...
Monitoring urban subsidence based on SAR interferometric point target analysis
Zhang, Y.; Zhang, Jiahua; Gong, W.; Lu, Z.
2009-01-01
Interferometric point target analysis (IPTA) is one of the latest developments in radar interferometric processing. It analyzes the interferometric phases of individual point targets, which are discrete and present temporally stable backscattering characteristics, in long temporal series of interferometric SAR images. This paper analyzes the interferometric phase model of point targets and then addresses two key issues in the IPTA process. First, a spatial searching method is proposed to unwrap the interferometric phase difference between two neighboring point targets. The height residual error and linear deformation rate of each point target can then be calculated once a global reference point with known height correction and deformation history is chosen. Second, a spatial-temporal filtering scheme is proposed to further separate the atmospheric phase and the nonlinear deformation phase from the residual interferometric phase. Finally, an experiment with the developed IPTA methodology is conducted over the Suzhou urban area. In total, 38 ERS-1/2 SAR scenes are analyzed, and deformation information over 3,546 point targets spanning 1992-2002 is generated. The IPTA-derived deformation shows very good agreement with published results, demonstrating that the IPTA technique can be developed into an operational tool for mapping ground subsidence over urban areas.
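A common way to estimate a point target's linear deformation rate from its wrapped interferometric phase, in the spirit of the IPTA phase model above, is a grid search maximizing temporal coherence. The wavelength constant and grid bounds below are illustrative assumptions, not values from the paper:

```python
import numpy as np

WAVELENGTH = 0.0566  # assumed ERS C-band wavelength in metres

def estimate_linear_rate(t_years, wrapped_phase, v_grid):
    """Grid search for the linear deformation rate (m/yr) that
    maximises the temporal coherence of a point target's wrapped
    phase residuals: coh(v) = |mean(exp(j*(phi_obs - phi_model(v))))|."""
    best_v, best_coh = 0.0, -1.0
    for v in v_grid:
        model = 4 * np.pi / WAVELENGTH * v * t_years
        coh = np.abs(np.mean(np.exp(1j * (wrapped_phase - model))))
        if coh > best_coh:
            best_v, best_coh = v, coh
    return best_v, best_coh
```

Because the metric works on complex phasors, it never needs the phase to be unwrapped first; a coherence near 1 indicates the model explains essentially all of the temporal phase variation.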
NASA Astrophysics Data System (ADS)
Suri, Veenu; Meyer, Michael; Greenbaum, Alexandra Z.; Bell, Cameron; Beichman, Charles; Gordon, Karl D.; Greene, Thomas P.; Hodapp, K.; Horner, Scott; Johnstone, Doug; Leisenring, Jarron; Manara, Carlos; Mann, Rita; Misselt, K.; Raileanu, Roberta; Rieke, Marcia; Roellig, Thomas
2018-01-01
We describe observations of the embedded young cluster associated with the HII region NGC 2024, planned as part of the guaranteed time observing program for the James Webb Space Telescope with the NIRCam (Near Infrared Camera) instrument. Our goal is to obtain a census of the cluster down to 2 Jupiter masses, viewed through 10-20 magnitudes of extinction, using multi-band filter photometry with both broadband filters and intermediate-band filters that are expected to be sensitive to temperature and surface gravity. The cluster contains several bright point sources as well as extended emission due to reflected light, thermal emission from warm dust, and nebular line emission. We first developed techniques to better understand which point sources would saturate in our target fields when viewed through several JWST NIRCam filters. Using images of the field from the WISE satellite in filters W1 and W2, as well as the 2MASS J and H bands, we devised an algorithm that takes the K-band magnitudes of point sources in the field and the known saturation limits of several NIRCam filters to estimate the impact of the extended emission on survey sensitivity. We provide an overview of our anticipated results: detecting the low-mass end of the IMF as well as planetary-mass objects likely liberated through dynamical interactions.
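The saturation-screening step can be sketched as a simple magnitude comparison per filter; the filter names and limits below are hypothetical placeholders for illustration, not actual NIRCam saturation limits:

```python
def saturating_filters(k_mag, sat_limits):
    """Return the filters in which a point source of the given
    K-band-equivalent magnitude would saturate (i.e. is brighter
    than the filter's saturation limit). `sat_limits` maps filter
    name -> limiting magnitude; smaller magnitude = brighter."""
    return [name for name, limit in sat_limits.items() if k_mag < limit]

# hypothetical saturation limits, for illustration only
SAT_LIMITS = {"F115W": 12.0, "F212N": 8.0, "F466N": 9.5}
```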
Outdoor Illegal Construction Identification Algorithm Based on 3D Point Cloud Segmentation
NASA Astrophysics Data System (ADS)
An, Lu; Guo, Baolong
2018-03-01
Recently, illegal constructions have appeared frequently in our surroundings, seriously restricting the orderly development of urban modernization. 3D point cloud data can be used to identify illegal buildings, addressing this problem effectively. This paper proposes an outdoor illegal construction identification algorithm based on 3D point cloud segmentation. First, to save memory and reduce processing time, a lossless point cloud compression method based on a minimum spanning tree is proposed. Then, a ground point removal method based on multi-scale filtering is introduced to increase accuracy. Finally, building clusters on the ground are obtained using a region growing method, so that illegal constructions can be marked. The effectiveness of the proposed algorithm is verified on a public data set collected from the International Society for Photogrammetry and Remote Sensing (ISPRS).
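The region growing step for extracting building clusters from the ground-removed points can be sketched as greedy Euclidean clustering; the fixed distance threshold below is an illustrative simplification of the paper's method:

```python
import numpy as np
from collections import deque

def region_grow(points, radius=1.0):
    """Greedy Euclidean region growing over a 3D point set: start a
    region at each unvisited point and repeatedly absorb neighbours
    within `radius`. Returns an integer cluster label per point."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    labels = -np.ones(n, dtype=int)   # -1 = unvisited
    cluster = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = cluster
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((d <= radius) & (labels == -1))[0]:
                labels[j] = cluster
                queue.append(j)
        cluster += 1
    return labels
```

A production implementation would use a spatial index (e.g. a k-d tree) for the neighbour query instead of the brute-force distance computation shown here.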
HEPA filter dissolution process
Brewer, K.N.; Murphy, J.A.
1994-02-22
A process is described for the dissolution of spent high efficiency particulate air (HEPA) filters, followed by combining the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternative to a prior method of acid leaching the spent filters, which is an inefficient way of treating spent HEPA filters for disposal. 4 figures.
Electronic filters, repeated signal charge conversion apparatus, hearing aids and methods
NASA Technical Reports Server (NTRS)
Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)
1993-01-01
An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.
Writing filter processes for the SAGA editor, appendix G
NASA Technical Reports Server (NTRS)
Kirslis, Peter A.
1985-01-01
The SAGA editor provides a mechanism by which separate processes can be invoked during an editing session to traverse portions of the parse tree being edited. These processes, termed filter processes, read, analyze, and possibly transform the parse tree, returning the result to the editor. By defining new commands that invoke filter processes with the editor's user-defined command facility, authors of filters can provide complex operations as simple commands. A tree plotter, a pretty printer, and a Pascal tree transformation program have already been written using this facility. Filter processes are introduced, the parse tree structure is described, and the library interface made available to the programmer is presented. How to compile and run filter processes is also discussed. Examples are presented to illustrate aspects of each of these areas.
Lunar laser ranging data processing in a Unix/X windows environment
NASA Technical Reports Server (NTRS)
Ricklefs, Randall L.; Ries, Judit G.
1993-01-01
In cooperation with the NASA Crustal Dynamics Project initiative placing workstation computers at each of its laser ranging stations to handle data filtering and normal-pointing, MLRS personnel have developed a new generation of software to provide the same services for the lunar laser ranging data type. The Unix operating system and X Windows/Motif provide an environment for both batch and interactive filtering and normal-pointing, as well as prediction calculations. The goal is to provide a transportable and maintainable data reduction environment. This software and some sample displays are presented. Historically, the MLRS used two control computers, so that the lunar (or satellite) data could be processed on one computer while data was taken on the other. The reduction of the data was totally interactive and in no way automated. In addition, lunar predictions were produced on-site, another first in the effort to down-size historically mainframe-based applications. Extraction of earth rotation parameters was at one time attempted on site in near-real-time. In 1988, the Crustal Dynamics Project SLR Computer Panel mandated the installation of Hewlett-Packard 9000/360 Unix workstations at each NASA-operated laser ranging station to relieve the aging controller computers of much of their data and communications handling responsibility and to provide on-site data filtering and normal pointing for a growing list of artificial satellite targets. This was seen by MLRS staff as an opportunity to provide a better lunar data processing environment as well.
Input-output characterization of an ultrasonic testing system by digital signal analysis
NASA Technical Reports Server (NTRS)
Williams, J. H., Jr.; Lee, S. S.; Karagulle, H.
1986-01-01
Ultrasonic test system input-output characteristics were investigated by directly coupling the transmitting and receiving transducers face to face without a test specimen. Some of the fundamentals of digital signal processing were summarized. Input and output signals were digitized by using a digital oscilloscope, and the digitized data were processed in a microcomputer by using digital signal-processing techniques. The continuous-time test system was modeled as a discrete-time, linear, shift-invariant system. In estimating the unit-sample response and frequency response of the discrete-time system, it was necessary to use digital filtering to remove low-amplitude noise, which interfered with deconvolution calculations. A digital bandpass filter constructed with the assistance of a Blackman window and a rectangular time window were used. Approximations of the impulse response and the frequency response of the continuous-time test system were obtained by linearly interpolating the defining points of the unit-sample response and the frequency response of the discrete-time system. The test system behaved as a linear-phase bandpass filter in the frequency range 0.6 to 2.3 MHz. These frequencies were selected in accordance with the criterion that they were 6 dB below the maximum peak of the amplitude of the frequency response. The output of the system to various inputs was predicted and the results were compared with the corresponding measurements on the system.
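A digital bandpass filter built with a Blackman window, as used above to remove low-amplitude noise before the deconvolution, can be sketched with the standard windowed-sinc construction; the tap count and band edges below are illustrative, not those of the study:

```python
import numpy as np

def blackman_bandpass(num_taps, f1, f2):
    """Windowed-sinc FIR bandpass: the difference of two low-pass sinc
    kernels (cutoffs f2 and f1), shaped by a Blackman window.
    f1 and f2 are normalised frequencies in cycles/sample,
    0 < f1 < f2 < 0.5. Returns the filter's impulse response."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * f2 * np.sinc(2 * f2 * n) - 2 * f1 * np.sinc(2 * f1 * n)
    return h * np.blackman(num_taps)
```

Because the impulse response is symmetric, the filter has linear phase, matching the linear-phase bandpass behaviour reported for the test system itself.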
Wavelength dependence in radio-wave scattering and specular-point theory
NASA Technical Reports Server (NTRS)
Tyler, G. L.
1976-01-01
Radio-wave scattering from natural surfaces contains a strong quasispecular component that at fixed wavelengths is consistent with specular-point theory, but often has a strong wavelength dependence that is not predicted by physical optics calculations under the usual limitations of specular-point models. Wavelength dependence can be introduced by a physical approximation that preserves the specular-point assumptions with respect to the radii of curvature of a fictitious, effective scattering surface obtained by smoothing the actual surface. A uniform low-pass filter model of the scattering process yields explicit results for the effective surface roughness versus wavelength. Interpretation of experimental results from planetary surfaces indicates that the asymptotic surface height spectral densities fall at least as fast as an inverse cube of spatial frequency. Asymptotic spectral densities for Mars and portions of the lunar surface evidently decrease more rapidly.
Adaptive marginal median filter for colour images.
Morillas, Samuel; Gregori, Valentín; Sapena, Almanzor
2011-01-01
This paper describes a new filter for impulse noise reduction in colour images which is aimed at improving the noise reduction capability of the classical vector median filter. The filter is inspired by the application of a vector marginal median filtering process over a selected group of pixels in each filtering window. This selection, which is based on the vector median, along with the application of the marginal median operation constitutes an adaptive process that leads to a more robust filter design. Also, the proposed method is able to process colour images without introducing colour artifacts. Experimental results show that the images filtered with the proposed method contain less noisy pixels than those obtained through the vector median filter.
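The selection-plus-marginal-median idea can be sketched as follows: rank the pixels in the filtering window by their aggregate L1 distance to the others (the ordering that defines the vector median), keep the best-ranked subset, and take the channel-wise (marginal) median of that subset. This is a simplified sketch, with the subset size `k` as an assumed parameter rather than the paper's adaptive rule:

```python
import numpy as np

def adaptive_marginal_median(window, k=5):
    """Impulse-noise filtering of one RGB window (a flat list of
    pixels). Pixels are ranked by their summed L1 distance to every
    other pixel in the window; outliers rank last. The marginal
    (per-channel) median is then taken over the k best-ranked pixels."""
    window = np.asarray(window, dtype=float).reshape(-1, 3)
    # aggregate L1 distance of each pixel to all others
    dists = np.abs(window[:, None, :] - window[None, :, :]).sum(axis=(1, 2))
    selected = window[np.argsort(dists)[:k]]
    return np.median(selected, axis=0)
```

Restricting the marginal median to vector-median-ranked pixels is what limits the colour artifacts that a plain channel-wise median would introduce.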
Bacterial treatment effectiveness of point-of-use ceramic water filters.
Bielefeldt, Angela R; Kowalski, Kate; Summers, R Scott
2009-08-01
Laboratory experiments were conducted on six point-of-use (POU) ceramic water filters manufactured in Nicaragua; two filters had been used by families for ca. 4 years and the others had limited prior use in our lab. Water spiked with ca. 10⁶ CFU/mL of Escherichia coli was dosed to the filters. Initial disinfection efficiencies ranged from 3 to 4.5 log, but the treatment efficiency decreased with subsequent batches of spiked water. Silver concentrations in the effluent water ranged from 0.04 to 1.75 ppb. Subsequent experiments that used feed water without a bacterial spike yielded 10³-10⁵ CFU/mL bacteria in the effluent. Immediately after recoating four of the filters with a colloidal silver solution, the effluent silver concentrations increased to 36-45 ppb and bacterial disinfection efficiencies were 3.8-4.5 log. The treatment effectiveness decreased to 0.2-2.5 log after loading multiple batches of highly contaminated water. In subsequent loading of clean water, the effluent water contained <20-41 CFU/mL in two of the filters, indicating that the silver had some benefit in reducing bacterial contamination by the filter. In general, these POU filters were found to be effective, but they lost effectiveness with time and released microbes into subsequent volumes of water passed through the system.
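The "log" disinfection efficiencies quoted above are log10 removal values, computed from the influent and effluent counts:

```python
import math

def log_removal(influent_cfu_per_ml, effluent_cfu_per_ml):
    """Log10 removal value (LRV): each unit corresponds to a
    tenfold (90%) reduction in the bacterial concentration."""
    return math.log10(influent_cfu_per_ml / effluent_cfu_per_ml)
```

For example, a 10⁶ CFU/mL challenge reduced to 10² CFU/mL in the effluent is a 4-log removal.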
NASA Astrophysics Data System (ADS)
Dettmer, J.; Quijano, J. E.; Dosso, S. E.; Holland, C. W.; Mandolesi, E.
2016-12-01
Geophysical seabed properties are important for the detection and classification of unexploded ordnance. However, current surveying methods such as vertical seismic profiling, coring, or inversion are of limited use when surveying large areas with high spatial sampling density. We consider surveys based on a source and receiver array towed by an autonomous vehicle which produce large volumes of seabed reflectivity data that contain unprecedented and detailed seabed information. The data are analyzed with a particle filter, which requires efficient reflection-coefficient computation, efficient inversion algorithms and efficient use of computer resources. The filter quantifies information content of multiple sequential data sets by considering results from previous data along the survey track to inform the importance sampling at the current point. Challenges arise from environmental changes along the track where the number of sediment layers and their properties change. This is addressed by a trans-dimensional model in the filter which allows layering complexity to change along a track. Efficiency is improved by likelihood tempering of various particle subsets and including exchange moves (parallel tempering). The filter is implemented on a hybrid computer that combines central processing units (CPUs) and graphics processing units (GPUs) to exploit three levels of parallelism: (1) fine-grained parallel computation of spherical reflection coefficients with a GPU implementation of Levin integration; (2) updating particles by concurrent CPU processes which exchange information using automatic load balancing (coarse grained parallelism); (3) overlapping CPU-GPU communication (a major bottleneck) with GPU computation by staggering CPU access to the multiple GPUs. The algorithm is applied to spherical reflection coefficients for data sets along a 14-km track on the Malta Plateau, Mediterranean Sea. We demonstrate substantial efficiency gains over previous methods. 
[This research was supported in part by the U.S. Dept. of Defense, through the Strategic Environmental Research and Development Program (SERDP).]
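The likelihood-tempering step above can be sketched compactly. The following is a minimal, illustrative Python fragment, not the paper's implementation: particle weights are annealed through a hypothetical temper ladder with systematic resampling between stages, while the trans-dimensional moves, exchange (parallel-tempering) steps, and GPU reflection-coefficient kernels are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def tempered_update(particles, log_lik, betas=(0.25, 0.5, 1.0)):
    """Anneal particle weights through a ladder of likelihood tempers.

    `particles` holds environment-parameter samples and `log_lik(p)` returns
    the log-likelihood of one particle; the temper ladder `betas` is a
    hypothetical choice, not taken from the paper.
    """
    ll = np.array([log_lik(p) for p in particles])
    prev = 0.0
    for beta in betas:
        # Weight by the increment of the tempered likelihood.
        w = np.exp((beta - prev) * (ll - ll.max()))
        w /= w.sum()
        # Systematic resampling keeps the ensemble balanced.
        u = (rng.random() + np.arange(len(w))) / len(w)
        idx = np.minimum(np.searchsorted(np.cumsum(w), u), len(w) - 1)
        particles, ll = particles[idx], ll[idx]
        prev = beta
    return particles
```

In a full filter each tempering stage would be followed by MCMC move steps; here the resampled ensemble simply approximates the tempered posterior.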
Mcclenny, Levi D; Imani, Mahdi; Braga-Neto, Ulisses M
2017-11-25
Gene regulatory networks govern the function of key cellular processes, such as control of the cell cycle, response to stress, DNA repair mechanisms, and more. Boolean networks have been used successfully in modeling gene regulatory networks. In the Boolean network model, the transcriptional state of each gene is represented by 0 (inactive) or 1 (active), and the relationship among genes is represented by logical gates updated at discrete time points. However, the Boolean gene states are never observed directly, but only indirectly and incompletely through noisy measurements based on expression technologies such as cDNA microarrays, RNA-Seq, and cell imaging-based assays. The Partially-Observed Boolean Dynamical System (POBDS) signal model is distinct from other deterministic and stochastic Boolean network models in removing the requirement of a directly observable Boolean state vector and allowing uncertainty in the measurement process, addressing the scenario encountered in practice in transcriptomic analysis. BoolFilter is an R package that implements the POBDS model and associated algorithms for state and parameter estimation. It allows the user to estimate the Boolean states, network topology, and measurement parameters from time series of transcriptomic data using exact and approximated (particle) filters, as well as simulate the transcriptomic data for a given Boolean network model. Some of its infrastructure, such as the network interface, is the same as in the previously published R package for Boolean Networks BoolNet, which enhances compatibility and user accessibility to the new package. We introduce the R package BoolFilter for Partially-Observed Boolean Dynamical Systems (POBDS). 
The BoolFilter package provides a useful toolbox for the bioinformatics community, with state-of-the-art algorithms for simulation of time series transcriptomic data as well as the inverse process of system identification from data obtained with various expression technologies such as cDNA microarrays, RNA-Seq, and cell imaging-based assays.
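As a toy illustration of the POBDS signal model described above, the sketch below steps a hypothetical 3-gene Boolean network with bit-flip perturbation noise and produces a noisy Gaussian expression readout. The wiring, noise level, and readout means are invented for illustration; this is not BoolFilter's API.

```python
import numpy as np

rng = np.random.default_rng(1)

def step(x, p=0.05):
    """One POBDS transition for a toy 3-gene network (hypothetical wiring):
    deterministic logic update, then independent bit-flip perturbation
    noise with probability p per gene."""
    nxt = np.array([
        x[1] & x[2],   # gene 0 <- gene 1 AND gene 2
        ~x[0] & 1,     # gene 1 <- NOT gene 0
        x[0] | x[1],   # gene 2 <- gene 0 OR gene 1
    ])
    flips = rng.random(3) < p
    return nxt ^ flips

def observe(x, mu_off=1.0, mu_on=4.0, sigma=0.5):
    """Noisy expression readout: Gaussian around a low/high mean depending
    on the hidden Boolean state (illustrative stand-in for a microarray
    observation model)."""
    means = np.where(x == 1, mu_on, mu_off)
    return means + rng.normal(0.0, sigma, size=x.shape)
```

State estimation then amounts to inverting `observe` and `step` from a time series of readouts, which is what the package's exact and particle filters do.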
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abrecht, David G.; Schwantes, Jon M.; Kukkadapu, Ravi K.
2015-02-01
Spectrum-processing software that incorporates a Gaussian smoothing kernel within the statistics of first-order Kalman filtration has been developed to provide cross-channel spectral noise reduction for increased real-time signal-to-noise ratios in Mossbauer spectroscopy. The filter was optimized for the breadth of the Gaussian using the Mossbauer spectrum of natural iron foil, and comparisons between the peak broadening, signal-to-noise ratios, and shifts in the calculated hyperfine parameters are presented. The results of optimization give a maximum improvement in the signal-to-noise ratio of 51.1% over the unfiltered spectrum at a Gaussian breadth of 27 channels, or 2.5% of the total spectrum width. The full-width half-maximum of the spectrum peaks showed an increase of 19.6% at this optimum point, indicating a relatively weak increase in the peak broadening relative to the signal enhancement, leading to an overall increase in the observable signal. Calculations of the hyperfine parameters showed that no statistically significant deviations were introduced by the application of the filter, confirming the utility of this filter for spectroscopy applications.
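One plausible reading of this construction, sketched below as a guess since the exact formulation is not reproduced in the abstract, is Gaussian cross-channel pre-smoothing feeding a first-order (random-walk) scalar Kalman recursion over the channels; the q/r noise variances are illustrative.

```python
import numpy as np

def gaussian_kalman(spectrum, kernel_sigma=27 / 2.355, q=1.0, r=25.0):
    """Cross-channel noise reduction sketch. The default sigma converts the
    reported 27-channel optimum breadth (taken as a FWHM) to a standard
    deviation; how the kernel enters the filter statistics here is an
    assumption, not the paper's construction."""
    half = int(4 * kernel_sigma)
    t = np.arange(-half, half + 1)
    k = np.exp(-0.5 * (t / kernel_sigma) ** 2)
    k /= k.sum()
    # Edge-pad so the convolution does not darken the spectrum ends.
    pre = np.convolve(np.pad(spectrum, half, mode="edge"), k, mode="same")[half:-half]
    x, p = pre[0], r
    out = np.empty(len(spectrum))
    for i in range(len(spectrum)):
        p = p + q                 # predict (random-walk channel state)
        g = p / (p + r)           # Kalman gain
        x = x + g * (pre[i] - x)  # update with the smoothed channel count
        p = (1 - g) * p
        out[i] = x
    return out
```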
Evaluating the sustainability of ceramic filters for point-of-use drinking water treatment.
Ren, Dianjun; Colosi, Lisa M; Smith, James A
2013-10-01
This study evaluates the social, economic, and environmental sustainability of ceramic filters impregnated with silver nanoparticles for point-of-use (POU) drinking water treatment in developing countries. The functional unit for this analysis was the amount of water consumed by a typical household over ten years (37,960 L), as delivered by either the POU technology or a centralized water treatment and distribution system. Results indicate that the ceramic filters are 3-6 times more cost-effective than the centralized water system for reduction of waterborne diarrheal illness among the general population and children under five. The ceramic filters also exhibit better environmental performance for four of five evaluated life cycle impacts: energy use, water use, global warming potential, and particulate matter emissions (PM10). For smog formation potential, the centralized system is preferable to the ceramic filter POU technology. This convergence of social, economic, and environmental criteria offers clear indication that the ceramic filter POU technology is a more sustainable choice for drinking water treatment in developing countries than the centralized treatment systems that have been widely adopted in industrialized countries.
Matching rendered and real world images by digital image processing
NASA Astrophysics Data System (ADS)
Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume
2010-05-01
Recent advances in computer-generated imagery (CGI) have been used in commercial and industrial photography, providing broad scope in product advertising. Mixing real-world images with those rendered from virtual-space software shows a more or less visible mismatch between the corresponding image quality. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to image-degradation factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color-pattern demosaicing, etc. The effect of all these degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization quantifies the degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images according to the parameters indicated by the real system PSF, attempting to match the virtual and real-world image qualities. The system MTF is determined by the slanted-edge method both under laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered with a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different regions of the final image.
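The degradation step can be sketched as a separable Gaussian convolution; the sigma value would come from the slanted-edge PSF measurement and is the user's input here, not a value from the paper.

```python
import numpy as np

def gaussian_psf_blur(img, sigma):
    """Degrade a rendered image by convolving it with a separable Gaussian
    approximating the measured system PSF (sigma in pixels)."""
    radius = int(4 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    k /= k.sum()
    # Separable convolution: filter rows, then columns.
    out = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, out)
```

A Gaussian is only an approximation of a real PSF, but it is a common and convenient model when the MTF is measured as a single curve.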
3D road marking reconstruction from street-level calibrated stereo pairs
NASA Astrophysics Data System (ADS)
Soheilian, Bahman; Paparoditis, Nicolas; Boldo, Didier
This paper presents an automatic approach to road marking reconstruction using stereo pairs acquired by a mobile mapping system in a dense urban area. Two types of road markings were studied: zebra crossings (crosswalks) and dashed lines. These two types of road markings consist of strips having known shape and size. These geometric specifications are used to constrain the recognition of strips. In both cases (i.e. zebra crossings and dashed lines), the reconstruction method consists of three main steps. The first step extracts edge points from the left and right images of a stereo pair and computes 3D linked edges using a matching process. The second step comprises a filtering process that uses the known geometric specifications of road marking objects. The goal is to preserve linked edges that can plausibly belong to road markings and to filter others out. The final step uses the remaining linked edges to fit a theoretical model to the data. The method developed has been used for processing a large number of images. Road markings are successfully and precisely reconstructed in dense urban areas under real traffic conditions.
Correia, Carlos M; Teixeira, Joel
2014-12-01
Computationally efficient wave-front reconstruction techniques for astronomical adaptive-optics (AO) systems have seen great development in the past decade. Algorithms developed in the spatial-frequency (Fourier) domain have gathered much attention, especially for high-contrast imaging systems. In this paper we present the Wiener filter (resulting in the maximization of the Strehl ratio) and further develop formulae for the anti-aliasing (AA) Wiener filter that optimally takes into account high-order wave-front terms folded in-band during the sensing (i.e., discrete sampling) process. We employ a continuous spatial-frequency representation for the forward measurement operators and derive the Wiener filter when aliasing is explicitly taken into account. We further investigate the reconstructed wave-front, measurement-noise, and aliasing propagation coefficients as a function of the system order, and compare them to classical estimates using least-squares filters. Regarding high-contrast systems, we provide achievable performance results as a function of an ensemble of forward models for the Shack-Hartmann wave-front sensor (using sparse and nonsparse representations) and compute point-spread-function raw intensities. We find that for a 32×32 single-conjugate AO system the aliasing propagation coefficient is roughly 60% of that of the least-squares filter, whereas the noise propagation is around 80%. Contrast improvements by factors of up to 2 are achievable across the field in the H band. For current and next-generation high-contrast imagers, despite better aliasing mitigation, AA Wiener filtering cannot be used as a standalone method and must therefore be combined with optical spatial filters deployed before image formation actually takes place.
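The Wiener reconstructor has the familiar per-frequency form; the sketch below applies it bin by bin on a discrete FFT grid with illustrative shapes and PSDs, whereas the paper works with continuous spatial-frequency operators and explicit Shack-Hartmann forward models.

```python
import numpy as np

def wiener_reconstruct(meas, H, S_phi, S_n):
    """Per-frequency Wiener filter W = S_phi H* / (|H|^2 S_phi + S_n): the
    MMSE reconstructor that maximizes the Strehl ratio. H is the sensor
    transfer function and S_phi, S_n the phase and noise PSDs, all sampled
    on one FFT grid."""
    W = S_phi * np.conj(H) / (np.abs(H) ** 2 * S_phi + S_n)
    return np.fft.ifft2(W * np.fft.fft2(meas)).real
```

The anti-aliasing variant of the paper modifies the denominator to account for the PSD of wave-front terms folded in-band by discrete sampling.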
Effect of high latitude filtering on NWP skill
NASA Technical Reports Server (NTRS)
Kalnay, E.; Takacs, L. L.; Hoffman, R. N.
1984-01-01
The high-latitude filtering techniques commonly employed in global grid-point models to eliminate the high-frequency waves associated with the convergence of meridians can introduce serious distortions which ultimately affect the solution at all latitudes. Experiments completed so far with the 4 deg x 5 deg, 9-level GLAS Fourth Order Model indicate that the high-latitude filter currently in operation has only a minimal effect on its forecasting skill. In one case, however, the use of a pressure-gradient filter significantly improved the forecast. Three-day forecasts with the pressure-gradient and operational filters are compared, as are 5-day forecasts with no filter.
Comparison of Nonlinear Filtering Techniques for Lunar Surface Roving Navigation
NASA Technical Reports Server (NTRS)
Kimber, Lemon; Welch, Bryan W.
2008-01-01
Leading up to the Apollo missions, the Extended Kalman Filter, a modified version of the Kalman Filter, was developed to estimate the state of a nonlinear system. Throughout the Apollo missions, Potter's Square Root Filter was used for lunar navigation. Now that NASA is returning to the Moon, the filters used during the Apollo missions must be compared to the filters that have been developed since that time: the Bierman-Thornton Filter (UD) and the Unscented Kalman Filter (UKF). The UD Filter involves factoring the covariance matrix into UDU^T and has accuracy similar to the Square Root Filter; however, it requires less computation time. Conversely, the UKF, which uses sigma points, is much more computationally intensive than any of the other filters; however, it produces the most accurate results. The Extended Kalman Filter, Potter's Square Root Filter, the Bierman-Thornton UD Filter, and the Unscented Kalman Filter each prove to be the most accurate filter depending on the specific conditions of the navigation system.
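The UDU^T idea is easy to state in code: the covariance is carried as a unit upper-triangular factor U and a diagonal d rather than as P itself. A minimal Bierman-style factorization sketch (illustrative, not flight code):

```python
import numpy as np

def udu_factor(P):
    """Factor a symmetric positive-definite P as U @ diag(d) @ U.T with U
    unit upper-triangular: the representation the Bierman-Thornton filter
    propagates in place of the covariance, giving square-root-class
    numerical robustness without computing square roots."""
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    P = P.astype(float).copy()
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        U[:j, j] = P[:j, j] / d[j]
        # Deflate the leading j x j block.
        P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
    return U, d
```

The filter's measurement and time updates then operate directly on U and d, which is where the computational savings over explicit square-root filtering come from.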
Citeau, M; Olivier, J; Mahmoud, A; Vaxelaire, J; Larue, O; Vorobiev, E
2012-09-15
Pressurised electro-osmotic dewatering (PEOD) of two sewage sludges (activated and anaerobically digested) was studied under constant electric current (C.C.) and constant voltage (C.V.) with a laboratory chamber closely simulating an industrial filter. The influence of sludge characteristics, process parameters, and electrode/filter cloth position was investigated. The following parameters were tested: 40 and 80 A/m² and 20, 30, and 50 V for digested sludge dewatering; and 20, 40, and 80 A/m² and 20, 30, and 50 V for activated sludge dewatering. Effects of filter cloth electric resistance and initial cake thickness were also investigated. The application of PEOD provides a gain of 12 points of dry solids content for the digested sludge (47.0% w/w) and for the activated sludge (31.7% w/w). In PEOD operated at C.C. or at C.V., the dewatering flow rate was similar for the same electric field intensity. In C.C. mode, both the electric resistance of the cake and the voltage increase, causing a temperature rise by the ohmic effect. In C.V. mode, a current intensity peak was observed in the early dewatering period. Applying a constant current first and a constant voltage later permitted better control of the ohmic heating effect. The dewatering rate was not significantly affected by the presence of filter cloth on the electrodes, but the use of a thin filter cloth markedly reduced energy consumption compared to a thicker one: a 69% reduction in energy input at 45% w/w dry solids content. Reducing the initial cake thickness is advantageous for increasing the final dry solids content. Copyright © 2012 Elsevier Ltd. All rights reserved.
Online tracking of instantaneous frequency and amplitude of dynamical system response
NASA Astrophysics Data System (ADS)
Frank Pai, P.
2010-05-01
This paper presents a sliding-window tracking (SWT) method for accurate tracking of the instantaneous frequency and amplitude of an arbitrary dynamic response by processing only the three (or more) most recent data points. The Teager-Kaiser algorithm (TKA) is a well-known four-point method for online tracking of frequency and amplitude. Because finite differences are used in TKA, its accuracy is easily destroyed by measurement and/or signal-processing noise. Moreover, because TKA assumes the processed signal to be a pure harmonic, any moving average in the signal can destroy its accuracy. On the other hand, because SWT uses a constant and a pair of windowed regular harmonics to fit the data and estimate the instantaneous frequency and amplitude, the influence of any moving average is eliminated. Moreover, noise filtering is an implicit capability of SWT when more than three data points are used, and this capability increases with the number of processed data points. To compare the accuracy of SWT and TKA, the Hilbert-Huang transform is used to extract accurate time-varying frequencies and amplitudes by processing the whole data set without assuming the signal to be harmonic. Frequency and amplitude tracking of different amplitude- and frequency-modulated signals, vibrato in music, and nonlinear stationary and non-stationary dynamic signals is studied. Results show that SWT is more accurate, robust, and versatile than TKA for online tracking of frequency and amplitude.
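The four-point TKA idea can be made concrete with the discrete energy operator. The sketch below uses the standard DESA-2 variant, which recovers the digital frequency and amplitude exactly for a noise-free pure tone, the case where, as the abstract notes, TKA is valid.

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator: Psi[x](n) = x_n^2 - x_(n-1) x_(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def desa2(x):
    """DESA-2 frequency/amplitude tracking: apply the energy operator to the
    signal and to its symmetric difference y(n) = x(n+1) - x(n-1), then
    solve for omega (rad/sample) and amplitude at each interior sample."""
    psi_x = teager_kaiser(x)
    psi_y = teager_kaiser(x[2:] - x[:-2])
    px = psi_x[1:-1]                    # align the two operator outputs
    omega = 0.5 * np.arccos(1.0 - psi_y / (2.0 * px))
    amp = 2.0 * px / np.sqrt(psi_y)
    return omega, amp
```

Adding noise or a moving average to `x` quickly corrupts these estimates, which is exactly the weakness the SWT method is designed to avoid.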
Trickling Filters. Student Manual. Biological Treatment Process Control.
ERIC Educational Resources Information Center
Richwine, Reynold D.
The textual material for a unit on trickling filters is presented in this student manual. Topic areas discussed include: (1) trickling filter process components (preliminary treatment, media, underdrain system, distribution system, ventilation, and secondary clarifier); (2) operational modes (standard rate filters, high rate filters, roughing…
40 CFR 430.125 - New source performance standards (NSPS).
Code of Federal Regulations, 2013 CFR
2013-07-01
... GUIDELINES AND STANDARDS (CONTINUED) THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Tissue, Filter... where filter and non-woven papers are produced from purchased pulp] Pollutant or pollutant property Kg...
40 CFR 430.125 - New source performance standards (NSPS).
Code of Federal Regulations, 2012 CFR
2012-07-01
... GUIDELINES AND STANDARDS (CONTINUED) THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Tissue, Filter... where filter and non-woven papers are produced from purchased pulp] Pollutant or pollutant property Kg...
40 CFR 430.125 - New source performance standards (NSPS).
Code of Federal Regulations, 2014 CFR
2014-07-01
... GUIDELINES AND STANDARDS (CONTINUED) THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Tissue, Filter... where filter and non-woven papers are produced from purchased pulp] Pollutant or pollutant property Kg...
Cheng, Xuemin; Hao, Qun; Xie, Mengdi
2016-04-07
Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization, integrating the speeded-up robust features (SURF) algorithm, a modified random sample consensus (RANSAC), and the Kalman filter, and taking camera scaling as well as conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. Falsely matched points were removed by the modified RANSAC. Global motion was estimated using the feature points and modified cascading parameters, which reduced the accumulated errors over a series of frames and improved the peak signal-to-noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with the filtered motion parameters using modified adjacent-frame compensation. The experimental results show that the target images were stabilized even when the vibration amplitudes of the video became increasingly large.
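A pared-down stand-in for the outlier-rejection step can illustrate the idea; the sketch below handles translation only, whereas the paper's modified RANSAC also accounts for rotation and scale.

```python
import numpy as np

rng = np.random.default_rng(5)

def ransac_translation(src, dst, iters=200, tol=1.5):
    """Estimate a global 2-D translation between matched feature points
    while rejecting false matches: each one-point hypothesis is scored by
    its consensus set, and the best set is refit by averaging."""
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                        # 1-point hypothesis
        inliers = np.linalg.norm(src + t - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    t = (dst[best] - src[best]).mean(axis=0)       # refit on consensus set
    return t, best
```

The per-frame motion estimates produced this way are what the Kalman filter then smooths before frame compensation.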
Micro Coronal Bright Points Observed in the Quiet Magnetic Network by SOHO/EIT
NASA Technical Reports Server (NTRS)
Falconer, D. A.; Moore, R. L.; Porter, J. G.
1997-01-01
When one looks at SOHO/EIT Fe XII images of quiet regions, one can see the conventional coronal bright points (> 10 arcsec in diameter), but one will also notice many smaller faint enhancements in brightness (Figure 1). Do these micro coronal bright points belong to the same family as the conventional bright points? To investigate this question we compared SOHO/EIT Fe XII images with Kitt Peak magnetograms to determine whether the micro bright points are in the magnetic network and mark magnetic bipoles within the network. To identify the coronal bright points, we applied a picture frame filter to the Fe XII images; this brings out the Fe XII network and bright points (Figure 2) and allows us to study the bright points down to the resolution limit of the SOHO/EIT instrument. This picture frame filter is a square smoothing function (half a network cell wide) with a central square (a quarter of a network cell wide) removed so that a bright point's intensity does not affect its own background. This smoothing function is applied to the full disk image. Then we divide the original image by the smoothed image to obtain our filtered image. A bright point is defined as any contiguous set of pixels (including diagonally) which have enhancements of 30% or more above the background; a micro bright point is any bright point 16 pixels or smaller in size. We then analyzed the bright points that were fully within quiet regions (0.6 x 0.6 solar radius) centered on disk center on six different days.
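The picture-frame filtering and bright-point labeling described above can be sketched directly. Kernel sizes here are in pixels and stand in for the half- and quarter-network-cell widths, which depend on the image scale; SciPy's ndimage is assumed for convolution and 8-connected labeling.

```python
import numpy as np
from scipy.ndimage import convolve, label

def picture_frame_ratio(img, outer=15, inner=5):
    """Divide the image by a 'picture frame' background: a square smoothing
    kernel with its central square zeroed, so a bright point does not
    contribute to its own background estimate."""
    k = np.ones((outer, outer))
    lo = (outer - inner) // 2
    k[lo:lo + inner, lo:lo + inner] = 0.0
    k /= k.sum()
    return img / convolve(img, k, mode="nearest")

def bright_point_sizes(ratio, threshold=1.3, micro_max=16):
    """Label contiguous (8-connected) pixels at least 30% above background;
    features of micro_max pixels or fewer count as micro bright points."""
    labels, n = label(ratio >= threshold, structure=np.ones((3, 3)))
    sizes = [int((labels == i).sum()) for i in range(1, n + 1)]
    return sizes, [s for s in sizes if s <= micro_max]
```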
NASA Astrophysics Data System (ADS)
Eltner, A.; Schneider, D.; Maas, H.-G.
2016-06-01
Soil erosion is a decisive earth-surface process strongly influencing the fertility of arable land. Several options exist to detect soil erosion at the scale of large field plots (here 600 m²), each with different advantages and disadvantages depending on the applied method. In this study, the benefits of unmanned aerial vehicle (UAV) photogrammetry and terrestrial laser scanning (TLS) are exploited to quantify soil surface changes. Before data combination, TLS data is co-registered to the DEMs generated with UAV photogrammetry. TLS data is used to detect global as well as local errors in the DEMs calculated from UAV images. Additionally, TLS data is considered for vegetation filtering. Complementarily, DEMs from UAV photogrammetry are utilised to detect systematic TLS errors and to further filter TLS point clouds with regard to unfavourable scan geometry (i.e. incidence angle and footprint) on gentle hillslopes. In addition, surface roughness is integrated as an important parameter to evaluate TLS point reliability, because the footprint, and thus the area of signal reflection, increases with distance to the scanning device. The developed fusion tool allows for the estimation of reliable data points from each data source, considering the data acquisition geometry and surface properties, to finally merge both data sets into a single soil surface model. Data fusion is performed for three different field campaigns at a Mediterranean field plot. Successive DEM evaluation reveals a continuous decrease of soil surface roughness, the reappearance of former wheel tracks, and local soil particle relocation patterns.
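The footprint-based reliability weighting can be illustrated with the usual first-order geometry: the laser spot grows linearly with range and stretches by 1/cos(incidence) on a tilted surface. The divergence and exit-aperture values below are typical scanner numbers, not the study's instrument specification.

```python
import numpy as np

def tls_footprint(distance_m, incidence_deg, divergence_rad=0.35e-3, exit_d_m=7e-3):
    """Approximate footprint diameter (m) of a TLS beam on a tilted surface:
    exit diameter plus linear spread with range, stretched by 1/cos(theta)
    for incidence angle theta measured from the surface normal."""
    beam = exit_d_m + distance_m * divergence_rad
    return beam / np.cos(np.radians(incidence_deg))
```

Points with large footprints (long range, grazing incidence) would then be down-weighted or discarded when the TLS and UAV surface models are merged.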
Research on a high-precision calibration method for tunable lasers
NASA Astrophysics Data System (ADS)
Xiang, Na; Li, Zhengying; Gui, Xin; Wang, Fan; Hou, Yarong; Wang, Honghai
2018-03-01
Tunable lasers are widely used in the field of optical fiber sensing, but nonlinear tuning exists even for zero external disturbance and limits the accuracy of the demodulation. In this paper, a high-precision calibration method for tunable lasers is proposed. A comb filter is introduced, and the real-time output wavelength and scanning rate of the laser are calibrated by linearly fitting several time-frequency reference points obtained from it, while the beat signal generated by the auxiliary interferometer is interpolated and frequency-multiplied to find more accurate zero-crossing points; these points are used as wavelength counters to resample the comb signal and correct the nonlinear effect, which ensures that the time-frequency reference points of the comb filter are linear. A stability experiment and a strain sensing experiment verify the calibration precision of this method. The experimental results show that the stability and wavelength resolution of the FBG demodulation can reach 0.088 pm and 0.030 pm, respectively, using a tunable laser calibrated by the proposed method. We have also compared the demodulation accuracy in the presence and absence of the comb filter; the results show that the introduction of the comb filter yields a 15-fold improvement in wavelength resolution.
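The zero-crossing step can be sketched with linear interpolation between samples of opposite sign; the frequency multiplication of the beat signal is omitted and the names are illustrative.

```python
import numpy as np

def zero_crossings(t, beat):
    """Sub-sample zero-crossing instants of an auxiliary-interferometer beat
    signal, located by linear interpolation between adjacent samples of
    opposite sign. The instants serve as counters that are equally spaced
    in optical frequency, enabling resampling of the comb signal."""
    s = np.signbit(beat)
    idx = np.nonzero(s[:-1] != s[1:])[0]
    frac = beat[idx] / (beat[idx] - beat[idx + 1])
    return t[idx] + frac * (t[idx + 1] - t[idx])
```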
Optimal post-experiment estimation of poorly modeled dynamic systems
NASA Technical Reports Server (NTRS)
Mook, D. Joseph
1988-01-01
Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process-noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than those of filters for applications in which the dynamic model error clearly violates the typical process-noise assumptions and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous purposes. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.
Time-domain damping models in structural acoustics using digital filtering
NASA Astrophysics Data System (ADS)
Parret-Fréaud, Augustin; Cotté, Benjamin; Chaigne, Antoine
2016-02-01
This paper describes a new approach to formulating well-posed time-domain damping models able to represent various frequency-domain profiles of damping properties. The novelty of this approach is to represent the behavior law of a given material directly in a discrete-time framework as a digital filter, which is synthesized for each material from a discrete set of frequency-domain data, such as the complex modulus, through an optimization process. A key point is the addition of specific constraints to this process in order to guarantee stability, causality, and satisfaction of the second law of thermodynamics when transposing the resulting discrete-time behavior law into the time domain. This method thus offers a framework particularly suitable for time-domain simulations in structural dynamics and acoustics for a wide range of materials (polymers, wood, foam, etc.), making it possible to control and even reduce the distortion effects induced by time-discretization schemes on the frequency response of continuous-time behavior laws.
Stochastic stability of sigma-point Unscented Predictive Filter.
Cao, Lu; Tang, Yu; Chen, Xiaoqian; Zhao, Yong
2015-07-01
In this paper, the Unscented Predictive Filter (UPF) is derived based on the unscented transformation for nonlinear estimation, moving beyond conventional sigma-point filters, which merely take the Kalman filter as the subject of investigation. To facilitate the new method, the algorithm flow of the UPF is given first. Then, theoretical analyses demonstrate that the estimation accuracy of the model error and the system state is higher for the UPF than for the conventional PF. Moreover, the authors analyze the stochastic boundedness and the error behavior of the UPF for general nonlinear systems in a stochastic framework. In particular, the theoretical results show that the estimation error remains bounded and the covariance remains stable if the system's initial estimation error, disturbing noise terms, and model error are small enough, which is the core part of the UPF theory. All of the results have been demonstrated by numerical simulations for a nonlinear example system. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER
NASA Technical Reports Server (NTRS)
Barton, R. S.
1994-01-01
The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. 
When the search is concluded, the values of amplitude and phase for the k whose metric was largest, as well as consistency checks, are reported. A finer search can be done in the neighborhood of the optimal k if desired. The filter finally selected is written to disk in terms of drive values, not in terms of the filter's complex transmittance. Optionally, the impulse response of the filter may be created to permit users to examine the response for the features the algorithm deems important to the recognition process under the selected metric, limitations of the filter SLM, etc. MEDOF uses the filter SLM to its greatest potential, so filter competence is not compromised for simplicity of computation. MEDOF is written in C-language for Sun series computers running SunOS. With slight modifications, it has been implemented on DEC VAX series computers using the DEC-C v3.30 compiler, although the documentation does not currently support this platform. MEDOF can also be compiled using Borland International Inc.'s Turbo C++ v1.0, but IBM PC memory restrictions greatly reduce the maximum size of the reference images from which the filters can be calculated. MEDOF requires a two-dimensional Fast Fourier Transform (2DFFT). One 2DFFT routine which has been used successfully with MEDOF is found in "Numerical Recipes in C: The Art of Scientific Computing," which is available from Cambridge University Press, New Rochelle, NY 10801. The standard distribution medium for MEDOF is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. MEDOF was developed in 1992-1993.
Folmsbee, Martha
2015-01-01
Approximately 97% of filter validation tests result in the demonstration of absolute retention of the test bacteria, and thus sterile filter validation failure is rare. However, while Brevundimonas diminuta (B. diminuta) penetration of sterilizing-grade filters is rarely detected, the observation that some fluids (such as vaccines and liposomal fluids) may lead to an increased incidence of bacterial penetration of sterilizing-grade filters by B. diminuta has been reported. The goal of the following analysis was to identify important drivers of filter validation failure in these rare cases. The identification of these drivers will hopefully serve the purpose of assisting in the design of commercial sterile filtration processes with a low risk of filter validation failure for vaccine, liposomal, and related fluids. Filter validation data for low-surface-tension fluids was collected and evaluated with regard to the effect of bacterial load (CFU/cm(2)), bacterial load rate (CFU/min/cm(2)), volume throughput (mL/cm(2)), and maximum filter flux (mL/min/cm(2)) on bacterial penetration. The data set (∼1162 individual filtrations) included all instances of process-specific filter validation failures performed at Pall Corporation, including those using other filter media, but did not include all successful retentive filter validation bacterial challenges. It was neither practical nor necessary to include all filter validation successes worldwide (Pall Corporation) to achieve the goals of this analysis. The percentage of failed filtration events for the selected total master data set was 27% (310/1162). Because it is heavily weighted with penetration events, this percentage is considerably higher than the actual rate of failed filter validations, but, as such, facilitated a close examination of the conditions that lead to filter validation failure. 
In agreement with our previous reports, two of the significant drivers of bacterial penetration identified were the total bacterial load and the bacterial load rate. In addition to these parameters, three further possible drivers of failure were identified: volume throughput, maximum filter flux, and pressure. Of the data for which volume throughput information was available, 24% (249/1038) of the filtrations resulted in penetration. However, for the volume throughput range of 680-2260 mL/cm², only 9 out of 205 bacterial challenges (∼4%) resulted in penetration. Of the data for which flux information was available, 22% (212/946) resulted in bacterial penetration. However, in the maximum filter flux range from 7 to 18 mL/min/cm², only one out of 121 filtrations (0.6%) resulted in penetration. A slight increase in filter failure was observed in bacterial challenges with a differential pressure greater than 30 psid. When designing a commercial process for the sterile filtration of a low-surface-tension fluid (or any other potentially high-risk fluid), targeting the volume throughput range of 680-2260 mL/cm² or the flux range of 7-18 mL/min/cm², and maintaining the differential pressure below 30 psid, could significantly decrease the risk of filter validation failure. However, it is important to keep in mind that these are general trends and some test fluids may not conform to them. Ultimately, it is important to evaluate both filterability and bacterial retention of the test fluid under proposed process conditions prior to finalizing the manufacturing process to ensure successful process-specific filter validation of low-surface-tension fluids. An overwhelming majority of process-specific filter validation (qualification) tests result in the demonstration of absolute retention of test bacteria by sterilizing-grade membrane filters. As such, process-specific filter validation failure is rare.
However, while bacterial penetration of sterilizing-grade filters during process-specific filter validation is rarely detected, some fluids (such as vaccines and liposomal fluids) have been associated with an increased incidence of bacterial penetration. The goal of the following analysis was to identify important drivers of process-specific filter validation failure. The identification of these drivers may assist in the design of commercial sterile filtration processes with a low risk of filter validation failure. Filter validation data for low-surface-tension fluids were collected and evaluated with regard to bacterial concentrations and rates, as well as filtered fluid volume and rate (Pall Corporation). The master data set (∼1160 individual filtrations) included all recorded instances of process-specific filter validation failures but did not include all successful filter validation bacterial challenge tests. This allowed for a close examination of the conditions that lead to process-specific filter validation failure. As previously reported, two significant drivers of bacterial penetration were identified: the total bacterial load (the total number of bacteria per filter) and the bacterial load rate (the rate at which bacteria were applied to the filter). In addition to these parameters, three further possible drivers of failure were identified: volumetric throughput, filter flux, and pressure. When designing a commercial process for the sterile filtration of a low-surface-tension fluid (or any other penetration-risk fluid), targeting the identified bacterial challenge loads, volume throughput, and corresponding flux rates could decrease, and possibly eliminate, the risk of filter validation failure. However, it is important to keep in mind that these are general trends and some test fluids may not conform to them.
Ultimately, it is important to evaluate both filterability and bacterial retention of the test fluid under proposed process conditions prior to finalizing the manufacturing process to ensure successful filter validation of low-surface-tension fluids. © PDA, Inc. 2015.
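The throughput and flux figures in the study above are simple per-area ratios. A sketch of how a process point would be checked against the reported low-risk windows; the filter area, volume, and flow numbers below are hypothetical, only the 680-2260 mL/cm² and 7-18 mL/min/cm² ranges come from the study.

```python
# Hypothetical process numbers for illustration.
filter_area_cm2 = 13.8        # e.g. a small-disc test filter (assumed)
volume_mL = 20000.0           # total volume filtered (assumed)
max_flow_mL_min = 200.0       # peak flow rate (assumed)

throughput = volume_mL / filter_area_cm2        # mL/cm^2
max_flux = max_flow_mL_min / filter_area_cm2    # mL/min/cm^2

# Low-risk windows reported in the study.
in_low_risk_window = (680 <= throughput <= 2260) and (7 <= max_flux <= 18)
print(round(throughput, 1), round(max_flux, 1), in_low_risk_window)  # → 1449.3 14.5 True
```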
All-fiber bandpass filter based on asymmetrical modes exciting and coupling
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Zhu, Tao; Shi, Leilei; Liu, Min
2013-01-01
A low-cost all-fiber bandpass filter is demonstrated by fabricating an asymmetric long-period fiber grating (LPFG) in an offset-spliced structure of two single-mode fibers. The principle of the filter is as follows: the asymmetric LPFG, written by single-side CO2 laser irradiation, couples the asymmetric cladding modes excited by the offset coupling at the splice point between the single-mode fiber and the grating, while the residual core mode at the splice point cannot couple into the core of the output fiber, so interference effects are avoided and bandpass characteristics appear in the transmission spectrum. The designed filter exhibits a pass band at a central wavelength of 1565.0 nm with a full-width at half-maximum bandwidth of 12.3 nm.
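The spectral position of such an LPFG pass band follows the usual first-order phase-matching condition λ = (n_core − n_clad)Λ. A sketch with assumed effective indices and grating period, chosen only so the estimate lands near the reported 1565.0 nm; these are not the paper's fabrication values.

```python
# Illustrative LPFG phase-matching estimate. All three numbers are assumptions.
n_core = 1.4682        # effective index of the core mode (hypothetical)
n_clad = 1.4650        # effective index of the coupled cladding mode (hypothetical)
period_um = 489.0      # grating period in micrometers (hypothetical)

# First-order resonance: lambda = (n_core - n_clad) * period
resonance_nm = (n_core - n_clad) * period_um * 1000.0
print(round(resonance_nm, 1))  # → 1564.8
```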
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narducci, Dario, E-mail: dario.narducci@unimib.it; Consorzio DeltaTi Research; Selezneva, Ekaterina
2012-09-15
Energy filtering has been widely considered a suitable tool to increase the thermoelectric performance of several classes of materials. In essence, energy filtering provides a way to increase the Seebeck coefficient by introducing a strongly energy-dependent scattering mechanism. Under certain conditions, however, potential barriers may lead to carrier localization, which may also affect the thermoelectric properties of a material. A model is proposed showing that randomly distributed potential barriers (such as those found, e.g., in polycrystalline films) may lead to the simultaneous occurrence of energy filtering and carrier localization. Localization is shown to cause a decrease of the actual carrier density that, along with quantum tunneling of carriers, may result in an unexpected increase of the power factor with the doping level. The model is corroborated against experimental data gathered by several authors on degenerate polycrystalline silicon and lead telluride. Graphical abstract: In heavily doped semiconductors potential barriers may lead to both carrier energy filtering and localization. This may enhance the thermoelectric properties of the material, resulting in an unexpected increase of the power factor with the doping level. Highlights: Potential barriers are shown to lead to carrier localization in thermoelectric materials. Evidence is put forward of the formation of a mobility edge. Energy filtering and localization may explain the enhancement of power factor in degenerate semiconductors.
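The filtering mechanism described above can be illustrated with a toy calculation: a barrier that blocks carriers below some energy E_b raises the mean energy of the transported carriers, which is the quantity the Seebeck coefficient tracks. This is a schematic sketch, not the paper's transport model; the transport distribution is assumed.

```python
import numpy as np

kT = 0.0259                       # eV at room temperature
E = np.linspace(0.0, 1.0, 20001)  # carrier energy grid, eV
weight = E * np.exp(-E / kT)      # schematic transport distribution (assumed form)

def mean_transported_energy(E_b):
    """Mean energy of carriers that clear a barrier at E_b."""
    mask = E >= E_b               # ideal filter: transmit only E >= E_b
    return np.sum(E[mask] * weight[mask]) / np.sum(weight[mask])

no_barrier = mean_transported_energy(0.0)
with_barrier = mean_transported_energy(0.1)   # 0.1 eV barrier (assumed)
print(with_barrier > no_barrier)  # → True
```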
NASA Astrophysics Data System (ADS)
Nex, F.; Gerke, M.
2014-08-01
Image matching techniques can nowadays provide very dense point clouds and they are often considered a valid alternative to LiDAR point clouds. However, photogrammetric point clouds are often characterized by a higher level of random noise compared to LiDAR data and by the presence of large outliers. These problems constitute a limitation in the practical use of photogrammetric data for many applications, but an effective way to enhance the generated point cloud has yet to be found. In this paper we concentrate on the restoration of Digital Surface Models (DSM) computed from dense image matching point clouds. A photogrammetric DSM, i.e. a 2.5D representation of the surface, is still one of the major products derived from point clouds. Four different algorithms devoted to DSM denoising are presented: a standard median filter approach, a bilateral filter, a variational approach (TGV: Total Generalized Variation), as well as a newly developed algorithm, which is embedded into a Markov Random Field (MRF) framework and optimized through graph cuts. The ability of each algorithm to recover the original DSM has been quantitatively evaluated. To do so, a synthetic DSM has been generated and different typologies of noise have been added to mimic the typical errors of photogrammetric DSMs. The evaluation reveals that standard filters like the median and edge-preserving smoothing through a bilateral filter cannot sufficiently remove typical errors occurring in a photogrammetric DSM. The TGV-based approach removes random noise much better, but large areas with outliers still remain. Our own method, which explicitly models the degradation properties of these DSMs, outperforms the others in all aspects.
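The bilateral filter evaluated above weights each neighbour by both spatial distance and height difference, so random noise is damped while breaklines survive. A minimal sketch on a synthetic DSM; the kernel parameters are illustrative, not the paper's settings.

```python
import numpy as np

def bilateral_filter(dsm, radius=2, sigma_d=1.5, sigma_r=0.5):
    """Edge-preserving DSM smoothing: each cell becomes an average of its
    neighbours weighted by spatial distance AND height difference."""
    out = np.copy(dsm)
    rows, cols = dsm.shape
    for r in range(rows):
        for c in range(cols):
            r0, r1 = max(0, r - radius), min(rows, r + radius + 1)
            c0, c1 = max(0, c - radius), min(cols, c + radius + 1)
            patch = dsm[r0:r1, c0:c1]
            rr, cc = np.mgrid[r0:r1, c0:c1]
            w = np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / (2 * sigma_d ** 2)
                       - (patch - dsm[r, c]) ** 2 / (2 * sigma_r ** 2))
            out[r, c] = np.sum(w * patch) / np.sum(w)
    return out

# Synthetic DSM: flat terrain with a 10 m step edge plus Gaussian noise.
rng = np.random.default_rng(0)
dsm = np.zeros((20, 20))
dsm[:, 10:] = 10.0
noisy = dsm + rng.normal(0, 0.1, dsm.shape)
smoothed = bilateral_filter(noisy)
# Noise on the flat part is reduced while the step edge is kept.
print(np.std(smoothed[:, :8]) < np.std(noisy[:, :8]))  # → True
```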
Method of treating contaminated HEPA filter media in pulp process
Hu, Jian S.; Argyle, Mark D.; Demmer, Ricky L.; Mondok, Emilio P.
2003-07-29
A method for reducing contamination of HEPA filters with radioactive and/or hazardous materials is described. The method includes pre-processing of the filter for removing loose particles. Next, the filter medium is removed from the housing, and the housing is decontaminated. Finally, the filter medium is processed as pulp for removing contaminated particles by physical and/or chemical methods, including gravity, flotation, and dissolution of the particles. The decontaminated filter medium is then disposed of as non-RCRA waste; the particles are collected, stabilized, and disposed of according to well known methods of handling such materials; and the liquid medium in which the pulp was processed is recycled.
Process for structural geologic analysis of topography and point data
Eliason, Jay R.; Eliason, Valerie L. C.
1987-01-01
A quantitative method of geologic structural analysis of digital terrain data is described for implementation on a computer. Assuming selected valley segments are controlled by the underlying geologic structure, topographic lows in the terrain data, defining valley bottoms, are detected, filtered, and accumulated into a series of line segments defining contiguous valleys. The line segments are then vectorized to produce vector segments, defining valley segments, which may be indicative of the underlying geologic structure. Coplanar analysis is performed on vector segment pairs to determine which vectors produce planes that represent underlying geologic structure. Point data such as fracture phenomena, which can be related to fracture planes in 3-dimensional space, can be analyzed to define common plane orientations and locations. The vectors, points, and planes are displayed in various formats for interpretation.
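The coplanar analysis above can be sketched as follows: two valley-bottom vectors define a candidate structural plane via their cross product, and a further vector belongs to that plane if it is (nearly) orthogonal to the plane normal. The vectors and angular tolerance here are hypothetical, chosen only to illustrate the geometry.

```python
import numpy as np

def plane_normal(v1, v2):
    """Unit normal of the plane spanned by two (non-parallel) vectors."""
    n = np.cross(v1, v2)
    return n / np.linalg.norm(n)

def is_coplanar(v, normal, tol_deg=5.0):
    """True if vector v lies within tol_deg of the plane with this normal."""
    s = abs(np.dot(v, normal)) / np.linalg.norm(v)  # sine of the out-of-plane angle
    return np.degrees(np.arcsin(s)) < tol_deg

n = plane_normal(np.array([1.0, 0.0, 0.2]), np.array([0.0, 1.0, 0.2]))
print(is_coplanar(np.array([1.0, 1.0, 0.4]), n))   # → True
print(is_coplanar(np.array([0.0, 0.0, 1.0]), n))   # → False
```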
Acceptance and Impact of Point-of-Use Water Filtration Systems in Rural Guatemala.
Larson, Kim L; Hansen, Corrie; Ritz, Michala; Carreño, Diego
2017-01-01
Infants and children in developing countries bear the burden of diarrheal disease. Diarrheal disease is linked to unsafe drinking water and can result in serious long-term consequences, such as impaired immune function and brain growth. There is evidence that point-of-use water filtration systems reduce the prevalence of diarrhea in developing countries. In the summer of 2014, following community forums and interactive workshops, water filters were distributed to 71 households in a rural Maya community in Guatemala. The purpose of this study was to evaluate the uptake of tabletop water filtration systems to reduce diarrheal disease. A descriptive correlational design employing community partnership and empowerment strategies was used. One year postintervention, in the summer of 2015, a bilingual, interdisciplinary research team conducted a house-to-house survey with families who received water filters. Survey data were gathered from the head of household on family demographics, current family health, water filter usage, and type of flooring in the home. Interviews were conducted in Spanish and in partnership with a village leader. Each family received a food package of household staples for their participation. Descriptive statistics were calculated for all responses. Fisher's exact test and odds ratios were used to determine relationships between variables. Seventy-nine percent (n = 56) of the 71 households that received a water filter in 2014 participated in the study. The majority of families (71.4%; n = 40) were using the water filters, and 16 families (28.6%) had broken water filters. Of the families with working water filters, 15% reported diarrhea, while 31% of families with a broken water filter reported diarrhea. Only 55.4% of the homes had concrete flooring. More households with dirt flooring and broken water filters reported a current case of diarrhea.
A record review of attendees at an outreach clinic in this village noted a decrease in intestinal infections from 2014 (53%) to 2015 (32%). A trend suggests that water filter usage was both practically and clinically significant in reducing the incidence of diarrheal disease in this sample. Some homes did not have flat surfaces for water filter storage. Housing conditions should be taken into consideration for future diarrheal disease prevention initiatives. Point-of-use water filters using a community-university partnership can reduce diarrheal disease in rural regions of Guatemala. © 2016 Sigma Theta Tau International.
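The Fisher's exact test and odds-ratio analysis used above can be sketched on a 2x2 table. The counts below are reconstructed from the reported percentages (40 working filters with ~15% diarrhea; 16 broken filters with ~31% diarrhea), so they are approximations, not the study's raw data.

```python
from math import comb

a, b = 6, 34   # working filter: diarrhea / no diarrhea (reconstructed counts)
c, d = 5, 11   # broken filter:  diarrhea / no diarrhea (reconstructed counts)

odds_ratio = (a * d) / (b * c)

# Two-sided Fisher exact p: sum hypergeometric probabilities of all tables
# with the same margins that are no more likely than the observed table.
N, K, n = a + b + c + d, a + c, a + b
p_table = lambda k: comb(K, k) * comb(N - K, n - k) / comb(N, n)
p_obs = p_table(a)
p_value = sum(p_table(k) for k in range(K + 1) if p_table(k) <= p_obs + 1e-12)

print(round(odds_ratio, 2))  # → 0.39 (working filters: lower odds of diarrhea)
print(p_value > 0.05)        # → True (small samples: trend, not significance)
```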
Lunar laser ranging data identification and management
NASA Technical Reports Server (NTRS)
1979-01-01
Activity under the subject grant during the first half of fiscal year 1979 at the University of Texas at Austin is reported. Raw lunar laser ranging data submitted by McDonald Observatory, Fort Davis, Texas and by the Australian Division of National Mapping at Orroral Valley, Australia were processed. This processing includes the filtering of signal events from noise photons, normal point formation, data archive management, and data distribution. System-wide program maintenance and upgrades were carried out wherever and whenever necessary. Lunar laser ranging data are being transmitted from Austin to Paris for the extraction of Earth rotation information during the EROLD campaign.
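The two processing steps named above, rejecting noise photons and compressing the survivors into a normal point, can be sketched with iterative sigma clipping. The residual model and clipping parameters are assumptions for illustration, not the grant's actual pipeline.

```python
import numpy as np

# Synthetic range residuals: tight signal photons plus uniform noise photons.
rng = np.random.default_rng(1)
residuals_ns = rng.normal(0.0, 0.5, 200)                          # signal
residuals_ns = np.append(residuals_ns, rng.uniform(-50, 50, 40))  # noise

def normal_point(res, n_sigma=3.0, n_iter=5):
    """Reject outliers by iterative sigma clipping, then average the rest."""
    kept = res
    for _ in range(n_iter):
        m, s = np.mean(kept), np.std(kept)
        kept = kept[np.abs(kept - m) < n_sigma * s]
    return np.mean(kept), len(kept)

value, n_used = normal_point(residuals_ns)
# The normal point recovers the signal level; noise photons are discarded.
print(abs(value) < 0.5, n_used < 240)  # → True True
```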
1976-01-01
Trickling Filter Fairchild A.F.B. Trickling Filter Town of Medical Lake Lagoon Town of Fairfield Lagoon Town of Millwood Activated Sludge (Extended Aeration...sewer system is subject to high levels of infiltration. The treatment plant has ice problems in winter, trickling filter spreading arm clogging...lagoons. There is need of a routine effluent quantity/quality monitoring program. Tekoa. The trickling filter plant is poorly maintained to the point
Tarumi, Toshiyasu; Small, Gary W; Combs, Roger J; Kroutil, Robert T
2004-04-01
Finite impulse response (FIR) filters and finite impulse response matrix (FIRM) filters are evaluated for use in the detection of volatile organic compounds with wide spectral bands by direct analysis of interferogram data obtained from passive Fourier transform infrared (FT-IR) measurements. Short segments of filtered interferogram points are classified by support vector machines (SVMs) to implement the automated detection of heated plumes of the target analyte, ethanol. The interferograms employed in this study were acquired with a downward-looking passive FT-IR spectrometer mounted on a fixed-wing aircraft. Classifiers are trained with data collected on the ground and subsequently used for the airborne detection. The success of the automated detection depends on the effective removal of background contributions from the interferogram segments. Removing the background signature is complicated when the analyte spectral bands are broad because there is significant overlap between the interferogram representations of the analyte and background. Methods to implement the FIR and FIRM filters while excluding background contributions are explored in this work. When properly optimized, both filtering procedures provide satisfactory classification results for the airborne data. Missed detection rates of 8% or smaller for ethanol and false positive rates of at most 0.8% are realized. The optimization of filter design parameters, the starting interferogram point for filtering, and the length of the interferogram segments used in the pattern recognition is discussed.
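The FIR filtering step above extracts the analyte's interferogram signature while suppressing the background. A hedged sketch of windowed-sinc FIR bandpass filtering on a synthetic two-component signal; the band edges and filter length are illustrative, not the paper's optimized design parameters.

```python
import numpy as np

def fir_bandpass(numtaps, f_lo, f_hi):
    """Windowed-sinc FIR bandpass (band edges as fractions of the sampling rate)."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = 2 * f_hi * np.sinc(2 * f_hi * n) - 2 * f_lo * np.sinc(2 * f_lo * n)
    return h * np.hamming(numtaps)

taps = fir_bandpass(101, 0.15, 0.25)
t = np.arange(400)
in_band = np.cos(2 * np.pi * 0.20 * t)     # analyte-like narrow-band component
background = np.cos(2 * np.pi * 0.05 * t)  # slowly varying background component

# Steady-state gains (mode="valid" avoids edge transients).
gain_in = np.max(np.abs(np.convolve(in_band, taps, mode="valid")))
gain_bg = np.max(np.abs(np.convolve(background, taps, mode="valid")))
print(gain_in > 10 * gain_bg)  # → True: in-band passed, background suppressed
```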
40 CFR 430.127 - Pretreatment standards for new sources (PSNS).
Code of Federal Regulations, 2010 CFR
2010-07-01
...) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Tissue, Filter, Non.... Subpart L [PSNS for non-integrated mills where filter and non-woven papers are produced from purchased...
40 CFR 430.127 - Pretreatment standards for new sources (PSNS).
Code of Federal Regulations, 2011 CFR
2011-07-01
...) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Tissue, Filter, Non.... Subpart L [PSNS for non-integrated mills where filter and non-woven papers are produced from purchased...
A superior edge preserving filter with a systematic analysis
NASA Technical Reports Server (NTRS)
Holladay, Kenneth W.; Rickman, Doug
1991-01-01
A new, adaptive, edge-preserving filter for use in image processing is presented. It has superior performance when compared to other filters. Termed the contiguous K-average, it aggregates pixels by examining all pixels contiguous to an existing cluster and adding the pixel closest to the mean of the existing cluster. The process is iterated until K pixels are accumulated. Rather than simply compare the visual results of processing with this operator to other filters, approaches were developed that allow quantitative evaluation of how well a filter performs. Particular attention is given to the standard deviation of noise within a feature and the stability of imagery under iterative processing. Demonstrations illustrate the ability of several filters to discriminate against noise and retain edges, the effect of filtering as a preprocessing step, and the utility of the contiguous K-average filter when used with remote sensing data.
Selected annotated bibliographies for adaptive filtering of digital image data
Mayers, Margaret; Wood, Lynnette
1988-01-01
Digital spatial filtering is an important tool both for enhancing the information content of satellite image data and for implementing cosmetic effects which make the imagery more interpretable and appealing to the eye. Spatial filtering is a context-dependent operation that alters the gray level of a pixel by computing a weighted average formed from the gray level values of other pixels in the immediate vicinity. Traditional spatial filtering involves passing a particular filter or set of filters over an entire image. This assumes that the filter parameter values are appropriate for the entire image, which in turn is based on the assumption that the statistics of the image are constant over the image. However, the statistics of an image may vary widely over the image, requiring an adaptive or "smart" filter whose parameters change as a function of the local statistical properties of the image. A pixel would then be averaged only with more typical members of the same population. This annotated bibliography cites some of the work done in the area of adaptive filtering. The methods usually fall into two categories: (a) those that segment the image into subregions, each assumed to have stationary statistics, and use a different filter on each subregion, and (b) those that use a two-dimensional "sliding window" to continuously estimate the filter parameters; such methods may operate in either the spatial or frequency domain, or may utilize both.
They may be used to deal with images degraded by space-variant noise, to suppress undesirable local radiometric statistics while enforcing desirable (user-defined) statistics, to treat problems where space-variant point spread functions are involved, to segment images into regions of constant value for classification, or to "tune" images in order to remove (nonstationary) variations in illumination, noise, contrast, shadows, or haze. Since adaptive filtering, like nonadaptive filtering, is used in image processing to accomplish various goals, this bibliography is organized in subsections based on application areas. Contrast enhancement, edge enhancement, noise suppression, and smoothing are typically performed in order to correct degradations introduced by the imaging process (for example, degradations due to the optics and electronics of the sensor, or to blurring caused by the intervening atmosphere, uniform motion, or defocused optics). Some of the papers listed may apply to more than one of the above categories; when this happens the paper is listed under the category for which the paper's emphasis is greatest. A list of survey articles is also supplied. These articles are general discussions on adaptive filters and reviews of work done. Finally, a short list of miscellaneous articles is included which were felt to be sufficiently important, but do not fit into any of the above categories. This bibliography, listing items published from 1970 through 1987, is extensive but by no means complete. It is intended as a guide for scientists and image analysts, listing references for background information as well as areas of significant development in adaptive filtering.
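Category (b) above, a sliding window estimating local statistics, can be sketched with a Lee-style adaptive estimator: where the window looks like pure noise the pixel is pulled toward the local mean, and where local variance is high (edges) the pixel is kept. Parameters and the noise model are assumptions for illustration.

```python
import numpy as np

def adaptive_lee(img, radius=2, noise_var=0.01):
    """Sliding-window adaptive filter driven by local mean and variance."""
    out = np.copy(img)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            win = img[max(0, r - radius):r + radius + 1,
                      max(0, c - radius):c + radius + 1]
            local_mean, local_var = np.mean(win), np.var(win)
            # k -> 0 in flat noisy areas (smooth), k -> 1 on edges (preserve).
            k = max(local_var - noise_var, 0.0) / max(local_var, 1e-12)
            out[r, c] = local_mean + k * (img[r, c] - local_mean)
    return out

rng = np.random.default_rng(2)
flat = rng.normal(0.5, 0.1, (16, 16))   # homogeneous region, noise var = 0.01
print(np.var(adaptive_lee(flat)) < np.var(flat))  # → True: noise suppressed
```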
Recent Flight Results of the TRMM Kalman Filter
NASA Technical Reports Server (NTRS)
Andrews, Stephen F.; Bilanow, Stephen; Bauer, Frank (Technical Monitor)
2002-01-01
The Tropical Rainfall Measuring Mission (TRMM) spacecraft is a nadir pointing spacecraft that nominally controls the roll and pitch attitude based on the Earth Sensor Assembly (ESA) output. TRMM's nominal orbit altitude was 350 km, until raised to 402 km to prolong mission life. During the boost, the ESA experienced a decreasing signal to noise ratio, until sun interference at 393 km altitude made the ESA data unreliable for attitude determination. At that point, the backup attitude determination algorithm, an extended Kalman filter, was enabled. After the boost finished, TRMM reacquired its nadir-pointing attitude, and continued its mission. This paper will briefly discuss the boost and the decision to turn on the backup attitude determination algorithm. A description of the extended Kalman filter algorithm will be given. In addition, flight results from analyzing attitude data and the results of software changes made onboard TRMM will be discussed. Some lessons learned are presented.
Elliott, M A; Stauber, C E; Koksal, F; DiGiano, F A; Sobsey, M D
2008-05-01
Point-of-use (POU) drinking water treatment technology enables those without access to safe water sources to improve the quality of their water by treating it in the home. One of the most promising emerging POU technologies is the biosand filter (BSF), a household-scale, intermittently operated slow sand filter. Over 500,000 people in developing countries currently use the filters to treat their drinking water. However, despite this successful implementation, there has been almost no systematic, process engineering research to substantiate the effectiveness of the BSF or to optimize its design and operation. The major objectives of this research were to: (1) gain an understanding of the hydraulic flow condition within the filter, (2) characterize the ability of the BSF to reduce the concentration of enteric bacteria and viruses in water, and (3) gain insight into the key parameters of filter operation and their effects on filter performance. Three 6-8 week microbial challenge experiments are reported herein in which local surface water was seeded with E. coli, echovirus type 12 and bacteriophages (MS2 and PRD-1) and charged to the filter daily. Tracer tests indicate that the BSF operated at hydraulic conditions closely resembling plug flow. The performance of the filter in reducing microbial concentrations was highly dependent upon (1) filter ripening over weeks of operation and (2) the daily volume charged to the filter. BSF performance was best when less than one pore volume (18.3 L in the filter design studied) was charged to the filter per day, and this has important implications for filter design and operation. Enhanced filter performance due to ripening was generally observed after roughly 30 days. Reductions of E. coli B ranged from 0.3 log10 (50%) to 4 log10, with geometric mean reductions after at least 30 days of operation of 1.9 log10. Echovirus 12 reductions were comparable to those for E. coli B, with a range of 1 log10 to >3 log10 and mean reductions after 30 days of 2.1 log10. Bacteriophage reductions were much lower, ranging from zero to 1.3 log10 (95%) with mean reductions of only 0.5 log10 (70%). These data indicate that virus reduction by BSF may differ substantially depending upon the specific viral agent.
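The log10 reduction values quoted above convert directly to and from percent removal: a 0.3 log10 reduction is ~50% removal and 1.3 log10 is ~95%, matching the pairings in the abstract. A minimal sketch with hypothetical influent/effluent counts:

```python
import math

def log_reduction(influent_cfu, effluent_cfu):
    """Log10 reduction value (LRV) between influent and effluent concentrations."""
    return math.log10(influent_cfu / effluent_cfu)

# Hypothetical counts chosen to reproduce the percent/LRV pairings above.
print(round(log_reduction(1e6, 1e6 * 0.5), 1))   # 50% removal → 0.3
print(round(log_reduction(1e6, 1e6 * 0.05), 1))  # 95% removal → 1.3
```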
Adaptive Filtering Using Recurrent Neural Networks
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.
2005-01-01
A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
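For contrast with the neural-network approach above, the linear/Gaussian baseline it generalizes is compact: a scalar Kalman filter for a random-walk state observed in white noise. The model and noise levels below are illustrative, not taken from the paper.

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """Scalar Kalman filter: random-walk state, noisy direct observations."""
    x, p, estimates = x0, p0, []
    for z in measurements:
        p = p + q                 # predict: state variance grows by process noise q
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the new measurement
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(3)
truth = 1.0
z = rng.normal(truth, 0.2, 300)   # measurement noise variance r = 0.2**2
est = kalman_1d(z)
# After convergence the filtered error is far below the raw measurement noise.
print(np.std(est[100:] - truth) < np.std(z[100:] - truth))  # → True
```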
Passalacqua, Paola; Belmont, Patrick; Staley, Dennis M.; Simley, Jeffery; Arrowsmith, J. Ramon; Bode, Collin A.; Crosby, Christopher; DeLong, Stephen; Glenn, Nancy; Kelly, Sara; Lague, Dimitri; Sangireddy, Harish; Schaffrath, Keelin; Tarboton, David; Wasklewicz, Thad; Wheaton, Joseph
2015-01-01
The study of mass and energy transfer across landscapes has recently evolved to comprehensive considerations acknowledging the role of biota and humans as geomorphic agents, as well as the importance of small-scale landscape features. A contributing and supporting factor to this evolution is the emergence over the last two decades of technologies able to acquire high resolution topography (HRT) (meter and sub-meter resolution) data. Landscape features can now be captured at an appropriately fine spatial resolution at which surface processes operate; this has revolutionized the way we study Earth-surface processes. The wealth of information contained in HRT also presents considerable challenges. For example, selection of the most appropriate type of HRT data for a given application is not trivial. No definitive approach exists for identifying and filtering erroneous or unwanted data, yet inappropriate filtering can create artifacts or eliminate/distort critical features. Estimates of errors and uncertainty are often poorly defined and typically fail to represent the spatial heterogeneity of the dataset, which may introduce bias or error for many analyses. For ease of use, gridded products are typically preferred rather than the more information-rich point cloud representations. Thus many users take advantage of only a fraction of the available data, which has furthermore been subjected to a series of operations often not known or investigated by the user. Lastly, standard HRT analysis work-flows are yet to be established for many popular HRT operations, which has contributed to the limited use of point cloud data. In this review, we identify key research questions relevant to the Earth-surface processes community within the theme of mass and energy transfer across landscapes and offer guidance on how to identify the most appropriate topographic data type for the analysis of interest.
We describe the operations commonly performed from raw data to raster products and we identify key considerations and suggest appropriate work-flows for each, pointing to useful resources and available tools. Future research directions should stimulate further development of tools that take advantage of the wealth of information contained in the HRT data and address the present and upcoming research needs such as the ability to filter out unwanted data, compute spatially variable estimates of uncertainty and perform multi-scale analyses. While we focus primarily on HRT applications for mass and energy transfer, we envision this review to be relevant beyond the Earth-surface processes community for a much broader range of applications involving the analysis of HRT.
The discrete prolate spheroidal filter as a digital signal processing tool
NASA Technical Reports Server (NTRS)
Mathews, J. D.; Breakall, J. K.; Karawas, G. K.
1983-01-01
The discrete prolate spheroidal (DPS) filter is one of the class of nonrecursive finite impulse response (FIR) filters. The DPS filter is superior to other filters in this class in that it has maximum energy concentration in the frequency passband and minimum ringing in the time domain. A mathematical development of the DPS filter properties is given, along with the information required to construct the filter. The properties of this filter were compared with those of the more commonly used filters of the same class. Use of the DPS filter allows for particularly meaningful statements of data time/frequency resolution cell values. The filter forms an especially useful tool for digital signal processing.
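The maximal-energy-concentration sequence the abstract describes can be constructed as the top eigenvector of a standard tridiagonal matrix (the usual numerical route to the Slepian/DPS sequences). A sketch, with N and the half-bandwidth W chosen arbitrarily; this is not the paper's construction code.

```python
import numpy as np

def dpss_window(N, W):
    """Zeroth discrete prolate spheroidal sequence via the tridiagonal
    eigenproblem; W is the half-bandwidth in cycles/sample."""
    i = np.arange(N)
    diag = ((N - 1 - 2 * i) / 2.0) ** 2 * np.cos(2 * np.pi * W)
    off = i[1:] * (N - i[1:]) / 2.0
    T = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    vals, vecs = np.linalg.eigh(T)
    w = vecs[:, np.argmax(vals)]          # top eigenvector = most concentrated
    return w * np.sign(w[N // 2])         # fix the arbitrary sign

w = dpss_window(65, 0.05)
# Energy concentration: almost all spectral energy lies inside |f| <= W.
H = np.abs(np.fft.fft(w, 4096)) ** 2
f = np.fft.fftfreq(4096)
concentration = H[np.abs(f) <= 0.05].sum() / H.sum()
print(concentration > 0.999)  # → True
```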
Filter Function for Wavefront Sensing Over a Field of View
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2007-01-01
A filter function has been derived as a means of optimally weighting the wavefront estimates obtained in image-based phase retrieval performed at multiple points distributed over the field of view of a telescope or other optical system. When the data obtained in wavefront sensing and, more specifically, image-based phase retrieval, are used for controlling the shape of a deformable mirror or other optic used to correct the wavefront, the control law obtained by use of the filter function gives a more balanced optical performance over the field of view than does a wavefront-control law obtained by use of a wavefront estimate obtained from a single point in the field of view.
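The idea above, weighting wavefront estimates from several field points rather than controlling to a single one, amounts to a weighted least-squares fit of the corrector command. The influence matrix, wavefront samples, and weights below are all made up for the sketch; only the weighting concept comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)
G = rng.normal(size=(15, 4))               # response of 3 field points (5 samples each) to 4 actuators (hypothetical)
wavefronts = rng.normal(size=15)           # stacked wavefront-error samples (hypothetical)
weights = np.repeat([1.0, 2.0, 1.0], 5)    # "filter function": emphasize the mid-field point
W = np.diag(weights)

# Weighted least squares: minimize || W^(1/2) (G a - wavefronts) ||
a = np.linalg.solve(G.T @ W @ G, G.T @ W @ wavefronts)
a0, *_ = np.linalg.lstsq(G, wavefronts, rcond=None)  # uniform weighting, for comparison

wn = lambda r: float(r @ W @ r)  # weighted residual norm
# By construction the weighted solution is at least as good in the weighted norm.
print(wn(wavefronts - G @ a) <= wn(wavefronts - G @ a0) + 1e-9)  # → True
```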
Control, Filtering and Prediction for Phased Arrays in Directed Energy Systems
2016-04-30
adaptive optics. Subject terms: control, filtering, prediction, system identification, adaptive optics, laser beam pointing, target tracking, phase... laser beam control; furthermore, wavefront sensors are plagued by the difficulty of maintaining the required alignment and focusing in dynamic mission...developed new methods for filtering, prediction and system identification in adaptive optics for high energy laser systems including phased arrays. The
Characteristics of spectro-temporal modulation frequency selectivity in humans.
Oetjen, Arne; Verhey, Jesko L
2017-03-01
There is increasing evidence that the auditory system shows frequency selectivity for spectro-temporal modulations. A recent study by the authors reported spectro-temporal modulation masking patterns that were in agreement with the hypothesis of spectro-temporal modulation filters in the human auditory system [Oetjen and Verhey (2015). J. Acoust. Soc. Am. 137(2), 714-723]. In the present study, those experimental data and additional data were used to model this spectro-temporal frequency selectivity. The additional data were collected to investigate to what extent the spectro-temporal modulation-frequency selectivity results from a combination of a purely temporal amplitude-modulation filter and a purely spectral amplitude-modulation filter. In contrast to the previous study, thresholds were measured for masker and target modulations with opposite directions, i.e., an upward-pointing target modulation and a downward-pointing masker modulation. The comparison of this data set with previous corresponding data for the same direction of target and masker modulations indicates that a specific spectro-temporal modulation filter is required to simulate all aspects of spectro-temporal modulation frequency selectivity. A model using a modified Gabor filter with a purely temporal and a purely spectral filter predicts the spectro-temporal modulation masking data.
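A 2-D Gabor kernel of the general kind invoked above illustrates why such a filter is direction-selective: the orientation of its carrier separates upward-moving from downward-moving spectro-temporal ripples. All parameter values here are illustrative; this is not the authors' fitted model.

```python
import numpy as np

def gabor_kernel(size, f_t, f_s, sign=+1, sigma=0.25):
    """Gaussian-windowed 2-D cosine carrier over (time x spectral) axes;
    sign selects upward (+1) or downward (-1) ripple orientation."""
    t = np.linspace(-0.5, 0.5, size)
    T, S = np.meshgrid(t, t)
    envelope = np.exp(-(T ** 2 + S ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * (f_t * T + sign * f_s * S))
    return envelope * carrier

up = gabor_kernel(64, 4.0, 4.0, sign=+1)    # upward-ripple detector
down = gabor_kernel(64, 4.0, 4.0, sign=-1)  # downward-ripple detector

t = np.linspace(-0.5, 0.5, 64)
ripple_up = np.cos(2 * np.pi * (4.0 * t[None, :] + 4.0 * t[:, None]))
# The matched (same-direction) filter responds far more strongly.
print(abs(np.sum(up * ripple_up)) > 5 * abs(np.sum(down * ripple_up)))  # → True
```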
Pols, David H.J.; Bramer, Wichor M.; Bindels, Patrick J.E.; van de Laar, Floris A.; Bohnen, Arthur M.
2015-01-01
Physicians and researchers in the field of family medicine often need to find relevant articles in online medical databases for a variety of reasons. Because a search filter may help improve the efficiency and quality of such searches, we aimed to develop and validate search filters to identify research studies of relevance to family medicine. Using a new and objective method for search filter development, we developed and validated 2 search filters for family medicine. The sensitive filter had a sensitivity of 96.8% and a specificity of 74.9%. The specific filter had a specificity of 97.4% and a sensitivity of 90.3%. Our new filters should aid literature searches in the family medicine field. The sensitive filter may help researchers conducting systematic reviews, whereas the specific filter may help family physicians find answers to clinical questions at the point of care when time is limited. PMID:26195683
[Micropore filters for measuring red blood cell deformability and their pore diameters].
Niu, X; Yan, Z
2001-09-01
Micropore filters are the most important components in micropore filtration tests for assessing red blood cell (RBC) deformability. With regard to their appearance and filtration behavior, comparisons are made among the different kinds of filters currently in use. Nickel filters with regular geometric characteristics are found to be more sensitive to the effects of physical, chemical, and especially pathological factors on RBC deformability. We critically review the viewpoint that filters with 3 microns pore diameter are more sensitive to cell volume than to internal viscosity, while filters with 5 microns pore diameter are just the opposite. After analyzing the experimental results with 3 microns and 5 microns filters, we point out that filters with smaller pore diameters are more suitable for assessing RBC deformability.
NASA Astrophysics Data System (ADS)
Zinke, Stephan
2017-02-01
Memory-sensitive applications for remote sensing data require memory-optimized data types in remote sensing products. Hierarchical Data Format version 5 (HDF5) offers user-defined floating-point numbers and integers and the n-bit filter to create data types optimized for memory consumption. The European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) applies a compaction scheme to the disseminated products of the Day and Night Band (DNB) data of the Suomi National Polar-orbiting Partnership (S-NPP) satellite's instrument Visible Infrared Imager Radiometer Suite (VIIRS) through the EUMETSAT Advanced Retransmission Service, converting the original 32-bit floating-point numbers to user-defined floating-point numbers in combination with the n-bit filter for the radiance dataset of the product. The radiance dataset requires a floating-point representation due to the high dynamic range of the DNB. A compression factor of 1.96 is reached by using an automatically determined exponent size and an 8-bit trailing significand, thus reducing the bandwidth requirements for dissemination. It is shown how the parameters needed for user-defined floating-point numbers are derived or determined automatically based on the data present in a product.
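The bit-budget arithmetic behind such a compaction can be sketched in a few lines. The helper below is illustrative, not EUMETSAT's implementation: it scans the base-2 exponent range actually present in the data, sizes the exponent field to cover it (plus one code for zero), and adds a sign bit and an 8-bit trailing significand.

```python
import numpy as np

def packed_bits(data, significand_bits=8):
    """Bits needed for a user-defined float covering the data's exponent range."""
    finite = data[np.isfinite(data) & (data != 0)]
    exponents = np.frexp(finite)[1]                    # base-2 exponents
    span = int(exponents.max() - exponents.min() + 1)
    exponent_bits = int(np.ceil(np.log2(span + 1)))    # +1 leaves a code for zero
    return 1 + exponent_bits + significand_bits        # sign + exponent + significand

# High-dynamic-range stand-in for DNB radiances
radiances = np.logspace(-5, 2, 1000, dtype=np.float32)
bits = packed_bits(radiances)
compression_factor = 32 / bits
```

The achieved factor depends on the exponent spread of the particular product; the paper's 1.96 corresponds to a wider exponent field than this toy dataset needs.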
A Comparative Study of Different Deblurring Methods Using Filters
NASA Astrophysics Data System (ADS)
Srimani, P. K.; Kavitha, S.
2011-12-01
This paper studies the restoration of Gaussian-blurred images using four deblurring techniques, viz., the Wiener filter, the regularized filter, the Lucy-Richardson deconvolution algorithm, and the blind deconvolution algorithm, given knowledge of the Point Spread Function (PSF) of the blurred image. These are applied to a scanned image of a seven-month fetus in the womb and compared with one another, so as to choose the best technique for restoring or deblurring the image. The paper also studies restoration of the blurred image using a Regularized Filter (RF) with no information about the PSF, applying the same four techniques after estimating the PSF. The number of iterations and the weight threshold used to choose the best PSF guesses for restoring or deblurring the image are determined.
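The first of the compared techniques can be sketched with numpy alone: frequency-domain Wiener deconvolution with a known Gaussian PSF. This is a generic illustration, not the paper's exact setup; the constant K stands in for the (unknown) noise-to-signal power ratio.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered, normalized Gaussian point spread function."""
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconv(blurred, psf, K=1e-3):
    """Wiener filter in the frequency domain: H* / (|H|^2 + K)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + K) * G))

# Blur a simple test image (circular convolution) and restore it
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
psf = gaussian_psf(img.shape, sigma=2.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconv(blurred, psf, K=1e-3)
```

With noise present, larger K trades sharpness for noise suppression, which is the tuning question the comparison in the paper addresses.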
Albers, Christian Nyrop; Ellegaard-Jensen, Lea; Hansen, Lars Hestbjerg; Sørensen, Sebastian R
2018-02-01
Ammonium oxidation to nitrite and then to nitrate (nitrification) is a key process in many waterworks treating groundwater to make it potable. In rapid sand filters, nitrifying microbial communities may evolve naturally from groundwater bacteria entering the filters. However, in new filters this may take several months, and in some cases the nitrification process is never sufficiently rapid to be efficient or is only performed partially, with nitrite as an undesired end product. The present study reports the first successful priming of nitrification in a rapid sand filter treating groundwater. It is shown that nitrifying communities could be enriched by microbiomes from well-functioning rapid sand filters in waterworks and that the enriched nitrifying consortium could be used to inoculate fresh filters, significantly shortening the time taken for the nitrification process to start. The key nitrifiers in the enrichment were different from those in the well-functioning filter, but similar to those that initiated the nitrification process in fresh filters without inoculation. Whether or not the nitrification was primed with the enriched nitrifying consortium, the bacteria performing the nitrification process during start-up appeared to be slowly outcompeted by Nitrospira, the dominant nitrifying bacterium in well-functioning rapid sand filters. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kim, Sangmin; Raphael, Patrick D; Oghalai, John S; Applegate, Brian E
2016-04-01
Swept-laser sources offer a number of advantages for Phase-sensitive Optical Coherence Tomography (PhOCT). However, inter- and intra-sweep variability leads to calibration errors that adversely affect phase sensitivity. While there are several approaches to overcoming this problem, our preferred method is to simply calibrate every sweep of the laser. This approach offers high accuracy and phase stability at the expense of a substantial processing burden. In this approach, the Hilbert phase of the interferogram from a reference interferometer provides the instantaneous wavenumber of the laser, but is computationally expensive. Fortunately, the Hilbert transform may be approximated by a Finite Impulse-Response (FIR) filter. Here we explore the use of several FIR filter based Hilbert transforms for calibration, explicitly considering the impact of filter choice on phase sensitivity and OCT image quality. Our results indicate that the complex FIR filter approach is the most robust and accurate among those considered. It provides similar image quality and slightly better phase sensitivity than the traditional FFT-IFFT based Hilbert transform while consuming fewer resources in an FPGA implementation. We also explored utilizing the Hilbert magnitude of the reference interferogram to calculate an ideal window function for spectral amplitude calibration. The ideal window function is designed to carefully control sidelobes on the axial point spread function. We found that after a simple chromatic correction, calculating the window function using the complex FIR filter and the reference interferometer gave similar results to window functions calculated using a mirror sample and the FFT-IFFT Hilbert transform. Hence, the complex FIR filter can enable accurate and high-speed calibration of the magnitude and phase of spectral interferograms.
Filtering of the Radon transform to enhance linear signal features via wavelet pyramid decomposition
NASA Astrophysics Data System (ADS)
Meckley, John R.
1995-09-01
The information content in many signal processing applications can be reduced to a set of linear features in a 2D signal transform. Examples include the narrowband lines in a spectrogram, ship wakes in a synthetic aperture radar image, and blood vessels in a medical computer-aided tomography scan. The line integrals that generate the values of the projections of the Radon transform can be characterized as a bank of matched filters for linear features. This localization of energy in the Radon transform for linear features can be exploited to enhance these features and to reduce noise by filtering the Radon transform with a filter explicitly designed to pass only linear features, and then reconstructing a new 2D signal by inverting the new filtered Radon transform (i.e., via filtered backprojection). Previously used methods for filtering the Radon transform include Fourier based filtering (a 2D elliptical Gaussian linear filter) and a nonlinear filter ((Radon xfrm)**y with y >= 2.0). Both of these techniques suffer from the mismatch of the filter response to the true functional form of the Radon transform of a line. The Radon transform of a line is not a point but is a function of the Radon variables (rho, theta) and the total line energy. This mismatch leads to artifacts in the reconstructed image and a reduction in achievable processing gain. The Radon transform for a line is computed as a function of angle and offset (rho, theta) and the line length. The 2D wavelet coefficients are then compared for the Haar wavelets and the Daubechies wavelets. These filter responses are used as frequency filters for the Radon transform. The filtering is performed on the wavelet pyramid decomposition of the Radon transform by detecting the most likely positions of lines in the transform and then by convolving the local area with the appropriate response and zeroing the pyramid coefficients outside of the response area. 
The response area is defined to contain 95% of the total wavelet coefficient energy. The detection algorithm provides an estimate of the line offset, orientation, and length, which is then used to index the appropriate filter shape. Additional wavelet pyramid decomposition is performed in areas of high energy to refine the line position estimate. After filtering, the new Radon transform is generated by inverting the wavelet pyramid. The Radon transform is then inverted by filtered backprojection to produce the final 2D signal estimate with the enhanced linear features. The wavelet-based method is compared to both the Fourier and the nonlinear filtering with examples of sparse and dense shapes in imaging, acoustics and medical tomography, using test images of noisy concentric lines, a real spectrogram of a blowfish (a very nonstationary spectrum), and the Shepp-Logan computed tomography phantom image. Both qualitative and derived quantitative measures demonstrate the improvement of wavelet-based filtering. Additional research is suggested based on these results. Open questions include which level(s) to use for detection and filtering, because multiple-level representations exist. The lower levels are smoother at reduced spatial resolution, while the higher levels provide better response to edges. Several examples are discussed based on analytical and phenomenological arguments.
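The premise that a line's energy localizes at a single (rho, theta) neighborhood can be seen with a naive rotate-and-sum Radon transform. The sketch below (numpy/scipy only, not the paper's wavelet pipeline) detects a noisy line as a peak in the sinogram:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[32, :] = 1.0                                    # a horizontal line
img += 0.1 * rng.normal(size=img.shape)             # background noise

# Naive Radon transform: rotate, then integrate along image columns
angles = np.arange(0, 180, 2)
sinogram = np.stack(
    [ndimage.rotate(img, a, reshape=False, order=1).sum(axis=0) for a in angles],
    axis=1,
)

# The line's energy concentrates at one (rho, theta) location
rho_idx, theta_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
detected_angle = angles[theta_idx]
```

Filtering the sinogram around such detected peaks and backprojecting is the enhancement strategy the paper refines with wavelet-shaped responses.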
Optimizing the Distribution of Tie Points for the Bundle Adjustment of HRSC Image Mosaics
NASA Astrophysics Data System (ADS)
Bostelmann, J.; Breitkopf, U.; Heipke, C.
2017-07-01
For a systematic mapping of the Martian surface, the Mars Express orbiter is equipped with a multi-line scanner: since the beginning of 2004 the High Resolution Stereo Camera (HRSC) has regularly acquired long image strips. By now more than 4,000 strips covering nearly the whole planet are available. With its nine channels, each with a different viewing direction and some with different optical filters, each strip provides 3D and color information and allows the generation of digital terrain models (DTMs) and orthophotos. To map larger regions, neighboring HRSC strips can be combined into DTM and orthophoto mosaics. The global mapping scheme Mars Chart 30 (MC-30) is used to define the extent of these mosaics. To avoid unreasonably large data volumes, each MC-30 tile is divided into two parts, each combining about 90 strips. To ensure a seamless fit of these strips, several radiometric and geometric corrections are applied in the photogrammetric process. A simultaneous bundle adjustment of all strips as a block is carried out to estimate their precise exterior orientation. Because the size, position, resolution and image quality of the strips in these blocks are heterogeneous, the quality and distribution of the tie points also vary. In the absence of ground control points, heights of a global terrain model are used as reference information, and for this task a regular distribution of the tie points is preferable. In addition, their total number should be limited for computational reasons. In this paper, we present an algorithm which optimizes the distribution of tie points under these constraints. A large number of tie points used as input is reduced without affecting the geometric stability of the block, by preserving connections between strips. This stability is achieved by using a regular grid in object space and discarding, for each grid cell, points which are redundant for the block adjustment.
The set of tie points filtered by the algorithm shows a more homogeneous distribution and is considerably smaller. Used for the block adjustment, it yields results of equal quality with significantly shorter computation time. We present experiments with MC-30 half-tile blocks, which confirm that the approach achieves a stable and faster bundle adjustment. The described method is used for the systematic processing of HRSC data.
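A minimal version of the grid-based reduction can be sketched as follows; the function, parameters, and the strip-count preference are illustrative, not the authors' code. Points are binned into a regular object-space grid, and within each cell only the points observed in the most strips (those that preserve inter-strip connections) are kept:

```python
def thin_tie_points(xy, strip_ids, cell=10.0, k=1):
    """Keep at most k points per grid cell, preferring well-connected points.

    xy: sequence of (x, y) object-space coordinates.
    strip_ids: for each point, the set of strip ids in which it was observed.
    """
    cells = {}
    for i, (x, y) in enumerate(xy):
        key = (int(x // cell), int(y // cell))
        cells.setdefault(key, []).append(i)
    keep = []
    for idx in cells.values():
        # prefer points seen in many strips: they stabilize the block
        idx.sort(key=lambda i: -len(strip_ids[i]))
        keep.extend(idx[:k])
    return sorted(keep)

# Two points share a cell; the one connecting two strips survives
kept = thin_tie_points([[1.0, 1.0], [2.0, 2.0], [15.0, 15.0]],
                       [{1}, {1, 2}, {3}])           # -> [1, 2]
```

The cell size controls the trade-off between tie-point density and adjustment cost.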
Intrinsic properties of cupric oxide nanoparticles enable effective filtration of arsenic from water
McDonald, Kyle J.; Reynolds, Brandon; Reddy, K. J.
2015-01-01
The contamination of arsenic in human drinking water supplies is a serious global health concern. Despite multiple years of research, sustainable arsenic treatment technologies have yet to be developed. This study demonstrates the intrinsic abilities of cupric oxide nanoparticles (CuO-NP) towards arsenic adsorption and the development of a point-of-use filter for field application. X-ray diffraction and X-ray photoelectron spectroscopy experiments were used to examine adsorption, desorption, and readsorption of aqueous arsenite and arsenate by CuO-NP. Field experiments were conducted with a point-of-use filter, coupled with real-time arsenic monitoring, to remove arsenic from domestic groundwater samples. The CuO-NP were regenerated by desorbing arsenate via increasing pH above the zero point of charge. Results suggest an effective oxidation of arsenite to arsenate on the surface of CuO-NP. Naturally occurring arsenic was effectively removed by both as-prepared and regenerated CuO-NP in a field demonstration of the point-of-use filter. A sustainable arsenic mitigation model for contaminated water is proposed. PMID:26047164
40 CFR 430.125 - New source performance standards (NSPS).
Code of Federal Regulations, 2011 CFR
2011-07-01
... GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Tissue, Filter, Non-Woven, and... of 5.0 to 9.0 at all times. Subpart L [NSPS for non-integrated mills where filter and non-woven...
40 CFR 430.125 - New source performance standards (NSPS).
Code of Federal Regulations, 2010 CFR
2010-07-01
... GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Tissue, Filter, Non-Woven, and... of 5.0 to 9.0 at all times. Subpart L [NSPS for non-integrated mills where filter and non-woven...
NASA Astrophysics Data System (ADS)
Xiao, Ze-xin; Chen, Kuan
2008-03-01
A biochemical analyzer is one of the important instruments in clinical diagnosis, and its optical system is a key component. The operation of this optical system can be regarded as three stages. The first transforms polychromatic light into monochromatic light. The second converts the monochromatic light signal, which carries the information of the measured sample, into an electrical signal using a photoelectric detector. The last sends the signal to the data-processing system via the control system. Generally, there are three types of monochromators: prisms, optical gratings, and narrow-band-pass filters. Among these, narrow-band-pass filters are widely used in semi-automatic biochemical analyzers. Analysis of the principle of a biochemical analyzer based on a narrow-band-pass filter shows that the optical system has three features. First, the optical path is a non-imaging system. Second, the system covers a wide spectral region containing visible and ultraviolet light. Third, it is a monochromatic-light system with a small aperture and a small field. Therefore, the design goals of this optical system are: (1) low transmission loss of luminous energy in the system; (2) efficient coupling of luminous energy to the detector, mainly by correcting spherical aberration. Practice suggests the following image-quality criteria: (1) the dispersion-circle diameter should equal 125% of the effective pixel width of the receiving device, and 80% of the energy of a point target should fall within the effective pixel width inside this dispersion circle; (2) evaluated by MTF, at a spatial frequency of 20 lp/mm the MTF should not be lower than 0.6.
The optical system should accommodate a wide spectrum of ultraviolet and visible light, but with defocus optimization the detector image plane can only suit the majority of the visible spectrum, while the image plane for violet and ultraviolet light is displaced considerably. Traditional biochemical-analyzer optical designs do not fully consider this point; the authors introduce an effective image-plane compensation measure that greatly increases the reception efficiency for violet and ultraviolet light.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Washington University in St Louis, St Louis, MO; Li, H. Harlod
Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters; they are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast, allowing automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip-limiting parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
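The pipeline (high-pass by subtracting a Gaussian-smoothed copy, then histogram equalization, scored by entropy) can be sketched with fixed parameters. This is a simplified stand-in: global rank-based equalization replaces CLAHE, and sigma and w are illustrative rather than the optimized values the abstract describes.

```python
import numpy as np
from scipy import ndimage

def enhance(img, sigma=8.0, w=0.7):
    """High-pass (subtract weighted Gaussian blur), then equalize by rank."""
    img = img.astype(float)
    hp = img - w * ndimage.gaussian_filter(img, sigma)
    ranks = np.argsort(np.argsort(hp.ravel()))          # 0..n-1 intensity ranks
    return (ranks / (ranks.size - 1)).reshape(img.shape)  # values in [0, 1]

def entropy(img, bins=64):
    """Shannon entropy of the intensity histogram over [0, 1]."""
    p, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
low_contrast = 0.5 + 0.05 * rng.normal(size=(64, 64))   # flat, murky image
gain = entropy(enhance(low_contrast)) - entropy(low_contrast)
```

The entropy gain is the objective the paper's interior-point optimization maximizes when tuning the three parameters.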
Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy
NASA Astrophysics Data System (ADS)
Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong
2011-06-01
With the use of adaptive optics (AO), ocular aberrations can be compensated to obtain high-resolution images of the living human retina. However, the wavefront correction is not perfect due to wavefront measurement error and hardware restrictions. Thus, it is necessary to use a deconvolution algorithm to recover the retinal images. In this paper, a blind deconvolution technique called the incremental Wiener filter is used to restore adaptive optics confocal scanning laser ophthalmoscope (AOSLO) images. The point-spread function (PSF) measured by the wavefront sensor is only used as an initial value of our algorithm. We also implement the incremental Wiener filter on a graphics processing unit (GPU) in real time. When the image size is 512 × 480 pixels, six iterations of our algorithm take only about 10 ms. Retinal blood vessels as well as cells in retinal images are restored by our algorithm, and the PSFs are also revised. Retinal images with and without adaptive optics are both restored. The results show that the incremental Wiener filter reduces noise and improves image quality.
Design and development of an ultrasound calibration phantom and system
NASA Astrophysics Data System (ADS)
Cheng, Alexis; Ackerman, Martin K.; Chirikjian, Gregory S.; Boctor, Emad M.
2014-03-01
Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid body transformation between a tracked marker attached to the ultrasound transducer and the ultrasound image. A phantom or model with known geometry is also required. In this work, we design and test an ultrasound calibration phantom and software. The two main considerations in this work are utilizing our knowledge of ultrasound physics to design the phantom and delivering an easy-to-use calibration process to the user. We explore the use of a three-dimensional printer to create the phantom in its entirety without need for user assembly. We have also developed software to automatically segment the three-dimensional printed rods from the ultrasound image by leveraging knowledge about the shape and scale of the phantom. In this work, we present preliminary results from using this phantom to perform ultrasound calibration. To test the efficacy of our method, we match the projection of the points segmented from the image to the known model and calculate the sum of squared differences between corresponding points for several combinations of motion-generation and filtering methods. The best-performing combination of motion and filtering techniques had an error of 1.56 mm and a standard deviation of 1.02 mm.
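The residual-error evaluation between segmented points and a known model rests on a rigid point-set fit, which can be sketched with the Kabsch/Procrustes method. The phantom geometry, noise level, and transform below are illustrative, not the authors' data.

```python
import numpy as np

def rigid_fit(P, Q):
    """Find R, t minimizing ||R @ P + t - Q|| for paired 3xN point sets."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(1)
model = rng.uniform(0, 50, size=(3, 20))                # known phantom points (mm)
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
meas = R_true @ model + np.array([[5.0], [2.0], [1.0]]) + 0.1 * rng.normal(size=(3, 20))

R, t = rigid_fit(meas, model)
resid = np.sqrt(((R @ meas + t - model) ** 2).sum(axis=0))  # per-point error (mm)
```

The mean of `resid` plays the role of the millimeter error figures reported for each motion/filtering combination.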
NASA Astrophysics Data System (ADS)
Corbetta, Matteo; Sbarufatti, Claudio; Giglio, Marco; Todd, Michael D.
2018-05-01
The present work critically analyzes the probabilistic definition of dynamic state-space models subject to Bayesian filters used for monitoring and predicting monotonic degradation processes. The study focuses on the selection of the random process, often called process noise, which is a key perturbation source in the evolution equation of particle filtering. Despite the large number of applications of particle filtering predicting structural degradation, the adequacy of the picked process noise has not been investigated. This paper reviews existing process noise models that are typically embedded in particle filters dedicated to monitoring and predicting structural damage caused by fatigue, which is monotonic in nature. The analysis emphasizes that existing formulations of the process noise can jeopardize the performance of the filter in terms of state estimation and remaining life prediction (i.e., damage prognosis). This paper subsequently proposes an optimal and unbiased process noise model and a list of requirements that the stochastic model must satisfy to guarantee high prognostic performance. These requirements are useful for future and further implementations of particle filtering for monotonic system dynamics. The validity of the new process noise formulation is assessed against experimental fatigue crack growth data from a full-scale aeronautical structure using dedicated performance metrics.
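The role of the process noise in a particle filter for monotonic dynamics can be illustrated with a bootstrap filter on a toy exponential degradation model. The model, the non-negative log-increment noise (which keeps every trajectory non-decreasing, one of the requirements discussed), and all numbers are hypothetical, not the paper's fatigue model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps = 2000, 40
rate, meas_sd = 0.05, 0.05

truth = 1.0
particles = np.full(n, 1.0)
for _ in range(steps):
    truth *= np.exp(rate)                               # true monotonic growth
    z = truth + rng.normal(0.0, meas_sd)                # noisy observation
    # propagate with non-negative log-increments: state can never decrease
    particles *= np.exp(np.abs(rng.normal(rate, 0.01, n)))
    w = np.exp(-0.5 * ((z - particles) / meas_sd) ** 2)  # Gaussian likelihood
    w /= w.sum()
    particles = particles[rng.choice(n, n, p=w)]        # multinomial resampling
estimate = float(particles.mean())
```

A symmetric (e.g. zero-mean Gaussian) process noise on the state itself would let particles shrink, which is the kind of mismatch with monotonic degradation the paper argues degrades prognostic performance.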
Modernisation of the Narod fluxgate electronics at Budkov Geomagnetic Observatory
NASA Astrophysics Data System (ADS)
Vlk, Michal
2013-04-01
From the signal point of view, a fluxgate unit is a low-frequency parametric up-converter in which the output signal is picked up in bands near the second harmonic of the pump frequency fp (sometimes called the idler for historical reasons), and the purity of the idler is improved by the orthogonal construction of the pump and pick-up coils. In our concept, the pump source uses a Heegner quartz oscillator near 8 MHz, a synchronous divider to 16 kHz (fp) and a switched current booster. A rectangular pulse is used for feeding the original ferroresonant pump source, with a neutralizing transformer in the case of symmetric shielded cabling. The input transformer has a split primary winding for use with symmetrical shielded input cabling, and a secondary winding tuned by a polystyrene capacitor and loaded by an inverting integrator bridged by a capacitor; this structure behaves like a resistor cooled to low temperature. The next stage is a bandpass filter (differentiator) with its gain tuned to 2 fp, built with leaky FDNRs and followed by a current booster. Another part of the system is the low-noise peak-elimination and bias circuit. The heart of the system is a 120-V precision source which uses a 3.3-V Zener diode chain and a thermistor bridge in the feedback. The peak-elimination logic consists of an envelope detector, comparators, an asynchronous counter in hardwired logic, a set of weighted resistor chains and discrete MOS switches operating in current mode. All HV components are mounted free-standing in air to prevent ground leakage. After a 200 m long coaxial line, the signal is galvanically separated by a transformer and fed into the A/D converter, an ordinary HD audio (96 kHz) sound card. The real sample rate is reconstructed by a-posteriori data processing once the statistical properties of the incoming samples are known. The sampled signal is band-pass filtered with a 200-Hz filter centered at 2 fp. The signal is then fed through a first-order allpass centered at 2 fp; the result approximates the Hilbert transform sufficiently well for detecting the envelope via the root-sum-of-squares rule.
The signal is further decimated via IIR filters to a sample rate of 187.5 Hz. Raw instrument data files are saved hourly as floating-point binary files and are marked by time stamps obtained from an NTP server. A-posteriori processing of the (plesiochronous) instrument data consists of downsampling by IIRs to 12 Hz, irrational (time-mark driven) upsampling to 13 Hz, and then applying the INTERMAGNET standard FIR filter (5 sec to 1 min) to obtain 1-min data. Because the range of the signal-processing system is about 60 nT (the range of the peak-elimination circuit is 3.8 uT), the resulting magnetograms look like the La Cour ones.
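The envelope-detection step (band signal near 2 fp, a 90-degree shift via a first-order allpass, then the root of the sum of squares) can be sketched as follows. The sampling rate and fp match the text, but the test signal and the allpass coefficient derivation are illustrative.

```python
import numpy as np
from scipy import signal

fs, fp = 96000.0, 16000.0
f0 = 2 * fp                                         # idler near 2nd harmonic

t = np.arange(4096) / fs
env_true = 1.0 + 0.3 * np.sin(2 * np.pi * 50 * t)   # slow amplitude variation
x = env_true * np.sin(2 * np.pi * f0 * t)

# First-order allpass H(z) = (a + z^-1)/(1 + a z^-1), phase -90 deg at f0
tn = np.tan(np.pi * f0 / fs)
a = (tn - 1) / (tn + 1)
y = signal.lfilter([a, 1.0], [1.0, a], x)           # quadrature component

env = np.sqrt(x**2 + y**2)                          # root-sum-of-squares envelope
err = np.abs(env[500:] - env_true[500:]).max()      # after the filter settles
```

A single first-order allpass gives an exact 90-degree shift only at f0 itself, which is adequate here because the band-pass stage has already confined the signal to a narrow band around 2 fp.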
40 CFR 86.112-91 - Weighing chamber (or room) and microgram balance specifications.
Code of Federal Regulations, 2014 CFR
2014-07-01
... temperature of the chamber in which the particulate filters are conditioned and weighed shall be maintained to within ±10 °F (6 °C) of a set point between 68 °F (20 °C) and 86 °F (30 °C) during all filter conditioning and filter weighing. A continuous recording of the temperature is required. (2) Humidity. The...
40 CFR 86.112-91 - Weighing chamber (or room) and microgram balance specifications.
Code of Federal Regulations, 2012 CFR
2012-07-01
... temperature of the chamber in which the particulate filters are conditioned and weighed shall be maintained to within ±10 °F (6 °C) of a set point between 68 °F (20 °C) and 86 °F (30 °C) during all filter conditioning and filter weighing. A continuous recording of the temperature is required. (2) Humidity. The...
40 CFR 86.112-91 - Weighing chamber (or room) and microgram balance specifications.
Code of Federal Regulations, 2010 CFR
2010-07-01
... temperature of the chamber in which the particulate filters are conditioned and weighed shall be maintained to within ±10 °F (6 °C) of a set point between 68 °F (20 °C) and 86 °F (30 °C) during all filter conditioning and filter weighing. A continuous recording of the temperature is required. (2) Humidity. The...
40 CFR 86.112-91 - Weighing chamber (or room) and microgram balance specifications.
Code of Federal Regulations, 2011 CFR
2011-07-01
... temperature of the chamber in which the particulate filters are conditioned and weighed shall be maintained to within ±10 °F (6 °C) of a set point between 68 °F (20 °C) and 86 °F (30 °C) during all filter conditioning and filter weighing. A continuous recording of the temperature is required. (2) Humidity. The...
40 CFR 86.112-91 - Weighing chamber (or room) and microgram balance specifications.
Code of Federal Regulations, 2013 CFR
2013-07-01
... temperature of the chamber in which the particulate filters are conditioned and weighed shall be maintained to within ±10 °F (6 °C) of a set point between 68 °F (20 °C) and 86 °F (30 °C) during all filter conditioning and filter weighing. A continuous recording of the temperature is required. (2) Humidity. The...
Lantagne, Daniele; Klarman, Molly; Mayer, Ally; Preston, Kelsey; Napotnik, Julie; Jellison, Kristen
2010-06-01
Diarrhoeal diseases cause an estimated 1.87 million child deaths per year. Point-of-use filtration using locally made ceramic filters improves the microbiological quality of stored drinking water and prevents diarrhoeal disease. Scaling-up ceramic filtration is inhibited by a lack of universal quality control standards. We investigated filter production variables to determine their effect on microbiological removal during 5-6 weeks of simulated normal use. Decreases in the clay:sawdust ratio and changes in the burnable decreased the effectiveness of the filter. Method of silver application and shape of filter did not impact filter effectiveness. A maximum flow rate of 1.7 l/hr was established as a potential quality control measure for one particular filter to ensure 99% (2-log(10)) removal of total coliforms. Further research is indicated to determine additional production variables associated with filter effectiveness and to develop standardized filter production procedures prior to scaling-up.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacks, Robert; Stormo, Julie; Rose, Coralie
Data have demonstrated that filter media lose tensile strength and the ability to resist the effects of moisture as a function of age. Testing of new and aged filters needs to be conducted to correlate reduction of physical strength of HEPA media to the ability of filters to withstand upset conditions. Appendix C of the Nuclear Air Cleaning Handbook provides the basis for DOE's HEPA filter service life guidance. However, this appendix also points out the variability of data, and it does not correlate performance of aged filters to degradation of media due to age. Funding awarded by NSR&D to initiate full-scale testing of aged HEPA filters addresses the issue of correlating media degradation due to age with testing of new and aged HEPA filters under a generic design basis event set of conditions. This funding has accelerated the process of describing this study via: (1) establishment of a Technical Working Group of all stakeholders, (2) development and approval of a test plan, (3) development of testing and autopsy procedures, (4) acquiring an initial set of aged filters, (5) testing the initial set of aged filters, and (6) developing the filter test report content for each filter tested. This funding was very timely and has moved the project forward by at least three years. Activities have been correlated with testing conducted under DOE-EM funding for evaluating performance envelopes for AG-1 Section FC Separator and Separatorless filters. This coordination allows correlation of results from the NSR&D Aged Filter Study with results from testing new filters of the Separator and Separatorless Filter Study. DOE-EM efforts have identified approximately 100 more filters of various ages that have been stored under Level B conditions. NSR&D funded work allows time for rigorous review among subject matter experts before moving forward with development of the testing matrix that will be used for additional filters.
The NSR&D data sets are extremely valuable for establishing a self-improving, NQA-1 program capable of advancing the service lifetime study of HEPA filters. The data and reports are available for careful and critical review by subject matter experts before the next set of filters is tested and can be found in the appendices of this final report. NSR&D funds have not only initiated the Aged HEPA Filter Study alluded to in Appendix C of the NACH, but have also enhanced the technical integrity and effectiveness of all of the follow-on testing for this long-term study.
Symmetric Phase Only Filtering for Improved DPIV Data Processing
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
2006-01-01
The standard approach in Digital Particle Image Velocimetry (DPIV) data processing is to use Fast Fourier Transforms to obtain the cross-correlation of two single exposure subregions, where the location of the cross-correlation peak is representative of the most probable particle displacement across the subregion. This standard DPIV processing technique is analogous to Matched Spatial Filtering, a technique commonly used in optical correlators to perform the cross-correlation operation. Phase only filtering is a well-known variation of Matched Spatial Filtering, which when used to process DPIV image data yields correlation peaks which are narrower and up to an order of magnitude larger than those obtained using traditional DPIV processing. In addition to possessing desirable correlation plane features, phase only filters also provide superior performance in the presence of DC noise in the correlation subregion. When DPIV image subregions contaminated with surface flare light or high background noise levels are processed using phase only filters, the correlation peak pertaining only to the particle displacement is readily detected above any signal stemming from the DC objects. Tedious image masking or background image subtraction is not required. Both theoretical and experimental analyses of the signal-to-noise ratio performance of the filter functions are presented. In addition, a new Symmetric Phase Only Filtering (SPOF) technique, which is a variation on the traditional phase only filtering technique, is described and demonstrated. The SPOF technique exceeds the performance of the traditionally accepted phase only filtering techniques and is easily implemented in standard DPIV FFT-based correlation processing with no significant computational performance penalty. An "Automatic" SPOF algorithm is presented which determines when the SPOF is able to provide better signal-to-noise results than traditional PIV processing.
The SPOF-based optical correlation processing approach is presented as a new paradigm for more robust cross-correlation processing of low signal-to-noise ratio DPIV image data.
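The phase-only idea can be sketched in a few lines: normalize the cross-power spectrum to unit magnitude before the inverse transform, and the correlation peak sharpens to a near-delta. This is a generic phase-only correlation sketch in NumPy, not the SPOF implementation described in the abstract; the synthetic image and function names are illustrative.

```python
import numpy as np

def cross_correlate(a, b, phase_only=False):
    """FFT-based cross-correlation of two subregions; optionally phase-only."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    if phase_only:
        mag = np.abs(cross)
        cross = cross / np.where(mag == 0, 1.0, mag)   # keep phase, unit magnitude
    return np.fft.fftshift(np.real(np.fft.ifft2(cross)))

def peak_displacement(corr):
    """Offset of the correlation peak from the subregion centre."""
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    return iy - corr.shape[0] // 2, ix - corr.shape[1] // 2

# Synthetic particle pattern shifted by (3, 5) pixels between exposures.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, 5), axis=(0, 1))
dy, dx = peak_displacement(cross_correlate(shifted, img, phase_only=True))
```

For an exact circular shift the phase-only correlation plane is a single sharp peak at the displacement, which is what makes the peak easier to detect above DC clutter.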
Baron, Julianne L; Peters, Tammy; Shafer, Raymond; MacMurray, Brian; Stout, Janet E
2014-11-01
Opportunistic waterborne pathogens (eg, Legionella, Pseudomonas) may persist in water distribution systems despite municipal chlorination and secondary disinfection and can cause health care-acquired infections. Point-of-use (POU) filtration can limit exposure to pathogens; however, their short maximum lifetime and membrane clogging have limited their use. A new faucet filter rated at 62 days was evaluated at a cancer center in Northwestern Pennsylvania. Five sinks were equipped with filters, and 5 sinks served as controls. Hot water was collected weekly for 17 weeks and cultured for Legionella, Pseudomonas, and total bacteria. Legionella was removed from all filtered samples for 12 weeks. One colony was recovered from 1 site at 13 weeks; however, subsequent tests were negative through 17 weeks of testing. Total bacteria were excluded for the first 2 weeks, followed by an average of 1.86 log reduction in total bacteria compared with controls. No Pseudomonas was recovered from filtered or control faucets. This next generation faucet filter eliminated Legionella beyond the 62 day manufacturers' recommended maximum duration of use. These new POU filters will require fewer change-outs than standard filters and could be a cost-effective method for preventing exposure to Legionella and other opportunistic waterborne pathogens in hospitals with high-risk patients. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Io's Sodium Cloud (Clear and Green-Yellow Filters)
NASA Technical Reports Server (NTRS)
1997-01-01
The green-yellow filter and clear filter images of Io which were released over the past two days were originally exposed on the same frame. The camera pointed in slightly different directions for the two exposures, placing a clear filter image of Io on the top half of the frame, and a green-yellow filter image of Io on the bottom half of the frame. This picture shows that entire original frame in false color, the most intense emission appearing white.
East is to the right. Most of Io's visible surface is in shadow, though one can see part of an illuminated crescent on its western side. The burst of white light near Io's eastern equatorial edge (most distinctive in the green filter image) is sunlight scattered by the plume of the volcano Prometheus. There is much more bright light near Io in the clear filter image, since that filter's wider wavelength range admits more scattered light from Prometheus' sunlit plume and Io's illuminated crescent. Thus in the clear filter image especially, Prometheus's plume was bright enough to produce several white spikes which extend radially outward from the center of the plume emission. These spikes are artifacts produced by the optics of the camera. Two of the spikes in the clear filter image appear against Io's shadowed surface, and the lower of these is pointing towards a bright round spot. That spot corresponds to thermal emission from the volcano Pele. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov.
Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang
2014-01-01
Decoding algorithms in motor Brain Machine Interfaces translate neural signals into movement parameters. They usually assume the connection between neural firings and movements to be stationary, which is not true according to recent studies that observe time-varying neuron tuning properties. This property results from neural plasticity, motor learning, etc., and leads to degeneration of decoding performance when the model is fixed. To track the non-stationary neuron tuning during decoding, we propose a dual-model approach based on a Monte Carlo point process filtering method that also enables estimation of the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method performs better than the one with static tuning parameters, which points to a promising way to design a long-term-performing model for Brain Machine Interface decoders.
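As an illustration of Monte Carlo point process filtering, the sketch below tracks a drifting log firing rate from Poisson spike counts with a bootstrap particle filter. The state model, noise levels, and parameters are assumptions for the demo, not the dual-model decoder of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated drifting log firing rate (the time-varying "tuning") and spike counts.
T = 200
theta_true = 1.0 + np.cumsum(rng.normal(0, 0.02, T))
counts = rng.poisson(np.exp(theta_true))

# Bootstrap particle filter: random-walk proposal, Poisson likelihood weights.
N = 500
particles = 1.0 + rng.normal(0, 0.1, N)
estimates = np.zeros(T)
for t, y in enumerate(counts):
    particles = particles + rng.normal(0, 0.05, N)     # propagate the random walk
    logw = y * particles - np.exp(particles)           # Poisson log-likelihood (up to a constant)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates[t] = np.sum(w * particles)               # posterior-mean estimate
    particles = particles[rng.choice(N, size=N, p=w)]  # multinomial resampling

rmse = float(np.sqrt(np.mean((estimates - theta_true) ** 2)))
```

Because the particles themselves random-walk, the filter keeps adapting as the underlying tuning drifts, which is the point of tracking rather than fixing the model.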
Automated feature detection and identification in digital point-ordered signals
Oppenlander, Jane E.; Loomis, Kent C.; Brudnoy, David M.; Levy, Arthur J.
1998-01-01
A computer-based automated method to detect and identify features in digital point-ordered signals. The method is used for processing of non-destructive test signals, such as eddy current signals obtained from calibration standards. The signals are first automatically processed to remove noise and to determine a baseline. Next, features are detected in the signals using mathematical morphology filters. Finally, verification of the features is made using an expert system of pattern recognition methods and geometric criteria. The method has the advantage that standard features can be located without prior knowledge of the number or sequence of the features. Further advantages are that standard features can be differentiated from irrelevant signal features such as noise, and detected features are automatically verified by parameters extracted from the signals. The method proceeds fully automatically without initial operator set-up and without subjective operator feature judgement.
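A minimal version of morphology-based feature detection can be sketched as follows: a grey-scale morphological opening estimates the baseline of a point-ordered signal, and features are flagged where the residue exceeds a threshold. The structuring-element width and threshold here are illustrative choices, not the method's actual parameters.

```python
import numpy as np

def erode(x, k):
    """Grey-scale erosion of a 1-D signal with a flat structuring element of width k."""
    xp = np.pad(x, k // 2, mode="edge")
    return np.array([xp[i:i + k].min() for i in range(len(x))])

def dilate(x, k):
    xp = np.pad(x, k // 2, mode="edge")
    return np.array([xp[i:i + k].max() for i in range(len(x))])

def opening(x, k):
    """Opening (erosion then dilation) removes peaks narrower than the element."""
    return dilate(erode(x, k), k)

def detect_features(x, k=15, thresh=0.5):
    """Baseline = morphological opening; features = where the residue exceeds thresh."""
    residue = x - opening(x, k)
    return np.flatnonzero(residue > thresh)

# Point-ordered signal: slow drifting baseline plus two narrow calibration-like bumps.
n = np.arange(200)
signal = 0.01 * n
signal[50:54] += 2.0
signal[140:144] += 3.0
hits = detect_features(signal)
```

Because the opening tracks the drifting baseline while suppressing narrow bumps, both features are found without knowing their number or order in advance.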
NASA Astrophysics Data System (ADS)
Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.
2017-05-01
These studies have been conducted using a non-metric digital camera and dense image matching algorithms, as non-contact methods of creating monument documentation. In order to process the imagery, a few open-source software packages and algorithms for generating a dense point cloud from images have been executed. In the research, the OSM Bundler, VisualSFM software, and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.
Engineering applications of metaheuristics: an introduction
NASA Astrophysics Data System (ADS)
Oliva, Diego; Hinojosa, Salvador; Demeshko, M. V.
2017-01-01
Metaheuristic algorithms are important tools that in recent years have been used extensively in several fields. In engineering, there are a large number of problems that can be solved from an optimization point of view. This paper is an introduction to how metaheuristics can be used to solve complex engineering problems. Their use produces accurate results in problems that are computationally expensive. Experimental results support the performance obtained by the selected algorithms in such specific problems as digital filter design, image processing and solar cell design.
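As a toy example of a metaheuristic applied to digital filter design, the sketch below uses a simple (1+1) evolution strategy to fit FIR filter taps to an ideal low-pass response. The algorithm choice and every parameter are illustrative; the paper's selected algorithms are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

w_grid = np.linspace(0, np.pi, 64)
desired = (w_grid < np.pi / 2).astype(float)     # ideal low-pass, cutoff pi/2

def freq_response(h):
    """Magnitude response of FIR taps h on the frequency grid."""
    k = np.arange(len(h))
    return np.abs(np.exp(-1j * np.outer(w_grid, k)) @ h)

def cost(h):
    """Mean squared deviation from the desired response."""
    return np.mean((freq_response(h) - desired) ** 2)

# (1+1) evolution strategy: mutate the taps, keep the better candidate.
h = rng.normal(0, 0.1, 11)
best = cost(h)
for _ in range(3000):
    cand = h + rng.normal(0, 0.05, h.size)
    c = cost(cand)
    if c < best:
        h, best = cand, c
```

The cost function only needs to be evaluable, not differentiable, which is why metaheuristics suit design problems where gradients are awkward or the landscape is multimodal.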
Identification of a Class of Filtered Poisson Processes.
1981-01-01
AD-A135 371. Identification of a Class of Filtered Poisson Processes. North Carolina Univ. at Chapel Hill, Dept. of Statistics. De Brucq, Denis; Gualtierotti, Antonio. 1981. A class of filtered Poisson processes is introduced: the amplitude has a law which is spherically invariant and the filter is real, linear and causal. It is shown ...
40 CFR 141.703 - Sampling locations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the analysis of the sample. (c) Systems that recycle filter backwash water must collect source water samples prior to the point of filter backwash water addition. (d) Bank filtration. (1) Systems that... 141.703 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS...
Assessment of the Quality of Digital Terrain Model Produced from Unmanned Aerial System Imagery
NASA Astrophysics Data System (ADS)
Kosmatin Fras, M.; Kerin, A.; Mesarič, M.; Peterman, V.; Grigillo, D.
2016-06-01
Production of a digital terrain model (DTM) is one of the most usual tasks when processing a photogrammetric point cloud generated from Unmanned Aerial System (UAS) imagery. The quality of the DTM produced in this way depends on different factors: the quality of imagery, image orientation and camera calibration, point cloud filtering, interpolation methods, etc. However, the assessment of the real quality of the DTM is very important for its further use and applications. In this paper we first describe the main steps of UAS imagery acquisition and processing based on a practical test field survey and data. The main focus of this paper is to present the approach to DTM quality assessment and to give a practical example on the test field data. For the data processing and DTM quality assessment presented in this paper, mainly in-house developed computer programs have been used. The quality of a DTM comprises its accuracy, density, and completeness. Different accuracy measures, such as RMSE, median, normalized median absolute deviation with their confidence intervals, and quantiles, are computed. The completeness of the DTM is a very often overlooked quality parameter, but when the DTM is produced from a point cloud it should not be neglected, as some areas might be very sparsely covered by points. The original density is presented with a density plot or map. The completeness is presented by the map of point density and the map of distances between grid points and terrain points. The results in the test area show great potential of the DTM produced from UAS imagery, in the sense of detailed representation of the terrain as well as good height accuracy.
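The accuracy measures named above (RMSE, median, NMAD, quantiles) can be computed directly from DTM height errors. A sketch, with synthetic error data standing in for a real DTM-to-reference comparison:

```python
import numpy as np

def accuracy_measures(dz):
    """Vertical-accuracy measures for DTM height errors dz (model minus reference)."""
    dz = np.asarray(dz, dtype=float)
    rmse = float(np.sqrt(np.mean(dz ** 2)))
    med = float(np.median(dz))
    nmad = float(1.4826 * np.median(np.abs(dz - med)))  # normalized median absolute deviation
    q68, q95 = np.quantile(np.abs(dz), [0.683, 0.95])
    return {"rmse": rmse, "median": med, "nmad": nmad, "q68": float(q68), "q95": float(q95)}

# Synthetic errors: 5 cm inlier noise plus a handful of gross outliers.
rng = np.random.default_rng(3)
dz = rng.normal(0.0, 0.05, 1000)
dz[:10] += 2.0
m = accuracy_measures(dz)
```

The contrast between the measures is the point: a few outliers inflate the RMSE markedly, while the NMAD stays near the inlier spread, which is why robust measures are reported alongside RMSE.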
H∞ filtering for stochastic systems driven by Poisson processes
NASA Astrophysics Data System (ADS)
Song, Bo; Wu, Zheng-Guang; Park, Ju H.; Shi, Guodong; Zhang, Ya
2015-01-01
This paper investigates the H∞ filtering problem for stochastic systems driven by Poisson processes. By utilising martingale theory, such as the predictable projection operator and the dual predictable projection operator, this paper transforms the expectation of a stochastic integral with respect to the Poisson process into the expectation of a Lebesgue integral. Then, based on this, this paper designs an H∞ filter such that the filtering error system is mean-square asymptotically stable and satisfies a prescribed H∞ performance level. Finally, a simulation example is given to illustrate the effectiveness of the proposed filtering scheme.
Lee, Yonghun; Kim, Dong-Min; Li, Zhenglin; Kim, Dong-Eun; Kim, Sung-Jin
2018-03-13
We demonstrate a microfiltration chip that separates blood plasma by using water-head-driven pulsatile pressures rather than any external equipment and use it for on-chip amplification of nucleic acids. The chip generates pulsatile pressures to significantly reduce filter clogging without hemolysis, and consists of an oscillator, a plasma-extraction pump, and filter units. The oscillator autonomously converts constant water-head pressure to pulsatile pressure, and the pump uses the pulsatile pressure to extract plasma through the filter. Because the pulsatile pressure can periodically clear blood cells from the filter surface, filter clogging can be effectively reduced. In this way, we achieve plasma extraction with 100% purity and 90% plasma recovery at 15% hematocrit. During a 10 min period, the volume of plasma extracted was 43 μL out of a 243 μL extraction volume at 15% hematocrit. We also studied the influence of the pore size and diameter of the filter, blood loading volume, oscillation period, and hematocrit level on the filtration performance. To demonstrate the utility of our chip for point-of-care testing (POCT) applications, we successfully implemented on-chip amplification of a nucleic acid (miDNA21) in plasma filtered from blood. We expect our chip to be useful not only for POCT applications but also for other bench-top analysis tools using blood plasma.
A graphic user interface for efficient 3D photo-reconstruction based on free software
NASA Astrophysics Data System (ADS)
Castillo, Carlos; James, Michael; Gómez, Jose A.
2015-04-01
Recently, different studies have stressed the applicability of 3D photo-reconstruction based on Structure from Motion algorithms in a wide range of geoscience applications. For the purpose of image photo-reconstruction, a number of commercial and freely available software packages have been developed (e.g. Agisoft Photoscan, VisualSFM). The workflow involves typically different stages such as image matching, sparse and dense photo-reconstruction, point cloud filtering and georeferencing. For approaches using open and free software, each of these stages usually require different applications. In this communication, we present an easy-to-use graphic user interface (GUI) developed in Matlab® code as a tool for efficient 3D photo-reconstruction making use of powerful existing software: VisualSFM (Wu, 2015) for photo-reconstruction and CloudCompare (Girardeau-Montaut, 2015) for point cloud processing. The GUI performs as a manager of configurations and algorithms, taking advantage of the command line modes of existing software, which allows an intuitive and automated processing workflow for the geoscience user. The GUI includes several additional features: a) a routine for significantly reducing the duration of the image matching operation, normally the most time consuming stage; b) graphical outputs for understanding the overall performance of the algorithm (e.g. camera connectivity, point cloud density); c) a number of useful options typically performed before and after the photo-reconstruction stage (e.g. removal of blurry images, image renaming, vegetation filtering); d) a manager of batch processing for the automated reconstruction of different image datasets. In this study we explore the advantages of this new tool by testing its performance using imagery collected in several soil erosion applications. References Girardeau-Montaut, D. 2015. CloudCompare documentation accessed at http://cloudcompare.org/ Wu, C. 2015. 
VisualSFM documentation accessed at http://ccwu.me/vsfm/doc.html#.
The use of filter media to determine filter cleanliness
NASA Astrophysics Data System (ADS)
Van Staden, S. J.; Haarhoff, J.
It is generally believed that a sand filter starts its life with new, perfectly clean media, which becomes gradually clogged with each filtration cycle, eventually getting to a point where either head loss or filtrate quality starts to deteriorate. At this point the backwash cycle is initiated and, through the combined action of air and water, returns the media to its original perfectly clean state. Reality, however, dictates otherwise. Many treatment plants visited a decade or more after commissioning are found to have unacceptably dirty filter sand and backwash systems incapable of returning the filter media to a desired state of cleanliness. In some cases, these problems are common ones encountered in filtration plants, but many reasons for media deterioration remain elusive, falling outside of these common problems. The South African conditions of highly eutrophic surface waters at high temperatures, however, exacerbate the problems with dirty filter media. Such conditions often lead to the formation of biofilm in the filter media, which is shown to inhibit the effective backwashing of sand and carbon filters. A systematic investigation into filter media cleanliness was therefore started in 2002, ending in 2005, at the University of Johannesburg (the then Rand Afrikaans University). This involved media from eight South African Water Treatment Plants, varying between sand and sand-anthracite combinations and raw water types from eutrophic through turbid to low-turbidity waters. Five states of cleanliness and four fractions of specific deposit were identified relating to in situ washing, column washing, cylinder inversion and acid-immersion techniques. These were measured and the results compared to acceptable limits for specific deposit, as determined in previous studies, though expressed in kg/m3. These values were used to determine the state of the filters.
In order to gain greater insight into the composition of the specific deposits stripped from the media, a four-point characterisation step was introduced for the resultant suspensions based on acid-solubility and volatility. Results showed that a reasonably effective backwash removed a median specific deposit of 0.89 kg/m3. Further washing in a laboratory column removed a median specific deposit of 1.34 kg/m3. Media subjected to a standardised cylinder inversion procedure removed a median specific deposit of 2.41 kg/m3. Immersion in a strong acid removed a median specific deposit of 35.2 kg/m3. The four-point characterisation step showed that the soluble-volatile fraction was consistently small in relation to the other fractions. The organic fraction was quite high at the RG treatment plant and the soluble-non-volatile fraction was particularly high at the BK treatment plant.
Ahmad, Muneer; Jung, Low Tan; Bhuiyan, Al-Amin
2017-10-01
Digital signal processing techniques commonly employ fixed-length window filters to process the signal contents. DNA signals differ in characteristics from common digital signals since they carry nucleotides as contents. The nucleotides carry genetic code context and exhibit fuzzy behaviors due to their special structure and order in the DNA strand. Employing conventional fixed-length window filters for DNA signal processing produces spectral leakage and hence results in signal noise. A biological-context-aware adaptive window filter is required to process DNA signals. This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF) which computes the fuzzy membership strength of nucleotides in each slide of the window and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions cause 3-base periodicity through an unbalanced nucleotide distribution that produces a relatively high bias in nucleotide usage, this fundamental characteristic of nucleotides has been exploited in FAWMF to suppress the signal noise. Along with the adaptive response of FAWMF, a strong correlation between median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions contrary to fixed-length conventional window filters. The proposed FAWMF attains a significant enhancement in coding region identification, i.e. 40% to 125%, as compared to other conventional window filters tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. This study proves that conventional fixed-length window filters applied to DNA signals do not achieve significant results since the nucleotides carry genetic code context. The proposed FAWMF algorithm is adaptive and significantly outperforms conventional filters in processing DNA signal contents.
The algorithm applied to variety of DNA datasets produced noteworthy discrimination between coding and non-coding regions contrary to fixed window length conventional filters. Copyright © 2017 Elsevier B.V. All rights reserved.
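The 3-base periodicity that FAWMF exploits can be measured with a plain DFT: the power of the 1/3-frequency component of the four nucleotide indicator signals is high in coding-like sequences. This is a generic periodicity measure, not the FAWMF algorithm itself; the window size and test sequences are illustrative.

```python
import numpy as np

def periodicity_power(seq, window=120):
    """Total power of the 1/3-frequency DFT bin of the four base-indicator signals,
    computed in sliding windows: a standard 3-base periodicity measure."""
    seq = seq.upper()
    powers = []
    for start in range(len(seq) - window + 1):
        win = seq[start:start + window]
        p = 0.0
        for base in "ACGT":
            x = np.array([1.0 if b == base else 0.0 for b in win])
            X = np.fft.fft(x)
            p += abs(X[window // 3]) ** 2   # DFT bin at frequency 1/3
        powers.append(p)
    return np.array(powers)

rng = np.random.default_rng(4)
coding_like = "ATG" * 40                                 # perfect 3-base periodicity
random_like = "".join(rng.choice(list("ACGT"), 120))     # no periodic structure
p_coding = periodicity_power(coding_like)[0]
p_random = periodicity_power(random_like)[0]
```

The large gap between the two powers is exactly the signal that coding-region detectors, adaptive or fixed-window, try to extract from noisy genomic data.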
2018-01-01
ARL-TR-8270, January 2018. US Army Research Laboratory. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform, by Kwok F Tom, Sensors and Electron... Reporting period: 1 October 2016–30 September 2017.
Kim, Jeremie S; Senol Cali, Damla; Xin, Hongyi; Lee, Donghyuk; Ghose, Saugata; Alser, Mohammed; Hassan, Hasan; Ergin, Oguz; Alkan, Can; Mutlu, Onur
2018-05-09
Seed location filtering is critical in DNA read mapping, a process where billions of DNA fragments (reads) sampled from a donor are mapped onto a reference genome to identify genomic variants of the donor. State-of-the-art read mappers 1) quickly generate possible mapping locations for seeds (i.e., smaller segments) within each read, 2) extract reference sequences at each of the mapping locations, and 3) check similarity between each read and its associated reference sequences with a computationally-expensive algorithm (i.e., sequence alignment) to determine the origin of the read. A seed location filter comes into play before alignment, discarding seed locations that alignment would deem a poor match. The ideal seed location filter would discard all poor match locations prior to alignment such that there is no wasted computation on unnecessary alignments. We propose a novel seed location filtering algorithm, GRIM-Filter, optimized to exploit 3D-stacked memory systems that integrate computation within a logic layer stacked under memory layers, to perform processing-in-memory (PIM). GRIM-Filter quickly filters seed locations by 1) introducing a new representation of coarse-grained segments of the reference genome, and 2) using massively-parallel in-memory operations to identify read presence within each coarse-grained segment. Our evaluations show that for a sequence alignment error tolerance of 0.05, GRIM-Filter 1) reduces the false negative rate of filtering by 5.59x-6.41x, and 2) provides an end-to-end read mapper speedup of 1.81x-3.65x, compared to a state-of-the-art read mapper employing the best previous seed location filtering algorithm. GRIM-Filter exploits 3D-stacked memory, which enables the efficient use of processing-in-memory, to overcome the memory bandwidth bottleneck in seed location filtering. We show that GRIM-Filter significantly improves the performance of a state-of-the-art read mapper. 
GRIM-Filter is a universal seed location filter that can be applied to any read mapper. We hope that our results provide inspiration for new works to design other bioinformatics algorithms that take advantage of emerging technologies and new processing paradigms, such as processing-in-memory using 3D-stacked memory devices.
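The coarse-grained filtering idea can be sketched in plain Python: bin the reference genome, record which k-mers each bin contains, and accept a seed location only if enough of the read's k-mers are present in the candidate bin. GRIM-Filter stores per-bin bitvectors and evaluates them with in-memory operations; the Python sets, sizes, and threshold below are simplifications.

```python
def build_bins(reference, bin_size=64, k=4):
    """Record the k-mer content of each coarse-grained bin of the reference
    (GRIM-Filter uses per-bin bitvectors; a Python set stands in here)."""
    bins = []
    for start in range(0, len(reference), bin_size):
        # Overlap by k-1 so k-mers spanning a bin boundary are counted.
        seg = reference[start:start + bin_size + k - 1]
        bins.append({seg[i:i + k] for i in range(len(seg) - k + 1)})
    return bins

def passes_filter(read, bin_kmers, k=4, min_frac=0.8):
    """Accept a candidate bin only if enough of the read's k-mers appear in it;
    rejected bins never reach expensive sequence alignment."""
    kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
    hits = sum(km in bin_kmers for km in kmers)
    return hits >= min_frac * len(kmers)

reference = "ACGTACGTTGCAACGGTTAACCGGATCGATCGTTAACGGCCTA" * 3
bins = build_bins(reference)
read = reference[10:30]                        # a read truly drawn from bin 0
accept = [passes_filter(read, b) for b in bins]
```

A bin that lacks most of the read's k-mers cannot contain a good alignment, so discarding it early is safe; the threshold trades false negatives against wasted alignments.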
Tracking of Ball and Players in Beach Volleyball Videos
Gomez, Gabriel; Herrera López, Patricia; Link, Daniel; Eskofier, Bjoern
2014-01-01
This paper presents methods for the determination of players' positions and contact time points by tracking the players and the ball in beach volleyball videos. Two player tracking methods are compared, a classical particle filter and a rigid grid integral histogram tracker. Due to mutual occlusion of the players and the camera perspective, results are best for the front players, with 74.6% and 82.6% of correctly tracked frames for the particle method and the integral histogram method, respectively. Results suggest an improved robustness against player confusion between different particle sets when tracking with a rigid grid approach. Faster processing and fewer player confusions make this method superior to the classical particle filter. Two different ball tracking methods are used that detect ball candidates from movement difference images using a background subtraction algorithm. Ball trajectories are estimated and interpolated from parabolic flight equations. The tracking accuracy of the ball is 54.2% for the trajectory growth method and 42.1% for the Hough line detection method. Tracking results of over 90% from the literature could not be confirmed. Ball contact frames were estimated from parabolic trajectory intersection, resulting in 48.9% of correctly estimated ball contact points. PMID:25426936
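The parabolic-flight interpolation step can be sketched as a least-squares fit: fit y(t) = a·t² + b·t + c to the frames where the ball was detected, then evaluate the polynomial to fill the missing frames. The image-coordinate "gravity" and noise level below are synthetic, not values from the paper.

```python
import numpy as np

def fit_parabola(t, y):
    """Least-squares fit of vertical ball position y(t) = a*t**2 + b*t + c."""
    A = np.vstack([t ** 2, t, np.ones_like(t)]).T
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs                     # highest power first, as np.polyval expects

# Noisy detections in image coordinates; the ball is seen in only 35 of 60 frames.
g = -0.02                             # per-frame^2 "gravity" (illustrative)
t_all = np.arange(60.0)
y_all = g * t_all ** 2 + 1.5 * t_all + 100.0
rng = np.random.default_rng(7)
seen = rng.choice(60, size=35, replace=False)
coeffs = fit_parabola(t_all[seen], y_all[seen] + rng.normal(0, 0.5, 35))
y_interp = np.polyval(coeffs, t_all)  # interpolated trajectory for all frames
max_err = float(np.max(np.abs(y_interp - y_all)))
```

Fitting a physical flight model lets the tracker bridge frames where background subtraction loses the ball, and intersecting successive fitted parabolas gives candidate contact times.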
Adaptive filtering in biological signal processing.
Iyer, V K; Ploysongsang, Y; Ramamoorthy, P A
1990-01-01
The high dependence of conventional optimal filtering methods on the a priori knowledge of the signal and noise statistics render them ineffective in dealing with signals whose statistics cannot be predetermined accurately. Adaptive filtering methods offer a better alternative, since the a priori knowledge of statistics is less critical, real time processing is possible, and the computations are less expensive for this approach. Adaptive filtering methods compute the filter coefficients "on-line", converging to the optimal values in the least-mean square (LMS) error sense. Adaptive filtering is therefore apt for dealing with the "unknown" statistics situation and has been applied extensively in areas like communication, speech, radar, sonar, seismology, and biological signal processing and analysis for channel equalization, interference and echo canceling, line enhancement, signal detection, system identification, spectral analysis, beamforming, modeling, control, etc. In this review article adaptive filtering in the context of biological signals is reviewed. An intuitive approach to the underlying theory of adaptive filters and its applicability are presented. Applications of the principles in biological signal processing are discussed in a manner that brings out the key ideas involved. Current and potential future directions in adaptive biological signal processing are also discussed.
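A minimal LMS example makes the "on-line" convergence concrete: in adaptive noise cancelling, the filter learns an unknown noise path from a reference input, and the error output converges to the clean signal. The step size, tap count, and simulated noise path are illustrative assumptions.

```python
import numpy as np

def lms_cancel(x, d, n_taps=8, mu=0.005):
    """LMS adaptive noise canceller: x is the noise reference, d = signal + filtered
    noise; the error e converges to the clean signal."""
    w = np.zeros(n_taps)
    e = np.zeros(len(d))
    for n in range(n_taps - 1, len(d)):
        xw = x[n - n_taps + 1:n + 1][::-1]   # current and past reference samples
        y = w @ xw                           # estimate of the leaked noise
        e[n] = d[n] - y
        w += 2 * mu * e[n] * xw              # steepest-descent (LMS) update
    return e, w

rng = np.random.default_rng(5)
t = np.arange(4000)
signal = np.sin(2 * np.pi * t / 50)                      # "biological" signal
noise = rng.normal(0, 1, t.size)                         # reference noise
leaked = np.convolve(noise, [0.6, -0.3, 0.1])[:t.size]   # unknown causal noise path
d = signal + leaked
e, w = lms_cancel(noise, d)
resid_mse = float(np.mean((e[2000:] - signal[2000:]) ** 2))
```

No statistics are specified in advance: the weights converge toward the noise-path coefficients purely from the data stream, which is the property that makes LMS attractive for biological signals with unknown statistics.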
Algorithms used in the Airborne Lidar Processing System (ALPS)
Nagle, David B.; Wright, C. Wayne
2016-05-23
The Airborne Lidar Processing System (ALPS) analyzes Experimental Advanced Airborne Research Lidar (EAARL) data—digitized laser-return waveforms, position, and attitude data—to derive point clouds of target surfaces. A full-waveform airborne lidar system, the EAARL seamlessly and simultaneously collects mixed environment data, including submerged, sub-aerial bare earth, and vegetation-covered topographies. ALPS uses three waveform target-detection algorithms to determine target positions within a given waveform: centroid analysis, leading edge detection, and bottom detection using water-column backscatter modeling. The centroid analysis algorithm detects opaque hard surfaces. The leading edge algorithm detects topography beneath vegetation and shallow, submerged topography. The bottom detection algorithm uses water-column backscatter modeling for deeper submerged topography in turbid water. The report describes slant range calculations and explains how ALPS uses laser range and orientation measurements to project measurement points into the Universal Transverse Mercator coordinate system. Parameters used for coordinate transformations in ALPS are described, as are Interactive Data Language-based methods for gridding EAARL point cloud data to derive digital elevation models. Noise reduction in point clouds through use of a random consensus filter is explained, and detailed pseudocode, mathematical equations, and Yorick source code accompany the report.
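The centroid analysis mentioned above can be sketched as an intensity-weighted centroid of waveform samples above a noise floor. This is a generic illustration of the idea, not the actual ALPS/Yorick code; the noise-floor value and pulse shape are invented:

```python
def waveform_centroid(samples, noise_floor=5.0):
    """Intensity-weighted centroid of a digitized laser-return waveform.

    Samples at or below the noise floor are ignored; the returned value
    (in sample/time-bin units) locates the target surface in the waveform.
    Returns None when no sample rises above the noise floor.
    """
    num = den = 0.0
    for i, s in enumerate(samples):
        a = s - noise_floor
        if a > 0:
            num += i * a
            den += a
    return num / den if den else None

# A synthetic return pulse peaking at bin 12 (plus early low-level noise):
pulse = [0, 0, 1, 2, 3, 2, 1, 0, 0, 4, 20, 60, 90, 60, 20, 4, 0]
print(waveform_centroid(pulse))  # → 12.0
```

Subtracting the noise floor before weighting keeps weak background samples from biasing the centroid toward the start of the record.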
Automatic detection of zebra crossings from mobile LiDAR data
NASA Astrophysics Data System (ADS)
Riveiro, B.; González-Jorge, H.; Martínez-Sánchez, J.; Díaz-Vilariño, L.; Arias, P.
2015-07-01
An algorithm for the automatic detection of zebra crossings from mobile LiDAR data is developed and tested for road management purposes. The algorithm consists of several subsequent processes, starting with road segmentation performed by a curvature analysis for each laser cycle. Then, intensity images are created from the point cloud using rasterization techniques in order to detect zebra crossings using the Standard Hough Transform and logical constraints. To optimize the results, image processing algorithms are applied to the intensity images from the point cloud. These algorithms include binarization to separate the painted area from the rest of the pavement, median filtering to remove noisy points, and mathematical morphology to fill the gaps between the pixels at the border of white marks. Once a road marking is detected, its position is calculated. This information is valuable for the inventorying purposes of road managers that use Geographic Information Systems. The performance of the algorithm has been evaluated over several mobile LiDAR strips accounting for a total of 30 zebra crossings. That test showed a completeness of 83%. Non-detected marks mainly result from paint deterioration of the zebra crossing or from occlusions in the point cloud produced by other vehicles on the road.
Ballesteros, Rocío
2017-01-01
The acquisition, processing, and interpretation of thermal images from unmanned aerial vehicles (UAVs) is becoming a useful source of information for agronomic applications because of the higher temporal and spatial resolution of these products compared with those obtained from satellites. However, due to the low payload capacity of UAVs, they need to mount light, uncooled thermal cameras, in which the microbolometer is not stabilized to a constant temperature. This makes the camera precision low for many applications. Additionally, the low contrast of the thermal images makes the photogrammetry process inaccurate, which results in large errors in the generation of orthoimages. In this research, we propose the use of new calibration algorithms, based on neural networks, which consider the sensor temperature and the digital response of the microbolometer as input data. In addition, we evaluate the use of the Wallis filter for improving the quality of the photogrammetry process using structure from motion software. With the proposed calibration algorithm, the measurement accuracy increased from 3.55 °C with the original camera configuration to 1.37 °C. The implementation of the Wallis filter increased the number of tie-points from 58,000 to 110,000 and decreased the total positioning error from 7.1 m to 1.3 m. PMID:28946606
Ribeiro-Gomes, Krishna; Hernández-López, David; Ortega, José F; Ballesteros, Rocío; Poblete, Tomás; Moreno, Miguel A
2017-09-23
Czarnecki, Damian; Poppe, Björn; Zink, Klemens
2017-06-01
The impact of removing the flattening filter in clinical electron accelerators on the relationship between dosimetric quantities such as beam quality specifiers and the mean photon and electron energies of the photon radiation field was investigated by Monte Carlo simulations. The purpose of this work was to determine the uncertainties when using the well-known beam quality specifiers or energy-based beam specifiers as predictors of dosimetric photon field properties when removing the flattening filter. Monte Carlo simulations applying eight different linear accelerator head models with and without flattening filter were performed in order to generate realistic radiation sources and calculate field properties such as restricted mass collision stopping power ratios (L̄/ρ)water,air and mean photon and secondary electron energies. To study the impact of removing the flattening filter on the beam quality correction factor kQ, this factor was calculated by Monte Carlo simulations for detailed ionization chamber models. Stopping power ratios (L̄/ρ)water,air and kQ values for different ionization chambers were calculated as a function of TPR20,10 and %dd(10)x. Moreover, mean photon energies in air and at the point of measurement in water, as well as mean secondary electron energies at the point of measurement, were calculated. The results revealed that removing the flattening filter led to a change within 0.3% in the relationship between %dd(10)x and (L̄/ρ)water,air, whereas the relationship between TPR20,10 and (L̄/ρ)water,air changed by up to 0.8% for high-energy photon beams. However, TPR20,10 was a good predictor of (L̄/ρ)water,air for both types of linear accelerator at energies < 10 MeV, with a maximal deviation between both accelerator types of 0.23%.
According to the results, the mean photon energy below the linear accelerator head, as well as at the point of measurement, may not be suitable as a predictor of (L̄/ρ)water,air and kQ for merging the dosimetry of both linear accelerator types. It was possible to derive (L̄/ρ)water,air using the mean secondary electron energy at the point of measurement as a predictor with an accuracy of 0.17%. A bias between kQ values for linear accelerators with and without flattening filter within 1.1% and 1.6% was observed for TPR20,10 and %dd(10)x, respectively. The results of this study have shown that removing the flattening filter led to a change in the relationship between the well-known beam quality specifiers and dosimetric quantities at the point of measurement, namely (L̄/ρ)water,air and the mean photon and electron energies. Furthermore, the results show that a beam profile correction is important for dose measurements with large ionization chambers in flattening-filter-free beams. © 2017 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Beyene, F.; Knospe, S.; Busch, W.
2015-04-01
Landslide detection and monitoring remain difficult with conventional differential radar interferometry (DInSAR) because most pixels of radar interferograms around landslides are affected by different error sources. These are mainly related to the high radar viewing angles and related spatial distortions (such as layover and shadow), temporal decorrelation owing to vegetation cover, and the speed and direction of the sliding masses. On the other hand, GIS can be used to integrate spatial datasets obtained from many sources (including radar and non-radar sources). In this paper, a GRID data model is proposed to integrate deformation data derived from DInSAR processing with other radar-origin data (coherence, layover and shadow, slope and aspect, local incidence angle) and external datasets collected from field study of landslide sites and other sources (geology, geomorphology, hydrology). After coordinate transformation and merging of the data, pixels with high-quality radar signals representing candidate landslides were selected by applying a GIS-based multicriteria filtering analysis (GIS-MCFA), which excludes grid points in areas of shadow and layover, low coherence, non-detectable and non-landslide deformations, and other possible sources of error in the DInSAR data processing. Finally, the results obtained from GIS-MCFA were verified using the external datasets (existing landslide sites collected during fieldwork, geological and geomorphological maps, rainfall data, etc.).
Automated Processing of Two-Dimensional Correlation Spectra
Sengstschmid; Sterk; Freeman
1998-04-01
An automated scheme is described which locates the centers of cross peaks in two-dimensional correlation spectra, even under conditions of severe overlap. Double-quantum-filtered correlation (DQ-COSY) spectra have been investigated, but the method is also applicable to TOCSY and NOESY spectra. The search criterion is the intrinsic symmetry (or antisymmetry) of cross-peak multiplets. An initial global search provides the preliminary information to build up a two-dimensional "chemical shift grid." All genuine cross peaks must be centered at intersections of this grid, a fact that reduces the extent of the subsequent search program enormously. The program recognizes cross peaks by examining the symmetry of signals in a test zone centered at a grid intersection. This "symmetry filter" employs a "lowest value algorithm" to discriminate against overlapping responses from adjacent multiplets. A progressive multiplet subtraction scheme provides further suppression of overlap effects. The processed two-dimensional correlation spectrum represents cross peaks as points at the chemical shift coordinates, with some indication of their relative intensities. Alternatively, the information is presented in the form of a correlation table. The authenticity of a given cross peak is judged by a set of "confidence criteria" expressed as numerical parameters. Experimental results are presented for the 400-MHz double-quantum-filtered COSY spectrum of 4-androsten-3,17-dione, a case where there is severe overlap. Copyright 1998 Academic Press.
NASA Astrophysics Data System (ADS)
do Lago, Naydson Emmerson S. P.; Kardec Barros, Allan; Sousa, Nilviane Pires S.; Junior, Carlos Magno S.; Oliveira, Guilherme; Guimares Polisel, Camila; Eder Carvalho Santana, Ewaldo
2018-01-01
This study aims to develop an adaptive filter algorithm to determine the percentage of body fat in adolescents based on anthropometric indicators. Measurements such as body mass, height, and waist circumference were collected for the analysis. The development of this filter was based on the Wiener filter, which is used to produce an estimate of a random process: the Wiener filter minimizes the mean square error between the estimated random process and the desired process. The LMS algorithm was also studied for the development of the filter because of its simplicity and low computational cost. Excellent results were obtained with the developed filter; these results were analyzed and compared with the collected data.
Wet particle source identification and reduction using a new filter cleaning process
NASA Astrophysics Data System (ADS)
Umeda, Toru; Morita, Akihiko; Shimizu, Hideki; Tsuzuki, Shuichi
2014-03-01
Wet particle reduction during filter installation and start-up aligns closely with initiatives to reduce both chemical consumption and preventative maintenance time. The present study focuses on the effects of filter material cleanliness on wet particle defectivity through evaluation of filters that have been treated with a new enhanced cleaning process focused on reducing organic compounds. Little difference in filter performance is observed between the two filter types at a size detection threshold of 60 nm, while clear differences are observed at 26 nm. This suggests that organic compounds can be identified as a potential source of wet particles. Pall recommends filters that have been treated with the special cleaning process for applications with a critical defect size of less than 60 nm. Standard filter products are capable of satisfying wet particle defect performance criteria in less critical lithography applications.
Automatic digital surface model (DSM) generation from aerial imagery data
NASA Astrophysics Data System (ADS)
Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu
2018-04-01
Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide abundant redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation, and point cloud filtering. The image radiation pre-processing is used to reduce the effects of the inherent radiometric problems and optimize the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure, and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales with different land-cover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and those derived from the POS.
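The cross-correlation matching at the heart of procedures like MIG3C can be illustrated in one dimension with a normalized cross-correlation (NCC) score. This is a generic sketch, not the paper's multi-image geometrically constrained method; the patch values are invented, and real matchers operate on 2-D windows under geometric constraints:

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length intensity patches."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_match(template, strip):
    """Slide the template along a 1-D search strip; return the offset
    with the maximum NCC score."""
    scores = [ncc(template, strip[i:i + len(template)])
              for i in range(len(strip) - len(template) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

# A bright feature appears (slightly perturbed) at offset 2 of the strip:
patch = [10, 50, 90, 50, 10]
strip = [9, 11, 10, 52, 88, 51, 12, 10, 9]
off = best_match(patch, strip)
```

Because NCC subtracts the means and normalizes by the standard deviations, the score is insensitive to linear brightness and contrast changes between images.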
An Optically Implemented Kalman Filter Algorithm.
1983-12-01
...are completely specified for the correlation stage to perform the required correlation in real time, and the filter stage to perform the linear... performance analyses indicated an enhanced ability of the nonadaptive filter to track a realistic distant point source target with an error standard...
NASA Astrophysics Data System (ADS)
Ofek, Eran O.; Zackay, Barak
2018-04-01
Detection of templates (e.g., sources) embedded in low-number count Poisson noise is a common problem in astrophysics. Examples include source detection in X-ray images, γ-rays, UV, neutrinos, and search for clusters of galaxies and stellar streams. However, the solutions in the X-ray-related literature are sub-optimal in some cases by considerable factors. Using the lemma of Neyman–Pearson, we derive the optimal statistics for template detection in the presence of Poisson noise. We demonstrate that, for known template shape (e.g., point sources), this method provides higher completeness, for a fixed false-alarm probability value, compared with filtering the image with the point-spread function (PSF). In turn, we find that filtering by the PSF is better than filtering the image using the Mexican-hat wavelet (used by wavdetect). For some background levels, our method improves the sensitivity of source detection by more than a factor of two over the popular Mexican-hat wavelet filtering. This filtering technique can also be used for fast PSF photometry and flare detection; it is efficient and straightforward to implement. We provide an implementation in MATLAB. The development of a complete code that works on real data, including the complexities of background subtraction and PSF variations, is deferred for future publication.
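The flavour of the Neyman–Pearson statistic for template detection in Poisson noise can be sketched in one dimension with a flat background. This is a simplified illustration only; the paper's actual statistic, its normalization, and the provided MATLAB implementation differ in detail, and the template and count values below are invented:

```python
import math

def poisson_matched_filter_score(counts, template, background):
    """Log-likelihood-ratio score for detecting a known additive template
    (e.g. a PSF scaled to an assumed flux) in a flat Poisson background.

    H1: counts_i ~ Poisson(background + template_i)
    H0: counts_i ~ Poisson(background)
    log LR = sum_i [ n_i * ln(1 + t_i / b) - t_i ]; large scores favour H1.
    """
    return sum(n * math.log(1.0 + t / background) - t
               for n, t in zip(counts, template))

# Tiny 1-D "PSF" and two simulated pixel-count vectors:
template = [1, 3, 5, 3, 1]        # expected source counts per pixel
background = 2.0                  # expected background counts per pixel
score_present = poisson_matched_filter_score([3, 5, 8, 4, 3], template, background)
score_absent = poisson_matched_filter_score([2, 1, 3, 2, 2], template, background)
```

Note that the counts enter through a logarithmic weight ln(1 + t/b) rather than the template itself, which is what distinguishes this statistic from plain PSF (matched) filtering in the Gaussian-noise limit.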
Particle Filtering for Obstacle Tracking in UAS Sense and Avoid Applications
Moccia, Antonio
2014-01-01
Obstacle detection and tracking is a key function for UAS sense and avoid applications. In fact, obstacles in the flight path must be detected and tracked in an accurate and timely manner in order to execute a collision avoidance maneuver in the case of a collision threat. The most important parameter for the assessment of a collision risk is the Distance at Closest Point of Approach, that is, the predicted minimum distance between own aircraft and intruder for the assigned current position and speed. Since established methodologies can suffer some loss of accuracy due to nonlinearities, advanced filtering methodologies, such as particle filters, can provide more accurate estimates of the target state in nonlinear problems, thus improving system performance in terms of collision risk estimation. The paper focuses on algorithm development and performance evaluation for an obstacle tracking system based on a particle filter. The particle filter algorithm was tested in off-line simulations based on data gathered during flight tests. In particular, radar-based tracking was considered in order to evaluate the impact of particle filtering in a single-sensor framework. The analysis shows some accuracy improvements in the estimation of Distance at Closest Point of Approach, thus reducing the delay in collision detection. PMID:25105154
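For a constant-velocity relative trajectory, the Distance at Closest Point of Approach reduces to a simple geometric computation on the relative position and velocity. The sketch below shows only that definition; in the paper, the relative state itself is what the particle filter estimates, and the geometry values here are invented:

```python
import math

def dcpa(rel_pos, rel_vel):
    """Distance at Closest Point of Approach for constant relative velocity.

    rel_pos, rel_vel: (x, y, z) position and velocity of the intruder
    relative to own aircraft.  Returns (distance, time_to_cpa).
    """
    rv = sum(p * v for p, v in zip(rel_pos, rel_vel))
    vv = sum(v * v for v in rel_vel)
    t_cpa = -rv / vv if vv > 0 else 0.0   # time minimizing |r + t*v|
    t_cpa = max(t_cpa, 0.0)               # closest point already passed
    closest = [p + t_cpa * v for p, v in zip(rel_pos, rel_vel)]
    return math.sqrt(sum(c * c for c in closest)), t_cpa

# Near head-on geometry: intruder 1000 m ahead with a 30 m lateral offset,
# closing at 50 m/s.
d, t = dcpa((1000.0, 30.0, 0.0), (-50.0, 0.0, 0.0))
```

Clamping the time of closest approach at zero handles intruders that are already moving away, for which the current distance is the minimum.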
Pols, David H J; Bramer, Wichor M; Bindels, Patrick J E; van de Laar, Floris A; Bohnen, Arthur M
2015-01-01
Physicians and researchers in the field of family medicine often need to find relevant articles in online medical databases for a variety of reasons. Because a search filter may help improve the efficiency and quality of such searches, we aimed to develop and validate search filters to identify research studies of relevance to family medicine. Using a new and objective method for search filter development, we developed and validated 2 search filters for family medicine. The sensitive filter had a sensitivity of 96.8% and a specificity of 74.9%. The specific filter had a specificity of 97.4% and a sensitivity of 90.3%. Our new filters should aid literature searches in the family medicine field. The sensitive filter may help researchers conducting systematic reviews, whereas the specific filter may help family physicians find answers to clinical questions at the point of care when time is limited. © 2015 Annals of Family Medicine, Inc.
Signal Processing for Time-Series Functions on a Graph
2018-02-01
as filtering to functions supported on graphs. These methods can be applied to scalar functions with a domain that can be described by a fixed... classical signal processing such as filtering to account for the graph domain. This work essentially divides into 2 basic approaches: graph Laplacian-based filtering and weighted adjacency matrix-based filtering. In Shuman et al. [11], and elaborated in Bronstein et al. [13], filtering operators are
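A minimal example of the graph-Laplacian-based approach is low-pass filtering a graph signal by repeated diffusion. This is an illustrative sketch only (not the cited works' operators); the graph, step size, and iteration count are invented:

```python
def graph_laplacian(adj):
    """Combinatorial Laplacian L = D - A for a weighted adjacency matrix."""
    n = len(adj)
    return [[(sum(adj[i]) if i == j else 0.0) - adj[i][j] for j in range(n)]
            for i in range(n)]

def laplacian_smooth(adj, signal, step=0.1, iters=20):
    """Low-pass filter a graph signal via diffusion steps x <- x - step*L*x.

    Each step attenuates high graph frequencies (large Laplacian
    eigenvalues) more strongly, smoothing the signal along graph edges.
    """
    L = graph_laplacian(adj)
    x = list(signal)
    n = len(x)
    for _ in range(iters):
        Lx = [sum(L[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [xi - step * lxi for xi, lxi in zip(x, Lx)]
    return x

# Path graph 0-1-2-3 with a spiky signal concentrated on node 1:
adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
smoothed = laplacian_smooth(adj, [0.0, 1.0, 0.0, 0.0])
```

Because the Laplacian's rows (and, for an undirected graph, columns) sum to zero, each diffusion step redistributes the signal among neighbors while conserving its total.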
DEVELOPMENT OF AG-1 SECTION FI ON METAL MEDIA FILTERS - 9061
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adamson, D.; Waggoner, Charles A.
Development of a metal media standard (FI) for ASME AG-1 (Code on Nuclear Air and Gas Treatment) has been under way for almost ten years. This paper will provide a brief history of the development process of this section and a detailed overview of its current content and status. There have been at least two points when dramatic changes were made in the scope of the document due to feedback from the full Committee on Nuclear Air and Gas Treatment (CONAGT). Development of the proposed section has required resolving several difficult issues associated with scope; namely, filtering efficiency, operating conditions (media velocity, pressure drop, etc.), qualification testing, and quality control/acceptance testing. A proposed version of Section FI is currently undergoing final revisions prior to being submitted for balloting. The section covers metal media filters with filtering efficiencies ranging from medium (less than 99.97%) to high (99.97% and greater). Two different types of high efficiency filters are addressed: those units intended to be a direct replacement of Section FC fibrous glass HEPA filters, and those that will be placed into newly designed systems capable of supporting greater static pressures and differential pressures across the filter elements. Direct replacements of FC HEPA filters in existing systems will be required to meet qualification and testing requirements equivalent to those contained in Section FC. A series of qualification and quality assurance test methods have been identified for the range of filtering efficiencies covered by this proposed standard. Performance characteristics of sintered metal powder vs. sintered metal fiber media are dramatically different with respect to parameters like differential pressure and rigidity of the media. Wide latitude will be allowed for owner specification of performance criteria for filtration units that will be placed into newly designed systems.
Such allowances will permit use of the most appropriate metal media for a system as specified by the owner with respect to material of manufacture, media velocity, system maximum static pressure, maximum differential pressure across the filter, and similar parameters.
Hernandez, Wilmar; de Vicente, Jesús; Sergiyenko, Oleg Y.; Fernández, Eduardo
2010-01-01
In this paper, the fast least-mean-squares (LMS) algorithm was used both to eliminate noise corrupting the important information coming from a piezoresistive accelerometer for automotive applications, and to improve the convergence rate of the filtering process based on the conventional LMS algorithm. The response of the accelerometer under test was corrupted by process and measurement noise, and the signal processing stage was carried out using both conventional filtering, which was already shown in a previous paper, and optimal adaptive filtering. The adaptive filtering process relied on the LMS adaptive filtering family, which has been shown to have very good convergence and robustness properties, and here a comparative analysis between the results of applying the conventional LMS algorithm and the fast LMS algorithm to solve a real-life filtering problem was carried out. In short, in this paper the piezoresistive accelerometer was tested with a multi-frequency acceleration excitation. Due to the kind of test conducted in this paper, the use of conventional filtering was discarded, and the choice of one adaptive filter over the other was based on the signal-to-noise ratio improvement and the convergence rate. PMID:22315579
NASA Astrophysics Data System (ADS)
Bílek, Petr; Hrůza, Jakub
2018-06-01
This paper deals with the optimization of the cleaning process on a liquid flat-sheet filter, accompanied by visualization of the inlet side of the filter. The cleaning process has a crucial impact on the hydrodynamic properties of flat-sheet filters. Cleaning methods prevent particles from depositing on the filter surface and forming a filtration cake. Visualization significantly helps to optimize the cleaning methods because it provides a new overall view of the filtration process over time. The optical method described in the article makes it possible to observe flow behaviour in a thin laser sheet on the inlet side of a tested filter during the cleaning process. Visualization is a strong tool for investigating the processes on filters in detail, and it is also possible to determine the concentration of particles after an image analysis. The impact of air flow rate, inverse pressure drop, and duration on the cleaning mechanism is investigated in the article. Images of the cleaning process are compared to the hydrodynamic data. The tests are carried out on a pilot filtration setup for waste water treatment.
Stable Kalman filters for processing clock measurement data
NASA Technical Reports Server (NTRS)
Clements, P. A.; Gibbs, B. P.; Vandergraft, J. S.
1989-01-01
Kalman filters have been used for some time to process clock measurement data. Due to instabilities in the standard Kalman filter algorithms, the results have been unreliable and difficult to obtain. During the past several years, stable forms of the Kalman filter have been developed, implemented, and used in many diverse applications. These algorithms, while algebraically equivalent to the standard Kalman filter, exhibit excellent numerical properties. Two of these stable algorithms, the Upper triangular-Diagonal (UD) filter and the Square Root Information Filter (SRIF), have been implemented to replace the standard Kalman filter used to process data from the Deep Space Network (DSN) hydrogen maser clocks. The data are time offsets between the clocks in the DSN, the timescale at the National Institute of Standards and Technology (NIST), and two geographically intermediate clocks. The measurements are made by using the GPS navigation satellites in mutual view between clocks. The filter programs allow the user to easily modify the clock models, the GPS satellite dependent biases, and the random noise levels in order to compare different modeling assumptions. The results of this study show the usefulness of such software for processing clock data. The UD filter is indeed a stable, efficient, and flexible method for obtaining optimal estimates of clock offsets, offset rates, and drift rates. A brief overview of the UD filter is also given.
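For illustration, a two-state (offset and drift-rate) Kalman filter for clock offset measurements can be sketched as below. This is the standard covariance form, not the numerically superior UD or SRIF factorizations the study implemented, and the noise levels and drift value are invented for the example:

```python
import random

def kalman_clock(measurements, dt=1.0, q=1e-10, r=1e-4):
    """Two-state Kalman filter for clock data: state = (offset, drift rate).

    measurements: noisy time-offset observations taken every dt seconds;
    q: process-noise variance on the drift rate per step;
    r: measurement variance.  Returns the (offset, drift) estimates.
    """
    x = [measurements[0], 0.0]          # initial offset, zero drift
    P = [[r, 0.0], [0.0, 1.0]]          # initial covariance (vague on drift)
    out = []
    for z in measurements:
        # Predict: offset grows by drift*dt; F = [[1, dt], [0, 1]].
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1],
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with a direct measurement of the offset (H = [1, 0]).
        s = P[0][0] + r
        k = [P[0][0] / s, P[1][0] / s]
        y = z - x[0]
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        out.append(tuple(x))
    return out

# A clock drifting at 1 ms/s, observed through noise with sigma = 0.01 s:
random.seed(1)
zs = [1e-3 * t + random.gauss(0.0, 0.01) for t in range(400)]
est = kalman_clock(zs)
```

The conventional update P = (I - KH)P used here is exactly the kind of step that can lose positive-definiteness in finite precision, which is what motivates the UD and SRIF forms discussed in the abstract.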
SkyMapper Filter Set: Design and Fabrication of Large-Scale Optical Filters
NASA Astrophysics Data System (ADS)
Bessell, Michael; Bloxham, Gabe; Schmidt, Brian; Keller, Stefan; Tisserand, Patrick; Francis, Paul
2011-07-01
The SkyMapper Southern Sky Survey will be conducted from Siding Spring Observatory with u, v, g, r, i, and z filters that comprise glued glass combination filters with dimensions of 309 × 309 × 15 mm. In this article we discuss the rationale for our bandpasses and physical characteristics of the filter set. The u, v, g, and z filters are entirely glass filters, which provide highly uniform bandpasses across the complete filter aperture. The i filter uses glass with a short-wave pass coating, and the r filter is a complete dielectric filter. We describe the process by which the filters were constructed, including the processes used to obtain uniform dielectric coatings and optimized narrowband antireflection coatings, as well as the technique of gluing the large glass pieces together after coating using UV transparent epoxy cement. The measured passbands, including extinction and CCD QE, are presented.
Magnetic topological analysis of coronal bright points
NASA Astrophysics Data System (ADS)
Galsgaard, K.; Madjarska, M. S.; Moreno-Insertis, F.; Huang, Z.; Wiegelmann, T.
2017-10-01
Context. We report on the first of a series of studies on coronal bright points which investigate the physical mechanism that generates these phenomena. Aims: The aim of this paper is to understand the magnetic-field structure that hosts the bright points. Methods: We use longitudinal magnetograms taken by the Solar Optical Telescope with the Narrowband Filter Imager. For a single case, magnetograms from the Helioseismic and Magnetic Imager were added to the analysis. The longitudinal magnetic field component is used to derive the potential magnetic fields of the large regions around the bright points. A magneto-static field extrapolation method is tested to verify the accuracy of the potential field modelling. The three-dimensional magnetic fields are investigated for the presence of magnetic null points and their influence on the local magnetic domain. Results: In nine out of ten cases the bright point resides in areas where the coronal magnetic field contains an opposite-polarity intrusion defining a magnetic null point above it. We find that X-ray bright points reside, in these nine cases, in a limited part of the projected fan-dome area, either fully inside the dome or extending over a limited area below which a dominant flux concentration typically resides. The tenth bright point is located in a bipolar loop system without an overlying null point. Conclusions: All bright points in coronal holes, and two out of three bright points in quiet Sun regions, are seen to reside in regions containing a magnetic null point. One or more as yet unidentified processes generate the bright points in specific regions of the fan-dome structure. The movies are available at http://www.aanda.org
Jeanne, Nicolas; Saliou, Adrien; Carcenac, Romain; Lefebvre, Caroline; Dubois, Martine; Cazabat, Michelle; Nicot, Florence; Loiseau, Claire; Raymond, Stéphanie; Izopet, Jacques; Delobel, Pierre
2015-01-01
HIV-1 coreceptor usage must be accurately determined before starting CCR5 antagonist-based treatment, as the presence of undetected minor CXCR4-using variants can cause subsequent virological failure. Ultra-deep pyrosequencing of HIV-1 V3 env makes it possible to detect low levels of CXCR4-using variants that current genotypic approaches miss. However, processing the mass of sequence data and identifying true minor variants while excluding artifactual sequences generated during amplification and ultra-deep pyrosequencing is rate-limiting. Arbitrary fixed cut-offs, below which minor variants are discarded, are currently used, but the errors generated during ultra-deep pyrosequencing are sequence-dependent rather than random. We have developed an automated processing pipeline for HIV-1 V3 env ultra-deep pyrosequencing data that uses biological filters to discard artifactual or non-functional V3 sequences, followed by statistical filters to determine position-specific sensitivity thresholds rather than arbitrary fixed cut-offs. It retains authentic sequences with point mutations at V3 positions of interest and discards artifactual ones with accurate sensitivity thresholds. PMID:26585833
NASA Astrophysics Data System (ADS)
Pourbabaee, Bahareh; Meskin, Nader; Khorasani, Khashayar
2016-08-01
In this paper, a novel robust sensor fault detection and isolation (FDI) strategy using the multiple-model (MM) approach is proposed that remains robust with respect to both time-varying parameter uncertainties and process and measurement noise in all channels. The scheme is composed of robust Kalman filters (RKFs) constructed for multiple piecewise linear (PWL) models obtained at various operating points of an uncertain nonlinear system. The parameter uncertainty is modeled by a time-varying norm-bounded admissible structure that affects all the PWL state-space matrices. The robust Kalman filter gain matrices are designed by solving two algebraic Riccati equations (AREs) that are expressed as two linear matrix inequality (LMI) feasibility conditions. The proposed multiple-RKF-based FDI scheme is simulated for a single-spool gas turbine engine to diagnose various sensor faults despite the presence of parameter uncertainties and process and measurement noise. Comparative studies confirm the superiority of the proposed FDI method over methods available in the literature.
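A minimal scalar version of the residual-based reasoning behind a filter bank can be sketched as follows; the models, noise levels and mean-absolute-residual decision rule are illustrative assumptions and do not reproduce the paper's robust ARE/LMI filter design:

```python
import numpy as np

def kf_norm_residuals(z, a, c, q, r):
    """Scalar Kalman filter for x' = a*x + w, z = c*x + v;
    returns the normalized innovation sequence."""
    x, p = 0.0, 1.0
    out = []
    for zk in z:
        s = c * p * c + r          # innovation variance
        nu = zk - c * x            # innovation
        out.append(nu / np.sqrt(s))
        k = p * c / s              # Kalman gain
        x += k * nu                # measurement update
        p *= (1 - k * c)
        x *= a                     # time update
        p = a * p * a + q
    return np.array(out)

def match_model(z, models):
    """Pick the model whose filter best whitens the data;
    a mismatched (faulty-sensor) model yields large residuals."""
    scores = [np.mean(np.abs(kf_norm_residuals(z, *m))) for m in models]
    return int(np.argmin(scores))
```

Running filters for a healthy model and a gain-fault model in parallel, the healthy model's residuals stay near unit variance while the mismatched model's residuals inflate, which is the decision signal an MM-based FDI scheme exploits.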
An Automated Road Roughness Detection from Mobile Laser Scanning Data
NASA Astrophysics Data System (ADS)
Kumar, P.; Angelats, E.
2017-05-01
Rough roads influence the safety of road users, as the accident rate increases with increasing unevenness of the road surface. Road roughness regions need to be efficiently detected and located to ensure their maintenance. Mobile Laser Scanning (MLS) systems provide a rapid and cost-effective alternative by providing accurate and dense point cloud data along the route corridor. In this paper, an automated algorithm is presented for detecting road roughness from MLS data. The presented algorithm is based on interpolating a smooth intensity raster surface from the LiDAR point cloud data using a point thinning process. The interpolated surface is further processed using morphological and multi-level Otsu thresholding operations to identify candidate road roughness regions. The candidate regions are finally filtered based on spatial density and standard deviation of elevation criteria to detect the roughness along the road surface. Test results of the road roughness detection algorithm on two road sections are presented. The developed approach can be used to provide comprehensive information to road authorities in order to schedule maintenance and ensure maximum safety conditions for road users.
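The Otsu thresholding step can be illustrated with its single-threshold special case, which picks the histogram split maximizing between-class variance. This numpy-only sketch is a simplified stand-in for the multi-level variant used in the pipeline:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Single-level Otsu threshold: maximize between-class variance
    of the intensity histogram."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)               # class-0 probability per split
    m = np.cumsum(p * centers)      # class-0 cumulative mean mass
    mt = m[-1]                      # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mt * w0 - m) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centers[np.argmax(sigma_b)]
```

On a clearly bimodal intensity surface, the chosen threshold falls in the valley between the two modes, separating candidate rough regions from the smooth background.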
Noise reduction in single time frame optical DNA maps
Müller, Vilhelm; Westerlund, Fredrik
2017-01-01
In optical DNA mapping technologies, sequence-specific intensity variations (DNA barcodes) along stretched and stained DNA molecules are produced. These "fingerprints" of the underlying DNA sequence have a resolution of the order of one kilobase pair, and the stretching of the DNA molecules is performed by surface adsorption or nano-channel setups. A post-processing challenge for nano-channel based methods, due to local and global random movement of the DNA molecule during imaging, is how to align different time frames in order to produce reproducible time-averaged DNA barcodes. The current solutions to this challenge are computationally rather slow. With high-throughput applications in mind, we here introduce a parameter-free method for filtering a single time frame noisy barcode (snap-shot optical map), measured in a fraction of a second. By using only a single time frame barcode we circumvent the need for post-processing alignment. We demonstrate that our method is successful at providing filtered barcodes which are less noisy and more similar to time-averaged barcodes. The method is based on the application of a low-pass filter on a single noisy barcode using the width of the Point Spread Function of the system as a unique, and known, filtering parameter. We find that after applying our method, the Pearson correlation coefficient (a real number in the range from -1 to 1) between the single time-frame barcode and the time average of the aligned kymograph increases significantly, roughly by 0.2 on average. By comparing to a database of more than 3000 theoretical plasmid barcodes we show that the capability to identify plasmids is improved by filtering single time-frame barcodes compared to the unfiltered analogues. Since snap-shot experiments and computational time using our method both take less than a second, this study opens up the possibility of high-throughput optical DNA mapping with improved reproducibility. PMID:28640821
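The core idea, a low-pass filter whose only parameter is the known PSF width, can be sketched as a zero-phase Gaussian convolution; the PSF width and signal below are illustrative, not the paper's data:

```python
import numpy as np

def psf_lowpass(barcode, psf_sigma_px):
    """Low-pass filter a single-frame barcode by circular convolution
    with a Gaussian whose width equals the system PSF (parameter-free
    in the sense that sigma is fixed by the optics)."""
    n = len(barcode)
    x = np.arange(n) - n // 2
    kernel = np.exp(-0.5 * (x / psf_sigma_px) ** 2)
    kernel /= kernel.sum()
    # ifftshift centers the kernel at index 0 for zero-phase filtering
    return np.real(np.fft.ifft(np.fft.fft(barcode)
                               * np.fft.fft(np.fft.ifftshift(kernel))))
```

Since sequence features below the PSF scale are unresolvable anyway, removing frequencies above the PSF cut-off discards only noise, raising the Pearson correlation with the underlying clean barcode.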
Unique Spectroscopy and Imaging of Mars with the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Villanueva, Geronimo L.; Altieri, Francesca; Clancy, R. Todd; Encrenaz, Therese; Fouchet, Thierry; Hartogh, Paul; Lellouch, Emmanuel; Lopez-Valverde, Miguel A.; Mumma, Michael J.; Novak, Robert E.;
2016-01-01
In this paper, we summarize the main capabilities of the James Webb Space Telescope (JWST) for performing observations of Mars. The distinctive vantage point of JWST at the Sun-Earth Lagrange point (L2) will allow sampling the full observable disk, permitting the study of short-term phenomena, diurnal processes (across the east-west axis), and latitudinal processes between the hemispheres (including seasonal effects) with excellent spatial resolution (0.07 arcsec at 2 micron). Spectroscopic observations will be achievable in the 0.7-5 micron spectral region with NIRSpec at a maximum resolving power of 2700, and of 8000 in the 1-1.25 micron range. Imaging will be attainable with the Near-Infrared Camera at 4.3 micron and with two narrow filters near 2 micron, while the nightside will be accessible with several filters in the 0.5 to 2 micron range. Such a powerful suite of instruments will be a major asset for the exploration and characterization of Mars. Some science cases include the mapping of the water D/H ratio, investigations of the Martian mesosphere via the characterization of the non-local thermodynamic equilibrium CO2 emission at 4.3 micron, studies of chemical transport via observations of the O2 nightglow at 1.27 micron, high-cadence mapping of the variability of dust and water-ice clouds, and sensitive searches for trace species and hydrated features on the Martian surface. In-flight characterization of the instruments may allow for additional science opportunities.
Unique Spectroscopy and Imaging of Terrestrial Planets with JWST
NASA Astrophysics Data System (ADS)
Villanueva, Geronimo Luis; JWST Mars Team
2017-06-01
In this talk, I will present the main capabilities of the James Webb Space Telescope (JWST) for performing observations of terrestrial planets, using Mars as a test case. The distinctive vantage point of JWST at the Sun-Earth Lagrange point (L2) will allow sampling the full observable disk, permitting the study of short-term phenomena, diurnal processes (across the east-west axis) and latitudinal processes between the hemispheres (including seasonal effects) with excellent spatial resolution (0.07 arcsec at 2 um). Spectroscopic observations will be achievable in the 0.7-5 um spectral region with NIRSpec at a maximum resolving power of 2700, and of 8000 in the 1-1.25 um range. Imaging will be attainable with NIRCam at 4.3 um and with two narrow filters near 2 um, while the nightside will be accessible with several filters in the 0.5 to 2 um range. Such a powerful suite of instruments will be a major asset for the exploration and characterization of Mars, and terrestrial planets in general. Some science cases include the mapping of the water D/H ratio, investigations of the Martian mesosphere via the characterization of the non-LTE CO2 emission at 4.3 um, studies of chemical transport via observations of the O2 nightglow at 1.27 um, high-cadence mapping of the variability of dust and water-ice clouds, and sensitive searches for trace species and hydrated features on the planetary surface.
Autonomous Pointing Control of a Large Satellite Antenna Subject to Parametric Uncertainty
Wu, Shunan; Liu, Yufei; Radice, Gianmarco; Tan, Shujun
2017-01-01
With the development of satellite mobile communications, large antennas are now widely used. The precise pointing of the antenna's optical axis is essential for many space missions. This paper addresses the challenging problem of high-precision autonomous pointing control of a large satellite antenna. The pointing dynamics are first derived. The proportional-derivative feedback and structural filter used to perform pointing maneuvers and suppress antenna vibrations are then presented. An adaptive controller to estimate actual system frequencies in the presence of modal parameter uncertainty is proposed. In order to reduce periodic errors, modified controllers, which combine the proposed adaptive controller with an active disturbance rejection filter, are then developed. The system stability and robustness are analyzed and discussed in the frequency domain. Numerical results are finally provided and demonstrate that the proposed controllers achieve good autonomy and robustness. PMID:28287450
Swarm Intelligence for Optimizing Hybridized Smoothing Filter in Image Edge Enhancement
NASA Astrophysics Data System (ADS)
Rao, B. Tirumala; Dehuri, S.; Dileep, M.; Vindhya, A.
In this modern era, image transmission and processing play a major role. It would be impossible to retrieve information from satellite and medical images without the help of image processing techniques. Edge enhancement is an image processing step that enhances the edge contrast of an image or video in an attempt to improve its acutance. Edges are representations of the discontinuities of image intensity functions. For processing these discontinuities in an image, a good edge enhancement technique is essential. The proposed work uses a new idea for edge enhancement using hybridized smoothing filters, and we introduce a promising technique for obtaining the best hybrid filter using swarm algorithms (Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO)) to search for an optimal sequence of filters from among a set of rather simple, representative image processing filters. This paper deals with the analysis of these swarm intelligence techniques through the combination of hybrid filters generated by these algorithms for image edge enhancement.
NASA Technical Reports Server (NTRS)
2008-01-01
This image, and many like it, are one way NASA's Phoenix Mars Lander is measuring trace amounts of water vapor in the atmosphere over far-northern Mars. Phoenix's Surface Stereo Imager (SSI) uses solar filters, or filters designed to image the sun, to make these images. The camera is aimed at the sky for long exposures. SSI took this image as a test on June 9, 2008, which was the Phoenix mission's 15th Martian day, or sol, since landing, at 5:20 p.m. local solar time. The camera was pointed about 38 degrees above the horizon. The white dots in the sky are detector dark current that will be removed during image processing and analysis. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space.
Study of gravity and magnetic anomalies using MAGSAT data
NASA Technical Reports Server (NTRS)
Braile, L. W.; Hinze, W. J.; Vonfrese, R. R. B. (Principal Investigator)
1981-01-01
The results of modeling satellite-elevation magnetic and gravity data using the constraints imposed by near-surface data and seismic evidence show that the magnetic minimum can be accounted for by either an intracrustal lithologic variation or by an upwarp of the Curie point isotherm. The long-wavelength anomalies of the NOO's vector magnetic survey of the conterminous U.S. were contoured and processed by various frequency filters to enhance particular characteristics. A preliminary inversion of the data was completed and the anomaly field calculated at 450 km from the equivalent magnetic sources to compare with the POGO satellite data. Considerable progress was made in studying the satellite magnetic data of South America and adjacent marine areas. Preliminary versions of the 1 deg free-air gravity anomaly map (20 mGal contour interval) and the high-cut (lambda approximately 8 deg) filtered anomaly maps are included.
NASA Astrophysics Data System (ADS)
Deng, Xinyi; Eskandar, Emad N.; Eden, Uri T.
2013-12-01
Understanding the role of rhythmic dynamics in normal and diseased brain function is an important area of research in neural electrophysiology. Identifying and tracking changes in rhythms associated with spike trains present an additional challenge, because standard approaches for continuous-valued neural recordings—such as local field potential, magnetoencephalography, and electroencephalography data—require assumptions that do not typically hold for point process data. Additionally, subtle changes in the history dependent structure of a spike train have been shown to lead to robust changes in rhythmic firing patterns. Here, we propose a point process modeling framework to characterize the rhythmic spiking dynamics in spike trains, test for statistically significant changes to those dynamics, and track the temporal evolution of such changes. We first construct a two-state point process model incorporating spiking history and develop a likelihood ratio test to detect changes in the firing structure. We then apply adaptive state-space filters and smoothers to track these changes through time. We illustrate our approach with a simulation study as well as with experimental data recorded in the subthalamic nucleus of Parkinson's patients performing an arm movement task. Our analyses show that during the arm movement task, neurons underwent a complex pattern of modulation of spiking intensity characterized initially by a release of inhibitory control at 20-40 ms after a spike, followed by a decrease in excitatory influence at 40-60 ms after a spike.
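The likelihood-ratio idea behind testing for a change in firing structure can be reduced to a toy Poisson-rate version; the two-state, history-dependent model of the paper is not reproduced here, and the split point and rates below are illustrative:

```python
import numpy as np

def lr_test_rate_change(counts, split):
    """Likelihood-ratio statistic for a change in Poisson firing rate
    at a known split point.  Under H0 (no change) the statistic is
    approximately chi-squared with 1 degree of freedom."""
    def ll(c):
        lam = max(c.mean(), 1e-12)        # MLE of the rate
        return np.sum(c * np.log(lam) - lam)  # log-factorials cancel in the LR
    full = ll(np.asarray(counts))
    seg = ll(np.asarray(counts[:split])) + ll(np.asarray(counts[split:]))
    return 2.0 * (seg - full)
```

A large statistic indicates the two segments are better explained by separate rates, i.e. a genuine modulation of spiking intensity rather than noise.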
Optical design of the lightning imager for MTG
NASA Astrophysics Data System (ADS)
Lorenzini, S.; Bardazzi, R.; Di Giampietro, M.; Feresin, F.; Taccola, M.; Cuevas, L. P.
2017-11-01
The Lightning Imager for Meteosat Third Generation is an optical payload with on-board data processing for the detection of lightning. The instrument will provide global monitoring of lightning events over the full Earth disk from geostationary orbit and will operate in day and night conditions. The requirements of the large field of view, together with the high detection efficiency for small and weak optical pulses superimposed on a much brighter and highly spatially and temporally variable background (full operation during day and night conditions, seasonal variations and different albedos between clouds, oceans and land), are driving the design of the optical instrument. The main challenge is to distinguish a true lightning event from false events generated by random noise (e.g. background shot noise), sun-glint diffusion or signal variations originated by microvibrations. This can be achieved thanks to a `multi-dimensional' filtering, simultaneously working in the spectral, spatial and temporal domains. The spectral filtering is achieved with a very narrowband filter centred on the bright lightning O2 triplet line (777.4 nm +/- 0.17 nm). The spatial filtering is achieved with a ground sampling distance significantly smaller (between 4 and 5 km at the sub-satellite point) than the dimensions of a typical lightning pulse. The temporal filtering is achieved by continuously sampling the Earth disk within a period close to 1 ms. This paper presents the status of the optical design, addressing the trade-off between different configurations and detailing the design and analyses of the current baseline. Emphasis is given to the discussion of the design drivers and the solutions implemented, in particular concerning the spectral filtering and the optimisation of the signal-to-noise ratio.
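The temporal stage of such filtering amounts to flagging pixels that jump above a running background estimate. The sketch below is a toy version of that stage only (the spectral and spatial stages are not modeled), with an assumed 5-sigma threshold:

```python
import numpy as np

def detect_transients(frames, k=5.0):
    """Flag pixels in the newest frame that exceed the background,
    estimated from the earlier frames, by k noise-sigmas."""
    frames = np.asarray(frames, dtype=float)
    bg = frames[:-1].mean(axis=0)            # per-pixel background
    sigma = frames[:-1].std(axis=0) + 1e-12  # per-pixel noise level
    return (frames[-1] - bg) > k * sigma
```

A millisecond-scale lightning pulse stands far above the per-pixel shot-noise level, while slow background drifts are absorbed into the running mean.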
NASA Astrophysics Data System (ADS)
Zhang, Baocheng; Liu, Teng; Yuan, Yunbin
2017-11-01
The integer ambiguity resolution enabled precise point positioning (PPP-RTK) has been proven advantageous in a wide range of applications. The realization of PPP-RTK concerns the isolation of satellite phase biases (SPBs) and other corrections from a network of Global Positioning System (GPS) reference receivers. This is generally based on a Kalman filter in order to achieve real-time capability, in which proper modeling of the dynamics of the various types of unknowns remains crucial. This paper seeks to gain insight into how to reasonably deal with the dynamic behavior of the estimable receiver phase biases (RPBs). Using dual-frequency GPS data collected at six colocated receivers over days 50-120 of 2015, we analyze the 30-s epoch-by-epoch estimates of L1 and wide-lane (WL) RPBs for each receiver pair. The dynamics observed in these estimates are a combined effect of three factors, namely the random measurement noise, the multipath and the ambient temperature. The first factor can be overcome by turning to a real-time filter and the second by the use of sidereal filtering. The third factor has an effect only on the WL, and this effect appears to be linear. After accounting for these three factors, the low-pass-filtered, sidereal-filtered, epoch-by-epoch estimates of L1 RPBs follow a random walk process, whereas those of WL RPBs are constant over time. Properly modeling the dynamics of RPBs is vital, as it ensures the best convergence of the Kalman-filtered, between-satellite single-differenced SPB estimates to their correct values and, in turn, shortens the time-to-first-fix at the user side.
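Why the random-walk versus constant distinction matters can be sketched with a scalar bias tracker: setting the process noise q to zero gives the "constant" model, which cannot follow a drifting bias. The noise levels and drift below are illustrative, not the GPS data of the paper:

```python
import numpy as np

def kalman_bias(z, q, r):
    """Track a scalar bias with a random-walk state model
    x_k = x_{k-1} + w, w ~ N(0, q); q = 0 reduces to the
    'constant bias' model."""
    x, p = z[0], r
    out = []
    for zk in z[1:]:
        p += q                    # time update (random walk)
        k = p / (p + r)           # Kalman gain
        x += k * (zk - x)         # measurement update
        p *= (1 - k)
        out.append(x)
    return np.array(out)
```

With a mismatched q = 0, the gain decays to zero and the filter freezes on an average, lagging the true drifting bias; a matched random-walk model keeps tracking.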
An adaptive deep-coupled GNSS/INS navigation system with hybrid pre-filter processing
NASA Astrophysics Data System (ADS)
Wu, Mouyan; Ding, Jicheng; Zhao, Lin; Kang, Yingyao; Luo, Zhibin
2018-02-01
The deep coupling of a global navigation satellite system (GNSS) with an inertial navigation system (INS) can provide accurate and reliable navigation information. There are several kinds of deeply-coupled structures. These can be divided mainly into coherent and non-coherent pre-filter based structures, each with its own advantages and disadvantages, especially in accuracy and robustness. In this paper, the existing pre-filters of the deeply-coupled structures are first analyzed and modified to improve their performance. Then, an adaptive GNSS/INS deeply-coupled algorithm with hybrid pre-filter processing is proposed to combine the advantages of the coherent and non-coherent structures. An adaptive hysteresis controller is designed to implement the hybrid pre-filter processing strategy. The simulation and vehicle test results show that the adaptive deeply-coupled algorithm with hybrid pre-filter processing can effectively improve navigation accuracy and robustness, especially in a GNSS-challenged environment.
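The hysteresis-controller idea can be sketched as a two-threshold switch on signal strength: the coherent pre-filter is preferred when the signal is strong, the non-coherent one when it is weak, and the dead band prevents chattering. The C/N0 thresholds below are illustrative assumptions, not values from the paper:

```python
def hybrid_select(cn0, state, low=28.0, high=32.0):
    """Hysteresis switch between a coherent pre-filter (accurate,
    needs strong signal) and a non-coherent one (robust when weak).
    cn0 is the carrier-to-noise density in dB-Hz."""
    if state == "coherent" and cn0 < low:
        return "noncoherent"
    if state == "noncoherent" and cn0 > high:
        return "coherent"
    return state                    # inside the dead band: hold
```

Because the switch-down and switch-up thresholds differ, a signal hovering around a single threshold cannot toggle the structure every epoch.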
Implementation of a Parallel Kalman Filter for Stratospheric Chemical Tracer Assimilation
NASA Technical Reports Server (NTRS)
Chang, Lang-Ping; Lyster, Peter M.; Menard, R.; Cohn, S. E.
1998-01-01
A Kalman filter for the assimilation of long-lived atmospheric chemical constituents has been developed for two-dimensional transport models on isentropic surfaces over the globe. An important attribute of the Kalman filter is that it calculates error covariances of the constituent fields using the tracer dynamics. Consequently, the current Kalman-filter assimilation is a five-dimensional problem (coordinates of two points and time), and it can only be handled on computers with large memory and high floating point speed. In this paper, an implementation of the Kalman filter for distributed-memory, message-passing parallel computers is discussed. Two approaches were studied: an operator decomposition and a covariance decomposition. The latter was found to be more scalable than the former, and it possesses the property that the dynamical model does not need to be parallelized, which is of considerable practical advantage. This code is currently used to assimilate constituent data retrieved by limb sounders on the Upper Atmosphere Research Satellite. Tests of the code examined the variance transport and observability properties. Aspects of the parallel implementation, some timing results, and a brief discussion of the physical results will be presented.
PERFORMANCE IMPROVEMENT OF CROSS-FLOW FILTRATION FOR HIGH LEVEL WASTE TREATMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duignan, M.; Nash, C.; Poirier, M.
2011-01-12
In the interest of accelerating waste treatment processing, the DOE has funded studies to better understand filtration, with the goal of improving filter fluxes in existing cross-flow equipment. The Savannah River National Laboratory (SRNL) was included in those studies, with a focus on start-up techniques, filter cake development, the application of filter aids (cake-forming solid precoats), and body feeds (flux-enhancing polymers). This paper discusses the progress of those filter studies. Cross-flow filtration is a key process step in many operating and planned waste treatment facilities to separate undissolved solids from supernate slurries. This separation technology generally has the advantage of self-cleaning through the action of wall shear stress created by the flow of waste slurry through the filter tubes. However, the ability of filter wall self-cleaning depends on the slurry being filtered. Many of the alkaline radioactive wastes are extremely challenging to filtration, e.g., those containing compounds of aluminum and iron, which have particles whose size and morphology reduce permeability. Unfortunately, low filter flux can be a bottleneck in waste processing facilities such as the Savannah River Modular Caustic Side Solvent Extraction Unit and the Hanford Waste Treatment Plant. Any improvement to the filtration rate would lead directly to increased throughput of the entire process. To date, increased rates are generally realized either by increasing the cross-flow filter axial flowrate, limited by pump capacity, or by increasing the filter surface area, limited by space and the increased pump load required. SRNL set up both dead-end and cross-flow filter tests to better understand filter performance based on filter media structure, flow conditions, filter cleaning, and several different types of filter aids and body feeds.
Using non-radioactive simulated wastes, both chemically and physically similar to the actual radioactive wastes, the authors performed several tests to demonstrate increases in filter performance. With the proper use of filter flow conditions and filter enhancers, filter flow rates can be increased over the rates currently realized.
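The role of cake resistance in limiting flux can be illustrated with the classical dead-end cake filtration relation J = ΔP / (μ (Rm + α c V/A)). This is a textbook model, not a fit to the SRNL data, and the parameter values below are arbitrary:

```python
def permeate_flux(dp, mu, r_membrane, alpha, c, v_per_area):
    """Dead-end cake filtration flux (Darcy's law with a growing
    cake resistance): dp pressure drop [Pa], mu viscosity [Pa s],
    r_membrane clean-medium resistance [1/m], alpha specific cake
    resistance [m/kg], c solids concentration [kg/m^3],
    v_per_area cumulative filtrate volume per area [m]."""
    return dp / (mu * (r_membrane + alpha * c * v_per_area))
```

The model makes the motivation for precoats and body feeds concrete: anything that lowers the effective specific cake resistance alpha raises the flux at a given throughput.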
Improving Image Matching by Reducing Surface Reflections Using Polarising Filter Techniques
NASA Astrophysics Data System (ADS)
Conen, N.; Hastedt, H.; Kahmen, O.; Luhmann, T.
2018-05-01
In dense stereo matching applications, surface reflections may lead to incorrect measurements and blunders in the resulting point cloud. To overcome the problem of disturbing reflections, polarising filters can be mounted on the camera lens and light source. Reflections in the images can be suppressed by crossing the polarising directions of the filters, leading to homogeneously illuminated images and better matching results. However, the filter may influence the camera's orientation parameters as well as the measuring accuracy. To quantify these effects, a calibration and an accuracy analysis are conducted within a spatial test arrangement according to the German guideline VDI/VDE 2634.1 (2002) using a DSLR with and without a polarising filter. In a second test, the interior orientation is analysed in more detail. The results do not show significant changes of the measuring accuracy in object space and only very small changes of the interior orientation (Δc ≤ 4 μm) with the polarising filter in use. Since in medical applications many tiny reflections are present and impede robust surface measurements, a prototypic trinocular endoscope is equipped with the polarising technique. The interior and relative orientation is determined and analysed. The advantage of the polarising technique for medical image matching is shown in an experiment with a moistened pig kidney. The accuracy and completeness of the resulting point cloud are clearly improved when using polarising filters. Furthermore, an accuracy analysis using a laser triangulation system is performed and the special reflection properties of metallic surfaces are presented.
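The physics behind crossing the filter directions is Malus's law: the intensity transmitted through an analyser falls with the squared cosine of the angle to the polarisation plane, so crossed filters (90 degrees) suppress the polarised specular reflection. A minimal sketch:

```python
import math

def transmitted_intensity(i0, theta_rad):
    """Malus's law: I = I0 * cos^2(theta), where theta is the angle
    between the light's polarisation plane and the analyser axis."""
    return i0 * math.cos(theta_rad) ** 2
```

Diffusely scattered light is largely depolarised and still passes, which is why the crossed configuration yields homogeneously illuminated images rather than dark ones.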
Pilot-scale tests of HEME and HEPA dissolution process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qureshi, Z.H.; Strege, D.K.
A series of pilot-scale demonstration tests for the dissolution of High Efficiency Mist Eliminators (HEMEs) and High Efficiency Particulate Air (HEPA) filters were performed at 1/5th linear scale. These fiberglass filters are to be used in the Defense Waste Processing Facility (DWPF) to decontaminate the effluents from the off-gases generated during the feed preparation process and vitrification. When removed, these filters will be dissolved in the Decontamination Waste Treatment Tank (DWTT) using 5 wt% NaOH solution. The contaminated fiberglass is converted to an aqueous stream which will be transferred to the waste tanks. The filter metal structure will be rinsed with process water before its disposal as low-level solid waste. The pilot-scale study reported here successfully demonstrated a simple one-step process using 5 wt% NaOH solution. The proposed process requires the installation of a new water spray ring with 30 nozzles. In addition to the reduced waste generated, the total process time is reduced to only 48 hours (a 66% saving in time). The pilot-scale tests clearly demonstrated that the dissolution process of HEMEs has two stages: chemical digestion of the filter and mechanical erosion of the digested filter. The digestion is achieved by a boiling 5 wt% caustic solution, whereas the mechanical breakdown of the digested filter is successfully achieved by spraying process water on the digested filter. An alternate method of breaking down the digested filter by increased air sparging of the solution was found to be marginally successful at best. The pilot-scale tests also demonstrated that the products of dissolution are easily pumpable by a centrifugal pump.
FILTSoft: A computational tool for microstrip planar filter design
NASA Astrophysics Data System (ADS)
Elsayed, M. H.; Abidin, Z. Z.; Dahlan, S. H.; Cholan N., A.; Ngu, Xavier T. I.; Majid, H. A.
2017-09-01
Filters are key components of any communication system, used to control spectrum and suppress interference. Designing a filter involves a long process as well as a good understanding of the underlying hardware technology. Hence this paper introduces an automated design tool based on a Matlab GUI, called FILTSoft (acronym for Filter Design Software), to ease the process. FILTSoft is a user-friendly filter design tool to aid, guide and expedite calculations from the lumped-element level to the microstrip structure. Users only have to provide the required filter specifications as well as the material description. FILTSoft will calculate and display the lumped-element details, the planar filter structure, and the expected filter response. An example of a lowpass filter design was calculated using FILTSoft and the results were validated through prototype measurement for comparison.
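The lumped-element starting point of such a lowpass design is typically the normalized prototype. For a Butterworth response the element values have the closed form g_k = 2 sin((2k-1)π/2n); this is the standard textbook formula, shown here as an illustration of the kind of calculation the tool automates, not FILTSoft's actual code:

```python
import math

def butterworth_g(n):
    """Normalized Butterworth lowpass prototype element values
    g_1..g_n for an n-th order filter (g_0 = g_{n+1} = 1)."""
    return [2.0 * math.sin((2 * k - 1) * math.pi / (2 * n))
            for k in range(1, n + 1)]
```

These g-values are then frequency- and impedance-scaled to real inductors and capacitors before the microstrip synthesis step.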
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed as the same continuous GRBF model; thus image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation time, graphics processing unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which is also of considerable reference value for the study of three-dimensional microscopic image deconvolution.
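The Wiener filter used as a baseline in the comparison can be sketched in one dimension (the GRBF machinery itself is omitted; the noise-to-signal ratio below is an assumed constant):

```python
import numpy as np

def wiener_deconv(blurred, psf, nsr=1e-3):
    """Frequency-domain Wiener deconvolution:
    X_hat = Y * conj(H) / (|H|^2 + NSR)."""
    h = np.fft.fft(psf, len(blurred))
    g = np.conj(h) / (np.abs(h) ** 2 + nsr)
    return np.real(np.fft.ifft(np.fft.fft(blurred) * g))
```

The nsr term regularizes frequencies where the PSF response is weak; without it, division by near-zero values of H amplifies noise without bound.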
MT-InSAR Landslide Monitoring with the Aid of Homogeneous Pixels Filter
NASA Astrophysics Data System (ADS)
Liu, X. J.; Zhao, C. Y.; Wang, B. H.; Zhu, W. F.
2018-04-01
SAR interferograms are often contaminated by random noise related to temporal decorrelation, geometrical decorrelation and thermal noise, which obscures the fringes and greatly decreases the density of coherent targets and the accuracy of InSAR deformation results, especially for landslide monitoring in vegetated regions and in the rainy season. Two SAR interferogram filtering methods, the Goldstein filter and the homogeneous pixels filter, are compared for one specific landslide. The results show that the homogeneous pixels filter is better than the Goldstein filter for small-scale loess landslide monitoring, as it can increase the density of monitoring points. Moreover, the precision of the InSAR result can reach the millimeter level, as verified against GPS time series measurements.
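The Goldstein filter mentioned above works on complex interferogram patches by amplifying the dominant fringe frequency relative to the noise floor. A stripped-down sketch (the original method also smooths the spectrum magnitude before exponentiation, which is omitted here):

```python
import numpy as np

def goldstein_patch(ifg_patch, alpha=0.5):
    """Simplified Goldstein filtering of one complex interferogram
    patch: weight the 2-D spectrum by its magnitude to the power
    alpha, boosting the fringe peak over the noise floor."""
    spec = np.fft.fft2(ifg_patch)
    weights = np.abs(spec) ** alpha
    return np.fft.ifft2(spec * weights)
```

Because fringe energy concentrates in a few spectral bins while decorrelation noise spreads over all of them, the magnitude weighting suppresses noise and cleans the wrapped phase.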
Improved characterization of slow-moving landslides by means of adaptive NL-InSAR filtering
NASA Astrophysics Data System (ADS)
Albiol, David; Iglesias, Rubén.; Sánchez, Francisco; Duro, Javier
2014-10-01
Advanced remote sensing techniques based on space-borne Synthetic Aperture Radar (SAR) have been developed during the last decade, showing their applicability for the monitoring of surface displacements in landslide areas. This paper presents an advanced Persistent Scatterer Interferometry (PSI) processing based on the Stable Point Network (SPN) technique, developed by the company Altamira-Information, for the monitoring of an active slow-moving landslide in the mountainous environment of El Portalet, Central Spanish Pyrenees. For this purpose, two TerraSAR-X data sets acquired in ascending mode, corresponding to the periods from April to November 2011 and from August to November 2013, respectively, are employed. The objective of this work is twofold. On the one hand, the benefits of employing Nonlocal Interferometric SAR (NL-InSAR) adaptive filtering techniques over vegetated scenarios, to maximize the chances of detecting natural distributed scatterers, such as bare or rocky areas, and deterministic point-like scatterers, such as man-made structures or poles, are put forward. In this context, the final PSI displacement maps retrieved with the proposed filtering technique are compared in terms of pixel density and quality with classical PSI, showing a significant improvement. On the other hand, since SAR systems are only sensitive to displacements in the line-of-sight (LOS) direction, the importance of projecting the PSI displacement results along the steepest gradient of the terrain slope is discussed. The improvements presented in this paper are particularly interesting in these types of applications, since they clearly allow better determination of the extension and dynamics of complex landslide phenomena.
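The LOS-to-slope projection amounts to dividing the measured displacement by the dot product of the LOS and slope unit vectors, under the assumption that motion occurs along the steepest slope. The cut-off value for unreliable geometry below is an illustrative assumption:

```python
import numpy as np

def project_los_to_slope(d_los, los_unit, slope_unit, min_cos=0.2):
    """Project a LOS displacement onto the steepest-slope direction,
    assuming the true motion is along the slope:
    d_slope = d_los / (los . slope)."""
    c = float(np.dot(los_unit, slope_unit))
    if abs(c) < min_cos:
        return np.nan   # near-orthogonal geometry: projection unreliable
    return d_los / c
```

When the slope direction is nearly perpendicular to the LOS, the divisor approaches zero and tiny LOS errors explode, which is why such points are typically masked rather than projected.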
An adaptive spatio-temporal Gaussian filter for processing cardiac optical mapping data.
Pollnow, S; Pilia, N; Schwaderlapp, G; Loewe, A; Dössel, O; Lenis, G
2018-06-04
Optical mapping is widely used as a tool to investigate cardiac electrophysiology in ex vivo preparations. Digital filtering of fluorescence-optical data is an important requirement for robust subsequent data analysis, and it remains a challenge when processing data acquired from thin mammalian myocardium. Therefore, we propose and investigate an adaptive spatio-temporal Gaussian filter for processing optical mapping signals from this kind of tissue, which typically has a low signal-to-noise ratio (SNR). We demonstrate how filtering parameters can be chosen automatically without additional user input. For systematic comparison of this filter with standard filtering methods from the literature, we generated synthetic signals representing optical recordings from atrial myocardium of a rat heart with varying SNR. Furthermore, all filter methods were applied to experimental data from an ex vivo setup. Our filter outperformed the other methods in local activation time detection at SNRs below 3 dB, the noise levels typically expected in these signals. At higher SNRs, the proposed filter performed slightly worse than the methods from the literature. In conclusion, the proposed adaptive spatio-temporal Gaussian filter is an appropriate tool for investigating fluorescence-optical data with low SNR. In contrast to the other investigated filters, its spatio-temporal parameters are adapted automatically. Copyright © 2018 Elsevier Ltd. All rights reserved.
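The core smoothing step can be sketched with a separable Gaussian kernel applied along the time axis and the two spatial axes of the recorded stack. This is only a minimal numpy sketch of the non-adaptive part; the paper's automatic parameter selection (e.g., scaling the widths by an estimated noise level) is not reproduced, and the function names are ours.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalized to unit sum."""
    if radius is None:
        radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_axis(data, sigma, axis):
    """Convolve along one axis with reflective padding."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    data = np.moveaxis(data, axis, -1)
    padded = np.pad(data, [(0, 0)] * (data.ndim - 1) + [(r, r)], mode="reflect")
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), -1, padded)
    return np.moveaxis(out, -1, axis)

def spatiotemporal_gaussian(movie, sigma_t, sigma_xy):
    """Separable Gaussian smoothing of a (t, y, x) optical-mapping stack."""
    out = smooth_axis(movie, sigma_t, axis=0)
    out = smooth_axis(out, sigma_xy, axis=1)
    out = smooth_axis(out, sigma_xy, axis=2)
    return out
```

An adaptive variant could, for instance, widen `sigma_t` and `sigma_xy` when the estimated noise level rises; that rule is hypothetical here.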
Real time microcontroller implementation of an adaptive myoelectric filter.
Bagwell, P J; Chappell, P H
1995-03-01
This paper describes a real-time digital adaptive filter for processing myoelectric signals. The filter time constant is automatically selected by the adaptation algorithm, giving a significant improvement over linear filters for estimating muscle force and controlling a prosthetic device. Interference from mains sources often causes problems for myoelectric processing, so 50 Hz and all of its harmonic frequencies are attenuated by an averaging filter and a differencing stage. This makes practical electrode placement and contact less critical and less time-consuming. An economical real-time implementation is essential for a prosthetic controller, and this is achieved using an Intel 80C196KC microcontroller.
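The mains-rejection idea can be illustrated with a hedged sketch: a moving average whose window spans exactly one mains period has spectral nulls at 50 Hz and at every harmonic, since each full cycle sums to zero. The 1 kHz sampling rate is an assumption for illustration, not taken from the paper.

```python
import numpy as np

FS = 1000             # sampling rate in Hz (assumed for this sketch)
MAINS = 50            # mains frequency in Hz
PERIOD = FS // MAINS  # 20 samples = exactly one mains cycle

def mains_reject(x):
    """Moving average over one mains period: nulls at 50 Hz and all harmonics."""
    kernel = np.ones(PERIOD) / PERIOD
    return np.convolve(x, kernel, mode="same")
```

Slowly varying envelope components (such as a rectified EMG level) pass through almost unchanged, while 50 Hz, 100 Hz, 150 Hz, ... are cancelled exactly.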
Intrinsic low pass filtering improves signal-to-noise ratio in critical-point flexure biosensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, Ankit; Alam, Muhammad Ashraful, E-mail: alam@purdue.edu
2014-08-25
A flexure biosensor consists of a suspended beam and a fixed bottom electrode. Adsorption of the target biomolecules on the beam changes its stiffness and hence its deflection. It is now well established that the sensitivity of the sensor is maximized close to the pull-in instability point, where the effective stiffness of the beam vanishes. The question of whether the signal-to-noise ratio (SNR) and the limit of detection (LOD) also improve close to the instability point, however, has remained unanswered. In this article, we systematically analyze the noise response to evaluate the SNR and establish the LOD of critical-point flexure sensors. We find that a flexure sensor acts like an effective low-pass filter close to the instability point due to its relatively small resonance frequency, and rejects high-frequency noise, leading to improved SNR and LOD. We believe that our conclusions establish the uniqueness and the technological relevance of critical-point biosensors.
Real-time volcano monitoring using GNSS single-frequency receivers
NASA Astrophysics Data System (ADS)
Lee, Seung-Woo; Yun, Sung-Hyo; Kim, Do Hyeong; Lee, Dukkee; Lee, Young J.; Schutz, Bob E.
2015-12-01
We present a real-time volcano monitoring strategy that uses the Global Navigation Satellite System (GNSS), and we examine the performance of the strategy by processing simulated and real data and comparing the results with published solutions. The cost of implementing the strategy is reduced greatly by using single-frequency GNSS receivers except for one dual-frequency receiver that serves as a base receiver. Positions of the single-frequency receivers are computed relative to the base receiver on an epoch-by-epoch basis using the high-rate double-difference (DD) GNSS technique, while the position of the base station is fixed to the values obtained with a deferred-time precise point positioning technique and updated on a regular basis. Since the performance of the single-frequency high-rate DD technique depends on the conditions of the ionosphere over the monitoring area, the ionospheric total electron content is monitored using the dual-frequency data from the base receiver. The surface deformation obtained with the high-rate DD technique is eventually processed by a real-time inversion filter based on the Mogi point source model. The performance of the real-time volcano monitoring strategy is assessed through a set of tests and case studies, in which the data recorded during the 2007 eruption of Kilauea and the 2005 eruption of Augustine are processed in a simulated real-time mode. The case studies show that the displacement time series obtained with the strategy seem to agree with those obtained with deferred-time, dual-frequency approaches at the level of 10-15 mm. Differences in the estimated volume change of the Mogi source between the real-time inversion filter and previously reported works were in the range of 11 to 13% of the maximum volume changes of the cases examined.
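The real-time inversion filter's forward model is the Mogi point source. A minimal sketch of its vertical surface displacement for a volume change at depth, using the standard closed form (the function name and parameter choices are ours; a real-time inversion would fit the volume change, and possibly the source position, to the DD-derived displacements epoch by epoch):

```python
import numpy as np

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source.

    r     : radial distance from the source axis (m)
    depth : source depth (m)
    dV    : source volume change (m^3)
    nu    : Poisson's ratio (0.25 gives the familiar 3*dV/(4*pi) form)
    """
    R3 = (r**2 + depth**2) ** 1.5
    return (1.0 - nu) * dV * depth / (np.pi * R3)
```

The displacement peaks directly above the source and decays with radial distance, which is what lets a point-source filter localize and size the pressure change from a small GNSS network.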
Reverse engineering gene regulatory networks from measurement with missing values.
Ogundijo, Oyetunji E; Elmas, Abdulkadir; Wang, Xiaodong
2016-12-01
Gene expression time series data are usually in the form of high-dimensional arrays. Unfortunately, the data may contain missing values: either the expression values of some genes at some time points, or all expression values at a single time point or a set of consecutive time points. This significantly affects the performance of many gene expression analysis algorithms that take as input the complete matrix of gene expression measurements. For instance, previous works have shown that gene regulatory interactions can be estimated from the complete matrix of gene expression measurements. Yet, to date, few algorithms have been proposed for the inference of gene regulatory networks from gene expression data with missing values. We describe a nonlinear dynamic stochastic model for the evolution of gene expression. The model captures the structural, dynamical, and nonlinear natures of the underlying biomolecular systems. We present point-based Gaussian approximation (PBGA) filters for joint state and parameter estimation of the system with one-step or two-step missing measurements. The PBGA filters use Gaussian approximation and various quadrature rules, such as the unscented transform (UT), the third-degree cubature rule and the central difference rule, for computing the related posteriors. The proposed algorithm is evaluated with satisfying results for synthetic networks, in silico networks released as part of the DREAM project, and a real biological network, the in vivo reverse engineering and modeling assessment (IRMA) network of the yeast Saccharomyces cerevisiae. PBGA filters are thus proposed to elucidate the underlying gene regulatory network (GRN) from time series gene expression data that contain missing values. In our state-space model, we propose a measurement model that incorporates the effect of the missing data points into the sequential algorithm.
This approach produces a better inference of the model parameters and hence, more accurate prediction of the underlying GRN compared to when using the conventional Gaussian approximation (GA) filters ignoring the missing data points.
Least-mean-square spatial filter for IR sensors.
Takken, E H; Friedman, D; Milton, A F; Nitzberg, R
1979-12-15
A new least-mean-square filter is defined for signal-detection problems. The technique is proposed for scanning IR surveillance systems operating in poorly characterized but primarily low-frequency clutter interference. Near-optimal detection of point-source targets is predicted both for continuous-time and sampled-data systems.
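The abstract does not give the filter's exact form, but the family it belongs to can be illustrated with the textbook LMS weight update, here used to identify an unknown FIR response; all names and parameters below are ours, not the paper's spatial formulation.

```python
import numpy as np

def lms_filter(x, d, n_taps=4, mu=0.02):
    """Standard LMS adaptive FIR filter.

    x : input signal, d : desired signal.
    Returns (final weights, per-sample error) after one pass.
    """
    w = np.zeros(n_taps)
    err = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]     # most recent sample first
        y = w @ u                     # filter output
        err[n] = d[n] - y             # estimation error
        w += 2 * mu * err[n] * u      # stochastic-gradient weight update
    return w, err
```

With a stationary input the weights converge toward the Wiener solution; the appeal for poorly characterized clutter is precisely that no explicit clutter model is needed.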
Field testing of a low-cost retrofit filter berm to treat stormwater runoff contaminants.
DOT National Transportation Integrated Search
2008-09-01
The goal of this cooperative effort between MaineDOT and the University of New Hampshire was to test a low-cost retrofit filter berm that would reduce non-point pollution from highway runoff. The retrofit berm would be easy to construct using rea...
Traffic sign detection in MLS acquired point clouds for geometric and image-based semantic inventory
NASA Astrophysics Data System (ADS)
Soilán, Mario; Riveiro, Belén; Martínez-Sánchez, Joaquín; Arias, Pedro
2016-04-01
Nowadays, mobile laser scanning has become a valid technology for infrastructure inspection. This technology permits collecting accurate 3D point clouds of urban and road environments, and the geometric and semantic analysis of these data has become an active research topic in recent years. This paper focuses on the detection of vertical traffic signs in 3D point clouds acquired by a LYNX Mobile Mapper system, comprising laser scanning and RGB cameras. Each traffic sign is automatically detected in the LiDAR point cloud, and its main geometric parameters can be automatically extracted, thereby aiding the inventory process. Furthermore, the 3D positions of the traffic signs are reprojected onto the 2D images, which are spatially and temporally synced with the point cloud. Image analysis allows for recognizing the traffic sign semantics using machine learning approaches. The presented method was tested in road and urban scenarios in Galicia (Spain). The recall results for traffic sign detection are close to 98%, and existing false positives can be easily filtered after point cloud projection. Finally, the lack of a large, publicly available Spanish traffic sign database is pointed out.
Rucci, Michael; Hardie, Russell C; Barnard, Kenneth J
2014-05-01
In this paper, we present a computationally efficient video restoration algorithm to address both blur and noise for a Nyquist sampled imaging system. The proposed method utilizes a temporal Kalman filter followed by a correlation-model based spatial adaptive Wiener filter (AWF). The Kalman filter employs an affine background motion model and novel process-noise variance estimate. We also propose and demonstrate a new multidelay temporal Kalman filter designed to more robustly treat local motion. The AWF is a spatial operation that performs deconvolution and adapts to the spatially varying residual noise left in the Kalman filter stage. In image areas where the temporal Kalman filter is able to provide significant noise reduction, the AWF can be aggressive in its deconvolution. In other areas, where less noise reduction is achieved with the Kalman filter, the AWF balances the deconvolution with spatial noise reduction. In this way, the Kalman filter and AWF work together effectively, but without the computational burden of full joint spatiotemporal processing. We also propose a novel hybrid system that combines a temporal Kalman filter and BM3D processing. To illustrate the efficacy of the proposed methods, we test the algorithms on both simulated imagery and video collected with a visible camera.
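The temporal stage can be illustrated, in a much reduced form, by a scalar Kalman filter run independently at each pixel under a static-background assumption; the paper's affine motion model, process-noise estimate, and AWF deconvolution stage are omitted, and the parameter values are ours.

```python
import numpy as np

def temporal_kalman(frames, q=1e-4, r=0.01):
    """Scalar Kalman filter run independently per pixel over time.

    frames : (t, y, x) stack; q = process variance, r = measurement variance.
    Returns the temporally filtered stack.
    """
    est = frames[0].astype(float)
    p = np.full(est.shape, r)          # initial estimate covariance
    out = np.empty_like(frames, dtype=float)
    out[0] = est
    for k in range(1, len(frames)):
        p = p + q                      # predict (static-background model)
        gain = p / (p + r)             # Kalman gain per pixel
        est = est + gain * (frames[k] - est)
        p = (1.0 - gain) * p
        out[k] = est
    return out
```

Where this temporal stage reduces noise strongly, a subsequent spatial Wiener stage can deconvolve aggressively; where it cannot (e.g., local motion), the spatial stage must also denoise, which is the division of labor the paper describes.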
NASA Technical Reports Server (NTRS)
Takacs, L. L.; Kalnay, E.; Navon, I. M.
1985-01-01
A normal modes expansion technique is applied to perform high latitude filtering in the GLAS fourth order global shallow water model with orography. The maximum permissible time step in the solution code is controlled by the frequency of the fastest propagating mode, which can be a gravity wave. Numerical methods are defined for filtering the data to identify the number of gravity modes to be included in the computations in order to obtain the appropriate zonal wavenumbers. The performances of the model with and without the filter, and with a time tendency and a prognostic field filter are tested with simulations of the Northern Hemisphere winter. The normal modes expansion technique is shown to leave the Rossby modes intact and permit 3-5 day predictions, a range not possible with the other high-latitude filters.
Robust and Quantized Wiener Filters for p-Point Spectral Classes.
1980-01-01
AFOSR-TR-80-0425. School of Electrical Engineering, Philadelphia, PA 19104. In Section III, we show that a piecewise constant filter also possesses this property, and we address determining the optimum piecewise constant filter using a band model for the PSDs. Poor [3, 4] then considered a particular class of spectral classes.
Tightly Integrating Optical And Inertial Sensors For Navigation Using The UKF
2008-03-01
The effectiveness of fusing imaging and inertial sensors using an Extended Kalman Filter (EKF) algorithm has been shown in previous research efforts. In order to cope with the divergence problem caused by the model assumed by the EKF, the Unscented (Sigma-Point) Kalman Filter (UKF) has been proposed in the literature.
Methodology for modeling the microbial contamination of air filters.
Joe, Yun Haeng; Yoon, Ki Young; Hwang, Jungho
2014-01-01
In this paper, we propose a theoretical model to simulate microbial growth on contaminated air filters and entrainment of bioaerosols from the filters to an indoor environment. Air filter filtration and antimicrobial efficiencies, and effects of dust particles on these efficiencies, were evaluated. The number of bioaerosols downstream of the filter could be characterized according to three phases: initial, transitional, and stationary. In the initial phase, the number was determined by filtration efficiency, the concentration of dust particles entering the filter, and the flow rate. During the transitional phase, the number of bioaerosols gradually increased up to the stationary phase, at which point no further increase was observed. The antimicrobial efficiency and flow rate were the dominant parameters affecting the number of bioaerosols downstream of the filter in the transitional and stationary phase, respectively. It was found that the nutrient fraction of dust particles entering the filter caused a significant change in the number of bioaerosols in both the transitional and stationary phases. The proposed model would be a solution for predicting the air filter life cycle in terms of microbiological activity by simulating the microbial contamination of the filter.
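The three phases can be mimicked by a toy discrete-time model: constant penetration through the filter, logistic microbial growth on the accumulated viable load, and entrainment proportional to that load. All parameter values are hypothetical and the functional forms are ours, not the paper's.

```python
def bioaerosol_downstream(steps, c_in=100.0, eta_filt=0.9, eta_anti=0.5,
                          growth=0.3, capacity=1e4, entrain=1e-3):
    """Toy three-phase model of bioaerosol counts downstream of a loaded filter.

    c_in     : upstream bioaerosol count per step
    eta_filt : filtration efficiency, eta_anti : antimicrobial efficiency
    Returns a list of downstream counts per time step.
    """
    load = 0.0          # viable microbes accumulated on the filter medium
    out = []
    for _ in range(steps):
        captured = c_in * eta_filt
        load += captured * (1.0 - eta_anti)               # survivors of antimicrobial action
        load += growth * load * (1.0 - load / capacity)   # logistic growth on the medium
        penetration = c_in * (1.0 - eta_filt)             # initial-phase term
        out.append(penetration + entrain * load)          # plus entrained growth
    return out
```

The trajectory reproduces the qualitative shape described in the abstract: a filtration-dominated initial level, a growth-driven transitional rise, and a stationary plateau set by the medium's carrying capacity.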
Motion estimation using point cluster method and Kalman filter.
Senesh, M; Wolf, A
2009-05-01
The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body's long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures (PCT, Kalman filter followed by PCT, and low-pass filter followed by PCT) enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted from adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. The instantaneous frequencies obtained from the estimated angle based on the PCT method alone were also more dispersed than those obtained with the Kalman filter followed by the PCT. Adding a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low-pass filter.
Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal instantaneous frequencies.
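The Kalman stage can be illustrated with a standard constant-velocity filter smoothing a noisy position measurement (here a ramp rather than a pendulum angle, so the motion model is exact); this is a generic sketch with parameters chosen by us, not the authors' implementation.

```python
import numpy as np

def cv_kalman(z, dt=0.01, q=5.0, r=0.05):
    """Constant-velocity Kalman filter for a noisy scalar measurement series z."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])              # we observe the position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    x = np.array([z[0], 0.0])
    P = np.eye(2)
    out = np.empty(len(z))
    for k, zk in enumerate(z):
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                 # innovation covariance (1x1)
        K = (P @ H.T) / S                   # Kalman gain (2x1)
        x = x + (K * (zk - H @ x)).ravel()  # update
        P = (np.eye(2) - K @ H) @ P
        out[k] = x[0]
    return out
```

Unlike a fixed low-pass filter, the gain follows the modeled uncertainties, which is why the filtered angle in the study tracked the motion with less distortion.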
Quantum filtering for multiple diffusive and Poissonian measurements
NASA Astrophysics Data System (ADS)
Emzir, Muhammad F.; Woolley, Matthew J.; Petersen, Ian R.
2015-09-01
We provide a rigorous derivation of a quantum filter for the case of multiple measurements being made on a quantum system. We consider a class of measurement processes which are functions of bosonic field operators, including combinations of diffusive and Poissonian processes. This covers the standard cases from quantum optics, where homodyne detection may be described as a diffusive process and photon counting may be described as a Poissonian process. We obtain a necessary and sufficient condition for any pair of such measurements taken at different output channels to satisfy a commutation relationship. Then, we derive a general, multiple-measurement quantum filter as an extension of a single-measurement quantum filter. As an application we explicitly obtain the quantum filter corresponding to homodyne detection and photon counting at the output ports of a beam splitter.
Low-pass parabolic FFT filter for airborne and satellite lidar signal processing.
Jiao, Zhongke; Liu, Bo; Liu, Enhai; Yue, Yongjian
2015-10-14
In order to reduce random errors in lidar signal inversion, a low-pass parabolic fast Fourier transform filter (PFFTF) was introduced for noise elimination. A compact airborne Raman lidar system was studied, which applied the PFFTF to process lidar signals. The mathematics and simulations of the PFFTF, along with other low-pass filters (sliding mean filter (SMF), median filter (MF), empirical mode decomposition (EMD) and wavelet transform (WT)) were studied, and the practical engineering value of the PFFTF for lidar signal processing was verified. The method has been tested on real lidar signals from the Wyoming Cloud Lidar (WCL). Results show that the PFFTF has advantages over the other methods: it preserves the signal's high-frequency components well while simultaneously removing much of the random noise.
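The abstract does not define the PFFTF precisely; one plausible reading is an FFT-domain low-pass gain with a parabolic roll-off between a pass frequency and a stop frequency. The sketch below implements that reading, with all names and cutoffs ours.

```python
import numpy as np

def parabolic_fft_lowpass(x, fs, f_pass, f_stop):
    """FFT low-pass with a parabolic roll-off between f_pass and f_stop.

    Gain is 1 below f_pass, 0 above f_stop, and a downward parabola between.
    """
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    gain = np.ones_like(f)
    mid = (f > f_pass) & (f < f_stop)
    gain[mid] = 1.0 - ((f[mid] - f_pass) / (f_stop - f_pass)) ** 2
    gain[f >= f_stop] = 0.0
    return np.fft.irfft(X * gain, n=len(x))
```

The smooth parabolic transition avoids the ringing that a brick-wall FFT cutoff would introduce into the range-resolved signal.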
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Tabak, D.
1979-01-01
The study involves the bank of filters approach to analytical redundancy management since this is amenable to microelectronic implementation. Attention is given to a study of the UD factorized filter to determine if it gives more accurate estimates than the standard Kalman filter when data processing word size is reduced. It is reported that, as the word size is reduced, the effect of modeling error dominates the filter performance of the two filters. However, the UD filter is shown to maintain a slight advantage in tracking performance. It is concluded that because of the UD filter's stability in the serial processing mode, it remains the leading candidate for microelectronic implementation.
A high-order spatial filter for a cubed-sphere spectral element model
NASA Astrophysics Data System (ADS)
Kang, Hyun-Gyu; Cheong, Hyeong-Bin
2017-04-01
A high-order spatial filter is developed for the spectral-element-method dynamical core on the cubed-sphere grid, which employs the Gauss-Lobatto Lagrange interpolating polynomials (GLLIP) as orthogonal basis functions. The filter equation is a high-order Helmholtz equation, corresponding to implicit time-differencing of a diffusion equation employing a high-order Laplacian. The Laplacian operator is discretized within a cell, the building block of the cubed-sphere grid, which consists of the Gauss-Lobatto grid. When discretizing a high-order Laplacian, the requirement of C0 continuity along the cell boundaries means that grid points in neighboring cells must be used for the target cell; the number of neighboring cells involved grows nearly quadratically with the filter order. The discrete Helmholtz equation yields a huge, highly sparse matrix equation of size N*N, with N the total number of grid points on the globe. The number of nonzero entries is also almost quadratically proportional to the filter order. Filtering is accomplished by solving this matrix equation. While requiring significant computing time, the solution of the global matrix provides a filtered field free of discontinuities along the cell boundaries. To achieve computational efficiency and accuracy at the same time, the matrix equation was also solved by accounting for only a finite number of adjacent cells; this is called a local-domain filter. It was shown that, to remove numerical noise near the grid scale, including 5*5 cells in the local-domain filter was sufficient, giving the same accuracy as the global-domain solution while reducing the computing time to a considerably lower level. The high-order filter was evaluated using the standard test cases, including the baroclinic instability of the zonal flow.
Results indicated that the filter performs better on the removal of grid-scale numerical noises than the explicit high-order viscosity. It was also presented that the filter can be easily implemented on the distributed-memory parallel computers with a desirable scalability.
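A 1-D periodic analogue conveys the filter equation: an implicit step of the high-order (hyperdiffusion) Helmholtz problem, solved exactly in spectral space. The cubed-sphere spectral-element discretization and the local-domain solver are, of course, not reproduced; wavenumber normalization and cutoff are our choices.

```python
import numpy as np

def highorder_filter(u, order=4, kcut=8):
    """Implicit high-order (hyperdiffusion) filter on a 1-D periodic grid.

    Solves (I + (k/kcut)^(2*order)) u_f = u in spectral space: wavenumbers
    well below kcut pass nearly unchanged; those above are strongly damped.
    """
    U = np.fft.rfft(u)
    k = np.arange(len(U))
    U /= 1.0 + (k / kcut) ** (2 * order)
    return np.fft.irfft(U, n=len(u))
```

Raising the order sharpens the transition, which is exactly why a high-order filter can scrub grid-scale noise while leaving the resolved flow (the analogue of the Rossby modes) essentially intact.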
Audible acoustics in high-shear wet granulation: application of frequency filtering.
Hansuld, Erin M; Briens, Lauren; McCann, Joe A B; Sayani, Amyn
2009-08-13
Previous work has shown that analysis of audible acoustic emissions from high-shear wet granulation has potential as a technique for end-point detection. In this research, audible acoustic emissions (AEs) from three different formulations were studied to further develop this technique as a process analytical technology. Condenser microphones were attached to three different locations on a PMA-10 high-shear granulator (air exhaust, bowl and motor) to target different sound sources. Size, flowability and tablet break load data were collected to support formulator end-point ranges and the interpretation of the AE analysis. Each formulation had a unique total power spectral density (PSD) profile that was sensitive to granule formation and end-point. Analyzing the total PSD in 10 Hz segments identified profiles with reduced run variability and distinct maxima and minima suitable for routine granulation monitoring and end-point control. A partial least squares discriminant analysis method was developed to automate the selection of key 10 Hz frequency groups using variable importance to projection. The results support the use of frequency refinement as a way forward in the development of acoustic emission analysis for granulation monitoring and end-point control.
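The 10 Hz segmentation can be sketched by summing a periodogram over consecutive 10 Hz bands; the windowing and PSD conventions of the study are not given in the abstract, so this is only an illustrative reading with our own names.

```python
import numpy as np

def band_power_profile(x, fs, band_hz=10.0):
    """Total power in consecutive band_hz-wide frequency bands via the FFT."""
    X = np.fft.rfft(x)
    psd = (np.abs(X) ** 2) / (fs * len(x))   # one-sided periodogram scaling
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    edges = np.arange(0.0, f[-1] + band_hz, band_hz)
    # Each entry: (band start in Hz, summed power); Nyquist bin dropped.
    return [(lo, psd[(f >= lo) & (f < lo + band_hz)].sum()) for lo in edges[:-1]]
```

Tracking how power migrates between such bands over a batch is the kind of profile the study feeds into end-point detection and PLS-DA band selection.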
A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images
Luo, Yaozhong; Liu, Longzhong; Li, Xuelong
2017-01-01
Ultrasound imaging has become one of the most popular medical imaging modalities, with numerous diagnostic applications. However, ultrasound (US) image segmentation, an essential step for further analysis, is a challenging task due to poor image quality. In this paper, we propose a new segmentation scheme that combines both region- and edge-based information in the robust graph-based (RGB) segmentation method. The only interaction required is to select two diagonal points to determine a region of interest (ROI) on the original image. The ROI image is smoothed by a bilateral filter and then contrast-enhanced by histogram equalization. The enhanced image is then filtered by pyramid mean shift to improve homogeneity. With parameters optimized by the particle swarm optimization (PSO) algorithm, the RGB segmentation method is applied to segment the filtered image. The segmentation results of our method have been compared with the corresponding results obtained by three existing approaches, and four metrics have been used to measure the segmentation performance. The experimental results show that the method achieves the best overall performance, with the lowest ARE (10.77%), the second highest TPVF (85.34%), and the second lowest FPVF (4.48%). PMID:28536703
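The bilateral smoothing step of the pipeline can be illustrated in one dimension: each sample is replaced by a mean weighted jointly by spatial closeness and intensity similarity, which suppresses speckle-like noise while keeping edges. A minimal sketch with parameter values chosen by us for illustration:

```python
import numpy as np

def bilateral_1d(x, radius=3, sigma_s=2.0, sigma_r=0.2):
    """Edge-preserving bilateral filter on a 1-D signal.

    Weights combine spatial closeness (sigma_s, in samples) and intensity
    similarity (sigma_r, in signal units).
    """
    offs = np.arange(-radius, radius + 1)
    w_space = np.exp(-0.5 * (offs / sigma_s) ** 2)
    out = np.empty_like(x, dtype=float)
    n = len(x)
    for i in range(n):
        idx = np.clip(i + offs, 0, n - 1)              # clamp at the borders
        w = w_space * np.exp(-0.5 * ((x[idx] - x[i]) / sigma_r) ** 2)
        out[i] = np.sum(w * x[idx]) / np.sum(w)
    return out
```

Because samples across a strong edge get near-zero similarity weight, the lesion boundary survives smoothing, which is what the subsequent graph-based segmentation relies on.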
Formation Flying With Decentralized Control in Libration Point Orbits
NASA Technical Reports Server (NTRS)
Folta, David; Carpenter, J. Russell; Wagner, Christoph
2000-01-01
A decentralized control framework is investigated for applicability of formation flying control in libration orbits. The decentralized approach, being non-hierarchical, processes only direct measurement data, in parallel with the other spacecraft. Control is accomplished via linearization about a reference libration orbit with standard control using a Linear Quadratic Regulator (LQR) or the GSFC control algorithm. Both are linearized about the current state estimate as with the extended Kalman filter. Based on this preliminary work, the decentralized approach appears to be feasible for upcoming libration missions using distributed spacecraft.
Parreira, José Gustavo; de Campos, Tércio; Perlingeiro, Jacqueline A Gianinni; Soldá, Silvia C; Assef, José Cesar; Gonçalves, Augusto Canton; Zuffo, Bruno Malteze; Floriano, Caio Gomes; de Oliveira, Erik Haruk; de Oliveira, Renato Vieira Rodrigues; Oliveira, Amanda Lima; de Melo, Caio Gullo; Below, Cristiano; Miranda, Dino R Pérez; Santos, Gabriella Colasuonno; de Almeida, Gabriele Madeira; Brianti, Isabela Campos; Votto, Karina Baruel de Camargo; Schues, Patrick Alexander Sauer; dos Santos, Rafael Gomes; de Figueredo, Sérgio Mazzola Poli; de Araujo, Tatiani Gonçalves; Santos, Bruna do Nascimento; Ferreira, Laura Cardoso Manduca; Tanaka, Giuliana Olivi; Matos, Thiara; da Sousa, Maria Daiana; Augusto, Samara de Souza
2015-01-01
To analyze the implementation of a trauma registry in a university teaching hospital delivering care under the unified health system (SUS), and its ability to identify points for improvement in the quality of care provided. The data collection group comprised students from medicine and nursing courses, with or without FAPESP scholarships (technical training 1), overseen by the coordinators of the project. The itreg (ECO Sistemas-RJ/SBAIT) software was used as the database tool. Several quality "filters" were proposed to select cases for review in the quality control process. Data for 1344 trauma patients were entered into the itreg database between March and November 2014. Around 87.0% of cases were blunt trauma patients, 59.6% had RTS > 7.0 and 67% had ISS < 9. Full records were available for 292 cases, which were selected for review in the quality program. The audit filters most frequently registered were laparotomy four hours after admission and drainage of acute subdural hematomas four hours after admission. Several points for improvement were flagged, such as control of overtriage of patients, the need to reduce the number of negative imaging exams, the development of protocols for achieving central venous access, and management of major TBI. The trauma registry provides a clear picture of the points to be improved in trauma patient care; however, there are specific peculiarities to implementing this tool in the Brazilian milieu.
Steinmeyer, P.A.
1992-11-24
A radiation filter for filtering radiation beams of wavelengths within a preselected range of wavelengths comprises a radiation transmissive substrate and an attenuating layer deposited on the substrate. The attenuating layer may be deposited by a sputtering process or a vacuum process. Beryllium may be used as the radiation transmissive substrate. In addition, a second radiation filter comprises an attenuating layer interposed between a pair of radiation transmissive layers. 4 figs.
NASA Technical Reports Server (NTRS)
Armstrong, Jeffrey B.; Simon, Donald L.
2012-01-01
Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulations.
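As a rough sketch of the gain-scheduling idea described in this abstract (not the paper's actual implementation), the trim vectors and steady-state Kalman gains can be tabulated per operating point and interpolated element-wise at run time; the operating points and table values below are invented for illustration:

```python
import numpy as np

# Hypothetical 1-D schedule: trim vectors and steady-state Kalman gains
# pre-computed at three engine operating points (e.g. normalized fan speed).
op_points = np.array([0.0, 0.5, 1.0])
trim_table = np.array([[1.0, 2.0], [1.5, 2.5], [2.0, 3.0]])  # trim vectors
gain_table = np.array([[[0.10]], [[0.20]], [[0.30]]])        # 1x1 Kalman gains

def scheduled(op, table):
    """Element-wise linear interpolation of a scheduled table at operating point op."""
    flat = table.reshape(len(op_points), -1)
    out = np.array([np.interp(op, op_points, flat[:, j])
                    for j in range(flat.shape[1])])
    return out.reshape(table.shape[1:])

# One filter update at an off-grid operating point: both the trim vector
# and the gain come from the same shared schedule.
op = 0.25
x_trim = scheduled(op, trim_table)          # interpolated trim vector
K = scheduled(op, gain_table)               # interpolated Kalman gain
y_meas, y_pred = np.array([5.1]), np.array([5.0])
dx = K @ (y_meas - y_pred)                  # Kalman state correction
```

Sharing one interpolation weight computation across all scheduled quantities is what makes the multidimensional lookup cheap in practice.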
The application of digital signal processing techniques to a teleoperator radar system
NASA Technical Reports Server (NTRS)
Pujol, A.
1982-01-01
A digital signal processing system was studied for determining the spectral frequency distribution of echo signals from a teleoperator radar system. The system consisted of a sample-and-hold circuit, an analog-to-digital converter, a digital filter, and a fast Fourier transform stage, interfaced to a 16-bit microprocessor programmed to control the complete digital signal processing chain. The digital filtering and fast Fourier transform functions are implemented by an S2815 digital filter/utility peripheral chip and an S2814A fast Fourier transform chip. The S2815 initially simulates a low-pass Butterworth filter, with later expansion to synthesize complete filter circuits (bandpass and high-pass).
Efficiency analysis of color image filtering
NASA Astrophysics Data System (ADS)
Fevralev, Dmitriy V.; Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Abramov, Sergey K.; Egiazarian, Karen O.; Astola, Jaakko T.
2011-12-01
This article addresses the conditions under which filtering can visibly improve image quality. The key points are the following. First, we analyze filtering efficiency for 25 test images from the color image database TID2008. This database allows assessing filter efficiency for images corrupted by different noise types at several levels of noise variance. Second, the limit of filtering efficiency is determined for independent and identically distributed (i.i.d.) additive noise and compared to the output mean square error of state-of-the-art filters. Third, component-wise and vector denoising are studied, where the latter approach is demonstrated to be more efficient. Fourth, using modern visual quality metrics, we determine for which levels of i.i.d. and spatially correlated noise the noise in original images, or the residual noise and distortions due to filtering in output images, are practically invisible. We also demonstrate that it is possible to roughly estimate whether or not the visual quality can clearly be improved by filtering.
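The efficiency comparisons above rest on measuring output mean square error against a noise-free reference. A minimal sketch with a synthetic image and an i.i.d. additive-noise model, where a plain 3×3 mean filter stands in for a real denoiser (all values invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))   # smooth synthetic image
noisy = clean + rng.normal(0, 20, clean.shape)      # i.i.d. additive noise, var = 400

def box_filter3(img):
    """3x3 mean filter with edge replication (a minimal stand-in denoiser)."""
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

mse_in = np.mean((noisy - clean) ** 2)              # input MSE ~ noise variance
mse_out = np.mean((box_filter3(noisy) - clean) ** 2)  # output MSE after filtering
```

On such a smooth image, averaging nine nearly independent noise samples cuts the noise variance by roughly a factor of nine, which is the kind of gap between input and output MSE that the article quantifies against theoretical limits.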
NASA Astrophysics Data System (ADS)
Buzzicotti, M.; Linkmann, M.; Aluie, H.; Biferale, L.; Brasseur, J.; Meneveau, C.
2018-02-01
The effects of different filtering strategies on the statistical properties of the resolved-to-subfilter-scale (SFS) energy transfer are analysed in forced homogeneous and isotropic turbulence. We carry out a priori analyses of the statistical characteristics of SFS energy transfer by filtering data obtained from direct numerical simulations with up to 2048³ grid points, as a function of the filter cutoff scale. In order to quantify the dependence of extreme events and anomalous scaling on the filter, we compare a sharp Fourier Galerkin projector, a Gaussian filter, and a novel class of Galerkin projectors with non-sharp spectral filter profiles. Of particular interest is the importance of Galilean invariance, and we confirm that local SFS energy transfer displays intermittency scaling in both skewness and flatness as a function of the cutoff scale. Furthermore, we quantify the robustness of this scaling as a function of the filtering type.
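The contrast between a sharp Galerkin projector and a Gaussian filter can be sketched in one dimension: both are diagonal in Fourier space, the projector zeroing modes beyond the cutoff while the Gaussian attenuates smoothly. The signal and cutoff below are invented for illustration, not the paper's data:

```python
import numpy as np

N = 256
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
rng = np.random.default_rng(1)
# Synthetic multi-scale 1-D signal with a decaying spectrum
u = sum(np.cos(k * x + rng.uniform(0, 2 * np.pi)) / k for k in range(1, 65))

k = np.fft.fftfreq(N, d=1.0 / N)     # integer wavenumbers
kc = 16                               # filter cutoff scale

uhat = np.fft.fft(u)
sharp = np.real(np.fft.ifft(np.where(np.abs(k) <= kc, uhat, 0)))  # Galerkin projector
gauss = np.real(np.fft.ifft(uhat * np.exp(-(k / kc) ** 2)))       # smooth Gaussian profile

energy = lambda v: np.mean(v ** 2)
sfs_sharp = energy(u) - energy(sharp)   # energy removed to subfilter scales
sfs_gauss = energy(u) - energy(gauss)
```

In an a priori analysis one would evaluate the SFS energy transfer from such filtered fields at a sweep of cutoff scales and compare its statistics across filter types.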
NASA Technical Reports Server (NTRS)
Hartman, Brian Davis
1995-01-01
A key drawback to estimating geodetic and geodynamic parameters over time based on satellite laser ranging (SLR) observations is the inability to accurately model all the forces acting on the satellite. Errors associated with the observations and the measurement model can detract from the estimates as well. These 'model errors' corrupt the solutions obtained from the satellite orbit determination process. Dynamical models for satellite motion utilize known geophysical parameters to mathematically detail the forces acting on the satellite. However, these parameters, while estimated as constants, vary over time. These temporal variations must be accounted for in some fashion to maintain meaningful solutions. The primary goal of this study is to analyze the feasibility of using a sequential process noise filter for estimating geodynamic parameters over time from the Laser Geodynamics Satellite (LAGEOS) SLR data. This evaluation is achieved by first simulating a sequence of realistic LAGEOS laser ranging observations. These observations are generated using models with known temporal variations in several geodynamic parameters (along track drag and the J(sub 2), J(sub 3), J(sub 4), and J(sub 5) geopotential coefficients). A standard (non-stochastic) filter and a stochastic process noise filter are then utilized to estimate the model parameters from the simulated observations. The standard non-stochastic filter estimates these parameters as constants over consecutive fixed time intervals. Thus, the resulting solutions contain constant estimates of parameters that vary in time which limits the temporal resolution and accuracy of the solution. The stochastic process noise filter estimates these parameters as correlated process noise variables. As a result, the stochastic process noise filter has the potential to estimate the temporal variations more accurately since the constraint of estimating the parameters as constants is eliminated. 
A comparison of the temporal resolution of solutions obtained from standard sequential filtering methods and process noise sequential filtering methods shows that the accuracy is significantly improved using process noise. The results show that the positional accuracy of the orbit is improved as well. The temporal resolution of the resulting solutions is detailed, and conclusions are drawn about the results. Benefits and drawbacks of using process noise filtering in this type of scenario are also identified.
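The core contrast in this study, estimating a time-varying parameter as a constant versus as a process-noise (random-walk) state, can be sketched with a scalar Kalman filter; the noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
steps, q, r = 500, 1e-2, 0.1                         # process / measurement variances
truth = np.cumsum(rng.normal(0, np.sqrt(q), steps))  # slowly drifting parameter
obs = truth + rng.normal(0, np.sqrt(r), steps)       # noisy observations

def scalar_kf(z, Q, R=r):
    """Scalar random-walk Kalman filter; Q = 0 treats the state as a constant."""
    x, P, est = 0.0, 1.0, []
    for zk in z:
        P += Q                    # predict: state modelled as a random walk
        K = P / (P + R)           # Kalman gain
        x += K * (zk - x)         # measurement update
        P *= (1.0 - K)
        est.append(x)
    return np.array(est)

mse_const = np.mean((scalar_kf(obs, 0.0) - truth) ** 2)   # "constant" estimate
mse_pnoise = np.mean((scalar_kf(obs, q) - truth) ** 2)    # process-noise estimate
```

With Q = 0 the gain decays to zero and the filter freezes on an average of the early data while the true parameter drifts away, which is exactly the temporal-resolution limitation the process-noise filter removes.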
NASA Astrophysics Data System (ADS)
Shao, Rongjun; Qiu, Lirong; Yang, Jiamiao; Zhao, Weiqian; Zhang, Xin
2013-12-01
We have proposed a component-parameter measuring method based on differential confocal focusing theory. In order to improve the positioning precision of the laser differential confocal component parameters measurement system (LDDCPMS), this paper provides a data processing method based on tracking the light spot. To reduce the error caused by movement of the light spot while collecting the axial intensity signal, an image centroiding algorithm is used to find and track the center of the Airy disk in the images collected by the laser differential confocal system. To weaken the influence of higher-harmonic noise during the measurement, a Gaussian filter is used to process the axial intensity signal. Finally, the zero point corresponding to the focus of the objective in the differential confocal system is obtained by linear fitting of the differential confocal axial intensity data. Preliminary experiments indicate that the method based on tracking the light spot can accurately collect the axial intensity response signal of the virtual pinhole and improve the anti-interference ability of the system, thus improving the system positioning accuracy.
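The last two processing steps, Gaussian smoothing of the axial intensity followed by a linear fit to locate the differential zero crossing, can be sketched as follows. The tanh-shaped differential response and all numbers are assumptions for illustration only:

```python
import numpy as np

# Synthetic differential confocal axial response: near focus the
# differential intensity crosses zero almost linearly in axial position z.
z = np.linspace(-2.0, 2.0, 401)            # axial position, arbitrary units
true_focus = 0.137                          # assumed focus location
rng = np.random.default_rng(3)
signal = np.tanh(1.5 * (z - true_focus)) + rng.normal(0, 0.02, z.size)

# Gaussian filtering to suppress higher-harmonic noise
g = np.exp(-0.5 * (np.arange(-15, 16) / 5.0) ** 2)
g /= g.sum()
smooth = np.convolve(signal, g, mode='same')

# Linear fit over the near-linear region around the zero crossing
mask = np.abs(smooth) < 0.5
a, b = np.polyfit(z[mask], smooth[mask], 1)
focus_est = -b / a                          # zero crossing = focus position
```

Because the differential response is odd about the focus, smoothing does not shift the zero crossing, so the fit recovers the focus to well within the sampling step.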
NASA Astrophysics Data System (ADS)
Gonzalez, Pablo J.
2017-04-01
Automatic interferometric processing of satellite radar data has emerged as a solution to the increasing amount of acquired SAR data. Automatic SAR and InSAR processing ranges from focusing raw echoes to the computation of displacement time series using large stacks of co-registered radar images. However, this type of interferometric processing demands pre-described or adaptive selection of multiple processing parameters. One of the interferometric processing steps that most strongly influences the final results (displacement maps) is the interferometric phase filtering. There are a large number of phase filtering methods; however, the so-called Goldstein filtering method is the most popular [Goldstein and Werner, 1998; Baran et al., 2003]. The Goldstein filter requires essentially two parameters: the size of the filter window and a parameter that sets the filter smoothing intensity. The modified Goldstein method removes the need to select the smoothing parameter, basing it on the local interferometric coherence level, but still requires specifying the dimension of the filtering window. Optimal filtered-phase quality usually requires careful selection of those parameters. There is therefore a strong need to develop automatic filtering methods suitable for automatic processing while maximizing filtered-phase quality. In this paper, I present a recursive adaptive phase filtering algorithm for accurate estimation of differential interferometric ground deformation and local coherence measurements. The proposed filter is based upon the modified Goldstein filter [Baran et al., 2003]. This filtering method improves the quality of the interferograms by performing a recursive iteration using variable (cascade) kernel sizes, and improves the coherence estimation by locally defringing the interferometric phase. The method has been tested using simulations and real cases relevant to the characteristics of the Sentinel-1 mission.
Here, I present real examples from C-band interferograms showing strong and weak deformation gradients, with moderate baselines (100-200 m) and variable temporal baselines of 70 and 190 days over variably vegetated volcanoes (Mt. Etna, Hawaii, and Nyiragongo-Nyamulagira). The differential phase of those examples shows intense localized volcano deformation and also vast areas of small differential phase variation. The proposed method outperforms the classical Goldstein and modified Goldstein filters by preserving subtle phase variations where the deformation fringe rate is high, and effectively suppressing phase noise in regions of smooth phase variation. Finally, this method has the additional advantage of not requiring input parameters, except for the maximum filtering kernel size. References: Baran, I., Stewart, M. P., Kampes, B. M., Perski, Z., Lilly, P. (2003) A modification to the Goldstein radar interferogram filter. IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 9, doi:10.1109/TGRS.2003.817212. Goldstein, R. M., Werner, C. L. (1998) Radar interferogram filtering for geophysical applications. Geophysical Research Letters, vol. 25, no. 21, 4035-4038, doi:10.1029/1998GL900033.
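The Goldstein-style building block underlying these methods is simple: transform a complex interferogram patch to the spectral domain and re-weight it by its own smoothed spectral magnitude raised to a power alpha. A minimal single-patch sketch (patch size, fringe rates, noise level, and alpha are all invented for illustration):

```python
import numpy as np

def goldstein_patch(phase, alpha=0.8):
    """Goldstein-style filtering of one interferogram patch:
    weight the patch spectrum by its smoothed magnitude ** alpha."""
    z = np.exp(1j * phase)                     # wrapped phase -> complex signal
    Z = np.fft.fft2(z)
    mag = np.abs(Z)
    # 3x3 circular smoothing of the (periodic) spectrum magnitude
    S = sum(np.roll(np.roll(mag, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
    Zf = Z * (S / S.max()) ** alpha            # emphasize the dominant fringe peak
    return np.angle(np.fft.ifft2(Zf))          # back to wrapped phase

rng = np.random.default_rng(4)
n = 32
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
fringes = 2 * np.pi * (2 * i + 4 * j) / n      # 2 and 4 fringe cycles across the patch
noisy = np.angle(np.exp(1j * (fringes + rng.normal(0, 0.8, (n, n)))))
filt = goldstein_patch(noisy)

# Residual wrapped-phase error with respect to the true fringes
err = lambda p: np.std(np.angle(np.exp(1j * (p - fringes))))
```

The recursive adaptive variant described above would repeat such filtering over a cascade of kernel sizes and use local coherence to steer it; this sketch shows only the single fixed-size pass.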
NASA Astrophysics Data System (ADS)
Lundberg, Oskar E.; Nordborg, Anders; Lopez Arteaga, Ines
2016-03-01
A state-dependent contact model including nonlinear contact stiffness and nonlinear contact filtering is used to calculate contact forces and rail vibrations with a time-domain wheel-track interaction model. In the proposed method, the full three-dimensional contact geometry is reduced to a point contact in order to lower the computational cost and to reduce the amount of required input roughness-data. Green's functions including the linear dynamics of the wheel and the track are coupled with a point contact model, leading to a numerically efficient model for the wheel-track interaction. Nonlinear effects due to the shape and roughness of the wheel and the rail surfaces are included in the point contact model by pre-calculation of functions for the contact stiffness and contact filters. Numerical results are compared to field measurements of rail vibrations for passenger trains running at 200 kph on a ballast track. Moreover, the influence of vehicle pre-load and different degrees of roughness excitation on the resulting wheel-track interaction is studied by means of numerical predictions.
New Day for Longest-Working Mars Rover
2018-02-16
NASA's Mars Exploration Rover Opportunity recorded the dawn of the rover's 4,999th Martian day, or sol, with its Panoramic Camera (Pancam) on Feb. 15, 2018, yielding this processed, approximately true-color scene. The view looks across Endeavour Crater, which is about 14 miles (22 kilometers) in diameter, from the inner slope of the crater's western rim. Opportunity has driven a little over 28.02 miles (45.1 kilometers) since it landed in the Meridiani Planum region of Mars in January 2004, for what was planned as a 90-sol mission. A sol lasts about 40 minutes longer than an Earth day. This view combines three separate Pancam exposures taken through filters centered on wavelengths of 601 nanometers (red), 535 nanometers (green), and 482 nanometers (blue). It was processed at Texas A&M University to correct for some of the oversaturation and glare, though it still includes some artifacts from pointing a camera with a dusty lens at the Sun. The processing includes radiometric correction, interpolation to fill in gaps in the data caused by saturation due to the Sun's brightness, and warping the red and blue images to undo the effects of time passing between each of the exposures through different filters. https://photojournal.jpl.nasa.gov/catalog/PIA22221
GPS Data Filtration Method for Drive Cycle Analysis Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duran, A.; Earleywine, M.
2013-02-01
When employing GPS data acquisition systems to capture vehicle drive-cycle information, a number of errors often appear in the raw data samples, such as sudden signal loss, extraneous or outlying data points, speed drifting, and signal white noise, all of which limit the quality of field data for use in downstream applications. Unaddressed, these errors significantly impact the reliability of source data and limit the effectiveness of traditional drive-cycle analysis approaches and vehicle simulation software. Without reliable speed and time information, the validity of derived metrics for drive cycles, such as acceleration, power, and distance, becomes questionable. This study explores some of the common sources of error present in raw onboard GPS data and presents a detailed filtering process designed to correct for these issues. Test data from both light- and medium/heavy-duty applications are examined to illustrate the effectiveness of the proposed filtration process across the range of vehicle vocations. Graphical comparisons of raw and filtered cycles are presented, and statistical analyses are performed to determine the effects of the proposed filtration process on raw data. Finally, an evaluation of the overall benefits of filtering raw GPS data is presented, along with potential areas for continued research.
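A simplified version of such a pipeline (not the report's exact process) on a 1 Hz speed trace: flag physically implausible samples as missing, interpolate across dropouts, then smooth the white noise. The trace, thresholds, and defect positions below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(120.0)                       # 1 Hz samples
speed = np.minimum(1.0 * t, 20.0)          # accelerate, then cruise [m/s]
raw = speed + rng.normal(0, 0.3, t.size)   # GPS white noise
raw[[30, 31, 80]] = [90.0, -5.0, 70.0]     # outlying data points
raw[50:55] = np.nan                        # sudden signal loss

def filter_gps(v, v_max=60.0, win=5):
    """Drop implausible samples, fill gaps by interpolation, then smooth."""
    v = v.copy()
    v[(v < 0) | (v > v_max)] = np.nan                  # outliers -> missing
    ok = ~np.isnan(v)
    v = np.interp(np.arange(v.size), np.flatnonzero(ok), v[ok])  # fill dropouts
    p = np.pad(v, win // 2, mode='edge')               # edge-pad to avoid end bias
    return np.convolve(p, np.ones(win) / win, mode='valid')      # moving average

clean = filter_gps(raw)
rmse_raw = np.sqrt(np.nanmean((raw - speed) ** 2))
rmse_clean = np.sqrt(np.mean((clean - speed) ** 2))
```

Derived metrics such as acceleration and distance are far better behaved once the spikes and gaps are removed, since a single 70 m/s outlier otherwise implies an impossible acceleration.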
NASA Technical Reports Server (NTRS)
Brooks, R. L. (Inventor)
1979-01-01
A multipoint fluid sample collection and distribution system is provided wherein the sample inputs are made through one or more of a number of sampling valves to a progressive cavity pump which is not susceptible to damage by large unfiltered particles. The pump output is through a filter unit that can provide a filtered multipoint sample. An unfiltered multipoint sample is also provided. An effluent sample can be taken and applied to a second progressive cavity pump for pumping to a filter unit that can provide one or more filtered effluent samples. The second pump can also provide an unfiltered effluent sample. Means are provided to periodically back flush each filter unit without shutting off the whole system.
A Student’s t Mixture Probability Hypothesis Density Filter for Multi-Target Tracking with Outliers
Liu, Zhuowei; Chen, Shuxin; Wu, Hao; He, Renke; Hao, Lin
2018-01-01
In multi-target tracking, process and measurement noises corrupted by outliers can severely reduce the performance of the probability hypothesis density (PHD) filter. To solve this problem, this paper proposes a novel PHD filter, called the Student's t mixture PHD (STM-PHD) filter. The proposed filter models the heavy-tailed process noise and measurement noise as Student's t distributions and approximates the multi-target intensity as a mixture of Student's t components to be propagated in time. A closed-form PHD recursion is then obtained based on the Student's t approximation. Our approach makes full use of the heavy-tailed characteristic of the Student's t distribution to handle situations with heavy-tailed process and measurement noises. The simulation results verify that the proposed filter can overcome the negative effect of outliers and maintain good tracking accuracy in the simultaneous presence of process and measurement outliers. PMID:29617348
A Nonlinear Adaptive Filter for Gyro Thermal Bias Error Cancellation
NASA Technical Reports Server (NTRS)
Galante, Joseph M.; Sanner, Robert M.
2012-01-01
Deterministic errors in angular rate gyros, such as thermal biases, can have a significant impact on spacecraft attitude knowledge. In particular, thermal biases are often the dominant error source in MEMS gyros after calibration. Filters, such as EKFs, are commonly used to mitigate the impact of gyro errors and gyro noise on spacecraft closed-loop pointing accuracy, but often have difficulty in rapidly changing thermal environments and can be computationally expensive. In this report an existing nonlinear adaptive filter is used as the basis for a new nonlinear adaptive filter designed to estimate and cancel thermal bias effects. A description of the filter is presented along with an implementation suitable for discrete-time applications. A simulation analysis demonstrates the performance of the filter in the presence of noisy measurements and provides a comparison with existing techniques.
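The flavor of adaptive bias cancellation can be sketched with a single-axis toy loop (this is a generic adaptive law, not the report's filter): the gyro measures the true rate plus a bias, an attitude reference (e.g. a star tracker, assumed noise-free here) provides the angle, and the attitude innovation drives both an attitude correction and an integral update of the bias estimate:

```python
import numpy as np

dt, gamma = 0.1, 0.5          # step size and adaptation gain (illustrative values)
rng = np.random.default_rng(6)
b_true = 0.02                  # thermal bias [rad/s], held constant in this sketch
b_hat, theta_hat, theta = 0.0, 0.0, 0.0

for k in range(2000):
    w_true = 0.01 * np.sin(0.01 * k)                 # true body rate
    w_meas = w_true + b_true + rng.normal(0, 0.001)  # biased, noisy gyro
    theta += w_true * dt                             # true attitude (reference)
    theta_hat += (w_meas - b_hat) * dt               # propagate attitude estimate
    e = theta_hat - theta                            # attitude innovation
    b_hat += gamma * e * dt                          # adaptive bias update
    theta_hat -= 0.5 * e                             # attitude correction
```

The innovation integrates the residual bias over time, so the adaptive law converges to the true bias and the corrected rate (w_meas - b_hat) becomes unbiased; a thermal model would make b_true a function of temperature rather than a constant.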
Enhanced microlithography using coated objectives and image duplication
NASA Astrophysics Data System (ADS)
Erdelyi, Miklos; Bor, Zsolt; Szabo, Gabor; Tittel, Frank K.
1998-06-01
Two processes were investigated theoretically using both a scalar wave optics model and a microlithography simulation tool (Solid-C). The first method introduces a phase- transmission filter into the exit pupil plane. The results of both the scalar optics calculation (aerial image) and the Solid-C simulation (resist image) show that the final image profile is optimum, when the exit pupil plane filter is divided into two zones with the inner zone having a phase retardation of (pi) rad with respect to the outer one and the ratio of the radii of the zones is 0.3. Using this optimized filter for the fabrication of isolated contact holes, the focus-exposure process window increases significantly, and the depth of focus (DOF) can be enhanced by a factor of 1.5 to 2. The second technique enhances the DOF of the aerial image by means of a birefringent plate inserted between the projection lens and the wafer. As the shift in focus introduced by the plate strongly depends on the refractive index, two focal points will appear when using a birefringent plate instead of an isotropic plate: the first one is created by the ordinary, and the second one is created by the extraordinary ray. The distance between these images can be controlled by the thickness of the plate. The results of the calculations show that application of a thin but strongly birefringent material is a better candidate than using a slightly birefringent but thick plate, since aberrations proportional to the thickness can cause undesirable effects.
Nonlinear Attitude Filtering Methods
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Crassidis, John L.; Cheng, Yang
2005-01-01
This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST, the super-iterated extended Kalman filter, the interlaced extended Kalman filter, and the second-order Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A two-step approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, including particle filters and a Bayesian filter based on a non-Gaussian, finite-parameter probability density function on SO(3). Finally, the predictive filter, nonlinear observers, and adaptive approaches are surveyed. The strengths and weaknesses of the various approaches are discussed.
McFarland, Dennis J; Krusienski, Dean J; Wolpaw, Jonathan R
2006-01-01
The Wadsworth brain-computer interface (BCI), based on mu and beta sensorimotor rhythms, uses one- and two-dimensional cursor movement tasks and relies on user training. This is a real-time closed-loop system. Signal processing consists of channel selection, spatial filtering, and spectral analysis. Feature translation uses a regression approach and normalization. Adaptation occurs at several points in this process on the basis of different criteria and methods. It can use either feedforward (e.g., estimating the signal mean for normalization) or feedback control (e.g., estimating feature weights for the prediction equation). We view this process as the interaction between a dynamic user and a dynamic system that coadapt over time. Understanding the dynamics of this interaction and optimizing its performance represent a major challenge for BCI research.
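The feedback-controlled part of this adaptation, adjusting the feature weights of the translation equation from the error between intended and produced cursor movement, can be sketched as an LMS-style regression update. The two-feature setup, weights, and adaptation rate below are simulated assumptions, not the Wadsworth system's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(9)
w_true = np.array([0.8, -0.4])   # "ideal" weights for two band-power features
w = np.zeros(2)                  # weights the system adapts online
mu = 0.05                        # adaptation rate

for _ in range(3000):
    f = rng.normal(size=2)       # simulated sensorimotor-rhythm features
    intended = w_true @ f        # target cursor velocity (user intent)
    produced = w @ f             # cursor velocity from current weights
    w += mu * (intended - produced) * f   # feedback (LMS) weight update
```

A feedforward adaptation, by contrast, would update quantities like the running feature mean used for normalization without reference to an error signal; both run concurrently as the user and system co-adapt.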
Application of optical broadband monitoring to quasi-rugate filters by ion-beam sputtering
NASA Astrophysics Data System (ADS)
Lappschies, Marc; Görtz, Björn; Ristau, Detlev
2006-03-01
Methods for the manufacture of rugate filters by the ion-beam-sputtering process are presented. The first approach gives an example of a digitized version of a continuous-layer notch filter. This method allows the comparison of the basic theory of interference coatings containing thin layers with practical results. For the other methods, a movable zone target is employed to fabricate graded and gradual rugate filters. The examples demonstrate the potential of broadband optical monitoring in conjunction with the ion-beam-sputtering process. First characterization results indicate that these types of filter may exhibit higher laser-induced damage-threshold values than those of classical filters.
Linear-phase delay filters for ultra-low-power signal processing in neural recording implants.
Gosselin, Benoit; Sawan, Mohamad; Kerherve, Eric
2010-06-01
We present the design and implementation of linear-phase delay filters for ultra-low-power signal processing in neural recording implants. We use these filters as low-distortion delay elements along with an automatic biopotential detector to perform integral waveform extraction and efficient power management. The presented delay elements are realized with continuous-time OTA-C filters featuring 9th-order equiripple transfer functions with constant group delay. Such an analog delay enables processing neural waveforms with reduced overhead compared to a digital delay, since it does not require sampling and digitization. It uses an allpass transfer function to achieve a wider constant-delay bandwidth than an all-pole function does. Two filter realizations are compared for implementing the delay element: the cascaded structure and the inverse follow-the-leader feedback filter. Their respective strengths and drawbacks are assessed by modeling parasitics and non-idealities of the OTAs, and by transistor-level simulations. A budget of 200 nA is used in both filters. Experimental measurements with the chosen filter topology are presented and discussed.
Impact of axial velocity and transmembrane pressure (TMP) on ARP filter performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poirier, M.; Burket, P.
2016-02-29
The Savannah River Site (SRS) is currently treating radioactive liquid waste with the Actinide Removal Process (ARP) and the Modular Caustic Side Solvent Extraction Unit (MCU). Recently, the low filter flux through the ARP of approximately 5 gallons per minute has limited the rate at which radioactive liquid waste can be treated. Salt Batch 6 had a lower processing rate and required frequent filter cleaning. Savannah River Remediation (SRR) has a desire to understand the causes of the low filter flux and to increase ARP/MCU throughput. One potential method for increasing filter flux is to adjust the axial velocity and transmembrane pressure (TMP). SRR requested SRNL to conduct bench-scale filter tests to evaluate the effects of axial velocity and transmembrane pressure on crossflow filter flux. The objective of the testing was to determine whether increasing the axial velocity at the ARP could produce a significant increase in filter flux. The authors conducted the tests by preparing slurries containing 6.6 M sodium Salt Batch 6 supernate and 2.5 g MST/L, processing the slurry through a bench-scale crossflow filter unit at varying axial velocity and TMP, and measuring filter flux as a function of time.
Kakakhel, M B; Jirasek, A; Johnston, H; Kairn, T; Trapp, J V
2017-03-01
This study evaluated the feasibility of combining the 'zero-scan' (ZS) X-ray computed tomography (CT) based polymer gel dosimeter (PGD) readout with adaptive mean (AM) filtering for improving the signal-to-noise ratio (SNR), and compared these results with the available average scan (AS) X-ray CT readout techniques. NIPAM PGDs were manufactured, irradiated with 6 MV photons, CT imaged, and processed in Matlab. An AM filter with two iterations and kernel sizes of 3 × 3 and 5 × 5 pixels was used in two scenarios: (a) the CT images were subjected to AM filtering (pre-processing) and then employed to generate AS and ZS gel images, and (b) the AS and ZS images were first reconstructed from the CT images and AM filtering was then carried out (post-processing). SNR was computed in an ROI of 30 × 30 pixels for the different pre- and post-processing cases. Results showed that the ZS technique combined with AM filtering improved SNR. Using the previously recommended 25 images for reconstruction, the ZS pre-processed protocol can give an increase of 44% and 80% in SNR for 3 × 3 and 5 × 5 kernel sizes, respectively. However, post-processing with both techniques and filter sizes introduced blur and a reduction in spatial resolution. Based on this work, it is possible to recommend that the ZS method be combined with pre-processed AM filtering using an appropriate kernel size to produce a large increase in the SNR of the reconstructed PGD images.
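For reference, a classic adaptive local noise-reduction (adaptive mean) filter pulls each pixel toward its local mean in flat regions while leaving high-variance detail mostly untouched; this generic form (not necessarily the paper's exact variant) is sketched below with an invented test image and noise level:

```python
import numpy as np

def adaptive_mean(img, noise_var, k=3):
    """Adaptive local noise-reduction filter: pixels in flat regions move
    toward the local mean; pixels in high-variance (detail) regions are kept."""
    r = k // 2
    p = np.pad(img, r, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    local_mean = win.mean(axis=(-1, -2))
    local_var = win.var(axis=(-1, -2))
    ratio = np.clip(noise_var / np.maximum(local_var, 1e-12), 0, 1)
    return img - ratio * (img - local_mean)

rng = np.random.default_rng(7)
clean = np.zeros((40, 40))
clean[:, 20:] = 100.0                              # a step edge (dose boundary)
noisy = clean + rng.normal(0, 5, clean.shape)      # CT-like additive noise
out = adaptive_mean(noisy, noise_var=25.0)

snr = lambda im: 10 * np.log10(np.mean(clean ** 2) / np.mean((im - clean) ** 2))
```

The variance-driven blending is what lets pre-processed AM filtering raise SNR without the wholesale blurring that a plain mean filter of the same kernel size would introduce at dose edges.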
38. SAND FILTER AT LEFT AND CHLORINATOR AT RIGHT, DOWN LINE FROM THE RESERVOIR, IMPROVED WATER QUALITY. BOTH LOCATED AT 275' ALTITUDE. FROM THIS POINT THE LINES BRANCH INTO KALAUPAPA SETTLEMENT TO SUPPLY RESIDENCES AND OTHER BUILDINGS. - Kalaupapa Water Supply System, Waikolu Valley to Kalaupapa Settlement, Island of Molokai, Kalaupapa, Kalawao County, HI
A Case Study of Editorial Filters in Folktales: A Discussion of the "Allerleirauh" Tales in Grimm.
ERIC Educational Resources Information Center
Dollerup, Cay; And Others
1986-01-01
This article discusses editorial "filters" in folktales, specifically the changes ("orientations") which editors deliberately impose on a tale because they want to reach a specific audience. A case in point is the tale called "Allerleirauh," in the Grimm collection, which not only is highly illustrative of editorial…
Navigation of a Satellite Cluster with Realistic Dynamics
1991-12-01
Contents and figure-list excerpts: 2.2.1 Dynamics (Clohessy-Wiltshire Equations); 2.2.2 Iterated, Extended Kalman Filter; Figure 4, Point mass and Clohessy-Wiltshire orbits (10 orbits); Figure 5, Real dynamics and Clohessy-Wiltshire orbits (10 orbits); Figure 8, Comparison of the Clohessy-Wiltshire and truth model solutions.
Removal of virus to protozoan sized particles in point-of-use ceramic water filters.
Bielefeldt, Angela R; Kowalski, Kate; Schilling, Cherylynn; Schreier, Simon; Kohler, Amanda; Scott Summers, R
2010-03-01
The particle removal performance of point-of-use ceramic water filters (CWFs) was characterized in the size range of 0.02-100 μm using carboxylate-coated polystyrene fluorescent microspheres, natural particles, and clay. Particles were spiked into dechlorinated tap water, and three successive water batches were treated in each of six different CWFs. Particle removal generally increased with increasing size. The removal of virus-sized 0.02 and 0.1 μm spheres was highly variable between the six filters, ranging from 63 to 99.6%. For the 0.5 μm spheres, removal was less variable and in the range of 95.1-99.6%, while for the 1, 2, 4.5, and 10 μm spheres removal was >99.6%. Recoating four of the CWFs with colloidal silver solution improved removal of the 0.02 μm spheres, but had no significant effect on the other particle sizes. Log removals of 1.8-3.2 were found for natural turbidity and spiked kaolin clay particles; however, particles as large as 95 μm were detected in filtered water. Copyright 2009 Elsevier Ltd. All rights reserved.
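The log-removal figures quoted above follow from the standard relation between influent and effluent concentrations; for instance, a removal of 99.6% corresponds to an LRV of about 2.4. The helper names below are illustrative:

```python
import math

def log_removal(c_in, c_out):
    """Log removal value: LRV = log10(influent concentration / effluent concentration)."""
    return math.log10(c_in / c_out)

def percent_removal(lrv):
    """Convert an LRV back to percent removal."""
    return 100.0 * (1.0 - 10.0 ** (-lrv))

# Example: 1e6 spheres/mL in, 4e3 spheres/mL out -> LRV ~ 2.4 (99.6% removal)
lrv = log_removal(1e6, 4e3)
```

This is why the 1.8-3.2 log removals reported for turbidity and clay correspond to roughly 98.4-99.94% removal.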
Airborne system for multispectral, multiangle polarimetric imaging.
Bowles, Jeffrey H; Korwan, Daniel R; Montes, Marcos J; Gray, Deric J; Gillis, David B; Lamela, Gia M; Miller, W David
2015-11-01
In this paper, we describe the design, fabrication, calibration, and deployment of an airborne multispectral polarimetric imager. The motivation for the development of this instrument was to explore its ability to provide information about water constituents, such as particle size and type. The instrument is based on four 16 MP cameras and uses wire grid polarizers (aligned at 0°, 45°, 90°, and 135°) to provide the separation of the polarization states. A five-position filter wheel provides for four narrow-band spectral filters (435, 550, 625, and 750 nm) and one blocked position for dark-level measurements. When flown, the instrument is mounted on a programmable stage that provides control of the view angles. View angles that range to ±65° from the nadir have been used. Data processing provides a measure of the polarimetric signature as a function of both the view zenith and view azimuth angles. As a validation of our initial results, we compare our measurements, over water, with the output of a Monte Carlo code, both of which show neutral points off the principal plane. The locations of the calculated and measured neutral points are compared. The random error in the measured degree of linear polarization (8% at 435 nm) is shown to be better than 0.25%.
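With analyzers at 0°, 45°, 90°, and 135°, the linear Stokes parameters, and from them the degree and angle of linear polarization, follow from the standard combinations of the four intensities. A sketch with a synthetic single pixel (the 30% polarization and 20° angle are invented inputs used to check the round trip):

```python
import numpy as np

def dolp_aolp(i0, i45, i90, i135):
    """Linear Stokes parameters from the four analyzer images (0, 45, 90,
    135 degrees) and the derived degree/angle of linear polarization."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)       # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / s0   # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)          # angle of linear polarization
    return dolp, aolp

# A pixel of 30%-polarized light at 20 degrees:
# Malus-type model I(theta) = (S0/2) * (1 + d * cos(2*(theta - a)))
d, a, S0 = 0.3, np.deg2rad(20), 2.0
angles = np.deg2rad([0, 45, 90, 135])
I = 0.5 * S0 * (1 + d * np.cos(2 * (angles - a)))
dolp, aolp = dolp_aolp(*I)
```

Applying this per pixel across the four co-registered camera images yields the polarimetric signature maps referred to above; the neutral points are the directions where the DoLP passes through zero.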
A PC-based magnetometer-only attitude and rate determination system for gyroless spacecraft
NASA Technical Reports Server (NTRS)
Challa, M.; Natanson, G.; Deutschmann, J.; Galal, K.
1995-01-01
This paper describes a prototype PC-based system that uses measurements from a three-axis magnetometer (TAM) to estimate the state (three-axis attitude and rates) of a spacecraft given no a priori information other than the mass properties. The system uses two algorithms that estimate the spacecraft's state - a deterministic magnetic-field only algorithm and a Kalman filter for gyroless spacecraft. The algorithms are combined by invoking the deterministic algorithm to generate the spacecraft state at epoch using a small batch of data and then using this deterministic epoch solution as the initial condition for the Kalman filter during the production run. System input comprises processed data that includes TAM and reference magnetic field data. Additional information, such as control system data and measurements from line-of-sight sensors, can be input to the system if available. Test results are presented using in-flight data from two three-axis stabilized spacecraft: Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) (gyroless, Sun-pointing) and Earth Radiation Budget Satellite (ERBS) (gyro-based, Earth-pointing). The results show that, using as little as 700 s of data, the system is capable of accuracies of 1.5 deg in attitude and 0.01 deg/s in rates; i.e., within SAMPEX mission requirements.
A Low Cost Structurally Optimized Design for Diverse Filter Types
Kazmi, Majida; Aziz, Arshad; Akhtar, Pervez; Ikram, Nassar
2016-01-01
A wide range of image processing applications deploy two-dimensional (2D) filters to perform diverse tasks such as image enhancement, edge detection, noise suppression, multiscale decomposition, and compression. All of these tasks require multiple types of 2D filters simultaneously to achieve the desired results. The resource-hungry conventional approach is not a viable option for implementing these computationally intensive 2D filters, especially in a resource-constrained environment, so optimized solutions are called for. Most filter optimizations are based on exploiting structural properties. A common shortcoming of all previously reported optimized approaches is that each applies only to a specific filter type. These narrow-scoped solutions disregard the versatility required by advanced image processing applications and in turn offset their effectiveness when a complete application is implemented. This paper presents an efficient framework that exploits the structural properties of 2D filters to effectively reduce their computational cost, with the added advantage of versatility in supporting diverse filter types. A composite symmetric filter structure is introduced that exploits the identities of quadrant and circular T-symmetries in two distinct filter regions simultaneously. These T-symmetries effectively reduce the number of filter coefficients and consequently the multiplier count. At the same time, the proposed framework empowers this composite filter structure to realize all of its Ψ-symmetry-based subtypes as well as the special case of asymmetric filters.
The two-fold optimized framework thus reduces filter computational cost by up to 75% compared with the conventional approach. Moreover, its versatility not only supports diverse filter types but also offers further cost reduction through resource sharing when diversified image processing applications are implemented sequentially, especially in a resource-constrained environment. PMID:27832133
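The quadrant-symmetry identity exploited above can be illustrated with a small sketch: storing only one quarter of a quadrant-symmetric kernel and folding symmetric input samples before multiplication cuts the unique coefficients (and hence multipliers) by roughly 75% for large kernels. The paper's composite circular/Ψ-symmetry machinery is not reproduced here; this is a generic illustration of the underlying idea:

```python
import numpy as np

def expand_quadrant(quarter):
    """Rebuild a full (2K+1)x(2K+1) quadrant-symmetric kernel from its
    stored quarter; only (K+1)^2 of (2K+1)^2 coefficients are unique."""
    top = np.concatenate([quarter[:, :0:-1], quarter], axis=1)  # mirror columns
    return np.concatenate([top[:0:-1, :], top], axis=0)         # mirror rows

def filter2d_folded(img, quarter):
    """Filter with a quadrant-symmetric kernel by folding symmetric
    taps first, so each unique coefficient multiplies a pre-summed
    group of up to four pixels (one multiply instead of four)."""
    K = quarter.shape[0] - 1
    H, W = img.shape
    pad = np.pad(img, K, mode='edge')
    out = np.zeros((H, W))
    for m in range(K + 1):
        for n in range(K + 1):
            # offsets that share coefficient (m, n); a set removes duplicates
            offs = {(m, n), (-m, n), (m, -n), (-m, -n)}
            group = sum(pad[K + dm:K + dm + H, K + dn:K + dn + W]
                        for dm, dn in offs)
            out += quarter[m, n] * group
    return out
```

The folded version produces exactly the same output as direct filtering with the full mirrored kernel, which is what makes the multiplier saving "free".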
High efficiency processing for reduced amplitude zones detection in the HRECG signal
NASA Astrophysics Data System (ADS)
Dugarte, N.; Álvarez, A.; Balacco, J.; Mercado, G.; Gonzalez, A.; Dugarte, E.; Olivares, A.
2016-04-01
This article presents part of a broader research program proposed for the medium to long term, with the intention of establishing a new philosophy of surface electrocardiogram analysis. This research aims to find indicators of cardiovascular disease at an early stage that may go unnoticed with conventional electrocardiography. This paper reports the development of processing software that collects some existing techniques and incorporates novel methods for the detection of reduced amplitude zones (RAZ) in the high-resolution electrocardiographic (HRECG) signal. The algorithm consists of three stages: efficient processing for QRS detection, an averaging filter using correlation techniques, and a stage for RAZ detection. Preliminary results show the efficiency of the system and point to the incorporation of new signal analysis techniques involving the 12 leads.
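Signal averaging with correlation gating, as in the second stage described above, can be sketched as follows (a generic illustration of the technique; the acceptance threshold and function names are assumptions, not the authors' implementation):

```python
import numpy as np

def correlation_averaged_beat(beats, template, r_min=0.98):
    """Correlation-gated signal averaging for high-resolution ECG:
    beats whose normalized correlation with a template beat exceeds
    r_min are averaged, attenuating uncorrelated noise by ~1/sqrt(N)
    while rejecting ectopic or badly aligned beats."""
    kept = [b for b in beats
            if np.corrcoef(b, template)[0, 1] > r_min]
    return np.mean(kept, axis=0), len(kept)
```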
NASA Astrophysics Data System (ADS)
Prarokijjak, Worasak; Soodchomshom, Bumned
2018-04-01
Spin-valley transport and magnetoresistance are investigated in a silicene-based N/TB/N/TB/N junction, where N and TB denote normal silicene and topological barriers. The topological phase transitions in the TBs are controlled by electric and exchange fields and by circularly polarized light. We find that, by applying electric and exchange fields, four groups of spin-valley currents are perfectly filtered, directly induced by topological phase transitions. Control of currents carried by single, double, and triple channels of spin-valley electrons in the silicene junction may be achievable by adjusting the magnitudes of the electric field, exchange field, and circularly polarized light. We identify that the key factor behind the spin-valley current filtering at the transition points may be the distinction between zero and non-zero Chern numbers: electrons allowed to transport at the transition points must carry zero Chern number, equivalent to zero mass and zero Berry curvature, while electrons with non-zero Chern number are perfectly suppressed. Very large magnetoresistance dips are found, directly induced by the topological phase transition points. Our study also discusses the effect of the spin-valley-dependent Hall conductivity at the transition points on ballistic transport and reveals the potential of silicene as a topological material for spin-valleytronics.
Beam alignment based on two-dimensional power spectral density of a near-field image.
Wang, Shenzhen; Yuan, Qiang; Zeng, Fa; Zhang, Xin; Zhao, Junpu; Li, Kehong; Zhang, Xiaolu; Xue, Qiao; Yang, Ying; Dai, Wanjun; Zhou, Wei; Wang, Yuanchen; Zheng, Kuixing; Su, Jingqin; Hu, Dongxia; Zhu, Qihua
2017-10-30
Beam alignment is crucial to high-power laser facilities and is used to adjust the laser beams quickly and accurately to meet stringent requirements of pointing and centering. In this paper, a novel alignment method is presented, which employs data processing of the two-dimensional power spectral density (2D-PSD) for a near-field image and resolves the beam pointing error relative to the spatial filter pinhole directly. Combining this with a near-field fiducial mark, the operation of beam alignment is achieved. It is experimentally demonstrated that this scheme realizes a far-field alignment precision of approximately 3% of the pinhole size. This scheme adopts only one near-field camera to construct the alignment system, which provides a simple, efficient, and low-cost way to align lasers.
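The 2D-PSD on which the method operates is the squared magnitude of the image's two-dimensional Fourier transform; how the pointing error is then resolved from it is specific to the paper, but the PSD computation itself can be sketched as:

```python
import numpy as np

def psd2d(image):
    """Two-dimensional power spectral density of a near-field image:
    squared magnitude of the centred 2D FFT, normalized so the PSD
    sums to the image's mean-removed total power (Parseval)."""
    img = np.asarray(image, dtype=float)
    img = img - img.mean()                    # remove the DC pedestal
    spec = np.fft.fftshift(np.fft.fft2(img))  # zero frequency at centre
    return np.abs(spec) ** 2 / img.size
```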
Time-frequency filtering and synthesis from convex projections
NASA Astrophysics Data System (ADS)
White, Langford B.
1990-11-01
This paper describes the application of the theory of projections onto convex sets to time-frequency filtering and synthesis problems. We show that the class of Wigner-Ville Distributions (WVD) of L2 signals form the boundary of a closed convex subset of L2(R2). This result is obtained by considering the convex set of states on the Heisenberg group of which the ambiguity functions form the extreme points. The form of the projection onto the set of WVDs is deduced. Various linear and non-linear filtering operations are incorporated by formulation as convex projections. An example algorithm for simultaneous time-frequency filtering and synthesis is suggested.
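The alternating-projection machinery underlying the method can be sketched generically (the paper's actual projections act on Wigner-Ville distributions in L2(R2); the simple planar sets below are purely illustrative):

```python
import numpy as np

def pocs(x0, projections, iters=50):
    """Generic projections-onto-convex-sets iteration: repeatedly apply
    each set's projection operator in turn. For closed convex sets with
    nonempty intersection, the iterates converge to a point of the
    intersection."""
    x = x0
    for _ in range(iters):
        for proj in projections:
            x = proj(x)
    return x

# illustrative convex sets in the plane
proj_halfplane = lambda v: np.array([max(v[0], 1.0), v[1]])  # onto {x >= 1}
proj_line = lambda v: np.full(2, v.mean())                   # onto {y = x}
```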
Turbulent Statistics From Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2013-01-01
Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.
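The second-order, two-point, space-time correlation at the core of this analysis can be sketched as follows (a generic estimator, not the paper's code; the EMD band-pass step is omitted). The lag of the correlation peak divided by the probe separation yields a convection (phase) velocity:

```python
import numpy as np

def space_time_correlation(u1, u2, max_lag):
    """Second-order two-point space-time correlation of two velocity
    time histories (fluctuations about their means), normalized by the
    product of standard deviations."""
    a = u1 - u1.mean()
    b = u2 - u2.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    # correlate a[t] with b[t + lag], truncating to the overlapping span
    r = np.array([np.mean(a[max(0, -l):len(a) - max(0, l)]
                          * b[max(0, l):len(b) - max(0, -l)])
                  for l in lags])
    return lags, r / (a.std() * b.std())
```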
Turbulent Statistics from Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2012-01-01
Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.
Software-defined microwave photonic filter with high reconfigurable resolution
Wei, Wei; Yi, Lilin; Jaouën, Yves; Hu, Weisheng
2016-01-01
Microwave photonic filters (MPFs) are of great interest in radio frequency systems since they provide prominent flexibility in microwave signal processing. Although filter reconfigurability and tunability have been demonstrated repeatedly, it is still difficult to control the filter shape with very high precision, so MPF applications are basically limited to signal selection. Here we present a polarization-insensitive, single-passband, arbitrary-shaped MPF with ~GHz bandwidth based on stimulated Brillouin scattering (SBS) in optical fibre. For the first time, the filter shape, bandwidth, and central frequency can all be precisely defined by software with ~MHz resolution. The unprecedented multi-dimensional filter flexibility offers new possibilities for processing microwave signals directly in the optical domain with high precision, thus enhancing MPF functionality. Nanosecond pulse shaping by implementing precisely defined filters is demonstrated to prove the filter's superiority and practicability. PMID:27759062
Spacecraft attitude determination using a second-order nonlinear filter
NASA Technical Reports Server (NTRS)
Vathsal, S.
1987-01-01
The stringent attitude determination accuracy and faster slew maneuver requirements demanded by present-day spacecraft control systems motivate the development of recursive nonlinear filters for attitude estimation. This paper presents the second-order filter development for the estimation of the attitude quaternion using three-axis gyro and star tracker measurement data. Performance comparisons have been made by computer simulation of system models and filter mechanization. It is shown that the second-order filter consistently performs better than the extended Kalman filter when compared on the performance index of the root-sum-square estimation error of the quaternion vector. The second-order filter also identifies the gyro drift rates faster than the extended Kalman filter. The uniqueness of this algorithm is the online generation of the time-varying process and measurement noise covariance matrices, derived as functions of the process and measurement nonlinearity, respectively.
Software-defined microwave photonic filter with high reconfigurable resolution.
Wei, Wei; Yi, Lilin; Jaouën, Yves; Hu, Weisheng
2016-10-19
Microwave photonic filters (MPFs) are of great interest in radio frequency systems since they provide prominent flexibility in microwave signal processing. Although filter reconfigurability and tunability have been demonstrated repeatedly, it is still difficult to control the filter shape with very high precision, so MPF applications are basically limited to signal selection. Here we present a polarization-insensitive, single-passband, arbitrary-shaped MPF with ~GHz bandwidth based on stimulated Brillouin scattering (SBS) in optical fibre. For the first time, the filter shape, bandwidth, and central frequency can all be precisely defined by software with ~MHz resolution. The unprecedented multi-dimensional filter flexibility offers new possibilities for processing microwave signals directly in the optical domain with high precision, thus enhancing MPF functionality. Nanosecond pulse shaping by implementing precisely defined filters is demonstrated to prove the filter's superiority and practicability.
Sim, Kyoung Mi; Park, Hyun-Seol; Bae, Gwi-Nam; Jung, Jae Hee
2015-11-15
In this study, we demonstrated an antimicrobial nanoparticle-coated electrostatic (ES) air filter. Antimicrobial natural-product Sophora flavescens nanoparticles were produced using an aerosol process, and were continuously deposited onto the surface of air filter media. For the electrostatic activation of the filter medium, a corona discharge electrification system was used before and after antimicrobial treatment of the filter. In the antimicrobial treatment process, the deposition efficiency of S. flavescens nanoparticles on the ES filter was ~12% higher than that on the pristine (Non-ES) filter. In the evaluation of filtration performance using test particles (a nanosized KCl aerosol and submicron-sized Staphylococcus epidermidis bioaerosol), the ES filter showed better filtration efficiency than the Non-ES filter. However, antimicrobial treatment with S. flavescens nanoparticles affected the filtration efficiency of the filter differently depending on the size of the test particles. While the filtration efficiency of the KCl nanoparticles was reduced on the ES filter after the antimicrobial treatment, the filtration efficiency was improved after the recharging process. In summary, we prepared an antimicrobial ES air filter with >99% antimicrobial activity, ~92.5% filtration efficiency (for a 300-nm KCl aerosol), and a ~0.8 mmAq pressure drop (at 13 cm/s). This study provides valuable information for the development of a hybrid air purification system that can serve various functions and be used in an indoor environment. Copyright © 2015 Elsevier B.V. All rights reserved.
Processing and comparison of two weighing lysimeters at the Rietholzbach catchment
NASA Astrophysics Data System (ADS)
Ruth, Conall; Michel, Dominik; Hirschi, Martin; Seneviratne, Sonia I.
2017-04-01
Weighing lysimeters are a well-established means of accurately obtaining local-scale estimates of actual evapotranspiration and seepage within soils. Current state-of-the-art devices have very high temporal resolutions and weighing precisions, and can also be used to estimate precipitation. These, however, require complex filtering to first remove noise (e.g. resulting from wind influence) from the mass measurements. At the Rietholzbach research catchment in northeastern Switzerland, two weighing lysimeters are in operation. One is a recently-installed state-of-the-art mini-lysimeter with a pump-controlled lower boundary; the other is a large free-drainage lysimeter in operation since 1976. To determine the optimal processing approach for the mini-lysimeter, a number of reported approaches were applied, with the resulting evapotranspiration and precipitation records being compared to those of the large lysimeter and a tipping bucket, respectively. Out of those examined, we found the Adaptive-Window and Adaptive-Threshold (AWAT) filter and a similar, non-adaptive approach, to perform best. Using the AWAT-filtered mini-lysimeter data as a reference, additional, retrospectively-applicable processing steps for the large lysimeter were then investigated. Those found to be most beneficial were the application of a three-point (10-min) moving mean to the mass measurements, and the setting-to-zero of estimated evapotranspiration and condensation in hours with greater-than-zero reference tipping bucket precipitation recordings. A comparison of lysimeter mass increases associated with precipitation revealed that the large lysimeter experiences a previously unknown under-catch of 11.1% (for liquid precipitation). Daily seepage measurements were found to be generally greater from the mini-lysimeter, probably reflecting the reduced input of water to the large lysimeter due to this under-catch.
Cancer diagnostics using neural network sorting of processed images
NASA Astrophysics Data System (ADS)
Wyman, Charles L.; Schreeder, Marshall; Grundy, Walt; Kinser, Jason M.
1996-03-01
A combination of image processing and neural network sorting was conducted to demonstrate the feasibility of automated cervical smear screening. Nuclei were isolated to generate a series of data points relating to the density and size of individual nuclei. This was followed by segmentation to isolate entire cells for subsequent generation of data points bounding the size of the cytoplasm. Data points were taken on as many as ten cells per image frame and included correlation against a series of filters providing size and density readings on nuclei. Additional point data were taken on nuclei images to refine size information and on whole cells to bound the size of the cytoplasm; in total, twenty data points were generated per assessed cell. These data point sets, designated as neural tensors, comprise the inputs for training and use of a unique neural network to sort the images and identify those indicating evidence of disease. The neural network, named the Fast Analog Associative Memory, accumulates data and establishes lookup tables for comparison against images to be assessed. Six networks were trained to differentiate normal cells from those evidencing various levels of abnormality that may lead to cancer. A blind test was conducted on 77 images to evaluate system performance. The image set included 31 positives (diseased) and 46 negatives (normal). Our system correctly identified all 31 positives and 41 of the negatives, with 5 false positives. We believe this technology can lead to more efficient automated screening of cervical smears.
Spencer, Richard G
2010-09-01
A type of "matched filter" (MF), used extensively in the processing of one-dimensional spectra, is defined by multiplication of a free-induction decay (FID) by a decaying exponential with the same time constant as that of the FID. This maximizes, in a sense to be defined, the signal-to-noise ratio (SNR) in the spectrum obtained after Fourier transformation. However, a different entity known also as the matched filter was introduced by van Vleck in the context of pulse detection in the 1940's and has become widely integrated into signal processing practice. These two types of matched filters appear to be quite distinct. In the NMR case, the "filter", that is, the exponential multiplication, is defined by the characteristics of, and applied to, a time domain signal in order to achieve improved SNR in the spectral domain. In signal processing, the filter is defined by the characteristics of a signal in the spectral domain, and applied in order to improve the SNR in the temporal (pulse) domain. We reconcile these two distinct implementations of the matched filter, demonstrating that the NMR "matched filter" is a special case of the matched filter more rigorously defined in the signal processing literature. In addition, two limitations in the use of the MF are highlighted. First, application of the MF distorts resonance ratios as defined by amplitudes, although not as defined by areas. Second, the MF maximizes SNR with respect to resonance amplitude, while intensities are often more appropriately defined by areas. Maximizing the SNR with respect to area requires a somewhat different approach to matched filtering.
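The NMR matched filter described here is simply multiplication of the FID by a decaying exponential with the signal's own time constant before Fourier transformation. A minimal sketch (variable names are illustrative):

```python
import numpy as np

def matched_filter_spectrum(fid, t, t2):
    """Apply the NMR 'matched filter' -- multiply the FID by a decaying
    exponential with the same time constant T2 as the signal envelope --
    then Fourier transform to obtain the spectrum."""
    return np.fft.fftshift(np.fft.fft(fid * np.exp(-t / t2)))
```

As the abstract notes, this maximizes SNR with respect to peak amplitude; the effective decay constant is halved, so the line is broadened and amplitude-defined resonance ratios are distorted even though areas are preserved.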
Acousto-Optic Tunable Filter for Time-Domain Processing of Ultra-Short Optical Pulses,
The application of acousto-optic tunable filters for shaping of ultra-fast pulses in the time domain is analyzed and demonstrated. With the rapid...advance of acousto-optic tunable filter (AOTF) technology, the opportunity for sophisticated signal processing capabilities arises. AOTFs offer unique
Effects of antimicrobial treatment on fiberglass-acrylic filters.
Cecchini, C; Verdenelli, M C; Orpianesi, C; Dadea, G M; Cresci, A
2004-01-01
The aims of the present study were to: (i) analyse a group of antimicrobial agents and select those most active against test microbial strains; and (ii) test the effect of antimicrobial treatment of air filters on reducing microbial colonization. Different kinds of antimicrobial agents were analysed to assess their compatibility with the production process of air filter media. The minimal inhibitory concentration of each antimicrobial agent was determined against a defined list of microbial strains, and an antimicrobial activity assay of filter prototypes was developed to determine the most active agent among the compatible antimicrobials. The most active agent was then chosen and added directly to the filter during the production process. The microbial colonization of treated and untreated filter media was assessed at different working times and incubation times by stereomicroscope and scanning electron microscope analysis. Several of the antimicrobial agents analysed were highly active against the microbial test strains and compatible with the production process of the filter media. Analysis of sections of treated filter media showed significantly lower microbial colonization than in untreated media, with a reduction in both the density and variety of species and in the presence of bacteria and fungal hyphae with reproductive structures. This study demonstrated the ability of antimicrobial treatments to inhibit the growth of micro-organisms in filter media and consequently to increase indoor air quality (IAQ), highlighting the value of adding antimicrobials to filter media. By demonstrating the efficacy of incorporating antimicrobial agents in filter media to improve IAQ and health, it contributes to solving the problem of microbial contamination of air filters.
Switching non-local vector median filter
NASA Astrophysics Data System (ADS)
Matsuoka, Jyohei; Koga, Takanori; Suetake, Noriaki; Uchino, Eiji
2016-04-01
This paper describes a novel image filtering method that removes random-valued impulse noise superimposed on a natural color image. In impulse noise removal, it is essential to employ a switching-type filtering method, as used in the well-known switching median filter, to preserve the detail of an original image with good quality. In color image filtering, it is generally preferable to deal with the red (R), green (G), and blue (B) components of each pixel of a color image as elements of a vectorized signal, as in the well-known vector median filter, rather than as component-wise signals, to prevent a color shift after filtering. Taking these fundamentals into consideration, we propose a switching-type vector median filter with non-local processing that mainly consists of a noise detector and a noise removal filter. Concretely, we propose a noise detector that proactively detects noise-corrupted pixels by focusing on the isolation tendencies of pixels of interest not in the input image but in difference images between RGB components. Furthermore, as the noise removal filter, we propose an extended version of the non-local median filter that we previously proposed for grayscale image processing, named the non-local vector median filter, which is designed for color image processing. The proposed method realizes a superior balance between the preservation of detail and impulse noise removal through proactive noise detection and non-local switching vector median filtering, respectively. The effectiveness and validity of the proposed method are verified in a series of experiments using natural color images.
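The vector median at the heart of the noise-removal stage treats each pixel's RGB triple as one vector: the output is the window vector whose summed distance to all others is minimal, which avoids the colour shifts of component-wise filtering. A minimal sketch (the switching/noise-detection logic and non-local search are omitted):

```python
import numpy as np

def vector_median(window_pixels):
    """Vector median of a set of RGB vectors: returns the input vector
    whose summed L2 distance to all other vectors in the window is
    minimal. The output is always one of the original pixels, so no
    new (possibly unnatural) colour is created."""
    d = np.linalg.norm(window_pixels[:, None, :] - window_pixels[None, :, :],
                       axis=-1)                  # pairwise distances
    return window_pixels[d.sum(axis=1).argmin()]
```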
NASA Astrophysics Data System (ADS)
Gézero, L.; Antunes, C.
2017-05-01
Digital terrain models (DTMs) play an essential role in all types of road maintenance, water supply, and sanitation projects. The demand for such information is greater in developing countries, where the lack of infrastructure is higher. In recent years, the use of Mobile LiDAR Systems (MLS) has proved to be a very efficient technique for the acquisition of precise and dense point clouds. These point clouds can supply the data for the production of DTMs in remote areas, owing mainly to the safety, precision, speed of acquisition, and detail of the information gathered. However, point cloud filtering and algorithms that separate "terrain points" from "non-terrain points" quickly and consistently remain a challenge that has caught the interest of researchers. This work presents a method to create the DTM from point clouds collected by MLS. The method is based on two sequential steps. The first step reduces the point cloud to a set of points that represent the terrain's shape, with the distance between points inversely proportional to the terrain variation. The second step applies a Delaunay triangulation to the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large-scale DTM production in remote areas.
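The first step can be read as a variation-adaptive thinning of the cloud. The abstract does not give the authors' actual criterion, so the quadtree-style refinement and every parameter below are illustrative assumptions; the retained points would then be triangulated (e.g. Delaunay) in the second step:

```python
import numpy as np

def thin_terrain_points(points, base_cell=5.0, z_tol=0.2):
    """Illustrative sketch: reduce an (N, 3) cloud to points representing
    the terrain shape, with spacing roughly inversely proportional to
    height variation. Each cell keeps its lowest point (a common ground
    heuristic); a cell whose height spread exceeds z_tol is subdivided,
    so variable terrain retains more points. All parameters are assumed."""
    def keep(pts, cell):
        keys = np.floor(pts[:, :2] / cell).astype(int)
        kept = []
        for key in np.unique(keys, axis=0):
            cell_pts = pts[(keys == key).all(axis=1)]
            if cell > base_cell / 4 and np.ptp(cell_pts[:, 2]) > z_tol:
                kept.extend(keep(cell_pts, cell / 2))   # refine rough cells
            else:
                kept.append(cell_pts[cell_pts[:, 2].argmin()])  # lowest point
        return kept
    return np.array(keep(np.asarray(points, dtype=float), base_cell))
```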
The Ability to Process Abstract Information.
1983-09-01
Responses Associated with Stress... Filter Theories: Broadbent's filter model; Treisman's attenuation model... A model has been proposed by Schneider and Shiffrin (1977) and Shiffrin and Schneider (1977). Unlike Broadbent's filter models, Schneider and Shiffrin...allows for processing to take place only on the input "selected". This filter model is shown in Figure 2A. According to this theory, any information
Error due to unresolved scales in estimation problems for atmospheric data assimilation
NASA Astrophysics Data System (ADS)
Janjic, Tijana
The error arising due to unresolved scales in data assimilation procedures is examined. The problem of estimating the projection of the state of a passive scalar undergoing advection at a sequence of times is considered. The projection belongs to a finite-dimensional function space and is defined on the continuum. Using the continuum projection of the state of a passive scalar, a mathematical definition is obtained for the error arising due to the presence, in the continuum system, of scales unresolved by the discrete dynamical model. This error affects the estimation procedure through point observations that include the unresolved scales. In this work, two approximate methods for taking into account the error due to unresolved scales and the resulting correlations are developed and employed in the estimation procedure. The resulting formulas resemble the Schmidt-Kalman filter and the usual discrete Kalman filter, respectively. For this reason, the newly developed filters are called the Schmidt-Kalman filter and the traditional filter. In order to test the assimilation methods, a two-dimensional advection model with nonstationary spectrum was developed for passive scalar transport in the atmosphere. An analytical solution on the sphere was found depicting the model dynamics evolution. Using this analytical solution the model error is avoided, and the error due to unresolved scales is the only error left in the estimation problem. It is demonstrated that the traditional and the Schmidt-Kalman filter work well provided the exact covariance function of the unresolved scales is known. However, this requirement is not satisfied in practice, and the covariance function must be modeled. The Schmidt-Kalman filter cannot be computed in practice without further approximations. Therefore, the traditional filter is better suited for practical use.
Also, the traditional filter does not require modeling of the full covariance function of the unresolved scales, but only modeling of the covariance matrix obtained by evaluating the covariance function at the observation points. We first assumed that this covariance matrix is stationary and that the unresolved scales are not correlated between the observation points, i.e., the matrix is diagonal, and that the values along the diagonal are constant. Tests with these assumptions were unsuccessful, indicating that a more sophisticated model of the covariance is needed for assimilation of data with nonstationary spectrum. A new method for modeling the covariance matrix based on an extended set of modeling assumptions is proposed. First, it is assumed that the covariance matrix is diagonal, that is, that the unresolved scales are not correlated between the observation points. It is postulated that the values on the diagonal depend on a wavenumber that is characteristic for the unresolved part of the spectrum. It is further postulated that this characteristic wavenumber can be diagnosed from the observations and from the estimate of the projection of the state that is being estimated. It is demonstrated that the new method successfully overcomes previously encountered difficulties.
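The "traditional filter" described above amounts to a standard Kalman analysis step in which the observation-error covariance is augmented by a (here diagonal) model of the unresolved-scale covariance evaluated at the observation points. A minimal sketch under that assumption (names and shapes are illustrative):

```python
import numpy as np

def kalman_update(x, P, y, H, R_instr, r_unres):
    """One analysis step of the 'traditional' filter (a sketch): a
    standard Kalman update in which the observation-error covariance is
    the instrument error plus a diagonal model of the error due to
    unresolved scales at the observation points."""
    R = R_instr + np.diag(r_unres)       # augmented observation covariance
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_a = x + K @ (y - H @ x)            # analysis state
    P_a = (np.eye(len(x)) - K @ H) @ P   # analysis error covariance
    return x_a, P_a
```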
Wave-filter-based approach for generation of a quiet space in a rectangular cavity
NASA Astrophysics Data System (ADS)
Iwamoto, Hiroyuki; Tanaka, Nobuo; Sanada, Akira
2018-02-01
This paper is concerned with the generation of a quiet space in a rectangular cavity using active wave control methodology. The purpose of this paper is to present a wave filtering method for a rectangular cavity using multiple microphones and its application to an adaptive feedforward control system. Firstly, the transfer matrix method is introduced to describe the wave dynamics of the sound field, and feedforward control laws for eliminating transmitted waves are derived. Furthermore, numerical simulations are conducted that show the best possible result of active wave control. This is followed by the derivation of the wave filtering equations, which indicate the structure of the wave filter. It is clarified that the wave filter consists of three portions: a modal group filter, a rearrangement filter, and a wave decomposition filter. Next, from a numerical point of view, the accuracy of the wave decomposition filter, which is expressed as a function of frequency, is investigated using condition numbers. Finally, an experiment on the adaptive feedforward control system using the wave filter is carried out, demonstrating that a quiet space is generated in the target space by the proposed method.
Methodology for Modeling the Microbial Contamination of Air Filters
Joe, Yun Haeng; Yoon, Ki Young; Hwang, Jungho
2014-01-01
In this paper, we propose a theoretical model to simulate microbial growth on contaminated air filters and entrainment of bioaerosols from the filters to an indoor environment. Air filter filtration and antimicrobial efficiencies, and effects of dust particles on these efficiencies, were evaluated. The number of bioaerosols downstream of the filter could be characterized according to three phases: initial, transitional, and stationary. In the initial phase, the number was determined by filtration efficiency, the concentration of dust particles entering the filter, and the flow rate. During the transitional phase, the number of bioaerosols gradually increased up to the stationary phase, at which point no further increase was observed. The antimicrobial efficiency and flow rate were the dominant parameters affecting the number of bioaerosols downstream of the filter in the transitional and stationary phase, respectively. It was found that the nutrient fraction of dust particles entering the filter caused a significant change in the number of bioaerosols in both the transitional and stationary phases. The proposed model would be a solution for predicting the air filter life cycle in terms of microbiological activity by simulating the microbial contamination of the filter. PMID:24523908
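The three-phase behaviour described above can be imitated with a toy model. This is emphatically not the authors' model: the logistic growth term, the re-entrainment fraction, and all parameter names and values below are assumptions made purely to illustrate the initial/transitional/stationary structure:

```python
import numpy as np

def downstream_bioaerosols(n_in, eta_filt, eta_anti, growth_rate,
                           carrying_cap, t):
    """Toy sketch of the three-phase behaviour: penetration through the
    filter dominates initially; microbes that survive the antimicrobial
    coating grow logistically on the filter; an entrainment term then
    saturates once growth reaches the carrying capacity (stationary)."""
    penetration = (1 - eta_filt) * n_in              # initial phase
    surviving = (1 - eta_anti) * eta_filt * n_in     # retained and viable
    load = carrying_cap / (1 + (carrying_cap / max(surviving, 1e-9) - 1)
                           * np.exp(-growth_rate * t))   # logistic growth
    entrained = 0.01 * load                          # assumed re-entrainment
    return penetration + entrained
```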
Analysis of ICESat Data Using Kalman Filter and Kriging to Study Height Changes in East Antarctica
NASA Technical Reports Server (NTRS)
Herring, Thomas A.
2005-01-01
We analyze ICESat-derived heights collected between Feb. 2003 and Nov. 2004 using a kriging/Kalman filtering approach to investigate height changes in East Antarctica. The model's parameters are the height change relative to an a priori static digital height model, a seasonal signal expressed as an amplitude β and phase θ, and a height-change rate dh/dt for each (100 km)² block. From the Kalman filter results, dh/dt has a mean of -0.06 m/yr in the flat interior of East Antarctica. Spatially correlated pointing errors in the current data releases give uncertainties of around 0.06 m/yr, making height-change detection unreliable at this time. Our test shows that, when using all available data with pointing knowledge equivalent to that of Laser 2a, height-change detection at an accuracy level of 0.02 m/yr can be achieved over flat terrain in East Antarctica.
Design of distributed FBG vibration measuring system based on Fabry-Perot tunable filter
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Miao, Changyun; Li, Hongqiang; Gao, Hua; Gan, Jingmeng
2011-11-01
A distributed fiber Bragg grating wavelength interrogator based on a fiber Fabry-Perot tunable filter (FFP-TF) is proposed, which can measure the dynamic strain or vibration of multiple sensing fiber gratings in one optical fiber by time-division multiplexing. The mathematical model of wavelength demodulation is built, formulas for the system output voltage and sensitivity are deduced, and a method of finding the static operating point is determined. The wavelength-drift characteristic of the FFP-TF when its center wavelength is set at the static operating point is discussed, and a wavelength-locking method is proposed that introduces a high-frequency driving voltage signal. A demodulation system was established in LabVIEW; its demodulated wavelength dynamic range is 290 pm in theory. In experiments, with digital filtering applied to the system output data, 100 Hz and 250 Hz vibration signals were measured. The experimental results prove the feasibility of the demodulation method.
An Integrated Optimal Estimation Approach to Spitzer Space Telescope Focal Plane Survey
NASA Technical Reports Server (NTRS)
Bayard, David S.; Kang, Bryan H.; Brugarolas, Paul B.; Boussalis, D.
2004-01-01
This paper discusses an accurate and efficient method for the focal plane survey that was used for the Spitzer Space Telescope. The approach is based on a high-order, 37-state Instrument Pointing Frame (IPF) Kalman filter that combines both engineering parameters and science parameters in a single filter formulation. In this approach, engineering parameters such as pointing alignments, thermomechanical drift, and gyro drifts are estimated along with science parameters such as plate scales and optical distortions. This integrated approach has many advantages over estimating the engineering and science parameters separately. The resulting focal plane survey approach is applicable to a diverse range of science instruments, such as imaging cameras, spectroscopy slits, and scanning-type arrays. The paper summarizes results from applying the IPF Kalman filter to calibrating the Spitzer Space Telescope focal plane, which contains the MIPS, IRAC, and IRS science instrument arrays.
An Adaptive Kalman Filter using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real-world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate them. Most, such as maximum likelihood, subspace, and observer/Kalman filter identification methods, require extensive offline processing and are not suitable for real-time use. One technique that is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters results in a non-white sequence of filter measurement residuals, and the residual tuning technique uses this information to estimate corrections to those parameters. The actual implementation is a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyros.
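The residual tuning idea described above can be sketched for a scalar random-walk state; when R is correctly tuned, the innovations are white with variance S = P_pred + R, so R can be corrected from their sample variance in a side computation that runs alongside the filter. A generic textbook illustration, not the WIRE flight implementation; the function name and windowing scheme are assumptions:

```python
import numpy as np

def adaptive_kalman(z, q=1e-4, r0=5.0, window=100):
    """Scalar random-walk Kalman filter with residual-based tuning of R."""
    x, p, r = 0.0, 1.0, r0
    innovations, estimates = [], []
    for zk in z:
        p_pred = p + q                          # predict covariance
        nu = zk - x                             # measurement residual
        innovations.append(nu)
        if len(innovations) >= window:          # residual tuning step
            s_emp = np.var(innovations[-window:])
            r = max(s_emp - p_pred, 1e-8)       # corrected R estimate
        k = p_pred / (p_pred + r)               # Kalman gain
        x += k * nu                             # state update
        p = (1.0 - k) * p_pred                  # covariance update
        estimates.append(x)
    return np.array(estimates), r

# Constant level 2.0 observed with true R = 0.25, starting from r0 = 5.0
rng = np.random.default_rng(0)
z = 2.0 + rng.normal(0.0, 0.5, 3000)
xs, r_est = adaptive_kalman(z)
```

With the deliberately wrong initial R, the windowed innovation variance pulls the estimate back toward the true measurement-noise level.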
Laboratory-scale integrated ARP filter test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poirier, M.; Burket, P.
2016-03-01
The Savannah River Site (SRS) is currently treating radioactive liquid waste with the Actinide Removal Process (ARP) and the Modular Caustic-Side Solvent Extraction Unit (MCU). Recently, the low filter flux through the ARP, approximately 5 gallons per minute, has limited the rate at which radioactive liquid waste can be treated. Salt Batch 6 had a lower processing rate and required frequent filter cleaning, and there is a desire to understand the causes of the low filter flux and to increase ARP/MCU throughput. This task attempted to simulate the entire ARP process, including multiple batches (5), washing, chemical cleaning, and blending the feed with heels and recycle streams. The objective of the tests was to determine whether one of these processes causes excessive fouling of the crossflow or secondary filter. The authors conducted the tests with feed solutions containing 6.6 M sodium Salt Batch 6 simulant supernate with no MST.
Using quantum filters to process images of diffuse axonal injury
NASA Astrophysics Data System (ADS)
Pineda Osorio, Mateo
2014-06-01
Images corresponding to diffuse axonal injury (DAI) are processed using several quantum filters, such as Hermite, Weibull, and Morse filters. Diffuse axonal injury is a particular, common, and severe case of traumatic brain injury (TBI); it involves global damage to brain tissue on a microscopic scale and causes serious neurologic abnormalities. New imaging techniques provide excellent images showing the cellular damage related to DAI. These images can be processed with quantum filters, which achieve high resolution of dendritic and axonal structures in both normal and pathological states. Using the Laplacian operators from the new quantum filters, excellent edge detectors for neurofiber resolution are obtained. Quantum image processing of DAI images is performed using computer algebra, specifically Maple. The construction of quantum-filter plugins, which could be incorporated into the ImageJ software package to simplify its use by medical personnel, is proposed as a future research line.
Oil Bypass Filter Technology Evaluation - Third Quarterly Report, April--June 2003
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurence R. Zirker; James E. Francfort
2003-08-01
This Third Quarterly Report details the ongoing fleet evaluation of an oil bypass filter technology by the Idaho National Engineering and Environmental Laboratory (INEEL) for the U.S. Department of Energy's FreedomCAR & Vehicle Technologies Program. Eight full-size, four-cycle diesel-engine buses used to transport INEEL employees on various routes have been equipped with oil bypass filter systems from the PuraDYN Corporation. The reported engine lubricating-oil filtering capability (down to 0.1 microns) and additive package of the bypass filter system are intended to extend oil-drain intervals. To validate the extended oil-drain intervals, an oil-analysis regime monitors the presence of necessary additives in the oil, detects undesirable contaminants and engine wear metals, and evaluates the fitness of the oil for continued service. The eight buses have accumulated 185,000 miles to date without any oil changes. The preliminary economic analysis suggests that the per-bus payback point for the oil bypass filter technology should be between 108,000 miles, when 74 gallons of oil use is avoided, and 168,000 miles, when 118 gallons of oil use is avoided. As discussed in the report, the variation in the payback point is dependent on the assumed cost of oil. In anticipation of also evaluating oil bypass systems on six Chevrolet Tahoe sport utility vehicles, the oil in the six Tahoes is being sampled to develop an oil characterization history for each engine.
Karacan, C. Özgen; Olea, Ricardo A.
2013-01-01
The systematic approach presented in this paper is the first in the literature in which history matching, TIs of GIPs, and filter simulations are used for degasification performance evaluation and for assessing GIP for mining safety. Results from this study showed that production history matching of coalbed methane wells to determine time-lapsed reservoir data could be used to compute spatial GIP and representative GIP TIs generated through Voronoi decomposition. Furthermore, filter simulations using point-wise data and TIs could be used to predict the methane quantity in coal seams subjected to degasification. During the course of the study, it was shown that the material balance of gas produced by wellbores and the GIP reductions in coal seams predicted using filter simulations compared very well, demonstrating the success of filter simulations for continuous variables in this case study. Quantitative results from filter simulations of GIP within the studied area showed that GIP was reduced from an initial ∼73 Bcf (median) to ∼46 Bcf (2011), a 37% decrease, varying spatially through degasification. It is forecast that there will be an additional ∼2 Bcf reduction in methane quantity between 2011 and 2015. These results show that the applied methodology and techniques can be used to map GIP and its change within coal seams after degasification, which can further be used for ventilation design for methane control in coal mines.
NASA Astrophysics Data System (ADS)
Shen, Yan; Ge, Jin-ming; Zhang, Guo-qing; Yu, Wen-bin; Liu, Rui-tong; Fan, Wei; Yang, Ying-xuan
2018-01-01
This paper explores the problem of signal processing in optical current transformers (OCTs). Based on the noise characteristics of OCTs, such as overlapping signals, noise frequency bands, low signal-to-noise ratios, and difficulties in acquiring the statistical features of noise power, an improved standard Kalman filtering algorithm was proposed for direct current (DC) signal processing. The state-space model of the OCT DC measurement system is first established, and mixed noise is then handled by incorporating it into the measurement and state parameters. According to the minimum mean squared error criterion, the state prediction and update equations of the improved Kalman algorithm could be deduced from the established model. An improved central difference Kalman filter was proposed for alternating current (AC) signal processing, which improved the sampling strategy and the handling of colored noise. Real-time estimation and correction of noise were achieved by designing AC and DC noise recursive filters. Experimental results show that the improved signal processing algorithms had a good filtering effect on AC and DC signals with the mixed noise of OCT. Furthermore, the proposed algorithm was able to achieve real-time correction of noise during the OCT filtering process.
NASA Astrophysics Data System (ADS)
Piretzidis, Dimitrios; Sideris, Michael G.
2017-09-01
Filtering and signal processing techniques have been widely used in the processing of satellite gravity observations to reduce measurement noise and correlation errors. The parameters and types of filters used depend on the statistical and spectral properties of the signal under investigation. Filtering is usually applied in a non-real-time environment. The present work focuses on the implementation of an adaptive filtering technique to process satellite gravity gradiometry data for gravity field modeling. Adaptive filtering algorithms are commonly used in communication systems, noise and echo cancellation, and biomedical applications. Two independent studies have been performed to introduce adaptive signal processing techniques and test the performance of the least mean-squared (LMS) adaptive algorithm for filtering satellite measurements obtained by the gravity field and steady-state ocean circulation explorer (GOCE) mission. In the first study, a Monte Carlo simulation is performed in order to gain insights about the implementation of the LMS algorithm on data with spectral behavior close to that of real GOCE data. In the second study, the LMS algorithm is implemented on real GOCE data. Experiments are also performed to determine suitable filtering parameters. Only the four accurate components of the full GOCE gravity gradient tensor of the disturbing potential are used. The characteristics of the filtered gravity gradients are examined in the time and spectral domain. The obtained filtered GOCE gravity gradients show an agreement of 63-84 mEötvös (depending on the gravity gradient component), in terms of RMS error, when compared to the gravity gradients derived from the EGM2008 geopotential model. Spectral-domain analysis of the filtered gradients shows that the adaptive filters slightly suppress frequencies in the bandwidth of approximately 10-30 mHz. The limitations of the adaptive LMS algorithm are also discussed. 
The tested filtering algorithm can be connected to and employed in the first computational steps of the space-wise approach, where a time-wise Wiener filter is applied at the first stage of GOCE gravity gradient filtering. The results of this work can be extended to using other adaptive filtering algorithms, such as the recursive least-squares and recursive least-squares lattice filters.
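The LMS update at the heart of the study can be sketched as a generic adaptive FIR filter; a textbook illustration, not the GOCE processing chain itself (function names and parameters are assumptions):

```python
import numpy as np

def lms_filter(d, x, n_taps=8, mu=0.01):
    """Least mean-squares (LMS) adaptive FIR filter.

    The weights descend the instantaneous squared error via
    w <- w + mu * e[k] * x_vec, where e[k] = d[k] - y[k].
    """
    w = np.zeros(n_taps)
    y = np.zeros(len(d))
    e = np.zeros(len(d))
    for k in range(n_taps - 1, len(d)):
        x_vec = x[k - n_taps + 1:k + 1][::-1]   # newest sample first
        y[k] = w @ x_vec                        # filter output
        e[k] = d[k] - y[k]                      # error signal
        w += mu * e[k] * x_vec                  # stochastic-gradient update
    return y, e, w

# System identification: the adapted weights converge to the unknown FIR response
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)                   # reference input
h_true = np.array([0.5, -0.3, 0.2])             # "unknown" system
d = np.convolve(x, h_true)[:len(x)]             # observed system output
_, _, w = lms_filter(d, x)
```

The step size mu trades convergence speed against misadjustment, which is the practical tuning question the abstract's Monte Carlo study addresses.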
Performance and Challenges of Point of Use Devices for Lead ...
This presentation summarizes the performance of POU devices for the removal of lead and some other metals in Flint, Michigan. The mechanism of POU filters for metal removal is described as a combination of physical filtration with surface sorption and adherence to embedded functional groups in the carbon block, along with the certification process and how to find certified products in the web listings. Finally, several alternative approaches are discussed for possible future improvement of the NSF/ANSI 53 and 42 standards, to increase the protection afforded by the devices.
Apparatus and process for microbial detection and enumeration
NASA Technical Reports Server (NTRS)
Wilkins, J. R.; Grana, D. (Inventor)
1982-01-01
An apparatus and process for detecting and enumerating specific microorganisms from large-volume samples containing small numbers of the microorganisms is presented. The large-volume samples are filtered through a membrane filter to concentrate the microorganisms. The filter, previously moistened with a growth medium for the microorganisms, is positioned between two absorbent pads. A pair of electrodes is placed against the filter, and the pad-electrode-filter assembly is retained within a petri dish by a retainer ring. The cover is positioned on the base of the petri dish and sealed at the edges with a parafilm seal before being electrically connected via connectors to a strip-chart recorder for detecting and enumerating the microorganisms collected on the filter.
On pads and filters: Processing strong-motion data
Boore, D.M.
2005-01-01
Processing of strong-motion data in many cases can be as straightforward as filtering the acceleration time series and integrating to obtain velocity and displacement. To avoid the introduction of spurious low-frequency noise in quantities derived from the filtered accelerations, however, care must be taken to append zero pads of adequate length to the beginning and end of the segment of recorded data. These padded sections of the filtered acceleration need to be retained when deriving velocities, displacements, Fourier spectra, and response spectra. In addition, these padded and filtered sections should also be included in the time series used in the dynamic analysis of structures and soils to ensure compatibility with the filtered accelerations.
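The pad-filter-integrate workflow can be sketched as follows, assuming a zero-phase Butterworth high-pass and a pad length following Boore's rule of thumb of 1.5 * order / fc seconds split between the two ends; an illustrative outline, not any agency's production processing:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def process_acceleration(acc, dt, fc=0.05, order=4):
    """Append zero pads, high-pass filter, and integrate an acceleration
    record, keeping the padded sections in the derived series as the
    abstract recommends. fc is the high-pass corner in Hz, dt in seconds.
    """
    npad = int(0.75 * order / fc / dt)           # half the total pad, each end
    padded = np.concatenate([np.zeros(npad), acc, np.zeros(npad)])
    b, a = butter(order, fc, btype="highpass", fs=1.0 / dt)
    filt = filtfilt(b, a, padded)                # zero-phase (acausal) filter
    vel = np.cumsum(filt) * dt                   # acceleration -> velocity
    disp = np.cumsum(vel) * dt                   # velocity -> displacement
    return filt, vel, disp

dt = 0.01
acc = np.sin(2 * np.pi * 1.0 * np.arange(0, 10, dt))   # 1 Hz test motion
filt, vel, disp = process_acceleration(acc, dt)
```

Note that the returned series are longer than the input: dropping the pads after filtering is exactly what reintroduces the spurious low-frequency drift the abstract warns about.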
The role of aluminum in slow sand filtration.
Weber-Shirk, Monroe L; Chan, Kwok Loon
2007-03-01
Engineering enhancement of slow sand filtration has been an enigma in large part because the mechanisms responsible for particle removal have not been well characterized. The presumed role of biological processes in the filter ripening process nearly precluded the possibility of enhancing filter performance since interventions to enhance biological activity would have required decreasing the quality of the influent water. In previous work, we documented that an acid soluble polymer controls filter performance. The new understanding that particle removal is controlled in large part by physical chemical mechanisms has expanded the possibilities of engineering slow sand filter performance. Herein, we explore the role of naturally occurring aluminum as a ripening agent for slow sand filters and the possibility of using a low dose of alum to improve filter performance or to ripen slow sand filters.
[An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].
Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu
2016-04-01
The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. First, the first-order derivative of the original ECG signal is calculated, and the maximum and minimum points of each beat are obtained and treated as the positions of the fiducial points. The original ECG is then fed into a high-pass filter with a 1.5 Hz cutoff frequency, and the difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. Cubic spline interpolation is then used to fit the fiducial points, and the fitted curve is the baseline drift curve. For two simulated test cases, the correlation coefficients between the curve fitted by the presented algorithm and the simulated curve increased by 0.242 and 0.13 compared with the traditional cubic spline interpolation algorithm. For clinical baseline drift data, the average correlation coefficient of the presented algorithm reached 0.972.
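The spline-fitting step can be sketched as follows; an illustrative sketch of the interpolation and subtraction only, with the fiducial detection and 1.5 Hz high-pass assumed to have been done upstream:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def remove_baseline(ecg, fid_idx, fid_amp):
    """Subtract a baseline-drift curve obtained by cubic-spline
    interpolation through fiducial points.

    fid_idx: sample indices of fiducial points (e.g. per beat);
    fid_amp: drift amplitude there, taken in the paper as the difference
    between the raw ECG and its 1.5 Hz high-passed copy.
    """
    spline = CubicSpline(fid_idx, fid_amp)
    baseline = spline(np.arange(len(ecg)))   # fitted baseline-drift curve
    return ecg - baseline, baseline

# Synthetic check: a slow sinusoidal drift sampled at the fiducial points
n = 2000
drift = 0.3 * np.sin(2 * np.pi * np.arange(n) / n)   # one slow cycle
ecg = drift                                          # drift only, no QRS
fid = np.append(np.arange(0, n, 100), n - 1)         # cover both endpoints
corrected, baseline = remove_baseline(ecg, fid, drift[fid])
```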
EFFECT OF LOADING DUST TYPE ON THE FILTRATION EFFICIENCY OF ELECTROSTATICALLY CHARGED FILTERS
The paper gives results of an evaluation of the effect of loading dust type on the filtration efficiency of electrostatically charged filters. Three types of filters were evaluated: a rigid-cell filter charged using an electrodynamic spinning process, a pleated-panel filter cha...
VizieR Online Data Catalog: M33 GALEX catalogue of UV point sources (Mudd+, 2015)
NASA Astrophysics Data System (ADS)
Mudd, D.; Stanek, K. Z.
2015-11-01
This catalogue was made using the Ultraviolet Imaging Telescope (UIT), an instrument aboard the Astro-1 mission. UIT used photographic plates with the B1 and A1 filters, roughly corresponding to the FUV and NUV filters of GALEX and having central wavelengths of ~1500 and ~2400 Å, respectively. It should be noted, however, that the A1 filter is significantly broader than the NUV filter on GALEX, reaching several hundred angstroms to the red of its GALEX counterpart. The field of view of UIT is also circular but has a smaller radius of 18 arcmin. The FWHM of UIT is comparable to that of GALEX, at 4 and 5.2 arcsec in the NUV and FUV filters, respectively. (3 data files).
An Adaptive Kalman Filter Using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real-world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate them. Most, such as maximum likelihood, subspace, and observer/Kalman filter identification methods, require extensive offline processing and are not suitable for real-time use. One technique that is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters results in a non-white sequence of filter measurement residuals, and the residual tuning technique uses this information to estimate corrections to those parameters. The actual implementation is a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for the estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyros.
Electronic filters, hearing aids and methods
NASA Technical Reports Server (NTRS)
Engebretson, A. Maynard (Inventor); O'Connell, Michael P. (Inventor); Zheng, Baohua (Inventor)
1991-01-01
An electronic filter for an electroacoustic system. The system has a microphone for generating an electrical output from external sounds and an electrically driven transducer for emitting sound. Some of the sound emitted by the transducer returns to the microphone means to add a feedback contribution to its electrical output. The electronic filter includes a first circuit for electronic processing of the electrical output of the microphone to produce a filtered signal. An adaptive filter, interconnected with the first circuit, performs electronic processing of the filtered signal to produce an adaptive output to the first circuit that substantially offsets the feedback contribution in the electrical output of the microphone; the adaptive filter includes means for adapting only in response to the polarities of signals supplied to and from the first circuit. Other electronic filters for hearing aids, public address systems, and other electroacoustic systems, as well as such systems and methods of operating them, are also disclosed.
Shah, Kamal G; Singh, Vidhi; Kauffman, Peter C; Abe, Koji; Yager, Paul
2018-05-14
Paper-based diagnostic tests based on the lateral flow immunoassay concept promise low-cost, point-of-care detection of infectious diseases, but such assays suffer from poor limits of detection. One factor that contributes to poor analytical performance is a reliance on low-contrast chromophoric optical labels such as gold nanoparticles. Previous attempts to improve the sensitivity of paper-based diagnostics include replacing chromophoric labels with enzymes, fluorophores, or phosphors at the expense of increased fluidic complexity or the need for device readers with costly optoelectronics. Several groups, including our own, have proposed mobile phones as suitable point-of-care readers due to their low cost, ease of use, and ubiquity. However, extant mobile phone fluorescence readers require costly optical filters and were typically validated with only one camera sensor module, which is inappropriate for potential point-of-care use. In response, we propose to couple low-cost ultraviolet light-emitting diodes with long Stokes-shift quantum dots to enable ratiometric mobile phone fluorescence measurements without optical filters. Ratiometric imaging with unmodified smartphone cameras improves the contrast and attenuates the impact of excitation intensity variability by 15×. Practical application was shown with a lateral flow immunoassay for influenza A with nucleoproteins spiked into simulated nasal matrix. Limits of detection of 1.5 and 2.6 fmol were attained on two mobile phones, which are comparable to a gel imager (1.9 fmol), 10× better than imaging gold nanoparticles on a scanner (18 fmol), and >2 orders of magnitude better than gold nanoparticle-labeled assays imaged with mobile phones. Use of the proposed filter-free mobile phone imaging scheme is a first step toward enabling a new generation of highly sensitive, point-of-care fluorescence assays.
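The filter-free ratiometric idea can be sketched as a per-pixel channel ratio: the long Stokes-shift quantum-dot emission lands in one camera channel while the excitation leakage lands in another, and both scale with local excitation intensity, so their ratio cancels illumination non-uniformity to first order. A hypothetical illustration of the principle, not the published pipeline (channel assignments are assumptions):

```python
import numpy as np

def ratiometric_image(rgb):
    """Per-pixel ratio of an assumed emission channel (red) to an
    assumed excitation-reference channel (blue) of an RGB image.
    """
    rgb = np.asarray(rgb, dtype=float)
    return rgb[..., 0] / (rgb[..., 2] + 1e-9)    # epsilon avoids divide-by-zero

# Excitation varies 4x across the field, yet the ratio stays flat
exc = np.outer(np.linspace(1.0, 4.0, 32), np.linspace(1.0, 4.0, 32))
img = np.dstack([0.8 * exc, np.zeros_like(exc), 0.4 * exc])
ratio = ratiometric_image(img)
```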
Suppression of biodynamic interference in head-tracked teleoperation
NASA Technical Reports Server (NTRS)
Lifshitz, S.; Merhav, S. J.; Grunwald, A. J.; Tucker, G. E.; Tischler, M. B.
1991-01-01
The utility of helmet-tracked sights to provide pointing commands for the teleoperation of cameras, lasers, or antennas in aircraft is degraded by the presence of uncommanded, involuntary head motion, referred to as biodynamic interference. This interference limits the achievable precision in pointing tasks. The noise contributions due to biodynamic interference consist of an additive component, which is correlated with aircraft vibration, and an uncorrelated, nonadditive component, referred to as remnant. An experimental simulation study is described which investigated the improvements achievable in pointing and tracking precision using dynamic display shifting in the helmet-mounted display. The experiment was conducted in a six-degree-of-freedom motion-base simulator with an emulated helmet-mounted display. Highly experienced pilot subjects performed precision head-pointing tasks while manually flying a visual flight-path tracking task. Four schemes using adaptive and low-pass filtering of the head motion were evaluated to determine their effects on task performance and pilot workload in the presence of whole-body vibration characteristic of helicopter flight. The results indicate that, for tracking tasks involving continuously moving targets, improvements of up to 70 percent in percent on-target dwelling time and of up to 35 percent in rms tracking error can be achieved with the adaptive-plus-low-pass-filter configuration. The results with the same filter configuration for the task of capturing randomly positioned, stationary targets show an increase of up to 340 percent in the number of targets captured and an improvement of up to 24 percent in the average capture time. The adaptive-plus-low-pass-filter combination was considered to exhibit the best overall display dynamics by each of the subjects.
EarthServer2 : The Marine Data Service - Web based and Programmatic Access to Ocean Colour Open Data
NASA Astrophysics Data System (ADS)
Clements, Oliver; Walker, Peter
2017-04-01
The ESA Ocean Colour - Climate Change Initiative (ESA OC-CCI) has produced a long-term, high-quality global dataset with associated per-pixel uncertainty data. This dataset has now grown to several hundred terabytes (uncompressed) and is freely available to download. However, the sheer size of the dataset can act as a barrier to many users; large network bandwidth, local storage, and processing requirements can prevent researchers without the backing of a large organisation from taking advantage of this raw data. The EC H2020 project EarthServer2 aims to create a federated data service providing access to more than 1 petabyte of earth science data. Within this federation, the Marine Data Service already provides an innovative on-line toolkit for filtering, analysing, and visualising OC-CCI data. Data are made available, filtered, and processed at source through a standards-based interface, the Open Geospatial Consortium Web Coverage Service and Web Coverage Processing Service. This work was initiated in the EC FP7 EarthServer project, where it was found that the unfamiliarity and complexity of these interfaces themselves created a barrier to wider uptake. The continuation project, EarthServer2, addresses these issues by providing higher-level tools for working with these data, and we will present some examples of these tools. Many researchers wish to extract time-series data from discrete points of interest. We will present a web-based interface, based on NASA/ESA WebWorldWind, for selecting points of interest and plotting time series from a chosen dataset. In addition, a CSV file of locations and times, such as a ship's track, can be uploaded; these points are extracted and returned in a CSV file, allowing researchers to work with the extracted data locally, for instance in a spreadsheet. We will also present a set of Python and JavaScript APIs that have been created to complement and extend the web-based GUI. These APIs allow the selection of single points and areas for extraction.
The extracted data is returned as structured data (for instance a Python array) which can then be passed directly to local processing code. We will highlight how the libraries can be used by the community and integrated into existing systems, for instance by the use of Jupyter notebooks to share Python code examples which can then be used by other researchers as a basis for their own work.
Selection vector filter framework
NASA Astrophysics Data System (ADS)
Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.
2003-10-01
We provide a unified framework of nonlinear vector techniques that output the lowest-ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. A new class of nonlinear selection filters is based on robust order-statistic theory and the minimization of the weighted distance function to the other input samples. The proposed method can be designed to perform a variety of filtering operations, including previously developed techniques such as the vector median, the basic vector directional filter, the directional distance filter, weighted vector median filters, and weighted directional filters. A wide range of filtering operations is guaranteed by the filter structure, which has two independent weight vectors for the angular and distance domains of the vector space. To adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms that take advantage of weighted median filters and of the relationship between the standard median filter and the vector median filter. Thus, we can deal with both the statistical and deterministic aspects of the filter design process. It is shown that the proposed method has the required properties: the capability of modelling the underlying system in the application at hand, robustness with respect to errors in the model of the underlying system, the availability of a training procedure and, finally, simplicity of filter representation, analysis, design, and implementation. Simulation studies also indicate that the new filters are computationally attractive and perform excellently in environments corrupted by bit errors and impulsive noise.
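The vector median, one of the selection filters generalized by this framework, can be sketched directly from its definition: the output is the input sample that minimizes the aggregate distance to all other samples in the window (here with unweighted L2 distances, a special case of the framework):

```python
import numpy as np

def vector_median(window):
    """Return the vector median of a set of multichannel samples.

    The vector median is the sample minimizing the sum of L2 distances
    to every other sample in the window, so the output is always one of
    the inputs (a selection filter).
    """
    window = np.asarray(window, dtype=float)      # shape (n_samples, channels)
    dists = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=2)
    return window[np.argmin(dists.sum(axis=1))]   # lowest-ranked vector

# A 3x3 RGB window with one impulsive outlier: the outlier is rejected
pixels = [[10, 10, 10]] * 8 + [[255, 0, 255]]
print(vector_median(pixels))   # -> [10. 10. 10.]
```

Because the output is restricted to the input set, the filter never introduces color artifacts, which is why selection filters are favored for impulsive noise in color images.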
Optical implementation of the synthetic discriminant function
NASA Astrophysics Data System (ADS)
Butler, S.; Riggins, J.
1984-10-01
Much attention is focused on the use of coherent optical pattern recognition (OPR) using matched spatial filters for robotics and intelligent systems. The OPR problem consists of three aspects -- information input, information processing, and information output. This paper discusses the information processing aspect which consists of choosing a filter to provide robust correlation with high efficiency. The filter should ideally be invariant to image shift, rotation and scale, provide a reasonable signal-to-noise (S/N) ratio and allow high throughput efficiency. The physical implementation of a spatial matched filter involves many choices. These include the use of conventional holograms or computer-generated holograms (CGH) and utilizing absorption or phase materials. Conventional holograms inherently modify the reference image by non-uniform emphasis of spatial frequencies. Proper use of film nonlinearity provides improved filter performance by emphasizing frequency ranges crucial to target discrimination. In the case of a CGH, the emphasis of the reference magnitude and phase can be controlled independently of the continuous tone or binary writing processes. This paper describes computer simulation and optical implementation of a geometrical shape and a Synthetic Discriminant Function (SDF) matched filter. The authors chose the binary Allebach-Keegan (AK) CGH algorithm to produce actual filters. The performances of these filters were measured to verify the simulation results. This paper provides a brief summary of the matched filter theory, the SDF, CGH algorithms, Phase-Only-Filtering, simulation procedures, and results.
In-line Kevlar filters for microfiltration of transuranic-containing liquid streams.
Gonzales, G J; Beddingfield, D H; Lieberman, J L; Curtis, J M; Ficklin, A C
1992-06-01
The Department of Energy Rocky Flats Plant has numerous ongoing efforts to minimize the generation of residue and waste and to improve safety and health. Spent polypropylene liquid filters held for plutonium recovery, known as "residue," or as transuranic mixed waste, contribute to storage capacity problems and create radiation safety and health concerns. An in-line process-liquid filter made of Kevlar polymer fiber has been evaluated for its potential to: (1) minimize filter residue, (2) recover economically viable quantities of plutonium, (3) minimize liquid storage tank and process-stream radioactivity, and (4) reduce the potential personnel radiation exposure associated with these sources. Kevlar filters were rated at less than or equal to 1 μm nominal filtration and are capable of reducing undissolved plutonium particles to more than 10 times below the economic discard limit; however, they produced high back-pressures and are not yet acid resistant. Kevlar filters acted as a sieve, performing independently of the loaded particles. Polypropylene filters removed molybdenum particles at efficiencies equal to Kevlar filters only after loading molybdenum during recirculation events. Kevlar's high-efficiency microfiltration of process-liquid streams for the removal of actinides has the potential to reduce personnel radiation exposure by a factor of 6 or greater, while simultaneously reducing the generation of filter residue and waste by a factor of 7. Insoluble plutonium may be recoverable from Kevlar filters by incineration.
Streambank Erosion from Grazed Pastures, Grass Filters and Forest Buffers Over a Six-Year Period
USDA-ARS?s Scientific Manuscript database
In agricultural landscapes, streambank erosion, as a source of non-point water pollution, is one of the major contributors to stream habitat degradation. Streambank erosion rates from riparian forest buffers, grass filters and grazed pastures (stocking rates ranged from 0.23 to 1.15 cow-days ha-1 m-...
NASA Astrophysics Data System (ADS)
Szadkowski, Zbigniew; Fraenkel, E. D.; van den Berg, Ad M.
2013-10-01
We present the FPGA/NIOS implementation of an adaptive finite impulse response (FIR) filter based on linear prediction to suppress radio frequency interference (RFI). This technique will be used for experiments that observe coherent radio emission from extensive air showers induced by ultra-high-energy cosmic rays. These experiments are designed to make a detailed study of the development of the electromagnetic part of air showers. Therefore, these radio signals provide information that is complementary to that obtained by water-Cherenkov detectors, which are predominantly sensitive to the particle content of an air shower at ground level. The radio signals from air showers are caused by coherent emission due to geomagnetic and charge-excess processes. These emissions can be observed in the frequency band between 10 and 100 MHz. However, this frequency range is significantly contaminated by narrow-band RFI and other human-made distortions. A FIR filter implemented in the FPGA logic segment of the front-end electronics of a radio sensor significantly improves the signal-to-noise ratio. In this paper we discuss an adaptive filter which is based on linear prediction. The coefficients for the linear predictor (LP) are dynamically refreshed and calculated in the embedded NIOS processor, which is implemented in the same FPGA chip. The Levinson recursion, used to obtain the filter coefficients, is also implemented in the NIOS and is partially supported by direct multiplication in the DSP blocks of the FPGA logic segment. Tests confirm that the LP can be an alternative to other methods involving multiple time-to-frequency domain conversions using an FFT procedure. These multiple conversions draw heavily on the FPGA's power budget and are avoided by the linear prediction approach. Minimization of the power consumption is an important issue because the final system will be powered by solar panels.
The FIR filter has been successfully tested in the Altera development kits with the EP4CE115F29C7 from the Cyclone IV family and the EP3C120F780C7 from the Cyclone III family at a 170 MHz sampling rate, a 12-bit I/O resolution, and an internal 30-bit dynamic range. Most of the slow floating-point NIOS calculations have been moved to the FPGA logic segments as extended fixed-point operations, which significantly reduced the refreshing time of the coefficients used in the LP. We conclude that the LP is a viable alternative to other methods such as non-adaptive methods involving digital notch filters or multiple time-to-frequency domain conversions using an FFT procedure.
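The Levinson recursion used to refresh the LP coefficients can be sketched in floating point as follows; this is a generic textbook formulation, not the authors' fixed-point NIOS/FPGA code, and it assumes the autocorrelation sequence r of the sampled trace has already been estimated:

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for linear-prediction
    coefficients a, where x[n] is predicted as sum_j a[j] * x[n-1-j],
    using the Levinson-Durbin recursion."""
    a = np.zeros(order)
    err = float(r[0])                        # prediction error power
    for i in range(order):
        # Reflection coefficient for the order-(i+1) predictor.
        acc = r[i + 1] - np.dot(a[:i], r[i:0:-1])
        k = acc / err
        a_prev = a[:i].copy()
        a[:i] = a_prev - k * a_prev[::-1]    # update lower-order taps
        a[i] = k
        err *= (1.0 - k * k)
    return a, err

# Autocorrelation of an AR(1) process with pole 0.9: the recursion
# recovers the model, coeffs ~ [0.9, 0, 0].
r = 0.9 ** np.arange(4)
coeffs, res = levinson_durbin(r, 3)
```

The O(p²) cost per coefficient refresh, versus O(p³) for a direct solve, is what makes the recursion attractive on an embedded soft-core processor.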
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poirier, M.; Burket, P.
The Savannah River Site (SRS) is currently treating radioactive liquid waste with the Actinide Removal Process (ARP) and the Modular Caustic Side Solvent Extraction Unit (MCU). Recently, the low filter flux through the ARP of approximately 5 gallons per minute has limited the rate at which radioactive liquid waste can be treated. Salt Batch 6 had a lower processing rate and required frequent filter cleaning. Savannah River Remediation (SRR) has a desire to understand the causes of the low filter flux and to increase ARP/MCU throughput. SRR requested SRNL to conduct bench-scale filter tests to evaluate whether sodium oxalate, sodium aluminosilicate, or aluminum solids (i.e., gibbsite and boehmite) could be the cause of excessive fouling of the crossflow or secondary filter at ARP. The authors conducted the tests by preparing slurries containing 6.6 M sodium Salt Batch 6 supernate, 2.5 g MST/L slurry, and varying concentrations of sodium oxalate, sodium aluminosilicate, and aluminum solids; processing the slurry through a bench-scale filter unit that contains a crossflow primary filter and a dead-end secondary filter; and measuring filter flux and transmembrane pressure as a function of time. Among the conclusions drawn from this work are the following: (1) All of the tests showed some evidence of fouling of the secondary filter. This fouling could be from fine particles passing through the crossflow filter. (2) The sodium oxalate-containing feeds behaved differently from the sodium aluminosilicate- and gibbsite/boehmite-containing feeds.
An optimal filter for short photoplethysmogram signals
Liang, Yongbo; Elgendi, Mohamed; Chen, Zhencheng; Ward, Rabab
2018-01-01
A photoplethysmogram (PPG) contains a wealth of cardiovascular system information, and with the development of wearable technology, it has become a basic technique for evaluating cardiovascular health and detecting disease. However, because wearable devices are used in varying environments and are consequently susceptible to varying noise interference, effective processing of PPG signals is challenging. The aim of this study was therefore to determine the optimal filter and filter order for PPG signal processing, using the skewness quality index to make the systolic and diastolic waves more salient in the filtered PPG signal. Nine types of filters with 10 different orders were used to filter 219 short (2.1 s) PPG signals. The signals were divided into three categories by PPG experts according to their noise levels: excellent, acceptable, or unfit. Results show that the Chebyshev II filter improves PPG signal quality more effectively than the other filter types and that its optimal order is the 4th. PMID:29714722
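A sketch of the winning configuration (4th-order Chebyshev II) together with the skewness quality index, using SciPy; the 0.5-8 Hz pass band, 20 dB stop-band attenuation, and 125 Hz sampling rate are illustrative assumptions, not values taken from the study:

```python
import numpy as np
from scipy import signal, stats

def filter_ppg(ppg, fs, order=4, band=(0.5, 8.0), rs=20):
    """Zero-phase band-pass filtering of a raw PPG segment with a
    Chebyshev II design (the filter type/order found optimal here)."""
    sos = signal.cheby2(order, rs, band, btype='bandpass', fs=fs,
                        output='sos')
    return signal.sosfiltfilt(sos, ppg)

def skewness_sqi(ppg):
    """Skewness signal-quality index used to compare filter settings."""
    return stats.skew(ppg)

# Synthetic 2.1 s PPG-like segment at 125 Hz with additive noise.
fs = 125
t = np.arange(0, 2.1, 1.0 / fs)
clean = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 2.4 * t)
noisy = clean + 0.5 * np.random.default_rng(0).normal(size=t.size)
filtered = filter_ppg(noisy, fs)
sqi = skewness_sqi(filtered)
```

Zero-phase filtering (sosfiltfilt) matters here: systolic/diastolic wave timing is diagnostic, so phase distortion must be avoided.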
Complex noise suppression using a sparse representation and 3D filtering of images
NASA Astrophysics Data System (ADS)
Kravchenko, V. F.; Ponomaryov, V. I.; Pustovoit, V. I.; Palacios-Enriquez, A.
2017-08-01
A novel method for the filtering of images corrupted by complex noise composed of randomly distributed impulses and additive Gaussian noise has been substantiated for the first time. The method consists of three main stages: the detection and filtering of pixels corrupted by impulsive noise, subsequent image processing to suppress the additive noise based on 3D filtering and a sparse representation of signals in a basis of wavelets, and a concluding image processing procedure to clean the final image of errors that emerged in the previous stages. A physical interpretation of the filtering method under complex noise conditions is given. A filtering block diagram has been developed in accordance with the novel approach. Simulations of the novel image filtering method have shown an advantage of the proposed filtering scheme in terms of generally recognized criteria, such as the structural similarity index measure and the peak signal-to-noise ratio, and when visually comparing the filtered images.
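A two-stage sketch of this kind of scheme in Python: stage one detects and replaces impulse-corrupted pixels via a local median, while plain Gaussian smoothing stands in for the paper's 3D wavelet-domain stage; thresholds and noise levels are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def two_stage_denoise(img, impulse_thresh=50.0, sigma=1.0):
    """Stage 1: pixels far from their 3x3 median are treated as impulses
    and replaced by that median; other pixels are left untouched.
    Stage 2: Gaussian smoothing (a stand-in for the wavelet-domain
    stage) suppresses the residual additive noise."""
    med = median_filter(img, size=3)
    impulses = np.abs(img - med) > impulse_thresh
    stage1 = np.where(impulses, med, img)
    return gaussian_filter(stage1, sigma)

# Gradient test image corrupted by Gaussian noise plus random impulses.
rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))
noisy = clean + rng.normal(0.0, 5.0, clean.shape)
hits = rng.random(clean.shape) < 0.05
noisy[hits] += rng.choice([-100.0, 100.0], size=int(hits.sum()))
denoised = two_stage_denoise(noisy)
```

Replacing only the detected impulses, rather than median-filtering everything, is the key point the paper makes: it keeps uncorrupted detail intact for the additive-noise stage.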
NASA Astrophysics Data System (ADS)
Korzeniowska, Karolina; Mandlburger, Gottfried; Klimczyk, Agata
2013-04-01
The paper presents an evaluation of different terrain point extraction algorithms for Airborne Laser Scanning (ALS) point clouds. The research area covers eight test sites in the Małopolska Province (Poland) with point densities varying between 3-15 points/m² and with varying surface and land cover characteristics. In this paper the existing implementations of algorithms were considered. Approaches based on mathematical morphology, progressive densification, robust surface interpolation and segmentation were compared. From the group of morphological filters, the Progressive Morphological Filter (PMF) proposed by Zhang K. et al. (2003), as available in LIS software, was evaluated. From the progressive densification filter methods developed by Axelsson P. (2000), Martin Isenburg's implementation in LAStools software (LAStools, 2012) was chosen. The third group of methods are surface-based filters. In this study, we used the hierarchic robust interpolation approach by Kraus K., Pfeifer N. (1998) as implemented in SCOP++ (Trimble, 2012). The fourth group of methods works on segmentation. From this filtering concept, the segmentation algorithm available in LIS was tested (Wichmann V., 2012). The automatic classification for ground extraction was mainly run in default mode or with the default parameters selected by the developers of the algorithms. It was assumed that the default settings were equivalent to the parameters with which the best results can be achieved. Where it was not possible to apply an algorithm in default mode, a combination of the available parameters most crucial for ground extraction was selected. As a result of these analyses, several output LAS files with different ground classifications were obtained. The results were described on the basis of qualitative and quantitative analyses, both presented in a formal description. The classification differences were verified on the point cloud data.
Qualitative verification of ground extraction was made on the basis of a visual inspection of the results (Sithole G., Vosselman G., 2004; Meng X. et al., 2010). The results of these analyses were summarized as a graph using a weighted assessment. The quantitative analyses were evaluated on the basis of Type I, Type II and Total errors (Sithole G., Vosselman G., 2003). The achieved results show that the analysed algorithms yield different classification accuracies depending on the landscape and land cover. The simplest terrain for ground extraction was flat rural area with sparse vegetation. The most difficult were mountainous areas with very dense vegetation, where only a few ground points were available. Generally the LAStools algorithm gives good results in every type of terrain, but the ground surface is too smooth. The LIS Progressive Morphological Filter algorithm gives good results in forested flat and low-slope areas. The surface-based algorithm from SCOP++ gives good results in mountainous areas - both forested and built-up - because it better preserves steep slopes, sharp ridges and breaklines, but sometimes it fails to remove off-terrain objects from the ground class. The segmentation-based algorithm in LIS gives quite good results in built-up flat areas, but in forested areas it does not work well. Bibliography: Axelsson, P., 2000. DEM generation from laser scanner data using adaptive TIN models. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XXXIII (Pt. B4/1), 110-117. Kraus, K., Pfeifer, N., 1998. Determination of terrain models in wooded areas with airborne laser scanner data. ISPRS Journal of Photogrammetry & Remote Sensing 53 (4), 193-203. LAStools website http://www.cs.unc.edu/~isenburg/lastools/ (verified in September 2012). Meng, X., Currit, N., Zhao, K., 2010. Ground Filtering Algorithms for Airborne LiDAR Data: A Review of Critical Issues. Remote Sensing 2, 833-860. Sithole, G., Vosselman, G., 2003.
Report: ISPRS Comparison of Filters. Commission III, Working Group 3. Department of Geodesy, Faculty of Civil Engineering and Geosciences, Delft University of Technology, The Netherlands. Sithole, G., Vosselman, G., 2004. Experimental comparison of filter algorithms for bare-Earth extraction from airborne laser scanning point clouds. ISPRS Journal of Photogrammetry & Remote Sensing 59, 85-101. Trimble, 2012. http://www.trimble.com/geospatial/aerial-software.aspx (verified in November 2012). Wichmann, V., 2012. LIS Command Reference, LASERDATA GmbH, 1-231. Zhang, K., Chen, S.-C., Whitman, D., Shyu, M.-L., Yan, J., Zhang, C., 2003. A progressive morphological filter for removing non-ground measurements from airborne LIDAR data. IEEE Transactions on Geoscience and Remote Sensing, 41(4), 872-882.
Discrimination of Nosiheptide Sources with Plasmonic Filters.
Wang, Delong; Ni, Haibin; Wang, Zhongqiang; Liu, Bing; Chen, Hongyuan; Gu, Zhongze; Zhao, Xiangwei
2017-04-19
Bacteria identification plays a vital role in clinical diagnosis, the food industry, and environmental monitoring, fields in which point-of-care detection methods are in great demand. In this paper, in order to discriminate the source of a nosiheptide product, a plasmonic filter was fabricated to filter, capture, and identify Streptomycete spores with surface-enhanced Raman scattering (SERS). Since the plasmonic filter was derived from a self-assembled photonic crystal coated with silver, the plasmonic "hot spots" on the filter surface were distributed evenly at a fairly high density, and the SERS enhancement factor was 7.49 × 10⁷. With this filter, stain- and PCR-free detection was realized with only 5 μL of sample solution and 5 min, in a "filtration and measure" manner. Compared with the traditional Gram stain method and a silver-plated nylon filter membrane, the plasmonic filter showed good sensitivity and efficiency in discriminating nosiheptide prepared by chemical and biological methods. It is anticipated that this simple SERS detection method with a plasmonic filter has promising potential in food safety, environmental, or clinical applications.
[Examination of patient dose reduction in cardiovascular X-ray systems with a metal filter].
Yasuda, Mitsuyoshi; Kato, Kyouichi; Tanabe, Nobuaki; Sakiyama, Koushi; Uchiyama, Yushi; Suzuki, Yoshiaki; Suzuki, Hiroshi; Nakazawa, Yasuo
2012-01-01
In interventional cardiology X-ray systems with a flat-panel digital detector (FPD), we observed a sudden increase in exposure dose as subject thickness increased. At that time, all of the variable metal filters built into the FPD system were switched off. We therefore examined whether dose reduction was possible, without affecting the clinical image, using a metal filter that we have conventionally employed for dose reduction. A dose reduction of about 45% was achieved when the exposure dose was measured at 30 cm of acrylic thickness with the filter in place. In addition, we measured the signal-to-noise ratio, contrast-to-noise ratio, and resolution limit by visual evaluation, and found no influence of filter usage. In the clinical examination, a physician visually evaluated the image quality of coronary angiography (40 cases) using a 5-point scale. As a result, filter usage did not influence the image quality (p=NS). Thus, the sudden increase in exposure dose was reduced without influencing image quality by adding the filter to the FPD system.
INTERIOR VIEW OF FILTER WHEEL MACHINE USED TO FILTER OUT ...
INTERIOR VIEW OF FILTER WHEEL MACHINE USED TO FILTER OUT AND SEPARATE BICARBONATE FROM AMMONIONATED BRINE. DISCHARGE FROM STRIPPER COLUMNS (SOLVAY COLUMNS). - Solvay Process Company, SA Wetside Building, Between Willis & Milton Avenue, Solvay, Onondaga County, NY
INTERIOR VIEW OF FILTER/DRYERS USED TO FILTER OUT AND SEPARATE ...
INTERIOR VIEW OF FILTER/DRYERS USED TO FILTER OUT AND SEPARATE BICARBONATE FROM AMMONIONATED BRINE. DISCHARGE FROM STRIPPER COLUMNS (SOLVAY COLUMNS). - Solvay Process Company, SA Wetside Building, Between Willis & Milton Avenue, Solvay, Onondaga County, NY
Sutton, J.B.; Torrey, J.V.P.
1958-08-26
A process is described for reconditioning fused alumina filters which have become clogged by the accretion of bismuth phosphate in the filter pores. The method consists in contacting such filters with fuming sulfuric acid and maintaining such contact for a substantial period of time.
NASA Astrophysics Data System (ADS)
Pearson, David
A linear accelerator manufactured by Elekta, equipped with a multileaf collimation (MLC) system, has been modelled using Monte Carlo simulations with the photon flattening filter removed. The purpose of this investigation was to show that more efficient and more accurate Intensity Modulated Radiation Therapy (IMRT) treatments can be delivered from a standard linear accelerator with the flattening filter removed from the beam. A range of simulations of 6 MV and 10 MV photon beams was studied and compared to a model of a standard accelerator that included the flattening filter for those beams. Measurements using a scanning water phantom were also performed after the flattening filter had been removed. We show here that with the flattening filter removed, the dose on the central axis increases by factors of 2.35 and 4.18 for the 6 MV and 10 MV photon beams, respectively, for a standard 10 × 10 cm² field size. A comparison of the dose at points at the field edges showed that removal of the flattening filter reduced the dose at these points by approximately 10% for the 6 MV beam over the clinical range of field sizes. A further consequence of removing the flattening filter was a softening of the photon energy spectrum, leading to a steeper reduction in dose at depths greater than dmax. Also studied was the electron contamination brought about by the removal of the filter. To reduce this electron contamination, and thus the skin dose to the patient, we consider the use of an electron scattering foil in the beam path. The electron scattering foil had very little effect on dmax. From simulations of a standard 6 MV beam, a filter-free beam, and a filter-free beam with an electron scattering foil, we deduce that the proportion of electrons in the photon beam is 0.35%, 0.28%, and 0.27%, respectively.
In short, higher dose rates will result in decreased treatment times and the reduced dose outside of the field is indicative of reducing the dose to the surrounding tissue. Electron contamination was found to be comparable with conventional IMRT treatments carried out with a flattening filter.
Semi-automatic breast ultrasound image segmentation based on mean shift and graph cuts.
Zhou, Zhuhuang; Wu, Weiwei; Wu, Shuicai; Tsui, Po-Hsiang; Lin, Chung-Chih; Zhang, Ling; Wang, Tianfu
2014-10-01
Computerized tumor segmentation on breast ultrasound (BUS) images remains a challenging task. In this paper, we proposed a new method for semi-automatic tumor segmentation on BUS images using Gaussian filtering, histogram equalization, mean shift, and graph cuts. The only interaction required was to select two diagonal points to determine a region of interest (ROI) on an input image. The ROI image was shrunken by a factor of 2 using bicubic interpolation to reduce computation time. The shrunken image was smoothed by a Gaussian filter and then contrast-enhanced by histogram equalization. Next, the enhanced image was filtered by pyramid mean shift to improve homogeneity. The object and background seeds for graph cuts were automatically generated on the filtered image. Using these seeds, the filtered image was then segmented by graph cuts into a binary image containing the object and background. Finally, the binary image was expanded by a factor of 2 using bicubic interpolation, and the expanded image was processed by morphological opening and closing to refine the tumor contour. The method was implemented with OpenCV 2.4.3 and Visual Studio 2010 and tested on 38 BUS images with benign tumors and 31 BUS images with malignant tumors from different ultrasound scanners. Experimental results showed that our method had a true positive (TP) rate of 91.7%, a false positive (FP) rate of 11.9%, and a similarity (SI) rate of 85.6%. The mean run time on an Intel Core 2.66 GHz CPU with 4 GB RAM was 0.49 ± 0.36 s. The experimental results indicate that the proposed method may be useful in BUS image segmentation. © The Author(s) 2014.
Park, Younggeun; Ryu, Byunghoon; Oh, Bo-Ram; Song, Yujing; Liang, Xiaogan; Kurabayashi, Katsuo
2017-06-27
Monitoring of the time-varying immune status of a diseased host often requires rapid and sensitive detection of cytokines. Metallic nanoparticle-based localized surface plasmon resonance (LSPR) biosensors hold promise to meet this clinical need by permitting label-free detection of target biomolecules. These biosensors, however, continue to suffer from relatively low sensitivity as compared to conventional immunoassay methods that involve labeling processes. Their response speeds also need to be further improved to enable rapid cytokine quantification for critical care in a timely manner. In this paper, we report an immunobiosensing device integrating a biotunable nanoplasmonic optical filter and a highly sensitive few-layer molybdenum disulfide (MoS₂) photoconductive component, which can serve as a generic device platform to meet the need of rapid cytokine detection with high sensitivity. The nanoplasmonic filter consists of anticytokine antibody-conjugated gold nanoparticles on a SiO₂ thin layer that is placed 170 μm above a few-layer MoS₂ photoconductive flake device. The principle of the biosensor operation is based on tuning the delivery of incident light to the few-layer MoS₂ photoconductive flake through the nanoplasmonic filter by means of biomolecular surface binding-induced LSPR shifts. The tuning is dependent on the cytokine concentration on the nanoplasmonic filter and is optoelectronically detected by the few-layer MoS₂ device. Using the developed optoelectronic biosensor, we have demonstrated label-free detection of IL-1β, a pro-inflammatory cytokine, with a detection limit as low as 250 fg/mL (14 fM), a large dynamic range of 10⁶, and a short assay time of 10 min. The presented biosensing approach could be further developed and generalized for point-of-care diagnosis, wearable bio/chemical sensing, and environmental monitoring.
Automatic Data Filter Customization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Mandrake, Lukas
2013-01-01
This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as the Fischer discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can be simply filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire data and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds outside of which all data are rejected was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and the number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values to save needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
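A toy version of the genetic threshold filter, shrunk to 3 dimensions with synthetic "proper/improper" labels; the fitness weighting, mutation scale, and population size are illustrative assumptions, not the values used on the JPL cluster:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the 50-D sounding features: 3 dimensions, with
# "improper" runs concentrated at the extremes of dimension 0.
n, dims = 400, 3
X = rng.uniform(0.0, 1.0, (n, dims))
proper = (X[:, 0] > 0.2) & (X[:, 0] < 0.8)

def fitness(genome):
    """Reward rejecting improper runs; penalize rejecting proper ones."""
    lo, hi = genome[:, 0], genome[:, 1]
    kept = np.all((X >= lo) & (X <= hi), axis=1)
    rejected = ~kept
    return np.sum(rejected & ~proper) - 2.0 * np.sum(rejected & proper)

# Population of per-dimension (left, right) threshold pairs, seeded
# around the pass-everything genome (0, 1) in every dimension.
base = np.stack([np.zeros(dims), np.ones(dims)], axis=-1)
pop = np.tile(base, (30, 1, 1)) + rng.normal(0.0, 0.05, (30, dims, 2))
for generation in range(60):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-10:]]            # elitist selection
    children = (elite[rng.integers(0, 10, 20)]
                + rng.normal(0.0, 0.02, (20, dims, 2)))  # mutation
    pop = np.concatenate([elite, children])

best = pop[np.argmax([fitness(g) for g in pop])]
```

Because each genome is just an axis-aligned box, applying the evolved filter to new soundings is a handful of comparisons, which is exactly what makes it a cheap preprocessing gate in front of an expensive retrieval.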
NASA Astrophysics Data System (ADS)
Hardie, Russell C.; Rucci, Michael A.; Dapore, Alexander J.; Karch, Barry K.
2017-07-01
We present a block-matching and Wiener filtering approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the authors. These data provide objective truth and allow for quantitative error analysis. The proposed turbulence mitigation method takes a sequence of short-exposure frames of a static scene and outputs a single restored image. A block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged, and the average image is processed with a Wiener filter to provide deconvolution. An important aspect of the proposed method lies in how we model the degradation point spread function (PSF) for the purposes of Wiener filtering. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the estimated motion vectors. We provide a detailed performance analysis that illustrates how the key tuning parameters impact system performance. The proposed method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.
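The deconvolution stage can be sketched as a frequency-domain Wiener filter; here a fixed Gaussian PSF and a scalar noise-to-signal ratio are illustrative stand-ins for the paper's parametric PSF matched to the achieved level of registration:

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Centered, normalized Gaussian point spread function (PSF)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def wiener_deconvolve(avg_frame, psf, nsr=1e-3):
    """Wiener filter W = H* / (|H|^2 + NSR) applied to the averaged,
    registered frame (circular convolution model)."""
    pad = np.zeros_like(avg_frame)
    ky, kx = psf.shape
    pad[:ky, :kx] = psf
    # Roll the PSF center to the origin for the FFT-based model.
    pad = np.roll(pad, (-(ky // 2), -(kx // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * np.fft.fft2(avg_frame)))

# Blur a test scene with the same PSF, then restore it.
scene = np.zeros((64, 64))
scene[24:40, 24:40] = 1.0
psf = gaussian_psf(9, 2.0)
pad = np.zeros_like(scene)
pad[:9, :9] = psf
pad = np.roll(pad, (-4, -4), axis=(0, 1))
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(pad)))
restored = wiener_deconvolve(blurred, psf)
```

The paper's key refinement is precisely what this sketch omits: choosing the PSF width from how much blur the block-matching registration has already removed, so the Wiener stage neither under- nor over-sharpens.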
Information theoretic methods for image processing algorithm optimization
NASA Astrophysics Data System (ADS)
Prokushkin, Sergey F.; Galil, Erez
2015-01-01
Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable with manual calibration; thus an automated approach is a must. We will discuss an information theory based metric for evaluating algorithm adaptive characteristics (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).
Extracting spatial information from large aperture exposures of diffuse sources
NASA Technical Reports Server (NTRS)
Clarke, J. T.; Moos, H. W.
1981-01-01
The spatial properties of large aperture exposures of diffuse emission can be used both to investigate spatial variations in the emission and to filter out camera noise in exposures of weak emission sources. Spatial imaging can be accomplished both parallel and perpendicular to dispersion with a resolution of 5-6 arc sec, and a narrow median filter running perpendicular to dispersion across a diffuse image selectively filters out point source features, such as reseaux marks and fast particle hits. Spatial information derived from observations of solar system objects is presented.
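The narrow median filter running perpendicular to dispersion can be sketched with SciPy; the frame geometry, noise levels, and 5-pixel window length are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

# Synthetic long-slit exposure: rows = spatial axis (perpendicular to
# dispersion), columns = dispersion axis.
rng = np.random.default_rng(0)
frame = np.full((50, 200), 10.0) + rng.normal(0.0, 0.5, (50, 200))
frame[25, 80] += 100.0   # a fast-particle hit (point-source-like spike)

# Narrow median window: 5 pixels along the spatial axis, 1 along
# dispersion, so smooth diffuse emission is preserved while isolated
# point features (reseaux marks, particle hits) are rejected.
cleaned = median_filter(frame, size=(5, 1))
```

A spike confined to one or two pixels never reaches the window median, so it is removed, while any emission that is extended over the 5-pixel window passes through essentially unchanged.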
Analytically solvable chaotic oscillator based on a first-order filter.
Corron, Ned J; Cooper, Roy M; Blakely, Jonathan N
2016-02-01
A chaotic hybrid dynamical system is introduced and its analytic solution is derived. The system is described as an unstable first order filter subject to occasional switching of a set point according to a feedback rule. The system qualitatively differs from other recently studied solvable chaotic hybrid systems in that the timing of the switching is regulated by an external clock. The chaotic analytic solution is an optimal waveform for communications in noise when a resistor-capacitor-integrate-and-dump filter is used as a receiver. As such, these results provide evidence in support of a recent conjecture that the optimal communication waveform for any stable infinite-impulse response filter is chaotic.
Application of velocity filtering to optical-flow passive ranging
NASA Technical Reports Server (NTRS)
Barniv, Yair
1992-01-01
The performance of the velocity filtering method as applied to optical-flow passive ranging under real-world conditions is evaluated. The theory of the 3-D Fourier transform as applied to constant-speed moving points is reviewed, and the space-domain shift-and-add algorithm is derived from the general 3-D matched filtering formulation. The constant-speed algorithm is then modified to fit the actual speed encountered in the optical flow application, and the passband of that filter is found in terms of depth (sensor/object distance) so as to cover any given range of depths. Two algorithmic solutions for the problems associated with pixel interpolation and object expansion are developed, and experimental results are presented.
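The space-domain shift-and-add algorithm for one candidate velocity can be sketched as below, with integer-pixel shifts as a simplifying assumption (the paper's version must handle sub-pixel interpolation and expansion):

```python
import numpy as np

def shift_and_add(frames, v):
    """Space-domain matched filter for one candidate image-plane
    velocity v = (rows/frame, cols/frame): shift frame t back by v*t
    and average, so a point moving at v adds coherently while the
    noise averages down."""
    acc = np.zeros_like(frames[0], dtype=float)
    for t, f in enumerate(frames):
        acc += np.roll(f, (-v[0] * t, -v[1] * t), axis=(0, 1))
    return acc / len(frames)

# A point moving 1 pixel/frame to the right through noisy frames.
rng = np.random.default_rng(2)
frames = []
for t in range(8):
    f = rng.normal(0.0, 1.0, (32, 32))
    f[16, 4 + t] += 5.0
    frames.append(f)

matched = shift_and_add(frames, (0, 1))      # passband matches the mover
mismatched = shift_and_add(frames, (0, -1))  # wrong velocity
```

Sweeping the candidate velocity over the values implied by a range of depths, and taking the best response per pixel, is how such a filter bank covers a given depth interval in the passive-ranging setting.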
40 CFR 1065.390 - PM balance verifications and weighing process verification.
Code of Federal Regulations, 2011 CFR
2011-07-01
... days before weighing any filter. (2) Zero and span the balance within 12 h before weighing any filter. (3) Verify that the mass determination of reference filters before and after a filter weighing... reference PM sample media (e.g., filters) before and after a weighing session. A weighing session may be as...
40 CFR 1065.390 - PM balance verifications and weighing process verification.
Code of Federal Regulations, 2014 CFR
2014-07-01
... days before weighing any filter. (2) Zero and span the balance within 12 h before weighing any filter. (3) Verify that the mass determination of reference filters before and after a filter weighing... reference PM sample media (e.g., filters) before and after a weighing session. A weighing session may be as...
40 CFR 1065.390 - PM balance verifications and weighing process verification.
Code of Federal Regulations, 2013 CFR
2013-07-01
... days before weighing any filter. (2) Zero and span the balance within 12 h before weighing any filter. (3) Verify that the mass determination of reference filters before and after a filter weighing... reference PM sample media (e.g., filters) before and after a weighing session. A weighing session may be as...
40 CFR 1065.390 - PM balance verifications and weighing process verification.
Code of Federal Regulations, 2012 CFR
2012-07-01
... days before weighing any filter. (2) Zero and span the balance within 12 h before weighing any filter. (3) Verify that the mass determination of reference filters before and after a filter weighing... reference PM sample media (e.g., filters) before and after a weighing session. A weighing session may be as...
40 CFR 1065.390 - PM balance verifications and weighing process verification.
Code of Federal Regulations, 2010 CFR
2010-07-01
... days before weighing any filter. (2) Zero and span the balance within 12 h before weighing any filter. (3) Verify that the mass determination of reference filters before and after a filter weighing... weighing session by weighing reference PM sample media (e.g., filters) before and after a weighing session...
Adaptation to Variance of Stimuli in Drosophila Larva Navigation
NASA Astrophysics Data System (ADS)
Wolk, Jason; Gepner, Ruben; Gershow, Marc
In order to respond to stimuli that vary over orders of magnitude while also sensing very small changes, neural systems must be capable of rapidly adapting to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors, using an infrared-illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output of a linear-nonlinear-Poisson (LNP) process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point process filter, which determines the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
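A minimal sketch of a linear-nonlinear-Poisson turn-rate model of the kind described: the stimulus history is passed through a linear filter, a static nonlinearity converts the filtered drive to an instantaneous turn rate, and turns are drawn as an inhomogeneous Poisson process. The exponential kernel, exponential nonlinearity, and all parameter values are illustrative assumptions, not the study's fitted model.

```python
import numpy as np

def lnp_turn_rate(stimulus, kernel, gain=0.3, base_rate=0.2):
    """Linear-nonlinear stage of an LNP model: convolve the stimulus with a
    linear filter, then map the drive through a static exponential
    nonlinearity (an assumption) to get a nonnegative turn rate per bin."""
    drive = np.convolve(stimulus, kernel, mode="full")[: len(stimulus)]
    return base_rate * np.exp(gain * drive)

# Illustrative usage: white-noise stimulus, exponential filter (assumptions).
rng = np.random.default_rng(0)
stim = rng.standard_normal(1000)
rate = lnp_turn_rate(stim, kernel=np.exp(-np.arange(20) / 5.0))
dt = 0.1                                   # bin width in seconds (assumption)
turns = rng.random(len(rate)) < rate * dt  # Bernoulli thinning of the Poisson process
```

Adapting to stimulus variance would amount to rescaling the gain (or the kernel) as the variance of `stim` changes, which is the quantity the abstract's adaptive point process filter tracks over time.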
Lyu, Weiwei; Cheng, Xianghong
2017-11-28
Transfer alignment is a key technology in strapdown inertial navigation systems (SINS) because of its rapidity and accuracy. In this paper a transfer alignment model is established, comprising the SINS error model and the measurement model. The time delay in the transfer alignment process is analyzed, and an H∞ filtering method with delay compensation is presented. The H∞ filtering theory and the robust mechanism of the H∞ filter are then deduced and analyzed in detail. To improve the transfer alignment accuracy in SINS with time delay, an adaptive H∞ filtering method with delay compensation is proposed. Since the robustness factor plays an important role in the filtering process and affects the filtering accuracy, the adaptive H∞ filter with delay compensation adjusts the value of the robustness factor adaptively according to the dynamic external environment. The vehicle transfer alignment experiment indicates that the adaptive H∞ filtering method with delay compensation dramatically improves both the transfer alignment accuracy and the pure inertial navigation accuracy, demonstrating the superiority of the proposed filtering method.
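The idea of an H∞ filter with an adaptively tuned robustness factor can be sketched for a scalar system using the standard game-theoretic H∞ recursion; the adaptation rule below (nudging θ = 1/γ² according to the normalized innovation) is an illustrative assumption, not the paper's method, and the delay compensation is omitted.

```python
def adaptive_hinf_filter(ys, f=1.0, h=1.0, q=1e-3, v=1e-2,
                         theta0=0.5, alpha=0.1):
    """Sketch of a scalar discrete-time H-infinity filter (the game-theoretic
    form) with an adaptively tuned robustness factor theta = 1/gamma^2.
    System model: x' = f*x + w (variance q), y = h*x + noise (variance v).
    All parameter values and the adaptation rule are illustrative.
    The estimate exists only while 1 - theta*p + h*h*p/v > 0."""
    xhat, p, theta = ys[0] / h, 1.0, theta0
    estimates = []
    for y in ys:
        innov = y - h * xhat
        # Adapt the robustness factor from the normalized innovation
        # (an assumed rule standing in for the paper's adaptive law).
        s = h * h * p + v
        theta = (1 - alpha) * theta + alpha * min(innov * innov / s, 1.0) * theta0
        denom = 1.0 - theta * p + h * h * p / v
        if denom <= 0:                 # existence condition violated: back off
            theta *= 0.5
            denom = 1.0 - theta * p + h * h * p / v
        k = p * h / (v * denom)        # H-infinity gain
        xhat = f * xhat + f * k * innov
        p = f * f * p / denom + q
        estimates.append(xhat)
    return estimates
```

A larger θ makes the filter more conservative against model uncertainty at the cost of noise sensitivity, which is why tuning it to the observed innovation statistics, as the abstract proposes, can outperform a fixed choice.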
Simultaneous Detection and Tracking of Pedestrian from Panoramic Laser Scanning Data
NASA Astrophysics Data System (ADS)
Xiao, Wen; Vallet, Bruno; Schindler, Konrad; Paparoditis, Nicolas
2016-06-01
Pedestrian traffic flow estimation is essential for public place design and construction planning. Traditional data collection by human investigation is tedious, inefficient and expensive. Panoramic laser scanners, e.g. the Velodyne HDL-64E, which scan their surroundings repetitively at high frequency, have been increasingly used for 3D object tracking. In this paper, a simultaneous detection and tracking (SDAT) method is proposed for precise and automatic pedestrian trajectory recovery. First, the dynamic environment is detected using two different methods, Nearest-point and Max-distance. Then, all the points on moving objects are transferred into a space-time (x, y, t) coordinate system. Pedestrian detection and tracking then amounts to assigning the points belonging to pedestrians to continuous trajectories in space-time. We formulate the point assignment task as the minimization of an energy function that incorporates the point evidence, trajectory number, pedestrian shape and motion. A low-energy trajectory explains the point observations well and has a plausible trend and length. The method inherently filters out points from other moving objects and false detections. The energy function is solved by a two-step optimization process: tracklet detection in a short temporal window, and global tracklet association through the whole time span. Results demonstrate that the proposed method can automatically recover pedestrian trajectories with accurate positions and few false detections and mismatches.
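The energy formulation can be sketched as follows, assuming a simple data term (distance from each observed point to its nearest trajectory node) plus motion-smoothness and trajectory-length penalties; the terms and weights are illustrative assumptions, not the paper's exact energy.

```python
import math

def trajectory_energy(traj, points, w_data=1.0, w_smooth=2.0, w_len=1.0):
    """Sketch of a space-time trajectory energy: a candidate pedestrian
    trajectory (a list of (x, y, t) nodes) pays a data cost for point
    observations far from it, a smoothness cost for abrupt motion, and a
    small cost favoring longer, better-supported trajectories.
    A low-energy trajectory explains the points well with plausible motion."""
    data = sum(min(math.dist(p[:2], q[:2]) for q in traj) for p in points)
    smooth = sum(math.dist(traj[i][:2], traj[i + 1][:2]) ** 2
                 for i in range(len(traj) - 1))
    return w_data * data + w_smooth * smooth + w_len / max(len(traj), 1)
```

In this toy form, a straight trajectory passing through the observed points scores lower than a zigzag one over the same points, which is the property the two-step tracklet detection and association pipeline exploits when minimizing the full energy.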
Belavkin filter for mixture of quadrature and photon counting process with some control techniques
NASA Astrophysics Data System (ADS)
Garg, Naman; Parthasarathy, Harish; Upadhyay, D. K.
2018-03-01
The Belavkin filter for the H-P Schrödinger equation is derived when the measurement process consists of a mixture of quantum Brownian motions and conservation/Poisson process. Higher-order powers of the measurement noise differentials appear in the Belavkin dynamics. For simulation, we use a second-order truncation. Control of the Belavkin filtered state by infinitesimal unitary operators is achieved in order to reduce the noise effects in the Belavkin filter equation. This is carried out along the lines of Luc Bouten. Various optimization criteria for control are described like state tracking and Lindblad noise removal.
Particulate generation and control in the PREPP (Process Experimental Pilot Plant) incinerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stermer, D.L.; Gale, L.G.
1989-03-01
Particulate emissions in radioactive incineration systems using wet scrubbing are generally controlled, in the final stage, by passing the process offgas stream through a high-efficiency filter, such as a High-Efficiency Particulate Air (HEPA) filter. Because HEPA filters reduce particulate emissions to more than an order of magnitude below regulatory limits, they are vulnerable to high loading rates. This becomes a serious handicap in radioactive systems when filter change-out is required at an unacceptably high rate. The Process Experimental Pilot Plant (PREPP) incineration system is designed for processing retrieved low-level mixed hazardous waste. It has a wet offgas treatment system consisting of a quencher, Venturi scrubber, entrainment eliminator, mist eliminator, two stages of HEPA filters, and induced-draft fans. During previous tests, it was noted that the offgas filters loaded with particulate at a rate requiring replacement as often as every four hours. During 1988, PREPP conducted a series of tests that included an investigation of the causes of heavy particulate accumulation on the offgas filters in relation to various operating parameters. This was done by measuring the particulate concentrations in the offgas system, primarily as a function of scrub-solution salt concentration, waste feed rate, and offgas flow rate. 2 figs., 9 tabs.
Dahling, Daniel R
2002-01-01
Large-scale virus studies of groundwater systems require practical and sensitive procedures for both sample processing and viral assay. Filter adsorption-elution procedures have traditionally been used to process large-volume water samples for viruses. In this study, five filter elution procedures using cartridge filters were evaluated for their effectiveness in processing samples. Of the five procedures tested, the third method, which incorporated two separate beef extract elutions (one being an overnight filter immersion in beef extract), recovered 95% of seeded poliovirus compared with recoveries of 36 to 70% for the other methods. For viral enumeration, an expanded roller bottle quantal assay was evaluated using seeded poliovirus. This cytopathic-based method was considerably more sensitive than the standard plaque assay method. The roller bottle system was more economical than the plaque assay for the evaluation of comparable samples. Using roller bottles required less time and manipulation than the plaque procedure and greatly facilitated the examination of large numbers of samples. The combination of the improved filter elution procedure and the roller bottle assay for viral analysis makes large-scale virus studies of groundwater systems practical. This procedure was subsequently field tested during a groundwater study in which large-volume samples (exceeding 800 L) were processed through the filters.