An Independent Filter for Gene Set Testing Based on Spectral Enrichment.
Frost, H Robert; Li, Zhigang; Asselbergs, Folkert W; Moore, Jason H
2015-01-01
Gene set testing has become an indispensable tool for the analysis of high-dimensional genomic data. An important motivation for testing gene sets, rather than individual genomic variables, is to improve statistical power by reducing the number of tested hypotheses. Given the dramatic growth in common gene set collections, however, testing is often performed with nearly as many gene sets as underlying genomic variables. To address the challenge to statistical power posed by large gene set collections, we have developed spectral gene set filtering (SGSF), a novel technique for independent filtering of gene set collections prior to gene set testing. The SGSF method uses as a filter statistic the p-value measuring the statistical significance of the association between each gene set and the sample principal components (PCs), taking into account the significance of the associated eigenvalues. Because this filter statistic is independent of standard gene set test statistics under the null hypothesis but dependent under the alternative, the proportion of enriched gene sets is increased without impacting the type I error rate. As shown using simulated and real gene expression data, the SGSF algorithm accurately filters gene sets unrelated to the experimental outcome resulting in significantly increased gene set testing power.
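The independent-filtering idea can be sketched in a few lines (an illustrative simplification, not the SGSF implementation: the filter statistic here is a rank test comparing a gene set's loadings on the top sample PCs against the other genes' loadings, standing in for the paper's eigenvalue-weighted enrichment p-value):

```python
import numpy as np
from scipy import stats

def pc_filter_stat(X, gene_sets, n_pcs=2):
    """Toy SGSF-style filter statistic: for each gene set, the smallest p-value
    of a one-sided rank test that the set's absolute loadings on the top PCs
    exceed those of genes outside the set."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt = gene loadings per PC
    pvals = []
    for genes in gene_sets:
        in_set = np.zeros(X.shape[1], dtype=bool)
        in_set[genes] = True
        p = min(
            stats.mannwhitneyu(np.abs(Vt[k, in_set]), np.abs(Vt[k, ~in_set]),
                               alternative="greater").pvalue
            for k in range(n_pcs)
        )
        pvals.append(p)
    return np.array(pvals)

rng = np.random.default_rng(0)
n_samples, n_genes = 40, 100
X = rng.normal(size=(n_samples, n_genes))
X[:, :10] += 2.0 * rng.normal(size=(n_samples, 1))  # correlated block drives PC1
sets = [list(range(0, 10)), list(range(50, 60))]    # one PC-associated set, one null set
pv = pc_filter_stat(X, sets)
```

Gene sets with large filter p-values would then be dropped before the usual enrichment tests, shrinking the multiple-testing burden without touching the null distribution of the test statistic.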
System and Apparatus for Filtering Particles
NASA Technical Reports Server (NTRS)
Agui, Juan H. (Inventor); Vijayakumar, Rajagopal (Inventor)
2015-01-01
A modular pre-filtration apparatus may be beneficial to extend the life of a filter. The apparatus may include an impactor that can collect a first set of particles in the air, and a scroll filter that can collect a second set of particles in the air. A filter may follow the pre-filtration apparatus, thus causing the life of the filter to be increased.
Dalla Vestra, Michele; Grolla, Elisabetta; Bonanni, Luca; Pesavento, Raffaele
2018-03-01
The use of inferior vena cava (IVC) filters to prevent pulmonary embolism is increasing, largely on the basis of indications that are neither clearly codified nor consistently recommended. The supporting evidence is heterogeneous, based mainly on observational studies and consensus opinions, while the insertion of an IVC filter exposes patients to the risk of complications and increases health care costs. Thus, several proposed indications for IVC filter placement remain controversial. We review the evidence on the efficacy and safety of IVC filters in several "special" clinical settings and assess the robustness of the available evidence for each specific indication to place an IVC filter.
Nordgaard, Håvard B; Vitale, Nicola; Astudillo, Rafael; Renzulli, Attilio; Romundstad, Pål; Haaverstad, Rune
2010-05-01
Transit-time flow measurement is widely accepted as an intra-operative assessment in coronary artery bypass grafting (CABG). However, the two most commonly applied flowmeters, manufactured by MediStim ASA and Transonic Inc., have different default filter settings of 20 and 10 Hz, respectively. This may cause different flow measurements, which will influence the reported results. The aim was to compare pulsatility index (PI) values recorded by the MediStim and Transonic flowmeters in two different clinical settings: (1) analysis of the flow patterns recorded simultaneously by both flowmeters in the same CABGs; and (2) evaluation of flow patterns under different levels of filter settings in the same grafts. Graft flow and PI were measured using the two different flowmeters simultaneously in 19 bypass grafts. Finally, eight grafts were assessed under different digital filter settings at 5, 10, 20, 30, 50 and 100 Hz. The Transonic flowmeter provided substantially lower PI values than the MediStim flowmeter, and increasing the filter setting increased PI considerably. The Transonic flowmeter displayed a lower PI than the MediStim because of its lower filter setting: in the Transonic, flow signals are filtered at a lower level, rendering a 'smoother' pattern of flow curves. Because different filter settings determine different PIs, caution must be taken when flow values and flowmeters are compared. The type of flowmeter should be indicated whenever graft flow measurements and derived indexes are reported.
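The cutoff dependence of PI can be demonstrated on a synthetic flow trace (illustrative signal and numbers only; PI is taken as the usual (max − min)/mean definition, and the 10/20 Hz cutoffs mirror the two devices' defaults):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
# synthetic graft flow: mean flow + heartbeat harmonic + sharper high-frequency detail
flow = 50 + 30 * np.sin(2 * np.pi * 1.5 * t) + 10 * np.sin(2 * np.pi * 15 * t)

def pulsatility_index(q):
    return (q.max() - q.min()) / q.mean()

def low_pass(q, cutoff_hz):
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, q)                   # zero-phase low-pass filtering

pi_10 = pulsatility_index(low_pass(flow, 10))  # Transonic-like default cutoff
pi_20 = pulsatility_index(low_pass(flow, 20))  # MediStim-like default cutoff
# the higher cutoff preserves the sharp flow peaks, yielding a larger PI
```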
Li, Sui-Xian
2018-05-07
Previous research has shown the effectiveness of selecting filter sets from among a large set of commercial broadband filters by a vector analysis method based on maximum linear independence (MLI). However, the traditional MLI approach is suboptimal because the first filter of the selected set must be predefined as the one with the maximum ℓ₂ norm among all available filters. An exhaustive imaging simulation with every single filter serving as the first filter was conducted to investigate the features of the most competent filter set. Besides minimization of the condition number, the geometric features of the best-performing filter set comprise a distinct transmittance peak of the first filter along the wavelength axis, a generally uniform distribution of the filters' peaks, and substantial overlap of the transmittance curves of adjacent filters. Therefore, the best-performing filter sets can be recognized intuitively by simple vector analysis and just a few experimental verifications. A practical two-step framework for selecting the optimal filter set is recommended, which guarantees a significant enhancement of system performance. This work should be useful for optimizing the spectral sensitivity of broadband multispectral imaging sensors.
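An MLI-style selection can be sketched as a greedy procedure (our own illustrative variant: start from the maximum-norm filter, then repeatedly add the filter least explained by the span of those already chosen; the paper's exhaustive simulation and geometric criteria are not reproduced):

```python
import numpy as np

def select_filters(T, k):
    """Greedy maximum-linear-independence selection over a transmittance
    matrix T (wavelengths x filters). Returns the indices of k filters."""
    chosen = [int(np.argmax(np.linalg.norm(T, axis=0)))]  # max l2-norm filter first
    for _ in range(k - 1):
        Q, _ = np.linalg.qr(T[:, chosen])
        residual = T - Q @ (Q.T @ T)          # component orthogonal to the chosen span
        scores = np.linalg.norm(residual, axis=0)
        scores[chosen] = -1.0                 # never re-pick a chosen filter
        chosen.append(int(np.argmax(scores)))
    return chosen

rng = np.random.default_rng(1)
wl = np.linspace(400, 700, 61)                # nm
centers = rng.uniform(420, 680, 40)
# toy bank of Gaussian broadband transmittance curves
T = np.exp(-((wl[:, None] - centers[None, :]) ** 2) / (2 * 40.0 ** 2))
picked = select_filters(T, 6)
cond = np.linalg.cond(T[:, picked])           # small condition number = well-spread set
```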
High-energy mode-locked fiber lasers using multiple transmission filters and a genetic algorithm.
Fu, Xing; Kutz, J Nathan
2013-03-11
We theoretically demonstrate that in a laser cavity mode-locked by nonlinear polarization rotation (NPR) using sets of waveplates and passive polarizer, the energy performance can be significantly increased by incorporating multiple NPR filters. The NPR filters are engineered so as to mitigate the multi-pulsing instability in the laser cavity which is responsible for limiting the single pulse per round trip energy in a myriad of mode-locked cavities. Engineering of the NPR filters for performance is accomplished by implementing a genetic algorithm that is capable of systematically identifying viable and optimal NPR settings in a vast parameter space. Our study shows that five NPR filters can increase the cavity energy by approximately a factor of five, with additional NPRs contributing little or no enhancements beyond this. With the advent and demonstration of electronic controls for waveplates and polarizers, the analysis suggests a general design and engineering principle that can potentially close the order of magnitude energy gap between fiber based mode-locked lasers and their solid state counterparts.
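The genetic-algorithm search over waveplate and polarizer settings can be conveyed with a toy stand-in objective (the "energy" surface below is invented for illustration; it is not the laser cavity model, and the parameter count merely mimics, e.g., five NPR filters with two angles each):

```python
import numpy as np

rng = np.random.default_rng(5)

def energy(angles):
    """Made-up smooth multimodal surface standing in for cavity output energy
    as a function of NPR waveplate/polarizer angles."""
    return np.sum(np.sin(angles) ** 2) - 0.3 * np.sum(np.cos(3 * angles))

def genetic_algorithm(n_params, pop=40, gens=60, sigma=0.2):
    population = rng.uniform(0, np.pi, (pop, n_params))
    for _ in range(gens):
        fitness = np.array([energy(ind) for ind in population])
        parents = population[np.argsort(fitness)[-pop // 2:]]       # keep fittest half
        children = parents[rng.integers(0, len(parents), pop // 2)] \
                   + rng.normal(0, sigma, (pop // 2, n_params))     # mutate copies
        population = np.vstack([parents, children])
    fitness = np.array([energy(ind) for ind in population])
    return population[np.argmax(fitness)], fitness.max()

best, best_e = genetic_algorithm(n_params=10)  # e.g. 5 NPR filters x 2 angles each
```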
Advanced Techniques for Removal of Retrievable Inferior Vena Cava Filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliescu, Bogdan; Haskal, Ziv J., E-mail: ziv2@mac.com
Inferior vena cava (IVC) filters have proven valuable for the prevention of primary or recurrent pulmonary embolism in selected patients with or at high risk for venous thromboembolic disease. Their use has become commonplace, and the numbers implanted increase annually. During the last 3 years, in the United States, the percentage of annually placed optional filters, i.e., filters that can remain as permanent filters or potentially be retrieved, has consistently exceeded that of permanent filters. In parallel, the complications of long- or short-term filtration have become increasingly evident to physicians, regulatory agencies, and the public. Most filter removals are uneventful, with a high degree of success. When routine filter-retrieval techniques prove unsuccessful, progressively more advanced tools and skill sets must be used to enhance filter-retrieval success. These techniques should be used with caution to avoid damage to the filter or the cava during retrieval. This review describes the complex techniques for filter retrieval, including use of additional snares, guidewires, angioplasty balloons, and mechanical and thermal approaches, and illustrates their specific applications.
PERFORMANCE IMPROVEMENT OF CROSS-FLOW FILTRATION FOR HIGH LEVEL WASTE TREATMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duignan, M.; Nash, C.; Poirier, M.
2011-01-12
In the interest of accelerating waste treatment processing, the DOE has funded studies to better understand filtration with the goal of improving filter fluxes in existing cross-flow equipment. The Savannah River National Laboratory (SRNL) was included in those studies, with a focus on start-up techniques, filter cake development, the application of filter aids (cake forming solid precoats), and body feeds (flux enhancing polymers). This paper discusses the progress of those filter studies. Cross-flow filtration is a key process step in many operating and planned waste treatment facilities to separate undissolved solids from supernate slurries. This separation technology generally has the advantage of self-cleaning through the action of wall shear stress created by the flow of waste slurry through the filter tubes. However, the ability of filter wall self-cleaning depends on the slurry being filtered. Many of the alkaline radioactive wastes are extremely challenging to filtration, e.g., those containing compounds of aluminum and iron, which have particles whose size and morphology reduce permeability. Unfortunately, low filter flux can be a bottleneck in waste processing facilities such as the Savannah River Modular Caustic Side Solvent Extraction Unit and the Hanford Waste Treatment Plant. Any improvement to the filtration rate would lead directly to increased throughput of the entire process. To date, increased rates are generally realized either by increasing the cross-flow filter axial flowrate, limited by pump capacity, or by increasing filter surface area, limited by space and increasing the required pump load. SRNL set up both dead-end and cross-flow filter tests to better understand filter performance based on filter media structure, flow conditions, filter cleaning, and several different types of filter aids and body feeds.
Using non-radioactive simulated wastes, both chemically and physically similar to the actual radioactive wastes, the authors performed several tests to demonstrate increases in filter performance. With the proper use of filter flow conditions and filter enhancers, filter flow rates can be increased over currently realized rates.
CT of inferior vena cava filters: normal presentations and potential complications.
Georgiou, Nicholas A; Katz, Douglas S; Ganson, George; Eng, Kaitlin; Hon, Man
2015-12-01
With massive pulmonary embolism (PE) being the first or second leading cause of unexpected death in adults, protection against PE is critical in appropriately selected patients. The use of inferior vena cava (IVC) filters has increased over the years, paralleling the increased detection of deep venous thrombosis (DVT) and PE by improved and more available imaging techniques. The use of IVC filters has become very common as an alternative and/or as a supplement to anticoagulation, and these filters are often seen on routine abdominal CT, including in the emergency setting; therefore, knowledge of the normal spectrum of findings of IVC filters by the radiologist on CT is critical. Additionally, CT can be used specifically to identify complications related to IVC filters, and CT may alternatively demonstrate IVC filter-related problems which are not specifically anticipated clinically. With multiple available IVC filters on the US market, and even more available outside of the USA, it is important for the emergency and the general radiologist to recognize the different models and various appearances and positioning on CT, as well as their potential complications. These complications may be related to venous access, but also include thrombosis related to the filter, filter migration and penetration, and problems associated with filter deployment. With the increasing number of inferior vena cava filters placed and their duration within patients increasing over time, it is critical for emergency and other radiologists to be aware of these findings on CT.
The long-term performance of electrically charged filters in a ventilation system.
Raynor, Peter C; Chae, Soo Jae
2004-07-01
The efficiency and pressure drop of filters made from polyolefin fibers carrying electrical charges were compared with efficiency and pressure drop for filters made from uncharged glass fibers to determine if the efficiency of the charged filters changed with use. Thirty glass fiber filters and 30 polyolefin fiber filters were placed in different, but nearly identical, air-handling units that supplied outside air to a large building. Using two kinds of real-time aerosol counting and sizing instruments, the efficiency of both sets of filters was measured repeatedly for more than 19 weeks while the air-handling units operated almost continuously. Pressure drop was recorded by the ventilation system's computer control. Measurements showed that the efficiency of the glass fiber filters remained almost constant with time. However, the charged polyolefin fiber filters exhibited large efficiency reductions with time before the efficiency began to increase again toward the end of the test. For particles 0.6 μm in diameter, the efficiency of the polyolefin fiber filters declined from 85% to 45% after 11 weeks before recovering to 65% at the end of the test. The pressure drops of the glass fiber filters increased by about 0.40 in. H2O, whereas the pressure drop of the polyolefin fiber filters increased by only 0.28 in. H2O. The results indicate that dust loading reduces the effectiveness of electrical charges on filter fibers.
NASA Technical Reports Server (NTRS)
Mooney, Thomas A.; Smajkiewicz, Ali
1991-01-01
A set of ten interference filters for the UV and VIS spectral region was flown on the surface of the Long Duration Exposure Facility (LDEF) Tray B-8 along with earth radiation budget (ERB) components from the Eppley Laboratory. Transmittance changes and other degradation observed after the return of the filters to Barr are reported. Substrates, coatings, and (where applicable) cement materials are identified. In general, all filters except those containing lead compounds survived well. Metal dielectric filters for the UV developed large numbers of pinholes, which caused an increase in transmittance. Band shapes and spectral positioning, however, did not change.
Track-before-detect labeled multi-Bernoulli particle filter with label switching
NASA Astrophysics Data System (ADS)
Garcia-Fernandez, Angel F.
2016-10-01
This paper presents a multitarget tracking particle filter (PF) for general track-before-detect measurement models. The PF is presented in the random finite set framework and uses a labelled multi-Bernoulli approximation. We also present a label switching improvement algorithm based on Markov chain Monte Carlo that is expected to increase filter performance if targets get in close proximity for a sufficiently long time. The PF is tested in two challenging numerical examples.
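The propagate/weight/resample cycle that the labelled multi-Bernoulli TBD filter generalizes can be conveyed by a minimal bootstrap particle filter for a single 1-D target (a toy model with an invented measurement, not the paper's track-before-detect measurement model):

```python
import numpy as np

rng = np.random.default_rng(2)

n_particles, n_steps = 500, 30
true_x = 0.0
particles = rng.normal(0.0, 5.0, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

for _ in range(n_steps):
    true_x += 1.0 + rng.normal(0, 0.1)                       # target motion
    particles += 1.0 + rng.normal(0, 0.1, n_particles)       # propagate particles
    z = true_x + rng.normal(0, 1.0)                          # noisy observation
    weights *= np.exp(-0.5 * (z - particles) ** 2)           # Gaussian likelihood update
    weights /= weights.sum()
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)

estimate = np.sum(weights * particles)                       # posterior mean estimate
```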
Optical add/drop filter for wavelength division multiplexed systems
Deri, Robert J.; Strand, Oliver T.; Garrett, Henry E.
2002-01-01
An optical add/drop filter for wavelength division multiplexed systems and construction methods are disclosed. The add/drop filter includes a first ferrule having a first pre-formed opening for receiving a first optical fiber; an interference filter oriented to pass a first set of wavelengths along the first optical fiber and reflect a second set of wavelengths; and, a second ferrule having a second pre-formed opening for receiving the second optical fiber, and the reflected second set of wavelengths. A method for constructing the optical add/drop filter consists of the steps of forming a first set of openings in a first ferrule; inserting a first set of optical fibers into the first set of openings; forming a first set of guide pin openings in the first ferrule; dividing the first ferrule into a first ferrule portion and a second ferrule portion; forming an interference filter on the first ferrule portion; inserting guide pins through the first set of guide pin openings in the first ferrule portion and second ferrule portion to passively align the first set of optical fibers; removing material such that light reflected from the interference filter from the first set of optical fibers is accessible; forming a second set of openings in a second ferrule; inserting a second set of optical fibers into the second set of openings; and positioning the second ferrule with respect to the first ferrule such that the second set of optical fibers receive the light reflected from the interference filter.
Diao, Wen-wen; Ni, Dao-feng; Li, Feng-rong; Shang, Ying-ying
2011-03-01
Auditory brainstem responses (ABR) evoked by tone bursts are an important method of hearing assessment in infants referred after hearing screening. The present study compared the thresholds of tone-burst ABR recorded with filter settings of 30 - 1500 Hz and 30 - 3000 Hz at each frequency, characterized the ABR thresholds under the two filter settings and their effect on waveform judgement, and thereby sought the preferable frequency-specific ABR test parameter. Thresholds with filter settings of 30 - 1500 Hz and 30 - 3000 Hz in children aged 2 - 33 months were recorded by click and tone-burst ABR. A total of 18 patients (8 male/10 female), 22 ears, were included. The thresholds of tone-burst ABR with the 30 - 3000 Hz filter setting were higher than those with the 30 - 1500 Hz setting; the difference was significant at 0.5 kHz and 2.0 kHz (t values 2.238 and 2.217, P < 0.05), with no significant difference between the two filter settings at the remaining frequencies. At the same stimulus intensity, the ABR waveform with the 30 - 1500 Hz filter setting was smoother than that with the 30 - 3000 Hz setting, whose response curve showed jagged small interfering waves. The 30 - 1500 Hz filter setting may be the preferable parameter for frequency-specific ABR, improving the accuracy of infants' hearing assessment.
Optimal filter parameters for low SNR seismograms as a function of station and event location
NASA Astrophysics Data System (ADS)
Leach, Richard R.; Dowla, Farid U.; Schultz, Craig A.
1999-06-01
Global seismic monitoring requires deployment of seismic sensors worldwide, in many areas that have not been studied or have few useable recordings. Using events with lower signal-to-noise ratios (SNR) would increase the amount of data from these regions. Lower SNR events can add significant numbers to data sets, but recordings of these events must be carefully filtered. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. To reduce this laborious process, we have developed an automated method to provide optimal filters for low SNR regional or teleseismic events. As seismic signals are often localized in frequency and time with distinct time-frequency characteristics, our method is based on the decomposition of a time series into a set of subsignals, each representing a band with f/Δ f constant (constant Q). The SNR is calculated on the pre-event noise and signal window. The band pass signals with high SNR are used to indicate the cutoff filter limits for the optimized filter. Results indicate a significant improvement in SNR, particularly for low SNR events. The method provides an optimum filter which can be immediately applied to unknown regions. The filtered signals are used to map the seismic frequency response of a region and may provide improvements in travel-time picking, azimuth estimation, regional characterization, and event detection. For example, when an event is detected and a preliminary location is determined, the computer could automatically select optimal filter bands for data from non-reporting stations. Results are shown for a set of low SNR events as well as 379 regional and teleseismic events recorded at stations ABKT, KIV, and ANTO in the Middle East.
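The band-wise SNR selection can be sketched on a synthetic seismogram (illustrative only: octave bands stand in for the constant-Q decomposition, and the SNR per band is taken as the ratio of post-arrival to pre-event spectral power):

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 100.0                                    # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
arrival = t.size // 2                         # sample index of the phase arrival
signal = rng.normal(0, 1.0, t.size)           # background noise
# band-limited arrival: a decaying 5 Hz burst after the arrival time
burst = np.sin(2 * np.pi * 5 * t[arrival:]) * np.exp(-(t[arrival:] - t[arrival]))
signal[arrival:] += 5 * burst

def band_snr(x, lo, hi):
    """Ratio of mean spectral power after vs. before the arrival, in [lo, hi) Hz."""
    f = np.fft.rfftfreq(arrival, 1 / fs)
    band = (f >= lo) & (f < hi)
    pre = np.abs(np.fft.rfft(x[:arrival]))[band] ** 2
    post = np.abs(np.fft.rfft(x[arrival:2 * arrival]))[band] ** 2
    return post.mean() / pre.mean()

bands = [(1, 2), (2, 4), (4, 8), (8, 16), (16, 32)]   # octave-wide bands
snrs = {b: band_snr(signal, *b) for b in bands}
keep = [b for b, s in snrs.items() if s > 2.0]        # bands passing the SNR test
lo_cut, hi_cut = keep[0][0], keep[-1][1]              # optimized band-pass limits
```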
Dalahmeh, Sahar S; Pell, Mikael; Hylander, Lars D; Lalander, Cecilia; Vinnerås, Björn; Jönsson, Håkan
2014-01-01
Greywater flows and concentrations vary greatly; thus, evaluation and prediction of the response of on-site treatment filters to variable loading regimes is challenging. The performance of 0.6 m × 0.2 m (height × diameter) filters of bark, activated charcoal and sand in reduction of biochemical oxygen demand (BOD5), chemical oxygen demand (COD), total nitrogen (Tot-N) and total phosphorus (Tot-P) under variable loading regimes was investigated and modelled. During seven runs, the filters were fed with synthetic greywater at hydraulic loading rates (HLR) of 32-128 L m⁻² day⁻¹ and organic loading rates (OLR) of 13-76 g BOD5 m⁻² day⁻¹. Based on the changes in HLR and OLR, the reduction in pollutants was modelled using multiple linear regression. The models showed that increasing the HLR from 32 to 128 L m⁻² day⁻¹ decreased COD reduction in the bark filters from 74 to 40%, but increased COD reduction in the charcoal and sand filters from 76 to 90% and 65 to 83%, respectively. Moreover, the models showed that increasing the OLR from 13 to 76 g BOD5 m⁻² day⁻¹ enhanced the pollutant reduction in all filters except for Tot-P in the bark filters, which decreased slightly from 81 to 73%. Decreasing the HLR from 128 to 32 L m⁻² day⁻¹ enhanced the pollutant reduction in all filters, but decreasing the OLR from 76 to 14 g BOD5 m⁻² day⁻¹ detached biofilm and decreased the Tot-N and Tot-P reduction in the bark and sand filters. Overall, the bark filters had the capacity to treat high OLR, while the charcoal filters had the capacity to treat high HLR and high OLR. Both bark and charcoal filters had higher capacity than sand filters in dealing with high and variable loads.
Bark seems to be an attractive substitute for sand filters in settings where water is scarce, and its effluent would be valuable for irrigation, while charcoal filters should be an attractive alternative for settings either rich or short in water supply, and when environmental eutrophication has to be considered.
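The multiple-linear-regression modelling step can be sketched with invented numbers (not the study's measurements): percentage reduction is regressed on HLR and OLR by ordinary least squares.

```python
import numpy as np

# toy data: COD reduction (%) in a charcoal-like filter under different loadings
# (illustrative values only, not the study's data)
hlr = np.array([32, 32, 64, 64, 128, 128], dtype=float)    # L m^-2 day^-1
olr = np.array([13, 76, 13, 76, 13, 76], dtype=float)      # g BOD5 m^-2 day^-1
cod_red = np.array([76, 80, 82, 86, 88, 92], dtype=float)  # % COD reduction

# fit reduction ~ b0 + b1*HLR + b2*OLR by least squares
A = np.column_stack([np.ones_like(hlr), hlr, olr])
coef, *_ = np.linalg.lstsq(A, cod_red, rcond=None)
b0, b1, b2 = coef
pred = A @ coef
```

Positive fitted coefficients b1 and b2 would correspond to the charcoal-filter behaviour reported above, where raising either loading rate increased COD reduction.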
In-filter PCDF and PCDD formation at low temperature during MSWI combustion.
Weidemann, Eva; Marklund, Stellan; Bristav, Henrik; Lundin, Lisa
2014-05-01
This case study investigated PCDF and PCDD emissions from a 65 MW waste-to-energy plant to identify why an air pollution control system remodeling to accommodate increased production resulted in increased TEQ concentrations. Pre- and post-filter gases were collected simultaneously in four sample sets with varying filter temperatures and with/without activated carbon injection. Samples were analyzed to determine total PCDF and PCDD concentrations, as well as homologue profiles and concentrations of individual congeners (some remained co-eluted). The total post-filter PCDD concentrations were found to increase while the concentrations of PCDF and 2,3,7,8-substituted congeners declined. An investigation of the individual congener concentrations revealed that the increase in PCDD concentrations was due to a few congeners, suggesting a single formation route. The study also concludes that vital information about the formation could be obtained by not restricting the analysis to just the 2,3,7,8-substituted congeners.
NASA Astrophysics Data System (ADS)
Asmuin, Norzelawati; Pairan, M. Rasidi; Isa, Norasikin Mat; Sies, Farid
2017-04-01
A commercial kitchen hood ventilation system is a device used to capture and filter the plumes from cooking activities in the kitchen area. It is now widely used in industrial settings such as restaurants and hotels to help provide hygienic food. This study focuses on the KSA filter installed in the kitchen hood system; its purpose is to identify the critical region, indicated by the velocity and pressure of the plumes at the KSA filter. Knowing the critical location on the KSA filter is important for installing the nozzle, which helps increase the filtration effectiveness. The ANSYS 16.1 (FLUENT) software was used to simulate the kitchen hood system containing the KSA filter. The commercial kitchen hood system model has dimensions of 700 mm width, 1600 mm length and 555 mm height. The system has two inlets and one outlet. The velocity of the plumes is set to 0.235 m/s and the velocity of the inlet capture jet is set to 1.078 m/s. The KSA filter is placed at 45 degrees from the y axis. The results show that the plumes tend to flow through the bottom part of the KSA filter.
Formaldehyde emissions from ventilation filters under different relative humidity conditions.
Sidheswaran, Meera; Chen, Wenhao; Chang, Agatha; Miller, Robert; Cohn, Sebastian; Sullivan, Douglas; Fisk, William J; Kumagai, Kazukiyo; Destaillats, Hugo
2013-05-21
Formaldehyde emissions from fiberglass and polyester filters used in building heating, ventilation, and air conditioning (HVAC) systems were measured in bench-scale tests using 10 and 17 cm² coupons over 24 to 720 h periods. Experiments were performed at room temperature and four different relative humidity settings (20, 50, 65, and 80% RH). Two different air flow velocities across the filters were explored: 0.013 and 0.5 m/s. Fiberglass filters emitted between 20 and 1000 times more formaldehyde than polyester filters under similar RH and airflow conditions. Emissions increased markedly with increasing humidity, up to 10 mg h⁻¹ m⁻² at 80% RH. Formaldehyde emissions from fiberglass filters coated with tackifiers (impaction oils) were lower than those from uncoated fiberglass media, suggesting that hydrolysis of other polymeric constituents of the filter matrix, such as adhesives or binders, was likely the main formaldehyde source. These laboratory results were further validated by performing a small field study in an unoccupied office. At 80% RH, indoor formaldehyde concentrations increased by 48-64%, from 9-12 μg/m³ to 12-20 μg/m³, when synthetic filters were replaced with fiberglass filtration media in the HVAC units. Better understanding of the reaction mechanisms and assessing their overall contributions to indoor formaldehyde levels will allow for efficient control of this pollution source.
Designing manufacturable filters for a 16-band plenoptic camera using differential evolution
NASA Astrophysics Data System (ADS)
Doster, Timothy; Olson, Colin C.; Fleet, Erin; Yetzbacher, Michael; Kanaev, Andrey; Lebow, Paul; Leathers, Robert
2017-05-01
A 16-band plenoptic camera allows for the rapid exchange of filter sets via a 4x4 filter array on the lens's front aperture. This ability to change out filters allows for an operator to quickly adapt to different locales or threat intelligence. Typically, such a system incorporates a default set of 16 equally spaced flat-topped filters. Knowing the operating theater or the likely targets of interest, it becomes advantageous to tune the filters. We propose using a modified beta distribution to parameterize the different possible filters and differential evolution (DE) to search over the space of possible filter designs. The modified beta distribution allows us to jointly optimize the width, taper and wavelength center of each single- or multi-pass filter in the set over a number of evolutionary steps. Further, by constraining the function parameters we can develop solutions which are not just theoretical but manufacturable. We examine two independent tasks: general spectral sensing and target detection. In the general spectral sensing task we utilize the theory of compressive sensing (CS) and find filters that generate codings which minimize the CS reconstruction error based on a fixed spectral dictionary of endmembers. For the target detection task and a set of known targets, we train the filters to optimize the separation of the background and target signature. We compare our results to the default 16 flat-topped non-overlapping filter set which comes with the plenoptic camera and full hyperspectral resolution data which was previously acquired.
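A hedged sketch of the DE search: the beta-pdf transmittance curves and the condition-number objective below are our own simplifications of the paper's modified beta parameterization and its CS/detection objectives.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import beta

wl = np.linspace(0.0, 1.0, 101)   # normalized wavelength axis

def filter_bank(params, n_filters=4):
    """Each filter's transmittance is a peak-normalized beta pdf; params hold
    shape parameters (a, b) per filter (a stand-in for width/taper/center)."""
    curves = []
    for i in range(n_filters):
        a, b = params[2 * i], params[2 * i + 1]
        pdf = beta.pdf(wl, a, b)
        curves.append(pdf / (pdf.max() + 1e-12))
    return np.array(curves).T      # wavelengths x filters

def objective(params):
    # well-conditioned filter banks give more stable spectral reconstructions
    return np.linalg.cond(filter_bank(params))

bounds = [(1.5, 20.0)] * 8         # (a, b) for each of 4 filters
result = differential_evolution(objective, bounds, seed=4,
                                maxiter=30, polish=False)
T_opt = filter_bank(result.x)
```

In the paper's setting, the objective would instead be the CS reconstruction error over a spectral dictionary or a background/target separation score, with parameter constraints enforcing manufacturability.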
BAMSI: a multi-cloud service for scalable distributed filtering of massive genome data.
Ausmees, Kristiina; John, Aji; Toor, Salman Z; Hellander, Andreas; Nettelblad, Carl
2018-06-26
The advent of next-generation sequencing (NGS) has made whole-genome sequencing of cohorts of individuals a reality. Primary datasets of raw or aligned reads of this sort can get very large. For scientific questions where curated called variants are not sufficient, the sheer size of the datasets makes analysis prohibitively expensive. In order to make re-analysis of such data feasible without the need to have access to a large-scale computing facility, we have developed a highly scalable, storage-agnostic framework, an associated API and an easy-to-use web user interface to execute custom filters on large genomic datasets. We present BAMSI, a Software-as-a-Service (SaaS) solution for filtering of the 1000 Genomes phase 3 set of aligned reads, with the possibility of extension and customization to other sets of files. Unique to our solution is the capability of simultaneously utilizing many different mirrors of the data to increase the speed of the analysis. In particular, if the data is available in private or public clouds - an increasingly common scenario for both academic and commercial cloud providers - our framework allows for seamless deployment of filtering workers close to data. We show results indicating that such a setup improves the horizontal scalability of the system, and present a possible use case of the framework by performing an analysis of structural variation in the 1000 Genomes data set. BAMSI constitutes a framework for efficient filtering of large genomic data sets that is flexible in the use of compute as well as storage resources. The data resulting from the filter is assumed to be greatly reduced in size, and can easily be downloaded or routed into e.g. a Hadoop cluster for subsequent interactive analysis using Hive, Spark or similar tools.
In this respect, our framework also suggests a general model for making very large datasets of high scientific value more accessible by offering the possibility for organizations to share the cost of hosting data on hot storage, without compromising the scalability of downstream analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olszyk, D.M.; Takemoto, B.K.; Poe, M.
1991-01-01
Leaf responses were measured to test a hypothesis that reduced photosynthetic capacity and/or altered water relations were associated with reductions in yield for 'Valencia' orange trees (Citrus sinensis (L.), Osbeck) exposed to ambient oxidant air pollution. Exposures were continuous for 4 years to three levels of oxidants (in charcoal-filtered, half-filtered, and non-filtered air). Oxidants had no effect on net leaf photosynthetic rates or on photosynthetic pigment concentrations. A single set of measurements indicated that oxidants increased leaf starch concentrations (24%) prior to flowering, suggesting a change in photosynthate allocation. Leaves exposed to oxidants had small, but consistent, changes in water relations over the summer growing season, compared to trees growing in filtered air. Other changes included decreased stomatal conductance (12%) and transpiration (9%) rates, and increased water pressure potentials (5%). While all responses were subtle, their cumulative impact over 4 years indicated that 'Valencia' orange trees were subject to increased ambient oxidant stress.
Ayiku, Lynda; Levay, Paul; Hudson, Tom; Craven, Jenny; Barrett, Elizabeth; Finnegan, Amy; Adams, Rachel
2017-07-13
A validated geographic search filter for the retrieval of research about the United Kingdom (UK) from bibliographic databases had not previously been published. To develop and validate a geographic search filter to retrieve research about the UK from OVID medline with high recall and precision. Three gold standard sets of references were generated using the relative recall method. The sets contained references to studies about the UK which had informed National Institute for Health and Care Excellence (NICE) guidance. The first and second sets were used to develop and refine the medline UK filter. The third set was used to validate the filter. Recall, precision and number-needed-to-read (NNR) were calculated using a case study. The validated medline UK filter demonstrated 87.6% relative recall against the third gold standard set. In the case study, the medline UK filter demonstrated 100% recall, 11.4% precision and a NNR of nine. A validated geographic search filter to retrieve research about the UK with high recall and precision has been developed. The medline UK filter can be applied to systematic literature searches in OVID medline for topics with a UK focus. © 2017 Crown copyright. Health Information and Libraries Journal © 2017 Health Libraries Group. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.
NASA Astrophysics Data System (ADS)
Thilker, David
2017-08-01
We request 17 orbits to conduct a pilot study to examine the effectiveness of the WFC3/UVIS F300X filter for studying fundamental problems in star formation in the low density regime. In principle, the broader bandpass and higher throughput of F300X can halve the required observing time relative to F275W, the filter of choice for studying young stellar populations in nearby galaxies. Together with F475W and F600LP, this X filter set may be as effective as standard UVIS broadband filters for characterizing the physical properties of such populations. We will observe 5 low surface brightness targets with a range of properties to test potential issues with F300X: the red tail to 4000 Å and a red leak beyond, ghosts, and the wider bandpass. Masses and ages of massive stars, young star clusters, and clumps derived from photometry from the X filter set will be compared with corresponding measurements from standard filters. Beyond testing, our program will provide the first sample spanning a range of LSB galaxy properties for which HST UV imaging will be obtained, and a glimpse into the ensemble properties of the quanta of star formation in these strange environments. The increased observing efficiency would make more tractable programs which require several tens to hundreds of orbits to aggregate sufficient numbers of massive stars, young star clusters, and clumps to build statistical samples. We are hopeful that our pilot observations will broadly enable high-resolution UV imaging exploration of the low density frontier of star formation while HST is still in good health.
Application of the Trend Filtering Algorithm for Photometric Time Series Data
NASA Astrophysics Data System (ADS)
Gopalan, Giri; Plavchan, Peter; van Eyken, Julian; Ciardi, David; von Braun, Kaspar; Kane, Stephen R.
2016-08-01
Detecting transient light curves (e.g., transiting planets) requires high-precision data, and thus it is important to effectively filter systematic trends affecting ground-based wide-field surveys. We apply an implementation of the Trend Filtering Algorithm (TFA) to the 2MASS calibration catalog and selected Palomar Transient Factory (PTF) photometric time series data. TFA is successful at reducing the overall dispersion of light curves; however, it may over-filter intrinsic variables and increase “instantaneous” dispersion when a template set is not judiciously chosen. In an attempt to rectify these issues we modify the original TFA from the literature by including measurement uncertainties in its computation, including ancillary data correlated with noise, and algorithmically selecting a template set using clustering algorithms as suggested by various authors. This approach may be particularly useful for appropriately accounting for variable photometric precision surveys and/or combined data sets. In summary, our contributions are to provide a MATLAB software implementation of TFA and a number of modifications tested on synthetics and real data, summarize the performance of TFA and various modifications on real ground-based data sets (2MASS and PTF), and assess the efficacy of TFA and modifications using synthetic light curve tests consisting of transiting and sinusoidal variables. While the transiting variables test indicates that these modifications confer no advantage to transit detection, the sinusoidal variables test indicates potential improvements in detection accuracy.
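The core detrending step can be sketched as follows (in Python rather than the authors' MATLAB implementation, and omitting the uncertainty weighting and clustering-based template selection discussed above): subtract from each target light curve the least-squares best fit of a set of template light curves carrying the shared systematics.

```python
# Minimal sketch of the core Trend Filtering Algorithm step: fit and remove
# a least-squares combination of template light curves from a target curve.

def tfa_detrend(target, templates):
    """Subtract the least-squares combination of templates from target.

    Solves the normal equations (T^T T) c = T^T y by Gaussian elimination
    with partial pivoting; fine for the handful of templates used here.
    """
    n_t = len(templates)
    A = [[sum(ti * tj for ti, tj in zip(templates[i], templates[j]))
          for j in range(n_t)] for i in range(n_t)]
    b = [sum(t * y for t, y in zip(templates[i], target)) for i in range(n_t)]
    for col in range(n_t):                      # forward elimination
        piv = max(range(col, n_t), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, n_t):
            f = A[row][col] / A[col][col]
            for k in range(col, n_t):
                A[row][k] -= f * A[col][k]
            b[row] -= f * b[col]
    c = [0.0] * n_t                             # back substitution
    for row in reversed(range(n_t)):
        s = b[row] - sum(A[row][k] * c[k] for k in range(row + 1, n_t))
        c[row] = s / A[row][row]
    fit = [sum(c[j] * templates[j][i] for j in range(n_t))
           for i in range(len(target))]
    return [y - f for y, f in zip(target, fit)]

# A target that is 2x a shared systematic trend plus a constant offset
# (modeled by an all-ones template) detrends to zero.
trend = [0.0, 1.0, 2.0, 1.0, 0.0]
target = [2 * t + 5.0 for t in trend]
residual = tfa_detrend(target, [trend, [1.0] * 5])
```

The over-filtering risk noted in the abstract is visible in this formulation: any intrinsic variability correlated with a template is removed along with the systematics.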
Developing topic-specific search filters for PubMed with click-through data.
Li, J; Lu, Z
2013-01-01
Search filters have been developed and demonstrated for better information access to the immense and ever-growing body of publications in the biomedical domain. However, to date the number of filters remains quite limited because the current filter development methods require significant human efforts in manual document review and filter term selection. In this regard, we aim to investigate automatic methods for generating search filters. We present an automated method to develop topic-specific filters on the basis of users' search logs in PubMed. Specifically, for a given topic, we first detect its relevant user queries and then include their corresponding clicked articles to serve as the topic-relevant document set accordingly. Next, we statistically identify informative terms that best represent the topic-relevant document set using a background set composed of topic irrelevant articles. Lastly, the selected representative terms are combined with Boolean operators and evaluated on benchmark datasets to derive the final filter with the best performance. We applied our method to develop filters for four clinical topics: nephrology, diabetes, pregnancy, and depression. For the nephrology filter, our method obtained performance comparable to the state of the art (sensitivity of 91.3%, specificity of 98.7%, precision of 94.6%, and accuracy of 97.2%). Similarly, high-performing results (over 90% in all measures) were obtained for the other three search filters. Based on PubMed click-through data, we successfully developed a high-performance method for generating topic-specific search filters that is significantly more efficient than existing manual methods. All data sets (topic-relevant and irrelevant document sets) used in this study and a demonstration system are publicly available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/downloads/CQ_filter/
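The evaluation stage described above can be illustrated with a toy OR-combined filter scored against labeled relevant and irrelevant document sets. The terms and documents below are invented; the actual pipeline selects the terms statistically from PubMed click-through logs.

```python
# Toy sketch of evaluating a Boolean (OR-combined) topic filter against
# gold-standard relevant and irrelevant document sets.

def matches(filter_terms, document):
    """OR-combined filter: a document matches if any term occurs in it."""
    text = document.lower()
    return any(term in text for term in filter_terms)

def evaluate(filter_terms, relevant_docs, irrelevant_docs):
    tp = sum(matches(filter_terms, d) for d in relevant_docs)
    fp = sum(matches(filter_terms, d) for d in irrelevant_docs)
    fn = len(relevant_docs) - tp
    tn = len(irrelevant_docs) - fp
    return {
        "sensitivity": tp / (tp + fn),       # recall on relevant docs
        "specificity": tn / (tn + fp),       # rejection of irrelevant docs
        "precision": tp / (tp + fp) if tp + fp else 0.0,
    }

nephro_filter = ["kidney", "renal", "nephro"]   # illustrative terms only
relevant = ["Chronic kidney disease staging", "Renal biopsy outcomes"]
irrelevant = ["Asthma control in children", "Hip fracture repair"]
scores = evaluate(nephro_filter, relevant, irrelevant)
# both relevant documents match, neither irrelevant document does
```

Candidate term sets would be scored this way on the benchmark sets, and the best-performing combination kept as the final filter.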
The influence of filtering and downsampling on the estimation of transfer entropy
Florin, Esther; von Papen, Michael; Timmermann, Lars
2017-01-01
Transfer entropy (TE) provides a generalized and model-free framework to study Wiener-Granger causality between brain regions. Because of its nonparametric character, TE can infer directed information flow also from nonlinear systems. Despite its increasing number of applications in neuroscience, not much is known regarding the influence of common electrophysiological preprocessing on its estimation. We test the influence of filtering and downsampling on a recently proposed nearest-neighbor-based TE estimator. Different filter settings and downsampling factors were tested in a simulation framework using a model with a linear coupling function and two nonlinear models with sigmoid and logistic coupling functions. For nonlinear coupling and progressively lower low-pass filter cut-off frequencies, up to 72% false-negative direct connections and up to 26% false-positive connections were identified. In contrast, for the linear model, a monotonic increase was only observed for missed indirect connections (up to 86%). High-pass filtering (1 Hz, 2 Hz) had no impact on TE estimation. After low-pass filtering, interaction delays were significantly underestimated. Downsampling the data by a factor greater than the assumed interaction delay erased most of the transmitted information and thus led to a very high percentage (67–100%) of false-negative direct connections. Low-pass filtering increases the number of missed connections depending on the filter's cut-off frequency. Downsampling should only be done if the sampling factor is smaller than the smallest assumed interaction delay of the analyzed network. PMID:29149201
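The downsampling effect is visible even in a toy setting. The sketch below is a plug-in TE estimator for binary sequences with history length 1 (the study itself uses a nearest-neighbor estimator on continuous signals), applied to a pair where the target copies the source with a one-step delay; the sequences are illustrative.

```python
# Illustrative plug-in estimator of transfer entropy for binary sequences
# (history length 1), demonstrating the definition
#   TE(X->Y) = sum p(y', y, x) * log2[ p(y'|y, x) / p(y'|y) ].

from collections import Counter
from math import log2

def transfer_entropy(x, y):
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))  # (y_next, y_prev, x_prev)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles_y = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]       # p(y'|y, x)
        p_cond_self = pairs_yy[(y1, y0)] / singles_y[y0]  # p(y'|y)
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te

# y copies x with a one-step delay, so x transfers maximal information to y.
x = [0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0] * 20
y = [0] + x[:-1]
te_full = transfer_entropy(x, y)
# Downsampling by 2 skips past the one-step interaction delay and the
# estimated flow collapses, mirroring the false negatives reported above.
te_down = transfer_entropy(x[::2], y[::2])
```
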
Effect of ECG filter settings on J-waves.
Nakagawa, Mikiko; Tsunemitsu, Chie; Katoh, Sayo; Kamiyama, Yukari; Sano, Nario; Ezaki, Kaori; Miyazaki, Hiroko; Teshima, Yasushi; Yufu, Kunio; Takahashi, Naohiko; Saikawa, Tetsunori
2014-01-01
While J-waves are observed in healthy populations, variations in their reported incidence may be partly explained by the ECG filter setting. We obtained resting 12-lead ECG recordings in 665 consecutive patients and enrolled 112 (56 men, 56 women, mean age 59.3±16.1 years) who manifested J-waves on ECGs acquired with a 150-Hz low-pass filter. We then studied the J-waves on individual ECGs to look for morphological changes when 25-, 35-, 75-, 100-, and 150-Hz filters were used. The notching observed with the 150-Hz filter changed to slurring (42%) or was eliminated (28%) with the 25-Hz filter. Similarly, the slurring seen with the 150-Hz filter was eliminated on 71% of ECGs recorded with the 25-Hz filter. The amplitude of J-waves was significantly lower with 25- and 35-Hz than 75-, 100-, and 150-Hz filters (p<0.0001). The ECG filter setting significantly affects the J-wave morphology. © 2013.
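A toy illustration (a plain moving average, not the clinical recorder's actual filter) of why a low cut-off attenuates J-waves: the slow ST-segment level passes through, while a brief notch-like deflection is averaged away. All waveform values below are invented.

```python
# Moving-average low-pass sketch: a wide window (strong low-pass) shrinks a
# brief J-wave-like notch, while a narrow window (mild low-pass) keeps it.

def moving_average(signal, window):
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

baseline = [0.1] * 200            # flat ST-segment-like level (mV), ~1 kHz
notch = baseline[:]
for i in range(95, 105):          # a 10-sample (~10 ms) notch of 0.2 mV
    notch[i] += 0.2

mild = moving_average(notch, 7)   # short window: high effective cut-off
strong = moving_average(notch, 41)  # long window: low effective cut-off

amp_mild = max(mild) - 0.1        # notch amplitude after mild filtering
amp_strong = max(strong) - 0.1    # notch amplitude after strong filtering
# the stronger low-pass markedly reduces the apparent notch amplitude
```
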
Non-specific filtering of beta-distributed data.
Wang, Xinhui; Laird, Peter W; Hinoue, Toshinori; Groshen, Susan; Siegmund, Kimberly D
2014-06-19
Non-specific feature selection is a dimension reduction procedure performed prior to cluster analysis of high dimensional molecular data. Not all measured features are expected to show biological variation, so only the most varying are selected for analysis. In DNA methylation studies, DNA methylation is measured as a proportion, bounded between 0 and 1, with variance a function of the mean. Filtering on standard deviation biases the selection of probes to those with mean values near 0.5. We explore the effect this has on clustering, and develop alternate filter methods that utilize a variance stabilizing transformation for Beta distributed data and do not share this bias. We compared results for 11 different non-specific filters on eight Infinium HumanMethylation data sets, selected to span a variety of biological conditions. We found that for data sets having a small fraction of samples showing abnormal methylation of a subset of normally unmethylated CpGs, a characteristic of the CpG island methylator phenotype in cancer, a novel filter statistic that utilized a variance-stabilizing transformation for Beta distributed data outperformed the common filter of using standard deviation of the DNA methylation proportion, or its log-transformed M-value, in its ability to detect the cancer subtype in a cluster analysis. However, the standard deviation filter always performed among the best for distinguishing subgroups of normal tissue. The novel filter and standard deviation filter tended to favour features in different genome contexts; for the same data set, the novel filter always selected more features from CpG island promoters and the standard deviation filter always selected more features from non-CpG island intergenic regions. Interestingly, despite selecting largely non-overlapping sets of features, the two filters did find sample subsets that overlapped for some real data sets. 
We found two different filter statistics that tended to prioritize features with different characteristics; each performed well for identifying clusters of cancer and non-cancer tissue and for identifying a cancer CpG island hypermethylation phenotype. Since cluster analysis is for discovery, we would suggest trying both filters on any new data sets, evaluating the overlap of features selected and clusters discovered.
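A minimal sketch of such a filter pair, using the classic arcsine-square-root variance-stabilizing transform for proportions (the paper's exact filter statistic may differ), with invented data chosen so the two filters disagree:

```python
# Non-specific filtering of proportion (beta) data: rank features by the
# standard deviation of raw versus variance-stabilized values.

from math import asin, sqrt

def sd(values):
    m = sum(values) / len(values)
    return sqrt(sum((v - m) ** 2 for v in values) / (len(values) - 1))

def top_features(beta_matrix, k, transform=lambda p: asin(sqrt(p))):
    """Rank features (rows of methylation proportions) by the SD of the
    transformed values and keep the k most variable."""
    ranked = sorted(range(len(beta_matrix)),
                    key=lambda i: sd([transform(p) for p in beta_matrix[i]]),
                    reverse=True)
    return ranked[:k]

# Feature 0 varies near the boundary (mostly unmethylated CpGs with a few
# abnormal samples); feature 1 varies near 0.5. The raw-SD filter favors
# mid-range variation, while the transform boosts boundary variation.
features = [
    [0.01, 0.05, 0.02, 0.10, 0.03, 0.12],   # boundary-heavy variation
    [0.45, 0.55, 0.40, 0.60, 0.50, 0.52],   # mid-range variation
]
raw_pick = top_features(features, 1, transform=lambda p: p)
vst_pick = top_features(features, 1)
```

This is exactly the bias described above: filtering on the raw standard deviation prefers probes with means near 0.5, while the stabilized statistic can surface the boundary-variation pattern characteristic of abnormal methylation.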
Inferior Vena Cava Filtration in the Management of Venous Thromboembolism: Filtering the Data
Molvar, Christopher
2012-01-01
Venous thromboembolism (VTE) is a common cause of morbidity and mortality. This is especially true for hospitalized patients. Pulmonary embolism (PE) is the leading preventable cause of in-hospital mortality. The preferred method of both treatment and prophylaxis for VTE is anticoagulation. However, in a subset of patients, anticoagulation therapy is contraindicated or ineffective, and these patients often receive an inferior vena cava (IVC) filter. The sole purpose of an IVC filter is prevention of clinically significant PE. IVC filter usage has increased every year, most recently due to the availability of retrievable devices and a relaxation of thresholds for placement. Much of this recent growth has occurred in the trauma patient population given the high potential for VTE and frequent contraindication to anticoagulation. Retrievable filters, which strive to offer the benefits of permanent filters without time-sensitive complications, come with a new set of challenges including methods for filter follow-up and retrieval. PMID:23997414
Remaining useful life assessment of lithium-ion batteries in implantable medical devices
NASA Astrophysics Data System (ADS)
Hu, Chao; Ye, Hui; Jain, Gaurav; Schmidt, Craig
2018-01-01
This paper presents a prognostic study on lithium-ion batteries in implantable medical devices, in which a hybrid data-driven/model-based method is employed for remaining useful life assessment. The method is developed on and evaluated against data from two sets of lithium-ion prismatic cells used in implantable applications exhibiting distinct fade performance: 1) eight cells from Medtronic, PLC whose rates of capacity fade appear to be stable and gradually decrease over a 10-year test duration; and 2) eight cells from Manufacturer X whose rates appear to be greater and show sharp increase after some period over a 1.8-year test duration. The hybrid method enables online prediction of remaining useful life for predictive maintenance/control. It consists of two modules: 1) a sparse Bayesian learning module (data-driven) for inferring capacity from charge-related features; and 2) a recursive Bayesian filtering module (model-based) for updating empirical capacity fade models and predicting remaining useful life. A generic particle filter is adopted to implement recursive Bayesian filtering for the cells from the first set, whose capacity fade behavior can be represented by a single fade model; a multiple model particle filter with fixed-lag smoothing is proposed for the cells from the second data set, whose capacity fade behavior switches between multiple fade models.
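The model-based module can be sketched with a generic bootstrap particle filter over a fade-rate parameter. The exponential fade model C(t) = C0·exp(-kt), the constants, and the jitter scheme below are illustrative assumptions, not the study's empirical fade models or its multiple-model variant.

```python
# Minimal particle-filter sketch for remaining-useful-life (RUL) estimation:
# particles carry a fade rate k for an assumed model C(t) = C0 * exp(-k t),
# are reweighted against noisy capacity observations, and the surviving
# ensemble yields an RUL estimate at an end-of-life capacity threshold.

import math
import random

random.seed(0)

C0, SIGMA = 1.0, 0.01          # normalized initial capacity, obs noise
TRUE_K = 0.005                 # hidden fade rate per cycle (illustrative)
EOL_FRACTION = 0.8             # end of life at 80% of initial capacity

particles = [random.uniform(0.0, 0.02) for _ in range(2000)]  # prior on k

for t in range(1, 21):         # 20 noisy capacity measurements
    obs = C0 * math.exp(-TRUE_K * t) + random.gauss(0.0, SIGMA)
    weights = [math.exp(-((obs - C0 * math.exp(-k * t)) ** 2)
                        / (2 * SIGMA ** 2)) for k in particles]
    # Multinomial resampling, then jitter to keep the ensemble diverse.
    particles = random.choices(particles, weights=weights, k=len(particles))
    particles = [max(1e-6, k + random.gauss(0.0, 0.001)) for k in particles]

k_hat = sum(particles) / len(particles)
eol_cycle = math.log(1.0 / EOL_FRACTION) / k_hat   # cycle where C hits EOL
rul = eol_cycle - 20                               # cycles beyond the data
```

In the paper's setup, the capacity "observations" fed to such a filter come from the data-driven sparse Bayesian module rather than direct measurement, and a bank of fade models with fixed-lag smoothing handles the cells whose fade behavior switches regimes.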
A P-band SAR interference filter
NASA Technical Reports Server (NTRS)
Taylor, Victor B.
1992-01-01
The synthetic aperture radar (SAR) interference filter is an adaptive filter designed to reduce the effects of interference while minimizing the introduction of undesirable side effects. The author examines the adaptive spectral filter and the improvement in processed SAR imagery using this filter for Jet Propulsion Laboratory Airborne SAR (JPL AIRSAR) data. The quality of these improvements is determined through several data fidelity criteria, such as point-target impulse response, equivalent number of looks, SNR, and polarization signatures. These parameters are used to characterize two data sets, both before and after filtering. The first data set consists of data with the interference present in the original signal, and the second set consists of clean data which has been coherently injected with interference acquired from another scene.
Spatial filter system as an optical relay line
Hunt, John T.; Renard, Paul A.
1979-01-01
A system consisting of a set of spatial filters that are used to optically relay a laser beam from one position to a downstream position with minimal nonlinear phase distortion and beam intensity variation. The use of the device will result in a reduction of deleterious beam self-focusing and produce a significant increase in neutron yield from the implosion of targets caused by their irradiation with multi-beam glass laser systems.
Taming Big Data: An Information Extraction Strategy for Large Clinical Text Corpora.
Gundlapalli, Adi V; Divita, Guy; Carter, Marjorie E; Redd, Andrew; Samore, Matthew H; Gupta, Kalpana; Trautner, Barbara
2015-01-01
Concepts of interest for clinical and research purposes are not uniformly distributed in clinical text available in electronic medical records. The purpose of our study was to identify filtering techniques to select 'high yield' documents for increased efficacy and throughput. Using two large corpora of clinical text, we demonstrate the identification of 'high yield' document sets in two unrelated domains: homelessness and indwelling urinary catheters. For homelessness, the high yield set includes homeless program and social work notes. For urinary catheters, concepts were more prevalent in notes from hospitalized patients; nursing notes accounted for a majority of the high yield set. This filtering will enable customization and refining of information extraction pipelines to facilitate extraction of relevant concepts for clinical decision support and other uses.
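A hedged sketch of the filtering idea: estimate concept prevalence per note type on a labeled sample, then route only 'high yield' types into the extraction pipeline. The note types, counts, and threshold below are invented for illustration.

```python
# Select 'high yield' document types: those whose prevalence of the target
# concept in a labeled sample meets a threshold.

from collections import defaultdict

def high_yield_types(sample, threshold=0.25):
    """sample: iterable of (note_type, has_concept) pairs, has_concept in
    {0, 1}. Returns the set of note types worth routing to extraction."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for note_type, has_concept in sample:
        totals[note_type] += 1
        hits[note_type] += has_concept
    return {t for t in totals if hits[t] / totals[t] >= threshold}

sample = [("social work", 1), ("social work", 1), ("social work", 0),
          ("nursing", 1), ("nursing", 0), ("nursing", 0), ("nursing", 0),
          ("radiology", 0), ("radiology", 0)]
keep = high_yield_types(sample)
# social work: 2/3; nursing: 1/4 (at threshold); radiology: 0 (dropped)
```
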
Lina, Ioan A; Lauer, Amanda M
2013-04-01
The notched noise method is an effective procedure for measuring frequency resolution and auditory filter shapes in both human and animal models of hearing. Briefly, auditory filter shape and bandwidth estimates are derived from masked thresholds for tones presented in noise containing widening spectral notches. As the spectral notch widens, increasingly less of the noise falls within the auditory filter and the tone becomes more detectable until the notch width exceeds the filter bandwidth. Behavioral procedures have been used for the derivation of notched noise auditory filter shapes in mice; however, the time and effort needed to train and test animals on these tasks places a constraint on the widespread application of this testing method. As an alternative procedure, we combined relatively non-invasive auditory brainstem response (ABR) measurements and the notched noise method to estimate auditory filters in normal-hearing mice at center frequencies of 8, 11.2, and 16 kHz. A complete set of simultaneous masked thresholds for a particular tone frequency were obtained in about an hour. ABR-derived filter bandwidths broadened with increasing frequency, consistent with previous studies. The ABR notched noise procedure provides a fast alternative to estimating frequency selectivity in mice that is well-suited to high-throughput or time-sensitive screening. Copyright © 2013 Elsevier B.V. All rights reserved.
The effect of sampling rate and lowpass filters on saccades - A modeling approach.
Mack, David J; Belfanti, Sandro; Schwarz, Urs
2017-12-01
The study of eye movements has become popular in many fields of science. However, using the preprocessed output of an eye tracker without scrutiny can lead to low-quality or even erroneous data. For example, the sampling rate of the eye tracker influences saccadic peak velocity, while inadequate filters fail to suppress noise or introduce artifacts. Despite previously published guiding values, most filter choices still seem motivated by a trial-and-error approach, and a thorough analysis of filter effects is missing. Therefore, we developed a simple and easy-to-use saccade model that incorporates measured amplitude-velocity main sequences and produces saccades with a similar frequency content to real saccades. We also derived a velocity divergence measure to rate deviations between velocity profiles. In total, we simulated 155 saccades ranging from 0.5° to 60° and subjected them to different sampling rates, noise compositions, and various filter settings. The final goal was to compile a list with the best filter settings for each of these conditions. Replicating previous findings, we observed reduced peak velocities at lower sampling rates. However, this effect was highly non-linear over amplitudes and increasingly stronger for smaller saccades. Interpolating the data to a higher sampling rate significantly reduced this effect. We hope that our model and the velocity divergence measure will be used to provide a quickly accessible ground truth without the need for recording and manually labeling saccades. The comprehensive list of filters allows one to choose the correct filter for analyzing saccade data without resorting to trial-and-error methods.
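The sampling-rate effect can be reproduced with a toy profile (a Gaussian velocity pulse, not the paper's main-sequence-based saccade model): a lower sampling rate rarely lands a sample on the true peak, so peak velocity is underestimated, and more so for small, brief saccades. All constants below are illustrative.

```python
# Toy demonstration: sampling a smooth saccadic velocity profile at a lower
# rate underestimates peak velocity because samples miss the true peak.

import math

def velocity(t, peak=500.0, t_peak=0.020, width=0.002):
    """Idealized small-saccade velocity profile in deg/s; time in seconds."""
    return peak * math.exp(-((t - t_peak) ** 2) / (2 * width ** 2))

def sampled_peak(rate_hz, offset=0.0015):
    """Largest velocity sample over a 40 ms window at rate_hz, with the
    sample grid offset so it need not line up with the true peak."""
    n = int(0.040 * rate_hz)
    return max(velocity(offset + i / rate_hz) for i in range(n + 1))

peak_2000 = sampled_peak(2000.0)   # high-speed research tracker
peak_250 = sampled_peak(250.0)     # common video-based tracker
# The lower rate underestimates the peak; interpolating to a higher rate
# recovers much of the loss, as the modeling study found.
```
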
Major, Kevin J; Poutous, Menelaos K; Ewing, Kenneth J; Dunnill, Kevin F; Sanghera, Jasbinder S; Aggarwal, Ishwar D
2015-09-01
Optical filter-based chemical sensing techniques provide a new avenue to develop low-cost infrared sensors. These methods utilize multiple infrared optical filters to selectively measure different response functions for various chemicals, dependent on each chemical's infrared absorption. Rather than identifying distinct spectral features, which can then be used to determine the identity of a target chemical, optical filter-based approaches rely on measuring differences in the ensemble response between a given filter set and specific chemicals of interest. Therefore, the results of such methods are highly dependent on the original optical filter choice, which will dictate the selectivity, sensitivity, and stability of any filter-based sensing method. Recently, a method has been developed that utilizes unique detection vector operations defined by optical multifilter responses, to discriminate between volatile chemical vapors. This method, comparative-discrimination spectral detection (CDSD), is a technique which employs broadband optical filters to selectively discriminate between chemicals with highly overlapping infrared absorption spectra. CDSD has been shown to correctly distinguish between similar chemicals in the carbon-hydrogen stretch region of the infrared absorption spectra from 2800 to 3100 cm⁻¹. A key challenge to this approach is determining which optical filter sets should be utilized to achieve the greatest discrimination between target chemicals. Previous studies used empirical approaches to select the optical filter set; however, this is insufficient to determine the optimum selectivity between strongly overlapping chemical spectra. Here we present a numerical approach to systematically study the effects of filter positioning and bandwidth on a number of three-chemical systems. We describe how both the filter properties, as well as the chemicals in each set, affect the CDSD results and subsequent discrimination.
These results demonstrate the importance of choosing the proper filter set and chemicals for comparative discrimination, in order to identify the target chemical of interest in the presence of closely matched chemical interferents. These findings are an integral step in the development of experimental prototype sensors, which will utilize CDSD.
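In the spirit of CDSD (the actual comparative-discrimination vector operations are defined in the cited work), the sketch below reduces each spectrum to a vector of broadband filter responses and matches an unknown by cosine similarity; all spectra and passbands are invented.

```python
# Sketch of filter-based discrimination: each chemical's absorption spectrum
# is reduced to a vector of broadband filter responses, and an unknown is
# matched against a library by cosine similarity of those vectors.

from math import sqrt

def filter_response(spectrum, passband):
    """Sum the absorbance that falls inside one filter's passband."""
    lo, hi = passband
    return sum(a for wn, a in spectrum if lo <= wn <= hi)

def response_vector(spectrum, filters):
    return [filter_response(spectrum, f) for f in filters]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Toy C-H stretch spectra as (wavenumber cm^-1, absorbance) pairs.
chem_a = [(2850, 0.8), (2920, 1.0), (2960, 0.3)]
chem_b = [(2870, 0.2), (2950, 0.9), (3050, 0.7)]
filters = [(2800, 2900), (2900, 3000), (3000, 3100)]  # invented passbands

library = {"A": response_vector(chem_a, filters),
           "B": response_vector(chem_b, filters)}
unknown = response_vector([(2855, 0.7), (2915, 1.1), (2965, 0.25)], filters)
best = max(library, key=lambda name: cosine(unknown, library[name]))
```

How well such vectors separate closely matched chemicals depends directly on the passband positions and widths, which is the filter-selection problem the numerical study above addresses.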
Effectiveness of in-room air filtration and dilution ventilation for tuberculosis infection control.
Miller-Leiden, S; Lobascio, C; Nazaroff, W W; Macher, J M
1996-09-01
Tuberculosis (TB) is a public health problem that may pose substantial risks to health care workers and others. TB infection occurs by inhalation of airborne bacteria emitted by persons with active disease. We experimentally evaluated the effectiveness of in-room air filtration systems, specifically portable air filters (PAFs) and ceiling-mounted air filters (CMAFs), in conjunction with dilution ventilation, for controlling TB exposure in high-risk settings. For each experiment, a test aerosol was continuously generated and released into a full-sized room. With the in-room air filter and room ventilation system operating, time-averaged airborne particle concentrations were measured at several points. The effectiveness of in-room air filtration plus ventilation was determined by comparing particle concentrations with and without device operation. The four PAFs and three CMAFs we evaluated reduced room-average particle concentrations, typically by 30% to 90%, relative to a baseline scenario with two air-changes per hour of ventilation (outside air) only. Increasing the rate of air flow recirculating through the filter and/or air flow from the ventilation did not always increase effectiveness. Concentrations were generally higher near the emission source than elsewhere in the room. Both the air flow configuration of the filter and its placement within the room were important, influencing room air flow patterns and the spatial distribution of concentrations. Air filters containing efficient, but non-high efficiency particulate air (HEPA) filter media were as effective as air filters containing HEPA filter media.
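The effectiveness metric has a simple well-mixed-room interpretation: at steady state, recirculating filtration adds to the removal rate provided by dilution ventilation. The numbers below are illustrative, not the study's measurements, and a well-mixed model ignores the spatial effects the experiments highlight.

```python
# Well-mixed-room sketch: steady-state particle concentration with dilution
# ventilation alone versus ventilation plus an in-room air filter.

def steady_concentration(source, volume, ach_vent, ach_filter=0.0,
                         filter_eff=1.0):
    """Steady-state concentration (particles/m^3) in a well-mixed room.

    source: emission rate (particles/h); volume: room volume (m^3);
    ach_vent: outside-air changes per hour; ach_filter: recirculating air
    changes per hour through the filter; filter_eff: single-pass collection
    efficiency of the filter media.
    """
    removal_per_hour = volume * (ach_vent + ach_filter * filter_eff)
    return source / removal_per_hour

ROOM = dict(source=1e6, volume=30.0)
baseline = steady_concentration(ach_vent=2.0, **ROOM)            # 2 ACH only
with_paf = steady_concentration(ach_vent=2.0, ach_filter=6.0,
                                filter_eff=0.9, **ROOM)          # add a PAF
effectiveness = 1.0 - with_paf / baseline
# 1 - 2/(2 + 5.4), about 0.73, inside the 30-90% range reported above
```

The model also shows why efficient non-HEPA media can match HEPA in this application: once single-pass efficiency is high, the recirculation rate, not the last few percent of media efficiency, dominates the removal term.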
SkyMapper Filter Set: Design and Fabrication of Large-Scale Optical Filters
NASA Astrophysics Data System (ADS)
Bessell, Michael; Bloxham, Gabe; Schmidt, Brian; Keller, Stefan; Tisserand, Patrick; Francis, Paul
2011-07-01
The SkyMapper Southern Sky Survey will be conducted from Siding Spring Observatory with u, v, g, r, i, and z filters that comprise glued glass combination filters with dimensions of 309 × 309 × 15 mm. In this article we discuss the rationale for our bandpasses and physical characteristics of the filter set. The u, v, g, and z filters are entirely glass filters, which provide highly uniform bandpasses across the complete filter aperture. The i filter uses glass with a short-wave pass coating, and the r filter is a complete dielectric filter. We describe the process by which the filters were constructed, including the processes used to obtain uniform dielectric coatings and optimized narrowband antireflection coatings, as well as the technique of gluing the large glass pieces together after coating using UV transparent epoxy cement. The measured passbands, including extinction and CCD QE, are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lau, A; Ahmad, S; Chen, Y
Purpose: To quantify the simulated mean absorbed dose per technique (cGy/mAs) from a commercially available microCT scanner using various filtration techniques. Methods: Monte Carlo simulations were performed with the Geant4 toolkit (version 10) using the standard electromagnetic physics model. The Quantum FX microCT scanner (PerkinElmer, Waltham, MA) was modeled incorporating measured energy spectra and the spatial dimensions of nominal source-to-object distances (SODs). The energy distribution was measured using a spectrometer (X-123CdTe, Amptek Inc., Bedford, USA) for the 90 kVp X-ray beams with various filters (no filter; 1 mm, 2 mm, 3 mm, and 4 mm Al; and 0.2 mm Cu + 2.5 mm Al). The SOD was set to 154 mm, 104 mm, and 52 mm. A total of 10 million incident particles were processed per simulation. The cut value was set to 0.1 mm for both photons and electrons. The mean dose absorbed (cGy per incident particle) in a PMMA phantom (length of 2 cm and radius of 3 cm) was recorded. Exposure measurements were taken using a Radcal 9095 system with a protocol of 90 kVp, 200 µA, and ∼12 s beam-on time for the various filters. Results: The mean absorbed dose per mAs for the various filtrations and SOD setups indicated that the dose decreased as the SOD increased and as the amount of filtration increased. For a given SOD, the dose was reduced by as much as ∼13.7% by varying the filter (from no filter to 0.2 mm Cu + 2.5 mm Al). The maximum dose was 0.39 cGy/mAs (SOD of 5.196 cm, no filter), while the minimum was 0.077 cGy/mAs (SOD of 15.4 cm, 0.2 mm Cu + 2.5 mm Al filter). Conclusion: This study provides a straightforward estimate of the mean dose for objects scanned with a microCT scanner under different filtration conditions.
NASA Astrophysics Data System (ADS)
Sanz, Miguel; Ramos, Gonzalo; Moral, Andoni; Pérez, Carlos; Belenguer, Tomás; del Rosario Canchal, María; Zuluaga, Pablo; Rodriguez, Jose Antonio; Santiago, Amaia; Rull, Fernando; Instituto Nacional de Técnica Aeroespacial (INTA); Ingeniería de Sistemas para la Defesa de España S.A. (ISDEFE)
2016-10-01
The Raman Laser Spectrometer (RLS) is one of the Pasteur payload instruments of the ExoMars mission, within the ESA's Aurora Exploration Programme, and will perform Raman spectroscopy on a planetary mission for the first time. RLS is composed of the SPU (Spectrometer Unit), iOH (Internal Optical Head), and ICEU (Instrument Control and Excitation Unit). The iOH focuses the excitation laser on the samples (excitation path) and collects the Raman emission from the sample (collection path, comprising a collimation system and a filtering system). The original design allowed a high level of laser trace to reach the detector; although a certain level of laser trace was required for calibration purposes, this high level degraded the signal-to-noise ratio, confounding some Raman peaks. The investigation revealing that the laser trace was not properly filtered, as well as the resulting iOH opto-mechanical redesign, are reported here. After a study of the long-pass filters' optical density (OD) as a function of the distance from the filtering stage to the detector, it was decided to evaluate a new set of filters (notch filters). Finally, to minimize the laser trace, a new collection path design was required, in which the collimation and filtering stages are separated into two barrels and a different kind of filter is used. The distance between the filters and the first lens of the collimation stage was increased, increasing the OD. With this new design and two notch filters, the laser trace was reduced to acceptable values, as shown by the functional test comparison also reported in this paper.
Filter and Grid Resolution in DG-LES
NASA Astrophysics Data System (ADS)
Miao, Ling; Sammak, Shervin; Madnia, Cyrus K.; Givi, Peyman
2017-11-01
The discontinuous Galerkin (DG) methodology has proven very effective for large eddy simulation (LES) of turbulent flows. Two important parameters in DG-LES are the grid resolution (h) and the filter size (Δ). In most previous work, the filter size is set proportional to the grid spacing. In this work, the DG method is combined with a subgrid-scale (SGS) closure equivalent to that of the filtered density function (FDF). The resulting hybrid scheme is particularly attractive because a larger portion of the resolved energy is captured as the order of the spectral approximation increases. Several LES cases of a three-dimensional temporally developing mixing layer are appraised, and a systematic parametric study is conducted to investigate the effects of grid resolution, filter width, and order of spectral discretization. Comparative assessments are also made against high-resolution direct numerical simulation (DNS) data.
NASA Technical Reports Server (NTRS)
Jamora, Dennis A.
1993-01-01
Ground clutter interference is a major problem for airborne pulse Doppler radar operating at low altitudes in a look-down mode. With Doppler zero set at the aircraft ground speed, ground clutter rejection filtering is typically accomplished using a high-pass filter with real-valued coefficients and a stopband notch centered at zero Doppler. Clutter spectra from the NASA Wind Shear Flight Experiments of 1991-1992 show that the dominant clutter mode can be located away from zero Doppler, particularly at short ranges dominated by sidelobe returns. The use of digital notch filters with complex-valued coefficients, so that the stopband notch can be located at any Doppler frequency, is investigated. Several clutter mode tracking algorithms are considered to estimate the Doppler frequency location of the dominant clutter mode. Examination of night data shows that, when a dominant clutter mode away from zero Doppler is present, complex filtering significantly increases clutter rejection over a notch filter centered at zero Doppler.
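The complex-coefficient trick described above can be sketched generically: take any real-coefficient filter with a null at zero Doppler and heterodyne its taps so the null moves to the estimated clutter frequency. The two-tap first-difference filter below is purely illustrative, not the radar's actual clutter filter:

```python
import cmath
import math

def shift_to_doppler(h_real, f0_hz, fs_hz):
    """Turn a real-coefficient filter whose stopband notch sits at 0 Hz into
    a complex-coefficient filter with the notch moved to f0_hz, by modulating
    each tap with exp(j*2*pi*f0*n/fs)."""
    return [h * cmath.exp(2j * math.pi * f0_hz * n / fs_hz)
            for n, h in enumerate(h_real)]

def freq_response(h, f_hz, fs_hz):
    """FIR frequency response H(f) = sum_n h[n] exp(-j*2*pi*f*n/fs)."""
    return sum(hn * cmath.exp(-2j * math.pi * f_hz * n / fs_hz)
               for n, hn in enumerate(h))

fs = 1000.0
h_real = [1.0, -1.0]                          # first difference: null at 0 Hz
h_cplx = shift_to_doppler(h_real, 120.0, fs)  # move the null to 120 Hz
```

After shifting, the filter rejects the dominant clutter mode at 120 Hz while passing energy near zero Doppler, which a real-coefficient notch at DC cannot do.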
Problems in the use of interference filters for spectrophotometric determination of total ozone
NASA Technical Reports Server (NTRS)
Basher, R. E.; Matthews, W. A.
1977-01-01
An analysis of the use of ultraviolet narrow-band interference filters for total ozone determination is given with reference to the New Zealand filter spectrophotometer under the headings of filter monochromaticity, temperature dependence, orientation dependence, aging, and specification tolerances and nonuniformity. Quantitative details of each problem are given, together with the means used to overcome them in the New Zealand instrument. The tuning of the instrument's filter center wavelengths to a common set of values by tilting the filters is also described, along with a simple calibration method used to adjust and set these center wavelengths.
Lee, Carson O; Boe-Hansen, Rasmus; Musovic, Sanin; Smets, Barth; Albrechtsen, Hans-Jørgen; Binning, Philip
2014-11-01
Biological rapid sand filters are often used to remove ammonium from groundwater for drinking water supply. They often operate under dynamic substrate and hydraulic loading conditions, which can lead to increased levels of ammonium and nitrite in the effluent. To determine the maximum nitrification rates and safe operating windows of rapid sand filters, a pilot-scale rapid sand filter was used to test short-term increased ammonium loads, set by varying either influent ammonium concentrations or hydraulic loading rates. Ammonium and iron (floc) removal were consistent between the pilot and the full-scale filter. Nitrification rates and ammonia-oxidizing bacteria and archaea were quantified throughout the depth of the filter. The ammonium removal capacity of the filter was determined to be 3.4 g NH4-N m⁻³ h⁻¹, which was 5 times greater than the average ammonium loading rate under reference operating conditions. The ammonium removal rate of the filter was determined by the ammonium loading rate, but was independent of the flow and the influent ammonium concentration individually. Ammonia-oxidizing bacteria and archaea were almost equally abundant in the filter. Both ammonium removal and ammonia-oxidizing bacteria density were strongly stratified, with the highest removal and ammonia-oxidizing bacteria densities at the top of the filter. Cell-specific ammonium oxidation rates were on average 0.6 × 10² ± 0.2 × 10² fg NH4-N h⁻¹ cell⁻¹. Our findings indicate that these rapid sand filters can safely remove both nitrite and ammonium over a larger range of loading rates than previously assumed.
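The "5 times greater" safety margin can be illustrated with the volumetric-loading arithmetic. The 3.4 g NH4-N m⁻³ h⁻¹ capacity is the reported value; the flow, influent concentration, and bed volume below are hypothetical numbers chosen only to make the units explicit:

```python
def volumetric_loading_rate(flow_m3_per_h, influent_NH4N_mg_per_L, bed_volume_m3):
    """NH4-N loading per filter bed volume, in g NH4-N m^-3 h^-1.
    mg/L is numerically equal to g/m^3, so no unit conversion factor is needed."""
    return flow_m3_per_h * influent_NH4N_mg_per_L / bed_volume_m3

removal_capacity = 3.4                                      # g NH4-N m^-3 h^-1 (reported)
reference_load = volumetric_loading_rate(10.0, 1.7, 25.0)   # hypothetical inputs -> 0.68
safety_factor = removal_capacity / reference_load           # ~5x, as in the abstract
```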
Computational Architecture of the Granular Layer of Cerebellum-Like Structures.
Bratby, Peter; Sneyd, James; Montgomery, John
2017-02-01
In the adaptive filter model of the cerebellum, the granular layer performs a recoding that expands incoming mossy fibre signals into a temporally diverse set of basis signals. The underlying neural mechanism is not well understood, although various mechanisms have been proposed, including delay lines, spectral timing and echo state networks. Here, we develop a computational simulation based on a network of leaky integrator neurons, together with an adaptive filter performance measure, which allows candidate mechanisms to be compared. We demonstrate that increasing the circuit complexity improves adaptive filter performance, and relate this to evolutionary innovations in the cerebellum and cerebellum-like structures in sharks and electric fish. We show how recurrence enables an increase in basis signal duration, which suggests a possible explanation for the explosion in granule cell numbers in the mammalian cerebellum.
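A minimal sketch of the basis-expansion idea: a bank of leaky integrators with different time constants turns a single mossy-fibre-like input into temporally diverse basis signals. The time constants and step size below are illustrative assumptions, not parameters from the paper:

```python
def leaky_integrator_bank(signal, taus, dt=0.001):
    """Drive leaky integrators (dy/dt = (x - y)/tau, forward Euler) with a
    shared input; different time constants yield temporally diverse bases."""
    bases = []
    for tau in taus:
        a = dt / tau
        y, trace = 0.0, []
        for x in signal:
            y += a * (x - y)   # leak toward the current input
            trace.append(y)
        bases.append(trace)
    return bases

# One brief input event expanded into a fast-decaying and a slow-decaying basis.
impulse = [1.0] + [0.0] * 99
bases = leaky_integrator_bank(impulse, taus=[0.01, 0.1])
```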
Filter for third order phase locked loops
NASA Technical Reports Server (NTRS)
Crow, R. B.; Tausworthe, R. C. (Inventor)
1973-01-01
Filters for third-order phase-locked loops are used in receivers to acquire and track carrier signals, particularly signals subject to high doppler-rate changes in frequency. A loop filter with an open-loop transfer function and set of loop constants, setting the damping factor equal to unity are provided.
Deep neural networks to enable real-time multimessenger astrophysics
NASA Astrophysics Data System (ADS)
George, Daniel; Huerta, E. A.
2018-02-01
Gravitational wave astronomy has set in motion a scientific revolution. To further enhance the science reach of this emergent field of research, there is a pressing need to increase the depth and speed of the algorithms used to enable these ground-breaking discoveries. We introduce Deep Filtering, a new scalable machine learning method for end-to-end time-series signal processing. Deep Filtering is based on deep learning with two deep convolutional neural networks, designed for classification and regression, to detect gravitational wave signals in highly noisy time-series data streams and to estimate the parameters of their sources in real time. Acknowledging that some of the most sensitive algorithms for the detection of gravitational waves are based on implementations of matched filtering, and that a matched filter is the optimal linear filter in Gaussian noise, this foundational article investigates the application of Deep Filtering to whitened signals in Gaussian noise. The results indicate that Deep Filtering outperforms conventional machine learning techniques and achieves performance similar to matched filtering while being several orders of magnitude faster, allowing real-time signal processing with minimal resources. Furthermore, we demonstrate that Deep Filtering can detect and characterize waveform signals emitted from new classes of eccentric or spin-precessing binary black holes, even when trained with data sets of only quasicircular binary black hole waveforms. The results presented in this article, and the recent use of deep neural networks for the identification of optical transients in telescope data, suggest that deep learning can facilitate real-time searches of gravitational wave sources and their electromagnetic and astroparticle counterparts. In the subsequent article, the framework introduced herein is directly applied to identify and characterize gravitational wave events in real LIGO data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacks, Robert; Stormo, Julie; Rose, Coralie
Data have demonstrated that filter media lose tensile strength and the ability to resist the effects of moisture as a function of age. Testing of new and aged filters needs to be conducted to correlate the reduction in physical strength of HEPA media with the ability of filters to withstand upset conditions. Appendix C of the Nuclear Air Cleaning Handbook provides the basis for DOE's HEPA filter service life guidance. However, this appendix also points out the variability of the data, and it does not correlate the performance of aged filters with degradation of media due to age. Funding awarded by NSR&D to initiate full-scale testing of aged HEPA filters addresses the issue of correlating media degradation due to age with testing of new and aged HEPA filters under a generic design basis event set of conditions. This funding has accelerated the process of describing this study via: (1) establishment of a Technical Working Group of all stakeholders, (2) development and approval of a test plan, (3) development of testing and autopsy procedures, (4) acquiring an initial set of aged filters, (5) testing the initial set of aged filters, and (6) developing the filter test report content for each filter tested. This funding was very timely and has moved the project forward by at least three years. Activities have been coordinated with testing conducted under DOE-EM funding for evaluating performance envelopes for AG-1 Section FC Separator and Separatorless filters. This coordination allows correlation of results from the NSR&D Aged Filter Study with results from testing new filters in the Separator and Separatorless Filter Study. DOE-EM efforts have identified approximately 100 more filters of various ages that have been stored under Level B conditions. NSR&D-funded work allows time for rigorous review among subject matter experts before moving forward with development of the testing matrix that will be used for additional filters.
The NSR&D data sets are extremely valuable for establishing a self-improving, NQA-1 program capable of advancing the service lifetime study of HEPA filters. The data and reports are available for careful and critical review by subject matter experts before the next set of filters is tested, and can be found in the appendices of this final report. NSR&D funds have not only initiated the Aged HEPA Filter Study alluded to in Appendix C of the NACH, but have also enhanced the technical integrity and effectiveness of all of the follow-on testing for this long-term study.
Matched spectral filter based on reflection holograms for analyte identification.
Cao, Liangcai; Gu, Claire
2009-12-20
A matched spectral filter set that provides automatic preliminary analyte identification is proposed and analyzed. Each matched spectral filter in the set contains the multiple spectral peaks corresponding to the Raman spectrum of a substance and collects the specified spectrum into the detector simultaneously. The filter set is implemented by multiplexed volume holographic reflection gratings. The fabrication of a matched spectral filter in an Fe:LiNbO3 crystal is demonstrated to match the Raman spectrum of the sample rhodamine 6G (R6G). An interference alignment method is proposed and used in the fabrication to ensure that the multiplexed gratings are in the same direction, with a high angular accuracy of 0.0025 degrees. Diffused recording beams are used to control the bandwidth of the spectral peaks. The reflection spectrum of the filter is characterized using a modified Raman spectrometer and matches that of the sample R6G. A library of such matched spectral filters will facilitate fast detection with higher sensitivity and provide a capability for preliminary molecule identification.
Bandwidth tunable microwave photonic filter based on digital and analog modulation
NASA Astrophysics Data System (ADS)
Zhang, Qi; Zhang, Jie; Li, Qiang; Wang, Yubing; Sun, Xian; Dong, Wei; Zhang, Xindong
2018-05-01
A bandwidth-tunable microwave photonic filter based on digital and analog modulation is proposed and experimentally demonstrated. The digital modulation is used to broaden the effective gain spectrum, and the analog modulation is used to generate the optical lines. By changing the symbol rate of the data pattern, the bandwidth is tunable from 50 MHz to 700 MHz. The interval of the optical lines is set according to the bandwidth of the gain spectrum, which is related to the symbol rate. A several-fold increase in bandwidth is achieved compared to analog modulation alone, and the selectivity of the response is increased by 3.7 dB compared to digital modulation alone.
Fransz, Duncan P; Huurnink, Arnold; de Boode, Vosse A; Kingma, Idsart; van Dieën, Jaap H
2015-01-01
Time to stabilization (TTS) is the time it takes for an individual to return to a baseline or stable state following a jump or hop landing. A large variety of methods exists to calculate the TTS. These methods can be described in terms of four aspects: (1) the input signal used (vertical, anteroposterior, or mediolateral ground reaction force), (2) the signal processing (smoothing by sequential averaging, a moving root-mean-square window, or fitting an unbounded third-order polynomial), (3) the stable state (threshold), and (4) the definition of when the (processed) signal is considered stable. Furthermore, differences exist with regard to the sample rate, filter settings, and trial length. Twenty-five healthy volunteers performed ten 'single leg drop jump landing' trials. For each trial, TTS was calculated according to 18 previously reported methods. Additionally, the effects of sample rate (1000, 500, 200 and 100 samples/s), filter settings (no filter, 40, 15 and 10 Hz), and trial length (20, 14, 10, 7, 5 and 3 s) were assessed. The TTS values varied considerably across the calculation methods. The maximum effects of alterations in the processing settings, averaged over calculation methods, were 2.8% (SD 3.3%) for sample rate, 8.8% (SD 7.7%) for filter settings, and 100.5% (SD 100.9%) for trial length. Different TTS calculation methods are affected differently by sample rate, filter settings and trial length. The effects of differences in sample rate and filter settings are generally small, while trial length has a large effect on TTS values.
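The moving root-mean-square variant mentioned under aspect (2) can be sketched as follows. The window length, threshold, and baseline definition are assumptions for illustration only; the paper compares 18 published variants that differ in exactly these choices:

```python
import math

def time_to_stabilization(force, fs, window_s=0.25, threshold=0.05):
    """TTS sketch: moving RMS of the force about a baseline estimated from the
    second half of the trial; TTS is the first time the RMS drops below
    threshold * baseline and stays there for the remainder of the trial."""
    half = len(force) // 2
    baseline = sum(force[half:]) / (len(force) - half)
    dev = [f - baseline for f in force]
    w = max(1, int(window_s * fs))
    rms = [math.sqrt(sum(x * x for x in dev[i:i + w]) / w)
           for i in range(len(dev) - w + 1)]
    limit = threshold * abs(baseline)
    for i in range(len(rms)):
        if all(r <= limit for r in rms[i:]):
            return i / fs
    return None

# Synthetic landing: 1 s of elevated force settling onto an 800 N baseline.
force = [910.0] * 100 + [800.0] * 400
tts = time_to_stabilization(force, fs=100)
```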
Time-frequency filtering and synthesis from convex projections
NASA Astrophysics Data System (ADS)
White, Langford B.
1990-11-01
This paper describes the application of the theory of projections onto convex sets to time-frequency filtering and synthesis problems. We show that the class of Wigner-Ville Distributions (WVD) of L2 signals form the boundary of a closed convex subset of L2(R2). This result is obtained by considering the convex set of states on the Heisenberg group of which the ambiguity functions form the extreme points. The form of the projection onto the set of WVDs is deduced. Various linear and non-linear filtering operations are incorporated by formulation as convex projections. An example algorithm for simultaneous time-frequency filtering and synthesis is suggested.
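The alternating-projection machinery behind POCS can be illustrated with a minimal sketch in the plane, projecting back and forth between two intersecting convex sets. The toy sets below (a line and a half-plane) stand in for the paper's WVD set and filtering constraints, which live in far larger function spaces:

```python
def project_onto_line(p):
    """Orthogonal projection onto the convex set {(x, y): y = x}."""
    m = (p[0] + p[1]) / 2.0
    return (m, m)

def project_onto_halfplane(p, xmax=0.5):
    """Orthogonal projection onto the convex set {(x, y): x <= xmax}."""
    return (min(p[0], xmax), p[1])

def pocs(p, iters=60):
    """Alternate projections; for intersecting convex sets the iterates
    converge to a point in the intersection."""
    for _ in range(iters):
        p = project_onto_halfplane(project_onto_line(p))
    return p

limit = pocs((3.0, 0.0))  # converges to a point in both sets
```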
Metric for evaluation of filter efficiency in spectral cameras.
Nahavandi, Alireza Mahmoudi; Tehran, Mohammad Amani
2016-11-10
Although metric functions that evaluate the performance of a colorimetric imaging device have been investigated, a metric for performance analysis of a set of filters in wideband filter-based spectral cameras has rarely been studied. Based on a generalization of Vora's Measure of Goodness (MOG) and the spanning theorem, a single-function metric that estimates the effectiveness of a filter set is introduced. The improved metric, named MMOG, varies between one for a perfect set of filters and zero for the worst possible set. Results showed that MMOG exhibits a trend more similar to the mean square of the spectral reflectance reconstruction errors than does Vora's MOG index, and it is robust to noise in the imaging system. MMOG as a single metric could be exploited for further analysis of manufacturing errors.
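Vora's MOG, the starting point for MMOG, measures how well the subspace spanned by the filter set covers a target subspace; for orthonormal bases it is commonly written as trace(P_V P_M)/dim(V), which expands to a sum of squared inner products. A minimal sketch of this underlying index follows (generic bases; the MMOG refinements described in the paper are not reproduced here):

```python
def mog(V_basis, M_basis):
    """Vora's Measure of Goodness for orthonormal bases V (target subspace)
    and M (filter-set subspace): trace(P_V P_M) / dim(V), which equals
    sum_ij <v_i, m_j>^2 / dim(V). 1 means M spans V; 0 means orthogonal."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return sum(dot(v, m) ** 2 for v in V_basis for m in M_basis) / len(V_basis)

e1, e2, e3 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
full = mog([e1, e2], [e1, e2])   # filter set spans the target subspace
half = mog([e1, e2], [e1])       # covers half of the target subspace
none = mog([e1, e2], [e3])       # orthogonal filter set
```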
NASA Astrophysics Data System (ADS)
Pratsenka, S. V.; Voropai, E. S.; Belkin, V. G.
2018-01-01
Rapid measurement of the moisture content of dehydrated residues is a critical problem, the solution of which will increase the efficiency of treatment facilities and optimize the process of applying flocculants. The ability to determine the moisture content of dehydrated residues using a meter operating on the IR reflectance principle was confirmed experimentally. The most suitable interference filters were selected based on an analysis of the obtained diffuse reflectance spectrum of the dehydrated residue in the range 1.0-2.7 μm. Calibration curves were constructed and compared for each filter set. A measuring filter with a transmittance maximum at 1.19 μm and a reference filter with a maximum at 1.3 μm gave the best agreement with the laboratory measurements.
Comparative Study of Different Methods for Soot Sensing and Filter Monitoring in Diesel Exhausts.
Feulner, Markus; Hagen, Gunter; Hottner, Kathrin; Redel, Sabrina; Müller, Andreas; Moos, Ralf
2017-02-18
Due to increasingly tight emission limits for diesel and gasoline engines, especially concerning particulate matter, particulate filters are becoming indispensable devices for exhaust gas aftertreatment. For an efficient engine and filter control strategy and a cost-efficient filter design, reliable technologies to determine the soot load of the filters and to measure particulate matter concentrations in the exhaust gas during vehicle operation are highly needed. In this study, different approaches to soot sensing are compared. Measurements were conducted on a dynamometer diesel engine test bench with a diesel particulate filter (DPF). The DPF was monitored by a relatively new microwave-based approach. Simultaneously, a resistive-type soot sensor and a Pegasor soot sensing device serving as a reference system measured the soot concentration in the exhaust upstream of the DPF. By changing engine parameters, different engine-out soot emission rates were set. It was found that the microwave-based signal may not only indicate the filter loading directly; by taking a time derivative, the engine-out soot emission rate can also be deduced. Furthermore, by integrating the measured particulate mass in the exhaust, the soot load of the filter can be determined. In summary, all systems coincide well within certain boundaries, and the filter itself can act as a soot sensor.
Lawryk, Nicholas J; Feng, H Amy; Chen, Bean T
2009-07-01
Recent advances in field-portable X-ray fluorescence (FP XRF) spectrometer technology have made it a potentially valuable screening tool for the industrial hygienist to estimate worker exposures to airborne metals. Although recent studies have shown that FP XRF technology may be better suited for qualitative or semiquantitative analysis of airborne lead in the workplace, these studies have not extensively addressed its ability to measure other elements. This study involved a laboratory-based evaluation of a representative model FP XRF spectrometer to measure elements commonly encountered in workplace settings that may be collected on air sample filter media, including chromium, copper, iron, manganese, nickel, lead, and zinc. The evaluation included assessments of (1) response intensity with respect to location on the probe window, (2) limits of detection for five different filter media, (3) limits of detection as a function of analysis time, and (4) bias, precision, and accuracy estimates. Teflon, polyvinyl chloride, polypropylene, and mixed cellulose ester filter media all had similarly low limits of detection for the set of elements examined. Limits of detection, bias, and precision generally improved with increasing analysis time. Bias, precision, and accuracy estimates generally improved with increasing element concentration. Accuracy estimates met the National Institute for Occupational Safety and Health criterion for nearly all the element and concentration combinations. Based on these results, FP XRF spectrometry shows potential to be useful in the assessment of worker inhalation exposures to other metals in addition to lead.
Prevalence and clinical implications of improper filter settings in routine electrocardiography.
Kligfield, Paul; Okin, Peter M
2007-03-01
High- and low-frequency filter bandwidth governs the fidelity of electrocardiographic waveforms, including the durations used in established criteria for infarction, the amplitudes used for the diagnosis of ventricular hypertrophy, and the accuracy of the magnitudes of ST-segment elevation and depression. Electrocardiographs allow users to reset high- and low-filter settings for special electrocardiographic applications, but these may be used inappropriately. To examine the prevalence of standard and nonstandard electrocardiographic filtering at 1 general medical community, 256 consecutive outpatient electrocardiograms (ECGs) submitted in advance of ambulatory or same-day admission surgery during a 3-week period were examined. ECGs were considered to meet standards for low-frequency cutoff when equal to 0.05 Hz and for high-frequency cutoff when equal to 100 Hz, according to American Heart Association recommendations established in 1975. Only 25% of ECGs (65 of 256) conformed to recommended standards; 75% (191 of 256) did not. The most prevalent deviation from standard was a reduced high-frequency cutoff, present in 96% of tracings with nonstandard bandwidth (most commonly 40 Hz). An increased low-frequency cutoff was present in 62% of ECGs in which it was documented. In conclusion, improper electrocardiographic filtering, with potentially adverse clinical consequences, is highly prevalent at 1 large general medical community and is likely a generalized problem. It should be resolvable by targeted educational efforts to reinforce technical standards in electrocardiography.
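The recommended bandwidth (0.05 Hz low-frequency and 100 Hz high-frequency cutoffs) can be sketched as a cascade of one-pole filters. This is a simplified illustration of what the two cutoffs do, not a clinical-grade ECG filter, which must also control phase distortion:

```python
import math

def one_pole_lowpass(x, fc, fs):
    """y[n] = (1-a) x[n] + a y[n-1], a = exp(-2*pi*fc/fs): high-frequency cutoff."""
    a = math.exp(-2.0 * math.pi * fc / fs)
    y, out = 0.0, []
    for xn in x:
        y = (1.0 - a) * xn + a * y
        out.append(y)
    return out

def one_pole_highpass(x, fc, fs):
    """y[n] = a (y[n-1] + x[n] - x[n-1]): removes DC offset and baseline wander."""
    a = math.exp(-2.0 * math.pi * fc / fs)
    y, prev, out = 0.0, 0.0, []
    for xn in x:
        y = a * (y + xn - prev)
        prev = xn
        out.append(y)
    return out

def ecg_bandpass(x, fs, lo=0.05, hi=100.0):
    """AHA-style bandwidth: high-pass at 0.05 Hz, low-pass at 100 Hz."""
    return one_pole_lowpass(one_pole_highpass(x, lo, fs), hi, fs)

fs = 500.0
dc = [1.0] * 20000            # 40 s of constant offset (pure baseline)
out = ecg_bandpass(dc, fs)    # the 0.05 Hz high-pass removes the offset
```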
NASA Astrophysics Data System (ADS)
Pak, A.; Correa, J.; Adams, M.; Clark, D.; Delande, E.; Houssineau, J.; Franco, J.; Frueh, C.
2016-09-01
Recently, the growing number of inactive Resident Space Objects (RSOs), or space debris, has provoked increased interest in the field of Space Situational Awareness (SSA) and various investigations of new methods for orbital object tracking. In comparison with conventional tracking scenarios, state estimation of an orbiting object entails additional challenges, such as orbit determination and orbital state and covariance propagation in the presence of highly nonlinear system dynamics. The sensors which are available for detecting and tracking space debris are prone to multiple clutter measurements. Added to this problem, is the fact that it is unknown whether or not a space debris type target is present within such sensor measurements. Under these circumstances, traditional single-target filtering solutions such as Kalman Filters fail to produce useful trajectory estimates. The recent Random Finite Set (RFS) based Finite Set Statistical (FISST) framework has yielded filters which are more appropriate for such situations. The RFS based Joint Target Detection and Tracking (JoTT) filter, also known as the Bernoulli filter, is a single target, multiple measurements filter capable of dealing with cluttered and time-varying backgrounds as well as modeling target appearance and disappearance in the scene. Therefore, this paper presents the application of the Gaussian mixture-based JoTT filter for processing measurements from Chilbolton Advanced Meteorological Radar (CAMRa) which contain both defunct and operational satellites. The CAMRa is a fully-steerable radar located in southern England, which was recently modified to be used as a tracking asset in the European Space Agency SSA program. The experiments conducted show promising results regarding the capability of such filters in processing cluttered radar data. The work carried out in this paper was funded by the USAF Grant No. 
FA9550-15-1-0069, Chilean Conicyt Fondecyt grant number 1150930, an EU Erasmus Mundus MSc Scholarship, and the Defence Science and Technology Laboratory (DSTL), U.K.
Yusa, Akiko; Toneri, Makoto; Masuda, Taisuke; Ito, Seiji; Yamamoto, Shuhei; Okochi, Mina; Kondo, Naoto; Iwata, Hiroji; Yatabe, Yasushi; Ichinosawa, Yoshiyuki; Kinuta, Seichin; Kondo, Eisaku; Honda, Hiroyuki; Arai, Fumihito; Nakanishi, Hayao
2014-01-01
Circulating tumor cells (CTCs) in the blood of patients with epithelial malignancies provide a promising and minimally invasive source for early detection of metastasis, monitoring of therapeutic effects and basic research addressing the mechanism of metastasis. In this study, we developed a new filtration-based, sensitive CTC isolation device. This device consists of a 3-dimensional (3D) palladium (Pd) filter with an 8 µm-sized pore in the lower layer and a 30 µm-sized pocket in the upper layer to trap CTCs on a filter micro-fabricated by precise lithography plus electroforming process. This is a simple pump-less device driven by gravity flow and can enrich CTCs from whole blood within 20 min. After on-device staining of CTCs for 30 min, the filter cassette was removed from the device, fixed in a cassette holder and set up on the upright fluorescence microscope. Enumeration and isolation of CTCs for subsequent genetic analysis from the beginning were completed within 1.5 hr and 2 hr, respectively. Cell spike experiments demonstrated that the recovery rate of tumor cells from blood by this Pd filter device was more than 85%. Single living tumor cells were efficiently isolated from these spiked tumor cells by a micromanipulator, and KRAS mutation, HER2 gene amplification and overexpression, for example, were successfully detected from such isolated single tumor cells. Sequential analysis of blood from mice bearing metastasis revealed that CTC increased with progression of metastasis. Furthermore, a significant increase in the number of CTCs from the blood of patients with metastatic breast cancer was observed compared with patients without metastasis and healthy volunteers. These results suggest that this new 3D Pd filter-based device would be a useful tool for the rapid, cost effective and sensitive detection, enumeration, isolation and genetic analysis of CTCs from peripheral blood in both preclinical and clinical settings. PMID:24523941
Detection of Bacillus spores using PCR and FTA filters.
Lampel, Keith A; Dyer, Deanne; Kornegay, Leroy; Orlandi, Palmer A
2004-05-01
Emphasis has been placed on developing and implementing rapid detection systems for microbial pathogens. We have explored the utility of expanding FTA filter technology for the preparation of template DNA for PCR from bacterial spores. Isolated spores from several Bacillus spp., B. subtilis, B. cereus, and B. megaterium, were applied to FTA filters, and specific DNA products were amplified by PCR. Spore preparations were examined microscopically to ensure that the presence of vegetative cells, if any, did not yield misleading results. PCR primers SRM86 and SRM87 targeted a conserved region of bacterial rRNA genes, whereas primers Bsub5F and Bsub3R amplified a product from a conserved sequence of the B. subtilis rRNA gene. With the use of the latter set of primers for nested PCR, the sensitivity of the PCR-based assay was increased. Overall, 53 spores could be detected after the first round of PCR, and the sensitivity was increased to five spores by nested PCR. FTA filters are an excellent platform to remove PCR inhibitors and have universal applications for environmental, clinical, and food samples.
Folmsbee, Martha
2015-01-01
Approximately 97% of filter validation tests result in the demonstration of absolute retention of the test bacteria, and thus sterile filter validation failure is rare. However, while Brevundimonas diminuta (B. diminuta) penetration of sterilizing-grade filters is rarely detected, the observation that some fluids (such as vaccines and liposomal fluids) may lead to an increased incidence of bacterial penetration of sterilizing-grade filters by B. diminuta has been reported. The goal of the following analysis was to identify important drivers of filter validation failure in these rare cases. The identification of these drivers will hopefully serve the purpose of assisting in the design of commercial sterile filtration processes with a low risk of filter validation failure for vaccine, liposomal, and related fluids. Filter validation data for low-surface-tension fluids was collected and evaluated with regard to the effect of bacterial load (CFU/cm(2)), bacterial load rate (CFU/min/cm(2)), volume throughput (mL/cm(2)), and maximum filter flux (mL/min/cm(2)) on bacterial penetration. The data set (∼1162 individual filtrations) included all instances of process-specific filter validation failures performed at Pall Corporation, including those using other filter media, but did not include all successful retentive filter validation bacterial challenges. It was neither practical nor necessary to include all filter validation successes worldwide (Pall Corporation) to achieve the goals of this analysis. The percentage of failed filtration events for the selected total master data set was 27% (310/1162). Because it is heavily weighted with penetration events, this percentage is considerably higher than the actual rate of failed filter validations, but, as such, facilitated a close examination of the conditions that lead to filter validation failure. 
In agreement with our previous reports, two of the significant drivers of bacterial penetration identified were the total bacterial load and the bacterial load rate. In addition to these parameters, three further possible drivers of failure were identified: volume throughput, maximum filter flux, and pressure. Of the data for which volume throughput information was available, 24% (249/1038) of the filtrations resulted in penetration. However, for the volume throughput range of 680-2260 mL/cm², only 9 out of 205 bacterial challenges (∼4%) resulted in penetration. Of the data for which flux information was available, 22% (212/946) resulted in bacterial penetration. However, in the maximum filter flux range of 7 to 18 mL/min/cm², only one out of 121 filtrations (0.6%) resulted in penetration. A slight increase in filter failure was observed in bacterial challenges with a differential pressure greater than 30 psid. When designing a commercial process for the sterile filtration of a low-surface-tension fluid (or any other potentially high-risk fluid), targeting the volume throughput range of 680-2260 mL/cm² or the flux range of 7-18 mL/min/cm², and maintaining the differential pressure below 30 psid, could significantly decrease the risk of filter validation failure. It is important to keep in mind, however, that these are general trends, and some test fluids may not conform to them. Ultimately, it is important to evaluate both filterability and bacterial retention of the test fluid under proposed process conditions prior to finalizing the manufacturing process, to ensure successful process-specific filter validation of low-surface-tension fluids. An overwhelming majority of process-specific filter validation (qualification) tests result in the demonstration of absolute retention of test bacteria by sterilizing-grade membrane filters. As such, process-specific filter validation failure is rare. 
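The penetration-rate arithmetic above can be reproduced in a few lines. Only the aggregate counts come from the text; the helper function and its name are illustrative.

```python
# Sketch of the reported penetration rates; counts are the aggregate
# figures quoted in the abstract, the function itself is illustrative.

def penetration_rate(failures: int, total: int) -> float:
    """Percentage of bacterial challenges that showed penetration."""
    return 100.0 * failures / total

overall = penetration_rate(249, 1038)   # all filtrations with throughput data
in_window = penetration_rate(9, 205)    # 680-2260 mL/cm2 throughput window

print(f"overall: {overall:.0f}%")                     # 24%
print(f"within throughput window: {in_window:.1f}%")  # 4.4%
```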
However, while bacterial penetration of sterilizing-grade filters during process-specific filter validation is rarely detected, some fluids (such as vaccines and liposomal fluids) have been associated with an increased incidence of bacterial penetration. The goal of the following analysis was to identify important drivers of process-specific filter validation failure. The identification of these drivers may assist in the design of commercial sterile filtration processes with a low risk of filter validation failure. Filter validation data for low-surface-tension fluids were collected and evaluated with regard to bacterial concentration and rates, as well as filtered fluid volume and rate (Pall Corporation). The master data set (∼1160 individual filtrations) included all recorded instances of process-specific filter validation failures but did not include all successful filter validation bacterial challenge tests. This allowed for a close examination of the conditions that lead to process-specific filter validation failure. As previously reported, two significant drivers of bacterial penetration were identified: the total bacterial load (the total number of bacteria per filter) and the bacterial load rate (the rate at which bacteria were applied to the filter). In addition to these parameters, three further possible drivers of failure were identified: volumetric throughput, filter flux, and pressure. When designing a commercial process for the sterile filtration of a low-surface-tension fluid (or any other penetrative-risk fluid), targeting the identified bacterial challenge loads, volume throughput, and corresponding flux rates could decrease, and possibly eliminate, the risk of filter validation failure. It is important to keep in mind, however, that these are general trends, and some test fluids may not conform to them. 
Ultimately, it is important to evaluate both filterability and bacterial retention of the test fluid under proposed process conditions prior to finalizing the manufacturing process to ensure successful filter validation of low-surface-tension fluids. © PDA, Inc. 2015.
Silica dust exposure: Effect of filter size to compliance determination
NASA Astrophysics Data System (ADS)
Amran, Suhaily; Latif, Mohd Talib; Khan, Md Firoz; Leman, Abdul Mutalib; Goh, Eric; Jaafar, Shoffian Amin
2016-11-01
Monitoring of respirable dust was performed using an integrated sampling system consisting of a sampling pump attached to filter media and a separating device such as a cyclone or special cassette. Depending on the selected method, the filter is either a 25 mm or a 37 mm polyvinyl chloride (PVC) filter. The aim of this study was to compare the performance of the two filter types during personal respirable dust sampling for silica dust under field conditions. The comparison focused on the final compliance judgment based on both datasets. Eight-hour parallel sampling of personal respirable dust exposure was performed among 30 crusher operators at six quarries. Each crusher operator carried a parallel set of integrated sampling trains containing either a 25 mm or a 37 mm PVC filter. Each set consisted of a standard-flow SKC sampler attached to an SKC GS3 cyclone and a two-piece cassette loaded with a 5.0 µm PVC filter. Samples were analyzed by the gravimetric technique. Personal respirable dust exposures from the two filter types showed a significant positive correlation (p < 0.05) of moderate strength (r² = 0.6431). Personal exposure based on the 25 mm PVC filter indicated 0.1% non-compliance overall, while the 37 mm PVC filter indicated similar findings at 0.4%. Both datasets showed similar arithmetic means (AM) and geometric means (GM). Overall, we concluded that personal respirable dust exposure based on either the 25 mm or the 37 mm PVC filter will give a similar compliance determination. Both filters are reliable for use in respirable dust monitoring for silica-related exposure.
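The paired comparison behind this conclusion can be sketched numerically. The exposures below are simulated, not the study's data, and the 0.05 mg/m³ compliance limit is assumed for illustration only.

```python
# Hypothetical paired-sampling sketch of the 25 mm vs. 37 mm comparison.
# All numbers are simulated; the exposure limit is an assumed placeholder.
import numpy as np

rng = np.random.default_rng(1)
exp25 = rng.lognormal(mean=-4.0, sigma=0.5, size=30)   # 25 mm filter results, mg/m^3
exp37 = exp25 * rng.normal(1.0, 0.15, size=30)         # correlated parallel 37 mm results

limit = 0.05                                           # assumed limit, mg/m^3
r = np.corrcoef(exp25, exp37)[0, 1]                    # between-filter correlation
noncompliance_25 = float((exp25 > limit).mean())       # fraction of samples over limit
noncompliance_37 = float((exp37 > limit).mean())
```

With strongly correlated parallel samples, the two filters flag nearly the same set of workers as non-compliant, which is the study's compliance-determination argument in miniature.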
Full complex spatial filtering with a phase-mostly DMD [Deformable Mirror Device]
NASA Technical Reports Server (NTRS)
Florence, James M.; Juday, Richard D.
1991-01-01
A new technique for implementing fully complex spatial filters with a phase-mostly deformable mirror device (DMD) light modulator is described. The technique combines two or more phase-modulating flexure-beam mirror elements into a single macro-pixel. By manipulating the relative phases of the individual sub-pixels within the macro-pixel, the amplitude and the phase can be independently set for this filtering element. The combination of DMD sub-pixels into a macro-pixel is accomplished by adjusting the optical system resolution, thereby trading off system space bandwidth product for increased filtering flexibility. Volume in the larger-dimensioned space (the product of space bandwidth and complex-axes count) is conserved. Experimental results are presented mapping out the coupled amplitude and phase characteristics of the individual flexure-beam DMD elements and demonstrating the independent control of amplitude and phase in a combined macro-pixel. This technique is generally applicable for implementation with any type of phase-modulating light modulator.
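The macro-pixel idea, two phase-only sub-pixels whose fields combine into one complex value, can be written out directly. This is an idealized sketch (equal sub-pixel weighting, unit amplitude); the function names are ours.

```python
# Idealized two-sub-pixel macro-pixel: the optics averages two
# unit-amplitude, phase-only fields into one complex filter value.
import cmath
import math

def macro_pixel(phi1: float, phi2: float) -> complex:
    """Combined field of two phase-only sub-pixels (equal weighting assumed)."""
    return (cmath.exp(1j * phi1) + cmath.exp(1j * phi2)) / 2

def sub_pixel_phases(amplitude: float, phase: float):
    """Invert the combination: the phase *difference* sets the amplitude
    (via a cosine), the phase *average* sets the phase (0 <= amplitude <= 1)."""
    delta = 2 * math.acos(amplitude)
    return phase + delta / 2, phase - delta / 2

p1, p2 = sub_pixel_phases(0.5, 0.7)
z = macro_pixel(p1, p2)   # |z| = 0.5, arg(z) = 0.7
```

This shows why two phase-only elements suffice for independent amplitude and phase control: the pair (difference, average) maps one-to-one onto (amplitude, phase).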
Rotscholl, Ingo; Trampert, Klaus; Krüger, Udo; Perner, Martin; Schmidt, Franz; Neumann, Cornelius
2015-11-16
To simulate and optimize optical designs with respect to perceived color and homogeneity in commercial ray-tracing software, realistic light source models are needed. Spectral rayfiles provide angularly and spatially varying spectral information. We propose a spectral reconstruction method that requires a minimum of time-consuming goniophotometric near-field measurements with optical filters for the purpose of creating spectral rayfiles. Our discussion focuses on the selection of the ideal optical filter combination for any arbitrary spectrum out of a given filter set, considering measurement uncertainties with Monte Carlo simulations. We minimize the simulation time by a preselection of all filter combinations, which is based on a factorial design.
Degradation of electro-optic components aboard LDEF
NASA Technical Reports Server (NTRS)
Blue, M. D.
1993-01-01
Remeasurement of the properties of a set of electro-optic components exposed to the low-earth environment aboard the Long Duration Exposure Facility (LDEF) indicates that most components survived quite well. Typical components showed some effects related to the space environment unless well protected. The effects were often small but significant. Results for semiconductor infrared detectors, lasers, and LED's, as well as filters, mirrors, and black paints are described. Semiconductor detectors and emitters were scarred but reproduced their original characteristics. Spectral characteristics of multi-layer dielectric filters and mirrors were found to be altered and degraded. Increased absorption in black paints indicates an increase in absorption sites, giving rise to enhanced performance as coatings for baffles and sunscreens.
Economical Emission-Line Mapping: ISM Properties of Nearby Protogalaxy Analogs
NASA Astrophysics Data System (ADS)
Monkiewicz, Jacqueline A.
2017-01-01
Optical emission line imaging can produce a wealth of information about the conditions of the interstellar medium, but a full set of custom emission-line filters for a professional-grade telescope camera can cost many thousands of dollars. A cheaper alternative is to use commercially-produced 2-inch narrow-band astrophotography filters. In order to use these standardized filters with professional-grade telescope cameras, custom filter mounts must be manufactured for each individual filter wheel. These custom filter adaptors are produced by 3-D printing rather than standard machining, which further lowers the total cost. I demonstrate the feasibility of this technique with H-alpha, H-beta, and [OIII] emission line mapping of the low metallicity star-forming galaxies IC10 and NGC 1569, taken with my astrophotography filter set on three different 2-meter class telescopes in Southern Arizona.
Compressive Detection of Highly Overlapped Spectra Using Walsh-Hadamard-Based Filter Functions.
Corcoran, Timothy C
2018-03-01
In the chemometric context in which spectral loadings of the analytes are already known, spectral filter functions may be constructed that allow the scores of mixtures of analytes to be determined directly, on the fly, by applying a compressive detection strategy. Rather than collecting the entire spectrum over the relevant region for the mixture, a filter function may be applied within the spectrometer itself so that only the scores are recorded. Consequently, compressive detection shrinks data sets tremendously. The Walsh functions, the binary basis used in Walsh-Hadamard transform spectroscopy, form a complete orthonormal set well suited to compressive detection. A method for constructing filter functions using binary fourfold linear combinations of Walsh functions is detailed using mathematics borrowed from genetic algorithm work, as a means of optimizing these functions for a specific set of analytes. These filter functions can be constructed to automatically strip the baseline from analysis. Monte Carlo simulations were performed with a mixture of four highly overlapped Raman loadings and with ten excitation-emission matrix loadings; both sets showed a very high degree of spectral overlap. Reasonable estimates of the true scores were obtained in both simulations using noisy data sets, proving the linearity of the method.
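The core idea, record only a few Walsh-filtered outputs and recover scores from known loadings, can be sketched with NumPy. The dimensions, random loadings, and the choice of four Hadamard rows are illustrative, not the paper's fourfold binary construction.

```python
# Compressive-detection sketch: project a mixture spectrum onto a few
# Walsh-type (+/-1) filter functions, then recover analyte scores by
# least squares against the known loadings. All numbers are illustrative.
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester construction of an n x n Hadamard matrix (n a power of two);
    its rows are Walsh-type +/-1 functions."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(0)
n_channels, n_analytes = 16, 3
loadings = rng.random((n_analytes, n_channels))   # known analyte spectra
true_scores = np.array([0.2, 1.5, 0.7])
mixture = true_scores @ loadings                  # spectrum of the mixture

F = hadamard(n_channels)[1:5]      # four +/-1 filter functions
measured = F @ mixture             # only 4 numbers recorded, not 16 channels
design = F @ loadings.T            # 4 x 3: each filter's response per analyte
scores, *_ = np.linalg.lstsq(design, measured, rcond=None)  # recovers true_scores
```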
Effectiveness of adverse effects search filters: drugs versus medical devices.
Farrah, Kelly; Mierzwinski-Urban, Monika; Cimon, Karen
2016-07-01
The study tested the performance of adverse effects search filters when searching for safety information on medical devices, procedures, and diagnostic tests in MEDLINE and Embase. The sensitivity of 3 filters was determined using a sample of 631 references from 131 rapid reviews related to the safety of health technologies. The references were divided into 2 sets by type of intervention: drugs and nondrug health technologies. Keyword and indexing analysis were performed on references from the nondrug testing set that 1 or more of the filters did not retrieve. For all 3 filters, sensitivity was lower for nondrug health technologies (ranging from 53%-87%) than for drugs (88%-93%) in both databases. When tested on the nondrug health technologies set, sensitivity was lower in Embase (ranging from 53%-81%) than in MEDLINE (67%-87%) for all filters. Of the nondrug records that 1 or more of the filters missed, 39% of the missed MEDLINE records and 18% of the missed Embase records did not contain any indexing terms related to adverse events. Analyzing the titles and abstracts of nondrug records that were missed by any 1 filter, the most commonly used keywords related to adverse effects were: risk, complications, mortality, contamination, hemorrhage, and failure. In this study, adverse effects filters were less effective at finding information about the safety of medical devices, procedures, and tests compared to information about the safety of drugs.
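The sensitivity figures above are recall against a reference set of known-relevant records, which reduces to a set intersection. The record IDs below are made up for illustration.

```python
# Filter sensitivity (recall) against a reference set; IDs are hypothetical.
gold = {"r01", "r02", "r03", "r04", "r05"}    # known-relevant references
retrieved = {"r01", "r03", "r04", "r99"}      # what the search filter found

sensitivity = len(gold & retrieved) / len(gold)
print(f"sensitivity: {sensitivity:.0%}")      # sensitivity: 60%
```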
63. (Credit JTL) Filter room looking east from doorway of ...
63. (Credit JTL) Filter room looking east from doorway of 1887 high service room. Remodelled Hyatt tub filters are in foreground; remodelled New York horizontal pressure filters are in background. These two sets of filters were retired in 1942. - McNeil Street Pumping Station, McNeil Street & Cross Bayou, Shreveport, Caddo Parish, LA
A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.
Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng
To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined by the significant nonstationarity of EEG. Based on regularization, most conventional stationary spatial filter design methods address the nonstationarity at the cost of interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices cannot be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for spatial filter design. With Fisher's ratio in feature space used directly as the objective function, spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter can be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improved classification performance for both single broadband and filter bank settings compared with conventional non-unified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.
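A variance-ratio spatial filter of this general family can be sketched as a generalized eigenproblem. This is a CSP-style illustration of a ratio objective, not the authors' exact feature-space Fisher's-ratio formulation.

```python
# CSP-style sketch: find w maximizing (w' S1 w) / (w' S2 w), the
# variance ratio between two classes of EEG trials. Generic illustration,
# not the paper's unified feature-space method.
import numpy as np

def ratio_spatial_filter(X1: np.ndarray, X2: np.ndarray) -> np.ndarray:
    """X1, X2: (trials, channels, samples) arrays for the two classes.
    Returns the unit-norm filter with the largest class-1/class-2 variance ratio."""
    def mean_cov(X):
        return np.mean([x @ x.T / x.shape[1] for x in X], axis=0)
    S1, S2 = mean_cov(X1), mean_cov(X2)
    # Generalized eigenproblem S1 w = lambda S2 w, solved as eig(S2^-1 S1).
    vals, vecs = np.linalg.eig(np.linalg.solve(S2, S1))
    w = vecs[:, np.argmax(vals.real)].real
    return w / np.linalg.norm(w)
```

On synthetic data where one class carries extra variance in a single channel, the returned filter concentrates its weight on that channel, which is the discrimination behavior the ratio objective targets.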
Filter Feeding, Chaotic Filtration, and a Blinking Stokeslet
NASA Astrophysics Data System (ADS)
Blake, J. R.; Otto, S. R.; Blake, D. A.
The filtering mechanisms in bivalve molluscs, such as the mussel Mytilus edulis, and in sessile organisms, such as Vorticella or Stentor, involve complex fluid mechanical phenomena. In the former example, three different sets of cilia serving different functions are involved in the process, whereas in the sessile organisms the flexibility and contractile nature of the stalk may play an important role in increasing the filtering efficiency of the organisms. In both cases, beating microscopic cilia are the "engines" driving the fluid motion, so the fluid mechanics will be dominated entirely by viscous forces. A fluid mechanical model is developed for the filtering mechanism in mussels that enables estimates to be made of the pressure drop through the gill filaments due to (i) latero-frontal filtering cilia, (ii) the lateral (pumping) cilia, and (iii) through the non-ciliated zone of the ventral end of the filament. The velocity profile across the filaments indicates that a backflow can occur in the centre of the channel, leading to the formation of two "standing" eddies which may drive particles towards the mucus-laden short cilia, the third set of cilia. Filter feeding in the sessile organisms is modelled by a point force above a rigid boundary. The point force periodically changes its point of application according to a given protocol (a blinking stokeslet). The resulting fluid field is illustrated via Poincaré sections and particle dispersion, showing the potential for a much improved filtering efficiency. Returning to filter feeding in bivalve molluscs, this concept is extended to a pair of blinking stokeslets above a rigid boundary to give insight into possible mechanisms for movement of food particles onto the short mucus-bearing cilia. The appendix contains a Latin and English version of an "Ode of Achievement" in celebration of Sir James Lighthill's contributions to mathematics and fluid mechanics.
Feld, Louise; Nielsen, Tue Kjærgaard; Hansen, Lars Hestbjerg; Aamand, Jens
2015-01-01
In this study, we investigated the establishment of natural bacterial degraders in a sand filter treating groundwater contaminated with the phenoxypropionate herbicides (RS)-2-(4-chloro-2-methylphenoxy)propanoic acid (MCPP) and (RS)-2-(2,4-dichlorophenoxy)propanoic acid (DCPP) and the associated impurity/catabolite 4-chlorophenoxypropanoic acid (4-CPP). A pilot facility was set up in a contaminated landfill site. Anaerobic groundwater was pumped up and passed through an aeration basin and subsequently through a rapid sand filter, which is characterized by a short residence time of the water in the filter. For 3 months, the degradation of DCPP, MCPP, and 4-CPP in the sand filter increased to 15 to 30% of the inlet concentration. A significant selection for natural bacterial herbicide degraders also occurred in the sand filter. Using a most-probable-number (MPN) method, we found a steady increase in the number of culturable phenoxypropionate degraders, reaching approximately 5 × 10⁵ degraders per g sand by the end of the study. Using a quantitative PCR targeting the two phenoxypropionate degradation genes, rdpA and sdpA, encoding stereospecific dioxygenases, a parallel increase was observed, but with the gene copy numbers being about 2 to 3 log units higher than the MPN. In general, the sdpA gene was more abundant than the rdpA gene, and the establishment of a significant population of bacteria harboring sdpA occurred faster than the establishment of an rdpA gene-carrying population. The identities of the specific herbicide degraders in the sand filter were assessed by Illumina MiSeq sequencing of 16S rRNA genes from sand filter samples and from selected MPN plate wells. We propose a list of potential degrader bacteria involved in herbicide degradation, including representatives belonging to the Comamonadaceae and Sphingomonadales. PMID:26590282
Hedegaard, Mathilde J; Albrechtsen, Hans-Jørgen
2014-01-01
Filter sand samples, taken from aerobic rapid sand filters used for treating groundwater at three Danish waterworks, were investigated for their pesticide removal potential and to assess the kinetics of the removal process. Microcosms were set up with filter sand, treated water, and the pesticides or metabolites mecoprop (MCPP), bentazone, glyphosate and p-nitrophenol were applied in initial concentrations of 0.03-2.4 μg/L. In all the investigated waterworks the concentration of pesticides in the water decreased - MCPP decreased to 42-85%, bentazone to 15-35%, glyphosate to 7-14% and p-nitrophenol 1-3% - from the initial concentration over a period of 6-13 days. Mineralisation of three out of four investigated pesticides was observed at Sjælsø waterworks Plant II - up to 43% of the initial glyphosate was mineralised within six days. At Sjælsø waterworks Plant II the removal kinetics of bentazone revealed that less than 30 min was needed to remove 50% of the bentazone at all the tested initial concentrations (0.1-2.4 μg/L). Increased oxygen availability led to greater and faster removal of bentazone in the microcosms. After 1 h, bentazone removal (an initial bentazone concentration of 0.1 μg/L) increased from 0.21%/g filter sand to 0.75%/g filter sand, when oxygen availability was increased from 0.28 mg O2/g filter sand to 1.09 mg O2/g filter sand. Bentazone was initially cleaved in the removal process. A metabolite, which contained the carbonyl group, was removed rapidly from the water phase and slowly mineralised after 24 h, while a metabolite which contained the benzene-ring was still present in the water phase. However, the microbial removal of this metabolite was initiated over seven days. Copyright © 2013 Elsevier Ltd. All rights reserved.
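If the reported 50% bentazone removal within 30 minutes is read as first-order kinetics (our assumption for illustration, not the paper's claim), the implied rate constant follows directly:

```python
# First-order reading of the reported removal: C(t) = C0 * exp(-k t),
# with k inferred from a 30-minute half-life. The first-order assumption
# is ours, made only to illustrate the kinetics.
import math

t_half_min = 30.0                    # reported: 50% removed in < 30 min
k = math.log(2) / t_half_min         # rate constant, per minute

def remaining_fraction(t_min: float) -> float:
    """Fraction of the initial concentration remaining after t_min minutes."""
    return math.exp(-k * t_min)

print(round(remaining_fraction(30), 2))   # 0.5
print(round(remaining_fraction(60), 2))   # 0.25
```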
Wadhwa, Vibhor; Trivedi, Premal S; Ali, Sumera; Ryu, Robert K; Pezeshkmehr, Amir
2018-02-01
Inferior vena cava (IVC) filter placement in children has been described in the literature, but there is variability with regard to its indications. No nationally representative study has been done to compare practice patterns of filter placements at adult and children's hospitals. The aim of this study was to perform a nationally representative comparison of IVC filter placement practices in children at adult and children's hospitals. The 2012 Kids' Inpatient Database was searched for IVC filter placements in children <18 years of age. Using the International Classification of Diseases, 9th Revision (ICD-9) code for filter insertion (38.7), IVC filter placements were identified. A small number of children with congenital cardiovascular anomaly codes were excluded to improve the specificity of the code used to identify filter placement. Filter placements were further classified by patient demographics, hospital type (children's and adult), United States geographic region, urban/rural location, and teaching status. Statistical significance of differences between children's or adult hospitals was determined using the Wilcoxon rank sum test. A total of 618 IVC filter placements were identified in children <18 years (367 males, 251 females, age range: 5-18 years) during 2012. The majority of placements occurred in adult hospitals (573/618, 92.7%). Significantly more filters were placed in the setting of venous thromboembolism in children's hospitals (40/44, 90%) compared to adult hospitals (246/573, 43%) (P<0.001). Prophylactic filters comprised 327/573 (57%) at adult hospitals, with trauma being the most common indication (301/327, 92%). The mean length of stay for patients receiving filters was 24.5 days in children's hospitals and 18.4 days in adult hospitals. The majority of IVC filters in children are placed in adult hospital settings. 
Children's hospitals are more likely to place therapeutic filters for venous thromboembolism, compared to adult hospitals where the prophylactic setting of trauma predominates.
NASA Astrophysics Data System (ADS)
Brakensiek, Nickolas L.; Kidd, Brian; Mesawich, Michael; Stevens, Don, Jr.; Gotlinsky, Barry
2003-06-01
A design of experiment (DOE) was implemented to show the effects of various point of use filters on the coat process. The DOE takes into account the filter media, pore size, and pumping means, such as dispense pressure, time, and spin speed. The coating was executed on a TEL Mark 8 coat track, with an IDI M450 pump, and PALL 16 stack Falcon filters. A KLA 2112 set at 0.69 μm pixel size was used to scan the wafers to detect and identify the defects. The process found for DUV42P to maintain a low defect coating irrespective of the filter or pore size is a high start pressure, low end pressure, low dispense time, and high dispense speed. The IDI M450 pump has the capability to compensate for bubble type defects by venting the defects out of the filter before the defects are in the dispense line and the variable dispense rate allows the material in the dispense line to slow down at the end of dispense and not create microbubbles in the dispense line or tip. Also the differential pressure sensor will alarm if the pressure differential across the filter increases over a user-determined setpoint. The pleat design allows more surface area in the same footprint to reduce the differential pressure across the filter and transport defects to the vent tube. The correct low defect coating process will maximize the advantage of reducing filter pore size or changing the filter media.
Privacy-Preserving Distributed Information Sharing
2006-07-01
Using Bloom Filters. Bloom filters provide a compact probabilistic representation of set membership [6]. Instead of using T filters, we can use a combined Bloom filter, achieving the same asymptotic communication complexity; the parameters can be chosen by slightly adjusting the analysis given in the proof of Theorem 26.
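The Bloom filter referenced here can be sketched in a few lines. The bit-array size, hash count, and salted-hash index scheme below are illustrative defaults, not the report's construction.

```python
# Minimal Bloom filter: a compact probabilistic set-membership structure.
# Parameters (m bits, k hashes) and the salted SHA-256 indexing are
# illustrative choices, not taken from the report.
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1024, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)        # one byte per bit, for clarity

    def _indexes(self, item: str):
        """k bit positions for an item, via k salted hashes."""
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str) -> None:
        for idx in self._indexes(item):
            self.bits[idx] = 1

    def __contains__(self, item: str) -> bool:
        # May return a false positive, never a false negative.
        return all(self.bits[idx] for idx in self._indexes(item))
```

Membership queries cost k hashes regardless of set size, which is the source of the communication savings over sending T separate filters or the sets themselves.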
Characteristics of spectro-temporal modulation frequency selectivity in humans.
Oetjen, Arne; Verhey, Jesko L
2017-03-01
There is increasing evidence that the auditory system shows frequency selectivity for spectro-temporal modulations. A recent study by the authors reported spectro-temporal modulation masking patterns in agreement with the hypothesis of spectro-temporal modulation filters in the human auditory system [Oetjen and Verhey (2015). J. Acoust. Soc. Am. 137(2), 714-723]. In the present study, that experimental data set and additional data were used to model this spectro-temporal frequency selectivity. The additional data were collected to investigate to what extent the spectro-temporal modulation-frequency selectivity results from a combination of a purely temporal amplitude-modulation filter and a purely spectral amplitude-modulation filter. In contrast to the previous study, thresholds were measured for masker and target modulations with opposite directions, i.e., an upward-pointing target modulation and a downward-pointing masker modulation. The comparison of this data set with previous corresponding data, in which target and masker modulations had the same direction, indicates that a specific spectro-temporal modulation filter is required to simulate all aspects of spectro-temporal modulation frequency selectivity. A model using a modified Gabor filter with a purely temporal and a purely spectral filter predicts the spectro-temporal modulation masking data.
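A generic spectro-temporal Gabor filter (a Gaussian envelope times a ripple tuned to a temporal rate and a spectral scale) can be written down directly. The unit-width envelope and parameter names below are our simplification, not the authors' modified Gabor.

```python
# Generic spectro-temporal Gabor filter: Gaussian envelope times a
# drifting spectro-temporal ripple. Unit envelope widths are an assumed
# simplification for illustration.
import numpy as np

def st_gabor(t, f, rate_hz, scale_cyc_per_oct, phase=0.0):
    """t: time axis (s, centered); f: log-frequency axis (octaves, centered).
    rate_hz: temporal modulation rate; scale_cyc_per_oct: spectral scale."""
    T, F = np.meshgrid(t, f)
    envelope = np.exp(-T**2 / 2 - F**2 / 2)
    ripple = np.cos(2 * np.pi * (rate_hz * T + scale_cyc_per_oct * F) + phase)
    return envelope * ripple

G = st_gabor(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64),
             rate_hz=4, scale_cyc_per_oct=1)
```

Flipping the sign of `rate_hz` reverses the ripple's direction (upward vs. downward), which is the target/masker direction manipulation the experiment relies on.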
Independent Component Analysis of Textures
NASA Technical Reports Server (NTRS)
Manduchi, Roberto; Portilla, Javier
2000-01-01
A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
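The selection criterion, make the product of marginals approximate the joint density of the filter outputs, can be scored with a histogram estimate of the mutual information between two outputs (zero for independent outputs). This is a generic sketch of the criterion only, not the paper's steerable-filter ICA pipeline.

```python
# Histogram estimate of mutual information (nats) between two filter
# outputs. ICA-based filter selection seeks filters whose outputs make
# this quantity small, so the product of marginals approximates the joint.
import numpy as np

def independence_gap(x, y, bins=16):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    joint = joint / joint.sum()                      # joint probability table
    px = joint.sum(axis=1, keepdims=True)            # marginal of x
    py = joint.sum(axis=0, keepdims=True)            # marginal of y
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / (px * py)[mask])))
```

Dependent outputs (e.g., one filter's response plus noise) score well above independent ones, so the gap can rank candidate filter sets.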
Filter-based chemical sensors for hazardous materials
NASA Astrophysics Data System (ADS)
Major, Kevin J.; Ewing, Kenneth J.; Poutous, Menelaos K.; Sanghera, Jasbinder S.; Aggarwal, Ishwar D.
2014-05-01
The development of new techniques for the detection of homemade explosive devices is an area of intense research for the defense community. Such sensors must exhibit high selectivity to detect explosives and/or explosives related materials in a complex environment. Spectroscopic techniques such as FTIR are capable of discriminating between the volatile components of explosives; however, there is a need for less expensive systems for wide-range use in the field. To tackle this challenge we are investigating the use of multiple, overlapping, broad-band infrared (IR) filters to enable discrimination of volatile chemicals associated with an explosive device from potential background interferants with similar chemical signatures. We present an optical approach for the detection of fuel oil (the volatile component in ammonium nitrate-fuel oil explosives) that relies on IR absorption spectroscopy in a laboratory environment. Our proposed system utilizes a three filter set to separate the IR signals from fuel oil and various background interferants in the sample headspace. Filter responses for the chemical spectra are calculated using a Gaussian filter set. We demonstrate that using a specifically chosen filter set enables discrimination of pure fuel oil, hexanes, and acetone, as well as various mixtures of these components. We examine the effects of varying carrier gasses and humidity on the collected spectra and corresponding filter response. We study the filter response on these mixtures over time as well as present a variety of methods for observing the filter response functions to determine the response of this approach to detecting fuel oil in various environments.
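The three-filter discrimination idea reduces to projecting an absorption spectrum onto a bank of broad, overlapping Gaussian transmission curves. The wavenumber range, filter centers, and width below are illustrative placeholders, not the paper's filter design.

```python
# Broad-band Gaussian filter bank and the per-filter responses of a
# spectrum; all centers/widths are illustrative placeholders.
import numpy as np

def gaussian_filter_bank(wavenumbers, centers, width):
    """One row per filter: Gaussian transmission curves over the spectral axis."""
    wn = np.asarray(wavenumbers, dtype=float)
    c = np.asarray(centers, dtype=float)[:, None]
    return np.exp(-(wn[None, :] - c) ** 2 / (2 * width**2))

def filter_responses(spectrum, bank):
    """Integrated signal behind each filter (spectrum dotted with each curve)."""
    return bank @ np.asarray(spectrum, dtype=float)

wn = np.linspace(2800, 3100, 301)                       # cm^-1, illustrative C-H region
bank = gaussian_filter_bank(wn, centers=[2850, 2930, 2960], width=30)
spectrum = np.exp(-(wn - 2850) ** 2 / (2 * 10**2))      # toy band at 2850 cm^-1
responses = filter_responses(spectrum, bank)            # largest for the 2850 filter
```

Distinct chemicals produce distinct response triplets even though each individual filter is far too broad to resolve them, which is the discrimination mechanism described above.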
Vena Cava Filter Retrieval with Aorto-Iliac Arterial Strut Penetration.
Holly, Brian P; Gaba, Ron C; Lessne, Mark L; Lewandowski, Robert J; Ryu, Robert K; Desai, Kush R; Sing, Ronald F
2018-05-03
The purpose of this study was to evaluate the safety and technical success of inferior vena cava (IVC) filter retrieval in the setting of aorto-iliac arterial strut penetration. IVC filter registries from six large United States IVC filter retrieval practices were retrospectively reviewed to identify patients who underwent IVC filter retrieval in the setting of filter strut penetration into the adjacent aorta or iliac artery. Patient demographics, implant duration, indication for placement, IVC filter type, retrieval technique and technical success, adverse events, and post-procedural clinical outcomes were identified. Arterial penetration was determined based on pre-procedure CT imaging in all cases. The IVC filter retrieval technique used was at the discretion of the operating physician. Seventeen patients from six US centers who underwent retrieval of an IVC filter with at least one strut penetrating either the aorta or iliac artery were identified. The retrieval technical success rate was 100% (17/17), without any major adverse events. Post-retrieval follow-up ranging from 10 days to 2 years (mean 4.6 months) was available in 12/17 (71%) patients; no delayed adverse events were encountered. Findings from this series suggest that chronically indwelling IVC filters with aorto-iliac arterial strut penetration may be safely retrieved.
Information-Based Analysis of Data Assimilation (Invited)
NASA Astrophysics Data System (ADS)
Nearing, G. S.; Gupta, H. V.; Crow, W. T.; Gong, W.
2013-12-01
Data assimilation is defined as the Bayesian conditioning of uncertain model simulations on observations for the purpose of reducing uncertainty about model states. Practical data assimilation methods make the application of Bayes' law tractable either by employing assumptions about the prior, posterior and likelihood distributions (e.g., the Kalman family of filters) or by using resampling methods (e.g., bootstrap filter). We propose to quantify the efficiency of these approximations in an OSSE setting using information theory and, in an OSSE or real-world validation setting, to measure the amount - and more importantly, the quality - of information extracted from observations during data assimilation. To analyze DA assumptions, uncertainty is quantified as the Shannon-type entropy of a discretized probability distribution. The maximum amount of information that can be extracted from observations about model states is the mutual information between states and observations, which is equal to the reduction in entropy in our estimate of the state due to Bayesian filtering. The difference between this potential and the actual reduction in entropy due to Kalman (or other type of) filtering measures the inefficiency of the filter assumptions. Residual uncertainty in DA posterior state estimates can be attributed to three sources: (i) non-injectivity of the observation operator, (ii) noise in the observations, and (iii) filter approximations. The contribution of each of these sources is measurable in an OSSE setting. The amount of information extracted from observations by data assimilation (or system identification, including parameter estimation) can also be measured by Shannon's theory. Since practical filters are approximations of Bayes' law, it is important to know whether the information that is extracted from observations by a filter is reliable.
We define information as either good or bad, and propose to measure these two types of information using partial Kullback-Leibler divergences. Defined this way, good and bad information sum to total information. This segregation of information into good and bad components requires a validation target distribution; in a DA OSSE setting, this can be the true Bayesian posterior, but in a real-world setting the validation target might be determined by a set of in situ observations.
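The entropy and mutual-information quantities described above can be sketched on a toy discretized joint distribution; the numbers below are illustrative and not from any assimilation experiment.

```python
import numpy as np

# Toy discretized joint distribution p(state, observation); values illustrative.
def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])       # joint over 2 states x 2 observations
p_x = p_xy.sum(axis=1)              # marginal over states
p_y = p_xy.sum(axis=0)              # marginal over observations

# Mutual information I(X;Y) = H(X) + H(Y) - H(X,Y): the maximum reduction in
# state uncertainty attainable by exact Bayesian conditioning on Y.
mi = entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())

# An approximate filter extracts only part of this; the shortfall measures
# the inefficiency of its assumptions (0.8 is a placeholder measurement).
actual_reduction = 0.8 * mi
inefficiency = mi - actual_reduction
```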
Filter replacement lifetime prediction
Hamann, Hendrik F.; Klein, Levente I.; Manzer, Dennis G.; Marianno, Fernando J.
2017-10-25
Methods and systems for predicting a filter lifetime include building a filter effectiveness history based on contaminant sensor information associated with a filter; determining a rate of filter consumption with a processor based on the filter effectiveness history; and determining a remaining filter lifetime based on the determined rate of filter consumption. Methods and systems for increasing filter economy include measuring contaminants in an internal and an external environment; determining a cost of a corrosion rate increase if unfiltered external air intake is increased for cooling; determining a cost of increased air pressure to filter external air; and if the cost of filtering external air exceeds the cost of the corrosion rate increase, increasing an intake of unfiltered external air.
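The rate-of-consumption idea above can be sketched as a linear extrapolation of a filter-effectiveness history to an assumed end-of-life threshold; all numbers below are illustrative, not taken from the source.

```python
import numpy as np

# Filter-effectiveness history (from contaminant sensors) and an assumed
# end-of-life threshold; all numbers are illustrative.
days = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
effectiveness = np.array([1.00, 0.93, 0.87, 0.80, 0.74])

# Rate of filter consumption: slope of a least-squares linear fit
rate, intercept = np.polyfit(days, effectiveness, 1)

END_OF_LIFE = 0.5                                          # assumed threshold
remaining_days = (END_OF_LIFE - effectiveness[-1]) / rate  # rate is negative
```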
NASA Astrophysics Data System (ADS)
Igoe, Damien P.; Parisi, Alfio V.; Amar, Abdurazaq; Rummenie, Katherine J.
2018-01-01
An evaluation of the use of median filters to reduce dark noise in smartphone high-resolution image sensors is presented. The Sony Xperia Z1 employed has a maximum image sensor resolution of 20.7 Mpixels, with each pixel having a side length of just over 1 μm. The large number of photosites provides an image sensor with very high sensitivity but also makes it prone to noise effects such as hot-pixels. As in earlier research with older smartphone models, no appreciable temperature effects were observed in the overall average pixel values for images taken at ambient temperatures between 5 °C and 25 °C. In this research, hot-pixels are defined as pixels with intensities above a specific threshold. The threshold is determined using the distribution of pixel values of a set of images with uniform statistical properties associated with the application of median filters of increasing size. An image with uniform statistics was employed as a training set from 124 dark images, and the threshold was determined to be 9 digital numbers (DN). The threshold remained constant for multiple resolutions and did not appreciably change even after a year of extensive field use and exposure to solar ultraviolet radiation. Although the uniformity of the temperature effects masked an increase in hot-pixel occurrences, the total number of occurrences represented less than 0.1% of the total image. Hot-pixels were removed by applying a median filter, with an optimum filter size of 7 × 7; similar trends were observed for four additional smartphone image sensors used for validation. Hot-pixels were also reduced by decreasing image resolution. This research provides a methodology to characterise the dark-noise behaviour of high-resolution image sensors for use in scientific investigations, especially as pixel sizes decrease.
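A minimal version of the hot-pixel procedure, assuming a synthetic dark frame and using the 9 DN threshold and 7 × 7 window reported above, might look like:

```python
import numpy as np

# Synthetic 64x64 dark frame with two injected hot-pixels; the 9 DN threshold
# and 7x7 window follow the values reported above.
rng = np.random.default_rng(0)
dark = rng.integers(0, 4, size=(64, 64)).astype(float)   # ordinary dark noise
dark[10, 20] = 200.0                                     # injected hot-pixels
dark[40, 51] = 150.0

def median_filter(img, size=7):
    """Plain-numpy median filter with edge padding."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

THRESHOLD_DN = 9.0
hot = dark > THRESHOLD_DN                    # hot-pixel map
cleaned = np.where(hot, median_filter(dark), dark)
```

Only flagged pixels are replaced, so the rest of the frame is untouched by the filter.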
Cervantes-Sanchez, Fernando; Hernandez-Aguirre, Arturo; Solorio-Meza, Sergio; Ornelas-Rodriguez, Manuel; Torres-Cisneros, Miguel
2016-01-01
This paper presents a novel method for improving the training step of single-scale Gabor filters by using the Boltzmann univariate marginal distribution algorithm (BUMDA) in X-ray angiograms. Since the single-scale Gabor filters (SSG) are governed by three parameters, the optimal selection of the SSG parameters is highly desirable in order to maximize the detection performance of coronary arteries while reducing the computational time. To obtain the best set of parameters for the SSG, the area under the receiver operating characteristic curve (A_z) is used as the fitness function. Moreover, to classify vessel and nonvessel pixels from the Gabor filter response, the interclass variance thresholding method has been adopted. The experimental results using the proposed method obtained the highest detection rate, with A_z = 0.9502 over a training set of 40 images and A_z = 0.9583 with a test set of 40 images. In addition, the experimental results of vessel segmentation provided an accuracy of 0.944 with the test set of angiograms. PMID:27738422
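The two pieces of the pipeline above, a three-parameter Gabor kernel and interclass-variance (Otsu) thresholding of its response, can be sketched as follows. The parameter values and toy image are illustrative; the BUMDA parameter search itself is not reproduced here.

```python
import numpy as np

# Single-scale Gabor kernel governed by three parameters (wavelength lam,
# orientation theta, envelope sigma) plus interclass-variance (Otsu)
# thresholding of the response. Parameter values are illustrative.
def gabor_kernel(lam, theta, sigma, half=7):
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()                         # zero-mean detector

def otsu_threshold(values, bins=64):
    """Threshold maximizing the between-class (interclass) variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)
    mu = np.cumsum(p * centers)
    with np.errstate(invalid="ignore", divide="ignore"):
        sigma_b = (mu[-1] * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(sigma_b)]

# Toy image: a bright vertical "vessel" on a noisy background
rng = np.random.default_rng(1)
img = rng.normal(0, 0.1, (32, 32))
img[:, 15:17] += 1.0

kernel = gabor_kernel(lam=8.0, theta=0.0, sigma=3.0)
resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
mask = resp > otsu_threshold(resp.ravel())      # vessel / nonvessel pixels
```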
New device for monitoring the colors of the night
NASA Astrophysics Data System (ADS)
Spoelstra, Henk
2014-05-01
The introduction of LED lighting in the outdoor environment may increase the amount of blue light in the night sky color spectrum. This can cause more light pollution due to Rayleigh scattering of the shorter wavelengths. Blue light may also have an impact on circadian rhythm of humans due to the suppression of melatonin. At present no long-term data sets of the color spectrum of the night sky are available. In order to facilitate the monitoring of levels and variations in the night sky spectrum, a low cost multi-filter instrument has been developed. Design considerations are described as well as the choice of suitable filters, which are critical - especially in the green wavelength band from 500 to 600 nm. Filters from the optical industry were chosen for this band because available astronomical filters exclude some or all of the low and high-pressure sodium lines from lamps, which are important in light pollution research. Correction factors are calculated to correct for the detector response and filter transmissions. Results at a suburban monitoring station showed that the light levels between 500 and 600 nm are dominant during clear and cloudy skies. The relative contribution of blue light increases with a clear moonless night sky. The change in color spectrum of the night sky under moonlit skies is more complex and is still under study.
Adaptive Filter Design Using Type-2 Fuzzy Cerebellar Model Articulation Controller.
Lin, Chih-Min; Yang, Ming-Shu; Chao, Fei; Hu, Xiao-Min; Zhang, Jun
2016-10-01
This paper proposes an efficient network and applies it as an adaptive filter for signal processing problems. An adaptive filter is proposed using a novel interval type-2 fuzzy cerebellar model articulation controller (T2FCMAC). The T2FCMAC realizes an interval type-2 fuzzy logic system based on the structure of the CMAC. Owing to their better ability to handle uncertainties, type-2 fuzzy sets can solve some complicated problems more effectively than type-1 fuzzy sets. In addition, a Lyapunov function is utilized to derive the conditions on the adaptive learning rates, so that the convergence of the filtering error can be guaranteed. In order to demonstrate the performance of the proposed adaptive T2FCMAC filter, it is tested in signal processing applications, including a nonlinear channel equalization system, a time-varying channel equalization system, and an adaptive noise cancellation system. The advantages of the proposed filter over other adaptive filters are verified through simulations.
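For concreteness, the adaptive noise cancellation application can be sketched with a plain LMS filter standing in for the T2FCMAC; the channel, step size, and signals below are made up.

```python
import numpy as np

# Plain LMS adaptive noise cancellation, standing in for the T2FCMAC to make
# the application concrete. Channel, step size, and signals are made up.
rng = np.random.default_rng(2)
n = 4000
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 50)                  # wanted signal
noise = rng.normal(0, 1, n)                          # reference noise input
channel = np.array([0.8, -0.3, 0.2])                 # unknown noise path (FIR)
primary = signal + np.convolve(noise, channel, mode="full")[:n]

taps, mu = 8, 0.01
w = np.zeros(taps)
out = np.zeros(n)
for k in range(taps, n):
    x = noise[k - taps + 1:k + 1][::-1]              # most-recent-first window
    y = w @ x                                        # noise estimate
    e = primary[k] - y                               # error = signal estimate
    w += mu * e * x                                  # LMS weight update
    out[k] = e

# After convergence the residual tracks the clean sinusoid
err_tail = np.mean((out[-500:] - signal[-500:]) ** 2)
```

The T2FCMAC replaces the linear combiner `w @ x` with a fuzzy-CMAC mapping, but the cancellation loop has this same shape.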
Bayesian Regression with Network Prior: Optimal Bayesian Filtering Perspective
Qian, Xiaoning; Dougherty, Edward R.
2017-01-01
The recently introduced intrinsically Bayesian robust filter (IBRF) provides fully optimal filtering relative to a prior distribution over an uncertainty class of joint random process models, whereas formerly the theory was limited to model-constrained Bayesian robust filters, for which optimization was limited to the filters that are optimal for models in the uncertainty class. This paper extends the IBRF theory to the situation where there are both a prior on the uncertainty class and sample data. The result is optimal Bayesian filtering (OBF), where optimality is relative to the posterior distribution derived from the prior and the data. The IBRF theories for effective characteristics and canonical expansions extend to the OBF setting. A salient focus of the present work is to demonstrate the advantages of Bayesian regression within the OBF setting over the classical Bayesian approach in the context of linear Gaussian models. PMID:28824268
Bio-knowledge based filters improve residue-residue contact prediction accuracy.
Wozniak, P P; Pelc, J; Skrzypecki, M; Vriend, G; Kotulska, M
2018-05-29
Residue-residue contact prediction through direct coupling analysis has reached impressive accuracy, but even higher accuracy will be needed to allow for routine modelling of protein structures. One way to improve the prediction accuracy is to filter predicted contacts using knowledge about the particular protein of interest or knowledge about protein structures in general. We focus on the latter and discuss a set of filters that can be used to remove false positive contact predictions. Each filter depends on one or a few cut-off parameters for which the filter performance was investigated. Combining all filters with default parameters removed, for a test set of 851 protein domains, 29% of the predictions, of which 92% were indeed false positives. All data and scripts are available from http://comprec-lin.iiar.pwr.edu.pl/FPfilter/. malgorzata.kotulska@pwr.edu.pl. Supplementary data are available at Bioinformatics online.
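The idea of a knowledge-based filter with a cut-off parameter can be illustrated with a deliberately simple example, a minimum sequence-separation rule; the paper's actual filters are more elaborate, and the data below are made up.

```python
# Predicted contacts as (residue i, residue j, coupling score); the single
# rule shown (minimum sequence separation) is only one example of the kind
# of structure-based filter discussed, with an assumed cut-off parameter.
predicted = [
    (5, 8, 0.95),
    (12, 60, 0.90),
    (3, 4, 0.88),
    (20, 75, 0.70),
]

MIN_SEPARATION = 5   # assumed cut-off parameter

def keep_contact(i, j, min_sep=MIN_SEPARATION):
    """Drop trivially local pairs, which are in contact in any fold."""
    return abs(i - j) >= min_sep

filtered = [(i, j, s) for i, j, s in predicted if keep_contact(i, j)]
removed = len(predicted) - len(filtered)
```

Sweeping the cut-off parameter and measuring how many removed predictions were true false positives is exactly the kind of evaluation the abstract describes.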
Wavelet compression of noisy tomographic images
NASA Astrophysics Data System (ADS)
Kappeler, Christian; Mueller, Stefan P.
1995-09-01
3D data acquisition is increasingly used in positron emission tomography (PET) to collect a larger fraction of the emitted radiation. A major practical difficulty with data storage and transmission in 3D-PET is the large size of the data sets: a typical dynamic study contains about 200 Mbyte of data. PET images inherently have a high level of photon noise and therefore are usually evaluated after being processed by a smoothing filter. In this work we examined lossy compression schemes under the postulate that they not induce image modifications exceeding those resulting from low-pass filtering. The standard we refer to is the Hanning filter. Resolution and inhomogeneity serve as figures of merit for quantification of image quality. The images to be compressed are transformed to a wavelet representation using Daubechies-12 wavelets and compressed by thresholding after filtering. We do not include further compression by quantization and coding here. Achievable compression factors at this level of processing are thirty to fifty.
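The compress-by-thresholding step can be sketched with a one-level Haar transform standing in for the Daubechies-12 basis; quantization and coding are omitted, as in the paper, and the signal and threshold are synthetic.

```python
import numpy as np

# One-level Haar transform standing in for the Daubechies-12 basis; small
# detail coefficients are zeroed (threshold value is illustrative).
def haar_1level(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # detail (high-pass)
    return a, d

def inv_haar_1level(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(3)
profile = np.repeat([0.0, 5.0, 2.0, 0.0], 64) + rng.normal(0, 0.3, 256)

a, d = haar_1level(profile)
d_kept = np.where(np.abs(d) > 0.5, d, 0.0)          # threshold the details
recon = inv_haar_1level(a, d_kept)

kept_fraction = (np.count_nonzero(d_kept) + a.size) / profile.size
rmse = np.sqrt(np.mean((recon - profile) ** 2))
```

Zeroed coefficients need not be stored, which is where the compression comes from; the reconstruction error stays on the order of the thresholded noise.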
Noble, Stephen R; Horstwood, Matthew S A; Davy, Pamela; Pashley, Vanessa; Spiro, Baruch; Smith, Steve
2008-07-01
Pb isotope compositions of biologically significant PM10 atmospheric particulates from a busy roadside location in London, UK, were measured using solution- and laser ablation-mode MC-ICP-MS. The solution-mode data for PM10 sampled between 1998 and 2001 document a dramatic shift to increasingly radiogenic compositions as leaded petrol was phased out. LA-MC-ICP-MS isotope analysis, piloted on a subset of the available samples, is shown to be a potential reconnaissance analytical technique. PM10 particles trapped on quartz filters were liberated from the filter surface, without ablating the filter substrate, using a 266 nm UV laser and a dynamic, large-diameter, low-fluence ablation protocol. The Pb isotope evolution noted in the London data set obtained by both analytical protocols is similar to that observed elsewhere in Western Europe following leaded petrol elimination. The data therefore provide important baseline isotope composition information useful for continued UK atmospheric monitoring through the early 21st century.
NASA Astrophysics Data System (ADS)
Prabhat, Prashant; Peet, Michael; Erdogan, Turan
2016-03-01
In order to design a fluorescence experiment, typically the spectra of a fluorophore and of a filter set are overlaid on a single graph and the spectral overlap is evaluated intuitively. However, in a typical fluorescence imaging system the fluorophores and optical filters are not the only wavelength dependent variables - even the excitation light sources have been changing. For example, LED Light Engines may have a significantly different spectral response compared to the traditional metal-halide lamps. Therefore, for a more accurate assessment of fluorophore-to-filter-set compatibility, all sources of spectral variation should be taken into account simultaneously. Additionally, intuitive or qualitative evaluation of many spectra does not necessarily provide a realistic assessment of the system performance. "SearchLight" is a freely available web-based spectral plotting and analysis tool that can be used to address the need for accurate, quantitative spectral evaluation of fluorescence measurement systems. This tool is available at: http://searchlight.semrock.com/. Based on a detailed mathematical framework [1], SearchLight calculates signal, noise, and signal-to-noise ratio for multiple combinations of fluorophores, filter sets, light sources and detectors. SearchLight allows for qualitative and quantitative evaluation of the compatibility of filter sets with fluorophores, analysis of bleed-through, identification of optimized spectral edge locations for a set of filters under specific experimental conditions, and guidance regarding labeling protocols in multiplexing imaging assays. Entire SearchLight sessions can be shared with colleagues and collaborators and saved for future reference. [1] Anderson, N., Prabhat, P. and Erdogan, T., Spectral Modeling in Fluorescence Microscopy, http://www.semrock.com (2010).
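The kind of calculation SearchLight automates can be sketched as spectral products integrated over wavelength; all spectra below are synthetic Gaussians and idealized bandpasses, for illustration only.

```python
import numpy as np

# Synthetic Gaussian spectra and idealized bandpasses; detector QE is a
# made-up linear roll-off. Constants are illustrative only.
wl = np.linspace(400.0, 700.0, 601)                  # wavelength grid, nm
dwl = wl[1] - wl[0]

def gauss(center, width):
    return np.exp(-((wl - center) / width) ** 2)

source = gauss(470, 30)                              # e.g. a blue LED band
ex_filter = (np.abs(wl - 470) < 20).astype(float)    # excitation bandpass
absorption = gauss(490, 25)                          # fluorophore excitation
emission = gauss(525, 25)                            # fluorophore emission
em_filter = (np.abs(wl - 525) < 20).astype(float)    # emission bandpass
detector_qe = np.clip(1.0 - (wl - 400.0) / 600.0, 0.0, 1.0)

# Excitation efficiency and collected emission as spectral-product integrals
excitation = np.sum(source * ex_filter * absorption) * dwl
collection = np.sum(emission * em_filter * detector_qe) * dwl
signal = excitation * collection

# Bleed-through: emission leaking back through the excitation bandpass
bleed = np.sum(emission * ex_filter * detector_qe) * dwl
```

Taking the source and detector spectra into account, rather than only fluorophore and filter curves, is exactly the point the abstract makes about LED versus metal-halide illumination.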
A filtering approach to edge preserving MAP estimation of images.
Humphrey, David; Taubman, David
2011-05-01
The authors present a computationally efficient technique for maximum a posteriori (MAP) estimation of images in the presence of both blur and noise. The image is divided into statistically independent regions, each modelled with a wide-sense stationary (WSS) Gaussian prior. Classical Wiener filter theory is used to generate a set of convex sets in the solution space, with the solution to the MAP estimation problem lying at their intersection. The proposed algorithm uses an underlying segmentation of the image, and means of determining and refining the segmentation are described. The algorithm is suitable for a range of image restoration problems, as it provides a computationally efficient way to deal with the shortcomings of Wiener filtering without sacrificing the computational simplicity of the filtering approach. The algorithm is also of interest from a theoretical viewpoint, as it provides a continuum of solutions between Wiener filtering and inverse filtering depending upon the segmentation used. We do not attempt to show here that the proposed method is the best general approach to the image reconstruction problem. However, related work referenced herein shows excellent performance in the specific problem of demosaicing.
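The classical Wiener filtering building block that the paper starts from can be sketched in 1-D; the blur, noise, and oracle signal spectrum below are synthetic simplifications, not the paper's region-wise estimator.

```python
import numpy as np

# Frequency-domain Wiener deconvolution of a blurred, noisy 1-D signal.
# Blur kernel, noise level, signal, and the oracle power spectrum are synthetic.
rng = np.random.default_rng(4)
n = 256
x = np.zeros(n); x[100:140] = 1.0                   # piecewise-constant "image"
h = np.zeros(n); h[:5] = 1.0 / 5.0                  # 5-tap moving-average blur
noise_var = 1e-3
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))   # circular blur
y += rng.normal(0, np.sqrt(noise_var), n)

H = np.fft.fft(h)
S = np.abs(np.fft.fft(x)) ** 2 / n                  # signal power spectrum (oracle)
N = noise_var                                       # flat noise spectrum
W = np.conj(H) * S / (np.abs(H) ** 2 * S + N)       # Wiener filter
x_hat = np.real(np.fft.ifft(W * np.fft.fft(y)))

mse_blurred = np.mean((y - x) ** 2)
mse_wiener = np.mean((x_hat - x) ** 2)
```

As the noise term N shrinks toward zero, W approaches the inverse filter 1/H, which is the continuum between Wiener and inverse filtering the abstract refers to.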
40 CFR 86.1434 - Equipment preparation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... New Gasoline-Fueled Otto-Cycle Light-Duty Vehicles and New Gasoline-Fueled Otto-Cycle Light-Duty... the device(s) for removing water from the exhaust sample and the sample filter(s). Remove any water from the water trap(s). Clean and replace the filter(s) as necessary. (c) Set the zero and span points...
Herbst, Daniel P
2014-09-01
Micropore filters are used during extracorporeal circulation to prevent gaseous and solid particles from entering the patient's systemic circulation. Although these devices improve patient safety, limitations in current designs have prompted the development of a new concept in micropore filtration. A prototype of the new design was made using 40-μm filter screens and compared against four commercially available filters for performance in pressure loss and gross air handling. Pre- and postfilter bubble counts for 5- and 10-mL bolus injections in an ex vivo test circuit were recorded using a Doppler ultrasound bubble counter. Statistical analysis of results for bubble volume reduction between test filters was performed with one-way repeated-measures analysis of variance using Bonferroni post hoc tests. Changes in filter performance with changes in microbubble load were also assessed with dependent t tests using the 5- and 10-mL bolus injections as the paired sample for each filter. Significance was set at p < .05. All filters in the test group were comparable in pressure loss performance, showing a range of 26-33 mmHg at a flow rate of 6 L/min. In gross air-handling studies, the prototype showed improved bubble volume reduction, reaching statistical significance with three of the four commercial filters. All test filters showed decreased performance in bubble volume reduction when the microbubble load was increased. Findings from this research support the underpinning theories of a sequential arterial-line filter design and suggest that improvements in microbubble filtration may be possible using this technique.
Choi, Hyun Duck; Ahn, Choon Ki; Karimi, Hamid Reza; Lim, Myo Taeg
2017-10-01
This paper studies delay-dependent exponential dissipative and l_2-l_∞ filtering problems for discrete-time switched neural networks (DSNNs) including time-delayed states. By introducing a novel discrete-time inequality, which is a discrete-time version of the continuous-time Wirtinger-type inequality, we establish new sets of linear matrix inequality (LMI) criteria such that discrete-time filtering error systems are exponentially stable with guaranteed performances in the exponential dissipative and l_2-l_∞ senses. The design of the desired exponential dissipative and l_2-l_∞ filters for DSNNs can be achieved by solving the proposed sets of LMI conditions. Via numerical simulation results, we show the validity of the proposed discrete-time filter design approach.
Analysis on regulation strategies for extending service life of hydropower turbines
NASA Astrophysics Data System (ADS)
Yang, W.; Norrlund, P.; Yang, J.
2016-11-01
In recent years, hydropower turbines have increasingly experienced fatigue due to more frequent regulation movements of governor actuators. The aim of this paper is to extend the service life of hydropower turbines by reasonably decreasing the guide vane (GV) movements with appropriate regulation strategies, e.g. settings of PI (proportional-integral) governor parameters and controller filters. The accumulated distance and number of GV movements are the two main indicators of this study. The core method is to simulate the long-term GV opening of Francis turbines with MATLAB/Simulink, based on a sequence of one-month measurements of the Nordic grid frequency. Basic theoretical formulas are also discussed and compared to the simulation results, showing reasonable correspondence. Firstly, a model of a turbine governor is discussed and verified, based on on-site measurements of a Swedish hydropower plant. Then, the influence of governor parameters is discussed. Effects of different settings of controller filters (e.g. dead zone, floating dead zone and linear filter) are also examined. Moreover, a change in GV movement might affect the quality of the frequency control. This is also monitored via frequency deviation characteristics, determined by elementary simulations of the Nordic power system. The results show how the regulation settings affect the GV movements and frequency quality, supplying suggestions for optimizing hydropower turbine operation to decrease wear and tear.
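How a dead zone reduces accumulated GV movement can be sketched with a simple PI governor acting on a synthetic frequency-deviation trace; all constants and the trace are illustrative, not from the paper's Simulink model.

```python
import numpy as np

# Sketch: a PI governor reacting to a (dead-zone filtered) frequency deviation.
# The synthetic deviation trace and all constants are illustrative.
rng = np.random.default_rng(5)
n, dt = 3600, 1.0                                    # one hour at 1 s steps
t = np.arange(n)
f_dev = 0.005 * np.sin(2 * np.pi * t / 600.0) + rng.normal(0, 0.002, n)  # Hz

def gv_opening(freq_dev, kp=1.0, ki=0.1, dead_zone=0.0):
    """PI response; inside the dead zone the governor sees zero error."""
    e = np.where(np.abs(freq_dev) > dead_zone, freq_dev, 0.0)
    return kp * e + ki * np.cumsum(e) * dt

def accumulated_distance(opening):
    """Total guide-vane travel, one of the wear indicators of the study."""
    return np.sum(np.abs(np.diff(opening)))

dist_no_dz = accumulated_distance(gv_opening(f_dev, dead_zone=0.0))
dist_dz = accumulated_distance(gv_opening(f_dev, dead_zone=0.01))
```

The trade-off the paper examines is that the wear reduction from a larger dead zone comes at the cost of degraded frequency-control quality.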
Exact reconstruction analysis/synthesis filter banks with time-varying filters
NASA Technical Reports Server (NTRS)
Arrowood, J. L., Jr.; Smith, M. J. T.
1993-01-01
This paper examines some of the analysis/synthesis issues associated with FIR time-varying filter banks where the filter bank coefficients are allowed to change in response to the input signal. Several issues are identified as being important in order to realize performance gains from time-varying filter banks in image coding applications. These issues relate to the behavior of the filters as transition from one set of filter banks to another occurs. Lattice structure formulations for the time varying filter bank problem are introduced and discussed in terms of their properties and transition characteristics.
Chen, Hsiao-Ping; Liao, Hui-Ju; Huang, Chih-Min; Wang, Shau-Chun; Yu, Sung-Nien
2010-04-23
This paper employs a chemometric technique that modifies the noise spectrum of a liquid chromatography-tandem mass spectrometry (LC-MS/MS) chromatogram between two consecutive wavelet-based low-pass filter procedures to improve the peak signal-to-noise (S/N) ratio enhancement. Although similar techniques using other sets of low-pass procedures, such as matched filters, have been published, the procedures developed in this work avoid the peak-broadening disadvantages inherent in matched filters. In addition, unlike Fourier transform-based low-pass filters, wavelet-based filters efficiently reject noise in the chromatograms directly in the time domain without distorting the original signals. In this work, the low-pass filtering procedures sequentially convolve the original chromatograms against each set of low-pass filters to produce the approximation coefficients, representing the low-frequency wavelets, of the first five resolution levels. Tedious trials to set threshold values for properly shrinking each wavelet are therefore no longer required. The noise modification technique multiplies one wavelet-based low-pass filtered LC-MS/MS chromatogram by an artificial chromatogram containing added thermal noise before applying the second wavelet-based low-pass filter. Because a low-pass filter cannot eliminate frequency components below its cut-off frequency, consecutive low-pass filter procedures alone cannot achieve more efficient peak S/N ratio improvement in LC-MS/MS chromatograms. In contrast, when the low-pass filtered LC-MS/MS chromatogram is conditioned with this multiplication before the second low-pass filter, much better improvement is achieved. The multiplication alters the noise frequency spectrum of the low-pass filtered chromatogram, which originally contains frequency components below the filter cut-off frequency, so that it spans a broader range.
When this modified noise spectrum shifts toward the high-frequency regime, the second low-pass filter provides better filtering efficiency and higher peak S/N ratios. For real LC-MS/MS chromatograms, two consecutive wavelet-based low-pass filters typically yield less than a 6-fold peak S/N ratio improvement, no better than a single wavelet-based low-pass filter; when the noise spectrum is modified between the two filters, the improvement typically reaches 25- to 40-fold. Linear standard curves based on the filtered LC-MS/MS signals are validated, and the filtered signals are reproducible. Determinations of very low concentration samples (S/N ratio about 7-9) are more accurate with the filtered signals than with the original signals. Copyright 2010 Elsevier B.V. All rights reserved.
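The basic wavelet-style low-pass step, keeping only approximation (low-frequency) content over several resolution levels, can be sketched on a synthetic chromatogram. An undecimated three-tap smoother stands in for the actual wavelet filters, and the multiplication-based noise modification is not reproduced here.

```python
import numpy as np

# Undecimated three-tap smoother standing in for the wavelet low-pass step;
# the chromatogram (one Gaussian peak plus noise) is synthetic.
rng = np.random.default_rng(6)
t = np.arange(1024, dtype=float)
peak = 100.0 * np.exp(-((t - 512.0) / 8.0) ** 2)     # chromatographic peak
chrom = peak + rng.normal(0, 5.0, t.size)            # noisy chromatogram

def lowpass_level(x):
    """One resolution level: keep only smoothed (approximation) content."""
    return np.convolve(x, [0.25, 0.5, 0.25], mode="same")

filtered = chrom.copy()
for _ in range(5):                                   # five resolution levels
    filtered = lowpass_level(filtered)

def snr(x):
    """Peak height over baseline noise (first quarter of the trace)."""
    return x.max() / x[:256].std()

snr_gain = snr(filtered) / snr(chrom)
```

The residual baseline noise after this step is exactly the below-cut-off component the abstract discusses, which repeating the same low-pass filter cannot remove.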
Development and evaluation of evidence-based nursing (EBN) filters and related databases*
Lavin, Mary A.; Krieger, Mary M.; Meyer, Geralyn A.; Spasser, Mark A.; Cvitan, Tome; Reese, Cordie G.; Carlson, Judith H.; Perry, Anne G.; McNary, Patricia
2005-01-01
Objectives: Difficulties encountered in the retrieval of evidence-based nursing (EBN) literature and recognition of terminology, research focus, and design differences between evidence-based medicine and nursing led to the realization that nursing needs its own filter strategies for evidence-based practice. This article describes the development and evaluation of filters that facilitate evidence-based nursing searches. Methods: An inductive, multistep methodology was employed. A sleep search strategy was developed for uniform application to all filters for filter development and evaluation purposes. An EBN matrix was next developed as a framework to illustrate conceptually the placement of nursing-sensitive filters along two axes: horizontally, an adapted nursing process, and vertically, levels of evidence. Nursing diagnosis, patient outcomes, and primary data filters were developed recursively. Through an interface with the PubMed search engine, the EBN matrix filters were inserted into a database that executes filter searches, retrieves citations, and stores and updates retrieved citation sets hourly. For evaluation purposes, the filters were subjected to sensitivity and specificity analyses and retrieval set comparisons. Once the evaluation was complete, hyperlinks were created providing access, through the EBN matrix, to any one of the completed filters or a combination of them. Subject searches on any topic may be applied to the filters, which interface with PubMed. Results: Sensitivity and specificity for the combined nursing diagnosis and primary data filter were 64% and 99%, respectively; for the patient outcomes filter, the results were 75% and 71%, respectively. Comparisons were made between the EBN matrix filters (nursing diagnosis and primary data) and PubMed's Clinical Queries (diagnosis and sensitivity) filters. Additional comparisons examined publication types and indexing differences.
Review articles accounted for the majority of the publication type differences, because “review” was accepted by the CQ but was “NOT'd” by the EBN filter. Indexing comparisons revealed that although the term “nursing diagnosis” is in Medical Subject Headings (MeSH), the nursing diagnoses themselves (e.g., sleep deprivation, disturbed sleep pattern) are not indexed as nursing diagnoses. As a result, abstracts deemed appropriate nursing diagnoses by the EBN filter were not accepted by the CQ diagnosis filter. Conclusions: The EBN filter capture of desired articles may be enhanced by further refinement to achieve a greater degree of filter sensitivity. Retrieval set comparisons revealed publication type differences and indexing issues. The EBN matrix filter “NOT'd” out “review,” while the CQ filter did not. Indexing issues were identified that explained the retrieval of articles deemed appropriate by the EBN filter matrix but not included in the CQ retrieval. These results have MeSH definition and indexing implications, as well as implications for clinical decision support in nursing practice. PMID:15685282
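Sensitivity and specificity as used in this evaluation reduce to simple set arithmetic over retrieved and gold-standard citation sets; the identifiers below are made up.

```python
# Gold-standard relevant set vs. the citation set a filter retrieves, over a
# small made-up corpus of PMIDs.
relevant = {101, 102, 103, 104}            # gold standard for the topic
retrieved = {101, 102, 105}                # what the filter returned
corpus = set(range(100, 110))              # whole evaluation corpus

tp = len(relevant & retrieved)             # relevant and retrieved
fn = len(relevant - retrieved)             # relevant but missed
fp = len(retrieved - relevant)             # retrieved but irrelevant
tn = len(corpus - relevant - retrieved)    # correctly rejected

sensitivity = tp / (tp + fn)               # recall of relevant citations
specificity = tn / (tn + fp)               # rejection of irrelevant ones
```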
Multidimensional signaling via wavelet packets
NASA Astrophysics Data System (ADS)
Lindsey, Alan R.
1995-04-01
This work presents a generalized signaling strategy for orthogonally multiplexed communication. Wavelet packet modulation (WPM) employs the basis functions from an arbitrary pruning of a full dyadic tree structured filter bank as orthogonal pulse shapes for conventional QAM symbols. The multi-scale modulation (MSM) and M-band wavelet modulation (MWM) schemes which have been recently introduced are handled as special cases, with the added benefit of an entire library of potentially superior sets of basis functions. The figures of merit are derived and it is shown that the power spectral density is equivalent to that for QAM (in fact, QAM is another special case) and hence directly applicable in existing systems employing this standard modulation. Two key advantages of this method are increased flexibility in time-frequency partitioning and an efficient all-digital filter bank implementation, making the WPM scheme more robust to a larger set of interferences (both temporal and sinusoidal) and computationally attractive as well.
The application of LQR synthesis techniques to the turboshaft engine control problem
NASA Technical Reports Server (NTRS)
Pfeil, W. H.; De Los Reyes, G.; Bobula, G. A.
1984-01-01
A power turbine governor was designed for a recent-technology turboshaft engine coupled to a modern, articulated rotor system using Linear Quadratic Regulator (LQR) and Kalman Filter (KF) techniques. A linear, state-space model of the engine and rotor system was derived for six engine power settings from flight idle to maximum continuous. An integrator was appended to the fuel flow input to reduce the steady-state governor error to zero. Feedback gains were calculated for the system states at each power setting using the LQR technique. The main rotor tip speed state is not measurable, so a Kalman filter based on the rotor model was used to estimate this state. The crossover frequency of the system was increased to 10 rad/s compared to 2 rad/s for a current governor. Initial computer simulations with a nonlinear engine model indicate a significant decrease in power turbine speed variation with the LQR governor compared to a conventional governor.
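The LQR gain-calculation step described above can be sketched on a toy two-state system. The matrices below are illustrative placeholders, not the engine/rotor model from the paper; the point is the Riccati-equation route to the feedback gains.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy 2-state plant (placeholder values, not the turboshaft model).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # control weighting

# Solve the continuous-time algebraic Riccati equation, then form the
# optimal state-feedback gain K = R^-1 B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# LQR guarantees a stable closed loop: all eigenvalues of A - B K lie
# in the open left half plane.
eigs = np.linalg.eigvals(A - B @ K)
assert np.all(eigs.real < 0)
```

In the paper's setting this calculation would be repeated at each of the six power settings, with the Kalman filter supplying the unmeasurable rotor tip speed state to the feedback law.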
Wilén, B M; Lumley, D; Mattsson, A; Mino, T
2006-01-01
The effect of rain events on effluent quality dynamics was studied at a full-scale activated sludge wastewater treatment plant whose process incorporates pre-denitrification in activated sludge with post-nitrification in trickling filters. The incoming wastewater flow varies significantly due to a combined sewer system. Changed flow conditions have an impact on the whole treatment process since the recirculation to the trickling filters is set by the hydraulic limitations of the secondary settlers. Apart from causing different hydraulic conditions in the plant, increased flow due to rain or snow-melting changes the properties of the incoming wastewater, which affects process performance and effluent quality, especially the particle removal efficiency. A comprehensive set of on-line and laboratory data were collected and analysed to assess the impact of rain events on the plant performance.
Välikangas, Tommi; Suomi, Tomi; Elo, Laura L
2017-05-31
Label-free mass spectrometry (MS) has developed into an important tool applied in various fields of biological and life sciences. Several software packages exist to process the raw MS data into quantified protein abundances, including open source and commercial solutions. Each package includes a set of unique algorithms for different tasks of the MS data processing workflow. While many of these algorithms have been compared separately, a thorough and systematic evaluation of their overall performance is missing. Moreover, systematic information is lacking about the number of missing values produced by the different proteomics packages and the capabilities of different data imputation methods to account for them. In this study, we evaluated the performance of five popular quantitative label-free proteomics software workflows using four different spike-in data sets. Our extensive testing included the number of proteins quantified and the number of missing values produced by each workflow, the accuracy of detecting differential expression and logarithmic fold change, and the effect of different imputation and filtering methods on the differential expression results. We found that the Progenesis software performed consistently well in the differential expression analysis and produced few missing values. The missing values produced by the other packages decreased their performance, but this difference could be mitigated using proper data filtering or imputation methods. Among the imputation methods, we found that local least squares (lls) regression imputation consistently increased the performance of the software in the differential expression analysis, and a combination of both data filtering and local least squares imputation increased performance the most in the tested data sets. © The Author 2017. Published by Oxford University Press.
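The local least squares idea the study found effective can be sketched in a few lines: a missing intensity in one row (protein) is predicted by regressing that row on its k most correlated complete rows. This is a simplified toy version for illustration; real implementations (e.g. llsImpute in the R pcaMethods package) are more general, and the toy matrix below is invented.

```python
import numpy as np

def lls_impute(X, k=2):
    """Fill NaNs in each incomplete row by local least squares regression
    on the k complete rows most correlated with it (simplified sketch)."""
    X = X.astype(float).copy()
    complete = ~np.isnan(X).any(axis=1)
    for i in np.where(~complete)[0]:
        obs = ~np.isnan(X[i])                      # observed columns of row i
        cand = np.where(complete)[0]
        cors = [abs(np.corrcoef(X[i, obs], X[j, obs])[0, 1]) for j in cand]
        order = cand[np.argsort(cors)[::-1][:k]]   # k most correlated neighbors
        A = X[np.ix_(order, obs)].T                # neighbors on observed columns
        w, *_ = np.linalg.lstsq(A, X[i, obs], rcond=None)
        X[i, ~obs] = X[np.ix_(order, ~obs)].T @ w  # predict the missing entries
    return X

# Toy intensity matrix: row 2 is proportional to row 0 but missing one value.
X = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],
              [1.5, 3.0, 4.5, np.nan]])
X_imp = lls_impute(X, k=2)  # the NaN is recovered as 6.0 here
```

Because the missing row is an exact multiple of a complete row in this toy data, the regression recovers the missing value exactly; on real proteomics data the prediction is only approximate.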
Coherent broadband sonar signal processing with the environmentally corrected matched filter
NASA Astrophysics Data System (ADS)
Camin, Henry John, III
The matched filter is the standard approach for coherently processing active sonar signals, where knowledge of the transmitted waveform is used in the detection and parameter estimation of received echoes. Matched filtering broadband signals provides higher levels of range resolution and reverberation noise suppression than can be realized through narrowband processing. Since theoretical processing gains are proportional to the signal bandwidth, it is typically desirable to utilize the widest band signals possible. However, as signal bandwidth increases, so do environmental effects that tend to decrease correlation between the received echo and the transmitted waveform. This is especially true for ultra wideband signals, where the bandwidth exceeds an octave or approximately 70% fractional bandwidth. This loss of coherence often results in processing gains and range resolution much lower than theoretically predicted. Wiener filtering, commonly used in image processing to improve distorted and noisy photos, is investigated in this dissertation as an approach to correct for these environmental effects. This improved signal processing, Environmentally Corrected Matched Filter (ECMF), first uses a Wiener filter to estimate the environmental transfer function and then again to correct the received signal using this estimate. This process can be viewed as a smarter inverse or whitening filter that adjusts behavior according to the signal to noise ratio across the spectrum. Though the ECMF is independent of bandwidth, it is expected that ultra wideband signals will see the largest improvement, since they tend to be more impacted by environmental effects. The development of the ECMF and demonstration of improved parameter estimation with its use are the primary emphases in this dissertation. Additionally, several new contributions to the field of sonar signal processing made in conjunction with the development of the ECMF are described. 
A new, nondimensional wideband ambiguity function is presented as a way to view the behavior of the matched filter with and without the decorrelating environmental effects; a new, integrated phase broadband angle estimation method is developed and compared to existing methods; and a new, asymptotic offset phase angle variance model is presented. Several data sets are used to demonstrate these new contributions. High-fidelity Sonar Simulation Toolset (SST) synthetic data is used to characterize the theoretical performance. Two in-water data sets were used to verify assumptions that were made during the development of the ECMF. Finally, a newly collected in-air data set containing ultra wideband signals was used in lieu of a cost-prohibitive underwater experiment to demonstrate the effectiveness of the ECMF at improving parameter estimates.
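The two-stage Wiener correction described above can be sketched on synthetic data: estimate the environmental transfer function from the received echo and the transmitted replica, then use the same Wiener form to correct the received spectrum before matched filtering. This is a conceptual toy (windowed sine pulse, invented three-tap channel, arbitrary regularization constant standing in for the inverse SNR), not the dissertation's implementation.

```python
import numpy as np

n = 256
s = np.sin(2 * np.pi * 0.1 * np.arange(n)) * np.hanning(n)  # transmitted replica
h = np.array([1.0, 0.5, 0.25])                              # toy environmental impulse response
r = np.convolve(s, h)[:n]                                   # received, channel-distorted echo

S, R = np.fft.rfft(s), np.fft.rfft(r)
eps = 1e-3  # regularizer playing the role of the inverse per-bin SNR

# Stage 1: Wiener estimate of the environmental transfer function.
H_est = R * np.conj(S) / (np.abs(S) ** 2 + eps)
# Stage 2: Wiener correction of the received spectrum using that estimate;
# the eps term keeps near-empty bins from being amplified.
R_corr = R * np.conj(H_est) / (np.abs(H_est) ** 2 + eps)
r_corr = np.fft.irfft(R_corr, n)
```

After correction the echo closely matches the transmitted replica, so a subsequent matched filter sees the coherence the channel had destroyed.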
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wildberger, Joachim Ernst, E-mail: wildberg@rad.rwth-aachen.de; Haage, Patrick; Bovelander, Jan
2005-04-15
Purpose. To evaluate the size and quantity of downstream emboli after thrombectomy using the Arrow-Trerotola Percutaneous Thrombolytic Device (PTD) with or without temporary filtration for extensive iliofemoral and iliocaval thrombi in an in vitro flow model. Methods. Iliocaval thrombi were simulated by clotted bovine blood in a flow model (semilucent silicone tubings, diameter 12-16 mm). Five experimental set-ups were performed 10 times each; thrombus particles and distribution were measured in the effluent. First, after retrograde insertion, mechanical thrombectomy was performed using the PTD alone. Then a modified self-expanding tulip-shaped temporary vena cava stent filter was inserted additionally at the beginning of each declotting procedure and removed immediately after the intervention without any manipulation within or at the filter itself. In a third step, the filter was filled with thrombus only. Here, two experiments were performed: careful closure within the flow circuit without any additional fragmentation procedure and running the PTD within the filter lumen, respectively. In the final set-up, mechanical thrombectomy was performed within the thrombus-filled tubing as well as in the filter lumen. The latter was closed at the end of the procedure and both devices were removed from the flow circuit. Results. Running the PTD in the flow circuit without filter protection led to a fragmentation of 67.9% (±7.14%) of the clot into particles ≤500 μm; restoration of flow was established in all cases. Additional placement of the filter safely allowed maceration of 82.9% (±5.59%) of the thrombus. Controlled closure of the thrombus-filled filter within the flow circuit without additional mechanical treatment broke up 75.2% (±10.49%), while additional mechanical thrombectomy by running the PTD within the occluded filter led to dissolution of 90.4% (±3.99%) of the initial clot.
In the final set-up, an overall fragmentation rate of 99.6% (±0.44%) was achieved. Conclusions. The combined use of the Arrow-Trerotola PTD and a temporary vena cava stent filter proved to be effective for even large clot removal in this experimental set-up.
Ma, Lifeng; Wang, Zidong; Lam, Hak-Keung; Kyriakoulis, Nikos
2017-11-01
In this paper, the distributed set-membership filtering problem is investigated for a class of discrete time-varying systems with an event-based communication mechanism over sensor networks. The system under consideration is subject to sector-bounded nonlinearity, unknown but bounded noises and sensor saturations. Each intelligent sensing node transmits the data to its neighbors only when a certain triggering condition is violated. By means of a set of recursive matrix inequalities, sufficient conditions are derived for the existence of the desired distributed event-based filter which is capable of confining the system state within certain ellipsoidal regions centered at the estimates. Within the established theoretical framework, two additional optimization problems are formulated: one is to seek the minimal ellipsoids (in the sense of matrix trace) for the best filtering performance, and the other is to maximize the triggering threshold so as to reduce the triggering frequency with satisfactory filtering performance. A numerically attractive chaos algorithm is employed to solve the optimization problems. Finally, an illustrative example is presented to demonstrate the effectiveness and applicability of the proposed algorithm.
Experiments with explicit filtering for LES using a finite-difference method
NASA Technical Reports Server (NTRS)
Lund, T. S.; Kaltenbach, H. J.
1995-01-01
The equations for large-eddy simulation (LES) are derived formally by applying a spatial filter to the Navier-Stokes equations. The filter width as well as the details of the filter shape are free parameters in LES, and these can be used both to control the effective resolution of the simulation and to establish the relative importance of different portions of the resolved spectrum. An analogous, but less well justified, approach to filtering is more or less universally used in conjunction with LES using finite-difference methods. In this approach, the finite support provided by the computational mesh as well as the wavenumber-dependent truncation errors associated with the finite-difference operators are assumed to define the filter operation. This approach has the advantage that it is 'automatic' in the sense that no explicit filtering operations need to be performed. While it is certainly convenient to avoid the explicit filtering operation, there are some practical considerations associated with finite-difference methods that favor the use of an explicit filter. Foremost among these considerations is the issue of truncation error. All finite-difference approximations have an associated truncation error that increases with increasing wavenumber. These errors can be quite severe for the smallest resolved scales, and these errors will interfere with the dynamics of the small eddies if no corrective action is taken. Years of experience at CTR with a second-order finite-difference scheme for high Reynolds number LES has repeatedly indicated that truncation errors must be minimized in order to obtain acceptable simulation results. While the potential advantages of explicit filtering are rather clear, there is a significant cost associated with its implementation. In particular, explicit filtering reduces the effective resolution of the simulation compared with that afforded by the mesh.
The resolution requirements for LES are usually set by the need to capture most of the energy-containing eddies, and if explicit filtering is used, the mesh must be enlarged so that these motions are passed by the filter. Given the high cost of explicit filtering, the following interesting question arises. Since the mesh must be expanded in order to apply the explicit filter, might it be better to take advantage of the increased resolution and simply perform an unfiltered simulation on the larger mesh? The cost of the two approaches is roughly the same, but the philosophy is rather different. In the filtered simulation, resolution is sacrificed in order to minimize the various forms of numerical error. In the unfiltered simulation, the errors are left intact, but they are concentrated at very small scales that could be dynamically unimportant from an LES perspective. Very little is known about this tradeoff and the objective of this work is to study this relationship in high Reynolds number channel flow simulations using a second-order finite-difference method.
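The effect of an explicit filter on the resolved spectrum can be illustrated with a simple top-hat (trapezoidal) filter on a 1-D periodic field. The filter choice, width, and test field here are illustrative only; the point is that an explicit filter damps the smallest resolved scales, exactly where finite-difference truncation error is worst, while leaving the energy-containing scales nearly untouched.

```python
import numpy as np

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + np.sin(16 * x)   # energetic large scale plus a marginally resolved scale

def tophat(u):
    """Discrete top-hat filter over one cell width (trapezoidal rule, periodic)."""
    return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

u_f = tophat(u)

# Transfer function of this filter: G(k) = 0.5 + 0.5*cos(2*pi*k/n).
# Near 1 for the large scale (k = 1), exactly 0.5 at k = 16.
amp_low = np.abs(np.fft.fft(u_f)[1]) / np.abs(np.fft.fft(u)[1])
amp_high = np.abs(np.fft.fft(u_f)[16]) / np.abs(np.fft.fft(u)[16])
```

The attenuation of the high-wavenumber mode is the "reduced effective resolution" the abstract refers to: to keep that mode, the mesh would have to be refined so the filter passes it.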
Filter Strategies for Mars Science Laboratory Orbit Determination
NASA Technical Reports Server (NTRS)
Thompson, Paul F.; Gustafson, Eric D.; Kruizinga, Gerhard L.; Martin-Mur, Tomas J.
2013-01-01
The Mars Science Laboratory (MSL) spacecraft had ambitious navigation delivery and knowledge accuracy requirements for landing inside Gale Crater. Confidence in the orbit determination (OD) solutions was increased by investigating numerous filter strategies for solving the orbit determination problem. We will discuss the strategy for the different types of variations: for example, data types, data weights, solar pressure model covariance, and estimating versus considering model parameters. This process generated a set of plausible OD solutions that were compared to the baseline OD strategy. Even implausible or unrealistic results were helpful in isolating sensitivities in the OD solutions to certain model parameterizations or data types.
Online frequency estimation with applications to engine and generator sets
NASA Astrophysics Data System (ADS)
Manngård, Mikael; Böling, Jari M.
2017-07-01
Frequency and spectral analysis based on the discrete Fourier transform is a fundamental task in signal processing and machine diagnostics. This paper aims at presenting computationally efficient methods for real-time estimation of stationary and time-varying frequency components in signals. A brief survey of the sliding time window discrete Fourier transform and the Goertzel filter is presented, and two filter banks, consisting of (i) sliding time window Goertzel filters and (ii) infinite impulse response narrow bandpass filters, are proposed for estimating instantaneous frequencies. The proposed methods show excellent results in both simulation studies and a case study using angular speed measurements of the crankshaft of a marine diesel engine-generator set.
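The Goertzel filter mentioned above evaluates a single DFT bin with one second-order recursion, which is what makes it attractive for tracking a known shaft frequency in real time. A minimal sketch (frame length and tone chosen for this demo):

```python
import math

def goertzel(x, k):
    """Magnitude of DFT bin k of sequence x via the Goertzel recursion."""
    n = len(x)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Combine the two final recursion states into the bin power.
    power = s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2
    return math.sqrt(power)

# A pure tone at bin 5 of a 64-sample frame: all its energy sits in bin 5
# (magnitude n/2 = 32), and an off-tone bin reads essentially zero.
n = 64
x = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]
mag_on, mag_off = goertzel(x, 5), goertzel(x, 7)
```

A sliding-window variant, as in the paper, would subtract the oldest sample's contribution as each new sample arrives rather than recomputing the whole frame.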
Durantin, Gautier; Scannella, Sébastien; Gateau, Thibault; Delorme, Arnaud; Dehais, Frédéric
2015-01-01
Working memory (WM) is a key executive function for operating aircraft, especially when pilots have to recall series of air traffic control instructions. There is a need to implement tools to monitor WM as its limitation may jeopardize flight safety. An innovative way to address this issue is to adopt a Neuroergonomics approach that merges knowledge and methods from Human Factors, System Engineering, and Neuroscience. A challenge of great importance for Neuroergonomics is to implement efficient brain imaging techniques to measure the brain at work and to design Brain Computer Interfaces (BCI). We used functional near infrared spectroscopy as it has already been successfully tested to measure WM capacity in complex environments with air traffic controllers (ATC), pilots, or unmanned vehicle operators. However, the extraction of relevant features from the raw signal in ecological environments is still a critical issue due to the complexity of implementing real-time signal processing techniques without a priori knowledge. We proposed to implement the Kalman filtering approach, a signal processing technique that is efficient when the dynamics of the signal can be modeled. We based our approach on the Boynton model of the hemodynamic response. We conducted a first experiment with nine participants involving a basic WM task to estimate the noise covariances of the Kalman filter. We then conducted a more ecological experiment in our flight simulator with 18 pilots who interacted with ATC instructions (two levels of difficulty). The data were processed with the same Kalman filter settings implemented in the first experiment. This filter was benchmarked against a classical pass-band IIR filter and a Moving Average Convergence Divergence (MACD) filter. Statistical analysis revealed that the Kalman filter was the most efficient at separating the two levels of load, increasing the observed effect size in prefrontal areas involved in WM.
In addition, the use of a Kalman filter increased the performance of the classification of WM levels based on brain signal. The results suggest that Kalman filter is a suitable approach for real-time improvement of near infrared spectroscopy signal in ecological situations and the development of BCI.
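The kind of Kalman filtering described above can be sketched with a scalar random-walk model: a slowly varying hemodynamic-like level observed through noisy samples. The process and measurement noise covariances (q, r) play the role of the covariances the authors estimated in their first experiment; the values and the synthetic signal here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
truth = np.convolve(rng.standard_normal(n), np.ones(50) / 50, mode="same")  # slow drift
meas = truth + 0.5 * rng.standard_normal(n)                                 # noisy channel

q, r = 1e-4, 0.25          # process and measurement noise covariances (assumed)
x, p = 0.0, 1.0            # state estimate and its variance
est = np.empty(n)
for i in range(n):
    p = p + q                         # predict (random-walk state model)
    k = p / (p + r)                   # Kalman gain
    x = x + k * (meas[i] - x)         # update with the new sample
    p = (1.0 - k) * p
    est[i] = x

# The filtered trace tracks the underlying level far better than the raw samples.
mse_filt = np.mean((est - truth) ** 2)
mse_raw = np.mean((meas - truth) ** 2)
```

A model-based filter like this runs sample by sample, which is why it suits the real-time BCI setting better than offline smoothing.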
LROC assessment of non-linear filtering methods in Ga-67 SPECT imaging
NASA Astrophysics Data System (ADS)
De Clercq, Stijn; Staelens, Steven; De Beenhouwer, Jan; D'Asseler, Yves; Lemahieu, Ignace
2006-03-01
In emission tomography, iterative reconstruction is usually followed by a linear smoothing filter to make such images more appropriate for visual inspection and diagnosis by a physician. This will result in a global blurring of the images, smoothing across edges and possibly discarding valuable image information for detection tasks. The purpose of this study is to investigate what advantages a non-linear, edge-preserving postfilter might offer for lesion detection in Ga-67 SPECT imaging. Image quality can be defined based on the task that has to be performed on the image. This study used LROC observer studies based on a dataset created by CPU-intensive GATE Monte Carlo simulations of a voxelized digital phantom. The filters considered in this study were a linear Gaussian filter, a bilateral filter, the Perona-Malik anisotropic diffusion filter and the Catte filtering scheme. The 3D MCAT software phantom was used to simulate the distribution of Ga-67 citrate in the abdomen. Tumor-present cases had a 1-cm diameter tumor randomly placed near the edges of the anatomical boundaries of the kidneys, bone, liver and spleen. Our data set was generated out of a single noisy background simulation using the bootstrap method, to significantly reduce the simulation time and to allow for a larger observer data set. Lesions were simulated separately and added to the background afterwards. These were then reconstructed with an iterative approach, using a sufficiently large number of MLEM iterations to establish convergence. The output of a numerical observer was used in a simplex optimization method to estimate an optimal set of parameters for each postfilter. No significant improvement was found for using edge-preserving filtering techniques over standard linear Gaussian filtering.
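The edge-preserving behavior that distinguishes the bilateral filter from the Gaussian can be shown in one dimension: its kernel weight combines spatial closeness with intensity similarity, so smoothing does not leak across edges. This is a generic 1-D sketch with arbitrary parameters, not the study's 3-D implementation.

```python
import numpy as np

def bilateral_1d(x, radius=3, sigma_s=2.0, sigma_r=0.3):
    """1-D bilateral filter: weights = spatial Gaussian * intensity Gaussian."""
    out = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        win = x[lo:hi]
        d = np.arange(lo, hi) - i
        w = (np.exp(-d ** 2 / (2 * sigma_s ** 2))
             * np.exp(-(win - x[i]) ** 2 / (2 * sigma_r ** 2)))
        out[i] = np.sum(w * win) / np.sum(w)
    return out

# Step edge plus mild noise: the flat regions get smoothed, but the step
# itself stays sharp because cross-edge samples receive near-zero weight.
rng = np.random.default_rng(2)
x = np.concatenate([np.zeros(32), np.ones(32)]) + 0.05 * rng.standard_normal(64)
y = bilateral_1d(x)
```

A plain Gaussian with the same spatial width would blur the step over several samples, which is exactly the loss of edge information the study set out to avoid.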
Bilateral filter regularized accelerated Demons for improved discontinuity preserving registration.
Demirović, D; Šerifović-Trbalić, A; Prljača, N; Cattin, Ph C
2015-03-01
The classical accelerated Demons algorithm uses Gaussian smoothing to penalize oscillatory motion in the displacement fields during registration. This well known method uses the L2 norm for regularization. Whereas the L2 norm is known for producing well-behaved, smooth deformation fields, it cannot properly deal with discontinuities often seen in the deformation field, as the regularizer cannot differentiate between discontinuities and smooth parts of the motion field. In this paper we propose replacing the Gaussian filter of the accelerated Demons algorithm with a bilateral filter. In contrast, the bilateral filter uses information not only from the displacement field but also from the image intensities. In this way we can smooth the motion field depending on image content, as opposed to the classical Gaussian filtering. By proper adjustment of two tunable parameters one can obtain more realistic deformations in cases of discontinuity. The proposed approach was tested on 2D and 3D datasets and showed significant improvements in the Target Registration Error (TRE) for the well known POPI dataset. Despite the increased computational complexity, the improved registration result is justified, particularly in abdominal data sets where discontinuities often appear due to sliding organ motion. Copyright © 2014 Elsevier Ltd. All rights reserved.
Using Kalman Filters to Reduce Noise from RFID Location System
Xavier, José; Reis, Luís Paulo; Petry, Marcelo
2014-01-01
Nowadays, there are many technologies that support location systems involving intrusive and nonintrusive equipment and also varying in terms of precision, range, and cost. However, developers sometimes neglect the noise introduced by these systems, which prevents these systems from reaching their full potential. Focused on this problem, in this research work a comparison study between three different filters was performed in order to reduce the noise introduced by a location system based on RFID UWB technology with an associated error of approximately 18 cm. To achieve this goal, a set of experiments was devised and executed using a miniature train moving at constant velocity in a scenario with two distinct shapes, linear and oval. Also, this train was equipped with a varying number of active tags. The results showed that the Kalman filter outperformed the other two filters. This filter also increased the performance of the location system by 15% and 12% for the linear and oval paths, respectively, when using one tag. For multiple tags on the oval path, similar results were obtained (11–13% improvement). PMID:24592186
Clasen, Thomas; Garcia Parra, Gloria; Boisson, Sophie; Collin, Simon
2005-10-01
Household water treatment is increasingly recognized as an effective means of reducing the burden of diarrheal disease among low-income populations without access to safe water. Oxfam GB undertook a pilot project to explore the use of household-based ceramic water filters in three remote communities in Colombia. In a randomized, controlled trial over a period of six months, the filters were associated with a 75.3% reduction in arithmetic mean thermotolerant coliforms (TTCs) (P < 0.0001). A total of 47.7% and 24.2% of the samples from the intervention group had no detectable TTCs/100 mL or conformed to World Health Organization limits for low risk (1-10 TTCs/100 mL), respectively, compared with 0.9% and 7.3% for control group samples. Overall, prevalence of diarrhea was 60% less among households using filters than among control households (odds ratio = 0.40, 95% confidence interval = 0.25, 0.63, P < 0.0001). However, the microbiologic performance and protective effect of the filters were not uniform throughout the study communities, suggesting the need to consider the circumstances of the particular setting before implementing this intervention.
An optimal search filter for retrieving systematic reviews and meta-analyses
2012-01-01
Background: Health-evidence.ca is an online registry of systematic reviews evaluating the effectiveness of public health interventions. Extensive searching of bibliographic databases is required to keep the registry up to date. However, search filters have been developed to assist in searching the extensive amount of published literature indexed. Search filters can be designed to find literature related to a certain subject (i.e. content-specific filter) or particular study designs (i.e. methodological filter). The objective of this paper is to describe the development and validation of the health-evidence.ca Systematic Review search filter and to compare its performance to other available systematic review filters. Methods: This analysis of search filters was conducted in MEDLINE, EMBASE, and CINAHL. The performance of thirty-one search filters in total was assessed. A validation data set of 219 articles indexed between January 2004 and December 2005 was used to evaluate performance on sensitivity, specificity, precision and the number needed to read for each filter. Results: Nineteen of 31 search filters were effective in retrieving a high level of relevant articles (sensitivity scores greater than 85%). The majority achieved a high degree of sensitivity at the expense of precision and yielded large result sets. The main advantage of the health-evidence.ca Systematic Review search filter in comparison to the other filters was that it maintained the same level of sensitivity while reducing the number of articles that needed to be screened. Conclusions: The health-evidence.ca Systematic Review search filter is a useful tool for identifying published systematic reviews, with further screening to identify those evaluating the effectiveness of public health interventions. The filter that narrows the focus saves considerable time and resources during updates of this online resource, without sacrificing sensitivity. PMID:22512835
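The four performance measures used in the evaluation follow directly from a 2x2 retrieval table. The counts below are invented for the demo (only the validation-set size of 219 relevant articles echoes the abstract); the formulas are the standard ones.

```python
# Hypothetical 2x2 retrieval table for one search filter.
tp, fn = 190, 29        # relevant articles retrieved / missed (190 + 29 = 219)
fp, tn = 810, 8971      # irrelevant articles retrieved / correctly excluded

sensitivity = tp / (tp + fn)   # share of relevant articles the filter finds
specificity = tn / (tn + fp)   # share of irrelevant articles it excludes
precision = tp / (tp + fp)     # share of retrieved articles that are relevant
nnr = 1.0 / precision          # number needed to read per relevant article found

print(round(sensitivity, 3), round(specificity, 3), round(precision, 3), round(nnr, 2))
```

The trade-off the paper describes is visible here: a filter can keep sensitivity above 85% while a low precision drives the number needed to read, and hence screening workload, up.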
Jacobson, Robert B.; Parsley, Michael J.; Annis, Mandy L.; Colvin, Michael E.; Welker, Timothy L.; James, Daniel A.
2016-01-20
The initial set of candidate hypotheses provides a useful starting point for quantitative modeling and adaptive management of the river and species. We anticipate that the set of working management hypotheses will change as adaptive management progresses. More importantly, hypotheses that have been filtered out of our multistep process are not discarded permanently. These filtered hypotheses are archived, and if existing hypotheses are determined to be inadequate to explain observed population dynamics, new hypotheses can be created or filtered hypotheses can be reinstated.
Electronic filters, signal conversion apparatus, hearing aids and methods
NASA Technical Reports Server (NTRS)
Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)
1994-01-01
An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.
Comparison of filtering methods for extracellular gastric slow wave recordings.
Paskaranandavadivel, Niranchan; O'Grady, Gregory; Du, Peng; Cheng, Leo K
2013-01-01
Extracellular recordings are used to define gastric slow wave propagation. Signal filtering is a key step in the analysis and interpretation of extracellular slow wave data; however, there is controversy and uncertainty regarding the appropriate filtering settings. This study investigated the effect of various standard filters on the morphology and measurement of extracellular gastric slow waves. Experimental extracellular gastric slow waves were recorded from the serosal surface of the stomach from pigs and humans. Four digital filters were applied to extracellular gastric slow wave signals to compare the changes temporally (morphology of the signal) and spectrally (signals in the frequency domain): a finite impulse response filter (0.05-1 Hz), a Savitzky-Golay filter (0-1.98 Hz), a Bessel filter (2-100 Hz), and a Butterworth filter (5-100 Hz). The extracellular slow wave activity is represented in the frequency domain by a dominant frequency and its associated harmonics of diminishing power. Optimal filters apply cutoff frequencies consistent with the dominant slow wave frequency (3-5 cpm) and main harmonics (up to ≈ 2 Hz). Applying filters with cutoff frequencies above or below the dominant and harmonic frequencies was found to distort or eliminate slow wave signal content. Investigators must be cognizant of these optimal filtering practices when detecting, analyzing, and interpreting extracellular slow wave recordings. The use of frequency domain analysis is important for identifying the dominant frequency and harmonics of the signal of interest. Capturing the dominant frequency and major harmonics of the slow wave is crucial for accurate representation of slow wave activity in the time domain. Standardized filter settings should be determined. © 2012 Blackwell Publishing Ltd.
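The central finding, that cutoffs above the dominant frequency eliminate slow wave content, can be reproduced on a synthetic 3 cpm (0.05 Hz) wave with two harmonics. The signal, filter orders, and sampling rate below are illustrative choices, not the study's recordings; the band edges mirror two of the filter settings compared above.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 512.0                      # sampling rate, Hz (illustrative)
t = np.arange(0, 120, 1 / fs)   # two minutes of signal
f0 = 0.05                       # dominant frequency: 3 cycles per minute
slow = (np.sin(2 * np.pi * f0 * t)
        + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
        + 0.25 * np.sin(2 * np.pi * 3 * f0 * t))

# Band spanning the dominant frequency and harmonics vs. a band above them.
sos_band = butter(2, [0.04, 1.0], btype="bandpass", fs=fs, output="sos")
sos_high = butter(2, [5.0, 100.0], btype="bandpass", fs=fs, output="sos")

kept = sosfiltfilt(sos_band, slow)   # retains the slow wave
lost = sosfiltfilt(sos_high, slow)   # eliminates nearly all slow wave content
```

Zero-phase filtering (`sosfiltfilt`) is used so the comparison reflects amplitude content only, without phase distortion of the waveform morphology.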
Indicator Expansion with Analysis Pipeline
2015-01-13
INTERNAL FILTER trackInfectedHosts
    FILTER badTraffic
    SIP infectedHosts 1 DAY
END INTERNAL FILTER
Step 3: watch where infected hosts go
FILTER ...nonWhiteListPostInfected
    SIP IN LIST infectedHosts
    DIP NOT IN LIST safePopularIPs.set
END FILTER
Step 4 & 5: Count Hosts Per IP and Alert
EVALUATION ...
    CHECK THRESHOLD
        DISTINCT SIP > 50
        TIME WINDOW 36 HOURS
    END CHECK
END EVALUATION
Step 6: Report Expanded Indicators
LIST CONFIGURATION secondLevelIPs
Recalibrated Equations for Determining Effect of Oil Filtration on Rolling Bearing Life
NASA Technical Reports Server (NTRS)
Needelman, William M.; Zaretsky, Erwin V.
2014-01-01
In 1991, Needelman and Zaretsky presented a set of empirically derived equations for bearing fatigue life (adjustment) factors (LFs) as a function of oil filter ratings. These equations for life factors were incorporated into the reference book, "STLE Life Factors for Rolling Bearings." These equations were normalized (LF = 1) to a 10-micrometer filter rating at Beta(sub x) = 200 (normal cleanliness) as it was then defined. Over the past 20 years, these life factors based on oil filtration have been used in conjunction with ANSI/ABMA standards and bearing computer codes to predict rolling bearing life. Also, additional experimental studies have been made by other investigators into the relationship between rolling bearing life and the size, number, and type of particle contamination. During this time period filter ratings have also been revised and improved, and they now use particle counting calibrated to a new National Institute of Standards and Technology (NIST) reference material, NIST SRM 2806, 1997. This paper reviews the relevant bearing life studies and describes the new filter ratings. New filter ratings, Beta(sub x(c)) = 200 and Beta(sub x(c)) = 1000, are benchmarked to old filter ratings, Beta(sub x) = 200, and vice versa. Two separate sets of filter LF values were derived based on the new filter ratings for roller bearings and ball bearings, respectively. Filter LFs can be calculated for the new filter ratings.
All-digital GPS receiver mechanization
NASA Astrophysics Data System (ADS)
Ould, P. C.; van Wechel, R. J.
The paper describes the all-digital baseband correlation processing of GPS signals, which is characterized by (1) a potential for improved antijamming performance, (2) fast acquisition by a digital matched filter, (3) reduction of adjustment, (4) increased system reliability, and (5) provision of a basis for the realization of a high degree of VLSI potential for the development of small economical GPS sets. The basic technical approach consists of a broadband fix-tuned RF converter followed by a digitizer; digital-matched-filter acquisition section; phase- and delay-lock tracking via baseband digital correlation; software acquisition logic and loop filter implementation; and all-digital implementation of the feedback numerical controlled oscillators and code generator. Broadband in-phase and quadrature tracking is performed by an arctangent angle detector followed by a phase-unwrapping algorithm that eliminates false locks induced by sampling and data bit transitions, and yields a wide pull-in frequency range approaching one-fourth of the loop iteration frequency.
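The arctangent angle detector followed by phase unwrapping described above can be illustrated generically with NumPy. The carrier frequency, noise level, and sample rate are invented for the example; this is not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 1, 1e-3)                 # 1 kHz sampling (assumed)
true_phase = 2 * np.pi * 3.0 * t          # phase ramp of a 3 Hz carrier

# In-phase and quadrature samples with a little additive noise.
i_samp = np.cos(true_phase) + 0.01 * rng.standard_normal(t.size)
q_samp = np.sin(true_phase) + 0.01 * rng.standard_normal(t.size)

wrapped = np.arctan2(q_samp, i_samp)      # arctangent detector output, (-pi, pi]
unwrapped = np.unwrap(wrapped)            # remove the 2*pi discontinuities
```

Without unwrapping, the detector output jumps by 2π whenever the phase crosses ±π; removing those jumps is one ingredient in avoiding the false locks the abstract mentions.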
Shekarchi, Sayedali; Hallam, John; Christensen-Dalsgaard, Jakob
2013-11-01
Head-related transfer functions (HRTFs) are generally large datasets, which can be an important constraint for embedded real-time applications. A method is proposed here to reduce redundancy and compress the datasets. In this method, HRTFs are first compressed by conversion into autoregressive-moving-average (ARMA) filters whose coefficients are calculated using Prony's method. Such filters are specified by a few coefficients which can generate the full head-related impulse responses (HRIRs). Next, Legendre polynomials (LPs) are used to compress the ARMA filter coefficients. LPs are derived on the sphere and form an orthonormal basis set for spherical functions. Higher-order LPs capture increasingly fine spatial details. The number of LPs needed to represent an HRTF, therefore, is indicative of its spatial complexity. The results indicate that compression ratios can exceed 98% while maintaining a spectral error of less than 4 dB in the recovered HRTFs.
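A minimal sketch of the first compression step, fitting an ARMA (IIR) filter to an impulse response with Prony's method, might look as follows. The function name and filter orders are illustrative, this is not the authors' code, and the Legendre-polynomial stage is omitted:

```python
import numpy as np
from scipy import signal

def prony(h, nb, na):
    """Fit an ARMA filter b/a of orders (nb, na) to impulse response h."""
    n = len(h)
    # Linear prediction: past the numerator's reach, h obeys the AR recursion
    # h[m] = -sum_k a[k] h[m-k], which gives a least-squares system for a.
    rows = np.array([[h[m - k] for k in range(1, na + 1)]
                     for m in range(nb + 1, n)])
    a_tail, *_ = np.linalg.lstsq(rows, -np.asarray(h[nb + 1:]), rcond=None)
    a = np.concatenate(([1.0], a_tail))
    b = np.convolve(a, h)[: nb + 1]   # B(z) = A(z) H(z), truncated
    return b, a

# Round-trip check: recover a known IIR filter from its impulse response.
b_true, a_true = signal.butter(2, 0.3)
imp = np.zeros(64)
imp[0] = 1.0
h = signal.lfilter(b_true, a_true, imp)
b_fit, a_fit = prony(h, 2, 2)
h_fit = signal.lfilter(b_fit, a_fit, imp)
```

The few fitted coefficients regenerate the full impulse response, which is the source of the compression the abstract describes.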
Copper emissions from a high volume air sampler
NASA Technical Reports Server (NTRS)
King, R. B.; Toma, J.
1975-01-01
High volume air samplers (hi vols) are described which utilize a brush-type electric motor to power the fans used for pulling air through the filter. Anomalously high copper values were attributed to removal of copper from the commutator into the air stream due to arcing of the brushes, with recirculation through the filter. Duplicate hi vols were set up under three operating conditions: (1) unmodified; (2) gasketed to prevent internal recirculation; and (3) gasketed and provided with a pipe to transport the motor exhaust some 20 feet away. The results of 5 days' operation demonstrate that hi vols can suddenly start emitting increased amounts of copper with no discernible operational indication, and that recirculation and capture on the filter can take place. Copper levels found with hi vols whose exhaust was discharged at a distance downwind were among the lowest found; discharging the exhaust downwind thus appears to provide a satisfactory solution to copper contamination.
Sowa-Staszczak, Anna; Lenda-Tracz, Wioletta; Tomaszuk, Monika; Głowa, Bogusław; Hubalewska-Dydejczyk, Alicja
2013-01-01
Somatostatin receptor scintigraphy (SRS) is a useful tool in the assessment of GEP-NET (gastroenteropancreatic neuroendocrine tumor) patients. The choice of appropriate settings of image reconstruction parameters is crucial in the interpretation of these images. The aim of the study was to investigate how the GEP-NET lesion signal to noise ratio (TCS/TCB) depends on different reconstruction settings for Flash 3D software (Siemens). SRS results of 76 randomly selected patients with confirmed GEP-NET were analyzed. For SPECT studies the data were acquired using standard clinical settings 3-4 h after the injection of 740 MBq 99mTc-[EDDA/HYNIC] octreotate. To obtain final images the OSEM 3D Flash reconstruction with different settings and FBP reconstruction were used. First, the TCS/TCB ratio in voxels was analyzed for different combinations of the number of subsets and the number of iterations of the OSEM 3D Flash reconstruction. Secondly, the same ratio was analyzed for different parameters of the Gaussian filter (with FWHM 2-4 times the pixel size). The influence of scatter correction on the TCS/TCB ratio was also investigated. With an increasing number of subsets and iterations, an increase of the TCS/TCB ratio was observed. With increasing Gaussian filter FWHM, a decrease of the TCS/TCB ratio was observed. The use of scatter correction slightly decreases the values of this ratio. The OSEM algorithm provides a meaningfully better reconstruction of the SRS SPECT study as compared to the FBP technique. A high number of subsets improves image quality (images are smoother), and an increasing number of iterations gives better contrast and sharper shapes of lesions and organs. The choice of reconstruction parameters is a compromise between the qualitative appearance of the image and its quantitative accuracy, and should not be modified when comparing multiple studies of the same patient.
Electromechanical Frequency Filters
NASA Astrophysics Data System (ADS)
Wersing, W.; Lubitz, K.
Frequency filters select signals with a frequency inside a definite frequency range or band and reject signals outside this band, a function traditionally afforded by a combination of L-C resonators. The fundamental principle of all modern frequency filters is the constructive interference of travelling waves. If a filter is built from coupled resonators, this interference occurs as a result of the successive wave reflection at the resonators' ends. In this case, the center frequency f_c of a filter, e.g., one made up of symmetrical λ/2-resonators of length l, is given by f_c = f_r = v_{ph}/λ = v_{ph}/2l, where v_ph is the phase velocity of the wave. This clearly shows the big advantage of acoustic waves for filter applications in comparison to electromagnetic waves: because v_ph of acoustic waves in solids is about 10^4-10^5 times smaller than that of electromagnetic waves, much smaller filters can be realised. Today, piezoelectric materials and processing technologies exist such that electromechanical resonators and filters can be produced in the frequency range from 1 kHz up to 10 GHz. Further requirements for frequency filters, such as low losses (high resonator Q) and low temperature coefficients of frequency constants, can also be fulfilled with these filters. Important examples are quartz-crystal resonators and filters (1 kHz-200 MHz) as discussed in Chap. 2, electromechanical channel filters (50 kHz and 130 kHz) for long-haul communication systems as discussed in this section, surface acoustic wave (SAW) filters (20 MHz-5 GHz) as discussed in Chap. 14, and thin film bulk acoustic resonators (FBAR) and filters (500 MHz-10 GHz) as discussed in Chap. 15.
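As a quick numeric check of the relation f_c = v_ph/2l quoted above; the phase velocity and resonator length are illustrative values only:

```python
# Center frequency of a symmetrical half-wavelength resonator: f_c = v_ph / (2*l)
v_ph = 3800.0    # acoustic phase velocity in m/s (illustrative value for a solid)
l = 0.4e-3       # resonator length: 0.4 mm (illustrative)
f_c = v_ph / (2 * l)   # = 4.75e6, i.e. a 4.75 MHz resonator from a sub-mm part
```

An electromagnetic resonator at the same frequency would need to be roughly 10^4-10^5 times longer, which is the miniaturization advantage the text describes.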
A dense grid of narrow bandpass steep edge filters for the JST/T250 telescope: summary of results
NASA Astrophysics Data System (ADS)
Brauneck, U.; Sprengard, R.; Bourquin, S.; Marín-Franch, A.
2017-09-01
On the Javalambre mountain in Spain, the Centro de Estudios de Fisica del Cosmos de Aragon (CEFCA) has set up a new wide field telescope, the JST/T250: a 2.55 m telescope with a plate scale of 22.67"/mm and a 3° diameter field of view. To conduct a photometric sky survey, a large format mosaic camera made of 14 individual CCDs is used in combination with filter trays containing 14 filters each, every filter 101.7 x 96.5 mm in size. For this instrument, SCHOTT manufactured 56 specially designed steep-edged bandpass interference filters, which were recently completed. The filter set consists of bandpass filters in the range between 348.5 nm and 910 nm and a longpass filter at 915 nm. Most of the filters have a FWHM of 14.5 nm and blocking between 250 and 1050 nm with an optical density of OD5. Absorptive color glass substrates in combination with interference filters were used to minimize residual reflection in order to avoid ghost images. In spite of containing absorptive elements, the filters show the maximum possible transmission. This was achieved by using magnetron sputtering for the filter coating process. The most important requirement for the continuous photometric survey is the tight tolerancing of the central wavelengths and FWHM of the filters; this ensures that each bandpass has a defined overlap with its neighbors. In addition, the blocking of the filters is better than OD5 in the range 250-1050 nm. High image quality required a low transmitted wavefront error (λ/4 locally and λ/2 on the whole aperture), which was achieved even when combining 2 or 3 substrates. We report on the spectral and interferometric results measured on the whole set of filters.
Spectral filters for laser communications
NASA Technical Reports Server (NTRS)
Shaik, K.
1991-01-01
Optical communication systems must perform reliably under strong background light interference. Since the transmitting lasers operate within a narrow spectral band, high signal-to-noise ratios can be achieved when narrowband spectral optical filters are used to reject out-of-band light. Here, a set of general requirements for such filters is developed, and an overview is given of suitable spectral filter technologies for optical communication systems.
An Integrated approach to the Space Situational Awareness Problem
2016-12-15
data coming from the sensors. We developed particle-based Gaussian Mixture Filters that are immune to the "curse of dimensionality"/"particle depletion" problem inherent in particle filtering. This method maps the data assimilation/filtering problem into an unsupervised learning problem. Keywords: Gaussian Mixture Filters; particle depletion; Finite Set Statistics.
NASA Technical Reports Server (NTRS)
Freedman, A. P.; Steppe, J. A.
1995-01-01
The Jet Propulsion Laboratory Kalman Earth Orientation Filter (KEOF) uses several of the Earth rotation data sets available to generate optimally interpolated UT1 and LOD series to support spacecraft navigation. This paper compares use of various data sets within KEOF.
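KEOF itself is a full multi-series Earth-orientation filter; purely as a hedged illustration of the underlying idea, a scalar random-walk Kalman filter that optimally blends a noisy measurement series can be sketched as:

```python
import numpy as np

def kalman_1d(z, r, q, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter: measurements z with noise variance r,
    process noise variance q. Returns the filtered state estimates."""
    x, p = x0, p0
    estimates = []
    for zk in z:
        p = p + q                  # predict: the state may drift (random walk)
        k = p / (p + r)            # Kalman gain balances prior vs. measurement
        x = x + k * (zk - x)       # update toward the new measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Blend 200 noisy measurements of a constant (noise variance r = 0.01).
rng = np.random.default_rng(0)
z = 1.0 + 0.1 * rng.standard_normal(200)
est = kalman_1d(z, r=0.01, q=1e-6)
```

Extending the gain computation to vector states and several measurement series with different error models is what allows a filter like KEOF to produce optimally interpolated UT1 and LOD series.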
Assessing mass change trends in GRACE models
NASA Astrophysics Data System (ADS)
Siemes, C.; Liu, X.; Ditmar, P.; Revtova, E.; Slobbe, C.; Klees, R.; Zhao, Q.
2009-04-01
The DEOS Mass Transport model, release 1 (DMT-1), has been recently presented to the scientific community. The model is based on GRACE data and consists of sets of spherical harmonic coefficients to degree 120, which are estimated once per month. Currently, the DMT-1 model covers the time span from Feb. 2003 to Dec. 2006. The high spatial resolution of the model could be achieved by applying a statistically optimal Wiener-type filter, which is superior to standard filtering techniques. The optimal Wiener-type filter is a regularization-type filter which makes full use of the variance/covariance matrices of the sets of spherical harmonic coefficients. It can be shown that applying this filter is equivalent to introducing an additional set of observations: Each set of spherical harmonic coefficients is assumed to be zero. The variance/covariance matrix of this information is chosen according to the signal contained within the sets of spherical harmonic coefficients, expressed in terms of equivalent water layer thickness in the spatial domain, with respect to its variations in time. It will be demonstrated that DMT-1 provides a much better localization and more realistic amplitudes than alternative filtered models. In particular, we will consider a lower maximum degree of the spherical harmonic expansion (e.g. 70), as well as standard filters like an isotropic Gaussian filter. For the sake of a fair comparison, we will use the same GRACE observations as well as the same method for the inversion of the observations to obtain the alternative filtered models. For the inversion method, we will choose the three-point range combination approach. 
Thus, we will compare four different models: (1) the GRACE solution with maximum degree 120, filtered by the optimal Wiener-type filter (the DMT-1 model); (2) the GRACE solution with maximum degree 120, filtered by a standard filter; (3) the GRACE solution with maximum degree 70, filtered by the optimal Wiener-type filter; and (4) the GRACE solution with maximum degree 70, filtered by a standard filter. Within the comparison, we will focus on the amplitude of long-term mass change signals with respect to spatial resolution. The challenge in recovering such signals from GRACE-based solutions results from the fact that the solutions must be filtered, and filtering always smoothes not only noise but, to some extent, also signal. Since the observation density is much higher near the poles than at the equator, due to the orbits of the GRACE satellites, we expect that the magnitude of estimated mass change signals in polar areas is less underestimated than in equatorial areas. For this reason we will investigate trends at locations in equatorial areas as well as trends at locations in polar areas. In particular, we will investigate Lake Victoria, Lake Malawi and Lake Tanganyika, which are all located in Eastern Africa, near the equator. Furthermore, we will show trends at two locations on the south-east coast of Greenland, and at Abbot Ice Shelf and Marie Byrd Land in Antarctica. For validation, we use water level variations in Lake Victoria (69,000 km2), Lake Malawi (29,000 km2) and Lake Tanganyika (33,000 km2) as ground truth. The water level, which is measured by satellite radar altimetry, decreases by approximately 47 cm in Lake Victoria, 42 cm in Lake Malawi and 30 cm in Lake Tanganyika over the period from Feb. 2003 to Dec. 2006. Because all three lakes are located in tropical and subtropical climate zones, the mass change signal will consist of large seasonal variations in addition to the trend component we are interested in.
However, the amplitude of estimated seasonal variations can also be used as an indicator of the quality of the models within the comparison. Since the lakes' areas are at the edge of the spatial resolution GRACE data can provide, they are a good example of the advantages of high-resolution mass change models like DMT-1.
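The statistically optimal Wiener-type filtering described above uses full variance/covariance matrices of the monthly solutions; a diagonal (per-coefficient) simplification conveys the idea, with each coefficient damped by its signal-to-(signal+noise) variance ratio. The variance models below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
degrees = np.arange(2, 121)
signal_var = 1.0 / degrees**3          # Kaula-like decay of signal power (assumed)
noise_var = 1e-7 * degrees**2          # noise growing with degree (assumed)

coeffs_true = rng.standard_normal(degrees.size) * np.sqrt(signal_var)
coeffs_obs = coeffs_true + rng.standard_normal(degrees.size) * np.sqrt(noise_var)

# Wiener gain: near 1 where signal dominates, near 0 where noise dominates.
gain = signal_var / (signal_var + noise_var)
coeffs_filtered = gain * coeffs_obs
```

In the actual model the scalar gains become matrix operations on the solutions' covariance matrices, which is what gives the filter its advantage over an isotropic Gaussian filter.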
On-sky characterisation of the VISTA NB118 narrow-band filters at 1.19 μm
NASA Astrophysics Data System (ADS)
Milvang-Jensen, Bo; Freudling, Wolfram; Zabl, Johannes; Fynbo, Johan P. U.; Møller, Palle; Nilsson, Kim K.; McCracken, Henry Joy; Hjorth, Jens; Le Fèvre, Olivier; Tasca, Lidia; Dunlop, James S.; Sobral, David
2013-12-01
Observations of the high redshift Universe through narrow-band filters have proven very successful in the last decade. The 4-m VISTA telescope, equipped with the wide-field camera VIRCAM, offers a major step forward in wide-field near-infrared imaging, and in order to utilise VISTA's large field-of-view and sensitivity, the Dark Cosmology Centre provided a set of 16 narrow-band filters for VIRCAM. These NB118 filters are centered at a wavelength near 1.19 μm in a region with few airglow emission lines. The filters allow the detection of Hα emitters at z = 0.8, Hβ and [O iii] emitters at z ≈ 1.4, [O ii] emitters at z = 2.2, and Lyα emitters at z = 8.8. Based on guaranteed time observations of the COSMOS field we here present a detailed description and characterization of the filters and their performance. In particular we provide sky-brightness levels and depths for each of the 16 detector/filter sets and find that some of the filters show signs of a red leak. We identify a sample of 2 × 10³ candidate emission-line objects in the data. Cross-correlating this sample with a large set of galaxies with known spectroscopic redshifts we determine the "in situ" passbands of the filters and find that they are shifted by about 3.5-4 nm (corresponding to 30% of the filter width) to the red compared to the expectation based on the laboratory measurements. Finally, we present an algorithm to mask out persistence in VIRCAM data. Scientific results extracted from the data will be presented separately. Based on observations collected at the European Southern Observatory, Chile, as part of programme 284.A-5026 (VISTA NB118 GTO, PI Fynbo) and 179.A-2005 (UltraVISTA, PIs Dunlop, Franx, Fynbo, & Le Fèvre).
A Photometric Observing Program at the VATT: Setting Up a Calibration Field
NASA Astrophysics Data System (ADS)
Davis Philip, A. G.; Boyle, R. P.; Janusz, R.
2009-05-01
Philip and Boyle have been making Strömgren and then Strömvil photometric observations of open and globular clusters at the Vatican Advanced Technology Telescope located on Mt. Graham in Arizona. Our aim is to obtain CCD photometric indices good to 0.01 magnitude. Indices of this quality can later be analyzed to yield estimates of temperature, luminosity and metallicity. But we have found that the CCD chip does not yield photometry of this quality without further corrections. Our most observed cluster is the open cluster M 67, which is also very well observed in the literature. We took the best published values and created a set of "standard" stars for our field. Taking our CCD results we could calculate deltas, as a function of position on the chip, which we then applied to all the CCD frames that we obtained. With this procedure we were able to obtain a precision of 0.01 magnitudes in all the fields that we observed. When we started we were able to use the "A" two-inch square Strömgren four-color set from KPNO. Later the Vatican Observatory bought a set of 3.48-inch square Strömgren filters. The Vatican Observatory also had a set of circular Vilnius filters, as well as an X filter. These eight filters made up our Strömvil set.
Beam energy tracking system on Optima XEx high energy ion implanter
DOE Office of Scientific and Technical Information (OSTI.GOV)
David, Jonathan; Satoh, Shu; Wu Xiangyang
2012-11-06
The Axcelis Optima XEx high energy implanter is an RF linac-based implanter with 12 RF resonators for beam acceleration. Even though each acceleration field is an alternating, sinusoidal RF field, the well known phase-focusing principle produces a beam with a sharp quasi-monoenergetic energy spectrum. A magnetic energy filter after the linac further attenuates the low energy continuum in the energy spectrum often associated with RF acceleration. The final beam energy is a function of the phase and amplitude of the 12 resonators in the linac. When tuning a beam, the magnetic energy filter is set to the desired energy, and each linac parameter is tuned to maximize the transmission through the filter. Once a beam is set up, all the parameters are stored in a recipe, which can be easily tuned and has proven to be quite repeatable. The magnetic field setting of the energy filter selects the beam energy from the RF linac accelerator, and in-situ verification of beam energy in addition to the magnetic energy filter setting has long been desired. An independent energy tracking system was developed for this purpose, using the existing electrostatic beam scanner as a deflector to construct an in-situ electrostatic energy analyzer. This paper will describe the system and performance of the beam energy tracking system.
PERFORMANCE OF OVID MEDLINE SEARCH FILTERS TO IDENTIFY HEALTH STATE UTILITY STUDIES.
Arber, Mick; Garcia, Sonia; Veale, Thomas; Edwards, Mary; Shaw, Alison; Glanville, Julie M
2017-01-01
This study was designed to assess the sensitivity of three Ovid MEDLINE search filters developed to identify studies reporting health state utility values (HSUVs), to improve the performance of the best performing filter, and to validate resulting search filters. Three quasi-gold standard sets (QGS1, QGS2, QGS3) of relevant studies were harvested from reviews of studies reporting HSUVs. The performance of three initial filters was assessed by measuring their relative recall of studies in QGS1. The best performing filter was then developed further using QGS2. This resulted in three final search filters (FSF1, FSF2, and FSF3), which were validated using QGS3. FSF1 (sensitivity maximizing) retrieved 132/139 records (sensitivity: 95 percent) in the QGS3 validation set. FSF1 had a number needed to read (NNR) of 842. FSF2 (balancing sensitivity and precision) retrieved 128/139 records (sensitivity: 92 percent) with a NNR of 502. FSF3 (precision maximizing) retrieved 123/139 records (sensitivity: 88 percent) with a NNR of 383. We have developed and validated a search filter (FSF1) to identify studies reporting HSUVs with high sensitivity (95 percent) and two other search filters (FSF2 and FSF3) with reasonably high sensitivity (92 percent and 88 percent) but greater precision, resulting in a lower NNR. These seem to be the first validated filters available for HSUVs. The availability of filters with a range of sensitivity and precision options enables researchers to choose the filter which is most appropriate to the resources available for their specific research.
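The performance measures used above reduce to simple ratios; the retrieval total in the last line below is hypothetical, chosen only to show how NNR is computed:

```python
def sensitivity(retrieved_relevant, total_relevant):
    """Fraction of all relevant records that the search filter retrieves."""
    return retrieved_relevant / total_relevant

def number_needed_to_read(total_retrieved, retrieved_relevant):
    """Records screened per relevant record found (the reciprocal of precision)."""
    return total_retrieved / retrieved_relevant

sens_fsf1 = sensitivity(132, 139)                    # FSF1 on QGS3: ~0.95
nnr_example = number_needed_to_read(66_000, 132)     # hypothetical total -> NNR 500
```

A sensitivity-maximizing filter pushes the first ratio toward 1 at the cost of a larger NNR, which is exactly the trade-off among FSF1, FSF2, and FSF3.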
An RC active filter design handbook
NASA Technical Reports Server (NTRS)
Deboo, G. J.
1977-01-01
The design of filters is described. Emphasis is placed on simplified procedures that can be used by the reader who has minimum knowledge about circuit design and little acquaintance with filter theory. The handbook has three main parts. The first part is a review of some information that is essential for work with filters. The second part includes design information for specific types of filter circuitry and describes simple procedures for obtaining the component values for a filter that will have a desired set of characteristics. Pertinent information relating to actual performance is given. The third part (appendix) is a review of certain topics in filter theory and is intended to provide some basic understanding of how filters are designed.
Assessment of a membrane drinking water filter in an emergency setting.
Ensink, Jeroen H J; Bastable, Andy; Cairncross, Sandy
2015-06-01
The performance and acceptability of the Nerox(TM) membrane drinking water filter were evaluated among an internally displaced population in Pakistan. The membrane filter and a control ceramic candle filter were distributed to over 3,000 households. Following a 6-month period, 230 households were visited and filter performance and use were assessed. Only 6% of the visited households still had a functioning filter, and the removal performance ranged from 80 to 93%. High turbidity in source water (irrigation canals), together with high temperatures and large family size were likely to have contributed to poor performance and uptake of the filters.
Stewart, Barclay T; Gyedu, Adam; Quansah, Robert; Addo, Wilfred Larbi; Afoko, Akis; Agbenorku, Pius; Amponsah-Manu, Forster; Ankomah, James; Appiah-Denkyira, Ebenezer; Baffoe, Peter; Debrah, Sam; Donkor, Peter; Dorvlo, Theodor; Japiong, Kennedy; Kushner, Adam L; Morna, Martin; Ofosu, Anthony; Oppong-Nketia, Victor; Tabiri, Stephen; Mock, Charles
2015-01-01
Introduction: Prospective clinical audit of trauma care improves outcomes for the injured in high-income countries (HICs). However, equivalent, context-appropriate audit filters for use in low- and middle-income country (LMIC) district-level hospitals have not been well established. We aimed to develop context-appropriate trauma care audit filters for district-level hospitals in Ghana, as well as other LMICs more broadly. Methods: Consensus on trauma care audit filters was built between twenty panelists using a Delphi technique with four anonymous, iterative surveys designed to elicit: i) trauma care processes to be measured; ii) important features of audit filters for the district-level hospital setting; and iii) potentially useful filters. Filters were ranked on a scale from 0-10 (10 being very useful). Consensus was measured with the average percent majority opinion (APMO) cut-off rate. Target consensus was defined a priori as: a median rank of ≥9 for each filter and an APMO cut-off rate of ≥0.8. Results: Panelists agreed on trauma care processes to target (e.g. triage, phases of trauma assessment, early referral if needed) and specific features of filters for district-level hospital use (e.g. simplicity, unassuming of resource capacity). The APMO cut-off rate increased successively: Round 1 - 0.58; Round 2 - 0.66; Round 3 - 0.76; and Round 4 - 0.82. After Round 4, target consensus on 22 trauma care and referral-specific filters was reached. Example filters include: triage - vital signs are recorded within 15 minutes of arrival (must include breathing assessment, heart rate, blood pressure, oxygen saturation if available); circulation - a large bore IV was placed within 15 minutes of patient arrival; referral - if referral is activated, the referring clinician and receiving facility communicate by phone or radio prior to transfer. Conclusion: This study proposes trauma care audit filters appropriate for LMIC district-level hospitals.
Given the successes of similar filters in HICs and obstetric care filters in LMICs, the collection and reporting of prospective trauma care audit filters may be an important step toward improving care for the injured at district-level hospitals in LMICs. PMID:26492882
Electronic filters, repeated signal charge conversion apparatus, hearing aids and methods
NASA Technical Reports Server (NTRS)
Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)
1993-01-01
An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.
Günther Tulip inferior vena cava filter retrieval using a bidirectional loop-snare technique.
Ross, Jordan; Allison, Stephen; Vaidya, Sandeep; Monroe, Eric
2016-01-01
Many advanced techniques have been reported in the literature for difficult Günther Tulip filter removal. This report describes a bidirectional loop-snare technique in the setting of fibrin scar formation around the filter leg anchors. The bidirectional loop-snare technique allows for maximal axial tension and alignment for stripping fibrin scar from the filter legs, a commonly encountered complication of prolonged dwell times.
Calibrating the PAU Survey's 46 Filters
NASA Astrophysics Data System (ADS)
Bauer, A.; Castander, F.; Gaztañaga, E.; Serrano, S.; Sevilla, N.; Tonello, N.; PAU Team
2016-05-01
The Physics of the Accelerating Universe (PAU) Survey, being carried out by several Spanish institutions, will image an area of 100-200 square degrees in 6 broad and 40 narrow band optical filters. The team is building a camera (PAUCam) with 18 CCDs, which will be installed in the 4 meter William Herschel Telescope at La Palma in 2013. The narrow band filters will each cover 100Å, with the set spanning 4500-8500Å. The broad band set will consist of standard ugriZy filters. The narrow band filters will provide low-resolution (R ~ 50) photometric "spectra" for all objects observed in the survey, which will reach a depth of ~24 mag in the broad bands and ~22.5 mag (AB) in the narrow bands. Such precision will allow for galaxy photometric redshift errors of 0.0035(1+z), which will facilitate the measurement of cosmological parameters with precision comparable to much larger spectroscopic and photometric surveys. Accurate photometric calibration of the PAU data is vital to the survey's science goals, and is not straightforward due to the large and unusual filter set. We outline the data management pipelines being developed for the survey, both for nightly data reduction and coaddition of multiple epochs, with emphasis on the photometric calibration strategies. We also describe the tools we are developing to test the quality of the reduction and calibration.
Low-Dose Contrast-Enhanced Breast CT Using Spectral Shaping Filters: An Experimental Study.
Makeev, Andrey; Glick, Stephen J
2017-12-01
Iodinated contrast-enhanced X-ray imaging of the breast has been studied with various modalities, including full-field digital mammography (FFDM), digital breast tomosynthesis (DBT), and dedicated breast CT. Contrast imaging with breast CT has a number of advantages over FFDM and DBT, including the lack of breast compression, and generation of fully isotropic 3-D reconstructions. Nonetheless, for breast CT to be considered as a viable tool for routine clinical use, it would be desirable to reduce radiation dose. One approach for dose reduction in breast CT is spectral shaping using X-ray filters. In this paper, two high atomic number filter materials are studied, namely, gadolinium (Gd) and erbium (Er), and compared with Al and Cu filters currently used in breast CT systems. Task-based performance is assessed by imaging a cylindrical poly(methyl methacrylate) phantom with iodine inserts on a benchtop breast CT system that emulates clinical breast CT. To evaluate detectability, a channelized Hotelling observer (CHO) is used with sums of Laguerre-Gauss channels. It was observed that spectral shaping using Er and Gd filters substantially increased the dose efficiency (defined as signal-to-noise ratio of the CHO divided by mean glandular dose) as compared with kilovolt peak and filter settings used in commercial and prototype breast CT systems. These experimental phantom study results are encouraging for reducing dose of breast CT; however, further evaluation involving patients is needed.
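A channelized Hotelling observer with Laguerre-Gauss channels can be sketched on synthetic white-noise images. The channel count and width, the phantom, and the image statistics below are assumptions for illustration, not the study's parameters:

```python
import numpy as np
from numpy.polynomial.laguerre import Laguerre

def lg_channels(n_ch, size, a):
    """Laguerre-Gauss channel profiles on a size x size pixel grid (width a)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x**2 + y**2
    gauss = np.exp(-np.pi * r2 / a**2)
    return np.array([(gauss * Laguerre.basis(j)(2 * np.pi * r2 / a**2)).ravel()
                     for j in range(n_ch)])

rng = np.random.default_rng(0)
size, n_img = 32, 400
yy, xx = np.mgrid[:size, :size] - (size - 1) / 2.0
sig = np.where(xx**2 + yy**2 < 9.0, 1.0, 0.0).ravel()   # small disc "insert"

g0 = rng.standard_normal((n_img, size * size))          # signal-absent images
g1 = rng.standard_normal((n_img, size * size)) + sig    # signal-present images

U = lg_channels(5, size, a=10.0)                        # 5 LG channels
v0, v1 = g0 @ U.T, g1 @ U.T                             # channel outputs
S = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
dv = v1.mean(axis=0) - v0.mean(axis=0)
snr = float(np.sqrt(dv @ np.linalg.solve(S, dv)))       # CHO detectability index
```

Dividing an SNR of this kind by the mean glandular dose gives the dose-efficiency figure of merit the abstract uses to compare filter materials.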
GOES Sounder Instrument - NOAA Satellite Information System (NOAASIS);
ground-based, balloon system. The Sounder has 4 sets of detectors (visible, long-wave IR, medium-wave IR, short-wave IR). The incoming radiation passes through a set of filters, arranged in concentric rings (one ring for each IR detector group), before reaching the detectors. The outer ring contains 7 long-wave filters, the middle
Make Your Own Transpiring Tree
ERIC Educational Resources Information Center
Martinez Vilalta, Jordi; Sauret, Miquel; Duro, Alicia; Pinol, Josep
2003-01-01
In this paper we present a simple set-up that illustrates the mechanism of sap ascent in plants and demonstrates that it can easily draw water up to heights of a few meters. The set-up consists of a tube with the lower end submerged in water and the upper one connected to a filter supported by a standard filter-holder. The evaporation of water…
Methods and apparatus of analyzing electrical power grid data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Critchlow, Terence J.; Gibson, Tara D.
Apparatus and methods of processing large-scale data regarding an electrical power grid are described. According to one aspect, a method of processing large-scale data regarding an electrical power grid includes accessing a large-scale data set comprising information regarding an electrical power grid; processing data of the large-scale data set to identify a filter which is configured to remove erroneous data from the large-scale data set; using the filter, removing erroneous data from the large-scale data set; and after the removing, processing data of the large-scale data set to identify an event detector which is configured to identify events of interest in the large-scale data set.
A Comparison of Nonlinear Filters for Orbit Determination and Estimation
1986-06-01
Command uses a nonlinear least squares filter for element set maintenance for all objects orbiting the Earth (3). These objects, including active... The initial state vector is the singly averaged classical orbital element set provided by SPACECOM/DOA. The state vector in this research consists of... The Air Force Space Command is responsible for maintaining current orbital element sets for about
An Investigation Into Low Fuel Pressure Warnings on a Macchi-Viper Aircraft
1988-05-01
was sufficient to activate the low pressure warning light. The pressure switch is normally set to a differential of between 2.5-3 psi. Partial... only a 2.1 psig margin for light illumination, if the pressure switch is set at 3 psig, and gives little scope for extra pipe or filter losses when... pressure switch is set between 2.5-3 psig. Any untoward pressure resistance in the fuel delivery line and filtering system would soon erode this
System for information discovery
Pennock, Kelly A [Richland, WA; Miller, Nancy E [Kennewick, WA
2002-11-19
A sequence of word filters is used to eliminate terms in the database that do not discriminate document content, resulting in a filtered word set and a topic word set whose members are highly predictive of content. These two word sets are then formed into a two-dimensional matrix, with each matrix entry calculated as the conditional probability that a document will contain the word in a given row given that it contains the word in a given column. The matrix representation allows the resultant vectors to be used to interpret document contents.
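The conditional-probability matrix described in this record can be sketched in a few lines. The vocabulary and documents below are invented for illustration; in the patent's terms, the columns would come from the filtered word set and the rows from the topic word set, while here both share one small vocabulary.

```python
import numpy as np

# Toy document-term data: rows are documents, columns are words;
# a 1 means the document contains that word.
vocab = ["filter", "noise", "image", "grid"]
doc_term = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 0, 1],
])

def conditional_matrix(X):
    """M[i, j] = P(document contains word i | it contains word j)."""
    co = X.T @ X                            # co[i, j] = #docs containing both words
    col_counts = np.diag(co).astype(float)  # #docs containing word j
    return co / col_counts[np.newaxis, :]   # normalize each column j by count(j)

M = conditional_matrix(doc_term)
# The diagonal is 1 by construction: P(word | same word) = 1.
print(M.round(2))
```

The rows of M are the "resultant vectors" that the system uses to characterize document content.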
Fourier spatial frequency analysis for image classification: training the training set
NASA Astrophysics Data System (ADS)
Johnson, Timothy H.; Lhamo, Yigah; Shi, Lingyan; Alfano, Robert R.; Russell, Stewart
2016-04-01
The Directional Fourier Spatial Frequencies (DFSF) of a 2D image can identify similarity in spatial patterns within groups of related images. A Support Vector Machine (SVM) can then be used to classify images if the inter-image variance of the FSF in the training set is bounded. However, if variation in FSF increases with training set size, accuracy may decrease as the training set grows. This calls for a method to identify a set of training images from among the originals that can form a vector basis for the entire class. Applying the Cauchy product method, we extract the DFSF spectrum from radiographs of osteoporotic bone, use it as a matched filter set to eliminate noise and image-specific frequencies, and demonstrate that selection of a subset of superclassifiers from within a set of training images improves SVM accuracy. Central to this challenge is that the size of the search space can become computationally prohibitive for all but the smallest training sets. We are investigating methods to reduce the search space to identify an optimal subset of basis training images.
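The directional spatial-frequency idea can be illustrated with a crude stand-in for the paper's DFSF: bin the 2D FFT magnitude by orientation, so each image is summarized by a short angular profile. The binning scheme and test texture below are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def directional_fsf(img, n_angles=8):
    """Crude directional Fourier spatial-frequency profile: average FFT
    magnitude within angular bins (orientations folded to [0, pi))."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    ang = np.arctan2(y - h // 2, x - w // 2) % np.pi   # fold opposite directions
    bins = (ang / np.pi * n_angles).astype(int).clip(0, n_angles - 1)
    return np.array([F[bins == b].mean() for b in range(n_angles)])

# A vertical-stripe texture concentrates its power along one orientation,
# so one angular bin dominates the profile.
img = np.tile(np.sin(np.linspace(0, 8 * np.pi, 64)), (64, 1))
profile = directional_fsf(img)
print(profile.argmax())
```

Profiles like this, computed per image, are the kind of feature vector that could be fed to an SVM or used as a matched filter to suppress image-specific frequencies.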
Chromotomography for a rotating-prism instrument using backprojection, then filtering.
Deming, Ross W
2006-08-01
A simple closed-form solution is derived for reconstructing a 3D spatial-chromatic image cube from a set of chromatically dispersed 2D image frames. The algorithm is tailored for a particular instrument in which the dispersion element is a matching set of mechanically rotated direct vision prisms positioned between a lens and a focal plane array. By using a linear operator formalism to derive the Tikhonov-regularized pseudoinverse operator, it is found that the unique minimum-norm solution is obtained by applying the adjoint operator, followed by 1D filtering with respect to the chromatic variable. Thus the filtering and backprojection (adjoint) steps are applied in reverse order relative to an existing method. Computational efficiency is provided by use of the fast Fourier transform in the filtering step.
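The "backprojection, then filtering" order can be demonstrated on a toy 1-D analogue. The dispersion law, array sizes, and zero-mean spectra below are illustrative assumptions, not the paper's instrument model; the point is the two-step structure: apply the adjoint (shift back and accumulate), then apply a Tikhonov-regularized 1-D filter along the chromatic axis, here carried out per spatial frequency.

```python
import numpy as np

nx, nl = 31, 6                      # spatial samples, wavelength bins
nk = nx                             # one frame per prism angle (toy choice)
rng = np.random.default_rng(0)
cube = rng.random((nx, nl))
cube -= cube.mean(axis=0)           # remove per-wavelength DC (not recoverable here)
s = lambda k, l: (k * (l + 1)) % nx # stand-in circular dispersion shift

# Forward model: each frame sums the scene over wavelength after its shift.
frames = np.zeros((nk, nx))
for k in range(nk):
    for l in range(nl):
        frames[k] += np.roll(cube[:, l], s(k, l))

# Step 1 -- backprojection (adjoint): undo each shift, accumulate per wavelength.
bp = np.zeros((nx, nl))
for k in range(nk):
    for l in range(nl):
        bp[:, l] += np.roll(frames[k], -s(k, l))

# Step 2 -- 1-D filtering along the chromatic axis: per spatial frequency the
# forward operator is a small matrix E, and bp_hat = E^H E cube_hat, so the
# Tikhonov-regularized solve inverts E^H E (the minimum-norm solution).
alpha = 1e-9
w = 2j * np.pi * np.fft.fftfreq(nx)
bp_hat = np.fft.fft(bp, axis=0)
rec_hat = np.zeros_like(bp_hat)
for i in range(nx):
    E = np.exp(-w[i] * np.array([[s(k, l) for l in range(nl)] for k in range(nk)]))
    rec_hat[i] = np.linalg.solve(E.conj().T @ E + alpha * np.eye(nl), bp_hat[i])
rec = np.fft.ifft(rec_hat, axis=0).real
print(np.abs(rec - cube).max())     # tiny reconstruction error
```

With this toy shift law the per-frequency operators are well conditioned, so the adjoint-then-filter pipeline recovers the cube almost exactly.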
Kalman filter based control for Adaptive Optics
NASA Astrophysics Data System (ADS)
Petit, Cyril; Quiros-Pacheco, Fernando; Conan, Jean-Marc; Kulcsár, Caroline; Raynaud, Henri-François; Fusco, Thierry
2004-12-01
Classical Adaptive Optics suffers from a limited corrected Field Of View. This drawback has led to the development of Multi-Conjugate Adaptive Optics (MCAO). While the first MCAO experimental set-ups are presently under construction, little attention has been paid to the control loop. This is however a key element in the optimization process, especially for MCAO systems. Different approaches have been proposed in recent articles for astronomical applications: simple integrator, Optimized Modal Gain Integrator, and Kalman filtering. We study here Kalman filtering, which seems a very promising solution. Following the work of Brice Leroux, we focus on a frequency-domain characterization of Kalman filters, computing a transfer matrix. The result brings much information about their behaviour and allows comparisons with classical controllers. It also appears that straightforward improvements of the system models can lead to filtering of static aberrations and vibrations. Simulation results are proposed and analysed thanks to our frequency-domain characterization. Related problems, such as model errors, aliasing-effect reduction, and experimental implementation and testing of a Kalman filter control loop on a simplified MCAO experimental set-up, are then discussed.
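The core predict/update loop behind such controllers can be shown with a minimal scalar Kalman filter. The block below is a generic sketch, not the paper's MCAO controller: the "turbulent phase" is a first-order autoregressive process with invented noise levels, and the filter's estimate is compared against using the raw noisy measurement directly.

```python
import numpy as np

rng = np.random.default_rng(1)
a, q, r = 0.99, 0.01, 0.1    # state transition, process noise var, measurement noise var
n = 5000

x = 0.0                      # true phase (AR(1) process)
xhat, p = 0.0, 1.0           # filter estimate and its variance
err_filt, err_raw = [], []
for _ in range(n):
    x = a * x + rng.normal(0, np.sqrt(q))      # true dynamics
    y = x + rng.normal(0, np.sqrt(r))          # noisy measurement
    # Predict step
    xhat, p = a * xhat, a * a * p + q
    # Update step
    k = p / (p + r)                            # Kalman gain
    xhat, p = xhat + k * (y - xhat), (1 - k) * p
    err_filt.append(x - xhat)
    err_raw.append(x - y)

print(np.var(err_filt), np.var(err_raw))       # filtered error variance is smaller
```

The paper's frequency-domain characterization amounts to analysing the transfer function of exactly this kind of loop (in matrix form), which is what allows comparison with integrator-based controllers.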
Limited retention of micro-organisms using commercialized needle filters.
Elbaz, W; McCarthy, G; Mawhinney, T; Goldsmith, C E; Moore, J E
2015-03-01
A study was undertaken to compare a commercialized needle filter with a 0.2-μm filtered epidural set and a non-filtered standard needle. No culturable bacteria were detected following filtration through the 0.2-μm filter. Bacterial breakthrough was observed with the filtered needle (pore size 5 μm) and the non-filtered needle. Filtered systems (0.2 μm) should be employed to achieve total bacterial retention. This highlights that filtration systems with different pore sizes will have varying ability to retain bacteria. Healthcare professionals need to know what type/capability of filter is implied by the labels used by manufacturers, and to assess whether the specification has the desired functionality to prevent bacterial translocation through needles. Copyright © 2015 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
UV filters for lighting of plants
NASA Astrophysics Data System (ADS)
Doehring, T.; Koefferlein, M.; Thiel, S.; Seidlitz, H. K.; Payer, H. D.
1994-03-01
The wavelength dependent interaction of biological systems with radiation is commonly described by appropriate action spectra. Particularly effective plant responses are obtained for ultraviolet (UV) radiation. Excess shortwave UV-B radiation will induce genetic defects and plant damage. Besides the ecological discussion of the deleterious effects of the excess UV radiation there is increasing interest in horticultural applications of this spectral region. Several metabolic pathways leading to valuable secondary plant products like colors, odors, taste, or resulting in mechanical strength and vitality are triggered by UV radiation. Thus, in ecologically as well as in economically oriented experiments the exact generation and knowledge of the spectral irradiance, particularly near the UV absorption edge, is essential. The ideal filter 'material' to control the UV absorption edge would be ozone itself. However, due to problems in controlling the toxic, chemically aggressive, and unstable gas, only rather 'small ozone filters' have been realized so far. In artificial plant lighting, conventional solid filter materials such as glass sheets and plastic foils (cellulose acetate or cellulose triacetate) which can be easily handled have been used to absorb the UV-C and the excess shortwave UV-B radiation of the lamp emissions. Different filter glasses are available which provide absorption properties suitable for gradual changes of the spectral UV-B illumination of artificial lighting. Using a distinct set of lamps and filter glasses an acceptable simulation of the UV-B part of natural global radiation can be achieved. The aging of these and other filter materials under the extreme UV radiation in the lamphouse of a solar simulator is presently unavoidable. This instability can be dealt with only by a precise spectral monitoring and by replacing the filters accordingly. For this reason attempts would be useful to develop real ozone filters which can replace glass filters.
In any case, chamber experiments require a careful selection of the filter material used and must be accompanied by a continuous UV-B monitoring.
Makary, Mina S; Kapke, Jordan; Yildiz, Vedat; Pan, Xueliang; Dowell, Joshua D
2018-02-01
To compare the outcomes and costs of inferior vena cava (IVC) filter placement and retrieval in the interventional radiology (IR) and surgical departments at a tertiary-care center. Retrospective review was performed of 142 sequential outpatient IVC filter placements and 244 retrievals performed in the IR suite and operating room (OR) from 2013 to 2016. Patient demographic data, procedural characteristics, outcomes, and direct costs were compared between cohorts. Technical success rates of 100% were achieved for both IR and OR filter placements, and 98% of filters were successfully retrieved by IR means, compared with 83% in the OR (P < .01). Fluoroscopy time was similar for IR and OR filter insertions, but IR retrievals required half the fluoroscopy time, with an average of 9 minutes vs 18 minutes in the OR (P = .02). There was no significant difference between cohorts in the incidences of complications for filter retrievals, but more postprocedural complications were observed for OR placements (8%) vs IR placements (1%; P = .05). The most severe complication occurred during an OR filter retrieval, resulting in entanglement of the snare device and conversion to an emergent open filter removal by vascular surgery. Direct costs were approximately 20% higher for OR vs IR IVC filter placements ($2,671 vs $2,246; P = .01). Filter placements are equally successfully performed in IR and OR settings, but OR patients experienced significantly higher postprocedural complication rates and incurred higher costs. In contrast, higher technical success rates and shorter fluoroscopy times were observed for IR filter retrievals compared with those performed in the OR. Copyright © 2017 SIR. Published by Elsevier Inc. All rights reserved.
Picking Deep Filter Responses for Fine-Grained Image Recognition (Open Access Author’s Manuscript)
2016-12-16
stages. Our method explores a unified framework based on two steps of deep filter response picking. The first picking step is to find distinctive filters which respond to specific patterns significantly and consistently, and learn a set of part detectors via iteratively alternating between new positive sample mining and part model retraining. The second picking step is to pool deep filter responses via spatially weighted combination of Fisher
Robust Controller for Turbulent and Convective Boundary Layers
2006-08-01
filter and an optimal regulator. The Kalman filter equation and the optimal regulator equation corresponding to the state-space equations, (2.20), are...separate steady-state algebraic Riccati equations. The Kalman filter is used here as a state observer rather than as an estimator since no noises are...2001) which will not be repeated here. For robustness, in the design, the Kalman filter input matrix G has been set equal to the control input
Fang, Joyce; Savransky, Dmitry
2016-08-01
Automation of alignment tasks can provide improved efficiency and greatly increase the flexibility of an optical system. Current optical systems with automated alignment capabilities are typically designed to include a dedicated wavefront sensor. Here, we demonstrate a self-aligning method for a reconfigurable system using only focal plane images. We define a two lens optical system with 8 degrees of freedom. Images are simulated given misalignment parameters using ZEMAX software. We perform a principal component analysis on the simulated data set to obtain Karhunen-Loève modes, which form the basis set whose weights are the system measurements. A model function, which maps the state to the measurement, is learned using nonlinear least-squares fitting and serves as the measurement function for the nonlinear estimator (extended and unscented Kalman filters) used to calculate control inputs to align the system. We present and discuss simulated and experimental results of the full system in operation.
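The measurement pipeline described above (PCA of simulated images to obtain Karhunen-Loève modes, whose weights serve as the measurement, followed by a least-squares model fit) can be sketched on synthetic data. Everything below is invented for illustration: the "images" are random basis patterns modulated by a hidden misalignment parameter, not ZEMAX output.

```python
import numpy as np

rng = np.random.default_rng(2)
npix, nsamples = 100, 50
states = rng.uniform(-1, 1, nsamples)            # hidden misalignment values
b1, b2 = rng.normal(size=npix), rng.normal(size=npix)
images = np.outer(states, b1) + np.outer(states**2, b2)
images += 0.01 * rng.normal(size=images.shape)   # sensor noise

# Karhunen-Loeve modes = principal components of the mean-subtracted data.
mean = images.mean(axis=0)
U, S, Vt = np.linalg.svd(images - mean, full_matrices=False)
modes = Vt[:2]                                   # leading KL modes
weights = (images - mean) @ modes.T              # per-image measurements

# The leading weight tracks the misalignment almost linearly, so a simple
# least-squares fit gives a measurement model h(state) ~ c0 + c1 * state,
# of the kind an EKF/UKF could use as its measurement function.
c = np.polyfit(states, weights[:, 0], 1)
print(abs(np.corrcoef(states, weights[:, 0])[0, 1]))
```

In the paper the fitted model function plays exactly this role: it maps the alignment state to KL-mode weights, closing the loop for the extended/unscented Kalman filter.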
NASA Technical Reports Server (NTRS)
Pfeil, W. H.; De Los Reyes, G.; Bobula, G. A.
1985-01-01
A power turbine governor was designed for a recent-technology turboshaft engine coupled to a modern, articulated rotor system using Linear Quadratic Regulator (LQR) and Kalman Filter (KF) techniques. A linear, state-space model of the engine and rotor system was derived for six engine power settings from flight idle to maximum continuous. An integrator was appended to the fuel flow input to reduce the steady-state governor error to zero. Feedback gains were calculated for the system states at each power setting using the LQR technique. The main rotor tip speed state is not measurable, so a Kalman filter model of the rotor was used to estimate this state. The crossover frequency of the system was increased to 10 rad/s, compared to 2 rad/s for a current governor. Initial computer simulations with a nonlinear engine model indicate a significant decrease in power turbine speed variation with the LQR governor compared to a conventional governor.
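The LQR gain computation behind such a design can be sketched generically. The two-state model below is a made-up toy, not the engine/rotor model: the discrete-time Riccati recursion is iterated to convergence, and the resulting state-feedback gain stabilizes the closed loop.

```python
import numpy as np

# Toy discrete-time plant (illustrative numbers only).
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                 # state weighting
R = np.array([[1.0]])         # control weighting

# Iterate the Riccati recursion P <- Q + A'P(A - BK) to a fixed point.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # LQR gain
    P = Q + A.T @ P @ (A - B @ K)

# The closed-loop matrix A - BK should have spectral radius < 1 (stable).
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print(rho)
```

Appending an integrator state to the plant, as the abstract describes for the fuel-flow input, fits the same framework: the integrator simply becomes one more row/column of A and Q before the gains are computed.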
Scoping Planning Agents With Shared Models
NASA Technical Reports Server (NTRS)
Bedrax-Weiss, Tania; Frank, Jeremy D.; Jonsson, Ari K.; McGann, Conor
2003-01-01
In this paper we provide a formal framework to define the scope of planning agents based on a single declarative model. Having multiple agents share a single model provides numerous advantages that lead to reduced development costs and increased reliability of the system. We formally define planning in terms of extensions of an initial partial plan, and a set of flaws that make the plan unacceptable. A Flaw Filter (FF) allows us to identify those flaws relevant to an agent. Flaw filters motivate the Plan Identification Function (PIF), which specifies when an agent is ready to hand control to another agent for further work. PIFs define a set of plan extensions that can be generated from a model and a plan request. FFs and PIFs can be used to define the scope of agents without changing the model. We describe an implementation of PIFs and FFs within the context of EUROPA, a constraint-based planning architecture, and show how it can be used to easily design many different agents.
Robust Lee local statistic filter for removal of mixed multiplicative and impulse noise
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Astola, Jaakko T.
2004-05-01
A robust version of the Lee local statistic filter, able to effectively suppress mixed multiplicative and impulse noise in images, is proposed. The performance of the proposed modification is studied for a set of test images, several values of multiplicative noise variance, Gaussian and Rayleigh probability density functions of speckle, and different characteristics of impulse noise. The advantages of the designed filter in comparison to the conventional Lee local statistic filter and some other filters able to cope with mixed multiplicative+impulse noise are demonstrated.
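The idea of robustifying the Lee filter can be illustrated with a simple sketch: replace the local mean and variance of the classic filter with rank-based estimates (median and a scaled interquartile range), so isolated impulses do not corrupt the local statistics. This is an illustrative variant, not the authors' exact algorithm, and the noise levels are invented.

```python
import numpy as np

def robust_lee(img, win=3, noise_var=0.05):
    """Lee-style filter for multiplicative noise with rank-based local stats.
    noise_var is the relative variance of the multiplicative noise."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win].ravel()
            med = np.median(w)                         # robust local mean
            q75, q25 = np.percentile(w, [75, 25])
            local_var = ((q75 - q25) / 1.349) ** 2     # robust local variance
            noise_power = noise_var * med ** 2         # multiplicative noise power
            sig_var = max(local_var - noise_power, 0.0)
            gain = sig_var / (sig_var + noise_power + 1e-12)
            out[i, j] = med + gain * (img[i, j] - med) # shrink toward the median
    return out

# Multiplicative speckle plus a few impulse outliers on a flat patch:
rng = np.random.default_rng(3)
clean = np.full((32, 32), 10.0)
noisy = clean * (1 + np.sqrt(0.05) * rng.normal(size=clean.shape))
noisy[5, 5] = noisy[20, 11] = 255.0
filtered = robust_lee(noisy)
print(np.abs(filtered - clean).mean(), np.abs(noisy - clean).mean())
```

In flat regions the gain collapses toward zero and the output follows the (impulse-immune) median; near genuine edges the gain rises and detail is preserved, which is the Lee filter's defining behavior.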
Linear phase compressive filter
McEwan, Thomas E.
1995-01-01
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.
An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan, E-mail: weixuan.li@usc.edu; Lin, Guang, E-mail: guang.lin@pnnl.gov; Zhang, Dongxiao, E-mail: dxz@pku.edu.cn
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect—except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depends on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems.
The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
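The EnKF baseline that PCKF is compared against can be illustrated by its analysis step on a toy linear-Gaussian problem. All numbers below are invented for illustration; this is the stochastic (perturbed-observation) EnKF update, computed from ensemble statistics rather than an explicit covariance model.

```python
import numpy as np

rng = np.random.default_rng(5)
ne, nx = 200, 3                       # ensemble size, state dimension
truth = np.array([1.0, -2.0, 0.5])
H = np.array([[1.0, 0.0, 0.0]])       # observe the first component only
r = 0.1 ** 2                          # observation noise variance

ens = truth + rng.normal(0, 1.0, size=(ne, nx))   # prior ensemble
y = H @ truth + rng.normal(0, 0.1)                # one noisy observation

# Analysis step: gain from ensemble cross- and innovation covariances.
X = ens - ens.mean(axis=0)
Pxy = X.T @ (X @ H.T) / (ne - 1)                  # state-observation covariance
Pyy = H @ Pxy + r                                  # innovation covariance
K = Pxy / Pyy                                      # Kalman gain (nx x 1)
perturbed_y = y + rng.normal(0, 0.1, size=(ne, 1))
ens = ens + (perturbed_y - ens @ H.T) * K.T        # update every member

print(ens.mean(axis=0), ens.std(axis=0))
```

PCKF performs the same conceptual update, but the ensemble of realizations is replaced by PCE coefficients, so the covariances above are computed analytically from the expansion instead of by Monte Carlo.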
Robinson, Gilpin R.; Menzie, W. David
2012-01-01
One implication of the economic filter results for undiscovered copper resources is that global copper supply will continue to be dominated by production from a small number of giant deposits. This domination of resource supply by a small number of producers may increase in the future, because an increasing proportion of new deposit discoveries are likely to occur in remote areas and be concealed deep beneath covering rock and sediments. Extensive mineral exploration activity will be required to meet future resource demand, because these deposits will be harder to find and more costly to mine than near-surface deposits located in more accessible areas. Relatively few of the new deposit discoveries in these high-cost settings will have sufficient tonnage and grade characteristics to assure positive economic returns on development and exploration costs.
Broken Chains: The Effect of Ocean Acidification on Bivalve and Echinoid Development
NASA Astrophysics Data System (ADS)
Richardson, K.
2016-12-01
Global warming is one of the most urgent issues facing the interconnected systems of our planet. One important impact of global warming is ocean acidification, which is a change in the pH of the oceans due to increased levels of carbon dioxide in the atmosphere. This can harm ocean life in many ways, including the disintegration of reef structures and the weakening of many types of sea animals' shells. The purpose of this project is to assess the efficacy of a novel method of raising the pH of increasingly acidic ocean waters. The experiment was set up with water of varying pH levels. There were three different experiment groups: current ocean water (pH 8.1), increased acidity ocean water (pH 7.5), and increased acidity ocean water with an activated carbon filter (pH 7.5). Six bivalve shells were placed in each solution. Mass-loss data was taken from the bivalve shells every three days over the course of thirty days (for a total of ten measurements). I hypothesized that the carbon filter would raise the pH of the acidified ocean water (from 7.5) to that of normal ocean water (pH 8.1). The data showed that while the acidic ocean water shells' weight decreased by 13%, the shells in the acidic water with the filter and in current ocean water decreased by only 0.3% and 0.5%, respectively. Overall, the activated carbon filter reduced the amount of weight change caused by the acidic water. The data is applicable to helping solve ocean acidification: activated charcoal greatly mitigated the effects of very acidic ocean water, which could be used in the future to help offset the impact of ocean acidification on its creatures.
Kim, Jihoon; Grillo, Janice M; Boxwala, Aziz A; Jiang, Xiaoqian; Mandelbaum, Rose B; Patel, Bhakti A; Mikels, Debra; Vinterbo, Staal A; Ohno-Machado, Lucila
2011-01-01
Our objective is to facilitate semi-automated detection of suspicious access to EHRs. Previously we have shown that a machine learning method can play a role in identifying potentially inappropriate access to EHRs. However, the problem of sampling informative instances to build a classifier still remained. We developed an integrated filtering method leveraging both anomaly detection based on symbolic clustering and signature detection, a rule-based technique. We applied the integrated filtering to 25.5 million access records in an intervention arm, and compared this with 8.6 million access records in a control arm where no filtering was applied. On the training set with cross-validation, the AUC was 0.960 in the control arm and 0.998 in the intervention arm. The difference in false negative rates on the independent test set was significant, P=1.6×10−6. Our study suggests that utilization of integrated filtering strategies to facilitate the construction of classifiers can be helpful.
Design of Low-Cost Vehicle Roll Angle Estimator Based on Kalman Filters and an Iot Architecture.
Garcia Guzman, Javier; Prieto Gonzalez, Lisardo; Pajares Redondo, Jonatan; Sanz Sanchez, Susana; Boada, Beatriz L
2018-06-03
In recent years, there have been many advances in vehicle technologies based on the efficient use of real-time data provided by embedded sensors. Some of these technologies can help avoid a crash or reduce its severity, such as Roll Stability Control (RSC) systems for commercial vehicles. In RSC, several critical variables, such as sideslip and roll angle, can only be directly measured using expensive equipment. These kinds of devices would increase the price of commercial vehicles. Nevertheless, sideslip and roll angle values can be estimated using MEMS sensors in combination with data fusion algorithms. The objectives stated for this research work consist of integrating roll angle estimators based on linear and unscented Kalman filters to evaluate the precision of the results obtained, and of determining whether the hard real-time processing constraints are fulfilled when embedding this kind of estimator in IoT architectures based on low-cost equipment able to be deployed in commercial vehicles. An experimental testbed composed of a van with two low-cost kits was set up, the first one including a Raspberry Pi 3 Model B, and the other an Intel Edison System on Chip. This experimental environment was tested under different conditions for comparison. The results obtained from the low-cost experimental kits, based on IoT architectures and including estimators based on Kalman filters, provide accurate roll angle estimation. These results also show that the processing time needed to get the data and execute the Kalman-filter-based estimations fulfills hard real-time constraints.
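A minimal version of such a low-cost estimator is a linear Kalman filter fusing a roll-rate gyro with an accelerometer-derived roll angle. The block below is a sketch with made-up noise levels and a synthetic roll trajectory, not the paper's vehicle data or its exact filter.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, n = 0.01, 2000
t = np.arange(n) * dt
true_roll = 0.2 * np.sin(2 * np.pi * 0.5 * t)       # rad, synthetic maneuver
true_rate = np.gradient(true_roll, dt)

gyro = true_rate + rng.normal(0, 0.02, n)           # rad/s, gyro measurement
accel_roll = true_roll + rng.normal(0, 0.05, n)     # rad, noisy but unbiased angle

q, r = (0.02 * dt) ** 2, 0.05 ** 2                  # process / measurement noise vars
x, p = 0.0, 1.0
est = np.empty(n)
for i in range(n):
    # Predict: integrate the gyro rate
    x += gyro[i] * dt
    p += q
    # Update: correct with the accelerometer-derived angle
    k = p / (p + r)
    x += k * (accel_roll[i] - x)
    p *= 1 - k
    est[i] = x

rms = np.sqrt(np.mean((est - true_roll) ** 2))
print(rms)                                          # well below the 0.05 rad accel noise
```

The per-step cost is a handful of multiply-adds, which is why estimators of this form comfortably meet hard real-time deadlines on boards like the Raspberry Pi or Intel Edison mentioned above.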
Okumura, Miwa; Ota, Takamasa; Kainuma, Kazuhisa; Sayre, James W.; McNitt-Gray, Michael; Katada, Kazuhiro
2008-01-01
Objective. For multislice CT (MSCT) systems with a large number of detector rows, it is essential to employ dose-reduction techniques. As reported in previous studies, edge-preserving adaptive image filters, which selectively eliminate only the noise elements that increase when the radiation dose is reduced, without affecting the sharpness of images, have been developed. In the present study, we employed receiver operating characteristic (ROC) analysis to assess the effects of the quantum denoising system (QDS), an edge-preserving adaptive filter that we have developed, on low-contrast resolution, and to evaluate to what degree the radiation dose can be reduced while maintaining acceptable low-contrast resolution. Materials and Methods. Low-contrast phantoms (Catphan 412) were scanned at various tube current settings, and ROC analysis was then performed for the groups of images obtained with/without the use of QDS at each tube current to determine whether or not a target could be identified. The tube current settings for which the area under the ROC curve (Az value) was approximately 0.7 were determined for both groups of images with/without the use of QDS. Then, the radiation dose reduction ratio when QDS was used was calculated by converting the determined tube current to radiation dose. Results. The use of the QDS edge-preserving adaptive image filter allowed the radiation dose to be reduced by up to 38%. Conclusion. The QDS was found to be useful for reducing the radiation dose without affecting the low-contrast resolution in MSCT studies. PMID:19043565
ERIC Educational Resources Information Center
Stuart, Andrew; Yang, Edward Y.
1994-01-01
Simultaneous three-channel recorded auditory brainstem responses (ABRs) were obtained from 20 neonates with various high-pass filter settings and low intensity levels. Results support the use of less restrictive high-pass filtering for neonatal and infant ABR screening with air-conducted and bone-conducted clicks. (Author/JDD)
Control of excitation in the fluorescence microscope.
Lea, D J; Ward, D J
1979-01-01
In fluorescence microscopy, image brightness, contrast, and the rate of fading depend on the intensity of illumination of the specimen. An iris diaphragm or neutral-density filters may be used to reduce fluorescence excitation. The excitation bandwidth may also be varied by using a broad-band exciter filter with a set of interchangeable yellow glass filters at the lamphouse.
Cho, Hyun-Woo; Yoon, Chung-Sik; Lee, Jin-Ho; Lee, Seung-Joo; Viner, Andrew; Johnson, Erik W
2011-07-01
Respirators are used to help reduce exposure to a variety of contaminants in workplaces. Test aerosols used for certification of particulate respirators (PRs) include sodium chloride (NaCl), dioctyl phthalate, and paraffin oil. These aerosols are generally assumed to be worst case surrogates for aerosols found in the workplace. No data have been published to date on the performance of PRs with welding fumes, a hazardous aerosol that exists in real workplace settings. The aim of this study was to compare the performance of respirators and filters against a NaCl aerosol and a welding fume aerosol and determine whether or not a correlation between the two could be made. Fifteen commercial PRs and filters (seven filtering facepiece, two replaceable single-type filters, and six replaceable dual-type filters) were chosen for investigation. Four of the filtering facepiece respirators, one of the single-type filters, and all of the dual-type filters contained carbon to help reduce exposure to ozone and other vapors generated during the welding process. For the NaCl test, a modified National Institute for Occupational Safety and Health protocol was adopted for use with the TSI Model 8130 automated filter tester. For the welding fume test, welding fumes from mild steel flux-cored arcs were generated and measured with a SIBATA filter tester (AP-634A, Japan) and a manometer in the upstream and downstream sections of the test chamber. Size distributions of the two aerosols were measured using a scanning mobility particle sizer. Penetration and pressure drop were measured over a period of aerosol loading onto the respirator or filter. Photos and scanning electron microscope images of clean and exposed respirators were taken. The count median diameter (CMD) and mass median diameter (MMD) for the NaCl aerosol were smaller than the welding fumes (CMD: 74 versus 216 nm; MMD: 198 versus 528 nm, respectively). Initial penetration and peak penetration were higher with the NaCl aerosol. 
However, pressure drop increased much more rapidly in the welding fume test than the NaCl aerosol test. The data and images clearly show differences in performance trends between respirator models. Therefore, general correlations between NaCl and weld fume data could not be made. These findings suggest that respirators certified with a surrogate test aerosol such as NaCl are appropriate for filtering welding fume (based on penetration). However, some respirators may have a more rapid increase in pressure drop from the welding fume accumulating on the filter. Therefore, welders will need to choose which models are easier to breathe through for the duration of their use and replace respirators or filters according to the user instructions and local regulations.
von Bary, Christian; Fredersdorf-Hahn, Sabine; Heinicke, Norbert; Jungbauer, Carsten; Schmid, Peter; Riegger, Günter A; Weber, Stefan
2011-08-01
Recently, new catheter technologies have been developed for atrial fibrillation (AF) ablation. We investigate the diagnostic accuracy of a circular mapping and pulmonary vein ablation catheter (PVAC) compared with a standard circular mapping catheter (Orbiter) and the influence of filter settings on signal quality. After reconstruction of the left atrium by three-dimensional atriography, baseline PV potentials (PVPs) were recorded consecutively with the PVAC and the Orbiter in 20 patients with paroxysmal AF. PVPs were compared and attributed to predefined anatomical PV segments. Ablation was performed in 80 PVs using the PVAC. If isolation of the PVs was assumed, signal assessment of each PV was repeated with the Orbiter. If residual PV potentials could be uncovered, different filter settings were tested to improve the mapping quality of the PVAC. Ablation was continued until complete PV isolation (PVI) was confirmed with the Orbiter. Baseline mapping demonstrated a good correlation between the Orbiter and the PVAC. Mapping accuracy using the PVAC for mapping and ablation was 94% (74 of 79 PVs). Additional mapping with the Orbiter improved the PV isolation rate to 99%. Adjustment of filter settings failed to improve the quality of the PV signals compared with standard filter settings. When using the PVAC as a stand-alone strategy for mapping and ablation, one should be aware that in some cases differing signal morphology mimics PV isolation. Adjustment of filter settings failed to improve signal quality. The use of an additional mapping catheter is recommended to become familiar with the particular signal morphology during the first PVAC cases or whenever there is doubt about successful isolation of the pulmonary veins.
Methods, systems, and computer program products for implementing function-parallel network firewalls
Fulp, Errin W [Winston-Salem, NC]; Farley, Ryan J [Winston-Salem, NC]
2011-10-11
Methods, systems, and computer program products for providing function-parallel firewalls are disclosed. According to one aspect, a function-parallel firewall includes a first firewall node for filtering received packets using a first portion of a rule set including a plurality of rules. The first portion includes less than all of the rules in the rule set. At least one second firewall node filters packets using a second portion of the rule set. The second portion includes at least one rule in the rule set that is not present in the first portion. The first and second portions together include all of the rules in the rule set.
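The partitioning idea can be illustrated with a toy packet filter. This is a simplified sequential sketch of the function-parallel scheme (the rule format, field matching, and default action are illustrative assumptions, not the patent's specification); in the actual design the portions are evaluated on separate firewall nodes in parallel.

```python
def match(rule, packet):
    """A rule matches if every field it specifies equals the packet's value."""
    return all(packet.get(k) == v for k, v in rule["pattern"].items())

def node_filter(rules, packet):
    """Return the action of the first matching rule in this node's portion,
    or None if no rule in the portion matches."""
    for rule in rules:
        if match(rule, packet):
            return rule["action"]
    return None

def function_parallel_filter(portions, packet, default="deny"):
    """Each node checks only its own portion of the rule set; together the
    portions cover all rules. Portions are assumed to be ordered so that
    the first match across them decides the packet's fate."""
    for portion in portions:
        action = node_filter(portion, packet)
        if action is not None:
            return action
    return default
```

Because each node scans fewer rules, per-packet filtering latency drops even though the combined rule coverage is unchanged.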
FIR filters for hardware-based real-time multi-band image blending
NASA Astrophysics Data System (ADS)
Popovic, Vladan; Leblebici, Yusuf
2015-02-01
Creating panoramic images has become a popular feature in modern smart phones, tablets, and digital cameras. A user can create a 360-degree field-of-view photograph from only several images. The quality of the resulting image is related to the number of source images, their brightness, and the algorithm used for their stitching and blending. One algorithm that provides excellent results in terms of background color uniformity and reduction of ghosting artifacts is multi-band blending. The algorithm relies on decomposition of the image into multiple frequency bands using a dyadic filter bank; hence, the results are also highly dependent on the filter bank used. In this paper we analyze the performance of FIR filters used for multi-band blending. We present a set of five filters that showed the best results in both the literature and our experiments. The set includes the Gaussian filter, biorthogonal wavelets, and custom-designed maximally flat and equiripple FIR filters. The presented results of the filter comparison are based on several no-reference metrics for image quality. We conclude that the 5/3 biorthogonal wavelet produces the best results on average, especially considering its short length. Furthermore, we propose a real-time FPGA implementation of the blending algorithm, using a 2D non-separable systolic filtering scheme. Its pipeline architecture does not require hardware multipliers and is able to achieve very high operating frequencies. The implemented system is able to process 91 fps at 1080p (1920×1080) image resolution.
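A minimal 1-D, two-band illustration of the blending principle, assuming a simple moving-average low-pass in place of the paper's dyadic filter bank: low frequencies are blended with a smoothed mask (gradual transition) while high frequencies keep a hard seam, which is what suppresses both color banding and ghosting.

```python
import numpy as np

def lowpass(x, width=9):
    """Moving-average low-pass filter (a stand-in for the FIR filter bank)."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

def two_band_blend(a, b, mask):
    """Blend signals a and b: the low band uses a smoothed mask, the
    high band uses the hard mask -- the core idea of multi-band blending."""
    low_a, low_b = lowpass(a), lowpass(b)
    high_a, high_b = a - low_a, b - low_b
    soft = lowpass(mask.astype(float))       # feathered transition for low band
    low = soft * low_a + (1 - soft) * low_b
    high = mask * high_a + (1 - mask) * high_b
    return low + high
```

A full multi-band implementation repeats this split recursively over a pyramid of dyadic bands; the FPGA design in the paper pipelines exactly this kind of filtering per band.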
A generalized adaptive mathematical morphological filter for LIDAR data
NASA Astrophysics Data System (ADS)
Cui, Zheng
Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous LIDAR point dataset using a filter, because the DTM is created by interpolating ground points. As one of the most widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying LIDAR data at the point level, linear computational complexity, and preservation of the geometric shapes of terrain features. The filter works well in an urban setting with a gentle slope and a mixture of vegetation and buildings. However, the PM filter often incorrectly removes ground measurements in topographically high areas, along with large non-ground objects, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes of topographic slopes and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points incorrectly removed by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average, compared with the method used by the popular commercial software TerraScan.
The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to effectively and efficiently identify ground measurements for the complex terrains in a large LIDAR data set. The GAPM filter is highly automatic and requires little human input. Therefore, it can significantly reduce the effort of manually processing voluminous LIDAR measurements.
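A 1-D sketch of the progressive morphological idea, with assumed window sizes and thresholds: grayscale openings of increasing width strip progressively larger non-ground objects, and the constant threshold slope used here is exactly the parameter the GAPM filter estimates adaptively.

```python
import numpy as np

def grey_opening_1d(z, w):
    """Morphological opening (erosion then dilation) with a flat
    structuring window of half-width w."""
    n = len(z)
    eroded = np.array([z[max(0, i - w):i + w + 1].min() for i in range(n)])
    opened = np.array([eroded[max(0, i - w):i + w + 1].max() for i in range(n)])
    return opened

def progressive_morphological_filter(z, windows=(1, 2, 4, 8),
                                     slope=0.3, dz0=0.2, cell=1.0):
    """Classify 1-D LIDAR elevations as ground (True) / non-ground (False).
    The elevation-difference threshold grows with window size via the
    constant threshold slope -- the quantity the GAPM filter adapts."""
    ground = np.ones(len(z), dtype=bool)
    surface = z.astype(float)
    for w in windows:
        opened = grey_opening_1d(surface, w)
        dh = dz0 + slope * w * cell          # height threshold for this window
        ground &= (surface - opened) <= dh   # large residuals -> non-ground
        surface = opened                     # progressively smoothed surface
    return ground
```

Real implementations work on 2-D gridded points and vary `slope` and `dz0` with terrain, but the loop structure is the same.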
Annunziata, Roberto; Trucco, Emanuele
2016-11-01
Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation, as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the number of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore differs from recent methods that improve the optimisation itself. Our warm-start strategy uses carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures, which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method significantly reduces the time taken to learn convolutional filter banks (by up to 82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation when used as input to a random forest classifier.
Least squares restoration of multichannel images
NASA Technical Reports Server (NTRS)
Galatsanos, Nikolas P.; Katsaggelos, Aggelos K.; Chin, Roland T.; Hillery, Allen D.
1991-01-01
Multichannel restoration using both within- and between-channel deterministic information is considered. A multichannel image is a set of image planes that exhibit cross-plane similarity. Existing optimal restoration filters for single-plane images yield suboptimal results when applied to multichannel images, since between-channel information is not utilized. Multichannel least squares restoration filters are developed using the set theoretic and the constrained optimization approaches. A geometric interpretation of the estimates of both filters is given. Color images (three-channel imagery with red, green, and blue components) are considered. Constraints that capture the within- and between-channel properties of color images are developed. Issues associated with the computation of the two estimates are addressed. A spatially adaptive, multichannel least squares filter that utilizes local within- and between-channel image properties is proposed. Experiments using color images are described.
UV filters for lighting of plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doehring, T.; Koefferlein, M.; Thiel, S.
1994-12-31
Different filter glasses are available which provide absorption properties suitable for gradual changes of the spectral UV-B illumination of artificial lighting. Using a distinct set of lamps and filter glasses, an acceptable simulation of the UV-B part of natural global radiation can be achieved. The ageing of these and other filter materials under the extreme UV radiation in the lamphouse of a solar simulator is presently unavoidable. This instability can be dealt with only by precise spectral monitoring and by replacing the filters accordingly. For this reason, it would be useful to develop real ozone filters that can replace glass filters. In any case, chamber experiments require a careful selection of the filter material used and must be accompanied by continuous UV-B monitoring.
Bflinks: Reliable Bugfix Links via Bidirectional References and Tuned Heuristics
2014-01-01
Background. Data from software version archives and defect databases can be used for defect insertion circumstance analysis and defect prediction. The first step in such analyses is identifying defect-correcting changes in the version archive (bugfix commits) and enriching them with additional metadata by establishing bugfix links to corresponding entries in the defect database. Candidate bugfix commits are typically identified via heuristic string matching on the commit message. Research Questions. Which filters could be used to obtain a set of bugfix links? How to tune their parameters? What accuracy is achieved? Method. We analyze a modular set of seven independent filters, including new ones that make use of reverse links, and evaluate visual heuristics for setting cutoff parameters. For a commercial repository, a product expert manually verifies over 2500 links to validate the results with unprecedented accuracy. Results. The heuristics pick a very good parameter value for five filters and a reasonably good one for the sixth. The combined filtering, called bflinks, provides 93% precision and only 7% results loss. Conclusion. Bflinks can provide high-quality results and adapts to repositories with different properties. PMID:27433506
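The heuristic string-matching step can be sketched as follows; the regular expressions and the requirement that a referenced ID actually exist in the defect database are illustrative assumptions, not the paper's tuned filter set.

```python
import re

# Hypothetical patterns for candidate bugfix commits: an issue-ID reference
# plus a fix-like keyword in the commit message.
ISSUE_REF = re.compile(r"#(\d+)")
FIX_WORDS = re.compile(r"\b(fix(es|ed)?|bug|defect|patch)\b", re.IGNORECASE)

def candidate_bugfix_links(commits, defect_ids):
    """Return (commit, defect_id) candidate links: the message must look
    like a fix AND reference an ID present in the defect database --
    checking the reference against the database is a simple form of the
    bidirectional validation the paper describes."""
    links = []
    for sha, message in commits:
        if not FIX_WORDS.search(message):
            continue                          # not fix-like: skip
        for ref in ISSUE_REF.findall(message):
            if int(ref) in defect_ids:
                links.append((sha, int(ref)))
    return links
```

The paper's contribution is in combining several such filters and tuning their cutoffs; this shows only the basic candidate-generation step.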
Advancements to the planogram frequency–distance rebinning algorithm
Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E
2010-01-01
In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. 
Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact reconstruction) and planogram filtered backprojection image reconstruction algorithms. We show that the PFDRX algorithm produces images that are nearly as accurate as images reconstructed with the planogram filtered backprojection algorithm and more accurate than images reconstructed with the PFDR+FBP algorithm. Both the PFDR+FBP and PFDRX algorithms provide a dramatic improvement in computation time over the planogram filtered backprojection algorithm. PMID:20436790
Fetterly, Kenneth A
2010-11-01
Minimizing the x-ray radiation dose is an important aspect of patient safety during interventional fluoroscopy procedures. This work investigates the practical aspects of an additional 0.1 mm Cu x-ray beam spectral filter applied to cine acquisition mode imaging on patient dose and image quality. Measurements were acquired using clinical interventional imaging systems. Acquisition images of Solid Water phantoms (15-40 cm) were acquired using x-ray beams with the x-ray tube inherent filtration and using an additional 0.1 mm Cu x-ray beam spectral filter. The skin entrance air kerma (dose) rate was measured and the signal difference to noise ratio (SDNR) of an iodine target embedded into the phantom was calculated to assess image quality. X-ray beam parameters were recorded and analyzed and a primary x-ray beam simulation was performed to assess additional x-ray tube burden attributable to the Cu filter. For all phantom thicknesses, the 0.1 mm Cu filter resulted in a 40% reduction in the entrance air kerma rate to the phantoms and a 9% reduction in the SDNR of the iodine phantom. The expected additional tube load required by the 0.1 mm Cu filter ranged from 11% for a 120 kVp x-ray beam to 43% for a 60 kVp beam. For these clinical systems, use of the 0.1 mm Cu filter resulted in a favorable compromise between reduced skin dose rate and image quality and increased x-ray tube burden.
NASA Astrophysics Data System (ADS)
Fleming, L.; Gibson, D.; Song, S.; Hutson, D.; Reid, S.; MacGregor, C.; Clark, C.
2017-02-01
Mid-IR carbon dioxide (CO2) gas sensing is critical for monitoring in respiratory care, and is finding increasing importance in surgical anaesthetics, where nitrous oxide (N2O) induced cross-talk is a major obstacle to accurate CO2 monitoring. In this work, a novel, solid-state, mid-IR photonics-based CO2 gas sensor is described, and the role that 1-dimensional photonic crystals, often referred to as multilayer thin film optical coatings [1], play in boosting the sensor's capability of gas discrimination is discussed. Filter performance in isolating CO2 IR absorption is tested on an optical filter test bed and a theoretical gas sensor model is developed, with the inclusion of a modelled multilayer optical filter to analyse the efficacy of optical filtering on eliminating N2O induced cross-talk for this particular gas sensor architecture. Future possible in-house optical filter fabrication techniques are discussed. As the actual gas sensor configuration is small, it would be challenging to manufacture a filter of the correct size; dismantling the sensor and mounting a new filter for different optical coating designs each time would prove to be laborious. For this reason, an optical filter testbed set-up is described and, using a commercial optical filter, it is demonstrated that cross-talk can be considerably reduced; cross-talk is minimal even for very high concentrations of N2O, which are unlikely to be encountered in exhaled surgical anaesthetic patient breath profiles. A completely new and versatile system for breath emulation is described and the capability it has for producing realistic human exhaled CO2 vs. time waveforms is shown. The cross-talk inducing effect that N2O has on realistic emulated CO2 vs. time waveforms as measured using the NDIR gas sensing technique is demonstrated and the effect that optical filtering will have on said cross-talk is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, H; Cormack, R; Bhagwat, M
Purpose: Gold nanoparticles (AuNPs) are multifunctional platforms ideal for drug delivery, targeted imaging and radiosensitization. We have investigated quantitative imaging of AuNPs using on-board imager (OBI) cone beam computed tomography (CBCT). To this end, we also present, for the first time, a novel method for K-edge imaging of AuNPs by filter-based spectral shaping. Methods: We used a digital 25 cm diameter water phantom, embedded with 3 cm spheres filled with AuNPs of different concentrations (0 mg/ml – 16 mg/ml). A poly-energetic X-ray spectrum of 140 kVp from a conventional X-ray tube is shaped by balanced K-edge filters to create an excess of photons just above the K-edge of gold at 80.7 keV. The filters consist of gold, tin, copper and aluminum foils. The phantom with appropriately assigned attenuation coefficients is forward projected onto a detector for each energy bin and then integrated. FDK reconstruction is performed on the integrated projections. Scatter, detector efficiency and noise are included. Results: We found that subtracting the results of two filter sets (Filter A: 127 µm gold foil with 254 µm tin, 330 µm copper and 1 mm aluminum; Filter B: 635 µm tin with 264 µm copper and 1 mm aluminum) provides substantial image contrast. The resulting filtered spectra match well below 80.7 keV, while maintaining sufficient X-ray quanta just above that energy. Voxel intensities of AuNP-containing spheres increase linearly with AuNP concentration. K-edge imaging provides 18% more sensitivity than the tin filter alone, and 38% more sensitivity than the gold filter alone. Conclusion: We have shown that it is feasible to quantitatively detect AuNP distributions in a patient-sized phantom using clinical CBCT and K-edge spectral shaping.
Lattice functions, wavelet aliasing, and SO(3) mappings of orthonormal filters
NASA Astrophysics Data System (ADS)
John, Sarah
1998-01-01
A formulation of multiresolution in terms of a family of dyadic lattices {S_j; j ∈ Z} and filter matrices M_j ⊂ U(2) ⊂ GL(2, C) illuminates the role of aliasing in wavelets and provides exact relations between scaling and wavelet filters. By showing the {D_N; N ∈ Z+} collection of compactly supported, orthonormal wavelet filters to be strictly SU(2) ⊂ U(2), its representation in the Euler angles of the rotation group SO(3) establishes several new results: a 1:1 mapping of the {D_N} filters onto a set of orbits on the SO(3) manifold; an equivalence of D_∞ to the Shannon filter; and a simple new proof for a criterion ruling out pathologically scaled nonorthonormal filters.
NASA Technical Reports Server (NTRS)
Carver, Kyle L.; Saulsberry, Regor L.; Nichols, Charles T.; Spencer, Paul R.; Lucero, Ralph E.
2012-01-01
Eddy current testing (ET) was used to scan the bare metallic liners used in the fabrication of composite overwrapped pressure vessels (COPVs) for flaws which could result in premature failure of the vessel. The main goal of the project was to make improvements in the areas of scan signal-to-noise ratio, sensitivity of flaw detection, and estimation of flaw dimensions. Scan settings were optimized, resulting in an increased signal-to-noise ratio. Previously undiscovered flaw indications were observed and investigated. Threshold criteria were determined for the system software's flaw report, and estimates of flaw dimensions were brought to an acceptable level of accuracy. Computer algorithms were written to import data for filtering, and a numerical derivative filtering algorithm was evaluated.
Prototype high resolution multienergy soft x-ray array for NSTX.
Tritz, K; Stutman, D; Delgado-Aparicio, L; Finkenthal, M; Kaita, R; Roquemore, L
2010-10-01
A novel diagnostic design seeks to enhance the capability of multienergy soft x-ray (SXR) detection by using an image intensifier to amplify the signals from a larger set of filtered x-ray profiles. The increased number of profiles and simplified detection system provides a compact diagnostic device for measuring T(e) in addition to contributions from density and impurities. A single-energy prototype system, comprising a filtered x-ray pinhole camera which converts the x-rays to visible light using a CsI:Tl phosphor, has been implemented on NSTX. SXR profiles have been measured in high-performance plasmas at frame rates of up to 10 kHz, and comparisons to the toroidally displaced tangential multi-energy SXR have been made.
Rekully, Cameron M; Faulkner, Stefan T; Lachenmyer, Eric M; Cunningham, Brady R; Shaw, Timothy J; Richardson, Tammi L; Myrick, Michael L
2018-03-01
An all-pairs method is used to analyze phytoplankton fluorescence excitation spectra. An initial set of nine phytoplankton species is analyzed in pairwise fashion to select two optical filter sets, and then the two filter sets are used to explore variations among a total of 31 species in a single-cell fluorescence imaging photometer. Results are presented in terms of pair analyses; we report that 411 of the 465 possible pairings of the larger group of 31 species can be distinguished using the initial nine-species-based selection of optical filters. A bootstrap analysis based on the larger data set shows that the distribution of possible pair-separation results based on a randomly selected nine-species initial calibration set is strongly peaked in the 410-415 pair-separation range, consistent with our experimental result. Further, the result for filter selection using all 31 species is also 411 pair separations. The set of phytoplankton fluorescence excitation spectra is intuitively high in rank due to the number and variety of pigments that contribute to the spectrum. However, the results in this report are consistent with an effective rank, as determined by a variety of heuristic and statistical methods, in the range of 2-3. These results are reviewed in consideration of how consistent the filter selections are from model to model for the data presented here. We discuss the common observation that rank is generally found to be relatively low even in many seemingly complex circumstances, so that it may be productive to assume a low rank from the beginning. If a low-rank hypothesis is valid, then relatively few samples are needed to explore an experimental space. Under very restricted circumstances for uniformly distributed samples, the minimum number for an initial analysis might be as low as 8-11 random samples for 1-3 factors.
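The pair-separation bookkeeping can be sketched with a toy criterion; the per-species "signature" representation (one response value per filter set) and the fixed gap threshold are assumptions for illustration, not the photometer's actual statistics.

```python
from itertools import combinations

def separable_pairs(signatures, min_gap=0.1):
    """Count species pairs whose filter-set signatures differ by more
    than min_gap in at least one channel. With n species there are
    n*(n-1)/2 pairs to test (465 for n=31, as in the abstract)."""
    count = 0
    for a, b in combinations(signatures, 2):
        if any(abs(x - y) > min_gap
               for x, y in zip(signatures[a], signatures[b])):
            count += 1
    return count
```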
Emission rates of sulfur dioxide, trace gases and metals from Mount Erebus, Antarctica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kyle, P.R.; Meeker, K.; Finnegan, D.
1990-11-01
SO{sub 2} emission rates have been measured annually since 1983 at Mount Erebus, Antarctica by correlation spectrometer (COSPEC V). Following a 4 month period of sustained strombolian activity in late 1984, SO{sub 2} emissions declined from 230 Mg/day in 1983 to 25 Mg/day and then slowly increased from 16 Mg/day in 1985 to 51 Mg/day in 1987. Nine sets of filter packs containing particle and {sup 7}LiOH-treated filters were collected in the plume in 1986 and analyzed by neutron activation. Using the COSPEC data and the element/S ratios measured on the filters, emission rates have been determined for trace gases and metals. The authors infer HCl and HF emissions in 1983 to be about 1200 and 500 Mg/day, respectively. Mount Erebus has therefore been an important source of halogens to the Antarctic atmosphere and could be responsible for excess Cl found in central Antarctic snow.
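The scaling from COSPEC SO2 rates to other plume species via the filter-pack element/S ratios reduces to simple arithmetic. This sketch assumes the ratio is a mass ratio of element to sulfur; molar masses and the conversion convention are my assumptions, not values from the paper.

```python
# Assumed molar masses (g/mol)
M_S, M_SO2 = 32.06, 64.06

def element_emission(so2_rate_mg_per_day, element_to_S_mass_ratio):
    """Scale a COSPEC SO2 emission rate to another plume species using
    the element/S mass ratio measured on the filter packs."""
    s_rate = so2_rate_mg_per_day * (M_S / M_SO2)  # SO2 mass -> S mass
    return s_rate * element_to_S_mass_ratio
```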
Linear phase compressive filter
McEwan, T.E.
1995-06-06
A phase-linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low-pass filters, with each low-pass filter having a series-coupled inductance (L) and, to ground, a reverse-biased, voltage-dependent varactor diode which acts as a variable capacitance (C). L and C values are set to levels which correspond to a conventional phase-linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line, and capacitance is mapped from the linear case using a large-signal equivalent of a nonlinear transmission line. 2 figs.
Stewart, Barclay T; Gyedu, Adam; Quansah, Robert; Addo, Wilfred Larbi; Afoko, Akis; Agbenorku, Pius; Amponsah-Manu, Forster; Ankomah, James; Appiah-Denkyira, Ebenezer; Baffoe, Peter; Debrah, Sam; Donkor, Peter; Dorvlo, Theodor; Japiong, Kennedy; Kushner, Adam L; Morna, Martin; Ofosu, Anthony; Oppong-Nketia, Victor; Tabiri, Stephen; Mock, Charles
2016-01-01
Prospective clinical audit of trauma care improves outcomes for the injured in high-income countries (HICs). However, equivalent, context-appropriate audit filters for use in low- and middle-income country (LMIC) district-level hospitals have not been well established. We aimed to develop context-appropriate trauma care audit filters for district-level hospitals in Ghana, as well as other LMICs more broadly. Consensus on trauma care audit filters was built between twenty panellists using a Delphi technique with four anonymous, iterative surveys designed to elicit: (i) trauma care processes to be measured; (ii) important features of audit filters for the district-level hospital setting; and (iii) potentially useful filters. Filters were ranked on a scale from 0 to 10 (10 being very useful). Consensus was measured with the average percent majority opinion (APMO) cut-off rate. Target consensus was defined a priori as a median rank of ≥9 for each filter and an APMO cut-off rate of ≥0.8. Panellists agreed on trauma care processes to target (e.g. triage, phases of trauma assessment, early referral if needed) and specific features of filters for district-level hospital use (e.g. simplicity, no assumption of resource capacity). The APMO cut-off rate increased successively: Round 1, 0.58; Round 2, 0.66; Round 3, 0.76; and Round 4, 0.82. After Round 4, target consensus on 22 trauma care and referral-specific filters was reached. Example filters include: for triage, vital signs are recorded within 15 min of arrival (must include breathing assessment, heart rate, blood pressure, and oxygen saturation if available); for circulation, a large-bore IV was placed within 15 min of patient arrival; and for referral, if referral is activated, the referring clinician and receiving facility communicate by phone or radio prior to transfer. This study proposes trauma care audit filters appropriate for LMIC district-level hospitals.
Given the successes of similar filters in HICs and obstetric care filters in LMICs, the collection and reporting of prospective trauma care audit filters may be an important step towards improving care for the injured at district-level hospitals in LMICs. Copyright © 2015 Elsevier Ltd. All rights reserved.
Solid colloidal optical wavelength filter
Alvarez, Joseph L.
1992-01-01
A solid colloidal optical wavelength filter includes a suspension of spherical particles dispersed in a coagulable medium such as a setting plastic. The filter is formed by suspending spherical particles in a coagulable medium; agitating the particles and coagulable medium to produce an emulsion of particles suspended in the coagulable medium; and allowing the coagulable medium and suspended emulsion of particles to cool.
ERIC Educational Resources Information Center
Recker, Mimi M.; Walker, Andrew; Lawless, Kimberly
2003-01-01
Examines results from one pilot study and two empirical studies of a collaborative filtering system applied in higher education settings. Explains the use of collaborative filtering in electronic commerce and suggests it can be adapted to education to help find useful Web resources and to bring people together with similar interests and beliefs.…
40 CFR 1066.815 - Exhaust emission test procedures for FTP testing.
Code of Federal Regulations, 2014 CFR
2014-07-01
... must meet the requirements related to filter face velocity as described in 40 CFR 1065.170(c)(1)(vi..., set the filter face velocity to a weighting target of 1.0 to meet the requirements of 40 CFR 1065.170(c)(1)(vi). Allow filter face velocity to decrease as a percentage of the weighting factor if the...
Study of the use of a nonlinear, rate limited, filter on pilot control signals
NASA Technical Reports Server (NTRS)
Adams, J. J.
1977-01-01
The use of a filter on the pilot's control output could improve the performance of the pilot-aircraft system. What is needed is a filter with a sharp high frequency cut-off, no resonance peak, and a minimum of lag at low frequencies. The present investigation studies the usefulness of a nonlinear, rate limited, filter in performing the needed function. The nonlinear filter is compared with a linear, first order filter, and no filter. An analytical study using pilot models and a simulation study using experienced test pilots were performed. The results showed that the nonlinear filter does promote quick, steady maneuvering. It is shown that the nonlinear filter attenuates the high frequency remnant and adds less phase lag to the low frequency signal than does the linear filter. It is also shown that the rate limit in the nonlinear filter can be set to be too restrictive, causing an unstable pilot-aircraft system response.
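The behavior described above can be illustrated with a minimal sketch: a rate limiter clamps the per-sample change of the output, which passes slow commands nearly unchanged while strongly attenuating a fast remnant. The rate limit, time step, and test signal below are assumptions for illustration, not values from the study.

```python
import math

def rate_limited_filter(signal, rate_limit, dt):
    """Follow the input, but never change faster than rate_limit per second."""
    out = [signal[0]]
    for x in signal[1:]:
        step = x - out[-1]
        max_step = rate_limit * dt
        # Clamping the per-sample change is the nonlinearity.
        step = max(-max_step, min(max_step, step))
        out.append(out[-1] + step)
    return out

def first_order_filter(signal, tau, dt):
    """Linear first-order lag, for comparison."""
    out = [signal[0]]
    a = dt / (tau + dt)
    for x in signal[1:]:
        out.append(out[-1] + a * (x - out[-1]))
    return out

dt = 0.01
# Step command at sample 10 with a 20-Hz "remnant" oscillation riding on it.
signal = [1.0 + 0.2 * math.sin(2 * math.pi * 20 * i * dt) if i >= 10 else 0.0
          for i in range(200)]

limited = rate_limited_filter(signal, rate_limit=5.0, dt=dt)
linear = first_order_filter(signal, tau=0.1, dt=dt)
```

The rate-limited output tracks the step but cannot follow the fast oscillation, whose peak slew rate exceeds the limit, so the remnant is attenuated without the phase lag a linear lag of comparable attenuation would add.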
Finessing filter scarcity problem in face recognition via multi-fold filter convolution
NASA Astrophysics Data System (ADS)
Low, Cheng-Yaw; Teoh, Andrew Beng-Jin
2017-06-01
The deep convolutional neural networks for face recognition, from DeepFace to the recent FaceNet, demand a sufficiently large volume of filters for feature extraction, in addition to being deep. The shallow filter-bank approaches, e.g., principal component analysis network (PCANet), binarized statistical image features (BSIF), and other analogous variants, endure the filter scarcity problem that not all PCA and ICA filters available are discriminative enough to abstract noise-free features. This paper extends our previous work on multi-fold filter convolution (ℳ-FFC), where the pre-learned PCA and ICA filter sets are exponentially diversified by ℳ folds to instantiate PCA, ICA, and PCA-ICA offspring. The experimental results unveil that the 2-FFC operation alleviates the filter scarcity problem. The 2-FFC descriptors are also evidenced to be superior to those of PCANet, BSIF, and other face descriptors, in terms of rank-1 identification rate (%).
Learned filters for object detection in multi-object visual tracking
NASA Astrophysics Data System (ADS)
Stamatescu, Victor; Wong, Sebastien; McDonnell, Mark D.; Kearney, David
2016-05-01
We investigate the application of learned convolutional filters in multi-object visual tracking. The filters were learned in both a supervised and unsupervised manner from image data using artificial neural networks. This work follows recent results in the field of machine learning that demonstrate the use of learned filters for enhanced object detection and classification. Here we employ a track-before-detect approach to multi-object tracking, where tracking guides the detection process. The object detection provides a probabilistic input image calculated by selecting from features obtained using banks of generative or discriminative learned filters. We present a systematic evaluation of these convolutional filters using a real-world data set that examines their performance as generic object detectors.
Editorial: Reviewer Selection Process and New Areas of Expertise in GEMS
NASA Technical Reports Server (NTRS)
Liemohn, Michael W.; Balikhin, Michael; Kepko, Larry; Rodger, Alan; Wang, Yuming
2016-01-01
One method of selecting potential reviewers for papers submitted to the Journal of Geophysical Research Space Physics is to filter the user database within the Geophysical Electronic Manuscript System (GEMS) by areas of expertise. The list of these areas in GEMS can be self selected by users in their profile settings. The Editors have added 18 new entries to this list, an increase of 33% over the previous 55 entries. All space physicists are strongly encouraged to update their profile settings in GEMS, especially their areas of expertise selections, and details of how to do this are provided.
A collaborative filtering recommendation algorithm based on weighted SimRank and social trust
NASA Astrophysics Data System (ADS)
Su, Chang; Zhang, Butao
2017-05-01
Collaborative filtering is one of the most widely used recommendation technologies, but the data sparsity and cold start problems of collaborative filtering algorithms are difficult to solve effectively. In order to alleviate the problem of data sparsity in collaborative filtering algorithms, firstly, a weighted improved SimRank algorithm is proposed to compute the rating similarity between users in the rating data set. The improved SimRank can find more nearest neighbors for target users according to the transmissibility of rating similarity. Then, we build a trust network and introduce the calculation of trust degree in the trust relationship data set. Finally, we combine rating similarity and trust to build a comprehensive similarity in order to find more appropriate nearest neighbors for the target user. Experimental results show that the algorithm proposed in this paper effectively improves the recommendation precision of the collaborative filtering algorithm.
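The final step described above — blending rating similarity with trust into one comprehensive similarity before neighbor selection — can be sketched as follows. The blend weight alpha and the toy candidate data are assumptions for illustration; the paper's weighted SimRank computation itself is not reproduced here.

```python
def comprehensive_similarity(rating_sim, trust, alpha=0.5):
    """Linear blend of rating similarity and trust degree, both in [0, 1]."""
    return alpha * rating_sim + (1 - alpha) * trust

# Toy neighbor candidates for one target user: (user, rating_sim, trust).
candidates = [("u1", 0.9, 0.1), ("u2", 0.5, 0.8), ("u3", 0.2, 0.2)]

# Rank candidates by the blended score to pick nearest neighbors.
ranked = sorted(candidates,
                key=lambda c: comprehensive_similarity(c[1], c[2]),
                reverse=True)
nearest = [user for user, _, _ in ranked]
```

With equal weighting, a candidate with moderate rating similarity but high trust ("u2") outranks one with high rating similarity but low trust ("u1"), which is the mechanism by which trust compensates for sparse rating data.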
Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Nagchaudhuri, Abhijit
1998-01-01
This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrumental pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
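The on-orbit identification step described above — adapting transversal filter weights until they match an unknown transfer function — can be sketched with a minimal LMS loop. The plant taps, step size, and excitation below are toy assumptions, not values from the paper.

```python
import random

random.seed(0)
plant = [0.5, -0.3, 0.2]          # unknown FIR system to identify
n_taps = len(plant)
w = [0.0] * n_taps                # adaptive transversal filter weights
mu = 0.05                         # LMS step size

x_hist = [0.0] * n_taps           # most recent input first
for _ in range(5000):
    x = random.uniform(-1, 1)
    x_hist = [x] + x_hist[:-1]
    d = sum(p * xi for p, xi in zip(plant, x_hist))   # desired: plant output
    y = sum(wi * xi for wi, xi in zip(w, x_hist))     # adaptive filter output
    e = d - y
    # LMS update: nudge each weight along the instantaneous gradient.
    w = [wi + mu * e * xi for wi, xi in zip(w, x_hist)]
```

After convergence the weight vector approximates the plant's impulse response; the Filtered-X variant mentioned in the abstract adds a filtered copy of the reference signal in the update path to account for the secondary (actuation) dynamics.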
Flatness-based model inverse for feed-forward braking control
NASA Astrophysics Data System (ADS)
de Vries, Edwin; Fehn, Achim; Rixen, Daniel
2010-12-01
For modern cars an increasing number of driver assistance systems have been developed. Some of these systems interfere/assist with the braking of a car. Here, a brake actuation algorithm for each individual wheel that can respond to both driver inputs and artificial vehicle deceleration set points is developed. The algorithm consists of a feed-forward control that ensures, within the modelled system plant, the optimal behaviour of the vehicle. For the quarter-car model with LuGre-tyre behavioural model, an inverse model can be derived using v_x as the 'flat output', that is, the input for the inverse model. A number of time derivatives of the flat output are required to calculate the model input, brake torque. Polynomial trajectory planning provides the needed time derivatives of the deceleration request. The transition time of the planning can be adjusted to meet actuator constraints. It is shown that the output of the trajectory planning would ripple and introduce a time delay when a gradual continuous increase of deceleration is requested by the driver. Derivative filters are then considered: the Bessel filter provides the best symmetry in its step response. A filter of the same order and with negative real poles is also used, exhibiting neither overshoot nor ringing. For these reasons, the 'real-poles' filter would be preferred over the Bessel filter. The half-car model can be used to predict the change in normal load on the front and rear axle due to the pitching of the vehicle. The anticipated dynamic variation of the wheel load can be included in the inverse model, even though it is based on a quarter-car. Brake force distribution proportional to normal load is established. It provides more natural and simpler equations than a fixed force ratio strategy.
Learnable despeckling framework for optical coherence tomography images
NASA Astrophysics Data System (ADS)
Adabi, Saba; Rashedi, Elaheh; Clayton, Anne; Mohebbi-Kalkhoran, Hamed; Chen, Xue-wen; Conforto, Silvia; Nasiriavanaki, Mohammadreza
2018-01-01
Optical coherence tomography (OCT) is a prevalent, interferometric, high-resolution imaging method with broad biomedical applications. Nonetheless, OCT images suffer from an artifact called speckle, which degrades the image quality. Digital filters offer an opportunity for image improvement in clinical OCT devices, where hardware modification to enhance images is expensive. To reduce speckle, a wide variety of digital filters have been proposed; selecting the most appropriate filter for an OCT image/image set is a challenging decision, especially in dermatology applications of OCT where a wide variety of tissues is imaged. To tackle this challenge, we propose an expandable learnable despeckling framework, which we call LDF. LDF decides which speckle reduction algorithm is most effective on a given image by learning a figure of merit (FOM) as a single quantitative image assessment measure. LDF is learnable, which means that when implemented on an OCT machine, it is retrained on each given image/image set and its performance is improved. Also, LDF is expandable, meaning that any despeckling algorithm can easily be added to it. The architecture of LDF includes two main parts: (i) an autoencoder neural network and (ii) a filter classifier. The autoencoder learns the FOM based on several quality assessment measures obtained from the OCT image including signal-to-noise ratio, contrast-to-noise ratio, equivalent number of looks, edge preservation index, and mean structural similarity index. Subsequently, the filter classifier identifies the most efficient filter from the following categories: (a) sliding window filters including median, mean, and symmetric nearest neighborhood, (b) adaptive statistical-based filters including Wiener, homomorphic Lee, and Kuwahara, and (c) edge preserved patch or pixel correlation-based filters including nonlocal mean, total variation, and block matching three-dimensional filtering.
Fan filters, the 3-D Radon transform, and image sequence analysis.
Marzetta, T L
1994-01-01
This paper develops a theory for the application of fan filters to moving objects. In contrast to previous treatments of the subject based on the 3-D Fourier transform, simplicity and insight are achieved by using the 3-D Radon transform. With this point of view, the Radon transform decomposes the image sequence into a set of plane waves that are parameterized by a two-component slowness vector. Fan filtering is equivalent to a multiplication in the Radon transform domain by a slowness response function, followed by an inverse Radon transform. The plane wave representation of a moving object involves only a restricted set of slownesses such that the inner product of the plane wave slowness vector and the moving object velocity vector is equal to one. All of the complexity in the application of fan filters to image sequences results from the velocity-slowness mapping not being one-to-one; therefore, the filter response cannot be independently specified at all velocities. A key contribution of this paper is to elucidate both the power and the limitations of fan filtering in this new application. A potential application of 3-D fan filters is in the detection of moving targets in clutter and noise. For example, an appropriately designed fan filter can reject perfectly all moving objects whose speed, irrespective of heading, is less than a specified cut-off speed, with only minor attenuation of significantly faster objects. A simple geometric construction determines the response of the filter for speeds greater than the cut-off speed.
Psycho-physiological training approach for amputee rehabilitation.
Dhal, Chandan; Wahi, Akshat
2015-01-01
Electromyography (EMG) signals are very noisy and difficult to acquire. Conventional techniques involve amplification and filtering through analog circuits, which makes the system very unstable. Surface EMG signals lie in the frequency range of 6 Hz to 600 Hz, and the dominant range is from 20 Hz to 150 Hz. Our project aimed to analyze an EMG signal effectively over its complete frequency range. To remove these defects, we designed what we think is an easy, effective, and reliable signal processing technique. We performed spectrum analysis so that all processing, such as amplification, filtering, and thresholding, could be done on an Arduino Uno board, hence removing the need for analog amplifiers and filtering circuits, which have stability issues. The conversion of any signal from the time domain to the frequency domain gives detailed data about the signal set. Our main aim is to use this data for an alternative methodology for rehabilitation called a psychophysiological approach to rehabilitation in prosthesis, which can reduce the cost of the myoelectric arm, as well as increase its efficiency. This method allows the user to gain control over their muscle sets in a less stressful environment. Further, we have also described how our approach is viable and can benefit the rehabilitation process. We used our DSP EMG signals to play an online game and showed how this approach can be used in rehabilitation.
Wu, Di
2017-01-01
The selectivity filter of the KcsA K+ channel has two typical conformations-the conductive and the collapsed conformations, respectively. The transition from the conductive to the collapsed filter conformation can represent the process of inactivation that depends on many environmental factors. Water molecules permeating behind the filter can influence the collapsed filter stability. Here we perform the molecular dynamics simulations to study the stability of the collapsed filter of the KcsA K+ channel under the different water patterns. We find that the water patterns are dynamic behind the collapsed filter and the filter stability increases with the increasing number of water molecules. In addition, the stability increases significantly when water molecules distribute uniformly behind the four monomeric filter chains, and the stability is compromised if water molecules only cluster behind one or two adjacent filter chains. The altered filter stabilities thus suggest that the collapsed filter can inactivate gradually under the dynamic water patterns. We also demonstrate how the different water patterns affect the filter recovery from the collapsed conformation.
Protein Adsorption to In-Line Filters of Intravenous Administration Sets.
Besheer, Ahmed
2017-10-01
Ensuring compatibility of administered therapeutic proteins with intravenous administration sets is an important regulatory requirement. A low-dose recovery during administration of low protein concentrations is among the commonly observed incompatibilities, and it is mainly due to adsorption to in-line filters. To better understand this phenomenon, we studied the adsorption of 4 different therapeutic proteins (2 IgG1s, 1 IgG4, and 1 Fc fusion protein) diluted to 0.01 mg/mL in 5% glucose (B. Braun EcoFlac; B. Braun Melsungen AG, Melsungen, Germany) or 0.9% sodium chloride (NaCl; Freeflex; Fresenius Kabi, Friedberg, Germany) solutions to 8 in-line filters (5 positively charged and 3 neutral filters made of different polymers and by different suppliers). The results show certain patterns of protein adsorption, which depend to a large extent on the dilution solution and filter material, and to a much lower extent on the proteins' biophysical properties. Investigation of the filter membranes' zeta potential showed a correlation between the observed adsorption pattern in 5% glucose solution and the filter's surface charge, with higher protein adsorption for the strongly negatively charged membranes. In 0.9% NaCl solution, the surface charges are masked, leading to different adsorption patterns. These results contribute to the general understanding of the protein adsorption to IV infusion filters and allow the design of more efficient compatibility studies. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Updating the OMERACT filter: core areas as a basis for defining core outcome sets.
Kirwan, John R; Boers, Maarten; Hewlett, Sarah; Beaton, Dorcas; Bingham, Clifton O; Choy, Ernest; Conaghan, Philip G; D'Agostino, Maria-Antonietta; Dougados, Maxime; Furst, Daniel E; Guillemin, Francis; Gossec, Laure; van der Heijde, Désirée M; Kloppenburg, Margreet; Kvien, Tore K; Landewé, Robert B M; Mackie, Sarah L; Matteson, Eric L; Mease, Philip J; Merkel, Peter A; Ostergaard, Mikkel; Saketkoo, Lesley Ann; Simon, Lee; Singh, Jasvinder A; Strand, Vibeke; Tugwell, Peter
2014-05-01
The Outcome Measures in Rheumatology (OMERACT) Filter provides guidelines for the development and validation of outcome measures for use in clinical research. The "Truth" section of the OMERACT Filter presupposes an explicit framework for identifying the relevant core outcomes that are universal to all studies of the effects of interventions. There is no published outline for instrument choice or development that is aimed at measuring outcome, is derived from broad consensus over its underlying philosophy, or includes a structured and documented critique. Therefore, a new proposal for defining core areas of measurement ("Filter 2.0 Core Areas of Measurement") was presented at OMERACT 11 to explore areas of consensus and to consider whether already endorsed core outcome sets fit into this newly proposed framework. Discussion groups critically reviewed the extent to which case studies of current OMERACT Working Groups complied with or negated the proposed framework, whether these observations had a more general application, and what issues remained to be resolved. Although there was broad acceptance of the framework in general, several important areas of construction, presentation, and clarity of the framework were questioned. The discussion groups and subsequent feedback highlighted 20 such issues. These issues will require resolution to reach consensus on accepting the proposed Filter 2.0 framework of Core Areas as the basis for the selection of Core Outcome Domains and hence appropriate Core Outcome Sets for clinical trials.
NASA Astrophysics Data System (ADS)
Breier, J. A.; Sheik, C. S.; Gomez-Ibanez, D.; Sayre-McCord, R. T.; Sanger, R.; Rauch, C.; Coleman, M.; Bennett, S. A.; Cron, B. R.; Li, M.; German, C. R.; Toner, B. M.; Dick, G. J.
2014-12-01
A new tool was developed for large volume sampling to facilitate marine microbiology and biogeochemical studies. It was developed for remotely operated vehicle and hydrocast deployments, and allows for rapid collection of multiple sample types from the water column and dynamic, variable environments such as rising hydrothermal plumes. It was used successfully during a cruise to the hydrothermal vent systems of the Mid-Cayman Rise. The Suspended Particulate Rosette V2 large volume multi-sampling system allows for the collection of 14 sample sets per deployment. Each sample set can include filtered material, whole (unfiltered) water, and filtrate. Suspended particulate can be collected on filters up to 142 mm in diameter and pore sizes down to 0.2 μm. Filtration is typically at flowrates of 2 L min-1. For particulate material, filtered volume is constrained only by sampling time and filter capacity, with all sample volumes recorded by digital flowmeter. The suspended particulate filter holders can be filled with preservative and sealed immediately after sample collection. Up to 2 L of whole water, filtrate, or a combination of the two, can be collected as part of each sample set. The system is constructed of plastics with titanium fasteners and nickel alloy spring loaded seals. There are no ferrous alloys in the sampling system. Individual sample lines are prefilled with filtered, deionized water prior to deployment and remain sealed unless a sample is actively being collected. This system is intended to facilitate studies concerning the relationship between marine microbiology and ocean biogeochemistry.
SigReannot-mart: a query environment for expression microarray probe re-annotations.
Moreews, François; Rauffet, Gaelle; Dehais, Patrice; Klopp, Christophe
2011-01-01
Expression microarrays are commonly used to study transcriptomes. Most of the arrays are now based on oligo-nucleotide probes. Probe design being a tedious task, it often takes place once at the beginning of the project. The oligo set is then used for several years. During this time period, the knowledge gathered by the community on the genome and the transcriptome increases and gets more precise. Therefore re-annotating the set is essential to supply the biologists with up-to-date annotations. SigReannot-mart is a query environment populated with regularly updated annotations for different oligo sets. It stores the results of the SigReannot pipeline that has mainly been used on farm and aquaculture species. It permits easy extraction in different formats using filters. It is used to compare probe sets on different criteria, to choose the set for a given experiment, or to mix probe sets in order to create a new one.
Dense grid of narrow bandpass filters for the JST/T250 telescope: summary of results
NASA Astrophysics Data System (ADS)
Brauneck, Ulf; Sprengard, Ruediger; Bourquin, Sebastien; Marín-Franch, Antonio
2018-01-01
On the Javalambre mountain in Spain, the Centro de Estudios de Fisica del Cosmos de Aragon has set up two telescopes, the JST/T250 and the JAST/T80. The JAST/T80 telescope integrates T80Cam, a large format, single CCD camera while the JST/T250 will mount the JPCam instrument, a 1.2Gpix camera equipped with a 14-CCD mosaic using the new large format e2v 9.2k×9.2k 10-μm pixel detectors. Both T80Cam and JPCam integrate a large number of filters in dimensions of 106.8×106.8 mm2 and 101.7×95.5 mm2, respectively. For this instrument, SCHOTT manufactured 56 specially designed steep edged bandpass interference filters, which were recently completed. The filter set consists of bandpass filters in the range between 348.5 and 910 nm and a longpass filter at 915 nm. Most of the filters have a full-width at half-maximum (FWHM) of 14.5 nm and a blocking between 250 and 1050 nm with optical density of OD5. Absorptive color glass substrates in combination with interference filters were used to minimize residual reflection in order to avoid ghost images. In spite of containing absorptive elements, the filters show the maximum possible transmission. This was achieved by using magnetron sputtering for the filter coating process. The most important requirement for the continuous photometric survey is the tight tolerancing of the central wavelengths and FWHM of the filters. This ensures each bandpass has a defined overlap with its neighbors. A high image quality required a low transmitted wavefront error (<λ/4 locally and <λ/2 on the whole aperture), which was achieved even when combining two or three substrates. We report on the spectral and interferometric results measured on the whole set of filters.
40 CFR 86.112-91 - Weighing chamber (or room) and microgram balance specifications.
Code of Federal Regulations, 2014 CFR
2014-07-01
... temperature of the chamber in which the particulate filters are conditioned and weighed shall be maintained to within ±10 °F (6 °C) of a set point between 68 °F (20 °C) and 86 °F (30 °C) during all filter conditioning and filter weighing. A continuous recording of the temperature is required. (2) Humidity. The...
40 CFR 86.112-91 - Weighing chamber (or room) and microgram balance specifications.
Code of Federal Regulations, 2012 CFR
2012-07-01
... temperature of the chamber in which the particulate filters are conditioned and weighed shall be maintained to within ±10 °F (6 °C) of a set point between 68 °F (20 °C) and 86 °F (30 °C) during all filter conditioning and filter weighing. A continuous recording of the temperature is required. (2) Humidity. The...
40 CFR 86.112-91 - Weighing chamber (or room) and microgram balance specifications.
Code of Federal Regulations, 2010 CFR
2010-07-01
... temperature of the chamber in which the particulate filters are conditioned and weighed shall be maintained to within ±10 °F (6 °C) of a set point between 68 °F (20 °C) and 86 °F (30 °C) during all filter conditioning and filter weighing. A continuous recording of the temperature is required. (2) Humidity. The...
40 CFR 86.112-91 - Weighing chamber (or room) and microgram balance specifications.
Code of Federal Regulations, 2011 CFR
2011-07-01
... temperature of the chamber in which the particulate filters are conditioned and weighed shall be maintained to within ±10 °F (6 °C) of a set point between 68 °F (20 °C) and 86 °F (30 °C) during all filter conditioning and filter weighing. A continuous recording of the temperature is required. (2) Humidity. The...
40 CFR 86.112-91 - Weighing chamber (or room) and microgram balance specifications.
Code of Federal Regulations, 2013 CFR
2013-07-01
... temperature of the chamber in which the particulate filters are conditioned and weighed shall be maintained to within ±10 °F (6 °C) of a set point between 68 °F (20 °C) and 86 °F (30 °C) during all filter conditioning and filter weighing. A continuous recording of the temperature is required. (2) Humidity. The...
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program DEKFIS (discrete extended Kalman filter/smoother), formulated for aircraft and helicopter state estimation and data consistency, is described. DEKFIS is set up to pre-process raw test data by removing biases, correcting scale factor errors and providing consistency with the aircraft inertial kinematic equations. The program implements an extended Kalman filter/smoother using the Friedland-Duffy formulation.
Auer, Lucas; Mariadassou, Mahendra; O'Donohue, Michael; Klopp, Christophe; Hernandez-Raquet, Guillermina
2017-11-01
Next-generation sequencing technologies give access to large sets of data, which are extremely useful in the study of microbial diversity based on the 16S rRNA gene. However, the production of such large data sets is not only marred by technical biases and sequencing noise but also increases computation time and disc space use. To improve the accuracy of OTU predictions and overcome computation, storage, and noise issues, recent studies and tools suggested removing all single reads and low abundant OTUs, considering them as noise. Although the effect of applying an OTU abundance threshold on α- and β-diversity has been well documented, the consequences of removing single reads have been poorly studied. Here, we test the effect of singleton read filtering (SRF) on microbial community composition using in silico simulated data sets as well as sequencing data from synthetic and real communities displaying different levels of diversity and abundance profiles. Scalability to large data sets is also assessed using a complete MiSeq run. We show that SRF drastically reduces the chimera content and computational time, enabling the analysis of a complete MiSeq run in just a few minutes. Moreover, SRF accurately determines the actual community diversity: the differences in α- and β-community diversity obtained with SRF and standard procedures are much smaller than the intrinsic variability of technical and biological replicates. © 2017 John Wiley & Sons Ltd.
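The core of singleton read filtering as described above — discard every sequence observed exactly once before downstream clustering — reduces to a count-and-threshold step. The read table below is a made-up toy example, not data from the study.

```python
from collections import Counter

# Toy read set: "CCCC" is a singleton and is treated as likely noise/chimera.
reads = ["ACGT", "ACGT", "ACGA", "TTGG", "TTGG", "TTGG", "CCCC", "ACGA"]
counts = Counter(reads)

# Keep only sequences seen at least twice; singletons are removed.
min_abundance = 2
filtered = {seq: n for seq, n in counts.items() if n >= min_abundance}
```

Because chimeric sequences are typically rare and unique, this single threshold removes a large fraction of them, which is why the abstract reports both a lower chimera content and a much smaller computational load.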
On Applications of Pyramid Doubly Joint Bilateral Filtering in Dense Disparity Propagation
NASA Astrophysics Data System (ADS)
Abadpour, Arash
2014-06-01
Stereopsis is the basis for numerous tasks in machine vision, robotics, and 3D data acquisition and processing. In order for the subsequent algorithms to function properly, it is important that an affordable method exists that, given a pair of images taken by two cameras, can produce a representation of disparity or depth. This topic has been an active research field since the early days of work on image processing problems and rich literature is available on the topic. Joint bilateral filters have been recently proposed as a more affordable alternative to anisotropic diffusion. This class of image operators utilizes correlation in multiple modalities for purposes such as interpolation and upscaling. In this work, we develop the application of bilateral filtering for converting a large set of sparse disparity measurements into a dense disparity map. This paper develops novel methods for utilizing bilateral filters in joint, pyramid, and doubly joint settings, for purposes including missing value estimation and upscaling. We utilize images of natural and man-made scenes in order to exhibit the possibilities offered through the use of pyramid doubly joint bilateral filtering for stereopsis.
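A one-dimensional sketch of the joint bilateral idea used above for densifying sparse disparity: a missing disparity is filled from measured neighbors, weighted both by spatial distance and by similarity in a guidance (intensity) signal, so the fill does not bleed across intensity edges. The kernel widths and the toy signals are assumptions for illustration only.

```python
import math

intensity = [10, 10, 10, 200, 200, 200]        # guidance signal with one edge
disparity = [5.0, None, 5.0, None, 9.0, 9.0]   # sparse disparity measurements

def fill(i, sigma_s=2.0, sigma_r=30.0):
    """Joint bilateral estimate at position i from all measured neighbors."""
    num = den = 0.0
    for j, d in enumerate(disparity):
        if d is None:
            continue
        # Spatial closeness times guidance (range) similarity.
        w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
             * math.exp(-((intensity[i] - intensity[j]) ** 2) / (2 * sigma_r ** 2)))
        num += w * d
        den += w
    return num / den

dense = [d if d is not None else fill(i) for i, d in enumerate(disparity)]
```

The gap on the dark side of the edge is filled from the dark-side measurements (near 5.0) and the gap on the bright side from the bright-side ones (near 9.0), even though each gap has spatially close neighbors on both sides.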
Isose, Sagiri; Misawa, Sonoko; Sonoo, Masahiro; Shimuzu, Toshio; Oishi, Chizuko; Shibuya, Kazumoto; Nasu, Saiko; Sekiguchi, Yukari; Mitsuma, Satsuki; Beppu, Minako; Omori, Shigeki; Komori, Tetsuo; Kokubun, Norito; Inaba, Akira; Hirashima, Fumiko; Kuwabara, Satoshi
2014-10-01
In current electrodiagnostic criteria for chronic inflammatory demyelinating polyneuropathy, the cutoff values of distal compound muscle action potential (DCMAP) duration are defined using an electromyogram low-cut filter setting of 20 Hz. We aimed to assess the effects of the low-cut filter on DCMAP duration (10 vs. 20 Hz). We prospectively measured DCMAP duration in 130 normal controls and 42 patients fulfilling the European Federation of Neurological Societies/Peripheral Nerve Society diagnostic criteria for typical chronic inflammatory demyelinating polyneuropathy. Distal compound muscle action potential duration was significantly shorter with 20-Hz than with 10-Hz filtering. When the cutoff values were defined as the upper limit of normal (ULN, mean + 2.5 SD), the sensitivity/specificity was 67%/95% in 10-Hz recordings and 69%/95% in 20-Hz recordings. This diagnostic accuracy was similar to that defined by receiver operating characteristic analyses. Distal compound muscle action potential duration is significantly affected by the low-cut electromyogram filter setting, but with cutoffs defined at either 10 or 20 Hz, the diagnostic accuracy is similar.
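Why a higher low-cut (high-pass) setting shortens a measured duration can be seen with a first-order RC high-pass filter: the 20-Hz setting attenuates the slow tail of a pulse more strongly than 10 Hz. The pulse shape, sampling rate, and duration criterion below are illustrative assumptions, not the study's waveforms.

```python
import math

def high_pass(x, fc, fs):
    """First-order RC high-pass (low-cut) filter in difference-equation form:
    y[n] = a * (y[n-1] + x[n] - x[n-1]),  a = 1 / (1 + 2*pi*fc/fs)."""
    a = 1.0 / (1.0 + 2.0 * math.pi * fc / fs)
    y = [x[0]]
    for n in range(1, len(x)):
        y.append(a * (y[-1] + x[n] - x[n - 1]))
    return y

def duration_above(y, frac=0.1):
    """Number of samples where the response exceeds frac of its peak."""
    peak = max(y)
    return sum(1 for v in y if v > frac * peak)

fs = 10000.0  # sampling rate in Hz (illustrative)
pulse = [math.exp(-(i / fs) / 0.01) for i in range(500)]  # slow decaying pulse
d10 = duration_above(high_pass(pulse, 10.0, fs))
d20 = duration_above(high_pass(pulse, 20.0, fs))
# The 20-Hz low-cut setting yields a shorter measured duration than 10 Hz.
```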
Application of Consider Covariance to the Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Lundberg, John B.
1996-01-01
The extended Kalman filter (EKF) is the basis for many applications of filtering theory to real-time problems where estimates of the state of a dynamical system are to be computed based upon some set of observations. The form of the EKF may vary somewhat from one application to another, but the fundamental principles are typically unchanged among these various applications. As is the case in many filtering applications, models of the dynamical system (differential equations describing the state variables) and models of the relationship between the observations and the state variables are created. These models typically employ a set of constants whose values are established by means of theory or experimental procedure. Since the estimates of the state are formed assuming that the models are perfect, any modeling errors will affect the accuracy of the computed estimates. Note that the modeling errors may be errors of commission (errors in terms included in the model) or omission (errors in terms excluded from the model). Consequently, it becomes imperative when evaluating the performance of real-time filters to evaluate the effect of modeling errors on the estimates of the state.
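The effect of an error of omission can be demonstrated with the simplest possible case, a scalar Kalman measurement update estimating a constant. The bias value and noise settings below are hypothetical, chosen only to make the point visible:

```python
def kalman_update(x, P, z, R, H=1.0):
    """Scalar Kalman measurement update."""
    K = P * H / (H * P * H + R)
    x_new = x + K * (z - H * x)
    P_new = (1.0 - K * H) * P
    return x_new, P_new

# Truth: constant state 10.0, but every measurement carries an unmodeled
# bias b = 0.5 (an "error of omission" in the measurement model).
truth, bias, R = 10.0, 0.5, 1.0
x, P = 0.0, 100.0
for _ in range(200):
    z = truth + bias          # noise-free except the unmodeled bias
    x, P = kalman_update(x, P, z, R)
# The estimate converges to truth + bias, not truth, while the formal
# covariance P shrinks, giving unwarranted confidence in a biased answer.
```

Consider-covariance analysis addresses exactly this: it propagates the uncertainty of such unestimated ("consider") parameters so the reported covariance stays honest.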
Implementation of real-time digital signal processing systems
NASA Technical Reports Server (NTRS)
Narasimha, M.; Peterson, A.; Narayan, S.
1978-01-01
Special purpose hardware implementation of DFT computers and digital filters is considered in the light of newly introduced algorithms and IC devices. Recent work by Winograd on high-speed convolution techniques for computing short-length DFTs has motivated the development of algorithms that are more efficient than the FFT for evaluating the transform of longer sequences. Among these, prime factor algorithms appear suitable for special purpose hardware implementations. Architectural considerations in designing DFT computers based on these algorithms are discussed. With the availability of monolithic multiplier-accumulators, a direct implementation of IIR and FIR filters, using random access memories in place of shift registers, appears attractive. The memory addressing scheme involved in such implementations is discussed. A simple counter set-up to address the data memory in the realization of FIR filters is also described. The combination of a set of simple filters (weighting network) and a DFT computer is shown to realize a bank of uniform bandpass filters. The usefulness of this concept in arriving at a modular design for a million-channel spectrum analyzer, based on microprocessors, is discussed.
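The RAM-plus-counter FIR realization mentioned above can be sketched in software: instead of shifting all samples as a shift register would, a wrapping write counter overwrites the oldest slot and supplies the read addresses for the multiply-accumulate loop. This is a behavioral sketch of the addressing idea, not the hardware design from the paper.

```python
def fir_filter(samples, taps):
    """FIR filter with a RAM-like sample buffer addressed by a counter."""
    n = len(taps)
    ram = [0.0] * n          # data memory (replaces the shift register)
    write_addr = 0           # simple wrapping counter
    out = []
    for s in samples:
        ram[write_addr] = s
        acc = 0.0
        for k in range(n):   # multiply-accumulate over the delay line
            acc += taps[k] * ram[(write_addr - k) % n]
        out.append(acc)
        write_addr = (write_addr + 1) % n
    return out

# 3-tap moving average of a constant input ramps up and then settles.
y = fir_filter([3.0, 3.0, 3.0, 3.0], [1/3, 1/3, 1/3])
```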
Filter Efficiency and Pressure Testing of Returned ISS Bacterial Filter Elements (BFEs)
NASA Technical Reports Server (NTRS)
Green, Robert D.; Agui, Juan H.; Berger, Gordon M.; Vijayakumar, R.; Perry, Jay L.
2017-01-01
The air quality control equipment aboard the International Space Station (ISS) and future deep space exploration vehicles provide the vital function of maintaining a clean cabin environment for the crew and the hardware. This becomes a serious challenge in pressurized space compartments since no outside air ventilation is possible, and a larger particulate load is imposed on the filtration system due to lack of sedimentation. The ISS Environmental Control and Life Support (ECLS) system architecture in the U.S. Segment uses a distributed particulate filtration approach consisting of traditional High-Efficiency Particulate Air (HEPA) filters deployed at multiple locations in each U.S. Segment module; these filters are referred to as Bacterial Filter Elements, or BFEs. In our previous work, we presented results of efficiency and pressure drop measurements for a sample set of two returned BFEs with a service life of 2.5 years. In this follow-on work, we present similar efficiency, pressure drop, and leak test results for a larger sample set of six returned BFEs. The results of this work can aid the ISS Program in managing BFE logistics inventory through the station's planned lifetime as well as provide insight for managing filter element logistics for future exploration missions. These results also can provide meaningful guidance for particulate filter designs under consideration for future deep space exploration missions.
A user-friendly technical set-up for infrared photography of forensic findings.
Rost, Thomas; Kalberer, Nicole; Scheurer, Eva
2017-09-01
Infrared photography is of interest for use in forensic science and forensic medicine since it reveals findings that normally are almost invisible to the human eye. Originally, infrared photography was made possible by an infrared light transmission filter screwed in front of the camera objective lens. However, this set-up is associated with many drawbacks, such as the loss of the autofocus function, the need for an external infrared source, and long exposure times which make the use of a tripod necessary. These limitations have so far prevented the routine application of infrared photography in forensics. In this study the use of a professional modification inside the digital camera body was evaluated regarding camera handling and image quality. This permanent modification consisted of the replacement of the built-in infrared blocking filter by an infrared transmission filter of 700 nm and 830 nm, respectively. The application of this camera set-up for the photo-documentation of forensically relevant post-mortem findings was investigated in examples of trace evidence such as gunshot residues on the skin, in external findings, e.g. hematomas, as well as in an exemplary internal finding, i.e., Wischnewski spots in a putrefied stomach. The application of scattered light created by indirect flashlight yielded a more uniform illumination of the object, and the use of the 700 nm filter resulted in better pictures than the 830 nm filter. Compared to pictures taken under visible light, infrared photographs generally yielded better contrast. This allowed for discerning more details and revealed findings which were not visible otherwise, such as imprints on a fabric and tattoos in mummified skin. The permanent modification of a digital camera by building in a 700 nm infrared transmission filter resulted in a user-friendly and efficient set-up which qualified for use in daily forensic routine.
Main advantages were a clear picture in the viewfinder, an auto-focus usable over the whole range of infrared light, and the possibility of using short shutter speeds which allows taking infrared pictures free-hand. The proposed set-up with a modification of the camera allows a user-friendly application of infrared photography in post-mortem settings. Copyright © 2017 Elsevier B.V. All rights reserved.
A simple new filter for nonlinear high-dimensional data assimilation
NASA Astrophysics Data System (ADS)
Tödter, Julian; Kirchgessner, Paul; Ahrens, Bodo
2015-04-01
The ensemble Kalman filter (EnKF) and its deterministic variants, mostly square root filters such as the ensemble transform Kalman filter (ETKF), represent a popular alternative to variational data assimilation schemes and are applied in a wide range of operational and research activities. Their forecast step employs an ensemble integration that fully respects the nonlinear nature of the analyzed system. In the analysis step, they implicitly assume the prior state and observation errors to be Gaussian. Consequently, in nonlinear systems, the analysis mean and covariance are biased, and these filters remain suboptimal. In contrast, the fully nonlinear, non-Gaussian particle filter (PF) only relies on Bayes' theorem, which guarantees an exact asymptotic behavior, but because of the so-called curse of dimensionality it is exposed to weight collapse. This work shows how to obtain a new analysis ensemble whose mean and covariance exactly match the Bayesian estimates. This is achieved by a deterministic matrix square root transformation of the forecast ensemble, and subsequently a suitable random rotation that significantly contributes to filter stability while preserving the required second-order statistics. The forecast step remains as in the ETKF. The proposed algorithm, which is fairly easy to implement and computationally efficient, is referred to as the nonlinear ensemble transform filter (NETF). The properties and performance of the proposed algorithm are investigated via a set of Lorenz experiments. They indicate that such a filter formulation can increase the analysis quality, even for relatively small ensemble sizes, compared to other ensemble filters in nonlinear, non-Gaussian scenarios. Furthermore, localization enhances the potential applicability of this PF-inspired scheme in larger-dimensional systems. Finally, the novel algorithm is coupled to a large-scale ocean general circulation model. 
The NETF is stable, behaves reasonably and shows good performance with a realistic ensemble size. The results confirm that, in principle, it can be applied successfully and as simply as the ETKF in high-dimensional problems without further modification of the algorithm, even though it is only based on the particle weights. This proves that the suggested method constitutes a useful filter for nonlinear, high-dimensional data assimilation, and is able to overcome the curse of dimensionality even in deterministic systems.
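A minimal numpy sketch of the NETF analysis step described above, under the stated square-root form: the analysis ensemble is the weighted (Bayesian) mean plus a deterministic matrix square-root transform of the forecast anomalies. The random rotation and localization are omitted here for brevity; even so, the analysis mean and second moment match the weighted estimates exactly.

```python
import numpy as np

def netf_analysis(X, weights):
    """One NETF analysis step (no random rotation, no localization).
    X: (n, m) forecast ensemble (n state dims, m members).
    weights: particle weights from the observation likelihood."""
    n, m = X.shape
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    xbar = X @ w                                  # weighted posterior mean
    A = X - X.mean(axis=1, keepdims=True)         # forecast anomalies
    M = m * (np.diag(w) - np.outer(w, w))         # target second-moment matrix
    # symmetric matrix square root via eigendecomposition
    vals, vecs = np.linalg.eigh(M)
    W = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    return xbar[:, None] + A @ W

X = np.array([[1.0, 2.0, 3.0, 4.0],
              [0.0, 1.0, 0.0, 1.0]])
w = np.array([0.1, 0.2, 0.3, 0.4])
Xa = netf_analysis(X, w)
```

Because `M` annihilates the constant vector, the columns of `A @ W` have zero mean, so the analysis mean equals the weighted mean, and the analysis second moment reproduces the weighted covariance.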
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laboure, Vincent M., E-mail: vincent.laboure@tamu.edu; McClarren, Ryan G., E-mail: rgm@tamu.edu; Hauck, Cory D., E-mail: hauckc@ornl.gov
2016-09-15
In this work, we provide a fully-implicit implementation of the time-dependent, filtered spherical harmonics (FP_N) equations for non-linear, thermal radiative transfer. We investigate local filtering strategies and analyze the effect of the filter on the conditioning of the system, showing in particular that the filter improves the convergence properties of the iterative solver. We also investigate numerically the rigorous error estimates derived in the linear setting, to determine whether they hold also for the non-linear case. Finally, we simulate a standard test problem on an unstructured mesh and make comparisons with implicit Monte Carlo (IMC) calculations.
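As a hypothetical illustration of moment filtering in an FP_N-style method: each spherical-harmonic moment of order l is damped by a factor near 1 for low orders that falls off as l approaches N, suppressing the oscillatory artifacts of truncated P_N expansions. The exponential filter shape and its parameters below are illustrative assumptions, not the specific filter analyzed in the paper.

```python
import math

def filter_factors(N, alpha=10.0, p=4):
    """Exponential-type damping factor per moment order l = 0..N."""
    return [math.exp(-alpha * (l / (N + 1)) ** p) for l in range(N + 1)]

def apply_filter(moments, alpha=10.0, p=4):
    """Damp a list of moment coefficients, order 0 left untouched."""
    N = len(moments) - 1
    f = filter_factors(N, alpha, p)
    return [fl * m for fl, m in zip(f, moments)]

filtered = apply_filter([1.0] * 8)
# Order 0 passes unchanged; the highest orders are strongly damped.
```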
Lalander, Cecilia; Dalahmeh, Sahar; Jönsson, Håkan; Vinnerås, Björn
2013-01-01
With a growing world population, the lack of reliable water sources is becoming an increasing problem. Reusing greywater could alleviate this problem. When reusing greywater for crop irrigation it is paramount to ensure the removal of pathogenic organisms. This study compared the pathogen removal efficiency of pine bark and activated charcoal filters with that of conventional sand filters at three organic loading rates. The removal efficiency of Escherichia coli O157:H7 decreased drastically when the organic loading rate increased fivefold in the charcoal and sand filters, but increased by 2 log10 in the bark filters. The reduction in the virus model organism coliphage phiX174 remained unchanged with increasing organic loading in the charcoal and sand filters, but increased by 2 log10 in the bark filters. Thus, bark was demonstrated to be the most promising material for greywater treatment in terms of pathogen removal.
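The "2 log10" figures above refer to log-reduction values. A sketch of the calculation, with hypothetical counts rather than data from the study:

```python
import math

def log10_reduction(influent_cfu, effluent_cfu):
    """Log10 reduction value (LRV) across a filter."""
    return math.log10(influent_cfu / effluent_cfu)

lrv = log10_reduction(1_000_000, 10_000)
# 1e6 -> 1e4 organisms is a 2-log10 (99%) reduction.
```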
Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm.
Zhang, Man; Wang, Guanyong; Zhang, Lei
2017-10-26
Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, the robustness of image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), declines when strong motion errors are present in the coarse-focused image. In this case, in order to capture the complete motion blurring function within each image block, both the block size and the overlapped part must be extended, which inevitably degrades efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA handles the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse-resolution image via the back-projection integral. Then, the sub-aperture images are fused together in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By abandoning the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed algorithm.
SU-E-J-36: Comparison of CBCT Image Quality for Manufacturer Default Imaging Modes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, G
Purpose: CBCT is being increasingly used in patient setup for radiotherapy. Often the manufacturer default scan modes are used for performing these CBCT scans with the assumption that they are the best options. To quantitatively assess the image quality of these scan modes, all of the scan modes were tested, as well as options within the reconstruction algorithm. Methods: A CatPhan 504 phantom was scanned on a TrueBeam linear accelerator using the manufacturer scan modes (FSRT Head, Head, Image Gently, Pelvis, Pelvis Obese, Spotlight, & Thorax). The Head mode scan was then reconstructed multiple times with all filter options (Smooth, Standard, Sharp, & Ultra Sharp) and all ring suppression options (Disabled, Weak, Medium, & Strong). An open-source ImageJ tool was created for analyzing the CatPhan 504 images. Results: The MTF curve was primarily dictated by the voxel size and the filter used in the reconstruction algorithm. The filters also impact the image noise. The CNR was worst for the Image Gently mode, followed by FSRT Head and Head. The sharper the filter, the worse the CNR. HU varied significantly between scan modes: Pelvis Obese had lower HU values than expected, while the Image Gently mode had higher HU values than expected. If a therapist tried to use preset window and level settings, they would not show the desired tissue for some scan modes. Conclusion: Knowing the image quality of the preset scan modes will enable users to better optimize their setup CBCT. Evaluation of the scan mode image quality could improve setup efficiency and lead to better treatment outcomes.
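One common definition of the contrast-to-noise ratio used in phantom analyses like this is the difference of mean ROI values divided by the background noise. A sketch with hypothetical pixel values, not measurements from the study:

```python
import statistics

def cnr(roi_a, roi_b, background):
    """CNR = |mean(A) - mean(B)| / std(background)."""
    noise = statistics.pstdev(background)
    return abs(statistics.mean(roi_a) - statistics.mean(roi_b)) / noise

value = cnr([110, 112, 108, 110],   # insert ROI pixel values
            [90, 91, 89, 90],       # adjacent material ROI
            [50, 52, 48, 50])       # uniform background ROI
```

A sharper reconstruction filter raises the background standard deviation, which is why CNR worsens even when the ROI means are unchanged.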
Gaseous microemboli in a pediatric bypass circuit with an unprimed venous line: an in vitro study.
Hudacko, Andrea; Sievert, Alicia; Sistino, Joseph
2009-09-01
Miniaturizing cardiopulmonary bypass (CPB) circuits to reduce hemodilution and allogenic blood product administration is common in cardiac surgery. One major concern associated with smaller CPB circuits is a possible increase in gaseous microemboli (GME) sent to the cerebral vasculature, which is exacerbated by vacuum-assisted venous drainage (VAVD). The use of VAVD has increased with smaller venous line diameter and venous cannulae. This study examines the effects of CPB initiation with an unprimed venous line and VAVD in a pediatric circuit. A CPB circuit was set up with reservoir, oxygenator, and arterial filter with a bag reservoir to simulate the patient. All trials were done in vitro, and GME were measured using the EDAC Quantifier by Luna Innovations. EDAC sensors were placed proximal and distal to the oxygenator and distal to the arterial filter. Group 1 was the control group with no VAVD and a primed venous line. Groups 2, 3, and 4 used an unprimed venous line and VAVD of -40, -20, and -10 mmHg, respectively. Total microemboli counts and total embolic load in micrometers were measured at each sensor. Groups 2 (12,379.00 +/- 3180.37) and 3 (8296.67 +/- 2818.76) had significantly more microemboli than group 1 (923.33 +/- 796.08, p < .05) at the pre-oxygenator sensor. Group 2 (57.33 +/- 25.01, p < .05) had significantly more microemboli than group 1 (5.33 +/- 3.21) at the post-oxygenator sensor. No other findings were statistically significant. The results suggest that, if an oxygenator and arterial filter with sufficient air handling capabilities are used, this method to reduce prime volume may not increase GME in the arterial line distal to the arterial filter.
Marques, Bruna; Calado, Ricardo; Lillebø, Ana I
2017-12-01
The main objective of this study was to test an innovative biomitigation approach, where polychaete-assisted (Hediste diversicolor) sand filters were combined with the production of Halimione portulacoides in aquaponics, to remediate an organic-rich effluent generated by a super-intensive fish farm operating a land-based RAS (recirculating aquaculture system). The set-up included four different experimental combinations that were periodically monitored for 5 months. After this period, polychaete-assisted sand filters had reduced the percentage of OM by 70%, and average densities increased from ≈400 ind. m⁻² to 7000 ind. m⁻². H. portulacoides in aquaponics contributed to an average DIN (dissolved inorganic nitrogen) decrease of 65%, which increased to 67% when preceded by filter tanks stocked with polychaetes. From May until October (5 months) halophyte biomass increased from 1.4 kg m⁻² ± 0.7 (initial wet weight) to 18.6 kg m⁻² ± 4.0. Bearing in mind that the uptake of carbon is mostly via photosynthesis and not through the uptake of dissolved inorganic carbon, this represents an approximate incorporation of ≈1.3 kg m⁻² carbon (C), ≈15 g m⁻² nitrogen (N) and ≈8 g m⁻² phosphorus (P) in the aerial part (76% of total biomass), and of ≈0.5 kg m⁻² carbon (C), ≈3 g m⁻² nitrogen (N) and ≈2 g m⁻² phosphorus (P) in the roots (24% of total biomass). In the present study, the potential of the two extractive species for biomitigation of a super-intensive marine fish farm effluent was clearly demonstrated, contributing to the implementation of more sustainable practices. Copyright © 2017 Elsevier B.V. All rights reserved.
A tool for filtering information in complex systems
Tumminello, M.; Aste, T.; Di Matteo, T.; Mantegna, R. N.
2005-01-01
We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but containing a larger amount of information in their internal structure. In particular in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties.
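The planar (genus-0) filtered graph in the paper adds the strongest links that keep the graph planar, which requires a planarity test at each step. A simpler self-contained sketch is the minimum-spanning-tree backbone that the filtered graphs are shown to preserve: rank edges by correlation, strongest first, and keep an edge only if it joins two previously unconnected clusters (Kruskal's rule with union-find).

```python
def mst_from_correlations(n, corr_edges):
    """corr_edges: list of (correlation, i, j). Returns kept (i, j) edges."""
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    kept = []
    for c, i, j in sorted(corr_edges, reverse=True):  # strongest links first
        ri, rj = find(i), find(j)
        if ri != rj:           # keep the edge only if it joins two clusters
            parent[ri] = rj
            kept.append((i, j))
    return kept

edges = [(0.9, 0, 1), (0.8, 1, 2), (0.7, 0, 2), (0.6, 2, 3)]
tree = mst_from_correlations(4, edges)  # (0, 2) closes a loop, so it is filtered
```

The planar filtered graph relaxes the acyclicity constraint to a planarity constraint, so it keeps the same hierarchy while admitting the triangular loops and 4-cliques discussed above.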
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarkar, Avik; Sun, Xin; Sundaresan, Sankaran
2014-04-23
The accuracy of coarse-grid multiphase CFD simulations of fluidized beds may be improved via the inclusion of filtered constitutive models. In our previous study (Sarkar et al., Chem. Eng. Sci., 104, 399-412), we developed such a set of filtered drag relationships for beds with immersed arrays of cooling tubes. Verification of these filtered drag models is addressed in this work. Predictions from coarse-grid simulations with the sub-grid filtered corrections are compared against accurate, highly-resolved simulations of full-scale turbulent and bubbling fluidized beds. The filtered drag models offer a computationally efficient yet accurate alternative for obtaining macroscopic predictions, but the spatial resolution of meso-scale clustering heterogeneities is sacrificed.
Golder, Su; Wright, Kath; Loke, Yoon Kong
2018-06-01
Search filter development for adverse effects has tended to focus on retrieving studies of drug interventions. However, a different approach is required for surgical interventions. To develop and validate search filters for MEDLINE and Embase for the adverse effects of surgical interventions. Systematic reviews of surgical interventions where the primary focus was to evaluate adverse effect(s) were sought. The included studies within these reviews were divided randomly into a development set, an evaluation set and a validation set. Using word frequency analysis we constructed a sensitivity-maximising search strategy, and this was tested on the evaluation and validation sets. Three hundred and fifty-eight papers were included from 19 surgical intervention reviews. Three hundred and fifty-two papers were available on MEDLINE and 348 were available on Embase. Generic adverse effects search strategies in MEDLINE and Embase could achieve approximately 90% relative recall. Recall could be further improved with the addition of specific adverse effects terms to the search strategies. We have derived and validated a novel search filter that has reasonable performance for identifying adverse effects of surgical interventions in MEDLINE and Embase. However, we appreciate the limitations of our methods, and recommend further research on larger sample sizes and prospective systematic reviews. © 2018 The Authors Health Information and Libraries Journal published by John Wiley & Sons Ltd on behalf of Health Libraries Group.
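The word-frequency step of filter development can be sketched as: count terms in the development set of known-relevant records, take the most frequent as candidate search terms, then measure relative recall (fraction of relevant records retrieved) on a held-out set. The records below are hypothetical examples, not the review corpus.

```python
from collections import Counter

def candidate_terms(records, top_n=3):
    """Most frequent words across records (each record counted once per word)."""
    counts = Counter(w for r in records for w in set(r.lower().split()))
    return [w for w, _ in counts.most_common(top_n)]

def relative_recall(records, terms):
    """Fraction of relevant records matched by at least one search term."""
    hits = sum(any(t in r.lower() for t in terms) for r in records)
    return hits / len(records)

dev = ["Complications after hernia repair",
       "Postoperative complications of fusion",
       "Adverse events after hernia surgery"]
terms = candidate_terms(dev)
recall = relative_recall(["Complications of knee arthroplasty",
                          "Long-term adverse events"], terms)
```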
Feasibility Study to Adapt the Microflown Vector Sensor for Underwater Use
2012-12-01
properties were of less importance for this experiment. A calibrated ACO Pacific pressure microphone in combination with an ACO Pacific 1/2” preamplifier ... preamplifier was used for amplification and filtering. Pre-amplification was set to 10x, and a 1 kHz high-pass and a 100 kHz low-pass filter were used to reduce ... Kjær turntable system type 9640; Stanford RS preamplifier model SR560; pre-amplification: 10x; high-pass filter: 1 kHz; low-pass filter: 100 kHz
Optimum filter-based discrimination of neutrons and gamma rays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amiri, Moslem; Prenosil, Vaclav; Cvachovec, Frantisek
2015-07-01
An optimum filter-based method for discrimination of neutrons and gamma rays in a mixed radiation field is presented. Existing filter-based implementations of discriminators require sample pulse responses in advance of the experiment run to build the filter coefficients, which makes them less practical. Our novel technique creates the coefficients during the experiment and improves their quality gradually. Applied to several sets of mixed neutron and photon signals obtained through different digitizers using a stilbene scintillator, this approach is analyzed and its discrimination quality is measured. (authors)
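A sketch of the matched-filter discrimination idea with on-the-fly coefficient building: running average pulse shapes serve as the two class templates, a pulse is scored by its projection onto the template difference against the midpoint threshold, and the winning template is updated incrementally. The pulse samples are hypothetical; real discriminators work on digitized scintillator waveforms.

```python
def classify_and_update(pulse, n_tmpl, g_tmpl, counts):
    """Classify one pulse and refine the matching template in place."""
    diff = [a - b for a, b in zip(n_tmpl, g_tmpl)]
    score = sum(p * d for p, d in zip(pulse, diff))
    threshold = sum(0.5 * (a + b) * d for a, b, d in zip(n_tmpl, g_tmpl, diff))
    label = "neutron" if score > threshold else "gamma"
    tmpl, key = (n_tmpl, "n") if label == "neutron" else (g_tmpl, "g")
    counts[key] += 1
    for i, p in enumerate(pulse):            # incremental template mean
        tmpl[i] += (p - tmpl[i]) / counts[key]
    return label

# Seed templates from one known pulse each; neutron pulses have longer tails.
n_tmpl = [1.0, 0.6, 0.4]
g_tmpl = [1.0, 0.3, 0.1]
counts = {"n": 1, "g": 1}
first = classify_and_update([1.0, 0.55, 0.38], n_tmpl, g_tmpl, counts)
second = classify_and_update([1.0, 0.28, 0.12], n_tmpl, g_tmpl, counts)
```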
A Low-Cost Inkjet-Printed Glucose Test Strip System for Resource-Poor Settings.
Gainey Wilson, Kayla; Ovington, Patrick; Dean, Delphine
2015-06-12
The prevalence of diabetes is increasing in low-resource settings; however, accessing glucose monitoring is extremely difficult and expensive in these regions. Work is being done to address the multitude of issues surrounding diabetes care in low-resource settings, but an affordable glucose monitoring solution has yet to be presented. An inkjet-printed test strip is proposed here as a solution to this problem. The use of a standard inkjet printer is proposed as a manufacturing method for low-cost glucose monitoring test strips. The printer cartridges are filled with enzyme and dye solutions that are printed onto filter paper. The result is a colorimetric strip that turns a blue/green color in the presence of blood glucose. Using a light-based spectroscopic reading, the strips show a linear color change with R² = .99 using glucose standards and R² = .93 with bovine blood. Initial testing with bovine blood indicates that the strip accuracy is comparable to the International Organization for Standardization (ISO) standard 15197 for glucose testing in the 0-350 mg/dL range. However, further testing with human blood will be required to confirm this. A visible color gradient was observed with both the glucose standard and bovine blood experiments, which could be used as a visual indicator in cases where an electronic glucose meter was unavailable. These results indicate that an inkjet-printed filter paper test strip is a feasible method for monitoring blood glucose levels. The use of inkjet printers would allow for local manufacturing to increase supply in remote regions. This system has the potential to address the dire need for glucose monitoring in low-resource settings. © 2015 Diabetes Technology Society.
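The R² figures come from a linear calibration of measured color signal against glucose concentration. A sketch of that fit with hypothetical data points, not the study's measurements:

```python
def linear_fit(xs, ys):
    """Least-squares line y = slope*x + intercept, plus R² of the fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

glucose = [0, 50, 100, 200, 350]          # mg/dL standards
signal = [0.02, 0.11, 0.21, 0.40, 0.71]   # hypothetical absorbance readings
slope, intercept, r2 = linear_fit(glucose, signal)
```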
Application of preprocessing filtering on Decision Tree C4.5 and rough set theory
NASA Astrophysics Data System (ADS)
Chan, Joseph C. C.; Lin, Tsau Y.
2001-03-01
This paper compares two artificial intelligence methods, the Decision Tree C4.5 and Rough Set Theory, on stock market data. The Decision Tree C4.5 is reviewed alongside Rough Set Theory. An enhanced window application is developed to facilitate pre-processing filtering by introducing feature (attribute) transformations, which allow users to input formulas and create new attributes. The application also produces three varieties of data set, with delaying, averaging, and summation. The results demonstrate the improvement that pre-processing with feature (attribute) transformations brings to the Decision Tree C4.5. Moreover, the comparison between the Decision Tree C4.5 and Rough Set Theory is based on clarity, automation, accuracy, dimensionality, raw data, and speed, and is supported by the rule sets generated by both algorithms on three different sets of data.
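The three pre-processing transformations named above (delaying, averaging, summation) can be sketched as derived attributes computed over a sliding window of the raw series. The column names and window size are illustrative choices, not the paper's exact configuration.

```python
def make_features(series, window=3):
    """Build delayed / averaged / summed attributes for each target point."""
    rows = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        rows.append({
            "delayed": series[i - window],   # value `window` steps back
            "average": sum(past) / window,   # moving average over the window
            "summation": sum(past),          # windowed sum
            "target": series[i],             # value to predict
        })
    return rows

rows = make_features([10, 11, 12, 13, 14])
```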
Enhancement of Chemical Entity Identification in Text Using Semantic Similarity Validation
Grego, Tiago; Couto, Francisco M.
2013-01-01
With the amount of chemical data being produced and reported in the literature growing at a fast pace, it is increasingly important to efficiently retrieve this information. To tackle this issue text mining tools have been applied, but despite their good performance they still produce many errors that we believe can be filtered by using semantic similarity. Thus, this paper proposes a novel method that receives the results of chemical entity identification systems, such as Whatizit, and exploits the semantic relationships in ChEBI to measure the similarity between the entities found in the text. The method assigns a single validation score to each entity based on its similarities with the other entities also identified in the text. Then, by using a given threshold, the method selects a set of validated entities and a set of outlier entities. We evaluated our method using the results of two state-of-the-art chemical entity identification tools, three semantic similarity measures and two text window sizes. The method was able to increase precision without filtering a significant number of correctly identified entities. This means that the method can effectively discriminate the correctly identified chemical entities, while discarding a significant number of identification errors. For example, selecting a validation set with 75% of all identified entities, we were able to increase the precision by 28% for one of the chemical entity identification tools (Whatizit), maintaining in that subset 97% of the correctly identified entities. Our method can be directly used as an add-on by any state-of-the-art entity identification tool that provides mappings to a database, in order to improve their results. The proposed method is included in a freely accessible web tool at www.lasige.di.fc.ul.pt/webtools/ice/.
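The validation step can be sketched as: score each identified entity by its average semantic similarity to the other entities found in the same text, then split entities into validated and outlier sets with a threshold. The similarity table below is a hypothetical stand-in for a ChEBI-based measure.

```python
SIM = {
    ("glucose", "fructose"): 0.8, ("glucose", "sucrose"): 0.7,
    ("fructose", "sucrose"): 0.75, ("glucose", "lead"): 0.05,
    ("fructose", "lead"): 0.04, ("sucrose", "lead"): 0.06,
}

def sim(a, b):
    return 1.0 if a == b else SIM.get((a, b), SIM.get((b, a), 0.0))

def validate(entities, threshold=0.3):
    """Average pairwise similarity as a per-entity validation score."""
    scores = {e: sum(sim(e, o) for o in entities if o != e) / (len(entities) - 1)
              for e in entities}
    validated = {e for e, s in scores.items() if s >= threshold}
    outliers = set(entities) - validated
    return validated, outliers

validated, outliers = validate(["glucose", "fructose", "sucrose", "lead"])
# "lead" scores far below the sugars and is flagged as a likely error.
```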
SPONGY (SPam ONtoloGY): Email Classification Using Two-Level Dynamic Ontology
Youn, Seongwook
2014-01-01
Email is one of the most common communication methods between people on the Internet. However, the increase of email misuse/abuse has resulted in an increasing volume of spam emails over recent years. An experimental system has been designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that the proposed ontology-based approach indeed improves spam filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first-level global ontology filter and a second-level user-customized ontology filter. The global ontology filter showed about 91% of spam filtered, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of an ontology, which is user-customized, scalable, and modularized, so that it can be embedded in many other systems for better performance. PMID:25254240
Inferior vena cava filter retrievals, standard and novel techniques.
Kuyumcu, Gokhan; Walker, T Gregory
2016-12-01
The placement of an inferior vena cava (IVC) filter is a well-established management strategy for patients with venous thromboembolism (VTE) disease in whom anticoagulant therapy is either contraindicated or has failed. IVC filters may also be placed for VTE prophylaxis in certain circumstances. There has been tremendous growth in the placement of retrievable IVC filters in the past decade, yet the majority of the devices are not removed. Unretrieved IVC filters have several well-known complications that increase in frequency as the filter dwell time increases. These complications include caval wall penetration, filter fracture or migration, caval thrombosis and an increased risk of lower extremity deep vein thrombosis (DVT). Difficulty is sometimes encountered when attempting to retrieve indwelling filters, mainly because of either abnormal filter positioning or endothelialization of filter components that are in contact with the IVC wall, thereby causing the filter to become embedded. The length of time that a filter remains indwelling also affects the retrieval rate, as increased dwell times are associated with more difficult retrievals. Several techniques for difficult retrievals have been described in the medical literature, ranging from modifications of standard retrieval techniques to much more complex interventions. Complications related to complex retrievals are more common than those associated with standard retrieval techniques. The risks of complex filter retrievals should be weighed against those of the life-long anticoagulation associated with an unretrieved filter, and should be individualized. This article summarizes current techniques for IVC filter retrieval from a clinical point of view, with an emphasis on advanced retrieval techniques. PMID:28123984
Orbit Determination Using Vinti’s Solution
2016-09-15
[Fragmentary abstract] ...simplicity, stability, and speed. Kalman filters, on the other hand, would be best suited for sequential estimation of the stochastic or random components of a ... can be likened to how an unscented Kalman filter samples a system's nonlinearities directly, avoiding linearizing the dynamics in the partials matrices.
A class of optimum digital phase locked loops for the DSN advanced receiver
NASA Technical Reports Server (NTRS)
Hurd, W. J.; Kumar, R.
1985-01-01
A class of optimum digital filters for digital phase locked loop of the deep space network advanced receiver is discussed. The filter minimizes a weighted combination of the variance of the random component of the phase error and the sum square of the deterministic dynamic component of phase error at the output of the numerically controlled oscillator (NCO). By varying the weighting coefficient over a suitable range of values, a wide set of filters are obtained such that, for any specified value of the equivalent loop-noise bandwidth, there corresponds a unique filter in this class. This filter thus has the property of having the best transient response over all possible filters of the same bandwidth and type. The optimum filters are also evaluated in terms of their gain margin for stability and their steady-state error performance.
Conservation of Mass and Preservation of Positivity with Ensemble-Type Kalman Filter Algorithms
NASA Technical Reports Server (NTRS)
Janjic, Tijana; Mclaughlin, Dennis; Cohn, Stephen E.; Verlaan, Martin
2014-01-01
This paper considers the incorporation of constraints to enforce physically based conservation laws in the ensemble Kalman filter. In particular, constraints are used to ensure that the ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. In certain situations, filtering algorithms such as the ensemble Kalman filter (EnKF) and ensemble transform Kalman filter (ETKF) yield updated ensembles that conserve mass but contain negative values, even though the actual states must be nonnegative. In such situations, if negative values are set to zero, or a log transform is introduced, the total mass will not be conserved. In this study, mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate non-negativity constraints. Simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. In two examples, an update that includes a non-negativity constraint is able to properly describe the transport of a sharp feature (e.g., a triangle or cone). A number of implementation questions still need to be addressed, particularly the need to develop a computationally efficient quadratic programming update for large ensembles.
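For the simplest special case (an identity weighting in the quadratic objective), the constrained update "remain nonnegative and conserve total mass M" reduces to the Euclidean projection of the unconstrained analysis state onto the scaled simplex {x >= 0, sum(x) = M}, which has a closed form. This is only a sketch of one quadratic-programming step, not the paper's full per-member filter update:

```python
import numpy as np

# Projection of a state vector onto {x >= 0, sum(x) = mass}: the exact
# solution of min ||x - v||^2 under those constraints (identity-weighted
# QP). Illustrative of the constrained update; the paper's filter solves
# more general QPs per ensemble member.

def project_mass_nonneg(v, mass):
    u = np.sort(v)[::-1]                          # sort descending
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - mass) / j > 0)[0][-1]
    tau = (css[rho] - mass) / (rho + 1.0)
    return np.maximum(v - tau, 0.0)

state = np.array([0.5, -0.2, 0.7])                # analysis with a negative entry
x = project_mass_nonneg(state, mass=1.0)
print(x)          # ≈ [0.4, 0.0, 0.6]: nonnegative, total mass still 1.0
```

Clipping the negative entry to zero alone would inflate the total mass to 1.2; the projection redistributes the excess so mass is conserved.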
The survival of micro-organisms in space. Further rocket and balloon-borne exposure experiments.
Hotchin, J; Lorenz, P; Markusen, A; Hemenway, C
1967-01-01
This report describes the results of survival studies of terrestrial micro-organisms exposed directly to the space environment on two balloons and in two rocket flights. The work is part of a program to develop techniques for the collection of micro-organisms in the size range of micrometeorite particles in space or non-terrestrial atmospheres, and their return to earth in a viable state for further study. Previous survival studies were reported (J. Hotchin, P. Lorenz and C. Hemenway, Nature 206 (1965) 442) in which a few relatively large area samples of micro-organisms were exposed on Millipore filters cemented to aluminum plates. In the present series of experiments, newly developed techniques have achieved a 25-fold miniaturization, with a corresponding increase in the number of experiments performed. This has enabled a statistical evaluation of the results to be made. A total of 756 separate exposure units (each approximately 5 x 5 mm in size) were flown in four experiments, and the organisms used were coliphage T1, Penicillium roqueforti (Thom) mold spores, poliovirus type I (Pfizer attenuated Sabin vaccine strain), and Bacillus subtilis spores. The organisms were deposited either by spraying directly upon the vinyl-coated metal units, or by droplet seeding into shallow depressions in the millipore filter membrane-coated units. Groups of units were prepared comprising fully exposed, inverted (screened by 2 mm of Al), and filter-protected organisms. All of these were included in the flight set, the back-up set, and a laboratory control set. The altitude of the exposures varied from 35 km in the balloon experiments to 150 km in the rocket experiments. Times of exposure at altitude were approximately 6 hours for the balloon flights and about 3 minutes for the rocket experiments.
Optimization of the reconstruction parameters in [123I]FP-CIT SPECT
NASA Astrophysics Data System (ADS)
Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec
2018-04-01
The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data-sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). The reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from magnetic resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [‑0.5, +0.5] in 87% and 92% of the cases for caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.
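The quantification and standardization steps can be sketched as follows. The SUR formula used here (target ROI counts relative to a nondisplaceable reference region) is a common definition assumed for illustration, since the abstract does not state it, and the cohort values are made up:

```python
import numpy as np

# Sketch: compute SUR from ROI means (assumed common definition), then map
# reconstructed values onto the true scale with a linear regression fitted
# over the whole dataset, as described in the abstract. All numbers are
# illustrative.

def sur(roi_mean, ref_mean):
    """Specific uptake ratio relative to a reference region."""
    return (roi_mean - ref_mean) / ref_mean

# hypothetical reconstructed vs. true SUR values for a small cohort
true_sur = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
recon_sur = 0.7 * true_sur + 0.2            # reconstruction biases the scale

slope, intercept = np.polyfit(true_sur, recon_sur, 1)
standardized = (recon_sur - intercept) / slope
print(np.max(np.abs(standardized - true_sur)))  # ~0 after standardization
```

With a purely linear bias the regression recovers the true scale exactly; in practice the residual error (the [-0.5, +0.5] ranges reported above) reflects non-linear and subject-specific effects.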
NASA Astrophysics Data System (ADS)
Kim, K.-h.; Oh, T.-s.; Park, K.-r.; Lee, J. H.; Ghim, Y.-c.
2017-11-01
One factor determining the reliability of measurements of electron temperature using a Thomson scattering (TS) system is the transmittance of the optical bandpass filters in the polychromators. We investigate the system performance as a function of electron temperature to determine the reliable range of measurements for a given set of optical bandpass filters. We show that such reliability, i.e., both bias and random errors, can be obtained by building a forward model of the KSTAR TS system to generate synthetic TS data with prescribed electron temperature and density profiles. The prescribed profiles are compared with the estimated ones to quantify both bias and random errors.
Caesar, Lindsay K; Kvalheim, Olav M; Cech, Nadja B
2018-08-27
Mass spectral data sets often contain experimental artefacts, and data filtering prior to statistical analysis is crucial to extract reliable information. This is particularly true in untargeted metabolomics analyses, where the analyte(s) of interest are not known a priori. It is often assumed that chemical interferents (i.e. solvent contaminants such as plasticizers) are consistent across samples, and can be removed by background subtraction from blank injections. On the contrary, it is shown here that chemical contaminants may vary in abundance across each injection, potentially leading to their misidentification as relevant sample components. With this metabolomics study, we demonstrate the effectiveness of hierarchical cluster analysis (HCA) of replicate injections (technical replicates) as a methodology to identify chemical interferents and reduce their contaminating contribution to metabolomics models. Pools of metabolites with varying complexity were prepared from the botanical Angelica keiskei Koidzumi and spiked with known metabolites. Each set of pools was analyzed in triplicate and at multiple concentrations using ultraperformance liquid chromatography coupled to mass spectrometry (UPLC-MS). Before filtering, HCA failed to cluster replicates in the data sets. To identify contaminant peaks, we developed a filtering process that evaluated the relative peak area variance of each variable within triplicate injections. These interferent peaks were found across all samples, but did not show consistent peak area from injection to injection, even when evaluating the same chemical sample. This filtering process identified 128 ions that appear to originate from the UPLC-MS system. Data sets collected for a high number of pools with comparatively simple chemical composition were highly influenced by these chemical interferents, as were samples that were analyzed at a low concentration. 
When chemical interferent masses were removed, technical replicates clustered in all data sets. This work highlights the importance of technical replication in mass spectrometry-based studies, and presents a new application of HCA as a tool for evaluating the effectiveness of data filtering prior to statistical analysis. Copyright © 2018 Elsevier B.V. All rights reserved.
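The replicate-variance filter described above can be sketched as follows: features whose peak areas vary strongly across triplicate injections of the same sample are flagged as likely instrument contaminants. The RSD threshold and peak-area matrix are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

# Flag features with high relative peak-area variance across technical
# replicates. Stable features (low RSD) are kept; erratic ones are treated
# as candidate chemical interferents from the UPLC-MS system.

def flag_interferents(areas, rsd_threshold=0.3):
    """areas: (n_replicates, n_features) peak-area matrix for one sample."""
    mean = areas.mean(axis=0)
    rsd = areas.std(axis=0, ddof=1) / np.where(mean > 0, mean, 1.0)
    return rsd > rsd_threshold          # boolean mask of suspect features

# Three technical replicates, four features; feature index 2 is erratic.
areas = np.array([
    [100.0, 50.0, 10.0, 80.0],
    [102.0, 49.0, 90.0, 79.0],
    [ 98.0, 51.0, 40.0, 81.0],
])
mask = flag_interferents(areas)
print(mask)     # only the erratic feature is flagged
```

Removing the flagged columns before clustering is what lets the technical replicates group together in HCA, as reported above.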
NASA Technical Reports Server (NTRS)
Takacs, Lawrence L.; Sawyer, William; Suarez, Max J. (Editor); Fox-Rabinowitz, Michael S.
1999-01-01
This report documents the techniques used to filter quantities on a stretched grid general circulation model. Standard high-latitude filtering techniques (e.g., using an FFT (Fast Fourier Transformations) to decompose and filter unstable harmonics at selected latitudes) applied on a stretched grid are shown to produce significant distortions of the prognostic state when used to control instabilities near the pole. A new filtering technique is developed which accurately accounts for the non-uniform grid by computing the eigenvectors and eigenfrequencies associated with the stretching. A filter function, constructed to selectively damp those modes whose associated eigenfrequencies exceed some critical value, is used to construct a set of grid-spaced weights which are shown to effectively filter without distortion. Both offline and GCM (General Circulation Model) experiments are shown using the new filtering technique. Finally, a brief examination is also made on the impact of applying the Shapiro filter on the stretched grid.
MR image reconstruction via guided filter.
Huang, Heyan; Yang, Hang; Wang, Kang
2018-04-01
Magnetic resonance imaging (MRI) reconstruction from the smallest possible set of Fourier samples has been a difficult problem in the medical imaging field. In this paper, we present a new approach based on a guided filter for an efficient MRI recovery algorithm. The guided filter is an edge-preserving smoothing operator and behaves better near edges than the bilateral filter. Our reconstruction method consists of two steps. First, we propose two cost functions that can be computed efficiently, yielding two different images. Second, the guided filter is applied to these two images for efficient edge-preserving filtering, with one image serving as the guidance image and the other as the input image to be filtered. In our reconstruction algorithm, we can recover more detail by introducing the guided filter. We compare our reconstruction algorithm with some competitive MRI reconstruction techniques in terms of PSNR and visual quality. Simulation results are given to show the performance of our new method.
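The second step above uses the standard guided filter (local linear model q = a·I + b per window). A minimal sketch of that filter follows; the paper's first-step cost functions are omitted, and the window radius and regularizer eps are illustrative choices:

```python
import numpy as np

# Minimal 2D guided filter: for guidance image I and input image p, fit a
# local linear model q = a*I + b in each window, then average the
# coefficients. Small eps preserves edges of I in the output.

def box(img, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded (simple, O(n*k^2))."""
    padded = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    k = 2 * r + 1
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def guided_filter(I, p, r=2, eps=1e-3):
    mean_I, mean_p = box(I, r), box(p, r)
    cov_Ip = box(I * p, r) - mean_I * mean_p
    var_I = box(I * I, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)            # local linear coefficient
    b = mean_p - a * mean_I
    return box(a, r) * I + box(b, r)      # q = mean(a)*I + mean(b)

flat = np.full((8, 8), 5.0)
out = guided_filter(flat, flat)           # constant guidance and input
print(out[0, 0])                          # 5.0: flat regions pass through
```

In the reconstruction setting, one intermediate image plays the role of I and the other of p, so edges present in the guidance survive the smoothing.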
Experimental comparison of point-of-use filters for drinking water ultrafiltration.
Totaro, M; Valentini, P; Casini, B; Miccoli, M; Costa, A L; Baggiani, A
2017-06-01
Waterborne pathogens such as Pseudomonas spp. and Legionella spp. may persist in hospital water networks despite chemical disinfection. Point-of-use filtration represents a physical control measure that can be applied in high-risk areas to contain the exposure to such pathogens. New technologies have enabled an extension of filters' lifetimes and have made available faucet hollow-fibre filters for water ultrafiltration. To compare point-of-use filters applied to cold water within their period of validity. Faucet hollow-fibre filters (filter A), shower hollow-fibre filters (filter B) and faucet membrane filters (filter C) were contaminated in two different sets of tests with standard bacterial strains (Pseudomonas aeruginosa DSM 939 and Brevundimonas diminuta ATCC 19146) and installed at points-of-use. Every day, from each faucet, 100 L of water was flushed. Before and after flushing, 250 mL of water was collected and analysed for microbiology. There was a high capacity of microbial retention from filter C; filter B released only low Brevundimonas spp. counts; filter A showed poor retention of both micro-organisms. Hollow-fibre filters did not show good micro-organism retention. All point-of-use filters require an appropriate maintenance of structural parameters to ensure their efficiency. Copyright © 2016 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Peng; Zong, Yichen; Zhang, Yingying; Yang, Mengmeng; Zhang, Rufan; Li, Shuiqing; Wei, Fei
2013-03-01
We fabricated depth-type hierarchical CNT/quartz fiber (QF) filters through in situ growth of CNTs upon quartz fiber (QF) filters using a floating catalyst chemical vapor deposition (CVD) method. The filter specific area of the CNT/QF filters is more than 12 times higher than that of the pristine QF filters. As a result, the penetration of sub-micron aerosols for CNT/QF filters is reduced by two orders of magnitude, which reaches the standard of high-efficiency particulate air (HEPA) filters. Simultaneously, due to the fluffy brush-like hierarchical structure of CNTs on QFs, the pore size of the hybrid filters only has a small increment. The pressure drop across the CNT/QF filters only increases about 50% with respect to that of the pristine QF filters, leading to an obviously increased quality factor of the CNT/QF filters. Scanning electron microscope images reveal that CNTs are very efficient in capturing sub-micron aerosols. Moreover, the CNT/QF filters show high water repellency, implying their superiority for applications in humid conditions. Electronic supplementary information (ESI) available: schematic of the synthesis process of the CNT/QF filter; typical size distribution of atomized polydisperse NaCl aerosols used for air filtration testing; images of a QF filter and a CNT/QF filter; SEM image of a CNT/QF filter after 5 minutes of sonication in ethanol; calculation of porosity and filter specific area. See DOI: 10.1039/c3nr34325a
Effect of operating temperature on styrene mass transfer characteristics in a biotrickling filter.
Parnian, Parham; Zamir, Seyed Morteza; Shojaosadati, Seyed Abbas
2017-05-01
To study the effect of operating temperature on styrene mass transfer from gas to liquid phase in biotrickling filters (BTFs), the overall mass transfer coefficient (K_La) was calculated by fitting test data to a general mass balance model under abiotic conditions. Styrene was used as the volatile organic compound and the BTF was packed with a mixture of pall rings and pumice. Operating temperature was set at 30°C and 50°C for mesophilic and thermophilic conditions, respectively. K_La values increased from 54 to 70 h^-1 at 30°C and from 60 to 90 h^-1 at 50°C, depending on the countercurrent gas to liquid flow ratio, which varied in the range of 7.5-32. Evaluation of styrene mass transfer capacity (MTC) showed that liquid-phase mass transfer resistance decreased as the flow ratio increased at constant temperature. MTC also decreased with an increase in operating temperature. Both the gas-liquid partition coefficient and K_La increased with increasing temperature; however, the effect on the gas-liquid partition coefficient was more significant and served to increase mass transfer limitations. Thermophilic biofiltration thus increases mass transfer limitations on the one hand, but on the other hand may enhance the biodegradation rate in favor of improving BTF performance.
Removal of particulate matter emitted from a subway tunnel using magnetic filters.
Son, Youn-Suk; Dinh, Trieu-Vuong; Chung, Sang-Gwi; Lee, Jai-Hyo; Kim, Jo-Chun
2014-01-01
We removed particulate matter (PM) emitted from a subway tunnel using magnetic filters. A magnetic filter system was installed on the top of a ventilation opening. Magnetic field density was increased by increasing the number of permanent magnet layers to determine PM removal characteristics. Moreover, the fan's frequency was adjusted from 30 to 60 Hz to investigate the effect of wind velocity on PM removal efficiency. As a result, PM removal efficiency increased as the number of magnetic filters or fan frequency increased. We obtained maximum removal efficiency of PM10 (52%), PM2.5 (46%), and PM1 (38%) at a 60 Hz fan frequency using double magnetic filters. We also found that the stability of the PM removal efficiency by the double filter (RSD, 3.2-5.8%) was higher than that by a single filter (10.9-24.5%) at all fan operating conditions.
Application of recursive approaches to differential orbit correction of near Earth asteroids
NASA Astrophysics Data System (ADS)
Dmitriev, Vasily; Lupovka, Valery; Gritsevich, Maria
2016-10-01
Comparison of three approaches to the differential orbit correction of celestial bodies was performed: batch least squares fitting, Kalman filtering, and a recursive least squares filter. The first two techniques are well known and widely used (Montenbruck & Gill, 2000). Most attention is paid to the algorithm and the details of the program implementation of the recursive least squares filter. The filter's algorithm was derived from the recursive least squares technique widely used in data processing applications (Simon, 2006). Using the recursive least squares filter makes it possible to process a new set of observational data without reprocessing data that has been processed before. A specific feature of this approach is that the number of observations in a data set may be variable. This makes the recursive least squares filter more flexible compared to batch least squares (which processes the complete set of observations in each iteration) and Kalman filtering (which assumes updating the state vector at each epoch with measurements). The advantages of the proposed approach are demonstrated by processing real astrometric observations of near Earth asteroids. The case of 2008 TC3 was studied. 2008 TC3 was discovered just before its impact with Earth, and there are many closely spaced observations of 2008 TC3 in the interval between discovery and impact, which creates favorable conditions for the use of recursive approaches. Each of the approaches achieves very similar precision in the case of 2008 TC3; at the same time, the recursive least squares approaches have much higher performance. Thus, this approach is more favorable for orbit fitting of a celestial body that is detected shortly before a collision or close approach to the Earth. This work was carried out at MIIGAiK and supported by the Russian Science Foundation, Project no. 14-22-00197.
References:
O. Montenbruck and E. Gill, "Satellite Orbits, Models, Methods and Applications," Springer-Verlag, 2000, pp. 1-369.
D. Simon, "Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches," 1st edition. Hoboken, N.J.: Wiley-Interscience, 2006.
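The recursive least squares update at the heart of the approach above can be sketched as follows: each new observation refines the estimate without reprocessing earlier data. The linear measurement model is an illustrative stand-in for the orbit-determination partials, not the paper's dynamics:

```python
import numpy as np

# One standard RLS step per scalar measurement: gain from the current
# covariance, innovation-weighted correction of the estimate, and a
# rank-one covariance downdate. No past data is revisited.

def rls_update(theta, P, x, y):
    Px = P @ x
    k = Px / (1.0 + x @ Px)              # gain vector
    theta = theta + k * (y - x @ theta)  # correct estimate by the innovation
    P = P - np.outer(k, Px)              # downdate covariance
    return theta, P

rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0, 0.5])  # hypothetical parameters to recover
theta = np.zeros(3)
P = 1e6 * np.eye(3)                      # diffuse prior
for _ in range(200):
    x = rng.standard_normal(3)
    y = x @ true_theta                   # noiseless measurements
    theta, P = rls_update(theta, P, x, y)
print(theta)    # converges to ≈ [2.0, -1.0, 0.5]
```

Because observations arrive one at a time, a newly received batch of astrometry of variable size can be folded in at the same cost per measurement, which is the flexibility the abstract highlights.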
Complications of inferior vena cava filters.
Sella, David M; Oldenburg, W Andrew
2013-03-01
With the introduction of retrievable inferior vena cava filters, the number being placed for protection from pulmonary embolism is steadily increasing. Despite this increased usage, the true incidence of complications associated with inferior vena cava filters is unknown. This article reviews the known complications associated with these filters and suggests recommendations and techniques for inferior vena cava filter removal. Copyright © 2013. Published by Elsevier Inc.
2012-01-01
Background Malaria remains a major cause of morbidity and mortality worldwide. Flow cytometry-based assays that take advantage of fluorescent protein (FP)-expressing malaria parasites have proven to be valuable tools for quantification and sorting of specific subpopulations of parasite-infected red blood cells. However, identification of rare subpopulations of parasites using green fluorescent protein (GFP) labelling is complicated by autofluorescence (AF) of red blood cells and low signal from transgenic parasites. It has been suggested that cell sorting yield could be improved by using filters that precisely match the emission spectrum of GFP. Methods Detection of transgenic Plasmodium falciparum parasites expressing either tdTomato or GFP was performed using a flow cytometer with interchangeable optical filters. Parasitaemia was evaluated using different optical filters and, after optimization of the optics, the GFP-expressing parasites were sorted and analysed by microscopy after cytospin preparation and by imaging cytometry. Results A new approach to evaluate filter performance in flow cytometry using a two-dimensional dot blot was developed. By selecting optical filters with a narrow bandpass (BP) and a maximum position of filter emission close to the GFP emission maximum in the FL1 channel (510/20, 512/20 and 517/20; dichroics 502LP and 466LP), AF was markedly decreased and signal-to-background improved dramatically. Sorting of GFP-expressing parasite populations in infected red blood cells at 90 or 95% purity with these filters resulted in 50-150% increased yield when compared to the standard filter set-up. The purity of the sorted population was confirmed using imaging cytometry and microscopy of cytospin preparations of sorted red blood cells infected with transgenic malaria parasites.
Discussion Filter optimization is particularly important for applications where the FP signal and percentage of positive events are relatively low, such as analysis of parasite-infected samples with the intention of gene-expression profiling and analysis. The approach outlined here results in substantially improved yield of GFP-expressing parasites, and requires decreased sorting time in comparison to standard methods. It is anticipated that this protocol will be useful for a wide range of applications involving rare events. PMID:22950515
Gas filter correlation radiometry: Report of panel
NASA Technical Reports Server (NTRS)
Reichle, Henry G., Jr.; Barringer, A. A.; Nichols, Ralph; Russell, James M., III
1987-01-01
To measure the concentration of a gas in the troposphere, the gas filter radiometer correlates the pattern of the spectral lines of a sample of gas contained within the instrument with the pattern of the spectral lines in the upwelling radiation. A schematic diagram of a generalized gas filter radiometer is shown. Three instruments (the Gas Filter Radiometer, GFR; the Halogen Occultation Experiment, HALOE; and the Gas Filter Correlation Spectrometer, GASCOFIL) that have application to remotely measuring tropospheric constituents are described. A set of preliminary calculations to determine the feasibility of performing a multiple-layer, tropospheric carbon monoxide measurement experiment was performed. It can be seen that a three-layer measurement in the troposphere is possible.
Gas filter correlation radiometry: Report of panel
NASA Astrophysics Data System (ADS)
Reichle, Henry G., Jr.; Barringer, A. A.; Nichols, Ralph; Russell, James M., III
1987-02-01
To measure the concentration of a gas in the troposphere, the gas filter radiometer correlates the pattern of the spectral lines of a sample of gas contained within the instrument with the pattern of the spectral lines in the upwelling radiation. A schematic diagram of a generalized gas filter radiometer is shown. Three instruments (the Gas Filter Radiometer, GFR; the Halogen Occultation Experiment, HALOE; and the Gas Filter Correlation Spectrometer, GASCOFIL) that have application to remotely measuring tropospheric constituents are described. A set of preliminary calculations to determine the feasibility of performing a multiple-layer, tropospheric carbon monoxide measurement experiment was performed. It can be seen that a three-layer measurement in the troposphere is possible.
Unambiguous quantum-state filtering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeoka, Masahiro; Sasaki, Masahide; CREST, Japan Science and Technology Corporation, Tokyo,
2003-07-01
In this paper, we consider a generalized measurement where one particular quantum signal is unambiguously extracted from a set of noncommutative quantum signals and the other signals are filtered out. Simple expressions for the maximum detection probability and its positive operator valued measure are derived. We apply such unambiguous quantum state filtering to evaluation of the sensing of decoherence channels. The bounds of the precision limit for a given quantum state of probes and possible device implementations are discussed.
Nonlinear filter based decision feedback equalizer for optical communication systems.
Han, Xiaoqi; Cheng, Chi-Hao
2014-04-07
Nonlinear impairments in optical communication systems have become a major concern of optical engineers. In this paper, we demonstrate that utilizing a nonlinear filter based Decision Feedback Equalizer (DFE) with error detection capability can deliver better performance than the conventional linear filter based DFE. The proposed algorithms are tested in simulation using a coherent 100 Gb/sec 16-QAM optical communication system in a legacy optical network setting.
Improved hybrid information filtering based on limited time window
NASA Astrophysics Data System (ADS)
Song, Wen-Jun; Guo, Qiang; Liu, Jian-Guo
2014-12-01
Adopting users' entire records of collected information, the hybrid information filtering of heat conduction and mass diffusion (HHM) (Zhou et al., 2010) was successfully proposed to solve the apparent diversity-accuracy dilemma. Since recent behaviors are more effective at capturing users' potential interests, we present an improved hybrid information filtering that adopts only partial recent information. We expand the time window to generate a series of training sets, each of which is treated as known information to predict the future links proven by the testing set. The experimental results on the benchmark dataset Netflix indicate that by using only approximately 31% of the most recent rating records, the accuracy could be improved by an average of 4.22% and the diversity could be improved by 13.74%. In addition, the performance on the dataset MovieLens could be preserved by considering approximately 60% of recent records. Furthermore, we find that the improved algorithm is effective in solving the cold-start problem. This work could improve information filtering performance and shorten the computational time.
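The limited-time-window scheme described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation; the function name and the linear window schedule are assumptions:

```python
def expanding_windows(ratings, n_windows):
    """Generate a series of training sets from the most recent share of
    time-stamped ratings, expanding toward the full record.

    ratings: list of (user, item, timestamp) tuples.
    """
    ordered = sorted(ratings, key=lambda r: r[2])
    total = len(ordered)
    windows = []
    for k in range(1, n_windows + 1):
        # keep the most recent k/n_windows share of the records
        start = total - (total * k) // n_windows
        windows.append(ordered[start:])
    return windows
```

Each window would then serve as the known information from which future links in the held-out testing set are predicted.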
Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes.
Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M
2018-04-12
Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used to only guide content-dependent filter selection where the set of spectral reflectances to be recovered are known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array yielding an efficient demosaicing order resulting in accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits a power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods.
Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes
Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M.
2018-01-01
Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used to only guide content-dependent filter selection where the set of spectral reflectances to be recovered are known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array yielding an efficient demosaicing order resulting in accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits a power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods. PMID:29649114
NASA Technical Reports Server (NTRS)
Womble, M. E.; Potter, J. E.
1975-01-01
A prefiltering version of the Kalman filter is derived for both discrete and continuous measurements. The derivation consists of determining a single discrete measurement that is equivalent to either a time segment of continuous measurements or a set of discrete measurements. This prefiltering version of the Kalman filter easily handles numerical problems associated with rapid transients and ill-conditioned Riccati matrices. In addition, the derived technique for extrapolating the Riccati matrix from one time to the next constitutes a new set of integration formulas which alleviate ill-conditioning problems associated with continuous Riccati equations. Furthermore, since a time segment of continuous measurements is converted into a single discrete measurement, Potter's square root formulas can be used to update the state estimate and its error covariance matrix. Therefore, if having the state estimate and its error covariance matrix at discrete times is acceptable, the prefilter extends square root filtering, with all its advantages, to continuous measurement problems.
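The core idea, that a batch of discrete measurements collapses into one equivalent measurement, can be checked in the scalar case. This is a toy sketch with observation model H = 1 and equal noise variances, not the paper's general derivation:

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update (observation model H = 1)."""
    k = p / (p + r)              # Kalman gain
    return x + k * (z - x), (1.0 - k) * p

def combine_measurements(zs, r):
    """Collapse equal-variance scalar measurements into one equivalent
    measurement: their mean, carrying variance r / m."""
    m = len(zs)
    return sum(zs) / m, r / m

# Sequential processing of each raw measurement ...
x, p = 0.0, 10.0
for z in [1.2, 0.8, 1.1]:
    x, p = kalman_update(x, p, z, 2.0)

# ... agrees with a single update using the prefiltered measurement
z_eq, r_eq = combine_measurements([1.2, 0.8, 1.1], 2.0)
x2, p2 = kalman_update(0.0, 10.0, z_eq, r_eq)
# both paths yield x = 0.96875, p = 0.625
```

The equivalence follows from the information form of the update: both paths add m/r to the information and sum(z)/r to the information-weighted mean.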
Automated railroad reconstruction from remote sensing image based on texture filter
NASA Astrophysics Data System (ADS)
Xiao, Jie; Lu, Kaixia
2018-03-01
Techniques of remote sensing have improved incredibly in recent years, and very accurate results and high resolution images can be acquired. Such data offer possible ways to reconstruct railroads. In this paper, an automated railroad reconstruction method from remote sensing images based on the Gabor filter is proposed. The method is divided into three steps. First, the edge-oriented railroad characteristics (such as line features) in a remote sensing image are detected using the Gabor filter. Second, two response images with filtering orientations perpendicular to each other are fused to suppress noise and acquire a long stripe smooth region of railroads. Third, a set of smooth regions is extracted by computing a global threshold for the fused image using Otsu's method and then converting it to a binary image with that threshold. This workflow was tested on a set of remote sensing images and was found to deliver very accurate results in a quick and highly automated manner.
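The thresholding step named above, Otsu's method, can be sketched in pure Python; a minimal histogram-based version, independent of any image library:

```python
def otsu_threshold(pixels, levels=256):
    """Global threshold maximizing the between-class variance (Otsu)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]              # background pixel count up to t
        if w_bg == 0:
            continue
        w_fg = total - w_bg          # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg                     # background mean
        m_fg = (sum_all - sum_bg) / w_fg         # foreground mean
        var_between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels above the returned level would be marked as candidate railroad regions in the fused response image.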
Gated Sensor Fusion: A way to Improve the Precision of Ambulatory Human Body Motion Estimation.
Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, Gonzalo
2014-01-01
Human body motion is usually variable in terms of intensity and, therefore, any Inertial Measurement Unit attached to a subject will measure both low and high angular rate and accelerations. This can be a problem for the accuracy of orientation estimation algorithms based on adaptive filters such as the Kalman filter, since both the variances of the process noise and the measurement noise are set at the beginning of the algorithm and remain constant during its execution. Setting fixed noise parameters burdens the adaptation capability of the filter if the intensity of the motion changes rapidly. In this work we present a conjoint novel algorithm which uses a motion intensity detector to dynamically vary the noise statistical parameters of different approaches of the Kalman filter. Results show that the precision of the estimated orientation in terms of the RMSE can be improved up to 29% with respect to the standard fixed-parameters approaches.
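A gated noise switch of the kind described, varying the Kalman measurement-noise variance with detected motion intensity, might look like this. This is an illustrative sketch; the detector, threshold, and variance values are assumptions, not the paper's algorithm:

```python
def motion_intensity(gyro_window):
    """Crude intensity detector: variance of recent angular-rate samples."""
    m = sum(gyro_window) / len(gyro_window)
    return sum((g - m) ** 2 for g in gyro_window) / len(gyro_window)

def gated_measurement_noise(gyro_window, r_static=0.01, r_dynamic=1.0,
                            threshold=0.5):
    """Switch the Kalman measurement-noise variance by motion intensity:
    trust accelerometer-derived angles less while the sensor moves."""
    return r_dynamic if motion_intensity(gyro_window) > threshold else r_static
```

During quasi-static periods the filter then weights the measurements heavily, while during intense motion it falls back on the process model.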
Di Bartolomeo, Stefano; Valent, Francesca; Sanson, Gianfranco; Nardi, Giuseppe; Gambale, Giorgio; Barbone, Fabio
2008-09-01
Quality indicators are widely needed for external assessment and comparison of trauma care. It is common to extend the use of the American College of Surgeons Committee on Trauma (ACSCOT) audit filters to this scope. This mandates that their actual link with outcome be demonstrated. Several studies attempted to do so, but with inconsistent risk-adjustment, conflicting results and never using long-term disability as outcome measure, despite its recognised importance. We tried to overcome these limitations. Risk-adjusted analysis of the association of filters 1, 3, 10 and 13 with 30-day mortality and 6-month disability measured with the EQ5D scale. Multivariate logistic and linear regression models were used respectively. The data came from a National Italian Trauma Registry comprising 838 patients with major trauma. Three (1, 3 and 10) of the filters analysed did not show any significant association with either outcome. Filter 13 was associated with decreased mortality and lower (worse) disability scores. Methodological difficulties, incomplete, obsolete or non-generalizable definitions of some filters can explain the generally poor correlation with outcomes. The conflicting association of filter 13 with the two types of outcomes raises some interesting questions about the targeted outcomes in trauma research. It is recommended that further studies develop better quality indicators and test their link with both survival and functional outcome in the same setting where they are applied for assessment and comparison of trauma care.
Frey, Laurent; Masarotto, Lilian; Armand, Marilyn; Charles, Marie-Lyne; Lartigue, Olivier
2015-05-04
Thin film Fabry-Perot filter arrays with high selectivity can be realized with a single patterning step, generating a spatial modulation of the effective refractive index in the optical cavity. In this paper, we investigate the ability of this technology to address two applications in the field of image sensors. First, the spectral tuning may be used to compensate the blue-shift of the filters in oblique incidence, provided the filter array is located in an image plane of an optical system with higher field of view than aperture angle. The technique is analyzed for various types of filters and experimental evidence is shown with copper-dielectric infrared filters. Then, we propose a design of a multispectral filter array with an extended spectral range spanning the visible and near-infrared range, using a single set of materials and realizable on a single substrate.
A tool for filtering information in complex systems
NASA Astrophysics Data System (ADS)
Tumminello, M.; Aste, T.; Di Matteo, T.; Mantegna, R. N.
2005-07-01
We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but contain a larger amount of information in their internal structure. In particular, in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties. This paper was submitted directly (Track II) to the PNAS office. Abbreviations: MST, minimum spanning tree; PMFG, Planar Maximally Filtered Graph; r-clique, clique of r elements.
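The hierarchical backbone these filtered graphs preserve, the minimum spanning tree, can be built from a correlation matrix with Kruskal's algorithm. This sketches the MST step only; the planar PMFG construction, which adds the loops and 4-cliques, is beyond this snippet:

```python
import math

def correlation_mst(labels, corr):
    """Kruskal MST on the correlation-derived distance d = sqrt(2(1 - rho)).
    corr[i][j] is the correlation between items i and j."""
    n = len(labels)
    edges = sorted(
        (math.sqrt(2.0 * (1.0 - corr[i][j])), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))
    def find(a):                       # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    tree = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                   # shortest edge joining two components
            parent[ri] = rj
            tree.append((labels[i], labels[j], d))
    return tree
```

Highly correlated pairs get short distances, so the tree links each item to its closest correlates, which is the hierarchy the PMFG then enriches.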
On the Spectrum of the Plenoptic Function.
Gilliam, Christopher; Dragotti, Pier-Luigi; Brookes, Mike
2014-02-01
The plenoptic function is a powerful tool to analyze the properties of multi-view image data sets. In particular, the understanding of the spectral properties of the plenoptic function is essential in many computer vision applications, including image-based rendering. In this paper, we derive for the first time an exact closed-form expression of the plenoptic spectrum of a slanted plane with finite width and use this expression as the elementary building block to derive the plenoptic spectrum of more sophisticated scenes. This is achieved by approximating the geometry of the scene with a set of slanted planes and evaluating the closed-form expression for each plane in the set. We then use this closed-form expression to revisit uniform plenoptic sampling. In this context, we derive a new Nyquist rate for the plenoptic sampling of a slanted plane and a new reconstruction filter. Through numerical simulations, on both real and synthetic scenes, we show that the new filter outperforms alternative existing filters.
Means for limiting and ameliorating electrode shorting
Van Konynenburg, Richard A.; Farmer, Joseph C.
1999-01-01
A fuse and filter arrangement for limiting and ameliorating electrode shorting in capacitive deionization water purification systems utilizing carbon aerogel, for example. This arrangement limits and ameliorates the effects of conducting particles or debonded carbon aerogel in shorting the electrodes of a system such as a capacitive deionization water purification system. This is important because of the small interelectrode spacing and the finite possibility of debonding or fragmentation of carbon aerogel in a large system. The fuse and filter arrangement electrically protects the entire system from shutting down if a single pair of electrodes is shorted and mechanically prevents a conducting particle from migrating through the electrode stack, shorting a series of electrode pairs in sequence. It also limits the amount of energy released in a shorting event. The arrangement consists of a set of circuit breakers or fuses, with one fuse or breaker in the power line connected to one electrode of each electrode pair, and a set of screens or filters in the water flow channels between each set of electrode pairs.
NASA Astrophysics Data System (ADS)
Miller, R.
2015-12-01
Following the success of the implicit particle filter in twin experiments with a shallow water model of the nearshore environment, the planned next step is application to the intensive Sandy Duck data set, gathered at Duck, NC. Adaptation of the present system to the Sandy Duck data set will require construction and evaluation of error models for both the model and the data, as well as significant modification of the system to allow for the properties of the data set. Successful implementation of the particle filter promises to shed light on the details of the capabilities and limitations of shallow water models of the nearshore ocean relative to more detailed models. Since the shallow water model admits distinct dynamical regimes, reliable parameter estimation will be important. Previous work by other groups gives cause for optimism. In this talk I will describe my progress toward implementation of the new system, including problems solved, pitfalls remaining, and preliminary results.
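A generic bootstrap particle filter for a scalar model gives the flavor of the predict/weight/resample cycle. The implicit particle filter mentioned above additionally steers particles toward high-probability regions; this sketch omits that step, and the model and parameters are illustrative:

```python
import math
import random

def bootstrap_particle_filter(observations, n_particles=500,
                              q=0.1, r=0.5, seed=1):
    """Bootstrap filter for x_t = x_{t-1} + N(0, q), y_t = x_t + N(0, r)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # predict: push each particle through the process model
        particles = [p + rng.gauss(0.0, math.sqrt(q)) for p in particles]
        # weight: Gaussian observation likelihood
        weights = [math.exp(-0.5 * (y - p) ** 2 / r) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # resample: multinomial draw proportional to the weights
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

With constant observations the posterior mean drifts from the prior toward the observed value, as the weighted estimates show.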
NASA Astrophysics Data System (ADS)
Starosolski, Roman
2016-07-01
Reversible denoising and lifting steps (RDLS) are lifting steps integrated with denoising filters in such a way that, despite the inherently irreversible nature of denoising, they are perfectly reversible. We investigated the application of RDLS to reversible color space transforms: RCT, YCoCg-R, RDgDb, and LDgEb. In order to improve RDLS effects, we propose a heuristic for image-adaptive denoising filter selection, a fast estimator of the compressed image bitrate, and a special filter that may result in skipping of the steps. We analyzed the properties of the presented methods, paying special attention to their usefulness from a practical standpoint. For a diverse image test-set and lossless JPEG-LS, JPEG 2000, and JPEG XR algorithms, RDLS improves the bitrates of all the examined transforms. The most interesting results were obtained for an estimation-based heuristic filter selection out of a set of seven filters; the cost of this variant was similar to or lower than the transform cost, and it improved the average lossless JPEG 2000 bitrates by 2.65% for RDgDb and by over 1% for other transforms; bitrates of certain images were improved to a significantly greater extent.
Filter parameter tuning analysis for operational orbit determination support
NASA Technical Reports Server (NTRS)
Dunham, J.; Cox, C.; Niklewski, D.; Mistretta, G.; Hart, R.
1994-01-01
The use of an extended Kalman filter (EKF) for operational orbit determination support is being considered by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD). To support that investigation, analysis was performed to determine how an EKF can be tuned for operational support of a set of earth-orbiting spacecraft. The objectives of this analysis were to design and test a general purpose scheme for filter tuning, evaluate the solution accuracies, and develop practical methods to test the consistency of the EKF solutions in an operational environment. The filter was found to be easily tuned to produce estimates that were consistent, agreed with results from batch estimation, and compared well among the common parameters estimated for several spacecraft. The analysis indicates that there is not a sharply defined 'best' tunable parameter set, especially when considering only the position estimates over the data arc. The comparison of the EKF estimates for the user spacecraft showed that the filter is capable of high-accuracy results and can easily meet the current accuracy requirements for the spacecraft included in the investigation. The conclusion is that the EKF is a viable option for FDD operational support.
Wille, M-L; Zapf, M; Ruiter, N V; Gemmeke, H; Langton, C M
2015-06-21
The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in analysis is the overlap of signals making it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution to derive a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels in a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched-filtering has a better accuracy (0.13 μs versus 0.18 μs standard deviations), deconvolution has a 3.5 times improved side-lobe to main-lobe ratio. A higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity.
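Matched filtering of a coded chirp reduces, in discrete time, to locating the cross-correlation peak. A minimal sketch with a single (non-overlapping) path and an assumed linear-chirp template; the real problem above involves two overlapping paths, which is where deconvolution helps:

```python
import math

def cross_correlate(signal, template):
    """Matched filtering as discrete cross-correlation; returns the lag
    (in samples) at which the template best aligns with the signal."""
    n, m = len(signal), len(template)
    best_lag, best_val = 0, float("-inf")
    for lag in range(n - m + 1):
        val = sum(signal[lag + k] * template[k] for k in range(m))
        if val > best_val:
            best_val, best_lag = val, lag
    return best_lag

# Assumed linear chirp template and a delayed, noise-free received signal
template = [math.sin(2 * math.pi * (0.01 + 0.002 * t) * t) for t in range(50)]
delay = 37
signal = [0.0] * delay + template + [0.0] * 20
```

The recovered lag, divided by the sampling rate, is the transit-time estimate.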
Fuzzy adaptive interacting multiple model nonlinear filter for integrated navigation sensor fusion.
Tseng, Chien-Hao; Chang, Chih-Wen; Jwo, Dah-Jing
2011-01-01
In this paper, the application of the fuzzy interacting multiple model unscented Kalman filter (FUZZY-IMMUKF) approach to integrated navigation processing for the maneuvering vehicle is presented. The unscented Kalman filter (UKF) employs a set of sigma points through deterministic sampling, such that a linearization process is not necessary, and therefore the errors caused by linearization as in the traditional extended Kalman filter (EKF) can be avoided. The nonlinear filters naturally suffer, to some extent, the same problem as the EKF for which the uncertainty of the process noise and measurement noise will degrade the performance. As a structural adaptation (model switching) mechanism, the interacting multiple model (IMM), which describes a set of switching models, can be utilized for determining the adequate value of process noise covariance. The fuzzy logic adaptive system (FLAS) is employed to determine the lower and upper bounds of the system noise through the fuzzy inference system (FIS). The resulting sensor fusion strategy can efficiently deal with the nonlinear problem for the vehicle navigation. The proposed FUZZY-IMMUKF algorithm shows remarkable improvement in the navigation estimation accuracy as compared to the relatively conventional approaches such as the UKF and IMMUKF.
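The deterministic sigma-point sampling used by the UKF can be illustrated in the scalar case. This is simplified: the same weights are reused for the mean and the covariance, which suffices for this illustration but differs from the full UKF weighting:

```python
import math

def sigma_points_1d(mean, var, alpha=1e-3, kappa=0.0):
    """Scalar (n = 1) unscented-transform sigma points and weights."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * var)
    points = [mean, mean + spread, mean - spread]
    w0 = lam / (n + lam)
    wi = 1.0 / (2.0 * (n + lam))
    return points, [w0, wi, wi]

def unscented_transform(f, mean, var):
    """Propagate (mean, var) through the nonlinearity f via sigma points,
    avoiding the Jacobian linearization an EKF would require."""
    pts, w = sigma_points_1d(mean, var)
    ys = [f(x) for x in pts]
    m = sum(wi * y for wi, y in zip(w, ys))
    p = sum(wi * (y - m) ** 2 for wi, y in zip(w, ys))
    return m, p
```

For an affine f the transform is exact, which makes a convenient sanity check; for nonlinear f it captures the posterior moments to higher order than linearization.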
Impact of axial velocity and transmembrane pressure (TMP) on ARP filter performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poirier, M.; Burket, P.
2016-02-29
The Savannah River Site (SRS) is currently treating radioactive liquid waste with the Actinide Removal Process (ARP) and the Modular Caustic Side Solvent Extraction Unit (MCU). Recently, the low filter flux through the ARP of approximately 5 gallons per minute has limited the rate at which radioactive liquid waste can be treated. Salt Batch 6 had a lower processing rate and required frequent filter cleaning. Savannah River Remediation (SRR) has a desire to understand the causes of the low filter flux and to increase ARP/MCU throughput. One potential method for increasing filter flux is to adjust the axial velocity and transmembrane pressure (TMP). SRR requested SRNL to conduct bench-scale filter tests to evaluate the effects of axial velocity and transmembrane pressure on crossflow filter flux. The objective of the testing was to determine whether increasing the axial velocity at the ARP could produce a significant increase in filter flux. The authors conducted the tests by preparing slurries containing 6.6 M sodium Salt Batch 6 supernate and 2.5 g MST/L, processing the slurry through a bench-scale crossflow filter unit at varying axial velocity and TMP, and measuring filter flux as a function of time.
Wilkins, Ruth; Flegal, Farrah; Knoll, Joan H.M.; Rogan, Peter K.
2017-01-01
Accurate digital image analysis of abnormal microscopic structures relies on high quality images and on minimizing the rates of false positive (FP) and negative objects in images. Cytogenetic biodosimetry detects dicentric chromosomes (DCs) that arise from exposure to ionizing radiation, and determines radiation dose received based on DC frequency. Improvements in automated DC recognition increase the accuracy of dose estimates by reclassifying FP DCs as monocentric chromosomes or chromosome fragments. We also present image segmentation methods to rank high quality digital metaphase images and eliminate suboptimal metaphase cells. A set of chromosome morphology segmentation methods selectively filtered out FP DCs arising primarily from sister chromatid separation, chromosome fragmentation, and cellular debris. This reduced FPs by an average of 55% and was highly specific to these abnormal structures (≥97.7%) in three samples. Additional filters selectively removed images with incomplete, highly overlapped, or missing metaphase cells, or with poor overall chromosome morphologies that increased FP rates. Image selection is optimized and FP DCs are minimized by combining multiple feature based segmentation filters and a novel image sorting procedure based on the known distribution of chromosome lengths. Applying the same image segmentation filtering procedures to both calibration and test samples reduced the average dose estimation error from 0.4 Gy to <0.2 Gy, obviating the need to first manually review these images. This reliable and scalable solution enables batch processing for multiple samples of unknown dose, and meets current requirements for triage radiation biodosimetry of high quality metaphase cell preparations. PMID:29026522
Use of Retrievable Compared to Permanent Inferior Vena Cava Filters: A Single-Institution Experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ha, Thuong G. Van; Chien, Andy S.; Funaki, Brian S.
The purpose of this study was to review the use, safety, and efficacy of retrievable inferior vena cava (IVC) filters in their first 5 years of availability at our institution. Comparison was made with permanent filters placed in the same period. A retrospective review of IVC filter implantations was performed from September, 1999, to September, 2004, in our department. These included both retrievable and permanent filters. The Recovery nitinol and Guenther tulip filters were used as retrievable filters. The frequency of retrievable filter use was calculated. Clinical data and technical data related to filter placement were reviewed. Outcomes, including pulmonary embolism and complications associated with placement, retrieval, or indwelling, were calculated. During the study period, 604 IVC filters were placed. Of these, 97 retrievable filters (16%) were placed in 96 patients. There were 53 Recovery filter and 44 Tulip filter insertions. Subjects were 59 women and 37 men; the mean age was 52 years, with a range of 18 to 97 years. The placement of retrievable filters increased from 2% in year 1 to 32% in year 5 of the study period. The total implantation time for the permanent group was 145,450 days, with an average of 288 days (range, 33-1811 days). For the retrievable group, the total implantation time was 21,671 days, with an average of 226 days (range, 2-1217 days). Of 29 patients who returned for filter retrieval, the filter was successfully removed in 28. There were 14 of 14 successful Tulip filter retrievals and 14 of 15 successful Recovery filter retrievals. In one patient, after an indwelling period of 39 days, a Recovery nitinol filter could not be removed secondary to a large clot burden within the filter. For the filters that were removed, the mean dwell time was 50 days for the Tulip type and 20 days for the Recovery type. Over the follow-up period there was an overall PE incidence of 1.4% for the permanent group and 1% for the retrieval group.
In conclusion, there was an increase in the use of retrievable filters over the study period and an overall increase in the total number of filters implanted. The increased use of these filters appeared to be due to expanded indications predicated by their retrievability. Placement and retrieval of these filters have a low risk of complications, and retrievable filters appeared effective, as there was a low rate of clinically significant pulmonary embolism associated with these filters during their indwelling time.
FAF-Drugs2: free ADME/tox filtering tool to assist drug discovery and chemical biology projects.
Lagorce, David; Sperandio, Olivier; Galons, Hervé; Miteva, Maria A; Villoutreix, Bruno O
2008-09-24
Drug discovery and chemical biology are exceedingly complex and demanding enterprises. In recent years there has been increasing awareness of the importance of predicting/optimizing the absorption, distribution, metabolism, excretion and toxicity (ADMET) properties of small chemical compounds throughout the search process rather than at the final stages. Fast methods for evaluating ADMET properties of small molecules often involve applying a set of simple empirical rules (educated guesses), and as such, compound collections' property profiling can be performed in silico. Clearly, these rules cannot assess the full complexity of the human body but can provide valuable information and assist decision-making. This paper presents FAF-Drugs2, a free adaptable tool for ADMET filtering of electronic compound collections. FAF-Drugs2 is a command line utility program (written in Python) based on the open source chemistry toolkit OpenBabel, which performs various physicochemical calculations and identifies key functional groups as well as some toxic and unstable molecules/functional groups. In addition to filtered collections, FAF-Drugs2 can provide, via Gnuplot, several distribution diagrams of major physicochemical properties of the screened compound libraries. We have developed FAF-Drugs2 to facilitate compound collection preparation, prior to (or after) experimental screening or virtual screening computations. Users can select to apply various filtering thresholds and add rules as needed for a given project. As it stands, FAF-Drugs2 implements numerous filtering rules (23 physicochemical rules and 204 substructure searching rules) that can be easily tuned.
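A rule-based property filter of this kind reduces to a table of predicates. The thresholds below follow Lipinski's rule of five for illustration only; they are not the FAF-Drugs2 rule set, which is far larger and user-tunable:

```python
# Illustrative physicochemical rules (Lipinski-style, not FAF-Drugs2's own)
RULES = {
    "mw":   lambda v: v <= 500,    # molecular weight (Da)
    "logp": lambda v: v <= 5,      # lipophilicity
    "hbd":  lambda v: v <= 5,      # hydrogen-bond donors
    "hba":  lambda v: v <= 10,     # hydrogen-bond acceptors
}

def passes_filter(compound):
    """compound: dict of property name -> value; True if every rule holds."""
    return all(rule(compound[name]) for name, rule in RULES.items())

library = [
    {"mw": 320.0, "logp": 2.1, "hbd": 2, "hba": 5},   # drug-like
    {"mw": 712.0, "logp": 6.3, "hbd": 7, "hba": 14},  # filtered out
]
kept = [c for c in library if passes_filter(c)]
```

Adding a project-specific rule is then a one-line change to the table, which mirrors the tunable-rules design the abstract describes.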
Paredes, E; Perez, S; Rodil, R; Quintana, J B; Beiras, R
2014-06-01
Due to the concern about the negative effects of exposure to sunlight, combinations of UV filters like 4-Methylbenzylidene-camphor (4-MBC), Benzophenone-3 (BP-3), Benzophenone-4 (BP-4) and 2-Ethylhexyl-4-methoxycinnamate (EHMC) are being introduced in all kind of cosmetic formulas. These chemicals are acquiring a concerning status due to their increasingly common use and the potential risk for the environment. The aim of this study is to assess the behaviour of these compounds in seawater, the toxicity to marine organisms from three trophic levels including autotrophs (Isochrysis galbana), herbivores (Mytilus galloprovincialis and Paracentrotus lividus) and carnivores (Siriella armata), and set a preliminary assessment of potential ecological risk of UV filters in coastal ecosystems. In general, EC50 results show that both EHMC and 4-MBC are the most toxic for our test species, followed by BP-3 and finally BP-4. The most affected species by the presence of these UV filters are the microalgae I. galbana, which showed toxicity thresholds in the range of μg L(-1) units, followed by S. armata>P. Lividus>M. galloprovincialis. The UV filter concentrations measured in the sampled beach water were in the range of tens or even hundreds of ng L(-1). The resulting risk quotients showed appreciable environmental risk in coastal environments for BP-3 and 4-MBC. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Won-Hwi; Dang, Jeong-Jeung; Kim, June Young
2016-02-15
Transverse magnetic filter field as well as operating pressure is considered to be an important control knob to enhance negative hydrogen ion production via plasma parameter optimization in volume-produced negative hydrogen ion sources. Stronger filter field to reduce electron temperature sufficiently in the extraction region is favorable, but generally known to be limited by electron density drop near the extraction region. In this study, unexpected electron density increase instead of density drop is observed in front of the extraction region when the applied transverse filter field increases monotonically toward the extraction aperture. Measurements of plasma parameters with a movable Langmuir probe indicate that the increased electron density may be caused by low energy electron accumulation in the filter region decreasing perpendicular diffusion coefficients across the increasing filter field. Negative hydrogen ion populations are estimated from the measured profiles of electron temperatures and densities and confirmed to be consistent with laser photo-detachment measurements of the H{sup −} populations for various filter field strengths and pressures. Enhanced H{sup −} population near the extraction region due to the increased low energy electrons in the filter region may be utilized to increase negative hydrogen beam currents by moving the extraction position accordingly. This new finding can be used to design efficient H{sup −} sources with an optimal filtering system by maximizing high energy electron filtering while keeping low energy electrons available in the extraction region.
Coronary artery segmentation in X-ray angiograms using gabor filters and differential evolution.
Cervantes-Sanchez, Fernando; Cruz-Aceves, Ivan; Hernandez-Aguirre, Arturo; Solorio-Meza, Sergio; Cordova-Fraga, Teodoro; Aviña-Cervantes, Juan Gabriel
2018-08-01
Segmentation of coronary arteries in X-ray angiograms represents an essential task for computer-aided diagnosis, since it can help cardiologists in diagnosing and monitoring vascular abnormalities. Because the main disadvantages of X-ray angiograms are nonuniform illumination and weak contrast between blood vessels and the image background, different vessel enhancement methods have been introduced. In this paper, a novel method for blood vessel enhancement based on Gabor filters tuned using the optimization strategy of differential evolution (DE) is proposed. Because the Gabor filters are governed by three different parameters, the optimal selection of those parameters is highly desirable in order to maximize the vessel detection rate while reducing the computational cost of the training stage. To obtain the optimal set of parameters for the Gabor filters, the area (Az) under the receiver operating characteristic curve is used as the objective function. In the experimental results, the proposed method achieves Az = 0.9388 on a training set of 40 images, and on a test set of 40 images it obtains the highest performance, Az = 0.9538, compared with six state-of-the-art vessel detection methods. Finally, the proposed method achieves an accuracy of 0.9423 for vessel segmentation using the test set. In addition, the experimental results have also shown that the proposed method can be highly suitable for clinical decision support in terms of computational time and vessel segmentation performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
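The DE tuning loop described above can be sketched generically. The surrogate objective below stands in for 1 − Az (computing the real ROC area would require training images); the target parameter values and bounds are hypothetical, but the mutation/crossover/selection structure is standard DE/rand/1/bin:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(params):
    # Stand-in for 1 - Az: the paper maximizes the ROC area of the Gabor
    # response; here a smooth surrogate whose optimum sits at hypothetical
    # Gabor parameter values (2.0, 3.0, 0.5).
    return float(np.sum((params - np.array([2.0, 3.0, 0.5])) ** 2))

def differential_evolution(f, bounds, pop=20, gens=100, F=0.8, CR=0.9):
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    X = lo + rng.random((pop, dim)) * (hi - lo)   # random initial population
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            idx = [j for j in range(pop) if j != i]
            a, b, c = X[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # DE/rand/1 mutation
            cross = rng.random(dim) < CR                # binomial crossover
            trial = np.where(cross, mutant, X[i])
            f_trial = f(trial)
            if f_trial < fit[i]:                        # greedy selection
                X[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return X[best], float(fit[best])

bounds = np.array([[0.1, 10.0], [0.1, 10.0], [0.0, np.pi]])
best_params, best_val = differential_evolution(objective, bounds)
```

In the paper's setting, `objective` would evaluate the Gabor filter bank on training angiograms and return 1 − Az.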
Nonlinear Attitude Filtering Methods
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Crassidis, John L.; Cheng, Yang
2005-01-01
This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST, the super-iterated extended Kalman filter, the interlaced extended Kalman filter, and the second-order Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A two-step approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, including particle filters and a Bayesian filter based on a non-Gaussian, finite-parameter probability density function on SO(3). Finally, the predictive filter, nonlinear observers and adaptive approaches are shown. The strengths and weaknesses of the various approaches are discussed.
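The sigma-point filters surveyed above replace Jacobian-based linearization with deterministic sample propagation. A minimal sketch of the underlying unscented transform (the scaling constants alpha, beta, kappa are generic tuning parameters, not values from the paper, and attitude-specific handling of SO(3) is omitted):

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate (mean, cov) through a nonlinearity f via sigma points."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)          # matrix square root
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))  # mean weights
    wc = wm.copy()                                    # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha ** 2 + beta)
    ys = np.array([f(p) for p in pts])
    y_mean = wm @ ys
    diff = ys - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov
```

For a linear map the transform reproduces the exact propagated mean and covariance, which is a convenient sanity check.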
NASA Astrophysics Data System (ADS)
Gómez Valverde, Juan J.; Ortuño, Juan E.; Guerra, Pedro; Hermann, Boris; Zabihian, Behrooz; Rubio-Guivernau, José L.; Santos, Andrés; Drexler, Wolfgang; Ledesma-Carbayo, Maria J.
2015-07-01
Optical Coherence Tomography (OCT) has shown great potential as a complementary imaging tool in the diagnosis of skin diseases. Speckle noise is the most prominent artifact present in OCT images and can limit interpretation and detection capabilities. In this work we propose a new speckle reduction process and compare it with various denoising filters with high edge-preserving potential, using several sets of dermatological OCT B-scans. To validate the performance we used a custom-designed spectral domain OCT and two different data set groups. The first group consisted of five datasets of a single B-scan captured N times (with N<20); the second consisted of five 3D volumes of 25 B-scans. As quality metrics we used the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and equivalent number of looks (ENL). Our results show that a process based on a combination of a 2D enhanced sigma digital filter and a wavelet compounding method achieves the best results in terms of the improvement of the quality metrics. In the first group of individual B-scans we achieved improvements in SNR, CNR and ENL of 16.87 dB, 2.19 and 328, respectively; for the 3D volume datasets the improvements were 15.65 dB, 3.44 and 1148. Our results suggest that the proposed enhancement process may significantly reduce speckle, increasing SNR, CNR and ENL and reducing the number of extra acquisitions of the same frame.
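The three quality metrics used above have simple regional definitions. The forms below are common conventions (exact region choices and formulas vary between papers, so these are not necessarily the authors' definitions), illustrated on synthetic patches standing in for B-scan regions:

```python
import numpy as np

def snr_db(signal_region, noise_region):
    # SNR (dB): peak signal amplitude over background standard deviation
    return 20.0 * np.log10(signal_region.max() / noise_region.std())

def cnr(signal_region, background_region):
    # contrast-to-noise ratio between a feature region and the background
    return (signal_region.mean() - background_region.mean()) / np.sqrt(
        signal_region.var() + background_region.var())

def enl(homogeneous_region):
    # equivalent number of looks of a speckled homogeneous region
    return homogeneous_region.mean() ** 2 / homogeneous_region.var()

# synthetic "tissue" and "background" patches (invented statistics)
rng = np.random.default_rng(0)
tissue = rng.normal(10.0, 1.0, 10_000)
background = rng.normal(1.0, 1.0, 10_000)
```

Effective despeckling raises all three: averaging or compounding lowers regional variance, which increases SNR, CNR and especially ENL.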
Yim, Young-Sun; Davis, Georgia L.; Duru, Ngozi A.; Musket, Theresa A.; Linton, Eric W.; Messing, Joachim W.; McMullen, Michael D.; Soderlund, Carol A.; Polacco, Mary L.; Gardiner, Jack M.; Coe, Edward H.
2002-01-01
Three maize (Zea mays) bacterial artificial chromosome (BAC) libraries were constructed from inbred line B73. High-density filter sets from all three libraries, made using different restriction enzymes (HindIII, EcoRI, and MboI, respectively), were evaluated with a set of complex probes including the 185-bp knob repeat, ribosomal DNA, two telomere-associated repeat sequences, four centromere repeats, the mitochondrial genome, a multifragment chloroplast DNA probe, and bacteriophage λ. The results indicate that the libraries are of high quality with low contamination by organellar and λ sequences. The use of libraries made with multiple enzymes increased the chance of recovering each region of the genome. Ninety maize restriction fragment-length polymorphism core markers were hybridized to filters of the HindIII library, representing 6× coverage of the genome, to initiate development of a framework for anchoring BAC contigs to the intermated B73 × Mo17 genetic map and to mark the bin boundaries on the physical map. All of the clones used as hybridization probes detected at least three BACs. Twenty-two single-copy core markers identified an average of 7.4 ± 3.3 positive clones, consistent with the expectation of six clones. This information was integrated into fingerprinting data generated by the Arizona Genomics Institute to assemble the BAC contigs using fingerprint contigs, and contributed to the process of physical map construction. PMID:12481051
A 3D ultrasound scanner: real time filtering and rendering algorithms.
Cifarelli, D; Ruggiero, C; Brusacà, M; Mazzarella, M
1997-01-01
The work described here has been carried out within a collaborative project between DIST and ESAOTE BIOMEDICA aiming to set up a new ultrasonic scanner performing 3D reconstruction. A system is being set up to process and display 3D ultrasonic data in a fast, economical and user-friendly way to help the physician during diagnosis. A comparison is presented among several algorithms for digital filtering, data segmentation and rendering for real-time, PC-based, three-dimensional reconstruction from B-mode ultrasonic biomedical images. Several algorithms for digital filtering have been compared with respect to processing time and final image quality. Three-dimensional data segmentation and rendering have been implemented with special reference to user-friendly features for foreseeable applications and to reconstruction speed.
Normalised subband adaptive filtering with extended adaptiveness on degree of subband filters
NASA Astrophysics Data System (ADS)
Samuyelu, Bommu; Rajesh Kumar, Pullakura
2017-12-01
This paper proposes an adaptive normalised subband adaptive filter (NSAF) to improve the performance of the standard NSAF. In the proposed NSAF, adaptiveness is extended beyond existing variants in two ways: first, the step-size is made adaptive, and second, the selection of subbands is made adaptive. Hence, the proposed NSAF is termed here the variable step-size-based NSAF with selected subbands (VS-SNSAF). Experimental investigations are carried out to demonstrate the performance (in terms of convergence) of VS-SNSAF against the conventional NSAF and its state-of-the-art adaptive variants. The results report the superior performance of VS-SNSAF over the traditional NSAF and its variants. Its stability and robustness against noise are also demonstrated, at an acceptable computational cost.
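NSAF generalizes the normalised LMS (NLMS) update by running it in decimated subbands. As a self-contained reference point, here is the fullband NLMS baseline only; the subband analysis filters and the proposed adaptive step-size and subband selection are omitted, and the unknown system below is invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def nlms(x, d, n_taps, mu=0.5, eps=1e-6):
    """Fullband NLMS system identification: adapt w so that w*x tracks d."""
    w = np.zeros(n_taps)
    for i in range(n_taps - 1, len(x)):
        u = x[i - n_taps + 1:i + 1][::-1]      # most recent samples first
        e = d[i] - w @ u                       # a-priori error
        w += mu * e * u / (u @ u + eps)        # normalised update
    return w

h_true = np.array([0.5, -0.3, 0.2, 0.1])       # unknown system (invented)
x = rng.normal(size=5000)                      # white excitation
d = np.convolve(x, h_true)[:len(x)]            # noiseless desired signal
w_hat = nlms(x, d, len(h_true))
```

For colored inputs, fullband NLMS converges slowly; splitting the signal into subbands and normalising per subband (as NSAF does) whitens the input seen by each update, which is what the variable step-size and subband selection then tune further.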
Fine PM measurements: personal and indoor air monitoring.
Jantunen, M; Hänninen, O; Koistinen, K; Hashim, J H
2002-12-01
This review compiles personal and indoor microenvironment particulate matter (PM) monitoring needs from recently set research objectives, most importantly the NRC publication "Research Priorities for Airborne Particulate Matter" (1998). Techniques and equipment used to monitor PM personal exposures and microenvironment concentrations, and the constituents of the sampled PM, during the last 20 years are then reviewed. Development objectives are set and discussed for personal and microenvironment PM samplers and monitors, for filter materials, and for analytical laboratory techniques for equipment calibration, filter weighing and laboratory climate control. The progress is leading towards smaller sample flows and lighter, silent, independent (battery-powered) monitors with data-logging capacity to store microenvironment- or activity-relevant sensor data, advanced flow controls and continuous recording of the concentration. The best filters are non-hygroscopic, chemically pure and inert, and physically robust against mechanical wear. Semiautomatic and primary-standard-equivalent positive displacement flow meters are replacing less accurate methods in flow calibration, and personal sampling flow rates should also become mass-flow controlled (with or without volumetric compensation for pressure and temperature changes). In the weighing laboratory the alternatives are full climatic control (set temperature and relative humidity), or mechanically simpler thermostatic heating, air conditioning and dehumidification systems combined with numerical control of temperature, humidity and pressure effects on flow calibration and filter weighing.
Seo, Soo Hong; Kim, Jae Hwan; Kim, Ji Woong; Kye, Young Chul; Ahn, Hyo Hyun
2011-02-01
Digital photography can be used to measure skin color colorimetrically when combined with proper techniques. To better understand the settings of digital photography for the evaluation and measurement of skin colors, we used a tungsten lamp with filters and the custom white balance (WB) function of a digital camera. All colored squares on a color chart were photographed under each original and filtered light, converted into CIELAB coordinates to produce the calibration for each given light setting, and compared statistically with reference coordinates obtained using a reflectance spectrophotometer. Results were summarized for typical color groups, such as skin colors. We compared these results for the fixed vs. custom WB settings of the digital camera. The accuracy of color measurement was improved when using light with a proper color temperature conversion filter. The skin colors from color charts could be measured more accurately using a fixed WB. In vivo measurement of skin color was easy and feasible with our method and settings. The color temperature conversion filter that produced daylight-like light from the tungsten lamp was the best choice when combined with fixed WB for the measurement of colors and acceptable photographs. © 2010 John Wiley & Sons A/S.
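Agreement with the spectrophotometer reference of the kind described is typically summarised as a colour difference in CIELAB space. A minimal sketch using the CIE76 formula (the paper may have used a different ΔE variant; the coordinate values below are invented):

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two CIELAB coordinate triples."""
    return float(np.linalg.norm(np.asarray(lab1, dtype=float)
                                - np.asarray(lab2, dtype=float)))

# camera-derived vs. spectrophotometer reference (invented values)
camera_lab = (63.0, 12.0, 17.0)
reference_lab = (60.0, 8.0, 17.0)
diff = delta_e_ab(camera_lab, reference_lab)
```

Smaller ΔE means better colorimetric agreement; a common rule of thumb is that ΔE around 2 or less is barely perceptible.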
Software Would Largely Automate Design of Kalman Filter
NASA Technical Reports Server (NTRS)
Chuang, Jason C. H.; Negast, William J.
2005-01-01
Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are selection of error states of the filter and tuning of filter parameters, which are timeconsuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.
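The second ENFAD loop, tuning filter parameters against Monte Carlo performance, can be illustrated with a scalar random-walk Kalman filter whose process-noise parameter q is selected by simulation. All numbers here are hypothetical; ENFAD itself works on full vehicle and sensor models:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_filter(q, zs, r=1.0):
    # scalar random-walk Kalman filter: state x, variance p
    x, p, est = 0.0, 1.0, []
    for z in zs:
        p += q                    # predict
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # measurement update
        p *= 1.0 - k
        est.append(x)
    return np.array(est)

# Monte Carlo truth/measurement sets with true process noise q = 0.05
q_true, r, runs = 0.05, 1.0, 30
data = []
for _ in range(runs):
    truth = np.cumsum(rng.normal(0.0, np.sqrt(q_true), 200))
    data.append((truth, truth + rng.normal(0.0, np.sqrt(r), 200)))

def mc_rmse(q):
    # average RMSE of the filter with assumed q over the Monte Carlo set
    return float(np.mean([np.sqrt(np.mean((run_filter(q, zs) - t) ** 2))
                          for t, zs in data]))

# tuning loop: pick the candidate q that minimizes Monte Carlo RMSE
best_q = min([0.001, 0.01, 0.05, 0.25, 1.0], key=mc_rmse)
```

The loop recovers the true process noise because both under- and over-estimating q degrades tracking; ENFAD's outer loop applies the same idea to the choice of error states.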
The Filtered Abel Transform and Its Application in Combustion Diagnostics
NASA Technical Reports Server (NTRS)
Simons, Stephen N. (Technical Monitor); Yuan, Zeng-Guang
2003-01-01
Many non-intrusive combustion diagnosis methods generate line-of-sight projections of a flame field. To reconstruct the spatial field of the measured properties, these projections need to be deconvoluted. When the spatial field is axisymmetric, commonly used deconvolution methods include the Abel transform, the onion peeling method, and the two-dimensional Fourier transform method and its derivatives such as the filtered back projection methods. This paper proposes a new approach to the Abel transform that possesses the exactness of the Abel transform and the flexibility of incorporating various filters in the reconstruction process. The Abel transform is an exact method and the simplest among these commonly used methods. It is shown in this paper that all exact reconstruction methods for axisymmetric distributions must be equivalent to the Abel transform because of its uniqueness and exactness. A detailed proof is presented to show that the two-dimensional Fourier method, when applied to axisymmetric cases, is identical to the Abel transform. Discrepancies among the various reconstruction methods stem from the different approximations made to perform numerical calculations. An equation relating the spectrum of a set of projection data to that of the corresponding spatial distribution is obtained, which shows that the spectrum of the projection is equal to the Abel transform of the spectrum of the corresponding spatial distribution. From the equation, if either the projection or the distribution is bandwidth limited, the other is also bandwidth limited, and both have the same bandwidth. If the two are not bandwidth limited, the Abel transform has a bias against low wave number components in most practical cases. This explains why the Abel transform and all exact deconvolution methods are sensitive to high wave number noise.
The filtered Abel transform is based on the fact that the Abel transform of filtered projection data is equal to an integral transform of the original projection data whose kernel function is the Abel transform of the filtering function. The kernel function is independent of the projection data and can be obtained separately once the filtering function is selected. Users can select the best filtering function for a particular set of experimental data. Once the kernel function is obtained, it can be applied repeatedly to a number of projection data sets (rows) from the same experiment. When an entire flame image that contains a large number of projection lines needs to be processed, the new approach significantly reduces the computational effort in comparison with the conventional approach, in which each projection data set is deconvoluted separately. Computer codes have been developed to perform the filtered Abel transform for an entire flame field. Measured soot volume fraction data of a jet diffusion flame are processed as an example.
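As a concrete, simplified illustration of the deconvolution problem: the sketch below forward-projects an axisymmetric Gaussian field using a piecewise-constant ring (onion-peeling) model and recovers it by solving the resulting triangular system. The Gaussian/√π pair is a standard analytic check; the filtered variant described above would additionally convolve the projections with a chosen filter, which is omitted here:

```python
import numpy as np

def onion_peeling_matrix(n, dr):
    """Chord-length weights: projection at offset y_i = r_i through rings
    of uniform value between radii r_j and r_{j+1}."""
    r = np.arange(n + 1) * dr
    W = np.zeros((n, n))
    for i in range(n):
        y2 = r[i] ** 2
        for j in range(i, n):
            W[i, j] = 2.0 * (np.sqrt(r[j + 1] ** 2 - y2)
                             - np.sqrt(max(r[j] ** 2 - y2, 0.0)))
    return W

n, dr = 50, 0.1
r_mid = (np.arange(n) + 0.5) * dr
f_true = np.exp(-r_mid ** 2)            # axisymmetric Gaussian field
W = onion_peeling_matrix(n, dr)
proj = W @ f_true                       # line-of-sight projections
f_rec = np.linalg.solve(W, proj)        # deconvolution (back-substitution)

# analytic Abel pair: f(r) = exp(-r^2)  ->  P(y) = sqrt(pi) * exp(-y^2)
y = np.arange(n) * dr
analytic = np.sqrt(np.pi) * np.exp(-y ** 2)
```

The small residual between `proj` and the analytic projection is discretization error; it is exactly this kind of error, amplified at high wave numbers, that motivates the filtering discussed above.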
Jing, X; Cimino, J J
2014-01-01
Graphical displays can make data more understandable; however, large graphs can challenge human comprehension. We have previously described a filtering method to provide high-level summary views of large data sets. In this paper we demonstrate our method for setting and selecting thresholds to limit graph size while retaining important information, by applying it to large single and paired data sets taken from patient and bibliographic databases. Four case studies are used to illustrate our method. The data are either patient discharge diagnoses (coded using the International Classification of Diseases, Clinical Modification [ICD9-CM]) or Medline citations (coded using the Medical Subject Headings [MeSH]). We use combinations of different thresholds to obtain filtered graphs for detailed analysis. The setting and selection of thresholds, such as thresholds for node counts, class counts, ratio values, p values (for paired data sets), and percentiles of selected class count thresholds, are demonstrated in detail in the case studies. The main steps include: data preparation, data manipulation, computation, and threshold selection and visualization. We also describe the data models for different types of thresholds and the considerations for threshold selection. The filtered graphs are 1%-3% of the size of the original graphs. For our case studies, the graphs provide 1) the most heavily used ICD9-CM codes, 2) the codes with the most patients in a research hospital in 2011, 3) a profile of publications on "heavily represented topics" in MEDLINE in 2011, and 4) validated knowledge about adverse effects of rosiglitazone and new areas of interest in the ICD9-CM hierarchy associated with patients taking pioglitazone. Our filtering method reduces large graphs to a manageable size by removing relatively unimportant nodes.
The graphical method provides summary views based on computation of usage frequency and semantic context of hierarchical terminology. The method is applicable to large data sets (such as a hundred thousand records or more) and can be used to generate new hypotheses from data sets coded with hierarchical terminologies.
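The threshold-based reduction described above amounts to keeping only nodes whose usage counts clear the chosen cutoff and pruning edges that no longer connect two kept nodes. A schematic sketch (the ICD9-CM codes are real, but the counts, edges, and threshold are invented for illustration):

```python
def filter_graph(node_counts, edges, min_count=100):
    """Keep nodes whose usage count reaches min_count; drop edges that
    no longer connect two kept nodes."""
    kept = {n for n, c in node_counts.items() if c >= min_count}
    kept_edges = [(a, b) for a, b in edges if a in kept and b in kept]
    return kept, kept_edges

# toy usage counts for ICD9-CM codes (invented numbers)
counts = {"250.00": 950, "401.9": 1200, "V72.5": 12, "428.0": 430}
edges = [("250.00", "401.9"), ("401.9", "V72.5"), ("401.9", "428.0")]
nodes, pruned = filter_graph(counts, edges, min_count=100)
```

The full method layers several such thresholds (class counts, ratios, p values) and exploits the terminology hierarchy, but the size reduction comes from this basic pruning step.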
Effects of training set selection on pain recognition via facial expressions
NASA Astrophysics Data System (ADS)
Shier, Warren A.; Yanushkevich, Svetlana N.
2016-07-01
This paper presents an approach to pain expression classification based on Gabor energy filters with support vector machines (SVMs), followed by an analysis of the effects of training set variations on the system's classification rate. The approach is tested on the UNBC-McMaster Shoulder Pain Archive, which consists of spontaneous pain images hand-labelled using the Prkachin and Solomon Pain Intensity scale. In this paper, each subject's pain intensity level has been quantized into three disjoint groups: no pain, weak pain and strong pain. The results of experiments show that Gabor energy filters with SVMs provide results comparable or superior to previous filter-based pain recognition methods, with precision rates of 74%, 30% and 78% for no pain, weak pain and strong pain, respectively. The study of the effects of intra-class skew, i.e. changing the number of images per subject, shows that both completely removing and over-representing poor-quality subjects in the training set has little effect on the overall accuracy of the system. This result suggests that poor-quality subjects could be removed from the training set to save offline training time, and that SVM is robust not only to outliers in training data but also to significant amounts of poor-quality data mixed into the training sets.
NASA Astrophysics Data System (ADS)
Demro, James C.; Hartshorne, Richard; Woody, Loren M.; Levine, Peter A.; Tower, John R.
1995-06-01
The next generation Wedge Imaging Spectrometer (WIS) instruments currently in integration at Hughes SBRC incorporate advanced features to increase operational flexibility for remotely sensed hyperspectral imagery collection and use. These features include: a) multiple linear wedge filters to tailor the spectral bands to the scene phenomenology; b) simple, replaceable fore-optics to allow different spatial resolutions and coverages; c) a data acquisition system (DAS) that collects the full data stream simultaneously from both WIS instruments (VNIR and SWIR/MWIR), stores the data in RAID storage, and provides for down-loading of the data to MO disks; the WIS DAS also allows selection of the spectral band sets to be stored; and d) a high-performance VNIR camera subsystem based upon a 512 X 512 CCD area array and associated electronics.
Improved digital filters for evaluating Fourier and Hankel transform integrals
Anderson, Walter L.
1975-01-01
New algorithms are described for evaluating Fourier (cosine, sine) and Hankel (J0, J1) transform integrals by means of digital filters. The filters have been designed with extended lengths so that a variable convolution operation can be applied to a large class of integral transforms having the same system transfer function. A lagged-convolution method is also presented to significantly decrease the computation time when computing a series of like transforms over a parameter set spaced the same as the filters. Accuracy of the new filters is comparable to Gaussian integration, provided moderate parameter ranges and well-behaved kernel functions are used. A collection of Fortran IV subprograms is included for both real and complex functions for each filter type. The algorithms have been successfully used in geophysical applications containing a wide variety of integral transforms.
NASA Astrophysics Data System (ADS)
Kowalczyk, Marek; Martínez-Corral, Manuel; Cichocki, Tomasz; Andrés, Pedro
1995-02-01
Two novel algorithms for the binarization of continuous, rotationally symmetric, real and positive pupil filters are presented. Both algorithms are based on the one-dimensional error diffusion concept. In our numerical experiment an original gray-tone apodizer is substituted by a set of transparent and opaque concentric annular zones. Depending on the algorithm, the resulting binary mask consists of either equal-width or equal-area zones. The diffractive behavior of the binary filters is evaluated. It is shown that the filter with equal-width zones gives a Fraunhofer diffraction pattern more similar to that of the original gray-tone apodizer than the filter with equal-area zones, assuming in both cases the same resolution limit of the device used to print both filters.
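The equal-width variant of the binarization can be sketched with one-dimensional error diffusion along the radius. The cosine-squared profile below is an arbitrary stand-in for the original gray-tone apodizer, not the one used in the paper:

```python
import numpy as np

def error_diffusion_binarize(transmittance):
    """1-D error diffusion along the radius: quantize each equal-width
    zone to 0 (opaque) or 1 (transparent) and push the quantization
    error onto the next zone."""
    out = np.zeros_like(transmittance)
    err = 0.0
    for i, t in enumerate(transmittance):
        v = t + err
        out[i] = 1.0 if v >= 0.5 else 0.0
        err = v - out[i]          # residual carried to the next zone
    return out

r = np.linspace(0.0, 1.0, 200)              # normalized radius
gray = np.cos(np.pi * r / 2.0) ** 2         # stand-in gray-tone apodizer
binary = error_diffusion_binarize(gray)
```

Because the quantization error is carried forward rather than discarded, the binary mask preserves the local average transmittance of the gray-tone filter, which is why its low-frequency diffraction behavior stays close to the original.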
High-throughput single-molecule force spectroscopy for membrane proteins
NASA Astrophysics Data System (ADS)
Bosshart, Patrick D.; Casagrande, Fabio; Frederix, Patrick L. T. M.; Ratera, Merce; Bippes, Christian A.; Müller, Daniel J.; Palacin, Manuel; Engel, Andreas; Fotiadis, Dimitrios
2008-09-01
Atomic force microscopy-based single-molecule force spectroscopy (SMFS) is a powerful tool for studying the mechanical properties, intermolecular and intramolecular interactions, unfolding pathways, and energy landscapes of membrane proteins. One limiting factor for the large-scale applicability of SMFS on membrane proteins is its low efficiency in data acquisition. We have developed a semi-automated high-throughput SMFS (HT-SMFS) procedure for efficient data acquisition. In addition, we present a coarse filter to efficiently extract protein unfolding events from large data sets. The HT-SMFS procedure and the coarse filter were validated using the proton pump bacteriorhodopsin (BR) from Halobacterium salinarum and the L-arginine/agmatine antiporter AdiC from the bacterium Escherichia coli. To screen for molecular interactions between AdiC and its substrates, we recorded data sets in the absence and in the presence of L-arginine, D-arginine, and agmatine. Altogether ~400 000 force-distance curves were recorded. Application of coarse filtering to this wealth of data yielded six data sets with ~200 (AdiC) and ~400 (BR) force-distance spectra in each. Importantly, the raw data for most of these data sets were acquired in one to two days, opening new perspectives for HT-SMFS applications.
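The coarse-filter idea, cheaply discarding curves that show no plausible unfolding events before any detailed analysis, can be sketched as a simple force-drop counter. The thresholds and the sawtooth test data are invented; the authors' actual filter criteria are more elaborate:

```python
import numpy as np

def coarse_filter(curves, min_events=3, drop_pn=50.0):
    """Keep force-distance curves with at least min_events sharp force
    drops (a crude proxy for sequential domain-unfolding ruptures)."""
    kept = []
    for c in curves:
        n_events = int(np.sum(np.diff(c) < -drop_pn))
        if n_events >= min_events:
            kept.append(c)
    return kept

# sawtooth-like curve: four rupture events of ~90 pN (invented data)
unfolding = np.array([0, 30, 60, 90, 0, 30, 60, 90, 0,
                      30, 60, 90, 0, 30, 60, 90, 0], dtype=float)
featureless = np.linspace(0.0, 40.0, 17)     # no sharp drops
kept = coarse_filter([unfolding, featureless])
```

Applied to ~400 000 recorded curves, even a crude criterion like this reduces the data to the few hundred spectra worth inspecting, which is the point of the coarse filter.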
Compressively Characterizing High-Dimensional Entangled States with Complementary, Random Filtering
2016-06-30
halves of the SLM, respectively. The signal and idler fields are routed to separate digital micromirror devices (DMDs) via a 500-mm lens and a 50/50 beam...are a topic of future research. Figure 4(a) shows slices of the joint-position reconstruction along the signal axis, where each curve corresponds to...in Fig. 5 as a function of measurement number. Different curves correspond to increased levels of thresholding, setting values below a percentage of
Aycock, Kenneth I; Campbell, Robert L; Manning, Keefe B; Craven, Brent A
2017-06-01
Inferior vena cava (IVC) filters are medical devices designed to provide a mechanical barrier to the passage of emboli from the deep veins of the legs to the heart and lungs. Despite decades of development and clinical use, IVC filters still fail to prevent the passage of all hazardous emboli. The objective of this study is to (1) develop a resolved two-way computational model of embolus transport, (2) provide verification and validation evidence for the model, and (3) demonstrate the ability of the model to predict the embolus-trapping efficiency of an IVC filter. Our model couples computational fluid dynamics simulations of blood flow to six-degree-of-freedom simulations of embolus transport and resolves the interactions between rigid, spherical emboli and the blood flow using an immersed boundary method. Following model development and numerical verification and validation of the computational approach against benchmark data from the literature, embolus transport simulations are performed in an idealized IVC geometry. Centered and tilted filter orientations are considered using a nonlinear finite element-based virtual filter placement procedure. A total of 2048 coupled CFD/6-DOF simulations are performed to predict the embolus-trapping statistics of the filter. The simulations predict that the embolus-trapping efficiency of the IVC filter increases with increasing embolus diameter and increasing embolus-to-blood density ratio. Tilted filter placement is found to decrease the embolus-trapping efficiency compared with centered filter placement. Multiple embolus-trapping locations are predicted for the IVC filter, and the trapping locations are predicted to shift upstream and toward the vessel wall with increasing embolus diameter. Simulations of the injection of successive emboli into the IVC are also performed and reveal that the embolus-trapping efficiency decreases with increasing thrombus load in the IVC filter. 
In future work, the computational tool could be used to investigate IVC filter design improvements, the effect of patient anatomy on embolus transport and IVC filter embolus-trapping efficiency, and, with further development and validation, optimal filter selection and placement on a patient-specific basis.
A comparative study: classification vs. user-based collaborative filtering for clinical prediction.
Hao, Fang; Blair, Rachael Hageman
2016-12-08
Recommender systems have shown tremendous value for the prediction of personalized item recommendations for individuals in a variety of settings (e.g., marketing, e-commerce). User-based collaborative filtering is a popular recommender system that leverages an individual's prior satisfaction with items, as well as the satisfaction of individuals who are "similar". Recently, there have been applications of collaborative filtering based recommender systems for clinical risk prediction. In these applications, individuals represent patients, and items represent clinical data, which includes an outcome. Application of recommender systems to a problem of this type requires recasting a supervised learning problem as an unsupervised one. The rationale is that patients with similar clinical features carry a similar disease risk. As the "Big Data" era progresses, it is likely that approaches of this type will increasingly be reached for as biomedical data continues to grow in both size and complexity (e.g., electronic health records). In the present study, we set out to understand and assess the performance of recommender systems in a controlled yet realistic setting. User-based collaborative filtering recommender systems are compared to logistic regression and random forests with different types of imputation and varying amounts of missingness on four different publicly available medical data sets: the National Health and Nutrition Examination Survey (NHANES, 2011-2012, on obesity), the Study to Understand Prognoses Preferences Outcomes and Risks of Treatment (SUPPORT), chronic kidney disease, and dermatology data. We also examined performance using simulated data with observations that are Missing At Random (MAR) or Missing Completely At Random (MCAR) under various degrees of missingness and levels of class imbalance in the response variable.
Our results demonstrate that user-based collaborative filtering is consistently inferior to logistic regression and random forests with different imputations on real and simulated data. The results warrant caution in using collaborative filtering (CF) for clinical risk prediction when traditional classification is feasible and practical; CF may not be desirable in data sets where classification is an acceptable alternative. We describe some natural applications related to "Big Data" where CF would be preferred, and conclude with some insights as to why caution may be warranted in this context.
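User-based collaborative filtering as compared above predicts a missing entry from the ratings of similar users. A minimal sketch with cosine similarity over co-rated items (toy data; in the clinical recasting, rows would be patients and columns clinical variables including the outcome):

```python
import numpy as np

def predict_user_based(R, user, item, k=2):
    """Predict R[user, item] as a similarity-weighted mean of the ratings
    of the k most similar users who rated the item (np.nan = missing)."""
    rated_u = ~np.isnan(R[user])
    sims, vals = [], []
    for other in range(R.shape[0]):
        if other == user or np.isnan(R[other, item]):
            continue
        common = rated_u & ~np.isnan(R[other])   # co-rated items
        if common.sum() < 2:
            continue
        a, b = R[user, common], R[other, common]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom > 0:
            sims.append(a @ b / denom)           # cosine similarity
            vals.append(R[other, item])
    top = np.argsort(sims)[::-1][:k]             # k nearest neighbours
    s, v = np.array(sims)[top], np.array(vals)[top]
    return float(s @ v / s.sum())

# toy ratings matrix: rows = users, columns = items, nan = missing
R = np.array([[5.0, 4.0, np.nan],
              [5.0, 4.0, 5.0],
              [1.0, 2.0, 1.0]])
pred = predict_user_based(R, user=0, item=2)
```

Note that nothing in this prediction distinguishes the outcome column from any other item, which is exactly the supervised-to-unsupervised recasting the comparison above finds inferior to direct classification.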
Choosing and using methodological search filters: searchers' views.
Beale, Sophie; Duffy, Steven; Glanville, Julie; Lefebvre, Carol; Wright, Dianne; McCool, Rachael; Varley, Danielle; Boachie, Charles; Fraser, Cynthia; Harbour, Jenny; Smith, Lynne
2014-06-01
Search filters or hedges are search strategies developed to assist information specialists and librarians to retrieve different types of evidence from bibliographic databases. The objectives of this project were to learn about searchers' filter use, how searchers choose search filters and what information they would like to receive to inform their choices. Interviews with information specialists working in, or for, the National Institute for Health and Care Excellence (NICE) were conducted. An online questionnaire survey was also conducted and advertised via a range of email lists. Sixteen interviews were undertaken and 90 completed questionnaires were received. The use of search filters tends to be linked to reducing a large amount of literature, introducing focus and assisting with searches that are based on a single study type. Respondents use numerous ways to identify search filters and can find choosing between different filters problematic because of knowledge gaps and lack of time. Search filters are used mainly for reducing large result sets (introducing focus) and assisting with searches focused on a single study type. Features that would help with choosing filters include making information about filters less technical, offering ratings and providing more detail about filter validation strategies and filter provenance. © 2014 The authors. Health Information and Libraries Journal © 2014 Health Libraries Group.
Frequency modulation television analysis: Distortion analysis
NASA Technical Reports Server (NTRS)
Hodge, W. H.; Wong, W. H.
1973-01-01
Computer simulation is used to calculate the time-domain waveform of a standard T-pulse-and-bar test signal distorted in passing through an FM television system. The simulator includes flat or preemphasized systems and requires specification of the RF predetection filter characteristics. The predetection filters are modeled as frequency-symmetric Chebyshev (0.1-dB ripple) and Butterworth filters. The computer was used to calculate distorted output signals for sixty-four different specified systems, and the output waveforms are plotted for all sixty-four. Comparison of the plotted graphs indicates that a Chebyshev predetection filter of four poles causes slightly more signal distortion than a corresponding Butterworth filter, and that signal distortion increases as the number of poles increases. An increase in the peak deviation also increases signal distortion. Distortion also increases with the addition of preemphasis.
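The contrast between the two filter families (equiripple Chebyshev versus maximally flat Butterworth, with steeper but more distorting responses as the pole count grows) follows from their closed-form magnitude responses, sketched here for normalized analog low-pass prototypes:

```python
import numpy as np

def butterworth_mag(w, n):
    # magnitude of an n-pole Butterworth low-pass, cutoff w = 1
    return 1.0 / np.sqrt(1.0 + np.asarray(w, dtype=float) ** (2 * n))

def chebyshev_mag(w, n, ripple_db=0.1):
    # magnitude of an n-pole Chebyshev type-I low-pass with given ripple
    w = np.asarray(w, dtype=float)
    eps = np.sqrt(10.0 ** (ripple_db / 10.0) - 1.0)
    # Chebyshev polynomial T_n via its trig/hyperbolic forms
    tn = np.where(w <= 1.0,
                  np.cos(n * np.arccos(np.clip(w, -1.0, 1.0))),
                  np.cosh(n * np.arccosh(np.maximum(w, 1.0))))
    return 1.0 / np.sqrt(1.0 + (eps * tn) ** 2)

w = np.linspace(0.0, 1.0, 1001)
butter_db = 20 * np.log10(butterworth_mag(w, 4))   # monotonic passband
cheby_db = 20 * np.log10(chebyshev_mag(w, 4))      # 0.1-dB equiripple
```

The Chebyshev passband oscillates between 0 and −0.1 dB while the Butterworth rolls off monotonically; the ripple (and the associated phase behavior) is what costs the Chebyshev filter slightly more waveform distortion.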
Varley, Matthew C; Jaspers, Arne; Helsen, Werner F; Malone, James J
2017-09-01
Sprints and accelerations are popular performance indicators in applied sport. The methods used to define these efforts using athlete-tracking technology could affect the number of efforts reported. This study aimed to determine the influence of different techniques and settings for detecting high-intensity efforts using global positioning system (GPS) data. Velocity and acceleration data from a professional soccer match were recorded via 10-Hz GPS. Velocity data were filtered using either a median or an exponential filter. Acceleration data were derived from velocity data over a 0.2-s time interval (with and without an exponential filter applied) and a 0.3-s time interval. High-speed-running (≥4.17 m/s), sprint (≥7.00 m/s), and acceleration (≥2.78 m/s²) efforts were then identified using minimum-effort durations (0.1-0.9 s) to assess differences in the total number of efforts reported. Different velocity-filtering methods resulted in small to moderate differences (effect size [ES] 0.28-1.09) in the number of high-speed-running and sprint efforts detected when minimum duration was <0.5 s and small to very large differences (ES -5.69 to 0.26) in the number of accelerations when minimum duration was <0.7 s. There was an exponential decline in the number of all efforts as minimum duration increased, regardless of filtering method, with the largest declines in acceleration efforts. Filtering techniques and minimum durations substantially affect the number of high-speed-running, sprint, and acceleration efforts detected with GPS. Changes to how high-intensity efforts are defined affect reported data. Therefore, consistency in data processing is advised.
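The threshold-plus-minimum-duration detection logic this abstract describes can be sketched in a few lines of Python. This is a minimal illustration assuming a 10-Hz trace, a simple rolling-median filter, and a single-pole exponential smoother; the function names and smoothing constant are illustrative assumptions, not the study's actual processing pipeline.

```python
def median_filter(v, window=5):
    """Rolling median (odd window), edges padded by repetition."""
    half = window // 2
    padded = [v[0]] * half + list(v) + [v[-1]] * half
    return [sorted(padded[i:i + window])[half] for i in range(len(v))]

def exponential_filter(v, alpha=0.3):
    """Single-pole exponential smoother: y[n] = a*x[n] + (1-a)*y[n-1]."""
    out = [v[0]]
    for x in v[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def count_efforts(v, threshold, min_duration_s, hz=10):
    """Count runs of samples >= threshold lasting at least min_duration_s."""
    min_samples = max(1, round(min_duration_s * hz))
    count, run = 0, 0
    for x in v:
        if x >= threshold:
            run += 1
        else:
            if run >= min_samples:
                count += 1
            run = 0
    if run >= min_samples:
        count += 1
    return count
```

Raising the minimum duration or changing the filter changes the reported effort count, which is exactly the sensitivity the study quantifies.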
Sutphin, Patrick D; Reis, Stephen P; McKune, Angie; Ravanzo, Maria; Kalva, Sanjeeva P; Pillai, Anil K
2015-04-01
To design a sustainable process to improve optional inferior vena cava (IVC) filter retrieval rates based on the Define, Measure, Analyze, Improve, Control (DMAIC) methodology of the Six Sigma process improvement paradigm. DMAIC was employed to design and implement a quality improvement project to increase IVC filter retrieval rates at a tertiary academic hospital. Retrievable IVC filters were placed in 139 patients over a 2-year period. The baseline IVC filter retrieval rate (n = 51) was reviewed through a retrospective analysis, and two strategies were devised to improve the filter retrieval rate: (a) mailing letters to clinicians and to patients whose filters had been placed within 8 months of project implementation (n = 43) and (b) prospective automated scheduling of a clinic visit 4 weeks after filter placement for all new patients (n = 45). The effectiveness of these strategies was assessed by measuring the filter retrieval rates and the estimated increase in revenue to interventional radiology. IVC filter retrieval rates increased from a baseline of 8% to 40% with the mailing of letters and to 52% with the automated scheduling of a clinic visit 4 weeks after IVC filter placement. The estimated revenue per 100 IVC filters placed increased from $2,249 to $10,518 with the mailing of letters and to $17,022 with the automated scheduling of a clinic visit. Using the DMAIC methodology, a simple and sustainable quality improvement intervention was devised that markedly improved IVC filter retrieval rates in eligible patients. Copyright © 2015 SIR. Published by Elsevier Inc. All rights reserved.
Viegas, Carla; Faria, Tiago; de Oliveira, Ana Cebola; Caetano, Liliana Aranha; Carolino, Elisabete; Quintal-Gomes, Anita; Twarużek, Magdalena; Kosicki, Robert; Soszczyńska, Ewelina; Viegas, Susana
2017-11-01
The waste management industry is an important employer, and exposure of waste-handling workers to microorganisms is considered an occupational health problem. Besides fungal contamination, it is important to consider the co-occurrence of mycotoxins in this setting. Forklifts with closed cabinets and air conditioning are commonly used in the waste industry to transport waste and other products within the facilities, possibly increasing the risk of exposure under certain conditions. The aim of this study was to assess the fungal contamination and mycotoxin levels in filters from the air conditioning system of forklift cabinets, as an indicator of occupational exposure of the drivers working in a waste sorting facility. Cytotoxicity was also assessed to understand and characterize the toxicity of the complex mixtures present in the forklift filters. Aqueous extracts of filters from 11 vehicles were streaked onto 2% malt extract agar (MEA) with chloramphenicol (0.05 g/L) and onto dichloran glycerol (DG18) agar-based media for morphological identification of the mycobiota. Real-time quantitative PCR amplification of genes from Aspergillus sections Fumigati, Flavi, Circumdati, and Versicolores was also performed. Mycotoxins were analyzed using an LC-MS/MS system. Cytotoxicity of filter extracts was analyzed using an MTT cell culture test. Aspergillus species were found most frequently, namely Aspergillus sections Circumdati (MEA 48%; DG18 41%) and Nigri (MEA 32%; DG18 17.3%). By qPCR, only Aspergillus section Fumigati species were found, but positive results were obtained for all assessed filters. No mycotoxins were detected in aqueous filter extracts, but most extracts were highly cytotoxic (n = 6) or medium cytotoxic (n = 4). Although filter service life and cytotoxicity were not clearly correlated, the results suggest that observing air conditioner filter replacement frequency may be a critical aspect to avoid workers' exposure.
Further research is required to check if the environmental conditions as present in the filters could allow the production of mycotoxins and their dissemination in the cabinet during the normal use of the vehicles.
Texture classification of normal tissues in computed tomography using Gabor filters
NASA Astrophysics Data System (ADS)
Dettori, Lucia; Bashir, Alia; Hasemann, Julie
2007-03-01
The research presented in this article is aimed at developing an automated imaging system for classification of normal tissues in medical images obtained from Computed Tomography (CT) scans. Texture features based on a bank of Gabor filters are used to classify the following tissues of interest: liver, spleen, kidney, aorta, trabecular bone, lung, muscle, IP fat, and SQ fat. The approach consists of three steps: convolution of the regions of interest with a bank of 32 Gabor filters (4 frequencies and 8 orientations), extraction of two Gabor texture features per filter (mean and standard deviation), and creation of a Classification and Regression Tree-based classifier that automatically identifies the various tissues. The data set used consists of approximately 1000 DICOM images from normal chest and abdominal CT scans of five patients. The regions of interest were labeled by expert radiologists. Optimal trees were generated using two techniques: 10-fold cross-validation and splitting of the data set into a training and a testing set. In both cases, perfect classification rules were obtained provided enough images were available for training (~65%). All performance measures (sensitivity, specificity, precision, and accuracy) for all regions of interest were at 100%. This significantly improves previous results that used Wavelet, Ridgelet, and Curvelet texture features, yielding accuracy values in the 85%-98% range. The Gabor filters' ability to isolate features at different frequencies and orientations allows for a multi-resolution analysis of texture, essential when dealing with the at times very subtle differences in the texture of tissues in CT scans.
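The feature pipeline described here (a 4-frequency by 8-orientation Gabor bank, keeping the mean and standard deviation of each filter response) can be sketched as follows. The kernel parameterization (sigma, kernel size, frequency values) and the FFT-based convolution are illustrative assumptions, not the article's exact settings.

```python
import numpy as np

def gabor_kernel(frequency, theta, sigma=3.0, size=15):
    """Real part of a 2-D Gabor filter: Gaussian envelope x cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * frequency * xr)

def gabor_features(image, frequencies=(0.1, 0.2, 0.3, 0.4), n_orient=8):
    """Convolve with the 4x8 filter bank; keep mean and std per response."""
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            kern = gabor_kernel(f, theta=k * np.pi / n_orient)
            # FFT-based (circular) convolution of the region of interest
            resp = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                        np.fft.fft2(kern, image.shape)))
            feats.extend([resp.mean(), resp.std()])
    return np.array(feats)
```

The resulting 64-element vector (32 filters x 2 statistics) is what a CART-style classifier would then be trained on.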
Decoding and optimized implementation of SECDED codes over GF(q)
Ward, H. Lee; Ganti, Anand; Resnick, David R
2013-10-22
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
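A minimal sketch of the column-population procedure this abstract describes, for the GF(2), distance-4 (SEC-DED) case: the "filter operation" removes every candidate vector that is zero, duplicates an existing column, or equals the XOR of two existing columns, so that any three columns remain linearly independent. This is an illustrative reconstruction, not the patented implementation.

```python
from itertools import combinations

def populate_columns(m, n_cols, d=4):
    """Greedily populate an m-row GF(2) check matrix so that every d-1
    columns are linearly independent (sketch for d=4 only: a new column
    may not be zero, equal an existing column, or equal the XOR of any
    two existing columns -- the 'filter operation' of the abstract).
    Columns are represented as m-bit integers."""
    candidates = list(range(1, 2**m))        # all nonzero m-bit vectors
    cols = []
    while len(cols) < n_cols and candidates:
        # filter: drop vectors that would violate the independence rule
        forbidden = set(cols)
        forbidden.update(a ^ b for a, b in combinations(cols, 2))
        candidates = [v for v in candidates if v not in forbidden]
        if not candidates:
            break
        cols.append(candidates.pop(0))
    return cols
```

Iterating filter-then-select in this way terminates either with all columns populated or with the candidate set exhausted, mirroring the stopping condition in the claims.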
Design, decoding and optimized implementation of SECDED codes over GF(q)
Ward, H Lee; Ganti, Anand; Resnick, David R
2014-06-17
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
Decoding and optimized implementation of SECDED codes over GF(q)
Ward, H Lee; Ganti, Anand; Resnick, David R
2014-11-18
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solomon, Justin, E-mail: justin.solomon@duke.edu; Wilson, Joshua; Samei, Ehsan
2015-08-15
Purpose: The purpose of this work was to assess the inherent image quality characteristics of a new multidetector computed tomography system in terms of noise, resolution, and detectability index as a function of image acquisition and reconstruction for a range of clinically relevant settings. Methods: A multisized image quality phantom (37, 30, 23, 18.5, and 12 cm physical diameter) was imaged on a SOMATOM Force scanner (Siemens Medical Solutions) under variable dose, kVp, and tube current modulation settings. Images were reconstructed with filtered back projection (FBP) and with advanced modeled iterative reconstruction (ADMIRE) with iterative strengths of 3, 4, and 5. Image quality was assessed in terms of the noise power spectrum (NPS), task transfer function (TTF), and detectability index for a range of detection tasks (contrasts of approximately 45, 90, 300, −900, and 1000 HU, and 2–20 mm diameter) based on a non-prewhitening matched filter model observer with eye filter. Results: Image noise magnitude decreased with decreasing phantom size, increasing dose, and increasing ADMIRE strength, offering up to 64% noise reduction relative to FBP. Noise texture in terms of the NPS was similar between FBP and ADMIRE (<5% shift in peak frequency). The resolution, based on the TTF, improved with increased ADMIRE strength by an average of 15% in the TTF 50% frequency for ADMIRE-5. The detectability index increased with increasing dose and ADMIRE strength by an average of 55%, 90%, and 163% for ADMIRE 3, 4, and 5, respectively. Assessing the impact of mA modulation for a fixed average dose over the length of the phantom, detectability was up to 49% lower in smaller phantom sections and up to 26% higher in larger phantom sections for the modulated scan compared to a fixed tube current scan. Overall, the detectability exhibited less variability with phantom size for modulated scans compared to fixed tube current scans.
Conclusions: Image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, for large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose. The use of tube current modulation resulted in more consistent image quality with changing phantom size.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, T; Graham, C L; Sundsmo, T
This procedure provides instructions for the calibration and use of the Canberra iSolo Low Background Alpha/Beta Counting System (iSolo) that is used for counting air filters and swipe samples. This detector is capable of providing radioisotope identification (e.g., it can discriminate between radon daughters and plutonium). This procedure includes step-by-step instructions for: (1) Performing periodic or daily 'Background' and 'Efficiency QC' checks; (2) Setting up the iSolo for counting swipes and air filters; (3) Counting swipes and air filters for alpha and beta activity; and (4) Annual calibration.
A System for Compressive Spectral and Polarization Imaging at Short Wave Infrared (SWIR) Wavelengths
2017-10-18
2016). H. Rueda, H. Arguello and G. R. Arce, “DMD-based implementation of patterned optical filter arrays for compressive spectral imaging”, Journal...3) a set of optical filters which allow to discriminate spectrally the coded and sheared...system that includes objective lens, spatial light modulator, dispersive element, optical filters
Loudspeaker equalization for auditory research.
MacDonald, Justin A; Tran, Phuong K
2007-02-01
The equalization of loudspeaker frequency response is necessary to conduct many types of well-controlled auditory experiments. This article introduces a program that includes functions to measure a loudspeaker's frequency response, design equalization filters, and apply the filters to a set of stimuli to be used in an auditory experiment. The filters can compensate for both magnitude and phase distortions introduced by the loudspeaker. A MATLAB script is included in the Appendix to illustrate the details of the equalization algorithm used in the program.
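The measure/design/apply workflow the article describes can be illustrated with a frequency-domain inverse filter that compensates both magnitude and phase. This numpy sketch, with an assumed Tikhonov-style regularization constant to avoid dividing by near-zero spectral bins, only illustrates the idea; it is not the article's MATLAB program.

```python
import numpy as np

def design_eq_filter(measured_ir, n_fft=1024, reg=1e-3):
    """Inverse (magnitude + phase) filter from a measured loudspeaker
    impulse response, regularized so deep spectral nulls are not boosted
    without bound: Hinv = conj(H) / (|H|^2 + reg)."""
    H = np.fft.rfft(measured_ir, n_fft)
    Hinv = np.conj(H) / (np.abs(H)**2 + reg)
    return np.fft.irfft(Hinv, n_fft)

def equalize(stimulus, eq_filter):
    """Apply the equalization filter to a stimulus by convolution."""
    return np.convolve(stimulus, eq_filter)
```

Passing the equalization filter back through the loudspeaker model should yield a nearly flat combined response, which is the property the test below checks.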
In situ microbial filter used for bioremediation
Carman, M. Leslie; Taylor, Robert T.
2000-01-01
An improved method for in situ microbial filter bioremediation that increases the operational longevity of an in situ microbial filter emplaced into an aquifer. A method for generating a microbial filter of sufficient catalytic density and thickness that has an increased replenishment interval, improved bacteria attachment and detachment characteristics, and endogenous stability under in situ conditions. A system for in situ field water remediation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chhipa, Mayur Kumar, E-mail: mayurchhipa1@gmail.com
2014-10-15
In this paper, we propose a new design of a tunable two-dimensional (2D) photonic crystal (PhC) channel drop filter (CDF) using ring resonators. The increasing interest in photonic integrated circuits (PICs) and the increasing use of all-optical fiber networks as backbones for global communication systems have been based in large part on the extremely wide optical transmission bandwidth provided by dielectric materials. Based on this analysis we present novel photonic crystal channel drop filters, and simulations demonstrate that these filters exhibit ideal transfer characteristics. Channel drop filters that access one channel of a wavelength division multiplexed (WDM) signal while leaving other channels undisturbed are essential components of PICs and optical communication systems. We investigate the parameters that affect the resonant wavelength of this CDF, such as the dielectric constant of the inner, coupling, adjacent, and whole rods of the structure. The dimensions of the structure are 20a×19a and its area is about 125.6 μm²; this structure can therefore be used in future photonic integrated circuits. With this design, the dropping efficiency at the resonance of a single ring is 100%. The power transmission spectrum is obtained with the finite difference time domain (FDTD) method, the most widely used method for PhC analysis. The dielectric rods have a dielectric constant of 10.65 (refractive index 3.26) and radius r = 0.213a, and are located in air, where a is the lattice constant. Five scatter rods with radius 0.215a are used to obtain higher coupling efficiency. The proposed structure is simulated with OptiFDTD v8.0 software for rod dielectric constants of ε_r − 0.4, ε_r, and ε_r + 0.4 at a wavelength of 1570 nm.
[Method for concentration determination of mineral-oil fog in the air of workplace].
Xu, Min; Zhang, Yu-Zeng; Liu, Shi-Feng
2008-05-01
To establish a method for determining the concentration of mineral-oil fog in workplace air. Four filter media were tested: synthetic fabric filter film, beta glass fiber filter film, chronic filter paper, and microporous film. Two kinds of dust samplers were used to collect samples, one sampling at a fast flow rate for a short time and the other sampling at a slow flow rate with long duration. The filter membranes were then weighed with an electronic analytical balance, and the adsorbent ability of the four media was compared according to sampling efficiency and weight gain. At flow rates of 10-20 L/min and sampling times of 10-15 min, the average sampling efficiency of synthetic fabric filter film was 95.61% with weight gains of 0.87-2.60 mg, and the average sampling efficiency of beta glass fiber filter film was 97.57% with weight gains of 0.75-2.47 mg. At flow rates of 5-10 L/min and sampling times of 10-20 min, the average sampling efficiencies of chronic filter paper and microporous film were 48.94% and 63.15%, respectively, with weight gains of 0.75-2.15 mg and 0.23-0.85 mg, respectively. At a flow rate of 3.5 L/min and sampling times of 100-166 min, the average sampling efficiencies of beta glass fiber filter film and synthetic fabric filter film were 94.44% and 93.45%, respectively, with average weight gains of 1.28 mg and 0.78 mg; under the same conditions, the average sampling efficiencies of chronic filter paper and microporous film were 37.65% and 88.21%, with average weight gains of 4.30 mg and 1.23 mg, respectively.
Sampling with synthetic fabric filter film and beta glass fiber filter film is reliable, accurate, simple, and feasible for determining the concentration of mineral-oil fog in workplaces.
How to find and type red/brown dwarf stars in near-infrared imaging space observatories
NASA Astrophysics Data System (ADS)
Holwerda, Benne Willem; Ryan, Russell; Bridge, Joanna; Pirzkal, Nor; Kenworthy, Matthew; Andersen, Morten; Wilkins, Stephen; Trenti, Michele; Meshkat, Tiffany; Bernard, Stephanie; Smit, Renske
2018-01-01
Here we evaluate the near-infrared colors of brown dwarfs as observed with four major infrared imaging space observatories: the Hubble Space Telescope (HST), the James Webb Space Telescope (JWST), the EUCLID mission, and the WFIRST telescope. We use the SPLAT SpeX spectroscopic library to map out the colors of the M, L, and T-type brown dwarfs. We identify which color-color combination is optimal for identifying broad type and which single color is optimal to then identify the subtype (e.g., T0-9). We evaluate each observatory separately as well as the narrow-field (HST and JWST) and wide-field (EUCLID and WFIRST) combinations. HST filters used thus far for high-redshift searches (e.g., CANDELS and BoRG) are close to optimal within the available filter combinations. A clear improvement over HST is one of two broad/medium filter combinations on JWST: pairing F140M with either F150W or F162M discriminates well between brown dwarf subtypes. The improvement of the JWST filter set over the HST one is so marked that no combination of HST and JWST filters improves the classification further. The EUCLID filter set alone performs poorly in terms of typing brown dwarfs, and WFIRST performs only marginally better despite a wider selection of filters. A combined EUCLID and WFIRST observation, using WFIRST's W146 and F062 filters and EUCLID's Y-band, allows much better discrimination between broad brown dwarf categories. In this respect, WFIRST acts as a targeted follow-up observatory for the all-sky EUCLID survey. However, subsequent subtyping with the combination of EUCLID and WFIRST observations remains uncertain due to the lack of medium- or narrow-band filters in this wavelength range. We argue that a medium band added to the WFIRST filter selection would greatly improve its ability to preselect against brown dwarfs in high-latitude surveys.
Photobleaching of red fluorescence in oral biofilms.
Hope, C K; de Josselin de Jong, E; Field, M R T; Valappil, S P; Higham, S M
2011-04-01
Many species of oral bacteria can be induced to fluoresce due to the presence of endogenous porphyrins, a phenomenon that can be utilized to visualize and quantify dental plaque in the laboratory or clinical setting. However, an inevitable consequence of fluorescence is photobleaching, and the effects of this on longitudinal, quantitative analysis of dental plaque have yet to be ascertained. Filter membrane biofilms were grown from salivary inocula or single species (Prevotella nigrescens and Prevotella intermedia). The mature biofilms were then examined in a custom-made lighting rig comprising 405 nm light-emitting diodes capable of delivering 220 W/m(2) at the sample, an appropriate filter and a digital camera; a set-up analogous to quantitative light-induced fluorescence digital. Longitudinal sets of images were captured and processed to assess the degradation in red fluorescence over time. Photobleaching was observed in all instances. The highest rates of photobleaching were observed immediately after initiation of illumination, specifically during the first minute. Relative rates of photobleaching during the first minute of exposure were 19.17, 13.72 and 3.43 arbitrary units/min for P. nigrescens biofilms, microcosm biofilm and P. intermedia biofilms, respectively. Photobleaching could be problematic when making quantitative measurements of porphyrin fluorescence in situ. Reducing both light levels and exposure time, in combination with increased camera sensitivity, should be the default approach when undertaking analyses by quantitative light-induced fluorescence digital. © 2010 John Wiley & Sons A/S.
3D-FFT for Signature Detection in LWIR Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Medvick, Patricia A.; Lind, Michael A.; Mackey, Patrick S.
Improvements in detection analysis and exploitation are possible by applying whitened matched filtering within the Fourier domain to hyperspectral data cubes. We describe an implementation of a Three Dimensional Fast Fourier Transform Whitened Matched Filter (3DFFTMF) approach and, using several example sets of Long Wave Infrared (LWIR) data cubes, compare the results with those from standard Whitened Matched Filter (WMF) techniques. Since the variability in shape of gaseous plumes precludes the use of spatial conformation in the matched filtering, the 3DFFTMF results were similar to those of two other WMF methods. Including a spatial low-pass filter within the Fourier space can improve signal-to-noise ratios and therefore improve the detection limit by facilitating the mitigation of high-frequency clutter. The improvement occurs only if the low-pass filter diameter is smaller than the plume diameter.
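For reference, the standard (spatial-domain, per-pixel) whitened matched filter that the 3DFFTMF is benchmarked against can be sketched as below; the data shapes and the synthetic example are illustrative, not the report's LWIR cubes.

```python
import numpy as np

def whitened_matched_filter(cube, target, mean, cov):
    """Classical per-pixel whitened matched filter score for a
    hyperspectral cube. cube: (rows, cols, bands); target and mean:
    (bands,); cov: (bands, bands) background covariance."""
    cov_inv = np.linalg.inv(cov)
    d = target - mean
    w = cov_inv @ d                  # whitened target direction
    norm = d @ w                     # whitened target energy
    centered = cube - mean           # broadcasts over all pixels
    return (centered @ w) / norm     # detection score per pixel
```

A pixel containing the target spectrum scores near 1 while pure background scores near 0, so thresholding the score map localizes the signature.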
Hardware-efficient implementation of digital FIR filter using fast first-order moment algorithm
NASA Astrophysics Data System (ADS)
Cao, Li; Liu, Jianguo; Xiong, Jun; Zhang, Jing
2018-03-01
As the digital finite impulse response (FIR) filter can be transformed into the shift-add form of multiple small-sized first-order moments, based on the existing fast first-order moment algorithm, this paper presents a novel multiplier-less structure to calculate any number of sequential filtering results in parallel. The theoretical analysis of its hardware and time complexities reveals that by appropriately setting the degree of parallelism and the decomposition factor of a fixed word width, the proposed structure may achieve better area-time efficiency than the existing two-dimensional (2-D) memoryless-based filter. To evaluate the performance concretely, the proposed designs for different tap counts, along with the existing 2-D memoryless-based filters, are synthesized by Synopsys Design Compiler with a 0.18-μm SMIC library. The comparisons show that the proposed design has less area-time complexity and power consumption when the number of filter taps is larger than 48.
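A software reference model of the first-order moment trick helps make the shift-add form concrete: integer taps are bucketed by value, and the weighted sum is evaluated with additions only via suffix accumulation, which is what enables a multiplier-less hardware structure. This sketch makes no claim about the paper's parallel decomposition or word-width choices.

```python
def fir_first_order_moment(x, h):
    """Multiplier-less FIR evaluation by the first-order moment trick.
    With nonnegative integer taps h[k] <= M, each output is
      y[n] = sum_k h[k] x[n-k] = sum_{v=1..M} v * S_v,
    where S_v sums the delayed inputs whose tap value equals v, and
      sum_v v*S_v = sum_{v=1..M} (S_M + S_{M-1} + ... + S_v)
    is evaluated with additions only (suffix sums)."""
    M = max(h)
    out = []
    for n in range(len(x)):
        # bucket the delayed inputs by their (integer) tap value
        buckets = [0.0] * (M + 1)
        for k, hk in enumerate(h):
            if n - k >= 0:
                buckets[hk] += x[n - k]
        # accumulate suffix sums: y = sum_{v=1..M} sum_{u=v..M} S_u
        y, suffix = 0.0, 0.0
        for v in range(M, 0, -1):
            suffix += buckets[v]
            y += suffix
        out.append(y)
    return out
```

The inner loops use only additions; in hardware the bucketing corresponds to routing delayed samples to accumulators and the suffix accumulation to a chain of adders.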
Park, Sang-Hoon; Lee, David; Lee, Sang-Goog
2018-02-01
For the last few years, many feature extraction methods have been proposed based on biological signals. Among these, brain signals have the advantage that they can be obtained even from people with peripheral nervous system damage. Motor imagery electroencephalograms (EEG) are inexpensive to measure, offer a high temporal resolution, and are intuitive. Therefore, they have received a significant amount of attention in various fields, including signal processing, cognitive science, and medicine. The common spatial pattern (CSP) algorithm is a useful method for feature extraction from motor imagery EEG. However, performance degradation occurs in a small-sample setting (SSS), because the CSP depends on sample-based covariance. Since the active frequency range differs for each subject, it is also inconvenient to set the frequency range anew every time. In this paper, we propose a feature extraction method based on a filter bank to solve these problems. The proposed method consists of five steps. First, the motor imagery EEG is divided using a filter bank. Second, the regularized CSP (R-CSP) is applied to the divided EEG. Third, we select the features according to mutual information based on the individual feature algorithm. Fourth, parameter sets are selected for the ensemble. Finally, we classify using an ensemble based on these features. The brain-computer interface competition III data set IVa is used to evaluate the performance of the proposed method. The proposed method improves the mean classification accuracy by 12.34%, 11.57%, 9%, 4.95%, and 4.47% compared with CSP, SR-CSP, R-CSP, filter bank CSP (FBCSP), and SR-FBCSP. Compared with the filter bank R-CSP ( , ), which is a parameter selection version of the proposed method, the classification accuracy is improved by 3.49%. In particular, the proposed method shows a large improvement in performance in the SSS.
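The CSP step at the core of this pipeline can be sketched with a whitening-based eigendecomposition. The construction below and the synthetic two-channel data are illustrative only; they do not reproduce the paper's R-CSP regularization, filter bank, mutual-information selection, or ensemble steps.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common spatial patterns from two classes of EEG trials (each trial:
    channels x samples). Returns 2*n_pairs spatial filters that maximize
    variance for one class while minimizing it for the other."""
    def mean_cov(trials):
        covs = []
        for t in trials:
            c = t @ t.T
            covs.append(c / np.trace(c))   # trace-normalized covariance
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # whiten the composite covariance, then diagonalize class A there
    evals, evecs = np.linalg.eigh(ca + cb)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T
    w_evals, w_evecs = np.linalg.eigh(P @ ca @ P)
    order = np.argsort(w_evals)
    keep = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return (P @ w_evecs[:, keep]).T        # rows are spatial filters

def log_variance_features(trial, filters):
    """Classical CSP features: log of normalized variance per filter."""
    z = filters @ trial
    v = z.var(axis=1)
    return np.log(v / v.sum())
```

A filter bank variant simply applies this to each band-passed copy of the signal and concatenates the resulting log-variance features.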
NASA Astrophysics Data System (ADS)
He, Fei; Han, Ye; Wang, Han; Ji, Jinchao; Liu, Yuanning; Ma, Zhiqiang
2017-03-01
Gabor filters are widely utilized to detect iris texture information in several state-of-the-art iris recognition systems. However, the proper Gabor kernels and the generative pattern of iris Gabor features need to be predetermined in application. Traditional empirical Gabor filters and shallow iris encoding schemes are incapable of dealing with the complex variations in iris imaging, including illumination, aging, deformation, and device variations. We therefore present an adaptive Gabor filter selection strategy and a deep learning architecture. We first employ the particle swarm optimization approach and its binary version to define a set of data-driven Gabor kernels fitting the most informative filtering bands, and then capture complex patterns from the optimal Gabor filtered coefficients with a trained deep belief network. A succession of comparative experiments validates that our optimal Gabor filters produce more distinctive Gabor coefficients, and that our deep iris representations are more robust and stable than traditional iris Gabor codes. Furthermore, the depth and scales of the deep learning architecture are also discussed.
Starbase Data Tables: An ASCII Relational Database for Unix
NASA Astrophysics Data System (ADS)
Roll, John
2011-11-01
Database management is an increasingly important part of astronomical data analysis. Astronomers need easy and convenient ways of storing, editing, filtering, and retrieving data about data. Commercial databases do not provide good solutions for many of the everyday and informal types of database access astronomers need. The Starbase database system with simple data file formatting rules and command line data operators has been created to answer this need. The system includes a complete set of relational and set operators, fast search/index and sorting operators, and many formatting and I/O operators. Special features are included to enhance the usefulness of the database when manipulating astronomical data. The software runs under UNIX, MSDOS and IRAF.
A general transfer-function approach to noise filtering in open-loop quantum control
NASA Astrophysics Data System (ADS)
Viola, Lorenza
2015-03-01
Hamiltonian engineering via unitary open-loop quantum control provides a versatile and experimentally validated framework for manipulating a broad class of non-Markovian open quantum systems of interest, with applications ranging from dynamical decoupling and dynamically corrected quantum gates, to noise spectroscopy and quantum simulation. In this context, transfer-function techniques directly motivated by control engineering have proved invaluable for obtaining a transparent picture of the controlled dynamics in the frequency domain and for quantitatively analyzing performance. In this talk, I will show how to identify a computationally tractable set of ``fundamental filter functions,'' out of which arbitrary filter functions may be assembled up to arbitrary high order in principle. Besides avoiding the infinite recursive hierarchy of filter functions that arises in general control scenarios, this fundamental set suffices to characterize the error suppression capabilities of the control protocol in both the time and frequency domain. I will show, in particular, how the resulting notion of ``filtering order'' reveals conceptually distinct, albeit complementary, features of the controlled dynamics as compared to the ``cancellation order,'' traditionally defined in the Magnus sense. Implications for current quantum control experiments will be discussed. Work supported by the U.S. Army Research Office under Contract No. W911NF-14-1-0682.
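For concreteness, in one common convention (not necessarily the one used in this talk), the lowest-order dephasing filter function for a control switching function y(t), and the coherence decay it induces under classical noise with spectrum S(ω), read:

```latex
% Coherence decay under classical dephasing noise with spectrum S(\omega),
% for a control switching function y(t) (one common convention):
W(t) = e^{-\chi(t)}, \qquad
\chi(t) = \int_0^{\infty} \frac{d\omega}{\pi}\,
          \frac{S(\omega)}{\omega^{2}}\, F(\omega t), \qquad
F(\omega t) = \frac{\omega^{2}}{2}
  \left| \int_0^{t} y(t')\, e^{i\omega t'}\, dt' \right|^{2}.
```

For free evolution (y ≡ 1) this reduces to F(ωt) = 2 sin²(ωt/2), the familiar free-induction-decay filter; the talk's "fundamental filter functions" generalize such objects to arbitrary order.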
Predicting online ratings based on the opinion spreading process
NASA Astrophysics Data System (ADS)
He, Xing-Sheng; Zhou, Ming-Yang; Zhuo, Zhao; Fu, Zhong-Qian; Liu, Jian-Guo
2015-10-01
Predicting users' online ratings is a challenging issue that has attracted much attention. In this paper, we present a rating prediction method by combining the user opinion spreading process with the collaborative filtering algorithm, where user similarity is defined by measuring the amount of opinion a user transfers to another based on the primitive user-item rating matrix. The proposed method produces a more precise rating prediction for each unrated user-item pair. In addition, we introduce a tunable parameter λ to regulate the preferential diffusion relevant to the degree of both the opinion sender and receiver. The numerical results for the Movielens and Netflix data sets show that this algorithm is more accurate than the standard user-based collaborative filtering algorithm using Cosine and Pearson correlation, without increasing computational complexity. By tuning λ, our method can further boost the prediction accuracy as measured by Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). In the optimal cases, the corresponding algorithmic accuracy (MAE and RMSE) is improved over the item average method by 11.26% and 8.84% on Movielens and by 13.49% and 10.52% on Netflix, respectively.
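For reference, the user-based collaborative filtering baseline with cosine similarity that the proposed method is compared against can be sketched as follows; the toy rating matrix is illustrative, not Movielens/Netflix data:

```python
import numpy as np

# Hypothetical toy user-item rating matrix (0 = unrated).
R = np.array([
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [1.0, 0.0, 5.0, 4.0],
])

def cosine_similarity(u, v):
    """Cosine similarity computed over co-rated items only."""
    mask = (u > 0) & (v > 0)
    if not mask.any():
        return 0.0
    return float(u[mask] @ v[mask] /
                 (np.linalg.norm(u[mask]) * np.linalg.norm(v[mask])))

def predict(R, user, item):
    """User-based CF: similarity-weighted mean of other users' ratings."""
    num, den = 0.0, 0.0
    for other in range(R.shape[0]):
        if other == user or R[other, item] == 0:
            continue
        s = cosine_similarity(R[user], R[other])
        num += s * R[other, item]
        den += abs(s)
    return num / den if den else 0.0

print(round(predict(R, 1, 1), 2))  # → 2.4
```

The opinion-spreading method replaces `cosine_similarity` with a diffusion-based similarity; the prediction step stays the same.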
Using focused plenoptic cameras for rich image capture.
Georgiev, T; Lumsdaine, A; Chunev, G
2011-01-01
This approach uses a focused plenoptic camera to capture the plenoptic function's rich "non 3D" structure. It employs two techniques. The first simultaneously captures multiple exposures (or other aspects) based on a microlens array having an interleaved set of different filters. The second places multiple filters at the main lens aperture.
Trickling Filters. Instructor's Guide. Biological Treatment Process Control.
ERIC Educational Resources Information Center
Richwine, Reynold D.
This instructor's guide contains materials needed for teaching a two-lesson unit on trickling filters. These materials include: (1) an overview of the two lessons; (2) lesson plans; (3) lecture outline (keyed to a set of slides accompanying the unit); (4) overhead transparency masters; (5) student worksheet (with answers); and (6) two copies of a…
Ruppelt, Jan P; Tondera, Katharina; Schreiber, Christiane; Kistemann, Thomas; Pinnekamp, Johannes
2018-05-01
Combined sewer overflows (CSOs) introduce numerous pathogens from fecal contamination, such as bacteria and viruses, into surface waters, thus endangering human health. In Germany, retention soil filters (RSFs) treat CSOs at sensitive discharge points and can contribute to reducing these hygienically relevant microorganisms. In this study, we evaluated the extent to which dry period, series connection and filter layer thickness influence the reduction efficiency of RSFs for Escherichia coli (E. coli), intestinal enterococci (I. E.) and somatic coliphages. To accomplish this, we had four pilot-scale RSFs built on a test field at the wastewater treatment plant Aachen-Soers. While two filters were replicates, the other two filters were installed in a series connection. Moreover, one filter had a thinner filtration layer than the other three. Between April 2015 and December 2016, the RSFs were loaded in 37 trials with pre-conditioned CSO after dry periods ranging from 4 to 40 days. During 17 trials, samples for microbial analysis were taken and analyzed. The series connection of two filters showed that removal increases when two systems with a filter layer of the same height are operated in series. Since the microorganisms are exposed twice to the environmental conditions on the filter surface and in the upper filter layers, there is a greater chance for abiotic adsorption to increase. The same effect could be shown when filters with different depths were compared: the removal efficiency increases as filter thickness increases. This study provides new evidence that, regardless of seasonal effects and dry period, RSFs can significantly improve the hygienic situation. Copyright © 2018 Elsevier GmbH. All rights reserved.
Stability of recursive out-of-sequence measurement filters: an open problem
NASA Astrophysics Data System (ADS)
Chen, Lingji; Moshtagh, Nima; Mehra, Raman K.
2011-06-01
In many applications where communication delays are present, measurements with earlier time stamps can arrive out-of-sequence, i.e., after state estimates have been obtained for the current time instant. To incorporate such an Out-Of-Sequence Measurement (OOSM), many algorithms have been proposed in the literature to obtain or approximate the optimal estimate that would have been obtained if the OOSM had arrived in-sequence. When OOSM occurs repeatedly, approximate estimations as a result of incorporating one OOSM have to serve as the basis for incorporating yet another OOSM. The question of whether the "approximation of approximation" is well behaved, i.e., whether approximation errors accumulate in a recursive setting, has not been adequately addressed in the literature. This paper draws attention to the stability question of recursive OOSM processing filters, formulates the problem in a specific setting, and presents some simulation results that suggest that such filters are indeed well-behaved. Our hope is that more research will be conducted in the future to rigorously establish stability properties of these filters.
Parametric adaptive filtering and data validation in the bar GW detector AURIGA
NASA Astrophysics Data System (ADS)
Ortolan, A.; Baggio, L.; Cerdonio, M.; Prodi, G. A.; Vedovato, G.; Vitale, S.
2002-04-01
We report on our experience gained in the signal processing of the resonant GW detector AURIGA. Signal amplitude and arrival time are estimated by means of a matched-adaptive Wiener filter. The detector noise, entering in the filter set-up, is modelled as a parametric ARMA process; to account for slow non-stationarity of the noise, the ARMA parameters are estimated on an hourly basis. A requirement of the set-up of an unbiased Wiener filter is the separation of time spans with 'almost Gaussian' noise from non-Gaussian and/or strongly non-stationary time spans. The separation algorithm consists basically of a variance estimate with the Chauvenet convergence method and a threshold on the kurtosis index. The subsequent validation of data is strictly connected with the separation procedure: in fact, by injecting a large number of artificial GW signals into the 'almost Gaussian' part of the AURIGA data stream, we have demonstrated that the effective probability distributions of the signal-to-noise ratio, χ2 and the time of arrival are those that are expected.
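The Gaussianity screen described above can be illustrated with a toy version of the kurtosis test: flag data spans whose excess kurtosis exceeds a threshold. The threshold and synthetic data are illustrative, not AURIGA's, and the Chauvenet-based variance estimation is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

def excess_kurtosis(x):
    """Excess kurtosis: 0 for Gaussian data, large for glitchy data."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

gaussian_span = rng.normal(0.0, 1.0, 10000)   # 'almost Gaussian' noise
glitchy_span = gaussian_span.copy()
glitchy_span[::500] += 15.0                    # sparse large outliers

threshold = 0.5                                # illustrative cut
print(abs(excess_kurtosis(gaussian_span)) < threshold)   # passes screen
print(excess_kurtosis(glitchy_span) > threshold)         # rejected
```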
An exponential filter model predicts lightness illusions
Zeman, Astrid; Brooks, Kevin R.; Ghebreab, Sennay
2015-01-01
Lightness, or perceived reflectance of a surface, is influenced by surrounding context. This is demonstrated by the Simultaneous Contrast Illusion (SCI), where a gray patch is perceived lighter against a black background and vice versa. Conversely, assimilation is where the lightness of the target patch moves toward that of the bounding areas and can be demonstrated in White's effect. Blakeslee and McCourt (1999) introduced an oriented difference-of-Gaussian (ODOG) model that is able to account for both contrast and assimilation in a number of lightness illusions and that has been subsequently improved using localized normalization techniques. We introduce a model inspired by image statistics that is based on a family of exponential filters, with kernels spanning across multiple sizes and shapes. We include an optional second stage of normalization based on contrast gain control. Our model was tested on a well-known set of lightness illusions that have previously been used to evaluate ODOG and its variants, and model lightness values were compared with typical human data. We investigate whether predictive success depends on filters of a particular size or shape and whether pooling information across filters can improve performance. The best single filter correctly predicted the direction of lightness effects for 21 out of 27 illusions. Combining two filters together increased the best performance to 23, with asymptotic performance at 24 for an arbitrarily large combination of filter outputs. While normalization improved prediction magnitudes, it only slightly improved overall scores in direction predictions. The prediction performance of 24 out of 27 illusions equals that of the best performing ODOG variant, with greater parsimony. Our model shows that V1-style orientation-selectivity is not necessary to account for lightness illusions and that a low-level model based on image statistics is able to account for a wide range of both contrast and assimilation effects. 
PMID:26157381
A Maximum Entropy Method for Particle Filtering
NASA Astrophysics Data System (ADS)
Eyink, Gregory L.; Kim, Sangil
2006-06-01
Standard ensemble or particle filtering schemes do not properly represent states of low prior probability when the number of available samples is too small, as is often the case in practical applications. We introduce here a set of parametric resampling methods to solve this problem. Motivated by a general H-theorem for relative entropy, we construct parametric models for the filter distributions as maximum-entropy/minimum-information models consistent with moments of the particle ensemble. When the prior distributions are modeled as mixtures of Gaussians, our method naturally generalizes the ensemble Kalman filter to systems with highly non-Gaussian statistics. We apply the new particle filters presented here to two simple test cases: a one-dimensional diffusion process in a double-well potential and the three-dimensional chaotic dynamical system of Lorenz.
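For context, a minimal bootstrap (SIR) particle filter shows the standard resampling scheme that the maximum-entropy method aims to improve on. The linear random-walk model and all parameters are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 2000                      # number of particles
q, r = 0.1, 0.5               # process / observation noise std
x_true = 0.0
particles = rng.normal(0.0, 1.0, N)

estimates = []
for t in range(50):
    x_true += rng.normal(0.0, q)              # true state evolves
    y = x_true + rng.normal(0.0, r)           # noisy observation
    particles += rng.normal(0.0, q, N)        # propagate ensemble
    w = np.exp(-0.5 * ((y - particles) / r) ** 2)  # Gaussian likelihood
    w /= w.sum()
    estimates.append(float(w @ particles))    # posterior-mean estimate
    idx = rng.choice(N, N, p=w)               # multinomial resampling
    particles = particles[idx]

print(abs(estimates[-1] - x_true) < 3 * r)
```

With few particles this scheme degenerates in the tails, which is precisely the failure mode the maximum-entropy resampling addresses.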
A CLT on the SNR of Diagonally Loaded MVDR Filters
NASA Astrophysics Data System (ADS)
Rubio, Francisco; Mestre, Xavier; Hachem, Walid
2012-08-01
This paper studies the fluctuations of the signal-to-noise ratio (SNR) of minimum variance distortionless response (MVDR) filters implementing diagonal loading in the estimation of the covariance matrix. Previous results in the signal processing literature are generalized and extended by considering both spatially and temporally correlated samples. Specifically, a central limit theorem (CLT) is established for the fluctuations of the SNR of the diagonally loaded MVDR filter, under both supervised and unsupervised training settings in adaptive filtering applications. Our second-order analysis is based on the Nash-Poincaré inequality and the integration by parts formula for Gaussian functionals, as well as classical tools from statistical asymptotic theory. Numerical evaluations validating the accuracy of the CLT confirm the asymptotic Gaussianity of the fluctuations of the SNR of the MVDR filter.
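A sketch of the diagonally loaded MVDR filter under study, w = (R̂ + δI)⁻¹s / (sᴴ(R̂ + δI)⁻¹s), assuming a toy array scenario (the array size, loading factor δ, and noise-only training data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

M, T = 8, 32                      # sensors, training snapshots
s = np.ones(M) / np.sqrt(M)       # steering vector of the desired signal
# Complex Gaussian noise-only training snapshots
X = (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T))) / np.sqrt(2)
R_hat = X @ X.conj().T / T        # sample covariance matrix
d = 0.1                           # diagonal loading factor
Ri = np.linalg.inv(R_hat + d * np.eye(M))
w = Ri @ s / (s.conj() @ Ri @ s)  # loaded MVDR weights

print(np.isclose(w.conj() @ s, 1.0))  # distortionless response w^H s = 1
```

The paper's CLT concerns the fluctuations of the resulting SNR across random training sets such as `X`.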
Validation of sterilizing grade filtration.
Jornitz, M W; Meltzer, T H
2003-01-01
Validation considerations for sterilizing grade filters, namely 0.2 micron filters, changed when the FDA voiced concerns about the validity of bacterial challenge tests performed in the past. Such validation exercises are nowadays considered to be filter qualification. Filter validation requires more thorough analysis, especially bacterial challenge testing with the actual drug product under process conditions. To do so, viability testing is a necessity to determine the bacterial challenge test methodology. In addition to these two compulsory tests, other evaluations such as extractables, adsorption and chemical compatibility tests should be considered. PDA Technical Report #26, Sterilizing Filtration of Liquids, describes all parameters and aspects required for the comprehensive validation of filters. The report is a most helpful tool for validation of liquid filters used in the biopharmaceutical industry. It sets the cornerstones of validation requirements and other filtration considerations.
Lee, A.; McVey, J.; Faustino, P.; Lute, S.; Sweeney, N.; Pawar, V.; Khan, M.; Brorson, K.; Hussong, D.
2010-01-01
Filters rated as having a 0.2-μm pore size (0.2-μm-rated filters) are used in laboratory and manufacturing settings for diverse applications of bacterial and particle removal from process fluids, analytical test articles, and gasses. Using Hydrogenophaga pseudoflava, a diminutive bacterium with an unusual geometry (i.e., it is very thin), we evaluated passage through 0.2-μm-rated filters and the impact of filtration process parameters and bacterial challenge density. We show that consistent H. pseudoflava passage occurs through 0.2-μm-rated filters. This is in contrast to an absence of significant passage of nutritionally challenged bacteria that are of similar size (i.e., hydrodynamic diameter) but dissimilar geometry. PMID:19966023
Filtering and polychromatic vision in mantis shrimps: themes in visible and ultraviolet vision.
Cronin, Thomas W; Bok, Michael J; Marshall, N Justin; Caldwell, Roy L
2014-01-01
Stomatopod crustaceans have the most complex and diverse assortment of retinal photoreceptors of any animals, with 16 functional classes. The receptor classes are subdivided into sets responsible for ultraviolet vision, spatial vision, colour vision and polarization vision. Many of these receptor classes are spectrally tuned by filtering pigments located in photoreceptors or overlying optical elements. At visible wavelengths, carotenoproteins or similar substances are packed into vesicles used either as serial, intrarhabdomal filters or lateral filters. A single retina may contain a diversity of these filtering pigments paired with specific photoreceptors, and the pigments used vary between and within species both taxonomically and ecologically. Ultraviolet-filtering pigments in the crystalline cones serve to tune ultraviolet vision in these animals as well, and some ultraviolet receptors themselves act as birefringent filters to enable circular polarization vision. Stomatopods have reached an evolutionary extreme in their use of filter mechanisms to tune photoreception to habitat and behaviour, allowing them to extend the spectral range of their vision both deeper into the ultraviolet and further into the red.
Single and tandem Fabry-Perot etalons as solar background filters for lidar.
McKay, J A
1999-09-20
Atmospheric lidar is difficult in daylight because of sunlight scattered into the receiver field of view. In this research methods for the design and performance analysis of Fabry-Perot etalons as solar background filters are presented. The factor by which the signal to background ratio is enhanced is defined as a measure of the performance of the etalon as a filter. Equations for evaluating this parameter are presented for single-, double-, and triple-etalon filter systems. The role of reflective coupling between etalons is examined and shown to substantially reduce the contributions of the second and third etalons to the filter performance. Attenuators placed between the etalons can improve the filter performance, at modest cost to the signal transmittance. The principal parameter governing the performance of the etalon filters is the etalon defect finesse. Practical limitations on etalon plate smoothness and parallelism cause the defect finesse to be relatively low, especially in the ultraviolet, and this sets upper limits to the capability of tandem etalon filters to suppress the solar background at tolerable cost to the signal.
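A single ideal etalon's behavior can be sketched from the standard Airy transmittance T(δ) = [1 + (2F/π)² sin²(δ/2)]⁻¹, where F is the finesse and δ the round-trip phase. The finesse value below is illustrative; the code only verifies the familiar FWHM ≈ 2π/F relation that links finesse to passband narrowness, and hence to solar background rejection:

```python
import numpy as np

F = 30.0                                       # illustrative effective finesse
delta = np.linspace(np.pi, 3 * np.pi, 200001)  # bracket the peak at delta = 2*pi
T = 1.0 / (1.0 + (2.0 * F / np.pi) ** 2 * np.sin(delta / 2.0) ** 2)

passband = delta[T > 0.5]                      # phases above half-maximum
fwhm = passband.max() - passband.min()         # ~ 2*pi/F for high finesse
print(abs(fwhm - 2 * np.pi / F) < 1e-3)
```

A higher finesse narrows each passband relative to the free spectral range of 2π, which is why the defect finesse limits how strongly the broadband solar background can be suppressed.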
System for enhanced longevity of in situ microbial filter used for bioremediation
Carman, M. Leslie; Taylor, Robert T.
2000-01-01
An improved method for in situ microbial filter bioremediation with increased operational longevity of an in situ microbial filter emplaced in an aquifer. A method for generating a microbial filter of sufficient catalytic density and thickness, which has an increased replenishment interval, improved bacteria attachment and detachment characteristics, and endogenous stability under in situ conditions. A system for in situ field water remediation.
Method for enhanced longevity of in situ microbial filter used for bioremediation
Carman, M. Leslie; Taylor, Robert T.
1999-01-01
An improved method for in situ microbial filter bioremediation with increased operational longevity of an in situ microbial filter emplaced in an aquifer. A method for generating a microbial filter of sufficient catalytic density and thickness, which has an increased replenishment interval, improved bacteria attachment and detachment characteristics, and endogenous stability under in situ conditions. A system for in situ field water remediation.
Cecil, L.D.; Knobel, L.L.; Wegner, S.J.; Moore, L.L.
1989-01-01
Water from four wells completed in the Snake River Plain aquifer was sampled as part of the U.S. Geological Survey's quality assurance program to evaluate the effect of filtration and preservation methods on strontium-90 concentrations in groundwater at the Idaho National Engineering Laboratory. Water from each well was filtered through either a 0.45-micrometer or a 0.1-micrometer membrane filter; unfiltered samples also were collected. Two sets of filtered and two sets of unfiltered samples were collected; one set of each was preserved in the field with reagent-grade hydrochloric acid and the other set was not acidified. For water from wells with strontium-90 concentrations at or above the reporting level, 94% or more of the strontium-90 is in true solution or in colloidal particles smaller than 0.1 micrometer. These results suggest that within-laboratory reproducibility for strontium-90 in groundwater at the INEL is not significantly affected by changes in filtration and preservation methods used for sample collection. (USGS)
A novel method for segmentation of Infrared Scanning Laser Ophthalmoscope (IR-SLO) images of retina.
Ajaz, Aqsa; Aliahmad, Behzad; Kumar, Dinesh K
2017-07-01
Retinal vessel segmentation forms an essential element of automatic retinal disease screening systems. The development of multimodal imaging systems with IR-SLO and OCT could help in studying the early stages of retinal disease. The ability of IR-SLO to examine alterations in the structure of the retina, and its direct correlation with OCT, can be useful for the assessment of various diseases. This paper presents an automatic method for segmentation of IR-SLO fundus images based on a combination of morphological filters and image enhancement techniques. As a first step, the retinal vessels are contrast-enhanced using morphological filters, followed by background exclusion using Contrast Limited Adaptive Histogram Equalization (CLAHE) and bilateral filtering. The final segmentation is obtained using the Isodata technique. Our approach was tested on a set of 26 IR-SLO images and the results were compared to two sets of gold-standard images. The performance of the proposed method was evaluated in terms of sensitivity, specificity and accuracy. The system has an average accuracy of 0.90 for both sets.
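The final Isodata thresholding step can be sketched as the classical iterative intermean rule (the CLAHE and bilateral-filtering stages are omitted, and the synthetic intensities below are illustrative, not IR-SLO data):

```python
import numpy as np

def isodata_threshold(img, tol=0.5):
    """Iterative intermean (Isodata) threshold: repeatedly set the
    threshold to the midpoint of the two class means until stable."""
    t = img.mean()
    while True:
        lo, hi = img[img <= t], img[img > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

rng = np.random.default_rng(3)
vessels = rng.normal(200, 10, 500)       # bright vessel pixels (synthetic)
background = rng.normal(60, 15, 4500)    # darker background pixels
img = np.concatenate([vessels, background])

t = isodata_threshold(img)
print(60 < t < 200)                      # threshold falls between the modes
```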
Performance evaluation of a retrofit digital detector-based mammography system.
Marshall, Nicholas W; van Ongeval, Chantal; Bosmans, Hilde
2016-02-01
A retrofit flat panel detector was integrated with a GE DMR+ analog mammography system and characterized using detective quantum efficiency (DQE). Technical system performance was evaluated using the European Guidelines protocol, followed by a limited evaluation of clinical image quality for 20 cases using image quality criteria in the European Guidelines. Optimal anode/filter selections were established using signal difference-to-noise ratio measurements. Only small differences in peak DQE were seen between the three anode/filter settings, with an average value of 0.53. For poly(methyl methacrylate) (PMMA) thicknesses above 60 mm, Rh/Rh was the optimal anode/filter setting. The system required a mean glandular dose of 0.54 mGy at 30 kV Rh/Rh to reach the Acceptable gold thickness limit for 0.1 mm details. Imaging performance of the retrofit unit with the GE DMR+ is notably better than that of powder-based computed radiography systems and is comparable to current flat panel FFDM systems. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cecil, L.D.; Knobel, L.L.; Wegner, S.J.
1989-01-01
Water from four wells completed in the Snake River Plain aquifer was sampled as part of the US Geological Survey's quality assurance program to evaluate the effect of filtration and preservation methods on strontium-90 concentrations in groundwater at the Idaho National Engineering Laboratory. Water from each well was filtered through either a 0.45-micrometer or a 0.1-micrometer membrane filter; unfiltered samples also were collected. Two sets of filtered and two sets of unfiltered samples were collected; one set of each was preserved in the field with reagent-grade hydrochloric acid and the other set was not acidified. For water from wells with strontium-90 concentrations at or above the reporting level, 94% or more of the strontium-90 is in true solution or in colloidal particles smaller than 0.1 micrometer. These results suggest that within-laboratory reproducibility for strontium-90 in groundwater at the INEL is not significantly affected by changes in filtration and preservation methods used for sample collection. 13 refs., 2 figs., 6 tabs.
NASA Astrophysics Data System (ADS)
Beganović, Anel; Beć, Krzysztof B.; Henn, Raphael; Huck, Christian W.
2018-05-01
The applicability of two elimination techniques for interferences occurring in measurements with cells of short pathlength using Fourier transform near-infrared (FT-NIR) spectroscopy was evaluated. Due to the growing interest in the field of vibrational spectroscopy in aqueous biological fluids (e.g. glucose in blood), aqueous solutions of D-(+)-glucose were prepared and split into a calibration set and an independent validation set. All samples were measured with two FT-NIR spectrometers at various spectral resolutions. Moving average smoothing (MAS) and fast Fourier transform filter (FFT filter) were applied to the interference affected FT-NIR spectra in order to eliminate the interference pattern. After data pre-treatment, partial least squares regression (PLSR) models using different NIR regions were constructed using untreated (interference affected) spectra and spectra treated with MAS and FFT filter. The prediction of the independent validation set revealed information about the performance of the utilized interference elimination techniques, as well as the different NIR regions. The results showed that the combination band of water at approx. 5200 cm-1 is of great importance since its performance was superior to the one of the so-called first overtone of water at approx. 6800 cm-1. Furthermore, this work demonstrated that MAS and FFT filter are fast and easy-to-use techniques for the elimination of interference fringes in FT-NIR transmittance spectroscopy.
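Of the two elimination techniques, moving average smoothing (MAS) is the simpler to sketch: a boxcar window whose width matches the fringe period nearly cancels a sinusoidal interference pattern while leaving a broad band largely intact. The synthetic spectrum, fringe period, and window length below are illustrative, not the paper's values:

```python
import numpy as np

n = 1024
x = np.arange(n, dtype=float)
band = np.exp(-0.5 * ((x - 512) / 120.0) ** 2)   # broad absorption band
fringe = 0.2 * np.sin(2 * np.pi * x / 16.0)      # short-pathlength fringes
spec = band + fringe

win = 16                                         # one fringe period
kernel = np.ones(win) / win
smooth = np.convolve(spec, kernel, mode="same")  # boxcar MAS

# Fringe amplitude is strongly attenuated after smoothing (edges excluded)
resid_before = np.std(spec - band)
resid_after = np.std(smooth[win:-win] - band[win:-win])
print(resid_after < 0.1 * resid_before)
```

The FFT-filter alternative removes the fringe by notching its peak in the Fourier domain instead; both leave the slowly varying band shape usable for PLSR calibration.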
NASA Technical Reports Server (NTRS)
Leviton, Douglas B.; Tsevetanov, Zlatan; Woodruff, Bob; Mooney, Thomas A.
1998-01-01
Advanced optical bandpass filters for the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) have been developed on a filter-by-filter basis through detailed studies which take into account the instrument's science goals, available optical filter fabrication technology, and developments in ACS's charge-coupled device (CCD) detector technology. These filters include a subset of filters for the Sloan Digital Sky Survey (SDSS) which are optimized for astronomical photometry using today's CCDs. In order for ACS to be truly advanced, these filters must push the state of the art in performance in a number of key areas at the same time. Important requirements for these filters include outstanding transmitted wavefront, high transmittance, uniform transmittance across each filter, spectrally structure-free bandpasses, exceptionally high out-of-band rejection, a high degree of parfocality, and immunity to environmental degradation. These constitute a very stringent set of requirements indeed, especially for filters which are up to 90 mm in diameter. The highly successful paradigm in which final specifications for flight filters were derived through interaction amongst the ACS Science Team, the instrument designer, the lead optical engineer, and the filter designer and vendor is described. Examples of iterative design trade studies carried out in the context of science needs and budgetary and schedule constraints are presented. An overview of the final design specifications for the ACS bandpass and ramp filters is also presented.
Indications, complications and outcomes of inferior vena cava filters: A retrospective study.
Wassef, Andrew; Lim, Wendy; Wu, Cynthia
2017-05-01
Inferior vena cava filters are used to prevent embolization of a lower extremity deep vein thrombosis when the risk of pulmonary embolism is thought to be high. However, evidence is lacking for their benefit and guidelines differ on the recommended indications for filter insertion. The study aim was to determine the reasons for inferior vena cava filter placement and the subsequent complication rate. We studied a retrospective cohort of patients who received inferior vena cava filters in Edmonton, Alberta, Canada from 2007 to 2011. The main outcome was the indication for inferior vena cava filter insertion. Other measures included baseline demographics and medical history of patients, clinical outcomes and filter retrieval rates. 464 patients received inferior vena cava filters. An acute deep vein thrombosis with a contraindication to anticoagulation was the indication for 206 (44.4%) filter insertions. No contraindication to anticoagulation could be identified in 20.7% of filter placements. 30.6% were placed in those with active cancer, in whom mortality was significantly higher. Only 38.9% of retrievable filters were successfully retrieved. Inferior vena cava filters were placed frequently in patients with weak or no guideline-supported indications for filter placement and in up to 20% of patients with no contraindication to anticoagulation. The high rates of cancer and the high mortality rate of the cohort raise the possibility that some filters are placed inappropriately in end-of-life settings. Copyright © 2017 Elsevier Ltd. All rights reserved.
Processing Techniques for Intelligibility Improvement to Speech with Co-Channel Interference.
1983-09-01
Hanson, B. A.; et al. (Signal Technology, Inc., Goleta, CA)
…processing was found to be always less than in the original unprocessed co-channel signal; also, as the length of the comb filter increased, the…
Data Filtering of Western Hemisphere GOES Wildfire ABBA Products
NASA Astrophysics Data System (ADS)
Theisen, M.; Prins, E.; Schmidt, C.; Reid, J. S.; Hunter, J.; Westphal, D.
2002-05-01
The Fire Locating and Modeling of Burning Emissions (FLAMBE) project was developed to model biomass burning emissions, transport, and radiative effects in real time. The model relies on data from the Geostationary Operational Environmental Satellites (GOES-8, GOES-10) generated by the Wildfire Automated Biomass Burning Algorithm (WF ABBA). To develop the most accurate modeling system, the data set needs to be filtered to distinguish true fire pixels from false alarms. False alarms occur due to reflection of solar radiation off standing water, surface structure variances, and heat anomalies. The Reoccurring Fire Filtering algorithm (ReFF) was developed to address such false alarms by filtering data based on reoccurrence, location in relation to region and satellite, as well as heat intensity. WF ABBA data for the year 2000 during the peak of the burning season were analyzed using ReFF. The analysis resulted in a 45% decrease in total fire pixel occurrence in North America but only a 15% decrease in South America. The lower percentage decrease in South America is a result of fires burning for longer periods of time, less surface variance, and greater heat intensity of fires in that region. Fires are also so prevalent in the region that multiple fires may coexist in the same 4-kilometer pixel.
Lannoy, D; Décaudin, B; Resibois, J-P; Barrier, F; Wierre, L; Horrent, S; Batt, C; Moulront, S; Odou, P
2008-02-01
This work consisted of the assessment of humidification parameters and flow resistance for different heat and moisture exchanger filters (HMEF) used in intensive care unit. Four electrostatic HMEF were assessed: Hygrobac S (Tyco); Humidvent compact S (Teleflex); Hygrovent S/HME (Medisize-Dräger); Clear-Therm+HMEF (Intersurgical). Humidification parameters (loss of water weight, average absolute moisture [AAM], absolute variation of moisture) have been evaluated on a bench-test in conformity with the ISO 9360: 2000 standard, for 24h with the following ventilatory settings: tidal volume at 500 ml, respiratory rate at 15 c/min, and inspiration/expiration ratio at 1:1. The flow resistance of HMEFs assessed using the pressure drop method was measured before and after 24h of humidification for three increasing air flows of 30, 60, and 90 l/min. All the HMEFs allowed satisfactory level of humidification exceeding 30 mgH(2)O/l. The less powerful remained the Clear-Therm. Concerning HMEFs flow resistance, results showed a pressure drop slightly more important for the Hygrobac S filter as compared with other filters. This test showed differences between the HMEFs for both humidification and resistance parameters. When compared to the new version of the standards, HMEFs demonstrated their reliability. However, evolution of humidification and flow resistance characteristics over 24h showed a structural degradation of HMEFs, limiting their use over a longer period.
Brown, Adrian P; Borgs, Christian; Randall, Sean M; Schnell, Rainer
2017-06-08
Integrating medical data using databases from different sources by record linkage is a powerful technique increasingly used in medical research. Under many jurisdictions, unique personal identifiers needed for linking the records are unavailable. Since sensitive attributes, such as names, have to be used instead, privacy regulations usually demand encrypting these identifiers. The corresponding set of techniques for privacy-preserving record linkage (PPRL) has received widespread attention. One recent method is based on Bloom filters. Due to superior resilience against cryptographic attacks, composite Bloom filters (cryptographic long-term keys, CLKs) are considered best practice for privacy in PPRL. Real-world performance of these techniques using large-scale data is unknown up to now. Using a large subset of Australian hospital admission data, we tested the performance of an innovative PPRL technique (CLKs using multibit trees) against a gold-standard derived from clear-text probabilistic record linkage. Linkage time and linkage quality (recall, precision and F-measure) were evaluated. Clear text probabilistic linkage resulted in marginally higher precision and recall than CLKs. PPRL required more computing time but 5 million records could still be de-duplicated within one day. However, the PPRL approach required fine tuning of parameters. We argue that increased privacy of PPRL comes with the price of small losses in precision and recall and a large increase in computational burden and setup time. These costs seem to be acceptable in most applied settings, but they have to be considered in the decision to apply PPRL. Further research on the optimal automatic choice of parameters is needed.
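A minimal sketch of the Bloom-filter name encoding underlying CLKs: each identifier is split into character bigrams, each bigram sets several bit positions via keyed hashes, and encodings are compared with the Dice coefficient. The filter length, hash count, secret, and example names are all illustrative toy parameters; the study's multibit-tree matching is not shown:

```python
import hashlib

L, K = 1000, 10          # illustrative filter length and hashes per bigram

def bigrams(name):
    padded = f"_{name.lower()}_"
    return {padded[i:i + 2] for i in range(len(padded) - 1)}

def encode(name, secret=b"shared-key"):
    """Encode a name's bigrams as a set of Bloom-filter bit positions."""
    bits = set()
    for gram in bigrams(name):
        for k in range(K):
            h = hashlib.sha256(secret + bytes([k]) + gram.encode()).digest()
            bits.add(int.from_bytes(h[:4], "big") % L)
    return bits

def dice(a, b):
    """Dice coefficient between two bit-position sets."""
    return 2 * len(a & b) / (len(a) + len(b))

sim_close = dice(encode("catherine"), encode("katherine"))
sim_far = dice(encode("catherine"), encode("robert"))
print(sim_close > sim_far)   # spelling variants stay similar when encoded
```

This error tolerance under encryption is what enables probabilistic linkage without clear-text identifiers, at the computational cost the abstract quantifies.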
Kang, Jieun; Ko, Heung-Kyu; Shin, Ji Hoon; Ko, Gi-Young; Jo, Kyung-Wook; Huh, Jin Won; Oh, Yeon-Mok; Lee, Sang-Do; Lee, Jae Seung
2017-12-01
Retrievable inferior vena cava (IVC) filters are increasingly used in patients with venous thromboembolism (VTE) who have contraindications to anticoagulant therapy. However, previous studies have shown that many retrievable filters are left permanently in patients. This study aimed to identify the common indications for IVC filter insertion, the filter retrieval rate, and the predictive factors for filter retrieval attempts. To this end, a retrospective cohort study was performed at a tertiary care center in South Korea between January 2010 and May 2016. Electronic medical charts were reviewed for patients with pulmonary embolism (PE) who underwent IVC filter insertion. A total of 439 cases were reviewed. The most common indication for filter insertion was a preoperative/procedural aim, followed by extensive iliofemoral deep vein thrombosis (DVT). Retrieval of the IVC filter was attempted in 44.9% of patients. The retrieval success rate was 93.9%. History of cerebral hemorrhage, malignancy, and admission to a nonsurgical department were the significant predictive factors of a lower retrieval attempt rate in multivariate analysis. With the increased use of IVC filters, more issues should be addressed before placing a filter and physicians should attempt to improve the filter retrieval rate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kowalski, Adam F.; Mathioudakis, Mihalis; Hawley, Suzanne L.
We present a large data set of high-cadence dMe flare light curves obtained with custom continuum filters on the triple-beam, high-speed camera system ULTRACAM. The measurements provide constraints for models of the near-ultraviolet (NUV) and optical continuum spectral evolution on timescales of ≈1 s. We provide a robust interpretation of the flare emission in the ULTRACAM filters using simultaneously obtained low-resolution spectra during two moderate-sized flares in the dM4.5e star YZ CMi. By avoiding the spectral complexity within the broadband Johnson filters, the ULTRACAM filters are shown to characterize bona fide continuum emission in the NUV, blue, and red wavelength regimes. The NUV/blue flux ratio in flares is equivalent to a Balmer jump ratio, and the blue/red flux ratio provides an estimate for the color temperature of the optical continuum emission. We present a new “color–color” relationship for these continuum flux ratios at the peaks of the flares. Using the RADYN and RH codes, we interpret the ULTRACAM filter emission using the dominant emission processes from a radiative-hydrodynamic flare model with a high nonthermal electron beam flux, which explains a hot, T ≈ 10^4 K, color temperature at blue-to-red optical wavelengths and a small Balmer jump ratio as observed in moderate-sized and large flares alike. We also discuss the high time resolution, high signal-to-noise continuum color variations observed in YZ CMi during a giant flare, which increased the NUV flux from this star by over a factor of 100.
Influence of cigarette filter ventilation on smokers' mouth level exposure to tar and nicotine.
Caraway, John W; Ashley, Madeleine; Bowman, Sheri A; Chen, Peter; Errington, Graham; Prasad, Krishna; Nelson, Paul R; Shepperd, Christopher J; Fearon, Ian M
2017-12-01
Cigarette filter ventilation allows air to be drawn into the filter, diluting the cigarette smoke. Although machine smoking shows that toxicant yields are reduced, it does not predict human yields. The objective of this study was to investigate the relationship between cigarette filter ventilation and mouth level exposure (MLE) to tar and nicotine in cigarette smokers. We collated and reviewed data from 11 studies across 9 countries, performed between 2005 and 2013, containing data on MLE from 156 products with filter ventilation between 0% and 87%. MLE to tar and nicotine among 7534 participants was estimated using the part-filter analysis method from spent filter tips. For each of the countries, MLE to tar and nicotine tended to decrease as filter ventilation increased. Across countries, per-cigarette MLE to tar and nicotine decreased as filter ventilation increased from 0% to 87%. Daily MLE to tar and nicotine also decreased across the range of increasing filter ventilation. These data suggest that, on average, smokers of highly ventilated cigarettes are exposed to lower amounts of nicotine and tar per cigarette and per day than smokers of cigarettes with lower levels of ventilation. Copyright © 2017 British American Tobacco. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Mckeown, Anderson B; Hibbard, Robert R
1955-01-01
The effect of dissolved oxygen on the filter-clogging characteristics of three JP-4 and two JP-5 fuels was studied at 300 to 400 degrees F in a bench-scale rig employing filter paper as the filter medium. The residence time of the fuel at the high temperature was approximately 6 seconds. Under these conditions, the clogging tendency of the fuels increased with both increasing temperature and increasing concentration of dissolved oxygen. The amount of insoluble material formed at high temperatures necessary to produce clogging of filters was very small, of the order of 1 milligram per gallon of fuel.
Spatio-temporal filtering techniques for the detection of disaster-related communication.
Fitzhugh, Sean M; Ben Gibson, C; Spiro, Emma S; Butts, Carter T
2016-09-01
Individuals predominantly exchange information with one another through informal, interpersonal channels. During disasters and other disrupted settings, information spread through informal channels regularly outpaces official information provided by public officials and the press. Social scientists have long examined this kind of informal communication in the rumoring literature, but studying rumoring in disrupted settings has posed numerous methodological challenges. Measuring features of informal communication (timing, content, location) with any degree of precision has historically been extremely challenging in small studies and infeasible at large scales. We address this challenge by using online, informal communication from a popular microblogging website for which we have precise spatial and temporal metadata. While the online environment provides a new means for observing rumoring, the abundance of data poses challenges for parsing hazard-related rumoring from countless other topics in numerous streams of communication. Rumoring about disaster events is typically temporally and spatially constrained to places where that event is salient. Accordingly, we use spatial and temporal subsampling to increase the resolution of our detection techniques. By filtering out data from known sources of error (per rumor theories), we greatly enhance the signal of disaster-related rumoring activity. We use these spatio-temporal filtering techniques to detect rumoring during a variety of disaster events, from high-casualty events in major population centers to minimally destructive events in remote areas. We consistently find three phases of response: anticipatory excitation, in which warnings and alerts are issued ahead of an event; primary excitation in and around the impacted area; and secondary excitation, which frequently brings a convergence of attention from distant locales onto locations impacted by the event.
Our results demonstrate the promise of spatio-temporal filtering techniques for "tuning" measurement of hazard-related rumoring to enable observation of rumoring at scales that have long been infeasible. Copyright © 2016 Elsevier Inc. All rights reserved.
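The spatio-temporal subsampling idea can be illustrated with a short sketch: restrict a message stream to a latitude/longitude box around an event and a time window bracketing it, which boosts event-related signal over background chatter. The field names, thresholds, and example records are illustrative assumptions, not the study's data or code:

```python
from datetime import datetime, timedelta

def spatiotemporal_filter(messages, event_time, event_lat, event_lon,
                          radius_deg=1.0, window_hours=6):
    """Keep only messages near an event in both space and time."""
    window = timedelta(hours=window_hours)
    return [
        m for m in messages
        if abs(m["lat"] - event_lat) <= radius_deg
        and abs(m["lon"] - event_lon) <= radius_deg
        and abs(m["time"] - event_time) <= window
    ]

stream = [
    {"time": datetime(2016, 9, 1, 12, 0), "lat": 34.0, "lon": -118.2, "text": "felt shaking"},
    {"time": datetime(2016, 9, 1, 12, 5), "lat": 48.9, "lon": 2.3, "text": "lunch in Paris"},
    {"time": datetime(2016, 9, 3, 12, 0), "lat": 34.1, "lon": -118.3, "text": "old news"},
]
hits = spatiotemporal_filter(stream, datetime(2016, 9, 1, 12, 0), 34.05, -118.25)
print(len(hits))  # only the first message passes both the spatial and temporal filters
```

In practice the window would be tuned per event type (e.g. anticipatory phases require extending the window before the event), but the subsampling logic is the same.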
Impact of subgrid fluid turbulence on inertial particles subject to gravity
NASA Astrophysics Data System (ADS)
Rosa, Bogdan; Pozorski, Jacek
2017-07-01
Two-phase turbulent flows with the dispersed phase in the form of small, spherical particles are increasingly often computed with the large-eddy simulation (LES) of the carrier fluid phase, coupled to the Lagrangian tracking of particles. To enable further model development for LES with inertial particles subject to gravity, we consider direct numerical simulations of homogeneous isotropic turbulence with a large-scale forcing. Simulation results, both without filtering and in the a priori LES setting, are reported and discussed. A full (i.e. a posteriori) LES is also performed with the spectral eddy viscosity. Effects of gravity on the dispersed phase include changes in the average settling velocity due to preferential sweeping, impact on the radial distribution function and radial relative velocity, as well as direction-dependent modification of the particle velocity variance. The filtering of the fluid velocity, performed in spectral space, is shown to have a non-trivial impact on these quantities.
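The spectral-space filtering applied to the fluid velocity can be sketched in one dimension with a sharp spectral cutoff: all Fourier modes above a cutoff wavenumber are zeroed. This is a minimal 1-D illustration of the operation, not the study's 3-D implementation; the signal and cutoff are invented for the example:

```python
import numpy as np

def spectral_cutoff_filter(u, k_c):
    """Sharp spectral low-pass filter of a periodic 1-D velocity signal.

    Zeroes all Fourier modes with wavenumber magnitude above the cutoff k_c,
    mimicking the filtering of the fluid velocity performed in spectral space.
    """
    u_hat = np.fft.fft(u)
    k = np.fft.fftfreq(len(u), d=1.0 / len(u))   # integer wavenumbers
    u_hat[np.abs(k) > k_c] = 0.0
    return np.real(np.fft.ifft(u_hat))

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(2 * x) + 0.3 * np.sin(30 * x)         # large-scale + small-scale mode
u_filtered = spectral_cutoff_filter(u, k_c=10)
# the k=30 (subgrid) mode is removed; the k=2 (resolved) mode passes unchanged
print(np.allclose(u_filtered, np.sin(2 * x), atol=1e-8))
```

In an a priori LES test, particle equations would then be driven by the filtered field `u_filtered` and the results compared against tracking in the unfiltered field `u`.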
NASA Astrophysics Data System (ADS)
Ibey, Bennett; Subramanian, Hariharan; Ericson, Nance; Xu, Weijian; Wilson, Mark; Cote, Gerard L.
2005-03-01
A blood perfusion and oxygenation sensor has been developed for in situ monitoring of transplanted organs. In processing in situ data, motion artifacts due to increased perfusion can create invalid oxygen saturation values. To remove these unwanted artifacts from the pulsatile signal, adaptive filtering was employed using a third wavelength source centered at 810 nm as a reference signal. The 810 nm source resides approximately at the isosbestic point of the hemoglobin absorption curve, where the absorbance of light is nearly equal for oxygenated and deoxygenated hemoglobin. Using an autocorrelation-based algorithm, oxygen saturation values can be obtained without the need for large sampling data sets, allowing near real-time processing. This technique has been shown to be more reliable than traditional techniques and to adequately improve the measurement of oxygenation values in varying perfusion states.
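Adaptive artifact cancellation with a reference channel is classically done with an LMS filter. The sketch below is a generic LMS noise canceller, not the study's algorithm: the reference stands in for the 810 nm isosbestic signal (carrying motion artifact but little oxygenation information), and the tap count, step size, and synthetic signals are illustrative assumptions:

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """LMS adaptive noise cancellation sketch.

    The reference channel is adaptively filtered and subtracted from the
    primary channel; the error output is the artifact-reduced signal.
    """
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # current + past reference samples
        e = primary[n] - w @ x                     # error = cleaned-signal estimate
        w += 2 * mu * e * x                        # LMS weight update
        out[n] = e
    return out

rng = np.random.default_rng(0)
t = np.arange(2000)
pulse = np.sin(2 * np.pi * t / 50)                 # desired pulsatile component
artifact = rng.standard_normal(2000)               # motion artifact seen by reference
primary = pulse + artifact                         # measurement channel
cleaned = lms_cancel(primary, artifact)
# after convergence, residual artifact power is well below the input artifact power
print(round(float(np.var((cleaned - pulse)[1000:]) / np.var(artifact)), 3))
```

The design point is that the desired pulsatile signal is uncorrelated with the reference, so the adaptive filter learns to model only the artifact path.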
Solar Confocal Interferometers for Sub-Picometer-Resolution Spectral Filters
NASA Technical Reports Server (NTRS)
Gary, G. Allen; Pietraszewski, Chris; West, Edward A.; Dines, Terence C.
2006-01-01
The confocal Fabry-Perot interferometer allows sub-picometer spectral resolution of Fraunhofer line profiles. Such high spectral resolution is needed to keep pace with the higher spatial resolution of the new set of large-aperture solar telescopes: the line-of-sight spatial resolution derived from line-profile inversions would then track the improvements in transverse spatial scale provided by the larger apertures. The confocal interferometer's unique properties allow a simultaneous increase in both etendue and spectral power. We have constructed and tested two confocal interferometers. In this paper we compare the confocal interferometer with other spectral imaging filters, provide initial design parameters, show construction details for the two designs, report on the laboratory test results for these interferometers, and propose a multiple-etalon system for future testing of these units to obtain sub-picometer spectral resolution information on the photosphere in both the visible and near-infrared.
Working with difference: Thematic concepts of Japanese nurses working in New Zealand.
Healee, David; Inada, Kumiko
2016-03-01
The purpose of this study was to examine the differences experienced by Japanese nurses working in New Zealand from an organizational and personal perspective, using a qualitative approach. Interview data were analyzed using a thematic method, abstracting increasing levels of themes until one main theme explained the data: finding a voice. This core theme demonstrated that Japanese nurses had to learn to accommodate difference while learning to speak up. Moreover, this needed to occur through a number of cultural filters. The principal conclusion was that migrant nurses face multiple personal and organizational challenges when working in a new environment. Finding a voice is the method by which nurses learn to communicate and work within new healthcare settings. Nurses use a number of filters to manage the transition. The host country needs to recognize these differences and accommodate them through orientation modules. © 2015 Wiley Publishing Asia Pty Ltd.
Conductometric Sensor for Soot Mass Flow Detection in Exhausts of Internal Combustion Engines
Feulner, Markus; Hagen, Gunter; Müller, Andreas; Schott, Andreas; Zöllner, Christian; Brüggemann, Dieter; Moos, Ralf
2015-01-01
Soot sensors are required for on-board diagnostics (OBD) of automotive diesel particulate filters (DPF) to detect filter failures. Widely used for this purpose are conductometric sensors, measuring an electrical current or resistance between two electrodes. Soot particles deposit on the electrodes, which leads to an increase in current or a decrease in resistance. If installed upstream of a DPF, the “engine-out” soot emissions can also be determined directly by soot sensors. Sensors were characterized in real diesel engine exhaust under varying operating conditions and with two different kinds of diesel fuel. The sensor signal was correlated to the actual soot mass and particle number, measured with an SMPS. Sensor data and soot analytics (SMPS) agreed very well; an impressive linear correlation was found in a double-logarithmic representation. This behavior was independent even of the engine settings used and of the biodiesel content. PMID:26580621
Model-based spectral estimation of Doppler signals using parallel genetic algorithms.
Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F
2000-05-01
Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from inadequate frequency resolution due to the time-segment duration and the non-stationary characteristics of the signals. Parametric or model-based estimators can give significant improvements in time-frequency resolution at the expense of higher computational complexity. This work describes an approach that implements, in real time, a parametric spectral estimation method using genetic algorithms (GAs) to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by exploiting the simplicity and parallel characteristics of GAs. This will allow the implementation of higher-order filters, increasing the spectral resolution and opening a greater scope for using more complex methods.
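The idea of using a GA to search model parameters that minimise a prediction-error function can be sketched as follows. This is a generic real-coded GA fitting autoregressive (AR) coefficients to a synthetic signal; the operators, rates, and AR(2) test signal are illustrative assumptions, not the paper's parallel implementation:

```python
import random

random.seed(1)

def prediction_error(coeffs, signal):
    """One-step prediction error of an AR model with the given coefficients."""
    p = len(coeffs)
    err = 0.0
    for n in range(p, len(signal)):
        pred = sum(a * signal[n - i - 1] for i, a in enumerate(coeffs))
        err += (signal[n] - pred) ** 2
    return err

def ga_fit_ar(signal, order=2, pop_size=30, generations=60):
    """Minimal real-coded GA minimising the prediction-error function."""
    pop = [[random.uniform(-2, 2) for _ in range(order)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: prediction_error(c, signal))  # rank by fitness
        survivors = pop[:pop_size // 2]                      # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            # averaging crossover plus small Gaussian mutation
            children.append([(x + y) / 2 + random.gauss(0, 0.05)
                             for x, y in zip(a, b)])
        pop = survivors + children
    return min(pop, key=lambda c: prediction_error(c, signal))

# Synthetic AR(2) process; the generating coefficients are [1.5, -0.9]
signal = [1.0, 1.0]
for n in range(2, 300):
    signal.append(1.5 * signal[-1] - 0.9 * signal[-2])
best = ga_fit_ar(signal)
print([round(c, 2) for c in best])
```

Once the AR coefficients are found, the model spectrum follows from the AR transfer function; the GA's fitness evaluations are independent across the population, which is what makes the approach naturally parallel.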
Improving the retrieval rate of inferior vena cava filters with a multidisciplinary team approach
Inagaki, Elica; Farber, Alik; Eslami, Mohammad H.; Siracuse, Jeffrey J.; Rybin, Denis V.; Sarosiek, Shayna; Sloan, J. Mark; Kalish, Jeffrey
2017-01-01
Objective: The option to retrieve inferior vena cava (IVC) filters has resulted in an increase in the utilization of these devices as stopgap measures in patients with relative contraindications to anticoagulation. These retrievable IVC filters, however, are often not retrieved and become permanent. Recent data from our institution confirmed a historically low retrieval rate. Therefore, we hypothesized that the implementation of a new IVC filter retrieval protocol would increase the retrieval rate of appropriate IVC filters at our institution. Methods: All consecutive patients who underwent IVC filter placement at our institution between September 2003 and July 2012 were retrospectively reviewed. In August 2012, a multidisciplinary task force was established, and a new IVC filter retrieval protocol was implemented. Prospective data were collected using a centralized interdepartmental IVC filter registry for all consecutive patients who underwent IVC filter placement between August 2012 and September 2014. Patients were chronologically categorized into preimplementation (PRE) and postimplementation (POST) groups. Comparisons of outcome measures, including the retrieval rate of IVC filters along with rates of retrieval attempt and technical failure, were made between the two groups. Results: In the PRE and POST groups, a total of 720 and 74 retrievable IVC filters were implanted, respectively. In the POST group, 40 of 74 filters (54%) were successfully retrieved compared with 82 of 720 filters (11%) in the PRE group (P < .001). Furthermore, a greater number of IVC filter retrievals were attempted in the POST group than in the PRE group (66% vs 14%; P < .001). No significant difference was observed between the PRE and POST groups for technical failure (17% vs 18%; P = .9).
Conclusions: The retrieval rate of retrievable IVC filters at our institution was significantly increased with the implementation of a new IVC filter retrieval protocol with a multidisciplinary team approach. This improved retrieval rate is possible with minimal dedication of resources and can potentially lead to a decrease in IVC filter-related complications in the future. PMID:27318045
The development of PubMed search strategies for patient preferences for treatment outcomes.
van Hoorn, Ralph; Kievit, Wietske; Booth, Andrew; Mozygemba, Kati; Lysdahl, Kristin Bakke; Refolo, Pietro; Sacchini, Dario; Gerhardus, Ansgar; van der Wilt, Gert Jan; Tummers, Marcia
2016-07-29
The importance of respecting patients' preferences when making treatment decisions is increasingly recognized. Efficiently retrieving papers from the scientific literature reporting on the presence and nature of such preferences can help to achieve this goal. The objective of this study was to create a search filter for PubMed to help retrieve evidence on patient preferences for treatment outcomes. A total of 27 journals were hand-searched for articles on patient preferences for treatment outcomes published in 2011. Selected articles served as a reference set. To develop optimal search strategies to retrieve this set, all articles in the reference set were randomly split into a development and a validation set. MeSH-terms and keywords retrieved using PubReMiner were tested individually and as combinations in PubMed and evaluated for retrieval performance (e.g. sensitivity (Se) and specificity (Sp)). Of 8238 articles, 22 were considered to report empirical evidence on patient preferences for specific treatment outcomes. The best search filters reached Se of 100 % [95 % CI 100-100] with Sp of 95 % [94-95 %] and Sp of 97 % [97-98 %] with 75 % Se [74-76 %]. In the validation set these queries reached values of Se of 90 % [89-91 %] with Sp 94 % [93-95 %] and Se of 80 % [79-81 %] with Sp of 97 % [96-96 %], respectively. Narrow and broad search queries were developed which can help in retrieving literature on patient preferences for treatment outcomes. Identifying such evidence may in turn enhance the incorporation of patient preferences in clinical decision making and health technology assessment.
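The sensitivity and specificity figures reported above are computed from the standard 2x2 confusion counts. A toy illustration of the arithmetic (the sets and totals below are invented, not the study's data):

```python
def filter_performance(retrieved, relevant, total):
    """Sensitivity and specificity of a search filter.

    `retrieved` and `relevant` are sets of article IDs; `total` is the
    number of articles in the database searched.
    """
    tp = len(retrieved & relevant)          # relevant articles the query found
    fn = len(relevant - retrieved)          # relevant articles it missed
    fp = len(retrieved - relevant)          # irrelevant articles it returned
    tn = total - tp - fn - fp               # irrelevant articles it excluded
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

relevant = {1, 2, 3, 4}                     # hand-identified preference articles
retrieved = {1, 2, 3, 10, 11}               # what a candidate query returned
se, sp = filter_performance(retrieved, relevant, total=100)
print(se, round(sp, 3))
```

The narrow/broad trade-off in the abstract is exactly this trade-off: a broader query raises `tp` (sensitivity) at the cost of more `fp` (lower specificity).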
Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L
2018-04-01
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
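The core mechanism of atrous convolution can be shown in one dimension: inserting rate-1 zeros between filter taps enlarges the field of view from len(w) to rate*(len(w)-1)+1 samples without adding parameters. The paper applies this in 2-D inside a DCNN; the 1-D version below is a simplified sketch for clarity:

```python
import numpy as np

def atrous_conv1d(x, w, rate):
    """1-D atrous (dilated) convolution with 'valid' padding.

    The k filter taps are applied at a spacing of `rate` input samples,
    so the effective field of view is rate*(k-1)+1 with only k weights.
    """
    k = len(w)
    span = rate * (k - 1) + 1                      # effective field of view
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * rate] for j in range(k))
    return out

x = np.arange(10, dtype=float)
w = np.array([1.0, 0.0, -1.0])
print(atrous_conv1d(x, w, rate=1))   # 3 taps over a 3-sample window
print(atrous_conv1d(x, w, rate=2))   # same 3 taps over a 5-sample window
```

ASPP then amounts to running several such convolutions with different rates over the same feature map and fusing the outputs, capturing context at multiple scales with a fixed parameter budget.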
Compact whole-body fluorescent imaging of nude mice bearing EGFP expressing tumor
NASA Astrophysics Data System (ADS)
Chen, Yanping; Xiong, Tao; Chu, Jun; Yu, Li; Zeng, Shaoqun; Luo, Qingming
2005-01-01
Tumor research is a major focus of current medicine, and it is important to be able to detect tumors in animal models easily, quickly, repeatedly and noninvasively. Many researchers have shown increasing interest in such detection. Contrast agents, such as green fluorescent protein (GFP) and Discosoma red fluorescent protein (DsRed), have been applied to enhance image quality. Three main kinds of imaging schemes have been adopted to visualize fluorescent-protein-expressing tumors in vivo. These schemes are based on a fluorescence stereo microscope, a cooled charge-coupled device (CCD) or camera as the imaging device, and a laser or mercury lamp as the excitation light source. Fluorescence stereo microscopes, lasers and cooled CCDs are expensive for many institutes. The authors set up an inexpensive, compact whole-body fluorescent imaging tool consisting of a Kodak digital camera (model DC290), fluorescence filters (B and G2; HB Optical, Shenyang, Liaoning, P.R. China) and a mercury 50-W lamp power supply (U-LH50HG; Olympus Optical, Japan) as the excitation light source. EGFP was excited directly by the mercury lamp through a D455/70 nm band-pass filter, and fluorescence was recorded by the digital camera through a 520 nm long-pass filter. With this easy-to-operate tool, the authors imaged, in real time, fluorescent tumors growing in live mice. The imaging system is external and noninvasive. Half a year of experiments suggested that the imaging scheme is feasible. Whole-body fluorescence optical imaging of fluorescent-protein-expressing tumors in nude mice is an ideal tool for antitumor, antimetastatic, and antiangiogenesis drug screening.
Borrego, J J; Cornax, R; Preston, D R; Farrah, S R; McElhaney, B; Bitton, G
1991-01-01
Electronegative and electropositive filters were compared for the recovery of indigenous bacteriophages from water samples using the VIRADEL technique. Fiber glass and diatomaceous earth filters displayed low adsorption and recovery, but a marked increase in the adsorption percentage was observed when the filters were treated with cationic polymers (about 99% adsorption). A new methodology of virus elution was developed in this study, consisting of slow passage of the eluent through the filter, thus increasing the contact time between the eluent and the virus adsorbed on the filter. This technique allows a maximum recovery of 71.2%, compared with the 46.7% phage recovery obtained by the standard elution procedure. High percentages (over 83%) of phage adsorption were obtained with different filters from 1-liter aliquots of the samples, except for Virosorb 1-MDS filters (between 1.6 and 32% phage adsorption). Phage recovery using slow passage of the eluent depended on the filter type, ranging between 1.6% for Virosorb 1-MDS filters treated with polyethyleneimine and 103.2% for diatomaceous earth filters treated with 0.1% Nalco. PMID:2059044
Joe, Yun Haeng; Woo, Kyoungja; Hwang, Jungho
2014-09-15
In this study, SiO2 nanoparticles surface-coated with Ag nanoparticles (SA particles) were fabricated to coat a medium air filter. The pressure drop, filtration efficiency, and anti-viral ability of the filter were evaluated against aerosolized bacteriophage MS2 under continuous air flow. A mathematical approach was developed to measure the anti-viral ability of the filter at various virus deposition times. Moreover, two quality factors based on the anti-viral ability of the filter, and a traditional quality factor based on filtration efficiency, were calculated. The filtration efficiency and pressure drop increased with decreasing media velocity and with increasing SA particle coating level. The anti-viral efficiency also increased with increasing SA particle coating level, and decreased with increasing virus deposition time. Consequently, SA particle coating on a filter does not have significant effects on filtration quality, and there is an optimal coating level that produces the highest anti-viral quality. Copyright © 2014 Elsevier B.V. All rights reserved.
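The traditional filtration quality factor balances capture efficiency against pressure penalty. The sketch below uses the standard textbook form QF = -ln(1 - E)/Δp; the study's anti-viral quality factors are defined analogously with anti-viral efficiency in place of filtration efficiency, but their exact definitions may differ, and the numbers below are illustrative, not measured values:

```python
import math

def filtration_quality_factor(efficiency, pressure_drop_pa):
    """Classical filter quality factor QF = -ln(P) / dP, with P = 1 - E.

    Higher QF means better capture per unit of pressure-drop penalty, which
    is why raising efficiency and pressure drop together (as a particle
    coating does) can leave overall filtration quality nearly unchanged.
    """
    penetration = 1.0 - efficiency
    return -math.log(penetration) / pressure_drop_pa

# Illustrative (not measured) values: coating raises efficiency and dP together
print(round(filtration_quality_factor(0.90, 100.0), 5))  # uncoated filter
print(round(filtration_quality_factor(0.95, 130.0), 5))  # coated filter
```

In this invented example the two QF values come out nearly equal, mirroring the abstract's finding that the coating does not significantly change filtration quality even though both efficiency and pressure drop rise.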
Fusing metabolomics data sets with heterogeneous measurement errors
Waaijenborg, Sandra; Korobko, Oksana; Willems van Dijk, Ko; Lips, Mirjam; Hankemeier, Thomas; Wilderjans, Tom F.; Smilde, Age K.
2018-01-01
Combining different metabolomics platforms can contribute significantly to the discovery of complementary processes expressed under different conditions. However, analysing the fused data might be hampered by differences in their quality. In metabolomics data, one often observes that measurement errors increase with increasing measurement level and that different platforms have different measurement error variances. In this paper we compare three different approaches to correcting for measurement error heterogeneity: transformation of the raw data, weighted filtering before modelling, and a modelling approach using a weighted sum of residuals. For an illustration of these different approaches we analyse data from healthy obese and diabetic obese individuals, obtained from two metabolomics platforms. In conclusion, the filtering and modelling approaches, which both estimate a model of the measurement error, did not outperform the data transformation approaches for this application. This is probably due to the limited difference in measurement error and the fact that estimation of measurement error models is unstable given the small number of repeats available. A transformation of the data improves the classification of the two groups. PMID:29698490
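The data-transformation approach can be illustrated with a small simulation: when the error standard deviation grows in proportion to the measurement level (multiplicative error), a log transform makes the error variance roughly constant across levels, putting platforms of different quality on a comparable scale. The four levels and the 20% relative error below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate replicate measurements with multiplicative (lognormal) error:
# the error scales with the true level, as often seen in metabolomics data.
levels = np.array([1.0, 10.0, 100.0, 1000.0])
replicates = levels[:, None] * np.exp(0.2 * rng.standard_normal((4, 500)))

raw_sd = replicates.std(axis=1)            # grows roughly in proportion to the level
log_sd = np.log(replicates).std(axis=1)    # roughly constant (about 0.2) at all levels
print(raw_sd.round(2))
print(log_sd.round(3))
```

After such a variance-stabilizing transformation, ordinary unweighted fusion and classification methods can be applied without one platform's noisier high-abundance metabolites dominating the model.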
Frequency modulation television analysis: Threshold impulse analysis. [with computer program
NASA Technical Reports Server (NTRS)
Hodge, W. H.
1973-01-01
A computer program is developed to calculate the FM threshold impulse rates as a function of the carrier-to-noise ratio for a specified FM system. The system parameters and a vector of 1024 integers, representing the probability density of the modulating voltage, are required as input parameters. The computer program is utilized to calculate threshold impulse rates for twenty-four sets of measured probability data supplied by NASA and for sinusoidal and Gaussian modulating waveforms. As a result of the analysis several conclusions are drawn: (1) The use of preemphasis in an FM television system improves the threshold by reducing the impulse rate. (2) Sinusoidal modulation produces a total impulse rate which is a practical upper bound for the impulse rates of TV signals providing the same peak deviations. (3) As the moment of the FM spectrum about the center frequency of the predetection filter increases, the impulse rate tends to increase. (4) A spectrum having an expected frequency above (below) the center frequency of the predetection filter produces a higher negative (positive) than positive (negative) impulse rate.
Herbst, Daniel P.
2017-01-01
Conventional arterial-line filters commonly use a large volume circular shaped housing, a wetted micropore screen, and a purge port to trap, separate, and remove gas bubbles from extracorporeal blood flow. Focusing on the bubble trapping function, this work attempts to explore how the filter housing shape and its resulting blood flow path affect the clinical application of arterial-line filters in terms of gross air handling. A video camera was used in a wet-lab setting to record observations made during gross air-bolus injections in three different radially designed filters using a 30–70% glycerol–saline mixture flowing at 4.5 L/min. Two of the filters both had inlet ports attached near the filter-housing top with bottom oriented outlet ports at the bottom, whereas the third filter had its inlet and outlet ports both located at the bottom of the filter housing. The two filters with top-in bottom-out fluid paths were shown to direct the incoming flow downward as it passed through the filter, placing the forces of buoyancy and viscous drag in opposition to each other. This contrasted with the third filter's bottom-in bottom-out fluid path, which was shown to direct the incoming flow upward so that the forces of buoyancy and viscous drag work together. The direction of the blood flow path through a filter may be important to the application of arterial-line filter technology as it helps determine how the forces of buoyancy and flow are aligned with one another. PMID:28298665
System for measuring radioactivity of labelled biopolymers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gross, V.
1980-07-08
A system is described for measuring radioactivity of labelled biopolymers, comprising: a set of containers adapted for receiving aqueous solutions of biological samples containing biopolymers which are subsequently precipitated in said containers on particles of diatomite in the presence of a coprecipitator, then filtered, dissolved, and mixed with a scintillator; radioactivity measuring means including a detection chamber to which is fed the mixture produced in said set of containers; an electric drive for moving said set of containers in a stepwise manner; means for proportional feeding of said coprecipitator and a suspension of diatomite in an acid solution to said containers which contain the biological sample for forming an acid precipitation of biopolymers; means for the removal of precipitated samples from said containers; precipitated biopolymer filtering means for successively filtering the precipitate, suspending the precipitate, dissolving the biopolymers mixed with said scintillator for feeding of the mixture to said detection chamber; a system of pipelines interconnecting said above-recited means; and said means for measuring radioactivity of labelled biopolymers including, a measuring cell arranged in a detection chamber and communicating with said means for filtering precipitated biopolymers through one pipeline of said system of pipelines; a program unit electrically connected to said electric drive, said means for acid precipitation of biopolymers, said means for the removal of precipitated samples from said containers, said filtering means, and said radioactivity measuring device; said program unit adapted to periodically switch on and off the above-recited means and check the sequence of the radioactivity measuring operations; and a control unit for controlling the initiation of the system and for selecting programs.
Analytically solvable chaotic oscillator based on a first-order filter.
Corron, Ned J; Cooper, Roy M; Blakely, Jonathan N
2016-02-01
A chaotic hybrid dynamical system is introduced and its analytic solution is derived. The system is described as an unstable first order filter subject to occasional switching of a set point according to a feedback rule. The system qualitatively differs from other recently studied solvable chaotic hybrid systems in that the timing of the switching is regulated by an external clock. The chaotic analytic solution is an optimal waveform for communications in noise when a resistor-capacitor-integrate-and-dump filter is used as a receiver. As such, these results provide evidence in support of a recent conjecture that the optimal communication waveform for any stable infinite-impulse response filter is chaotic.
Analytically solvable chaotic oscillator based on a first-order filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corron, Ned J.; Cooper, Roy M.; Blakely, Jonathan N.
2016-02-15
A chaotic hybrid dynamical system is introduced and its analytic solution is derived. The system is described as an unstable first order filter subject to occasional switching of a set point according to a feedback rule. The system qualitatively differs from other recently studied solvable chaotic hybrid systems in that the timing of the switching is regulated by an external clock. The chaotic analytic solution is an optimal waveform for communications in noise when a resistor-capacitor-integrate-and-dump filter is used as a receiver. As such, these results provide evidence in support of a recent conjecture that the optimal communication waveform for any stable infinite-impulse response filter is chaotic.
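The clock-regulated switching described above lends itself to a compact numerical sketch. Assuming, as one plausible reading of the abstract (not the paper's exact equations or parameters), an unstable filter dx/dt = beta*(x - s) whose set point s is reset to sign(x) at each clock tick, the exact solution over one clock period with beta = ln 2 reduces to the return map x_{n+1} = 2*x_n - sign(x_n):

```python
def chaotic_return_map(x0, n_steps=40):
    """Iterate the clock-to-clock return map x_{n+1} = 2*x_n - s_n with
    s_n = sign(x_n), obtained by solving dx/dt = beta*(x - s) exactly over
    one clock period with beta = ln 2 (an illustrative choice, not
    necessarily the paper's parameters)."""
    xs, symbols = [x0], []
    x = x0
    for _ in range(n_steps):
        s = 1.0 if x >= 0.0 else -1.0   # set point switched by the feedback rule
        x = 2.0 * x - s                  # exact flow over one clock period
        xs.append(x)
        symbols.append(s)
    return xs, symbols
```

For initial conditions in (-1, 1) the orbit remains in (-1, 1) while the symbol sequence s_n behaves like a Bernoulli shift, which is the sense in which the solution is both chaotic and analytically tractable.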
Matching Matched Filtering with Deep Networks for Gravitational-Wave Astronomy
NASA Astrophysics Data System (ADS)
Gabbard, Hunter; Williams, Michael; Hayes, Fergus; Messenger, Chris
2018-04-01
We report on the construction of a deep convolutional neural network that can reproduce the sensitivity of a matched-filtering search for binary black hole gravitational-wave signals. The standard method for the detection of well-modeled transient gravitational-wave signals is matched filtering. We use only whitened time series of measured gravitational-wave strain as an input, and we train and test on simulated binary black hole signals in synthetic Gaussian noise representative of Advanced LIGO sensitivity. We show that our network can classify signal from noise with a performance that emulates that of matched filtering applied to the same data sets when considering the sensitivity defined by receiver-operator characteristics.
Matching Matched Filtering with Deep Networks for Gravitational-Wave Astronomy.
Gabbard, Hunter; Williams, Michael; Hayes, Fergus; Messenger, Chris
2018-04-06
We report on the construction of a deep convolutional neural network that can reproduce the sensitivity of a matched-filtering search for binary black hole gravitational-wave signals. The standard method for the detection of well-modeled transient gravitational-wave signals is matched filtering. We use only whitened time series of measured gravitational-wave strain as an input, and we train and test on simulated binary black hole signals in synthetic Gaussian noise representative of Advanced LIGO sensitivity. We show that our network can classify signal from noise with a performance that emulates that of matched filtering applied to the same data sets when considering the sensitivity defined by receiver-operator characteristics.
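The baseline the network is compared against, matched filtering of whitened strain, amounts to a sliding inner product between the data and a unit-norm template. A minimal sketch with a made-up chirp-like template (illustrative only, not a physical waveform model or the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical chirp-like template (illustrative, not a physical waveform)
t = np.linspace(0.0, 1.0, 256)
template = np.sin(2 * np.pi * (20.0 + 40.0 * t) * t) * np.hanning(256)
template /= np.linalg.norm(template)

# whitened detector noise with a signal injected at sample 400
data = rng.standard_normal(1024)
data[400:656] += 6.0 * template

# matched filter: sliding inner product with the unit-norm template
snr = np.correlate(data, template, mode="valid")
peak = int(np.argmax(np.abs(snr)))   # estimated signal start sample
```

The peak of the filter output recovers the injection time; thresholding this statistic is what defines the receiver-operator characteristic the network is benchmarked against.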
NASA Astrophysics Data System (ADS)
Zhao, Yun-wei; Zhu, Zi-qiang; Lu, Guang-yin; Han, Bo
2018-03-01
The sine and cosine transforms implemented with digital filters have been used in transient electromagnetic methods for a few decades. Kong (2007) proposed a method of obtaining filter coefficients, which are computed in the sample domain by a Hankel transform pair. However, the curve shape of the Hankel transform pair changes with a parameter, which is usually set to 1 or 3 in the process of obtaining the digital filter coefficients of the sine and cosine transforms. First, this study investigates the influence of this parameter on the digital filter algorithm of the sine and cosine transforms, based on the digital filter algorithm of the Hankel transform and the relationship between the sine and cosine functions and the ±1/2 order Bessel functions of the first kind. The results show that the selection of the parameter strongly influences the precision of the digital filter algorithm. Second, given the optimal selection of the parameter, it is found that an optimal sampling interval s also exists that achieves the best precision of the digital filter algorithm. Finally, this study proposes four groups of sine and cosine transform digital filter coefficients with different lengths, which may help to develop the digital filter algorithm of sine and cosine transforms and promote its application.
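As a point of reference for what such filter coefficients approximate, the sine transform can be checked against a closed form by brute-force quadrature. The sketch below uses plain trapezoidal integration as the reference, not the optimized digital-filter weights the abstract discusses:

```python
import numpy as np

def sine_transform(f, w, t_max=60.0, n=200_001):
    """Approximate F(w) = integral_0^inf f(t) sin(w t) dt by trapezoidal
    quadrature -- a brute-force reference, not the optimized
    digital-filter evaluation discussed in the abstract."""
    t = np.linspace(0.0, t_max, n)
    y = f(t) * np.sin(w * t)
    dt = t[1] - t[0]
    return float((y[:-1] + y[1:]).sum() * dt / 2.0)

# closed form for f(t) = e^{-t}: F(w) = w / (1 + w^2), so F(2) = 0.4
approx = sine_transform(lambda t: np.exp(-t), 2.0)
```

A digital-filter implementation replaces this dense sampling with a short weighted sum of logarithmically spaced samples, which is why the choice of the shaping parameter and the sampling interval governs its precision.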
Optical Fourier filtering for whole lens assessment of progressive power lenses.
Spiers, T; Hull, C C
2000-07-01
Four binary filter designs for use in an optical Fourier filtering set-up were evaluated when taking quantitative measurements and when qualitatively mapping the power variation of progressive power lenses (PPLs). The binary filters tested were concentric ring, linear grating, grid and "chevron" designs. The chevron filter was considered best for quantitative measurements since it permitted a vernier acuity task to be used for measuring the fringe spacing, significantly reducing errors, and it also gave information on the polarity of the lens power. The linear grating filter was considered best for qualitatively evaluating the power variation. Optical Fourier filtering and a Nidek automatic focimeter were then used to measure the powers in the distance and near portions of five PPLs of differing design. Mean measurement error was 0.04 D with a maximum value of 0.13 D. Good qualitative agreement was found between the iso-cylinder plots provided by the manufacturer and the Fourier filter fringe patterns for the PPLs indicating that optical Fourier filtering provides the ability to map the power distribution across the entire lens aperture without the need for multiple point measurements. Arguments are presented that demonstrate that it should be possible to derive both iso-sphere and iso-cylinder plots from the binary filter patterns.
Information Filtering via Clustering Coefficients of User-Object Bipartite Networks
NASA Astrophysics Data System (ADS)
Guo, Qiang; Leng, Rui; Shi, Kerui; Liu, Jian-Guo
The clustering coefficient of user-object bipartite networks is presented to evaluate the overlap percentage of neighbors' rating lists, which could be used to measure interest correlations among neighbor sets. The collaborative filtering (CF) information filtering algorithm evaluates a given user's interests in terms of his/her friends' opinions, and has become one of the most successful technologies for recommender systems. In this paper, different from the object clustering coefficient, users' clustering coefficients of user-object bipartite networks are introduced to improve the user similarity measurement. Numerical results for the MovieLens and Netflix data sets show that users' clustering effects could enhance the algorithm performance. For the MovieLens data set, the algorithmic accuracy, measured by the average ranking score, can be improved by 12.0%, and the diversity could be improved by 18.2%, reaching 0.649 when the length of the recommendation list equals 50. For the Netflix data set, the accuracy could be improved by 14.5% in the optimal case, and the popularity could be reduced by 13.4% compared with the standard CF algorithm. Finally, we investigate the sparsity effect on the performance. This work indicates that the user clustering coefficient is an effective factor for measuring user similarity, and that statistical properties of user-object bipartite networks should be investigated to estimate users' tastes.
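One plausible form of such a user clustering coefficient (an assumption for illustration, not necessarily the paper's exact definition) is the mean Jaccard overlap of the user sets of each pair of objects a user has rated:

```python
from itertools import combinations

# toy user -> set of rated objects (hypothetical data)
ratings = {
    "u1": {"a", "b", "c"},
    "u2": {"a", "b"},
    "u3": {"b", "c", "d"},
}

# invert the bipartite network: object -> set of users who rated it
users_of = {}
for u, objs in ratings.items():
    for o in objs:
        users_of.setdefault(o, set()).add(u)

def user_clustering(u):
    """Mean Jaccard overlap of the user sets of each pair of objects the
    user has rated -- one plausible bipartite clustering coefficient."""
    pairs = list(combinations(ratings[u], 2))
    if not pairs:
        return 0.0
    return sum(len(users_of[i] & users_of[j]) / len(users_of[i] | users_of[j])
               for i, j in pairs) / len(pairs)
```

A CF recommender could then weight user-user similarities by this coefficient, which is the kind of refinement the abstract reports improving accuracy and diversity.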
Mass Conservation and Positivity Preservation with Ensemble-type Kalman Filter Algorithms
NASA Technical Reports Server (NTRS)
Janjic, Tijana; McLaughlin, Dennis B.; Cohn, Stephen E.; Verlaan, Martin
2013-01-01
Maintaining conservative physical laws numerically has long been recognized as being important in the development of numerical weather prediction (NWP) models. In the broader context of data assimilation, concerted efforts to maintain conservation laws numerically and to understand the significance of doing so have begun only recently. In order to enforce physically based conservation laws of total mass and positivity in the ensemble Kalman filter, we incorporate constraints to ensure that the filter ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. We show that the analysis steps of the ensemble transform Kalman filter (ETKF) and ensemble Kalman filter (EnKF) algorithms can conserve the mass integral, but do not preserve positivity. Further, if localization is applied or if negative values are simply set to zero, then the total mass is not conserved either. In order to ensure mass conservation, a projection matrix that corrects for localization effects is constructed. In order to maintain both mass conservation and positivity preservation through the analysis step, we construct a data assimilation algorithm based on quadratic programming and ensemble Kalman filtering. Mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate constraints. Some simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. The results show clear improvements in both analyses and forecasts, particularly in the presence of localized features. Behavior of the algorithm is also tested in the presence of model error.
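A toy version of the constrained update can be written as a post-analysis projection of each ensemble member. The multiplicative rescale below enforces the two constraints but is only a simple stand-in for the quadratic-programming formulation the authors actually use:

```python
import numpy as np

def project_mass_positive(x, total_mass):
    """Enforce nonnegativity and total-mass conservation after an ensemble
    update: clip negatives to zero, then rescale to the target mass.
    A multiplicative stand-in for the paper's quadratic-programming step
    (not the Euclidean projection a QP would compute)."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, None)
    s = x.sum()
    if s == 0.0:
        return np.full_like(x, total_mass / x.size)
    return x * (total_mass / s)
```

Applied to an analysis member such as [-1, 2, 3] with target mass 4, the clip removes the unphysical negative value and the rescale restores the mass integral, mirroring the point in the abstract that naive zeroing of negatives alone would break conservation.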
A Voxel-Based Filtering Algorithm for Mobile LiDAR Data
NASA Astrophysics Data System (ADS)
Qin, H.; Guan, G.; Yu, Y.; Zhong, L.
2018-04-01
This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points, in xy-plane, are first partitioned into a set of two-dimensional (2-D) blocks with a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward growing processing is performed to roughly separate terrain from non-terrain points with global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed in the analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
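The first, rough separation step can be sketched in a greatly simplified form. The function below flags a point as terrain if it lies near the lowest point of its 2-D cell; the cell size and height tolerance are illustrative stand-ins for the paper's voxel structure, upward-growing thresholds, and curvature refinement:

```python
import numpy as np

def rough_terrain_mask(points, cell=2.0, height_tol=0.5):
    """Greatly simplified stand-in for the paper's voxel-based upward
    growing: flag a point as terrain if it lies within height_tol of the
    lowest point in its 2-D cell (cell size and tolerance are illustrative).
    points: (N, 3) array of x, y, z coordinates."""
    keys = np.floor(points[:, :2] / cell).astype(int)   # 2-D block index
    lowest = {}
    for k, z in zip(map(tuple, keys), points[:, 2]):    # lowest z per cell
        if k not in lowest or z < lowest[k]:
            lowest[k] = z
    return np.array([z - lowest[tuple(k)] <= height_tol
                     for k, z in zip(keys, points[:, 2])])
```

In the paper this rough pass is followed by a curvature check inside each 3-D voxel; here only the block partition and the lowest-point criterion are shown.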
Wide-Stopband Aperiodic Phononic Filters
NASA Technical Reports Server (NTRS)
Rostem, Karwan; Chuss, David; Denis, K. L.; Wollack, E. J.
2016-01-01
We demonstrate that a phonon stopband can be synthesized from an aperiodic structure comprising a discrete set of phononic filter stages. Each element of the set has a dispersion relation that defines a complete bandgap when calculated under a Bloch boundary condition. Hence, the effective stopband width in an aperiodic phononic filter (PnF) may readily exceed that of a phononic crystal with a single lattice constant or a coherence scale. With simulations of multi-moded phononic waveguides, we discuss the effects of finite geometry and mode-converting junctions on the phonon transmission in PnFs. The principles described may be utilized to form a wide stopband in acoustic and surface wave media. Relative to the quantum of thermal conductance for a uniform mesoscopic beam, a PnF with a stopband covering 1.6-10.4 GHz is estimated to reduce the thermal conductance by an order of magnitude at 75 mK.
Ichien, K; Sawada, A; Yamamoto, T; Kitazawa, Y; Shiraki, R; Yoh, M
1999-04-01
Based on our previous report that showed enhanced transfer of mitomycin C to the sclera and the conjunctiva by dissolving the antiproliferative in a reversible thermo-setting gel, we conducted a study to investigate the efficacy of the mitomycin C-gel in the rabbit. We subconjunctivally injected 0.1 ml of the mitomycin C-gel solution containing several amounts of the drug. Trephination was performed in the injected region 24 hours later. Intraocular pressure measurement, and photography and ultrasound biomicroscopic examination of the filtering bleb were done 1, 2, and 4 weeks postoperatively. The gel containing 3.0 micrograms or more mitomycin C significantly enhanced bleb formation in addition to reducing the intraocular pressure. The reversible thermo-setting gel seems to facilitate filtration following glaucoma filtering surgery in the rabbit and deserves further investigation as a new method of mitomycin C application.
NASA Astrophysics Data System (ADS)
Tan, Xiangli; Yang, Jungang; Deng, Xinpu
2018-04-01
In the process of geometric correction of remote sensing images, a large number of redundant control points may occasionally result in low correction accuracy. In order to solve this problem, a control point filtering algorithm based on RANdom SAmple Consensus (RANSAC) was proposed. The basic idea of the RANSAC algorithm is to use the smallest data set possible to estimate the model parameters and then enlarge this set with consistent data points. In this paper, unlike traditional methods of geometric correction using Ground Control Points (GCPs), simulation experiments are carried out to correct remote sensing images using visible stars as control points. In addition, the accuracy of geometric correction without Star Control Point (SCP) optimization is also shown. The experimental results show that the SCP filtering method based on the RANSAC algorithm greatly improves the accuracy of remote sensing image correction.
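The minimal-sample-then-enlarge idea can be sketched for a 2-D affine correction model. The data, the affine model, and the thresholds below are all hypothetical; the point is the consensus loop itself:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform (3x2 matrix) mapping src -> dst."""
    A = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params

def ransac_filter(src, dst, n_iter=200, tol=0.5, seed=0):
    """Keep the largest consensus set of control points under an affine
    model -- the RANSAC filtering idea described in the abstract."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    ones = np.ones((len(src), 1))
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)  # minimal sample
        params = fit_affine(src[idx], dst[idx])
        residual = np.linalg.norm(np.hstack([src, ones]) @ params - dst, axis=1)
        inliers = residual < tol                           # enlarge with consistent points
        if inliers.sum() > best.sum():
            best = inliers
    return best

# synthetic control points: a pure translation plus 4 gross outliers
rng = np.random.default_rng(42)
src = rng.uniform(0.0, 100.0, (20, 2))
dst = src + np.array([5.0, -3.0])
dst[:4] += 50.0
keep = ransac_filter(src, dst)
```

The redundant (grossly wrong) control points are rejected, and the surviving set can then be used for the final geometric correction.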
Activated Biological Filters (ABF Towers). Instructor's Guide. Biological Treatment Process Control.
ERIC Educational Resources Information Center
Wooley, John F.
This instructor's manual contains materials needed to teach a two-lesson unit on activated bio-filters (ABF). These materials include: (1) an overview of the two lessons; (2) lesson plans; (3) lecture outlines (keyed to a set of slides designed for use with the lessons); (4) overhead transparency masters; (5) worksheets for each lesson (with…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-04
... end-user defined filters (the ``Service''). Spread Crawler, which was developed by MEB, listens to a... based on a registered end-user input (i.e. custom-set parameters for particular symbols or industry...-user(s), if any, would be interested in seeing this order. These filtering rules are contained in a...
Reichow, Alan W; Citek, Karl; Edlich, Richard F
2006-01-01
The danger of exposure to ultraviolet (UV) radiation in both the natural environment and artificial occupational settings has long been recognized by national and international standards committees and worker safety agencies. There is an increasing body of literature that suggests that protection from UV exposure is not enough. Unprotected exposure to the short wavelengths of the visible spectrum, termed the "blue light hazard", is gaining acceptance as a true risk to long-term visual health. Global standards and experts in the field are now warning that those individuals who spend considerable time outdoors should seek sun filter eyewear with high impact resistant lenses that provide 100% UV filtration, high levels of blue light filtration, and full visual field lens/frame coverage as provided by high wrap eyewear. The Skin Cancer Foundation has endorsed certain sunglasses as "product[s]...effective [as] UV filter[s] for the eyes and surrounding skin". However, such endorsement does not necessarily mean that the eyewear meets all the protective needs for outdoor use. There are several brands that offer products with such protective characteristics. Performance sun eyewear by Nike Vision, available in both corrective and plano (nonprescription) forms, is one such brand incorporating these protective features.
ERP Estimation using a Kalman Filter in VLBI
NASA Astrophysics Data System (ADS)
Karbon, M.; Soja, B.; Nilsson, T.; Heinkelmann, R.; Liu, L.; Lu, C.; Mora-Diaz, J. A.; Raposo-Pulido, V.; Xu, M.; Schuh, H.
2014-12-01
Geodetic Very Long Baseline Interferometry (VLBI) is one of the primary space geodetic techniques, providing the full set of Earth Orientation Parameters (EOP), and it is unique for observing long term Universal Time (UT1). For applications such as satellite-based navigation and positioning, accurate and continuous ERP obtained in near real-time are essential. They also allow the precise tracking of interplanetary spacecraft. One of the goals of VGOS (VLBI Global Observing System) is to provide such near real-time ERP. With the launch of this next generation VLBI system, the International VLBI Service for Geodesy and Astrometry (IVS) increased its efforts not only to reach 1 mm accuracy on a global scale but also to reduce the time span between the collection of VLBI observations and the availability of the final results substantially. Project VLBI-ART contributes to these objectives by implementing an elaborate Kalman filter, which represents a perfect tool for analyzing VLBI data in quasi real-time. The goal is to implement it in the GFZ version of the Vienna VLBI Software (VieVS) as a completely automated tool, i.e., with no need for human interaction. Here we present the methodology and first results of Kalman filtered EOP from VLBI data.
On the use of through-fall exclusion experiments to filter model hypotheses.
NASA Astrophysics Data System (ADS)
Fisher, R.
2015-12-01
One key threat to the continued existence of large tropical forest carbon reservoirs is the increasing severity of drought across Amazonian forests, observed in climate model predictions, in recent extreme drought events, and in the more chronic lengthening of the dry season of southeastern Amazonia. Model comprehension of these systems is in its infancy, particularly with regard to the sensitivities of model output to the representation of hydraulic strategies in tropical forest systems. Here we use data from the ongoing 14-year-old Caxiuana through-fall exclusion experiment in eastern Brazil to filter a set of representations of the costs and benefits of alternative hydraulic strategies. In representations where there is a high resource cost to hydraulic resilience, the trait-filtering CLM4.5(ED) model selects vegetation types that are sensitive to drought. Conversely, where drought tolerance is inexpensive, a more robust ecosystem emerges from the vegetation dynamics prediction. Thus, there is an impact of trait trade-off relationships on rainforest drought tolerance. It is possible to constrain the more realistic scenarios using outputs from the drought experiments. Better prediction would likely result from a more comprehensive understanding of the costs and benefits of alternative plant strategies.
Observation of IPL spectra using detector system incorporating broadband optical filters
NASA Astrophysics Data System (ADS)
Clarkson, D. McG.
2007-07-01
Systems using intense pulsed light are increasingly being used in therapy applications, where issues related to the safety and performance of devices are becoming more urgent to address. Mechanisms to address this include a suitable standards framework and also the development and application of appropriate measurement techniques. An approach using conventional bandpass optical filters and silicon photodetectors has been implemented using an analogue USB data-capture interface linked to a laptop PC. An initial system with 8 concurrent channels has been upgraded to a separate system sampling up to 16 analogue channels. Sampling takes place at the maximum hardware conversion rate of the USB device. Observations have been made of a range of intense pulsed light systems, including a Lumenis One unit with a range of discrete filters. The system has been of value in determining the basic parameters of output pulse profile and spectral composition. This has in turn been related to aspects of standards development for both device manufacture and allocation of appropriate safety eyewear. Initial assessments of a subset of intense pulsed light systems indicate significant complexities in terms, for example, of variation in spectral content as a function of device output setting.
Variable flexure-based fluid filter
Brown, Steve B.; Colston, Jr., Billy W.; Marshall, Graham; Wolcott, Duane
2007-03-13
An apparatus and method for filtering particles from a fluid comprises a fluid inlet, a fluid outlet, a variable size passage between the fluid inlet and the fluid outlet, and means for adjusting the size of the variable size passage for filtering the particles from the fluid. An inlet fluid flow stream is introduced to a fixture with a variable size passage. The size of the variable size passage is set so that the fluid passes through the variable size passage but the particles do not pass through the variable size passage.
Numerical study on self-cleaning canister filter with modified filter cap
NASA Astrophysics Data System (ADS)
Mohammed, Akmal Nizam; Zolkhaely, Mohd Hafiz; Sahrudin, Mohd Sahrizan; Razali, Mohd Azahari; Sapit, Azwan; Hushim, Mohd Faisal
2017-04-01
Air filtration systems play an important role in getting good quality air into turbomachinery such as gas turbines. The filtration system and filters improve the quality of air and protect the gas turbine parts from contaminants which could bring damage. This paper is focused on the configuration of the self-cleaning canister filter in order to obtain the minimal pressure drop along the filter. The configuration includes a modified canister filter cap that is based on the basic geometry that conforms to industry standard. This paper describes the use of CFD to simulate and analyze the flow through the filter. This tool is also used to monitor variables such as pressure and velocity along the filter and to visualize them in the form of contours, vectors and streamlines. In this study, the main parameter varied is the inlet velocity set in the boundary condition during simulations: 0.032, 0.063, 0.094 and 0.126 m/s respectively. The data obtained from simulations are then validated with reference data sourced from the industry, and comparisons have subsequently been made for these two filters. As a result, the improvement in pressure drop for the modified filter is found to be 11.47% to 14.82% compared to the basic filter at inlet velocities from 0.032 to 0.126 m/s. The total pressure drop is 292.3 Pa for the basic filter and 251.11 Pa for the modified filter, a reduction of 41.19 Pa, or 14.1% relative to the basic filter.
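The quoted overall improvement can be checked directly from the reported pressure drops:

```python
# total pressure drops reported in the abstract
dp_basic, dp_modified = 292.3, 251.11   # Pa

reduction = dp_basic - dp_modified       # Pa saved by the modified cap
pct = 100.0 * reduction / dp_basic       # percentage reduction
```

The reduction works out to about 41.19 Pa, or roughly 14.1% of the basic filter's pressure drop, consistent with the figures in the abstract.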
Miller, Arthur L; Drake, Pamela L; Murphy, Nathaniel C; Cauda, Emanuele G; LeBouf, Ryan F; Markevicius, Gediminas
Miners are exposed to silica-bearing dust, which can lead to silicosis, a potentially fatal lung disease. Currently, airborne silica is measured by collecting filter samples and sending them to a laboratory for analysis. Since this may take weeks, a field method is needed to inform decisions aimed at reducing exposures. This study investigates a field-portable Fourier transform infrared (FTIR) method for end-of-shift (EOS) measurement of silica on filter samples. Since the method entails localized analyses, spatial uniformity of dust deposition can affect accuracy and repeatability. The study, therefore, assesses the influence of radial deposition uniformity on the accuracy of the method. Using laboratory-generated Minusil and coal dusts and three different types of sampling systems, multiple sets of filter samples were prepared. All samples were collected in pairs to create parallel sets for training and validation. Silica was measured by FTIR at nine locations across the face of each filter and the data analyzed using a multiple regression analysis technique that compared various models for predicting silica mass on the filters using different numbers of "analysis shots." It was shown that deposition uniformity is independent of particle type (kaolin vs. silica), which suggests the role of aerodynamic separation is negligible. Results also reflected the correlation between the location and number of shots versus the predictive accuracy of the models. The coefficient of variation (CV) for the models when predicting mass of validation samples was 4%-51%, depending on the number of points analyzed and the type of sampler used, which affected the uniformity of radial deposition on the filters. It was shown that using a single shot at the center of the filter yielded predictivity adequate for a field method (93% return, CV approximately 15%) for samples collected with 3-piece cassettes.
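The regression step, predicting total filter mass from a handful of localized shot responses, can be sketched on synthetic data. All numbers below (masses, non-uniformity range, noise level) are invented for illustration, not the study's measurements:

```python
import numpy as np

# synthetic training set (illustrative numbers, not the study's data)
rng = np.random.default_rng(1)
n_filters, n_shots = 30, 9
true_mass = rng.uniform(50.0, 500.0, n_filters)            # micrograms

# per-shot responses: proportional to mass, perturbed by radial
# deposition non-uniformity (multiplicative) and instrument noise
response = (true_mass[:, None] * rng.uniform(0.9, 1.1, (n_filters, n_shots))
            + rng.normal(0.0, 5.0, (n_filters, n_shots)))

# multiple regression: predict filter mass from the nine shot responses
X = np.hstack([response, np.ones((n_filters, 1))])         # add intercept
coef, *_ = np.linalg.lstsq(X, true_mass, rcond=None)
pred = X @ coef
cv = np.std(pred - true_mass) / true_mass.mean()           # in-sample CV
```

Dropping columns of `X` mimics using fewer shots; as in the study, predictive accuracy then depends on how uniformly the dust is deposited across the filter face.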
Modeling Flow Past a Tilted Vena Cava Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singer, M A; Wang, S L
Inferior vena cava filters are medical devices used to prevent pulmonary embolism (PE) from deep vein thrombosis. In particular, retrievable filters are well-suited for patients who are unresponsive to anticoagulation therapy and whose risk of PE decreases with time. The goal of this work is to use computational fluid dynamics to evaluate the flow past an unoccluded and partially occluded Celect inferior vena cava filter. In particular, the hemodynamic response to thrombus volume and filter tilt is examined, and the results are compared with flow conditions that are known to be thrombogenic. A computer model of the filter inside a model vena cava is constructed using high resolution digital photographs and methods of computer aided design. The models are parameterized using the Overture software framework, and a collection of overlapping grids is constructed to discretize the flow domain. The incompressible Navier-Stokes equations are solved, and the characteristics of the flow (i.e., velocity contours and wall shear stresses) are computed. The volume of stagnant and recirculating flow increases with thrombus volume. In addition, as the filter increases tilt, the cava wall adjacent to the tilted filter is subjected to low velocity flow that gives rise to regions of low wall shear stress. The results demonstrate the ease of IVC filter modeling with the Overture software framework. Flow conditions caused by the tilted Celect filter may elevate the risk of intrafilter thrombosis and facilitate vascular remodeling. This latter condition also increases the risk of penetration and potential incorporation of the hook of the filter into the vena caval wall, thereby complicating filter retrieval. Consequently, severe tilt at the time of filter deployment may warrant early clinical intervention.
NASA Astrophysics Data System (ADS)
Vandermoere, Stany; De Neve, Stefaan
2016-04-01
Flanders (Belgium) is confronted with reactive phosphorus concentrations in streams and lakes which are three to four times higher than the 0.1 ppm P limit set by the Water Framework Directive. Much of the excessive P input in surface waters is derived from agriculture. Direct P input from artificially drained fields (short-circuiting the buffering capacity of the subsoil) is suspected to be one of the major sources. We aim to develop simple and cheap filters that can be directly installed in the field to reduce P concentration from the drain water. Here we report on the performance of such filters tested at lab scale. As starting materials for the P filter, iron coated sand and acid pre-treated glauconite were used. These materials, both rich in Fe, were mixed in ratios of 75/25, 65/35, 50/50 and 0/100 (iron coated sand/glauconite ratio based on weight basis) and filled in plastic tubes. A screening experiment using the constant head method with a 0.01 M CaCl2 solution containing 1 ppm P showed that all four types of mixtures reduced the P concentration in the outflowing water to almost zero, and that the 75/25, 65/35 and 0/100 mixtures had a sufficiently large hydraulic conductivity of 0.9 to 6.0 cm/min, while the hydraulic conductivity of the 50/50 mixture was too low (< 0.4 cm/min). In a second experiment the iron coated sand and acid pre-treated glauconite were mixed in ratios of 75/25, 65/35 and 0/100 and filled in the same plastic tubes as in the first experiment. Subsequently a 0.01 M CaCl2 solution containing 1 ppm P was passed through the filters over several days, in amounts equivalent to half of the yearly water volume passing through the drains. This experiment firstly showed that in all cases the hydraulic conductivity fluctuated strongly: it decreased from 4.0-6.0 cm/min to 2.0-1.5 cm/min for the 75/25 filter, and to values < 0.4 cm/min for the 65/35 filter, whereas it increased from 0.8 to 1.4 cm/min for the 0/100 filter. 
Secondly, we observed a decrease in the P removal efficiency with time on each day for all filters: from 90% removal to 80% removal for the 75/25 and 65/35 filters, while for the 0/100 filter the P removal almost reduced to 0%. Based on these results the 75/25 (iron coated sand/glauconite) filter will be tested at field level, and additional research will be directed towards prediction of the evolution of hydraulic conductivity of the filter materials.
Methodology for modeling the microbial contamination of air filters.
Joe, Yun Haeng; Yoon, Ki Young; Hwang, Jungho
2014-01-01
In this paper, we propose a theoretical model to simulate microbial growth on contaminated air filters and entrainment of bioaerosols from the filters to an indoor environment. Air filter filtration and antimicrobial efficiencies, and effects of dust particles on these efficiencies, were evaluated. The number of bioaerosols downstream of the filter could be characterized according to three phases: initial, transitional, and stationary. In the initial phase, the number was determined by filtration efficiency, the concentration of dust particles entering the filter, and the flow rate. During the transitional phase, the number of bioaerosols gradually increased up to the stationary phase, at which point no further increase was observed. The antimicrobial efficiency and flow rate were the dominant parameters affecting the number of bioaerosols downstream of the filter in the transitional and stationary phase, respectively. It was found that the nutrient fraction of dust particles entering the filter caused a significant change in the number of bioaerosols in both the transitional and stationary phases. The proposed model would be a solution for predicting the air filter life cycle in terms of microbiological activity by simulating the microbial contamination of the filter.
Duffett, L; Carrier, M
2017-01-01
Use of inferior vena cava (IVC) filters has increased dramatically in recent decades, despite a lack of evidence that their use has impacted venous thromboembolism (VTE)-related mortality. This increased use appears to be primarily driven by the insertion of retrievable filters for prophylactic indications. A growing body of evidence, however, suggests that IVC filters are frequently associated with clinically important adverse events, prompting a closer look at their role. We sought to narratively review the current evidence on the efficacy and safety of IVC filter placements. Inferior vena cava filters remain the only treatment option for patients with an acute (within 2-4 weeks) proximal deep vein thrombosis (DVT) or pulmonary embolism and an absolute contraindication to anticoagulation. In such patients, anticoagulation should be resumed and IVC filters removed as soon as the contraindication has passed. For all other indications, there is insufficient evidence to support the use of IVC filters and high-quality trials are required. In patients where an IVC filter remains, regular follow-up to reassess removal and screen for filter-related complications should occur. © 2016 International Society on Thrombosis and Haemostasis.
Nagare, Mukund B; Patil, Bhushan D; Holambe, Raghunath S
2017-02-01
B-mode ultrasound images are degraded by inherent noise called speckle, which has a considerable impact on image quality. This noise reduces the accuracy of image analysis and interpretation; therefore, reduction of speckle noise is an essential task that improves the accuracy of clinical diagnostics. In this paper, a multi-directional perfect-reconstruction (PR) filter bank is proposed based on a 2-D eigenfilter approach. The proposed method is used to design two-dimensional (2-D) two-channel linear-phase FIR perfect-reconstruction filter banks. In this method, fan-shaped, diamond-shaped and checkerboard-shaped filters are designed. The quadratic measure of the error function between the passband and stopband of the filter is used as the objective function. First, the low-pass analysis filter is designed, and then the PR condition is expressed as a set of linear constraints on the corresponding synthesis low-pass filter. Subsequently, the corresponding synthesis filter is designed using the eigenfilter design method with linear constraints. The newly designed 2-D filters are used in a translation-invariant pyramidal directional filter bank (TIPDFB) for reduction of speckle noise in ultrasound images. The proposed 2-D filters give better symmetry, regularity and frequency selectivity in comparison to existing design methods. The proposed method is validated on synthetic and real ultrasound data, which confirms improvement in the quality of ultrasound images and efficient suppression of speckle noise compared to existing methods.
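As a hedged illustration of the eigenfilter idea in the abstract above, here is a minimal 1-D analogue (the paper's design is 2-D with PR constraints, which this sketch omits): the quadratic passband/stopband error is collected into a matrix Q, and the filter coefficients are the eigenvector of its smallest eigenvalue. The band edges and filter order are made-up specifications.

```python
import numpy as np

# 1-D linear-phase lowpass eigenfilter sketch (hypothetical specs).
M = 10                                     # half-order; filter length 2*M + 1
wp, ws = 0.3 * np.pi, 0.5 * np.pi          # passband / stopband edges
w = np.linspace(0, np.pi, 2048)
C = np.cos(np.outer(w, np.arange(M + 1)))  # zero-phase amplitude A(w) = C @ b

pb, sb = w <= wp, w >= ws
# Passband error is measured against the DC response (reference-frequency
# trick); stopband error is measured against zero.
d = C[0] - C[pb]
Q = d.T @ d + C[sb].T @ C[sb]
# Coefficients = eigenvector of the smallest eigenvalue of Q.
eigvals, eigvecs = np.linalg.eigh(Q)
b = eigvecs[:, 0]
A = C @ b / (C[0] @ b)                     # normalize DC gain to 1
print(np.abs(A[sb]).max())                 # stopband ripple, should be small
```

The same quadratic-error-plus-eigenvector structure carries over to the 2-D fan, diamond, and checkerboard responses, with the PR condition entering as additional linear constraints.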
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rambo, Patrick; Schwarz, Jens; Kimmel, Mark
2016-09-27
We have developed high damage threshold filters to modify the spatial profile of a high energy laser beam. The filters are formed by laser ablation of a transmissive window. The ablation sites constitute scattering centers which can be filtered in a subsequent spatial filter. Finally, by creating the filters in dielectric materials, we see an increased laser-induced damage threshold from previous filters created using 'metal on glass' lithography.
Calibration and use of filter test facility orifice plates
NASA Astrophysics Data System (ADS)
Fain, D. E.; Selby, T. W.
1984-07-01
There are three official DOE filter test facilities. These test facilities are used by the DOE, and others, to test nuclear grade HEPA filters to provide quality assurance that the filters meet the required specifications. The filters are tested for both filter efficiency and pressure drop. In the test equipment, standard orifice plates are used to set the specified flow rates for the tests. A need has existed to calibrate the orifice plates from the three facilities against a common calibration source to assure that the facilities have comparable tests. A project has been undertaken to calibrate these orifice plates. In addition to reporting the results of the calibrations of the orifice plates, the means for using the calibration results will be discussed. A comparison of the orifice discharge coefficients for the orifice plates used at the three facilities will be given. The pros and cons of using mass flow or volume flow rates for testing will be discussed. It is recommended that volume flow rates be used as a more practical and comparable means of testing filters. The rationale for this recommendation will be discussed.
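For readers unfamiliar with orifice-plate metering, the flow rate set by a plate follows the standard relation Q = Cd·A·sqrt(2·ΔP/ρ), with the approach-velocity factor neglected; the discharge coefficient Cd is what a calibration like the one above determines. All numbers below are hypothetical.

```python
import math

# Simplified orifice-plate relation (approach-velocity factor neglected).
def orifice_volume_flow(cd, bore_diameter_m, dp_pa, rho_kg_m3):
    """Q = Cd * A * sqrt(2 * dP / rho), in m^3/s."""
    area = math.pi * (bore_diameter_m / 2.0) ** 2
    return cd * area * math.sqrt(2.0 * dp_pa / rho_kg_m3)

# Hypothetical example: Cd = 0.62, 50 mm bore, 250 Pa across the plate, air.
q = orifice_volume_flow(0.62, 0.050, 250.0, 1.2)
print(round(q * 3600, 1), "m^3/h")
```

Note the density in the denominator: this is why the abstract's mass-flow versus volume-flow question matters, since converting between the two requires knowing the air density at test conditions.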
Bayesian learning for spatial filtering in an EEG-based brain-computer interface.
Zhang, Haihong; Yang, Huijuan; Guan, Cuntai
2013-07-01
Spatial filtering for EEG feature extraction and classification is an important tool in brain-computer interface. However, there is generally no established theory that links spatial filtering directly to Bayes classification error. To address this issue, this paper proposes and studies a Bayesian analysis theory for spatial filtering in relation to Bayes error. Following the maximum entropy principle, we introduce a gamma probability model for describing single-trial EEG power features. We then formulate and analyze the theoretical relationship between Bayes classification error and the so-called Rayleigh quotient, which is a function of spatial filters and basically measures the ratio in power features between two classes. This paper also reports our extensive study that examines the theory and its use in classification, using three publicly available EEG data sets and state-of-the-art spatial filtering techniques and various classifiers. Specifically, we validate the positive relationship between Bayes error and Rayleigh quotient in real EEG power features. Finally, we demonstrate that the Bayes error can be practically reduced by applying a new spatial filter with lower Rayleigh quotient.
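The Rayleigh quotient in the abstract above can be sketched as follows: given two class covariance matrices, the spatial filter attaining the extremal power ratio is a generalized eigenvector, as in common spatial patterns. The toy data and dimensions below are invented for illustration.

```python
import numpy as np
from scipy.linalg import eigh

# Two classes of 4-channel "EEG" where only channel 3 differs in power.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((500, 4)) * np.array([1.0, 1.0, 3.0, 1.0])  # class 1
X2 = rng.standard_normal((500, 4)) * np.array([1.0, 1.0, 0.5, 1.0])  # class 2
C1 = np.cov(X1, rowvar=False)
C2 = np.cov(X2, rowvar=False)

# Generalized eigenproblem C1 w = lambda C2 w; the eigenvalues are the
# extremal values of the Rayleigh quotient r(w) = (w' C1 w) / (w' C2 w).
lams, W = eigh(C1, C2)
w = W[:, 0]                                  # filter with the smallest quotient
r = (w @ C1 @ w) / (w @ C2 @ w)
print(r, lams[0])                            # the quotient attains lams[0]
```

Per the theory summarized above, a filter with a more extreme Rayleigh quotient corresponds to a lower attainable Bayes error between the two power-feature distributions.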
Workplace Exposure to Titanium Dioxide Nanopowder Released from a Bag Filter System
Ji, Jun Ho; Kim, Jong Bum; Lee, Gwangjae; Noh, Jung-Hun; Yook, Se-Jin; Cho, So-Hye; Bae, Gwi-Nam
2015-01-01
Many researchers who use laboratory-scale synthesis systems to manufacture nanomaterials could be easily exposed to airborne nanomaterials during the research and development stage. This study used various real-time aerosol detectors to investigate the presence of nanoaerosols in a laboratory used to manufacture titanium dioxide (TiO2). The TiO2 nanopowders were produced via flame synthesis and collected by a bag filter system for subsequent harvesting. Highly concentrated nanopowders were released from the outlet of the bag filter system into the laboratory. The fractional particle collection efficiency of the bag filter system was only 20% at particle diameter of 100 nm, which is much lower than the performance of a high-efficiency particulate air (HEPA) filter. Furthermore, the laboratory hood system was inadequate to fully exhaust the air discharged from the bag filter system. Unbalanced air flow rates between bag filter and laboratory hood systems could result in high exposure to nanopowder in laboratory settings. Finally, we simulated behavior of nanopowders released in the laboratory using computational fluid dynamics (CFD). PMID:26125024
Information theoretic methods for image processing algorithm optimization
NASA Astrophysics Data System (ADS)
Prokushkin, Sergey F.; Galil, Erez
2015-01-01
Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are hardly achievable by manual calibration; thus an automated approach is a must. We discuss an information theory based metric for evaluating algorithm adaptive characteristics (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).
Concrete ensemble Kalman filters with rigorous catastrophic filter divergence
Kelly, David; Majda, Andrew J.; Tong, Xin T.
2015-01-01
The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature. PMID:26261335
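A minimal perturbed-observation ensemble Kalman filter on a stable linear toy model (nothing like the turbulent forecast models above, where catastrophic divergence arises) makes the forecast/analysis cycle the abstract refers to concrete; all settings are illustrative.

```python
import numpy as np

# Perturbed-observation EnKF on a 2-D linear model, observing one component.
rng = np.random.default_rng(1)
n, m, N = 2, 1, 50                            # state dim, obs dim, ensemble size
A = np.array([[1.0, 0.1], [0.0, 1.0]])        # forecast model
H = np.array([[1.0, 0.0]])                    # observe first component only
R = np.array([[0.25]])                        # observation noise covariance

truth = np.array([0.0, 1.0])
ens = rng.standard_normal((N, n))             # initial ensemble
for _ in range(30):
    truth = A @ truth
    y = H @ truth + rng.standard_normal(m) * 0.5
    ens = ens @ A.T + 0.05 * rng.standard_normal((N, n))   # forecast step
    X = ens - ens.mean(axis=0)
    P = X.T @ X / (N - 1)                     # ensemble covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)           # Kalman gain
    yp = y + rng.standard_normal((N, m)) * 0.5             # perturbed obs
    ens = ens + (yp - ens @ H.T) @ K.T        # analysis step
print(np.abs(ens.mean(axis=0) - truth))       # analysis error stays bounded
```

In the paper's model the same update rule is applied, but the nonlinear dynamics drive the ensemble state to machine infinity even though the truth stays bounded; here the linear, observable model keeps the analysis well behaved.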
Nicolas, M; Malvé, M; Peña, E; Martínez, M A; Leask, R
2015-02-05
In this study, the trapping ability of the Günther Tulip and Celect inferior vena cava filters was evaluated. Thrombus capture rates of the filters were tested in vitro in a horizontal position with thrombus diameters of 3 and 6 mm and a tube diameter of 19 mm. The filters were tested in centered and tilted positions. Sets of 30 clots were injected into the model and the same process was repeated 20 times for each condition simulated. The pressure drop along the system was also measured and the percentage of clots captured was recorded. The Günther Tulip filter showed superiority in all cases, trapping almost 100% of 6 mm clots in both eccentric and tilted positions, and trapping 81.7% of the 3 mm clots in a centered position and 69.3% in a maximally tilted position. The efficiency of all filters tested decreased as the size of the embolus decreased and as the filter was tilted. The injection of 6 clots raised the pressure drop to 4.1 mmHg, a reasonable value that does not cause obstruction of blood flow through the system. Copyright © 2014 Elsevier Ltd. All rights reserved.
A Comparison of Filter-based Approaches for Model-based Prognostics
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Saha, Bhaskar; Goebel, Kai
2012-01-01
Model-based prognostics approaches use domain knowledge about a system and its failure modes through the use of physics-based models. Model-based prognosis is generally divided into two sequential problems: a joint state-parameter estimation problem, in which, using the model, the health of a system or component is determined based on the observations; and a prediction problem, in which, using the model, the state-parameter distribution is simulated forward in time to compute the end of life and remaining useful life. The first problem is typically solved through the use of a state observer, or filter. The choice of filter depends on the assumptions that may be made about the system and on the desired algorithm performance. In this paper, we review three separate filters for the solution to the first problem: the Daum filter, an exact nonlinear filter; the unscented Kalman filter, which approximates nonlinearities through the use of a deterministic sampling method known as the unscented transform; and the particle filter, which approximates the state distribution using a finite set of discrete, weighted samples, called particles. Using a centrifugal pump as a case study, we conduct a number of simulation-based experiments investigating the performance of the different algorithms as applied to prognostics.
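Of the three filters reviewed, the particle filter is the simplest to sketch. The bootstrap variant below uses an invented scalar model, not the centrifugal pump model of the paper: particles are propagated through the dynamics, weighted by the observation likelihood, and resampled.

```python
import numpy as np

# Bootstrap particle filter on a toy scalar model (illustrative numbers).
rng = np.random.default_rng(2)
Np, T = 1000, 40
x_true = 0.0
particles = rng.standard_normal(Np)
for t in range(T):
    x_true = 0.95 * x_true + 1.0 + rng.standard_normal() * 0.1      # dynamics
    y = x_true + rng.standard_normal() * 0.3                        # observation
    particles = 0.95 * particles + 1.0 + rng.standard_normal(Np) * 0.1
    w = np.exp(-0.5 * ((y - particles) / 0.3) ** 2)                 # likelihood
    w /= w.sum()
    idx = rng.choice(Np, size=Np, p=w)                              # resample
    particles = particles[idx]
est = particles.mean()
print(est, x_true)                             # estimate tracks the true state
```

The weighted-sample representation is what lets the particle filter handle the non-Gaussian state-parameter distributions that arise in the pump case study, at the cost of more computation than the unscented Kalman filter.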
Developing particulate thin filter using coconut fiber for motor vehicle emission
NASA Astrophysics Data System (ADS)
Wardoyo, A. Y. P.; Juswono, U. P.; Riyanto, S.
2016-03-01
The number of motor vehicles in Indonesia has increased sharply from year to year, with annual growth reaching 22%. Motor vehicles produce particulate emissions of different sizes and high concentrations, depending on vehicle type, fuel, and engine capacity. These particle emissions not only contribute significantly to atmospheric particle levels but are also adverse to human health. A filter is needed to reduce the particle emissions. This study aimed to develop a thin filter using coconut fiber to reduce particulate emissions from motor vehicles. The filter was made of coconut fibers that were ground into powder and mixed with glue. The filter was tested by measuring the particle concentrations coming directly from the vehicle exhaust and the particle concentrations after passing through the filter. The efficiency of the filter was calculated from the ratio of the particle concentrations before entering the filter to those after passing through it. The results showed that the efficiency of the filter was more than 30%. The efficiency increases sharply when several filters are arranged in parallel.
Method for enhanced longevity of in situ microbial filter used for bioremediation
Carman, M.L.; Taylor, R.T.
1999-03-30
An improved method is disclosed for in situ microbial filter bioremediation having increasingly operational longevity of an in situ microbial filter emplaced into an aquifer. A method is presented for generating a microbial filter of sufficient catalytic density and thickness, which has increased replenishment interval, improved bacteria attachment and detachment characteristics and the endogenous stability under in situ conditions. A system is also disclosed for in situ field water remediation. 31 figs.
Wang, M D; Reed, C M; Bilger, R C
1978-03-01
It has been found that listeners with sensorineural hearing loss who show similar patterns of consonant confusions also tend to have similar audiometric profiles. The present study determined whether normal listeners, presented with filtered speech, would produce consonant confusions similar to those previously reported for hearing-impaired listeners. Consonant confusion matrices were obtained from eight normal-hearing subjects for four sets of CV and VC nonsense syllables presented under six high-pass and six low-pass filtering conditions. Patterns of consonant confusion for each condition were described using phonological features in sequential information analysis. Severe low-pass filtering produced consonant confusions comparable to those of listeners with high-frequency hearing loss. Severe high-pass filtering gave results comparable to those of patients with flat or rising audiograms. Mild filtering resulted in confusion patterns comparable to those of listeners with essentially normal hearing. An explanation is given in terms of the spectrum and level of the speech and the configuration of the individual listener's audiogram.
Swarm Intelligence for Optimizing Hybridized Smoothing Filter in Image Edge Enhancement
NASA Astrophysics Data System (ADS)
Rao, B. Tirumala; Dehuri, S.; Dileep, M.; Vindhya, A.
In this modern era, image transmission and processing play a major role. It would be impossible to retrieve information from satellite and medical images without the help of image processing techniques. Edge enhancement is an image processing step that enhances the edge contrast of an image or video in an attempt to improve its acutance. Edges are representations of the discontinuities of image intensity functions. For processing these discontinuities in an image, a good edge enhancement technique is essential. The proposed work uses a new idea for edge enhancement using hybridized smoothing filters and introduces a promising technique for obtaining the best hybrid filter using swarm algorithms (Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO)) to search for an optimal sequence of filters from among a set of rather simple, representative image processing filters. This paper deals with the analysis of these swarm intelligence techniques through the combination of hybrid filters generated by the algorithms for image edge enhancement.
Ray, N J; Fowler, S; Stein, J F
2005-04-01
The magnocellular system plays an important role in visual motion processing, controlling vergence eye movements, and in reading. Yellow filters may boost magnocellular activity by eliminating inhibitory blue input to this pathway. It was found that wearing yellow filters increased motion sensitivity, convergence, and accommodation in many children with reading difficulties, both immediately and after three months using the filters. Motion sensitivity was not increased using control neutral density filters. Moreover, reading-impaired children showed significant gains in reading ability after three months wearing the filters compared with those who had used a placebo. It was concluded that yellow filters can improve magnocellular function permanently. Hence, they should be considered as an alternative to corrective lenses, prisms, or exercises for treating poor convergence and accommodation, and also as an aid for children with reading problems.
A Bayesian Approach to Period Searching in Solar Coronal Loops
NASA Astrophysics Data System (ADS)
Scherrer, Bryan; McKenzie, David
2017-03-01
We have applied a Bayesian generalized Lomb-Scargle period searching algorithm to movies of coronal loop images obtained with the Hinode X-ray Telescope (XRT) to search for evidence of periodicities that would indicate resonant heating of the loops. The algorithm makes as its only assumption that there is a single sinusoidal signal within each light curve of the data. Both the amplitudes and noise are taken as free parameters. It is argued that this procedure should be used alongside Fourier and wavelet analyses to more accurately extract periodic intensity modulations in coronal loops. The data analyzed are from XRT Observation Program #129C: “MHD Wave Heating (Thin Filters),” which occurred during 2006 November 13 and focused on active region 10293, which included coronal loops. The first data set spans approximately 10 min with an average cadence of 2 s, 2″ per pixel resolution, and used the Al-mesh analysis filter. The second data set spans approximately 4 min with a 3 s average cadence, 1″ per pixel resolution, and used the Al-poly analysis filter. The final data set spans approximately 22 min at a 6 s average cadence, and used the Al-poly analysis filter. In total, 55 periods of sinusoidal coronal loop oscillations between 5.5 and 59.6 s are discussed, supporting proposals in the literature that resonant absorption of magnetic waves is a viable mechanism for depositing energy in the corona.
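A plain (non-Bayesian) Lomb-Scargle periodogram conveys the period-search step in the record above; the Bayesian generalized version additionally treats amplitude and noise as free parameters with priors. The light curve below is synthetic, with an invented period, not XRT data.

```python
import numpy as np

# Classic Lomb-Scargle periodogram on an unevenly sampled noisy sine.
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 600, 300))         # seconds, uneven sampling
P_true = 42.0                                  # injected period (invented)
y = np.sin(2 * np.pi * t / P_true) + 0.3 * rng.standard_normal(t.size)
y = y - y.mean()

periods = np.linspace(5.0, 60.0, 2000)
power = np.empty_like(periods)
for i, P in enumerate(periods):
    w = 2 * np.pi / P
    # Time offset tau makes the sine and cosine terms orthogonal.
    tau = np.arctan2(np.sum(np.sin(2 * w * t)), np.sum(np.cos(2 * w * t))) / (2 * w)
    c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
    power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
best = periods[np.argmax(power)]
print(best)                                    # close to the injected period
```

The single-sinusoid assumption stated in the abstract corresponds to this model class; the Bayesian treatment replaces the point estimate of the peak with a posterior over period, amplitude, and noise level.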
Why relevant chemical information cannot be exchanged without disclosing structures
NASA Astrophysics Data System (ADS)
Filimonov, Dmitry; Poroikov, Vladimir
2005-09-01
Both society and industry are interested in increasing the safety of pharmaceuticals. Potentially dangerous compounds could be filtered out at early stages of R&D by computer prediction of biological activity and ADMET characteristics. The accuracy of such predictions strongly depends on the quality and quantity of information contained in a training set. The suggestion that some relevant chemical information could be added to such training sets without disclosing chemical structures was made at a recent ACS Symposium. We present arguments that such safe exchange of relevant chemical information is impossible: any relevant information about chemical structures can be used to search for either a particular compound itself or its close analogues. The risk of identifying such structures is enough to prevent the pharma industry from exchanging relevant chemical information.
Klein, M.; Mohr, J. J.; Desai, S.; ...
2017-11-14
We describe a multi-component matched filter cluster confirmation tool (MCMF) designed for the study of large X-ray source catalogs produced by the upcoming X-ray all-sky survey mission eROSITA. We apply the method to confirm a sample of 88 clusters with redshifts $0.05
Hu, Shan-Zhou; Chen, Fen-Fei; Zeng, Li-Bo; Wu, Qiong-Shui
2013-01-01
The imaging AOTF is an important optical filter component for the new spectral imaging instruments developed in recent years. The principle of the imaging AOTF component is demonstrated, and a set of testing methods for several key performance parameters is presented, such as diffraction efficiency, wavelength shift with temperature, spatial homogeneity of the diffraction efficiency, and image shift.
2017-01-09
intensifier with a Semrock filter (FF01-425/26). The reflective surface of this dichroic mirror rejected the blue light portion from the broadband...chemiluminescence was also imaged using a HiCATT intensifier with a Semrock filter (FF01-320/40). The shadowgraph camera was set to a gate of 7 µs
NASA Technical Reports Server (NTRS)
Mottola, Stefano; Dimartino, M.; Gonano-Beurer, M.; Hoffmann, H.; Neukum, G.
1992-01-01
This paper reports the observations of 951 Gaspra carried out at the European Southern Observatory (La Silla, Chile) during the 1991 apparition, using the DLR CCD Camera equipped with a spare set of the Galileo SSI filters. Time-resolved spectrophotometric measurements are presented. The occurrence of spectral variations with rotation suggests the presence of surface variegation.
NASA Astrophysics Data System (ADS)
Borncamp, David
2017-08-01
The stability of the CCD flat fields will be monitored using the calibration lamps. One set of observations for all the filters and another at a different epoch for a subset of filters will be taken during this cycle. High signal observations will be used to assess the stability of the pixel-to-pixel flat field structure and to monitor the position of the dust motes.
NASA Astrophysics Data System (ADS)
Borncamp, David
2016-10-01
The stability of the CCD flat fields will be monitored using the calibration lamps. One set of observations for all the filters and another at a different epoch for a subset of filters will be taken during this cycle. High signal observations will be used to assess the stability of the pixel-to-pixel flat field structure and to monitor the position of the dust motes.
Electrically heated particulate filter diagnostic systems and methods
Gonze, Eugene V [Pinckney, MI
2009-09-29
A system that diagnoses regeneration of an electrically heated particulate filter is provided. The system generally includes a grid module that diagnoses a fault of the grid based on at least one of a current signal and a voltage signal. A diagnostic module sets a fault status and/or generates a warning signal based on the fault of the grid.
EMG prediction from Motor Cortical Recordings via a Non-Negative Point Process Filter
Nazarpour, Kianoush; Ethier, Christian; Paninski, Liam; Rebesco, James M.; Miall, R. Chris; Miller, Lee E.
2012-01-01
A constrained point process filtering mechanism for prediction of electromyogram (EMG) signals from multi-channel neural spike recordings is proposed here. Filters from the Kalman family are inherently sub-optimal in dealing with non-Gaussian observations, or a state evolution that deviates from the Gaussianity assumption. To address these limitations, we modeled the non-Gaussian neural spike train observations by using a generalized linear model (GLM) that encapsulates covariates of neural activity, including the neurons' own spiking history, concurrent ensemble activity, and extrinsic covariates (EMG signals). In order to predict the envelopes of EMGs, we reformulated the Kalman filter (KF) in an optimization framework and utilized a non-negativity constraint. This structure characterizes the non-linear correspondence between neural activity and EMG signals reasonably well. The EMGs were recorded from twelve forearm and hand muscles of a behaving monkey during a grip-force task. For the case of limited training data, the constrained point process filter improved the prediction accuracy when compared to a conventional Wiener cascade filter (a linear causal filter followed by a static non-linearity) for different bin sizes and delays between input spikes and EMG output. For longer training data sets, the results of the proposed filter and those of the Wiener cascade filter were comparable. PMID:21659018
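The non-negativity idea above can be caricatured in a few lines: run a standard scalar Kalman update, then project the EMG-envelope estimate onto the feasible set. Clipping at zero is a crude stand-in for the paper's constrained-optimization reformulation, and the noise values are invented.

```python
# Scalar Kalman step with a non-negativity projection on the state estimate,
# a toy stand-in for the constrained point-process filter described above.
def constrained_kf_step(x, P, y, q=0.01, r=0.1):
    # predict (random-walk state model, process noise q)
    P = P + q
    # update (observation noise r)
    K = P / (P + r)
    x = x + K * (y - x)
    P = (1 - K) * P
    # project onto the feasible (non-negative) set: an EMG envelope is >= 0
    return max(x, 0.0), P

x, P = 0.5, 1.0
for y in [0.4, 0.1, -0.2, 0.05]:   # noisy envelope observations (invented)
    x, P = constrained_kf_step(x, P, y)
print(x >= 0.0)                    # the estimate never goes negative
```

In the paper the observations are spike counts modeled by a GLM rather than Gaussian measurements, so the update itself also changes; only the projection idea is illustrated here.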
Filtration Efficiency of Functionalized Ceramic Foam Filters for Aluminum Melt Filtration
NASA Astrophysics Data System (ADS)
Voigt, Claudia; Jäckel, Eva; Taina, Fabio; Zienert, Tilo; Salomon, Anton; Wolf, Gotthard; Aneziris, Christos G.; Le Brun, Pierre
2017-02-01
The influence of filter surface chemistry on the filtration efficiency of cast aluminum alloys was evaluated for four different filter coating compositions (Al2O3—alumina, MgAl2O4—spinel, 3Al2O3·2SiO2—mullite, and TiO2—rutile). The tests were conducted on a laboratory scale with a filtration pilot plant, which facilitates long-term filtration tests (40 to 76 minutes). This test set-up allows the simultaneous use of two LiMCAs (before and after the filter) for the determination of the efficiency of inclusion removal. The four tested filter surface chemistries exhibited good thermal stability and mechanical robustness after 750 kg of molten aluminum had been cast. All four filter types exhibited a mean filtration efficiency of at least 80 pct. However, differences were also observed. The highest filtration efficiencies were obtained with alumina- and spinel-coated filter surfaces (>90 pct), and the complete removal of the largest inclusions (>90 µm) was observed. The efficiency was slightly lower with mullite- and rutile-coated filter surfaces, in particular for large inclusions. These observations are discussed in relation to the properties of the filters, for example the surface roughness.
Using the NEMA NU 4 PET image quality phantom in multipinhole small-animal SPECT.
Harteveld, Anita A; Meeuwis, Antoi P W; Disselhorst, Jonathan A; Slump, Cornelis H; Oyen, Wim J G; Boerman, Otto C; Visser, Eric P
2011-10-01
Several commercial small-animal SPECT scanners using multipinhole collimation are presently available. However, generally accepted standards to characterize the performance of these scanners do not exist. Whereas for small-animal PET, the National Electrical Manufacturers Association (NEMA) NU 4 standards have been defined in 2008, such standards are still lacking for small-animal SPECT. In this study, the image quality parameters associated with the NEMA NU 4 image quality phantom were determined for a small-animal multipinhole SPECT scanner. Multiple whole-body scans of the NEMA NU 4 image quality phantom of 1-h duration were performed in a U-SPECT-II scanner using (99m)Tc with activities ranging between 8.4 and 78.2 MBq. The collimator contained 75 pinholes of 1.0-mm diameter and had a bore diameter of 98 mm. Image quality parameters were determined as a function of average phantom activity, number of iterations, postreconstruction spatial filter, and scatter correction. In addition, a mouse was injected with (99m)Tc-hydroxymethylene diphosphonate and was euthanized 6.5 h after injection. Multiple whole-body scans of this mouse of 1-h duration were acquired for activities ranging between 3.29 and 52.7 MBq. An increase in the number of iterations was accompanied by an increase in the recovery coefficients for the small rods (RC(rod)), an increase in the noise in the uniform phantom region, and a decrease in spillover ratios for the cold-air- and water-filled scatter compartments (SOR(air) and SOR(wat)). Application of spatial filtering reduced image noise but lowered RC(rod). Filtering did not influence SOR(air) and SOR(wat). Scatter correction reduced SOR(air) and SOR(wat). The effect of total phantom activity was primarily seen in a reduction of image noise with increasing activity. RC(rod), SOR(air), and SOR(wat) were more or less constant as a function of phantom activity. 
The relation between acquisition and reconstruction settings and image quality was confirmed in the (99m)Tc-hydroxymethylene diphosphonate mouse scans. Although developed for small-animal PET, the NEMA NU 4 image quality phantom was found to be useful for small-animal SPECT as well, allowing for objective determination of image quality parameters and showing the trade-offs between several of these parameters on variation of acquisition and reconstruction settings.
NASA Astrophysics Data System (ADS)
Lhamon, Michael Earl
A pattern recognition system which uses complex correlation filter banks requires proportionally more computational effort than single real-valued filters. This increases the computational burden but also introduces a higher level of parallelism that common computing platforms fail to exploit. As a result, we consider algorithm mapping to both optical and digital processors. For digital implementation, we develop computationally efficient pattern recognition algorithms, referred to as vector inner product operators, that require less computational effort than traditional fast Fourier methods. These algorithms do not require correlation, and they map readily onto parallel digital architectures, which in turn suggests new architectures for optical processors. These filters exploit the circulant-symmetric matrix structure of the training set data representing a variety of distortions. By using the same mathematical basis as the vector inner product operations, we are able to extend the capabilities of more traditional correlation filtering to what we refer to as "Super Images". These "Super Images" are used to morphologically transform a complicated input scene into a predetermined dot pattern. The orientation of the dot pattern is related to the rotational distortion of the object of interest. The optical implementation of "Super Images" yields the feature reduction necessary for using other techniques, such as artificial neural networks. We propose a parallel digital signal processor architecture based on specific pattern recognition algorithms but general enough to be applicable to other similar problems. Such an architecture is classified as a data flow architecture. Instead of mapping an algorithm to an architecture, we propose mapping the DSP architecture to a class of pattern recognition algorithms. Today's optical processing systems have difficulties implementing full complex filter structures. Typically, optical systems (like 4f correlators) are limited to phase-only implementations with lower detection performance than full complex electronic systems. Our study includes pseudo-random pixel encoding techniques for approximating full complex filtering. Optical filter bank implementation is possible, and it has the advantage of time-averaging the entire filter bank at real-time rates. Time-averaged optical filtering is computationally comparable to billions of digital operations per second. For this reason, we believe future trends in high-speed pattern recognition will involve hybrid architectures of both optical and DSP elements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poirier, M. R.; Burket, P. R.; Duignan, M. R.
2015-03-12
The Savannah River Site (SRS) is currently treating radioactive liquid waste with the Actinide Removal Process (ARP) and the Modular Caustic Side Solvent Extraction Unit (MCU). The low filter flux through the ARP has limited the rate at which radioactive liquid waste can be treated. Recent filter flux has averaged approximately 5 gallons per minute (gpm). Salt Batch 6 has had a lower processing rate and required frequent filter cleaning. Savannah River Remediation (SRR) has a desire to understand the causes of the low filter flux and to increase ARP/MCU throughput. In addition, at the time the testing started, SRR was assessing the impact of replacing the 0.1 micron filter with a 0.5 micron filter. This report describes testing of MST filterability to investigate the impact of filter pore size and MST particle size on filter flux and testing of filter enhancers to attempt to increase filter flux. The authors constructed a laboratory-scale crossflow filter apparatus with two crossflow filters operating in parallel. One filter was a 0.1 micron Mott sintered SS filter and the other was a 0.5 micron Mott sintered SS filter. The authors also constructed a dead-end filtration apparatus to conduct screening tests with potential filter aids and body feeds, referred to as filter enhancers. The original baseline for ARP was 5.6 M sodium salt solution with a free hydroxide concentration of approximately 1.7 M. ARP has been operating with a sodium concentration of approximately 6.4 M and a free hydroxide concentration of approximately 2.5 M. SRNL conducted tests varying the concentration of sodium and free hydroxide to determine whether those changes had a significant effect on filter flux. The feed slurries for the MST filterability tests were composed of simple salts (NaOH, NaNO2, and NaNO3) and MST (0.2 – 4.8 g/L). The feed slurry for the filter enhancer tests contained simulated salt batch 6 supernate, MST, and filter enhancers.
Improving the retrieval rate of inferior vena cava filters with a multidisciplinary team approach.
Inagaki, Elica; Farber, Alik; Eslami, Mohammad H; Siracuse, Jeffrey J; Rybin, Denis V; Sarosiek, Shayna; Sloan, J Mark; Kalish, Jeffrey
2016-07-01
The option to retrieve inferior vena cava (IVC) filters has resulted in an increase in the utilization of these devices as stopgap measures in patients with relative contraindications to anticoagulation. These retrievable IVC filters, however, are often not retrieved and become permanent. Recent data from our institution confirmed a historically low retrieval rate. Therefore, we hypothesized that the implementation of a new IVC filter retrieval protocol would increase the retrieval rate of appropriate IVC filters at our institution. All consecutive patients who underwent an IVC filter placement at our institution between September 2003 and July 2012 were retrospectively reviewed. In August 2012, a multidisciplinary task force was established, and a new IVC filter retrieval protocol was implemented. Prospective data were collected using a centralized interdepartmental IVC filter registry for all consecutive patients who underwent an IVC filter placement between August 2012 and September 2014. Patients were chronologically categorized into preimplementation (PRE) and postimplementation (POST) groups. Comparisons of outcome measures, including the retrieval rate of IVC filters along with rates of retrieval attempt and technical failure, were made between the two groups. In the PRE and POST groups, a total of 720 and 74 retrievable IVC filters were implanted, respectively. In the POST group, 40 of 74 filters (54%) were successfully retrieved compared with 82 of 720 filters (11%) in the PRE group (P < .001). Furthermore, a greater number of IVC filter retrievals were attempted in the POST group than in the PRE group (66% vs 14%; P < .001). No significant difference was observed between the PRE and POST groups for technical failure (17% vs 18%; P = .9). The retrieval rate of retrievable IVC filters at our institution was significantly increased with the implementation of a new IVC filter retrieval protocol with a multidisciplinary team approach. 
This improved retrieval rate is possible with minimal dedication of resources and can potentially lead to a decrease in IVC filter-related complications in the future.
Doutsi, Effrosyni; Fillatre, Lionel; Antonini, Marc; Gaulmin, Julien
2018-07-01
This paper introduces a novel filter, which is inspired by the human retina. The human retina consists of three different layers: the Outer Plexiform Layer (OPL), the inner plexiform layer, and the ganglionic layer. Our inspiration is the linear transform which takes place in the OPL and has been mathematically described by the neuroscientific model "virtual retina." This model is the cornerstone used to derive the non-separable spatio-temporal OPL retina-inspired filter, referred to simply as the retina-inspired filter, studied in this paper. This filter is connected to the dynamic behavior of the retina, which enables the retina to increase the sharpness of the visual stimulus during filtering before its transmission to the brain. We establish that this retina-inspired transform forms a group of spatio-temporal Weighted Difference of Gaussian (WDoG) filters when it is applied to a still image visible for a given time. We analyze the spatial frequency bandwidth of the retina-inspired filter with respect to time. It is shown that the WDoG spectrum varies from a lowpass filter to a bandpass filter. Therefore, as time increases, the retina-inspired filter enables the extraction of different kinds of information from the input image. Finally, we discuss the benefits of using the retina-inspired filter in image processing applications such as edge detection and compression.
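The lowpass-to-bandpass transition described above can be illustrated with a weighted difference of two Gaussians whose surround weight grows with time. This is only a sketch of the WDoG idea: the exponential ramp and all parameter values below are illustrative choices, not the "virtual retina" kinetics derived in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wdog_response(image, t, sigma_c=1.0, sigma_s=3.0, tau=0.1):
    """Weighted Difference-of-Gaussians at time t: a narrow center
    minus a broad surround whose weight ramps up with time, so the
    filter drifts from lowpass (small t) toward bandpass (large t)."""
    w_s = 1.0 - np.exp(-t / tau)             # surround weight grows with t
    center = gaussian_filter(image, sigma_c)
    surround = gaussian_filter(image, sigma_s)
    return center - w_s * surround
```

At t = 0 the filter is a plain Gaussian blur (lowpass); at large t the surround fully subtracts the low frequencies, so a uniform image yields a near-zero response, as a bandpass filter should.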
Heart block and cardiac embolization of fractured inferior vena cava filter.
Abudayyeh, Islam; Takruri, Yessar; Weiner, Justin B
2016-01-01
A 66-year-old man, who had undergone placement of an inferior vena cava filter before gastric surgery 9 years prior, presented to the emergency room with a complete atrioventricular block. Chest x-ray and transthoracic echocardiogram showed struts migrating to the right ventricle with tricuspid regurgitation. Cardiothoracic surgery was consulted and declined an open surgical intervention due to the location of the embolized fragments and the patient's overall condition. It was also felt that the fragments had migrated chronically and were adhered to the cardiac structures. The patient underwent a dual-chamber permanent pacemaker implantation. Post-implant fluoroscopy showed no displacement of the inferior vena cava filter struts by the pacemaker leads, indicating that the filter fracture had likely been a chronic process. This case highlights a rare combination of complications related to inferior vena cava filter fractures and the importance of assessing for such fractures in chronic placements. Inferior vena cava filter placement for a duration greater than 1 month can be associated with filter fractures and strut migration, which may lead to rare but serious or fatal complications such as complete atrioventricular conduction system disruption and valvular damage, including significant tricuspid regurgitation. Assessing for inferior vena cava filter fractures in chronic filter placement is important to avoid such complications. When possible, retrieval of the filter should be considered in all patients outside the acute setting in order to avoid filter-related complications. Filter retrieval rates remain low even when a retrievable filter is in place and the patient no longer has a contraindication to anticoagulation.
Preliminary Mechanical Characterization of Thermal Filters for the X-IFU Instrument on Athena
NASA Astrophysics Data System (ADS)
Barbera, Marco; Lo Cicero, Ugo; Sciortino, Luisa; Parodi, Giancarlo; D'Anca, Fabio; Giglio, Paolo; Ferruggia Bonura, Salvatore; Nuzzo, Flavio; Jimenez Escobar, Antonio; Ciaravella, Angela; Collura, Alfonso; Varisco, Salvatore; Samain, Valerie
2018-05-01
The X-ray Integral Field Unit (X-IFU) is one of the two instruments of the Athena astrophysics space mission approved by ESA in the Cosmic Vision Science Program. The X-IFU consists of a large array of TES microcalorimeters that will operate at 50 mK inside a sophisticated cryostat. A set of thin filters, highly transparent to X-rays, will be mounted on the cryostat thermal shields in order to attenuate the IR radiative load, to attenuate RF electromagnetic interferences, and to protect the detector from contamination. In this paper, we present the current thermal filters design, describe the filter samples developed/procured so far, and present preliminary results from the ongoing characterization tests.
Galetti, Valeria; Kujinga, Prosper; Mitchikpè, Comlan Evariste S; Zeder, Christophe; Tay, Fabian; Tossou, Félicien; Hounhouigan, Joseph D; Zimmermann, Michael B; Moretti, Diego
2015-11-01
Zinc deficiency and contaminated water are major contributors to diarrhea in developing countries. Food fortification with zinc has not shown clear benefits, possibly because of low zinc absorption from inhibitory food matrices. We used a novel point-of-use water ultrafiltration device configured with glass zinc plates to produce zinc-fortified, potable water. The objective was to determine zinc bioavailability from filtered water and the efficacy of zinc-fortified water in improving zinc status. In a crossover balanced study, we measured fractional zinc absorption (FAZ) from the zinc-fortified water in 18 healthy Swiss adults using zinc stable isotopes and compared it with zinc-fortified maize porridge. We conducted a 20-wk double-blind randomized controlled trial (RCT) in 277 Beninese school children from rural settings who were randomly assigned to receive a daily portion of zinc-fortified filtered water delivering 2.8 mg Zn (Zn+filter), nonfortified filtered water (Filter), or nonfortified nonfiltered water (Pump) from the local improved supply, acting as the control group. The main outcome was plasma zinc concentration (PZn), and the 3 groups were compared by using mixed-effects models. Secondary outcomes were prevalence of zinc deficiency, diarrhea prevalence, and growth. Geometric mean (-SD, +SD) FAZ was 7-fold higher from fortified water (65.9%; 42.2, 102.4) than from fortified maize (9.1%; 6.0, 13.7; P < 0.001). In the RCT, a significant time-by-treatment effect on PZn (P = 0.026) and on zinc deficiency (P = 0.032) was found; PZn in the Zn+filter group was significantly higher than in the Filter (P = 0.006) and Pump (P = 0.025) groups. We detected no effect on diarrhea or growth, but our study did not have the duration and power to detect such effects. Consumption of filtered water fortified with a low dose of highly bioavailable zinc is an effective intervention in children from rural African settings. 
Large community-based trials are needed to assess the effectiveness of zinc-fortified filtered water on diarrhea and growth. These trials were registered at clinicaltrials.gov as NCT01636583 and NCT01790321.
Principal Component Noise Filtering for NAST-I Radiometric Calibration
NASA Technical Reports Server (NTRS)
Tian, Jialin; Smith, William L., Sr.
2011-01-01
The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Airborne Sounder Testbed- Interferometer (NAST-I) instrument is a high-resolution scanning interferometer that measures emitted thermal radiation between 3.3 and 18 microns. The NAST-I radiometric calibration is achieved using internal blackbody calibration references at ambient and hot temperatures. In this paper, we introduce a refined calibration technique that utilizes a principal component (PC) noise filter to compensate for instrument distortions and artifacts, therefore, further improve the absolute radiometric calibration accuracy. To test the procedure and estimate the PC filter noise performance, we form dependent and independent test samples using odd and even sets of blackbody spectra. To determine the optimal number of eigenvectors, the PC filter algorithm is applied to both dependent and independent blackbody spectra with a varying number of eigenvectors. The optimal number of PCs is selected so that the total root-mean-square (RMS) error is minimized. To estimate the filter noise performance, we examine four different scenarios: apply PC filtering to both dependent and independent datasets, apply PC filtering to dependent calibration data only, apply PC filtering to independent data only, and no PC filters. The independent blackbody radiances are predicted for each case and comparisons are made. The results show significant reduction in noise in the final calibrated radiances with the implementation of the PC filtering algorithm.
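The core of the PC noise filter described above is reconstructing each spectrum from only its leading principal components, so that uncorrelated noise in the trailing components is discarded. A minimal sketch under simplifying assumptions (spectra as rows of a matrix; the function name is hypothetical, and the optimal component count would be chosen by the dependent/independent RMS test the abstract describes):

```python
import numpy as np

def pc_filter(spectra, n_pc):
    """Reconstruct each spectrum (rows) from its leading n_pc principal
    components; truncating the expansion suppresses uncorrelated noise."""
    mean = spectra.mean(axis=0)
    X = spectra - mean
    # principal directions via SVD of the mean-removed data matrix
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:n_pc].T
    return mean + (X @ V) @ V.T
```

Sweeping `n_pc` and keeping the value that minimizes total RMS error against an independent set of blackbody spectra mirrors the eigenvector-selection step in the calibration procedure.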
Virtual strain gage size study
Reu, Phillip L.
2015-09-22
DIC is a non-linear low-pass spatial filtering operation; whether we consider the effect of the subset and shape function, the strain window used in the strain calculation, or other post-processing of the results, each decision will impact the spatial resolution of the measurement. More fundamentally, the speckle size limits the spatial resolution by dictating the smallest possible subset. After this decision the processing settings are controlled by the allowable noise level balanced against possible bias errors created by the data filtering. This article describes a process for selecting optimum DIC software settings to determine whether the peak displacements or strains are being found.
System and Method for Providing a Real Time Audible Message to a Pilot
NASA Technical Reports Server (NTRS)
Johnson, Walter W. (Inventor); Lachter, Joel B. (Inventor); Koteskey, Robert W. (Inventor); Battiste, Vernol (Inventor)
2016-01-01
A system and method for providing information to a crew of the aircraft while in-flight. The system includes a module having: a receiver for receiving a message while in-flight; a filter having a set of screening parameters and operative to filter the message based on the set of screening parameters; and a converter for converting the message into an audible message. The message includes a pilot report having at least one of weather information, separation information, congestion information, flight deviation information and destination information. The message is sent to the aircraft by another aircraft or an air traffic controller.
3D Segmentation with an application of level set-method using MRI volumes for image guided surgery.
Bosnjak, A; Montilla, G; Villegas, R; Jara, I
2007-01-01
This paper proposes an innovation for image-guided surgery based on a comparative study of three different segmentation methods. These methods are faster than manual segmentation of the images, with the advantage that they allow the same patient to be used as the anatomical reference, which is more precise than a generic atlas. This new methodology for 3D information extraction is based on a processing chain structured in the following modules: 1) 3D filtering: the purpose is to preserve the contours of the structures and to smooth the homogeneous areas; several filters were tested and finally an anisotropic diffusion filter was used. 2) 3D segmentation: this module compares three different methods: a region-growing algorithm, hand-assisted cubic splines, and the level set method. It then proposes a level set approach based on the front propagation method that allows reconstruction of the internal walls of the anatomical structures of the brain. 3) 3D visualization: the new contribution of this work consists of the visualization of the segmented model and its use in pre-surgery planning.
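The anisotropic diffusion filter chosen in module 1 smooths homogeneous regions while preserving contours because its conductance shrinks where gradients are large. A minimal 2D Perona-Malik sketch (parameter values are illustrative, not those used in the paper):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=20.0, dt=0.2):
    """Perona-Malik diffusion: smooth homogeneous regions while the
    conductance g = exp(-(|grad|/kappa)^2) slows diffusion at edges."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # nearest-neighbour differences (zero flux at the borders)
        dn = np.roll(u, 1, 0) - u; dn[0, :] = 0
        ds = np.roll(u, -1, 0) - u; ds[-1, :] = 0
        de = np.roll(u, -1, 1) - u; de[:, -1] = 0
        dw = np.roll(u, 1, 1) - u; dw[:, 0] = 0
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Large `kappa` recovers isotropic (Gaussian-like) smoothing; small `kappa` freezes diffusion across strong edges, which is what preserves anatomical contours.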
Neill, Matthew; Charles, Hearns W; Pflager, Daniel; Deipolyi, Amy R
2017-01-01
We sought to delineate factors of inferior vena cava filter placement associated with increased radiation and cost and difficult subsequent retrieval. In total, 299 procedures from August 2013 to December 2014, 252 in a fluoroscopy suite (FS) and 47 in the operating room (OR), were reviewed for radiation exposure, fluoroscopy time, filter type, and angulation. The number of retrieval devices and fluoroscopy time needed for retrieval were assessed. Multiple linear regression assessed the impact of filter type, procedure location, and patient and procedural variables on radiation dose, fluoroscopy time, and filter angulation. Logistic regression assessed the impact of filter angulation, type, and filtration duration on retrieval difficulty. Access site and filter type had no impact on radiation exposure. However, placement in the OR, compared to the FS, entailed more radiation (156.3 vs 71.4 mGy; P = 0.001), fluoroscopy time (6.1 vs 2.8 min; P < 0.001), and filter angulation (4.8° vs 2.6°; P < 0.001). Angulation was primarily dependent on filter type (P = 0.02), with VenaTech and Denali filters associated with decreased angulation (2.2°, 2.4°) and Option filters associated with greater angulation (4.2°). Filter angulation, but not filter type or filtration duration, predicted cases requiring >1 retrieval device (P < 0.001) and >30 min fluoroscopy time (P = 0.02). Cost savings for placement in the FS vs OR were estimated at $444.50 per case. In conclusion, increased radiation and cost were associated with placement in the OR. Filter angulation independently predicted difficult filter retrieval; angulation was determined by filter type. Performing filter placement in the FS using specific filters may reduce radiation and cost while enabling future retrieval.
Analysis of Video-Based Microscopic Particle Trajectories Using Kalman Filtering
Wu, Pei-Hsun; Agarwal, Ashutosh; Hess, Henry; Khargonekar, Pramod P.; Tseng, Yiider
2010-01-01
The fidelity of the trajectories obtained from video-based particle tracking determines the success of a variety of biophysical techniques, including in situ single cell particle tracking and in vitro motility assays. However, the image acquisition process is complicated by system noise, which causes positioning error in the trajectories derived from image analysis. Here, we explore the possibility of reducing the positioning error by the application of a Kalman filter, a powerful algorithm to estimate the state of a linear dynamic system from noisy measurements. We show that the optimal Kalman filter parameters can be determined in an appropriate experimental setting, and that the Kalman filter can markedly reduce the positioning error while retaining the intrinsic fluctuations of the dynamic process. We believe the Kalman filter can potentially serve as a powerful tool to infer a trajectory of ultra-high fidelity from noisy images, revealing the details of dynamic cellular processes. PMID:20550894
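A common way to apply a Kalman filter to such video trajectories is a constant-velocity motion model with noisy position observations. The sketch below makes that assumption (it is not the paper's specific model); `q` and `r` stand in for the process and measurement noise variances, which the paper argues should be determined experimentally.

```python
import numpy as np

def kalman_track(positions, q=1e-4, r=1e-2):
    """Constant-velocity Kalman filter for a noisy 2-D particle track.
    State is [x, y, vx, vy]; returns the filtered (x, y) positions."""
    F = np.array([[1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)      # constant-velocity dynamics
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)      # observe position only
    Q, R = q * np.eye(4), r * np.eye(2)
    x = np.array([*positions[0], 0.0, 0.0])
    P = np.eye(4)
    out = []
    for z in positions:
        x, P = F @ x, F @ P @ F.T + Q                  # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                 # gain
        x = x + K @ (z - H @ x)                        # update
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)
```

Tuning `q` against `r` is exactly the trade-off the abstract highlights: too much smoothing suppresses the intrinsic fluctuations of the dynamic process, too little leaves the positioning error in place.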
A Kalman filter for a two-dimensional shallow-water model
NASA Technical Reports Server (NTRS)
Parrish, D. F.; Cohn, S. E.
1985-01-01
A two-dimensional Kalman filter is described for data assimilation for making weather forecasts. The filter is regarded as superior to the optimal interpolation method because the filter determines the forecast error covariance matrix exactly instead of using an approximation. A generalized time step is defined which includes expressions for one time step of the forecast model, the error covariance matrix, the gain matrix, and the evolution of the covariance matrix. Subsequent time steps are achieved by quantifying the forecast variables or employing a linear extrapolation from a current variable set, assuming the forecast dynamics are linear. Calculations for the evolution of the error covariance matrix are banded, i.e., are performed only with the elements significantly different from zero. Experimental results are provided from an application of the filter to a shallow-water simulation covering a 6000 x 6000 km grid.
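The distinctive computational device in the abstract above is the banded treatment of the forecast error covariance: after each propagation step, only elements near the diagonal (those significantly different from zero) are retained. A minimal sketch of one such covariance forecast step, with a hard bandwidth cutoff standing in for the paper's significance criterion (function and parameter names are hypothetical):

```python
import numpy as np

def banded_covariance_step(P, M, Q, bandwidth):
    """One covariance forecast step P' = M P M^T + Q, keeping only
    entries within `bandwidth` diagonals of the main diagonal, as a
    crude form of the banded approximation described in the abstract."""
    P_new = M @ P @ M.T + Q
    i, j = np.indices(P_new.shape)
    P_new[np.abs(i - j) > bandwidth] = 0.0   # discard far-off-diagonal terms
    return P_new
```

On a 6000 x 6000 km grid the full covariance is enormous, so restricting the computation to a band is what makes the exact-covariance Kalman approach tractable compared with optimal interpolation.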
Improved photo response non-uniformity (PRNU) based source camera identification.
Cooper, Alan J
2013-03-10
The concept of using Photo Response Non-Uniformity (PRNU) as a reliable forensic tool to match an image to a source camera is now well established. Traditionally, the PRNU estimation methodologies have centred on a wavelet based de-noising approach. Resultant filtering artefacts in combination with image and JPEG contamination act to reduce the quality of PRNU estimation. In this paper, it is argued that the application calls for a simplified filtering strategy which at its base level may be realised using a combination of adaptive and median filtering applied in the spatial domain. The proposed filtering method is interlinked with a further two stage enhancement strategy where only pixels in the image having high probabilities of significant PRNU bias are retained. This methodology significantly improves the discrimination between matching and non-matching image data sets over that of the common wavelet filtering approach.
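The base level of the spatial-domain strategy argued for above can be sketched as a median-filter noise residual compared against a reference pattern by normalised cross-correlation. This is only the median stage under simplifying assumptions; the paper's full method adds an adaptive filtering component and the two-stage pixel-selection enhancement, which are omitted here.

```python
import numpy as np
from scipy.ndimage import median_filter

def prnu_residual(img, size=3):
    """Noise residual: the image minus its median-filtered version,
    a simple spatial-domain stand-in for wavelet de-noising."""
    img = img.astype(float)
    return img - median_filter(img, size=size)

def ncc(a, b):
    """Normalised cross-correlation, used to match a residual to a
    camera's reference PRNU pattern."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

A matching camera's pattern should correlate strongly with the residual, while an unrelated pattern correlates near zero, which is the matching/non-matching discrimination the abstract measures.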
Wiener Chaos and Nonlinear Filtering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lototsky, S.V.
2006-11-15
The paper discusses two algorithms for solving the Zakai equation in the time-homogeneous diffusion filtering model with possible correlation between the state process and the observation noise. Both algorithms rely on the Cameron-Martin version of the Wiener chaos expansion, so that the approximate filter is a finite linear combination of the chaos elements generated by the observation process. The coefficients in the expansion depend only on the deterministic dynamics of the state and observation processes. For real-time applications, computing the coefficients in advance improves the performance of the algorithms in comparison with most other existing methods of nonlinear filtering. The paper summarizes the main existing results about these Wiener chaos algorithms and resolves some open questions concerning the convergence of the algorithms in the noise-correlated setting. The presentation includes the necessary background on the Wiener chaos and optimal nonlinear filtering.
Alpers, Charles N.; Fleck, Jacob A.; Marvin-DiPasquale, Mark C.; Stricker, Craig A.; Stephenson, Mark; Taylor, Howard E.
2014-01-01
The seasonal and spatial variability of water quality, including mercury species, was evaluated in agricultural and managed, non-agricultural wetlands in the Yolo Bypass Wildlife Area, an area managed for multiple beneficial uses including bird habitat and rice farming. The study was conducted during an 11-month period (June 2007 to April 2008) that included a summer growing season and flooded conditions during winter. Methylmercury (MeHg) concentrations in surface water varied over a wide range (0.1 to 37 ng L−1 unfiltered; 0.04 to 7.3 ng L−1 filtered). Maximum MeHg values are among the highest ever recorded in wetlands. Highest MeHg concentrations in unfiltered surface water were observed in drainage from wild rice fields during harvest (September 2007), and in white rice fields with decomposing rice straw during regional flooding (February 2008). The ratio of MeHg to total mercury (MeHg/THg) increased about 20-fold in both unfiltered and filtered water during the growing season (June to August 2007) in the white and wild rice fields, and about 5-fold in fallow fields (July to August 2007), while there was little to no change in MeHg/THg in the permanent wetland. Sulfate-bearing fertilizer had no effect on Hg(II) methylation, as sulfate-reducing bacteria were not sulfate limited in these agricultural wetlands. Concentrations of MeHg in filtered and unfiltered water correlated with filtered Fe, filtered Mn, DOC, and two indicators of sulfate reduction: the SO42−/Cl− ratio, and δ34S in aqueous sulfate. These relationships suggest that microbial reduction of SO42−, Fe(III), and possibly Mn(IV) may contribute to net Hg(II)-methylation in this setting.
[Filtering facepieces: effect of oily aerosol load on penetration through the filtering material].
Plebani, Carmela; Listrani, S; Di Luigi, M
2010-01-01
Electrostatic filters are widely used in applications requiring high filtration efficiency and low pressure drop. However, various studies showed that the penetration through electrostatic filters increases during exposure to an aerosol flow. This study investigates the effects of prolonged exposure to an oily aerosol on the penetration through filtering facepieces available on the market. Samples of FFP1, FFP2 and FFP3 filtering facepieces were exposed for 8 hours consecutively to a paraffin oil polydisperse aerosol. At the end of the exposure about 830 mg of paraffin oil were deposited in the facepiece. All the examined facepieces showed penetration values that increased with paraffin oil load, while pressure drop values were substantially the same before and after exposure. The measured maximum penetration values did not exceed the maximum penetration values allowed by the European technical standards, except in one case. According to the literature, an 830 mg oil load in a facepiece is not feasible in workplaces over an eight-hour shift. However, the trend of the penetration versus exposure mass suggests that if the load increases, the penetration may exceed the maximum allowed values. For comparison a mechanical filter was also studied. This showed an initial pressure drop higher than that of FFP2 filtering facepieces characterized by comparable penetration values. During exposure the pressure drop virtually doubled while penetration did not change. The increase in penetration with no increase in pressure drop in the analyzed facepieces indicates that it is necessary to comply with the information supplied by the manufacturer that restricts their use to a single shift.
NASA Astrophysics Data System (ADS)
Fadeyi, M. O.; Weschler, C. J.; Tham, K. W.
This study examined the impact of recirculation rates (7 and 14 h−1), ventilation rates (1 and 2 h−1), and filtration on secondary organic aerosols (SOAs) generated by ozone of outdoor origin reacting with limonene of indoor origin. Experiments were conducted within a recirculating air handling system that serviced an unoccupied, 236 m3 environmental chamber configured to simulate an office; either no filter, a new filter or a used filter was located downstream of where outdoor air mixed with return air. For otherwise comparable conditions, the SOA number and mass concentrations at a recirculation rate of 14 h−1 were significantly smaller than at a recirculation rate of 7 h−1. This was due primarily to lower ozone concentrations, resulting from increased surface removal, at the higher recirculation rate. Increased ventilation increased outdoor-to-indoor transport of ozone, but this was more than offset by the increased dilution of SOA derived from ozone-initiated chemistry. The presence of a particle filter (new or used) strikingly lowered SOA number and mass concentrations compared with conditions when no filter was present. Even though the particle filter in this study had only 35% single-pass removal efficiency for 100 nm particles, filtration efficiency was greatly amplified by recirculation. SOA particle levels were reduced to an even greater extent when an activated carbon filter was in the system, due to ozone removal by the carbon filter. These findings improve our understanding of the influence of commonly employed energy saving procedures on occupant exposures to ozone and ozone-derived SOA.
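The amplification of a modest 35% single-pass filter efficiency by recirculation can be seen in a textbook well-mixed box model: at steady state the indoor concentration is the ventilation supply divided by the sum of all removal rates, and the filter term scales with the recirculation rate. This is a generic mass balance for illustration, not the chamber model fitted in the study.

```python
def steady_state_indoor(c_out, vent_ach, recirc_ach, filter_eff, loss_ach=0.0):
    """Steady-state indoor concentration from a well-mixed box model:
    supply = vent_ach * c_out; removal = ventilation + recirculation
    through a filter of given single-pass efficiency + other losses."""
    return vent_ach * c_out / (vent_ach + recirc_ach * filter_eff + loss_ach)
```

With a 35%-efficient filter, raising recirculation from 7 to 14 h−1 roughly doubles the filter removal term, so the steady-state particle level drops substantially even though the filter itself is unchanged.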
Tunable Microwave Filter Design Using Thin-Film Ferroelectric Varactors
NASA Astrophysics Data System (ADS)
Haridasan, Vrinda
Military, space, and consumer-based communication markets alike are moving towards multi-functional, multi-mode, and portable transceiver units. Ferroelectric-based tunable filter designs in RF front-ends are a relatively new area of research that provides a potential solution to support wideband and compact transceiver units. This work presents design methodologies developed to optimize a tunable filter design for system-level integration and to improve the performance of a ferroelectric-based tunable bandpass filter. An investigative approach to find the origins of the high insertion loss exhibited by these filters is also undertaken. A system-aware design guideline and figure of merit for ferroelectric-based tunable bandpass filters is developed. The guideline does not constrain the filter bandwidth as long as it falls within the range of the analog bandwidth of a system's analog-to-digital converter. A figure of merit (FOM) that optimizes filter design for a specific application is presented. It considers the worst-case filter performance parameters and a tuning sensitivity term that captures the relation between frequency tunability and the underlying material tunability. A non-tunable parasitic fringe capacitance associated with ferroelectric-based planar capacitors is confirmed by simulated and measured results. The fringe capacitance is an appreciable proportion of the tunable capacitance at frequencies of X-band and higher. As ferroelectric-based tunable capacitors form tunable resonators in the filter design, a proportionally higher fringe capacitance reduces the capacitance tunability, which in turn reduces the frequency tunability of the filter. Methods to reduce the fringe capacitance can thus increase frequency tunability, or indirectly reduce the filter insertion loss by trading off the increased tunability for lower loss.
A new two-pole tunable filter topology with high frequency tunability (> 30%), steep filter skirts, wide stopband rejection, and constant bandwidth is designed, simulated, fabricated and measured. The filters are fabricated using barium strontium titanate (BST) varactors. Electromagnetic simulations and measured results of the tunable two-pole ferroelectric filter are analyzed to explore the origins of high insertion loss in ferroelectric filters. The results indicate that the high permittivity of the BST (a ferroelectric) not only makes the filters tunable and compact but also increases the conductive loss of the ferroelectric-based tunable resonators, which translates into high insertion loss in ferroelectric filters.
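The effect of a non-tunable fringe capacitance on capacitance and frequency tunability can be illustrated with a back-of-the-envelope calculation. The capacitance values below are hypothetical, not taken from the measured filters.

```python
import math

# Sketch of how a fixed parallel fringe capacitance degrades both the
# capacitance tunability and the resulting frequency tunability of an
# LC resonator. All values are hypothetical illustrations.

def tunability(c_max, c_min, c_fringe=0.0):
    """Fractional capacitance tunability with a fixed parallel fringe term."""
    return (c_max - c_min) / (c_max + c_fringe)

def freq_tunability(c_max, c_min, c_fringe=0.0):
    """Fractional shift of f = 1/(2*pi*sqrt(L*C)) between the two C states."""
    f_ratio = math.sqrt((c_max + c_fringe) / (c_min + c_fringe))
    return 1.0 - 1.0 / f_ratio  # (f_high - f_low) / f_high

# Assumed BST varactor with 2:1 tuning, with and without 20% fringe
print(tunability(2.0, 1.0))            # ideal: 0.5
print(tunability(2.0, 1.0, 0.4))       # with fringe: ~0.42
print(freq_tunability(2.0, 1.0, 0.0))  # ~0.29
print(freq_tunability(2.0, 1.0, 0.4))  # ~0.24
```

The fringe term dilutes the tunable fraction of the total capacitance, which is the mechanism the abstract describes at X-band and above.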
Park, Jae Hong; Yoon, Ki Young; Na, Hyungjoo; Kim, Yang Seon; Hwang, Jungho; Kim, Jongbaeg; Yoon, Young Hun
2011-09-01
We grew multi-walled carbon nanotubes (MWCNTs) on a glass fiber air filter using thermal chemical vapor deposition (CVD) after the filter was catalytically activated with a spark discharge. After the CNT deposition, filtration and antibacterial tests were performed with the filters. Potassium chloride (KCl) particles (<1 μm) were used as the test aerosol particles, and their number concentration was measured using a scanning mobility particle sizer. Antibacterial tests were performed using the colony counting method, and Escherichia coli (E. coli) was used as the test bacteria. The results showed that the CNT deposition increased the filtration efficiency of nano and submicron-sized particles, but did not increase the pressure drop across the filter. When a pristine glass fiber filter that had no CNTs was used, the particle filtration efficiencies at particle sizes under 30 nm and near 500 nm were 48.5% and 46.8%, respectively. However, the efficiencies increased to 64.3% and 60.2%, respectively, when the CNT-deposited filter was used. The reduction in the number of viable cells was determined by counting the colony forming units (CFU) of each test filter after contact with the cells. The pristine glass fiber filter was used as a control, and 83.7% of the E. coli were inactivated on the CNT-deposited filter. Copyright © 2011 Elsevier B.V. All rights reserved.
Gender classification system in uncontrolled environments
NASA Astrophysics Data System (ADS)
Zeng, Pingping; Zhang, Yu-Jin; Duan, Fei
2011-01-01
Most face analysis systems available today operate mainly on databases of images that are restricted in terms of size, age, and illumination. In addition, it is frequently assumed that all images are frontal and unconcealed. In practice, in non-guided real-time surveillance, the face pictures captured may often be partially occluded and show varying degrees of head rotation. In this paper, a system intended for real-time surveillance with un-calibrated cameras and non-guided photography is described. It consists of five parts: face detection, non-face filtering, best-angle face selection, texture normalization, and gender classification. Emphasis is placed on the non-face filtering and best-angle face selection parts as well as texture normalization. Best-angle faces are identified by PCA reconstruction, which amounts to an implicit face alignment and results in a large increase in gender classification accuracy. A dynamic skin model and a masked PCA reconstruction algorithm are applied to filter out faces detected in error. In order to include both facial-texture and shape-outline features, a hybrid feature combining Gabor wavelets and PHoG (pyramid histogram of gradients) is proposed to balance inner texture and outer contour. A comparative study of the effects of different non-face filtering and texture masking methods on gender classification by SVM is reported through experiments on a set of UT (a company name) face images, a large number of internet images, and the CAS (Chinese Academy of Sciences) face database. Encouraging results are obtained.
NASA Technical Reports Server (NTRS)
Dong, D.; Fang, P.; Bock, F.; Webb, F.; Prawirondirdjo, L.; Kedar, S.; Jamason, P.
2006-01-01
Spatial filtering is an effective way to improve the precision of coordinate time series for regional GPS networks by reducing so-called common mode errors, thereby providing better resolution for detecting weak or transient deformation signals. The commonly used approach to regional filtering assumes that the common mode error is spatially uniform, which is a good approximation for networks of hundreds of kilometers extent, but breaks down as the spatial extent increases. A more rigorous approach should remove the assumption of spatially uniform distribution and let the data themselves reveal the spatial distribution of the common mode error. The principal component analysis (PCA) and the Karhunen-Loeve expansion (KLE) both decompose network time series into a set of temporally varying modes and their spatial responses. Therefore they provide a mathematical framework to perform spatiotemporal filtering. We apply the combination of PCA and KLE to daily station coordinate time series of the Southern California Integrated GPS Network (SCIGN) for the period 2000 to 2004. We demonstrate that spatially and temporally correlated common mode errors are the dominant error source in daily GPS solutions. The spatial characteristics of the common mode errors are close to uniform for all east, north, and vertical components, which implies a very long wavelength source for the common mode errors, compared to the spatial extent of the GPS network in southern California. Furthermore, the common mode errors exhibit temporally nonrandom patterns.
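A minimal numpy sketch of this kind of PCA-based spatiotemporal filtering, on synthetic rather than SCIGN data, might look like the following: the leading principal component of the demeaned network series is treated as the common mode and its rank-one contribution is removed at every station.

```python
import numpy as np

# PCA-based common-mode filtering of network time series (synthetic data).
# A shared daily error is injected at every station, then estimated and
# removed as the leading principal component via an SVD.

rng = np.random.default_rng(0)
n_days, n_sta = 500, 12
common = 3.0 * rng.standard_normal(n_days)            # shared daily error
series = common[:, None] + 0.5 * rng.standard_normal((n_days, n_sta))

demeaned = series - series.mean(axis=0)
u, s, vt = np.linalg.svd(demeaned, full_matrices=False)
k = 1                                                 # modes to remove
common_mode = u[:, :k] * s[:k] @ vt[:k, :]            # rank-k reconstruction
filtered = demeaned - common_mode

print("RMS before filtering:", round(float(demeaned.std()), 2))
print("RMS after filtering: ", round(float(filtered.std()), 2))
```

Keeping more than one mode (k > 1) relaxes the spatially-uniform assumption further, in the spirit of the PCA/KLE combination described in the abstract.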
BMP FILTERS: UPFLOW VS. DOWNFLOW
Stormwater filters are typically operated in a downflow mode. This research had two objectives: 1) to determine the increased life of a filter operated in an upflow mode, and 2) to determine if the operation of a downflow, mixed-media filter could be modeled using the power equat...
A decentralized square root information filter/smoother
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Belzer, M. R.
1985-01-01
A number of developments have recently led to considerable interest in the decentralization of linear least squares estimators. The developments are partly related to the impending emergence of VLSI technology, the realization of parallel processing, and the need for algorithmic ways to speed the solution of dynamically decoupled, high-dimensional estimation problems. A new method is presented for combining Square Root Information Filter (SRIF) estimates obtained from independent data sets. The new method involves an orthogonal transformation, and an information matrix filter 'homework' problem discussed by Schweppe (1973) is generalized. The SRIF orthogonal transformation methodology employed here was described by Bierman (1977).
Correlation Filtering of Modal Dynamics using the Laplace Wavelet
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.; Lind, Rick; Brenner, Martin J.
1997-01-01
Wavelet analysis allows processing of transient response data commonly encountered in vibration health monitoring tasks such as aircraft flutter testing. The Laplace wavelet is formulated as the impulse response of a single-mode system, making it similar to data features commonly encountered in these health monitoring tasks. A correlation filtering approach is introduced that uses the Laplace wavelet to decompose a signal into impulse responses of single-mode subsystems. Applications using responses from flutter testing of aeroelastic systems demonstrate that modal parameters and stability estimates can be obtained by correlation filtering free-decay data with a set of Laplace wavelets.
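A sketch of the idea, assuming the common damped-sinusoid form of the Laplace wavelet and a synthetic free-decay signal: candidate single-mode impulse responses over a grid of frequency and damping values are correlated with the signal, and the best match identifies the modal parameters.

```python
import numpy as np

# Correlation filtering with a Laplace (damped-sinusoid) wavelet dictionary.
# The signal is synthetic; the grid values are illustrative, not flight data.

def laplace_wavelet(t, freq_hz, zeta):
    """Unit-norm impulse response of a single mode with damping ratio zeta."""
    wn = 2 * np.pi * freq_hz
    wd = wn * np.sqrt(1 - zeta**2)
    psi = np.exp(-zeta * wn * t) * np.sin(wd * t)
    return psi / np.linalg.norm(psi)

t = np.linspace(0, 2, 2000)
signal = laplace_wavelet(t, freq_hz=5.0, zeta=0.03)   # "measured" free decay

best = max(
    ((f, z, abs(np.dot(laplace_wavelet(t, f, z), signal)))
     for f in np.arange(3.0, 8.0, 0.5)
     for z in (0.01, 0.03, 0.05, 0.10)),
    key=lambda fzc: fzc[2],
)
print("best frequency (Hz), damping ratio, correlation:", best)
```

The correlation peak directly yields frequency and damping estimates for the mode, which is the essence of the decomposition described in the abstract.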
bioalcidae, samjs and vcffilterjs: object-oriented formatters and filters for bioinformatics files.
Lindenbaum, Pierre; Redon, Richard
2018-04-01
Reformatting and filtering bioinformatics files are common tasks for bioinformaticians. Standard Linux tools and specific programs are usually used to perform such tasks, but there is still a gap between using these tools and the programming interfaces of some existing libraries. In this study, we developed a set of tools, namely bioalcidae, samjs and vcffilterjs, that reformat or filter files using a JavaScript engine or a pure Java expression, taking advantage of the Java API for high-throughput sequencing data (htsjdk). https://github.com/lindenb/jvarkit. pierre.lindenbaum@univ-nantes.fr.
NASA Technical Reports Server (NTRS)
Park, K. C.; Belvin, W. Keith
1990-01-01
A general form for the first-order representation of the continuous second-order linear structural-dynamics equations is introduced to derive a corresponding form of first-order continuous Kalman filtering equations. Time integration of the resulting equations is carried out via a set of linear multistep integration formulas. It is shown that a judicious combined selection of computational paths and the undetermined matrices introduced in the general form of the first-order linear structural systems leads to a class of second-order discrete Kalman filtering equations involving only symmetric sparse N x N solution matrices.
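For orientation, the conventional first-order realization of the second-order structural equations M x'' + D x' + K x = f can be sketched as below. This is the textbook state-space form on toy 2-DOF matrices, not the paper's generalized form, which keeps additional undetermined matrices free so that the discrete Kalman equations involve only symmetric sparse N x N solution matrices.

```python
import numpy as np

# Standard first-order (state-space) realization of M x'' + D x' + K x = f
# for a toy 2-degree-of-freedom system; hypothetical mass, damping, and
# stiffness matrices chosen only for illustration.

M = np.diag([2.0, 1.0])
D = np.array([[0.4, -0.1], [-0.1, 0.3]])
K = np.array([[50.0, -20.0], [-20.0, 30.0]])

Minv = np.linalg.inv(M)
n = M.shape[0]
# state z = [x, x']  ->  z' = A z + B f
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K,        -Minv @ D]])
B = np.vstack([np.zeros((n, n)), Minv])

print("A shape:", A.shape)   # (4, 4)
print("B shape:", B.shape)   # (4, 2)
```

The continuous Kalman filter is then posed on z' = A z + B f plus a measurement equation; the paper's contribution is choosing the first-order representation so that the resulting discrete filter preserves symmetry and sparsity.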
Design of tunable thermo-optic C-band filter based on coated silicon slab
NASA Astrophysics Data System (ADS)
Pinhas, Hadar; Malka, Dror; Danan, Yossef; Sinvani, Moshe; Zalevsky, Zeev
2018-03-01
Optical filters are required to have narrow band-pass filtering in the spectral C-band for applications such as signal tracking, sub-band filtering or noise suppression. These requirements have led to a variety of filters, such as silica Mach-Zehnder interferometer interleavers, which exploit the thermo-optic effect for optical switching but lack adequate thermal and optical efficiency. In this paper we propose a tunable thermo-optic filtering device based on a coated silicon slab resonator with increased Q-factor for C-band optical switching. The device can be designed either for long-range wavelength tuning or for short-range tuning with increased wavelength resolution. A theoretical examination of the thermal parameters affecting the filtering process is presented together with experimental results. Proper channel isolation with an extinction ratio of 20 dB is achieved with a spectral bandpass width of 0.07 nm.
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2014-11-01
Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, the organic carbon concentration is measured using thermal methods such as Thermal-Optical Reflectance (TOR) from quartz fiber filters. Here, methods are presented whereby Fourier Transform Infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE, or Teflon) filters are used to accurately predict TOR OC. Transmittance FT-IR analysis is rapid, inexpensive, and non-destructive to the PTFE filters. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites sampled during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to artifact-corrected TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date, which leads to precise and accurate OC predictions by FT-IR as indicated by a high coefficient of determination (R2 = 0.96), low bias (0.02 μg m-3; all μg m-3 values are based on the nominal IMPROVE sample volume of 32.8 m3), low error (0.08 μg m-3) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. FT-IR spectra are also divided into calibration and test sets by OC mass and by OM/OC, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; this division also leads to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact correction errors, not due to the range of OC mass of the samples in the calibration set.
However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM/OC or ammonium/OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples, providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
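The calibration step can be sketched with a minimal NIPALS PLS1 implementation on synthetic spectra. The spectra below are stand-ins for FT-IR scans; none of the numbers are IMPROVE data, and the real study used many more samples and careful spectral pretreatment.

```python
import numpy as np

# Minimal NIPALS PLS1: calibrate synthetic "absorbance spectra" against a
# synthetic reference value standing in for artifact-corrected TOR OC.

def pls1_fit(X, y, n_comp):
    """Fit PLS1; return regression coefficients plus the training means."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)
        t = Xc @ w
        tt = t @ t
        p = Xc.T @ t / tt
        W.append(w); P.append(p); q.append(yc @ t / tt)
        Xc = Xc - np.outer(t, p)       # deflate X
        yc = yc - q[-1] * t            # deflate y
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    coef = W @ np.linalg.solve(P.T @ W, q)
    return coef, x_mean, y_mean

rng = np.random.default_rng(1)
n_samples, n_wavenumbers = 200, 400
loadings = rng.standard_normal((3, n_wavenumbers))        # latent "bands"
scores = rng.uniform(0.0, 1.0, (n_samples, 3))
spectra = scores @ loadings + 0.01 * rng.standard_normal((n_samples, n_wavenumbers))
oc = scores @ np.array([2.0, 1.0, 0.5])                   # reference value

coef, xm, ym = pls1_fit(spectra[:150], oc[:150], n_comp=3)
pred = ym + (spectra[150:] - xm) @ coef
ss_res = np.sum((oc[150:] - pred) ** 2)
ss_tot = np.sum((oc[150:] - oc[150:].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"test R^2: {r2:.3f}")
```

Splitting calibration and test sets by site/date rather than randomly, as the abstract describes, is the stricter validation that guards against overfitting to a particular monitoring location.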
Wagner, Florian B; Nielsen, Peter Borch; Boe-Hansen, Rasmus; Albrechtsen, Hans-Jørgen
2016-05-15
Incomplete nitrification in biological filters during drinking water treatment is problematic, as it compromises drinking water quality. Nitrification problems can be caused by a lack of nutrients for the nitrifying microorganisms. Since copper is an important element in one of the essential enzymes in nitrification, we investigated the effect of copper dosing on nitrification in different biological rapid sand filters treating groundwater. A lab-scale column assay with filter material from a water works demonstrated that addition of a trace metal mixture, including copper, increased ammonium removal compared to a control without addition. Subsequently, another water works was investigated in full-scale, where copper influent concentrations were below 0.05 μg Cu L(-1) and nitrification was incomplete. Copper dosing of less than 5 μg Cu L(-1) to a full-scale filter stimulated ammonium removal within one day, and doubled the filter's removal from 0.22 to 0.46 g NH4-N m(-3) filter material h(-1) within 20 days. The location of ammonium and nitrite oxidation shifted upwards in the filter, with an almost 14-fold increase in ammonium removal rate in the filter's top 10 cm, within 57 days of dosing. To study the persistence of the stimulation, copper was dosed to another filter at the water works for 42 days. After dosing was stopped, nitrification remained complete for at least 238 days. Filter effluent concentrations of up to 1.3 μg Cu L(-1) confirmed that copper fully penetrated the filters, and determination of copper content on filter media revealed a buildup of copper during dosing. The amount of copper stored on filter material gradually decreased after dosing stopped; however at a slower rate than it accumulated. Continuous detection of copper in the filter effluent confirmed a release of copper to the bulk phase. 
Overall, copper dosing to poorly performing biological rapid sand filters increased ammonium removal rates significantly, achieving effluent concentrations of below 0.01 mg NH4-N L(-1), and had a long-term effect on nitrification performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cecil, L.D.; Knobel, L.L.; Wegner, S.J.
1989-09-01
From 1952 to 1988, about 140 curies of strontium-90 were discharged in liquid waste to disposal ponds and wells at the INEL (Idaho National Engineering Laboratory). The US Geological Survey routinely samples ground water from the Snake River Plain aquifer and from discontinuous perched-water zones for selected radionuclides, major and minor ions, and chemical and physical characteristics. Water samples for strontium-90 analyses collected in the field are unfiltered and preserved to an approximate 2-percent solution with reagent-grade hydrochloric acid. Water from four wells completed in the Snake River Plain aquifer was sampled as part of the US Geological Survey's quality-assurance program to evaluate the effect of filtration and preservation methods on strontium-90 concentrations in ground water at the INEL. The wells were selected for sampling on the basis of historical concentrations of strontium-90 in ground water. Water from each well was filtered through either a 0.45- or a 0.1-micrometer membrane filter; unfiltered samples also were collected. Two sets of filtered and two sets of unfiltered water samples were collected at each well. One set of water samples was preserved in the field to an approximate 2-percent solution with reagent-grade hydrochloric acid and the other set was not acidified. 13 refs., 2 figs., 6 tabs.
Filter penetration and breathing resistance evaluation of respirators and dust masks.
Ramirez, Joel; O'Shaughnessy, Patrick
2017-02-01
The primary objective of this study was to compare the filter performance of a representative selection of uncertified dust masks relative to that of a set of NIOSH-approved N95 filtering face-piece respirators (FFRs). Five different models of commercially available dust masks were selected for this study. Filter penetration of new dust masks was evaluated against a sodium chloride aerosol. Breathing resistance (BR) of new dust masks and FFRs was then measured for 120 min while challenging the dust masks and FFRs with Arizona road dust (ARD) at 25°C and 30% relative humidity. Results demonstrated that a wide range of maximum filter penetration was observed among the dust masks tested in this study (3-75%) at the most penetrating particle size (p < 0.001). The breathing resistances of the unused FFRs and dust masks did not vary greatly (8-13 mm H2O) but were significantly different (p < 0.001). After dust loading there was a significant difference between the BR caused by the ARD dust layer on each FFR and dust mask. Microscopic analysis of the external layer of each dust mask and FFR suggests that different collection media in the external layer influence the development of the dust layer and therefore affect the increase in BR differently between the tested models. Two of the dust masks had penetration values < 5% and quality factors (0.26 and 0.33) comparable to those obtained for the two FFRs (0.23 and 0.31). However, the remaining three dust masks, those with penetration > 15%, had quality factors ranging between 0.04 and 0.15, primarily because their initial BR remained relatively high. These results indicate that some dust masks analysed during this research did not have the very low BR that would be expected to compensate for their high penetration.
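The quality factors quoted above are consistent with the usual filter quality factor definition QF = -ln(P)/Δp, where P is fractional penetration and Δp is the pressure drop (here, breathing resistance). The small sketch below computes QF for illustrative values within the reported ranges; the exact per-mask numbers are not given in the abstract.

```python
import math

# Filter quality factor QF = -ln(P) / dp: higher is better, combining low
# penetration with low pressure drop. Input values are illustrative,
# chosen within the ranges reported in the abstract.

def quality_factor(penetration, pressure_drop_mmH2O):
    """QF in 1/(mm H2O) from fractional penetration and pressure drop."""
    return -math.log(penetration) / pressure_drop_mmH2O

print(round(quality_factor(0.04, 10.0), 2))   # low-penetration mask
print(round(quality_factor(0.30, 10.0), 2))   # high-penetration mask
```

With a similar pressure drop, dropping penetration from 30% to 4% roughly triples the quality factor, matching the contrast between the ~0.3 and ~0.1 groups in the study.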
Burge, Johannes
2017-01-01
Accuracy Maximization Analysis (AMA) is a recently developed Bayesian ideal observer method for task-specific dimensionality reduction. Given a training set of proximal stimuli (e.g. retinal images), a response noise model, and a cost function, AMA returns the filters (i.e. receptive fields) that extract the most useful stimulus features for estimating a user-specified latent variable from those stimuli. Here, we first contribute two technical advances that significantly reduce AMA’s compute time: we derive gradients of cost functions for which two popular estimators are appropriate, and we implement a stochastic gradient descent (AMA-SGD) routine for filter learning. Next, we show how the method can be used to simultaneously probe the impact on neural encoding of natural stimulus variability, the prior over the latent variable, noise power, and the choice of cost function. Then, we examine the geometry of AMA’s unique combination of properties that distinguish it from better-known statistical methods. Using binocular disparity estimation as a concrete test case, we develop insights that have general implications for understanding neural encoding and decoding in a broad class of fundamental sensory-perceptual tasks connected to the energy model. Specifically, we find that non-orthogonal (partially redundant) filters with scaled additive noise tend to outperform orthogonal filters with constant additive noise; non-orthogonal filters and scaled additive noise can interact to sculpt noise-induced stimulus encoding uncertainty to match task-irrelevant stimulus variability. Thus, we show that some properties of neural response thought to be biophysical nuisances can confer coding advantages to neural systems. Finally, we speculate that, if repurposed for the problem of neural systems identification, AMA may be able to overcome a fundamental limitation of standard subunit model estimation. 
As natural stimuli become more widely used in the study of psychophysical and neurophysiological performance, we expect that task-specific methods for feature learning like AMA will become increasingly important. PMID:28178266
Ensembles of adaptive spatial filters increase BCI performance: an online evaluation
NASA Astrophysics Data System (ADS)
Sannelli, Claudia; Vidaurre, Carmen; Müller, Klaus-Robert; Blankertz, Benjamin
2016-08-01
Objective: In electroencephalographic (EEG) data, signals from distinct sources within the brain are widely spread by volume conduction and superimposed such that sensors receive mixtures of a multitude of signals. This reduction of spatial information strongly hampers single-trial analysis of EEG data as, for example, required for brain-computer interfacing (BCI) when using features from spontaneous brain rhythms. Spatial filtering techniques are therefore greatly needed to extract meaningful information from EEG. Our goal is to show, in online operation, that common spatial pattern patches (CSPP) are valuable to counteract this problem. Approach: Even though the effect of spatial mixing can be countered by spatial filters, there is a trade-off between performance and the requirement of calibration data. Laplacian derivations do not require calibration data at all, but their performance for single-trial classification is limited. Conversely, data-driven spatial filters, such as common spatial patterns (CSP), can lead to highly distinctive features; however they require a considerable amount of training data. Recently, we showed in an offline analysis that CSPP can establish a valuable compromise. In this paper, we confirm these results in an online BCI study. In order to demonstrate the paramount feature that CSPP requires little training data, we used them in an adaptive setting with 20 participants and focused on users who did not have success with previous BCI approaches. Main results: The results of the study show that CSPP adapts faster and thereby allows users to achieve better feedback within a shorter time than previous approaches performed with Laplacian derivations and CSP filters. The success of the experiment highlights that CSPP has the potential to further reduce BCI inefficiency. Significance: CSPP are a valuable compromise between CSP and Laplacian filters. 
They allow users to attain better feedback within a shorter time and thus reduce BCI inefficiency to one-fourth in comparison to previous non-adaptive paradigms.
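As an illustration of the data-driven end of this trade-off, here is a minimal common spatial patterns (CSP) computation on synthetic two-class trials. It uses the standard whitening-plus-diagonalization route; the study's CSPP variant, which estimates patches of such filters from little calibration data, is not reproduced here.

```python
import numpy as np

# Minimal CSP: find spatial filters that maximize variance for one class
# while minimizing it for the other. Synthetic "EEG" trials with extra
# class-1 variance injected on channel 0 (illustrative, not real EEG).

rng = np.random.default_rng(2)
n_trials, n_ch, n_t = 30, 4, 200

def trials(scale):
    x = rng.standard_normal((n_trials, n_ch, n_t))
    x[:, 0, :] *= scale                     # class-dependent channel-0 power
    return x

def cov(x):
    """Average per-trial spatial covariance."""
    return np.mean([xi @ xi.T / n_t for xi in x], axis=0)

c1, c2 = cov(trials(3.0)), cov(trials(1.0))

# Whiten the composite covariance, then diagonalize whitened class-1 cov.
d, v = np.linalg.eigh(c1 + c2)
white = v @ np.diag(d ** -0.5) @ v.T
d1, b = np.linalg.eigh(white @ c1 @ white)
csp = (white @ b).T          # rows = spatial filters, eigenvalue-sorted

# The top filter should emphasize channel 0, where class 1 has extra power.
print(np.abs(csp[-1]) / np.abs(csp[-1]).max())
```

Log-variances of trials projected through the extreme filters (first and last rows of `csp`) are the classical CSP features fed to a classifier; CSPP restricts each filter's support to a local patch of channels so far fewer parameters must be estimated.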
Structural implications of weak Ca2+ block in Drosophila cyclic nucleotide–gated channels
Lam, Yee Ling; Zeng, Weizhong; Derebe, Mehabaw Getahun; Jiang, Youxing
2015-01-01
Calcium permeability and the concomitant calcium block of monovalent ion current (“Ca2+ block”) are properties of cyclic nucleotide–gated (CNG) channel fundamental to visual and olfactory signal transduction. Although most CNG channels bear a conserved glutamate residue crucial for Ca2+ block, the degree of block displayed by different CNG channels varies greatly. For instance, the Drosophila melanogaster CNG channel shows only weak Ca2+ block despite the presence of this glutamate. We previously constructed a series of chimeric channels in which we replaced the selectivity filter of the bacterial nonselective cation channel NaK with a set of CNG channel filter sequences and determined that the resulting NaK2CNG chimeras displayed the ion selectivity and Ca2+ block properties of the parent CNG channels. Here, we used the same strategy to determine the structural basis of the weak Ca2+ block observed in the Drosophila CNG channel. The selectivity filter of the Drosophila CNG channel is similar to that of most other CNG channels except that it has a threonine at residue 318 instead of a proline. We constructed a NaK chimera, which we called NaK2CNG-Dm, which contained the Drosophila selectivity filter sequence. The high resolution structure of NaK2CNG-Dm revealed a filter structure different from those of NaK and all other previously investigated NaK2CNG chimeric channels. Consistent with this structural difference, functional studies of the NaK2CNG-Dm chimeric channel demonstrated a loss of Ca2+ block compared with other NaK2CNG chimeras. Moreover, mutating the corresponding threonine (T318) to proline in Drosophila CNG channels increased Ca2+ block by 16 times. These results imply that a simple replacement of a threonine for a proline in Drosophila CNG channels has likely given rise to a distinct selectivity filter conformation that results in weak Ca2+ block. PMID:26283200
Methodology for Modeling the Microbial Contamination of Air Filters
Joe, Yun Haeng; Yoon, Ki Young; Hwang, Jungho
2014-01-01
In this paper, we propose a theoretical model to simulate microbial growth on contaminated air filters and the entrainment of bioaerosols from the filters into an indoor environment. Air filter filtration and antimicrobial efficiencies, and the effects of dust particles on these efficiencies, were evaluated. The number of bioaerosols downstream of the filter could be characterized according to three phases: initial, transitional, and stationary. In the initial phase, the number was determined by the filtration efficiency, the concentration of dust particles entering the filter, and the flow rate. During the transitional phase, the number of bioaerosols gradually increased up to the stationary phase, at which point no further increase was observed. The antimicrobial efficiency and the flow rate were the dominant parameters affecting the number of bioaerosols downstream of the filter in the transitional and stationary phases, respectively. It was found that the nutrient fraction of dust particles entering the filter caused a significant change in the number of bioaerosols in both the transitional and stationary phases. The proposed model offers a way to predict the air filter life cycle in terms of microbiological activity by simulating microbial contamination of the filter. PMID:24523908
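The three phases can be reproduced qualitatively with a toy simulation: downstream counts start at the level set by filtration, rise as microbes grow on the filter and re-entrain, then level off when growth saturates. All parameter values below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

# Toy three-phase model of downstream bioaerosol counts: penetration of
# upstream bioaerosols plus re-entrainment from a saturating colony
# population growing on the filter. All parameters are assumed.

hours = np.arange(0.0, 200.0, 1.0)
upstream = 100.0             # bioaerosols arriving per hour
penetration = 0.05           # 1 - filtration efficiency
k_growth, capacity = 0.08, 5e4
colonies = 10.0 * np.exp(k_growth * hours)
colonies = capacity * colonies / (capacity + colonies)   # logistic-like cap
entrain = 0.01               # fraction of colonies re-entrained per hour
downstream = penetration * upstream + entrain * colonies

print("initial phase:   ", round(float(downstream[0]), 1))
print("stationary phase:", round(float(downstream[-1]), 1))
```

The initial value depends only on filtration efficiency and the incoming load, while the stationary plateau is set by the balance of growth (limited here by a carrying capacity standing in for antimicrobial efficiency and nutrients) and entrainment, mirroring the phase structure in the abstract.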
Optimizing Fungal DNA Extraction Methods from Aerosol Filters
NASA Astrophysics Data System (ADS)
Jimenez, G.; Mescioglu, E.; Paytan, A.
2016-12-01
Fungi and fungal spores can be picked up from terrestrial ecosystems, transported long distances, and deposited into marine ecosystems. It is important to study dust-borne fungal communities because they can stay viable and affect the ambient microbial populations, which are key players in biogeochemical cycles. One of the challenges of studying dust-borne fungal populations is that aerosol samples contain low biomass, making extracting good quality DNA very difficult. The aim of this project was to increase DNA yield by optimizing DNA extraction methods. We tested aerosol samples collected from Haifa, Israel (polycarbonate filter), Monterey Bay, CA (quartz filter) and Bermuda (quartz filter). Using the Qiagen DNeasy Plant Kit, we tested the effect of altering bead beating times and incubation times, adding three freeze and thaw steps, initially washing the filters with buffers for various lengths of time before using the kit, and adding a step with 30 minutes of sonication in 65 °C water. Adding three freeze/thaw steps, adding a sonication step, washing with a phosphate buffered saline overnight, and increasing incubation time to two hours, in that order, resulted in the highest increase in DNA for samples from Israel (polycarbonate filter). DNA yield of samples from Monterey (quartz filter) increased about 5 times when washing with buffers overnight (phosphate buffered saline and potassium phosphate buffer), adding a sonication step, and adding three freeze and thaw steps. Samples collected in Bermuda (quartz filter) had the highest increase in DNA yield from increasing incubation to 2 hours, increasing bead beating time to 6 minutes, and washing with buffers overnight (phosphate buffered saline and potassium phosphate buffer). Our results show that DNA yield can be increased by altering various steps of the Qiagen DNeasy Plant Kit protocol, but different types of filters collected at different sites respond differently to alterations.
These results can serve as a preliminary basis for further development of fungal DNA extraction methods. Developing these methods will be important as dust storms are predicted to increase due to increasing droughts and anthropogenic activity, and the fungal communities of these dust storms are currently relatively understudied.
Tone mapping infrared images using conditional filtering-based multi-scale retinex
NASA Astrophysics Data System (ADS)
Luo, Haibo; Xu, Lingyun; Hui, Bin; Chang, Zheng
2015-10-01
Tone mapping can be used to compress the dynamic range of image data so that it fits within the range of the reproduction media and human vision. The original infrared images captured with infrared focal plane arrays (IFPA) are high-dynamic-range images, so tone mapping is an important component of infrared imaging systems and has become an active topic in recent years. In this paper, we present a tone mapping framework using multi-scale retinex. Firstly, a Conditional Gaussian Filter (CGF) is designed to suppress the "halo" effect. Secondly, the original infrared image is decomposed into a set of images that represent the mean of the image at different spatial resolutions by applying the CGF at different scales. A set of images representing the multi-scale details of the original image is then produced by dividing the original image pointwise by each decomposed image. Thirdly, the final detail image is reconstructed as a weighted sum of the multi-scale detail images. Finally, histogram scaling and clipping are applied to remove outliers and scale the detail image; 0.1% of the pixels are clipped at each extremity of the histogram. Experimental results show that the proposed algorithm efficiently increases local contrast while preventing the "halo" effect and provides a good rendition of visual effect.
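The pipeline above (multi-scale decomposition, pointwise division, weighted recombination, and 0.1% histogram clipping) can be sketched as follows. A plain Gaussian filter stands in for the paper's Conditional Gaussian Filter, and the scales and weights are illustrative choices, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex_tonemap(img, sigmas=(2, 8, 32), weights=None, clip=0.001):
    """Sketch of the multi-scale retinex tone mapping pipeline.
    A plain Gaussian filter stands in for the paper's Conditional
    Gaussian Filter (CGF); sigmas and weights are illustrative."""
    img = img.astype(np.float64) + 1.0            # avoid division by zero
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)
    detail = np.zeros_like(img)
    for s, w in zip(sigmas, weights):
        mean = gaussian_filter(img, sigma=s)      # local mean at this scale
        detail += w * (img / mean)                # pointwise ratio = detail image
    # histogram clipping: drop 0.1% of pixels at each extremity, then rescale
    lo, hi = np.percentile(detail, [100 * clip, 100 * (1 - clip)])
    detail = np.clip(detail, lo, hi)
    return ((detail - lo) / (hi - lo) * 255).astype(np.uint8)
```

The output is an 8-bit image regardless of the input dynamic range, which is the point of the tone mapping step.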
An occupational exposure assessment for engineered nanoparticles used in semiconductor fabrication.
Shepard, Michele Noble; Brenner, Sara
2014-03-01
Engineered nanoparticles of alumina, amorphous silica, and ceria are used in semiconductor device fabrication during wafer polishing steps referred to as 'chemical mechanical planarization' (CMP). Some metal oxide nanoparticles can impact the biological response of cells and organ systems and may cause adverse health effects; additional research is necessary to better understand potential risks from nanomaterial applications and occupational exposure scenarios. This study was conducted to assess potential airborne exposures to nanoparticles and agglomerates using direct-reading instruments and filter-based samples to characterize workplace aerosols by particle number, mass, size, composition, and morphology. Sampling was repeated for tasks in three work areas (fab, subfab, wastewater treatment) at a facility using engineered nanoparticles for CMP. Real-time measurements were collected using a condensation particle counter (CPC), optical particle counter, and scanning mobility particle spectrometer (SMPS). Filter-based samples were analyzed for total mass or the respirable fraction, and for specific metals of interest. Additional air sample filters were analyzed by transmission electron microscopy with energy dispersive x-ray spectroscopy (TEM/EDX) for elemental identification and to provide data on particle size, morphology, and concentration. Peak concentrations measured on the CPC ranged from 1 to 16 particles per cubic centimeter (P cm(-3)) for background and from 4 to 74 P cm(-3) during tasks sampled in the fab; from 1 to 60 P cm(-3) for background and from 3 to 84 P cm(-3) for tasks sampled in the subfab; and from 1160 to 45 894 P cm(-3) for background and from 1710 to 45 519 P cm(-3) during wastewater treatment system filter change tasks. Significant variability was seen among the repeated task measurements and among background comparisons in each area. Several data analysis methods were used to compare each set of task and background measurements. 
Increased concentrations of respirable particles were identified for some tasks sampled in each work area, although of relatively low magnitude and inconsistently among repeated measurements for specific tasks. Measurements with a portable SMPS indicated that nanoparticle number concentrations (channels 11.5-115.5 nm) increased above background levels by 3.2 P cm(-3) during CMP tool set-up in the fab area but were not elevated when changing filters for the CMP wastewater treatment system. All results from mass concentration analysis were below the limits of detection. Characterization by TEM/EDX identified structures containing the elements of interest (Al, Si), primarily as agglomerates or aggregates in the 100-1000 nm size range. Although health-based occupational exposure limits have not been established for nanoscale alumina, silica, or ceria, the measured concentrations by number and mass were below currently proposed benchmarks or reference values for poorly soluble low-toxicity nanoparticles.
Damarell, Raechel A; Tieman, Jennifer J; Sladek, Ruth M
2013-07-02
PubMed translations of OvidSP Medline search filters offer searchers improved ease of access. They may also facilitate access to PubMed's unique content, including citations for the most recently published biomedical evidence. Retrieving this content requires a search strategy comprising natural language terms ('textwords'), rather than Medical Subject Headings (MeSH). We describe a reproducible methodology that uses a validated PubMed search filter translation to create a textword-only strategy to extend retrieval to PubMed's unique heart failure literature. We translated an OvidSP Medline heart failure search filter for PubMed and established version equivalence in terms of indexed literature retrieval. The PubMed version was then run within PubMed to identify citations retrieved by the filter's MeSH terms (Heart failure, Left ventricular dysfunction, and Cardiomyopathy). It was then rerun with the same MeSH terms restricted to searching on title and abstract fields (i.e. as 'textwords'). Citations retrieved by the MeSH search but not the textword search were isolated. Frequency analysis of their titles/abstracts identified natural language alternatives for those MeSH terms that performed less effectively as textwords. These terms were tested in combination to determine the best performing search string for reclaiming this 'lost set'. This string, restricted to searching on PubMed's unique content, was then combined with the validated PubMed translation to extend the filter's performance in this database. The PubMed heart failure filter retrieved 6829 citations. Of these, 834 (12%) failed to be retrieved when MeSH terms were converted to textwords. Frequency analysis of the 834 citations identified five high frequency natural language alternatives that could improve retrieval of this set (cardiac failure, cardiac resynchronization, left ventricular systolic dysfunction, left ventricular diastolic dysfunction, and LV dysfunction). 
Together these terms reclaimed 157/834 (18.8%) of lost citations. MeSH terms facilitate precise searching in PubMed's indexed subset. They may, however, work less effectively as search terms prior to subject indexing. A validated PubMed search filter can be used to develop a supplementary textword-only search strategy to extend retrieval to PubMed's unique content. A PubMed heart failure search filter is available on the CareSearch website (http://www.caresearch.com.au) providing access to both indexed and non-indexed heart failure evidence.
Recent Advances on Endocrine Disrupting Effects of UV Filters.
Wang, Jiaying; Pan, Liumeng; Wu, Shenggan; Lu, Liping; Xu, Yiwen; Zhu, Yanye; Guo, Ming; Zhuang, Shulin
2016-08-03
Ultraviolet (UV) filters are used widely in cosmetics, plastics, adhesives and other industrial products to protect human skin or products against direct exposure to deleterious UV radiation. With their growing use and improper disposal, UV filters now represent a new class of contaminants of emerging concern, with increasingly reported adverse effects in humans and other organisms. Exposure to UV filters induces various endocrine disrupting effects, as revealed by an increasing number of toxicological studies performed in recent years. A systematic review of the current research status on the endocrine disrupting effects of UV filters in different organisms is therefore needed. We summarize recent advances in the evaluation of potential endocrine disruptors and the mechanisms of toxicity for several classes of UV filters, such as benzophenones, camphor derivatives and cinnamate derivatives.
NASA Astrophysics Data System (ADS)
Simon, Ehouarn; Samuelsen, Annette; Bertino, Laurent; Mouysset, Sandrine
2015-12-01
A sequence of one-year combined state-parameter estimation experiments has been conducted in a North Atlantic and Arctic Ocean configuration of the coupled physical-biogeochemical model HYCOM-NORWECOM over the period 2007-2010. The aim is to evaluate the ability of an ensemble-based data assimilation method to calibrate ecosystem model parameters in a pre-operational setting, namely the production of the MyOcean pilot reanalysis of the Arctic biology. For that purpose, four biological parameters (two phyto- and two zooplankton mortality rates) are estimated by assimilating weekly data such as satellite-derived Sea Surface Temperature, along-track Sea Level Anomalies, ice concentrations and chlorophyll-a concentrations with an Ensemble Kalman Filter. The set of optimized parameters locally exhibits seasonal variations, suggesting that time-dependent parameters should be used in ocean ecosystem models. A clustering analysis of the optimized parameters is performed in order to identify consistent ecosystem regions. In the northern part of the domain, where the ecosystem model is the most reliable, most of the clusters can be associated with Longhurst provinces, and new provinces emerge in the Arctic Ocean. However, the clusters no longer coincide with the Longhurst provinces in the Tropics due to large model errors. Regarding the ecosystem state variables, the assimilation of satellite-derived chlorophyll concentration leads to a significant reduction of the RMS errors in the observed variables during the first year, i.e. 2008, compared to a free run simulation. However, local filter divergences of the parameter component occur in 2009 and result in an increase in the RMS error at the time of the spring bloom.
Maximizing noise energy for noise-masking studies.
Jules Étienne, Cédric; Arleo, Angelo; Allard, Rémy
2017-08-01
Noise-masking experiments are widely used to investigate visual functions. To be useful, noise generally needs to be strong enough to noticeably impair performance, but under some conditions, noise does not impair performance even when its contrast approaches the maximal displayable limit of 100%. To extend the usefulness of noise-masking paradigms over a wider range of conditions, the present study developed a noise with great masking strength. There are two typical ways of increasing masking strength without exceeding the limited contrast range: use binary noise instead of Gaussian noise or filter out frequencies that are not relevant to the task (i.e., which can be removed without affecting performance). The present study combined these two approaches to further increase masking strength. We show that binarizing the noise after the filtering process substantially increases the energy at frequencies within the pass-band of the filter given equated total contrast ranges. A validation experiment showed that similar performances were obtained using binarized-filtered noise and filtered noise (given equated noise energy at the frequencies within the pass-band), suggesting that the binarization operation, which substantially reduced the contrast range, had no significant impact on performance. We conclude that binarized-filtered noise (and more generally, truncated-filtered noise) can substantially increase the energy of the noise at frequencies within the pass-band. Thus, given a limited contrast range, binarized-filtered noise can display higher energy levels than Gaussian noise and thereby widen the range of conditions over which noise-masking paradigms can be useful.
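The two-step construction described above (band-pass filter the Gaussian noise, then binarize it) can be sketched as follows; the image size and pass-band limits are illustrative assumptions, not the study's stimulus parameters.

```python
import numpy as np

def binarized_filtered_noise(size=256, f_lo=4, f_hi=16, rng=None):
    """Sketch of binarized-filtered noise: filter Gaussian noise to a
    frequency pass-band, then binarize (take the sign) to maximize noise
    energy within a limited contrast range. Band limits in cycles/image
    are illustrative choices."""
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal((size, size))
    # radial frequency coordinates (cycles per image)
    f = np.fft.fftfreq(size) * size
    radius = np.hypot(*np.meshgrid(f, f, indexing="ij"))
    bandpass = (radius >= f_lo) & (radius <= f_hi)
    filtered = np.real(np.fft.ifft2(np.fft.fft2(noise) * bandpass))
    binarized = np.sign(filtered)       # contrast range is now exactly ±1
    return filtered, binarized
```

Comparing spectral energy within the pass-band, with both patterns scaled to the same maximal contrast, shows that the binarized version carries substantially more noise energy there, which is the effect the study exploits.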
Evaluation of Plastic Media Blasting Equipment
1987-04-01
The mechanism should be automatic, activated by the differential pressure across the filter element or by a timer with a differential pressure switch override. The timer and the differential pressure switch settings should be adjustable. The dust then falls to the bottom of the baghouse...
An overland flow sampler for use in vegetative filters
D. Eisenhauer; M. Helmers; J. Brothers; M. Dosskey; T. Franti; A. Boldt; B. Strahm
2002-01-01
Vegetative filters (VF) are used to remove contaminants from agricultural runoff and improve surface water quality. Techniques are needed to quantify the performance of VF in realistic field settings. The goal of this project was to develop and test a relatively simple and low cost method for sampling overland flow in a VF. The 0.3 m wide sampler has the capacity to...
USDA-ARS?s Scientific Manuscript database
Introduction: Zero-valent iron (ZVI) filters may provide an efficient method to mitigate the contamination of produce crops through irrigation water. Purpose: To evaluate the use of ZVI-filtration in decontaminating E. coli O157:H12 in irrigation water and on spinach plants in a small, field-scale...
DNAPL Dissolution in Bedrock Fractures And Fracture Networks
2011-06-01
were filtered through a 0.2 micron filter and then analyzed via ion chromatography (Dionex DX-120, Sunnyvale, CA). An additional set of sorption... The effluent pH was monitored periodically with pH test strips. Aqueous DHC...
Brüllmann, D D; d'Hoedt, B
2011-05-01
The aim of this study was to illustrate the influence of digital filters on the signal-to-noise ratio (SNR) and modulation transfer function (MTF) of digital images. The article will address image pre-processing that may be beneficial for the production of clinically useful digital radiographs with lower radiation dose. Three filters, an arithmetic mean filter, a median filter and a Gaussian filter (standard deviation (SD) = 0.4), with kernel sizes of 3 × 3 pixels and 5 × 5 pixels were tested. Synthetic images with precisely increasing amounts of Gaussian noise were created to derive a linear regression of SNR before and after application of digital filters. Artificial stripe patterns with defined amounts of line pairs per millimetre were used to calculate MTF before and after the application of the digital filters. The Gaussian filter with a 5 × 5 kernel size caused the highest noise suppression (SNR increased from 2.22, measured in the synthetic image, to 11.31 in the filtered image). The smallest noise reduction was found with the 3 × 3 median filter. The application of the median filters resulted in no changes in MTF at the different resolutions but did result in the deletion of smaller structures. The 5 × 5 Gaussian filter and the 5 × 5 arithmetic mean filter showed the strongest changes of MTF. The application of digital filters can improve the SNR of a digital sensor; however, MTF can be adversely affected. As such, imaging systems should not be judged solely on their quoted spatial resolutions because pre-processing may influence image quality.
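A minimal sketch of the comparison described above: 3 × 3 and 5 × 5 mean, median and Gaussian filters are applied to a synthetic noisy image and SNR is measured against the known clean image. The test pattern, noise level and SNR definition (signal power over residual-noise power) are illustrative assumptions, not the study's exact setup.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter, gaussian_filter

def snr(clean, noisy):
    """SNR as ratio of signal power to residual-noise power
    (illustrative definition; the paper's exact formula may differ)."""
    residual = noisy - clean
    return clean.var() / residual.var()

# synthetic test image (smooth stripes) plus additive Gaussian noise
rng = np.random.default_rng(1)
clean = np.outer(np.sin(np.linspace(0, 8 * np.pi, 128)), np.ones(128))
noisy = clean + rng.normal(0, 0.3, clean.shape)

results = {
    "mean 3x3":   uniform_filter(noisy, size=3),
    "mean 5x5":   uniform_filter(noisy, size=5),
    "median 3x3": median_filter(noisy, size=3),
    "median 5x5": median_filter(noisy, size=5),
    "gauss 3x3":  gaussian_filter(noisy, sigma=0.4, truncate=2.5),
    "gauss 5x5":  gaussian_filter(noisy, sigma=1.0, truncate=2.0),
}
for name, img in results.items():
    print(name, round(snr(clean, img), 2))
```

Each filter improves SNR over the unfiltered image on this smooth test pattern; as the abstract notes, the trade-off against MTF (fine-structure loss) is not visible in an SNR measurement alone.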
Oatts, Thomas J; Hicks, Cheryl E; Adams, Amy R; Brisson, Michael J; Youmans-McDonald, Linda D; Hoover, Mark D; Ashley, Kevin
2012-02-01
Occupational sampling and analysis for multiple elements is generally approached using various approved methods from authoritative government sources such as the National Institute for Occupational Safety and Health (NIOSH), the Occupational Safety and Health Administration (OSHA) and the Environmental Protection Agency (EPA), as well as consensus standards bodies such as ASTM International. The constituents of a sample can exist as unidentified compounds requiring sample preparation to be chosen appropriately, as in the case of beryllium in the form of beryllium oxide (BeO). An interlaboratory study was performed to collect analytical data from volunteer laboratories to examine the effectiveness of methods currently in use for preparation and analysis of samples containing calcined BeO powder. NIST SRM(®) 1877 high-fired BeO powder (1100 to 1200 °C calcining temperature; count median primary particle diameter 0.12 μm) was used to spike air filter media as a representative form of beryllium particulate matter present in workplace sampling that is known to be resistant to dissolution. The BeO powder standard reference material was gravimetrically prepared in a suspension and deposited onto 37 mm mixed cellulose ester air filters at five different levels between 0.5 μg and 25 μg of Be (as BeO). Sample sets consisting of five BeO-spiked filters (in duplicate) and two blank filters, for a total of twelve unique air filter samples per set, were submitted as blind samples to each of 27 participating laboratories. Participants were instructed to follow their current process for sample preparation and utilize their normal analytical methods for processing samples containing substances of this nature. Laboratories using more than one sample preparation and analysis method were provided with more than one sample set. Results from 34 data sets ultimately received from the 27 volunteer laboratories were subjected to applicable statistical analyses. 
The observed performance data show that sample preparations using nitric acid alone, or combinations of nitric and hydrochloric acids, are not effective for complete extraction of Be from the SRM 1877 refractory BeO particulate matter spiked on air filters; but that effective recovery can be achieved by using sample preparation procedures utilizing either sulfuric or hydrofluoric acid, or by using methodologies involving ammonium bifluoride with heating. Laboratories responsible for quantitative determination of Be in workplace samples that may contain high-fired BeO should use quality assurance schemes that include BeO-spiked sampling media, rather than solely media spiked with soluble Be compounds, and should ensure that methods capable of quantitative digestion of Be from the actual material present are used.
Noise exposure is increased with neonatal helmet CPAP in comparison with conventional nasal CPAP.
Trevisanuto, D; Camiletti, L; Doglioni, N; Cavallin, F; Udilano, A; Zanardo, V
2011-01-01
In adults, noninvasive ventilation via a helmet is associated with significantly greater noise than nasal and facial masks. We hypothesized that noise exposure could be increased with neonatal helmet continuous positive airway pressure (CPAP) in comparison with conventional nasal CPAP (nCPAP). Our primary objective was to compare the noise intensity produced by a neonatal helmet CPAP and a conventional nCPAP system. Furthermore, we aimed to evaluate the effect of the gas flow rate and the presence of the humidifier and the filter on noise levels during neonatal helmet CPAP treatment. In this bench study, noise intensity was measured in the following settings: helmet CPAP, nCPAP, incubator and the neonatal intensive care unit. In helmet CPAP, noise measurements were performed at different gas flow rates (8, 10 and 12 l/min), while in nCPAP, the flow rate was 8 l/min. For both CPAP systems, the level of pressure was maintained constant at 5 cmH(2)O. During neonatal helmet CPAP, the median (interquartile range) noise levels were significantly higher than those during nCPAP: 70.0 dB (69.9-70.4) vs. 62.7 dB (62.5-63.0); P<0.001. In the helmet CPAP, the noise intensities changed with increasing flow rate and with the presence of a humidifier or a filter. Noise intensities generated by the neonatal helmet CPAP were significantly higher than those registered while using a conventional nCPAP system. In the helmet, the noise intensity depends on the gas flow rate and the presence of a humidifier and a filter in the system. © 2010 The Acta Anaesthesiologica Scandinavica Foundation.
Determination of spatially dependent diffusion parameters in bovine bone using Kalman filter.
Shokry, Abdallah; Ståhle, Per; Svensson, Ingrid
2015-11-07
Although many studies have addressed homogeneous constant diffusion, bone is an inhomogeneous material. It has been suggested that bone porosity decreases from the inner boundaries to the outer boundaries of the long bones. The diffusivity of substances in the bone matrix is believed to increase as the bone porosity increases. In this study, an experimental setup is used in which bovine bone samples, saturated with potassium chloride (KCl), were put into distilled water and the conductivity of the water was followed. Chloride ions in the bone samples escaped into the water through diffusion and the increase in conductivity was measured. A one-dimensional, spatially dependent mathematical model describing the diffusion process is used. The diffusion parameters in the model are determined using a Kalman filter technique. The spatially dependent diffusion parameters at the endosteal and periosteal surfaces are found to be (12.8 ± 4.7) × 10(-11) and (5 ± 3.5) × 10(-11) m(2)/s respectively. The mathematical model using the obtained diffusion parameters fits very well with the experimental data, with a mean square error varying from 0.06 × 10(-6) to 0.183 × 10(-6) (μS/m)(2). Copyright © 2015 Elsevier Ltd. All rights reserved.
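The core idea of Kalman-filter-based parameter estimation can be illustrated with a minimal scalar example: the unknown coefficient is modeled as a nearly constant state and refined as noisy measurements arrive. All numerical values (process and measurement noise, initialization, the "true" parameter) are illustrative, and this sketch omits the spatial diffusion model the study actually couples the filter to.

```python
import numpy as np

def scalar_kalman_estimate(z, q=1e-26, r=1e-22, x0=0.0, p0=1e-20):
    """Scalar Kalman filter treating an unknown parameter (e.g. a
    diffusion coefficient) as a nearly constant state. q is the tiny
    process noise allowing slow drift, r the measurement noise variance;
    all values here are illustrative."""
    x, p = x0, p0
    for zk in z:
        p += q                   # predict: parameter assumed ~constant
        k = p / (p + r)          # Kalman gain
        x += k * (zk - x)        # update with measurement innovation
        p *= (1 - k)             # posterior variance
    return x

# noisy direct observations of a hypothetical "true" parameter value
rng = np.random.default_rng(2)
true_value = 12.8e-11
z = true_value + rng.normal(0, 1e-11, 200)
est = scalar_kalman_estimate(z)
```

The filter converges toward the true value while automatically down-weighting later measurements as its posterior variance shrinks; the study's version estimates several spatially varying parameters against a diffusion model rather than direct observations.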
Chang, Herng-Hua; Chang, Yu-Ning
2017-04-01
Bilateral filters have been substantially exploited in numerous magnetic resonance (MR) image restoration applications for decades. Due to the deficiency of theoretical basis on the filter parameter setting, empirical manipulation with fixed values and noise variance-related adjustments has generally been employed. The outcome of these strategies is usually sensitive to the variation of the brain structures, and not all three parameter values are optimal. This article investigates the optimal setting of the bilateral filter, from which an accelerated and automated restoration framework is developed. To reduce the computational burden of the bilateral filter, parallel computing with the graphics processing unit (GPU) architecture is first introduced. The NVIDIA Tesla K40c GPU with the compute unified device architecture (CUDA) functionality is specifically utilized to emphasize thread usages and memory resources. To correlate the filter parameters with image characteristics for automation, optimal image texture features are subsequently acquired based on the sequential forward floating selection (SFFS) scheme. The selected features are then introduced into the back propagation network (BPN) model for filter parameter estimation. Finally, the k-fold cross validation method is adopted to evaluate the accuracy of the proposed filter parameter prediction framework. A wide variety of T1-weighted brain MR images with various scenarios of noise levels and anatomic structures were utilized to train and validate this new parameter decision system with CUDA-based bilateral filtering. For a common brain MR image volume of 256 × 256 × 256 pixels, the speed-up gain reached 284. Six optimal texture features were acquired and associated with the BPN to establish a "high accuracy" parameter prediction system, which achieved a mean absolute percentage error (MAPE) of 5.6%.
Automatic restoration results on 2460 brain MR images received an average relative error in terms of peak signal-to-noise ratio (PSNR) less than 0.1%. In comparison with many state-of-the-art filters, the proposed automation framework with CUDA-based bilateral filtering provided more favorable results both quantitatively and qualitatively. Possessing unique characteristics and demonstrating exceptional performances, the proposed CUDA-based bilateral filter adequately removed random noise in multifarious brain MR images for further study in neurosciences and radiological sciences. It requires no prior knowledge of the noise variance and automatically restores MR images while preserving fine details. The strategy of exploiting the CUDA to accelerate the computation and incorporating texture features into the BPN to completely automate the bilateral filtering process is achievable and validated, from which the best performance is reached. © 2017 American Association of Physicists in Medicine.
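For reference, a minimal (CPU, pure-Python) bilateral filter makes the three tuned parameters explicit: window size, spatial sigma and range (radiometric) sigma. The parameter values below are illustrative defaults, not the optima the framework predicts.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Reference bilateral filter on a 2-D float image. Each output
    pixel is a weighted mean of its (2*radius+1)^2 neighborhood, with
    weights combining spatial closeness (sigma_s) and intensity
    similarity (sigma_r). Defaults are illustrative."""
    H, W = img.shape
    pad = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range kernel: down-weight intensities unlike the center pixel
            range_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w = spatial * range_w
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

The doubly nested pixel loop is exactly the part that parallelizes trivially on a GPU, since each output pixel is computed independently, which is why the CUDA port in the study yields such large speed-ups.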
Numerical studies on the performance of an aerosol respirator with faceseal leakage
NASA Astrophysics Data System (ADS)
Zaripov, S. K.; Mukhametzanov, I. T.; Grinshpun, S. A.
2016-11-01
We studied the efficiency of a facepiece filtering respirator (FFR) in the presence of a measurable faceseal leakage using the previously developed model of a spherical sampler with a porous layer. In our earlier study, the model was validated for a specific filter permeability value. In this follow-up study, we investigated the effect of permeability on the overall respirator performance accounting for the faceseal leakage. The Total Inward Leakage (TIL) was calculated as a function of the leakage-to-filter surface ratio and the particle diameter. A good correlation was found between the theoretical and experimental TIL values. The TIL value was shown to increase, and the effect of particle size on TIL to decrease, as the leakage-to-filter surface ratio grows. The model confirmed that within the most penetrating particle size range (∼50 nm) and at relatively low leakage-to-filter surface ratios, an FFR performs better (TIL is lower) when the filter has a lower permeability, which should be anticipated as long as the flow through the filter represents the dominant particle penetration pathway. An increase in leak size causes the TIL to rise; furthermore, at certain leakage-to-filter surface ratios, TIL for ultrafine particles becomes essentially independent of the filter properties due to a greater contribution of the aerosol flow through the faceseal leakage. In contrast to the ultrafine fraction, the larger particles (e.g., 800 nm) entering a typical high- or medium-quality respirator filter are almost fully collected by the filter medium regardless of its permeability; at the same time, the fraction penetrating through the leakage appears to be permeability-dependent: higher permeability generally results in a lower pressure drop through the filter, which increases the air flow through the filter at the expense of the leakage flow. The latter reduces the leakage effect, thus improving the overall respiratory protection level.
The findings of this study provide valuable information for developing new respirators with a predictable actual workplace protection factor.
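The qualitative flow-splitting argument above can be captured in a toy two-path model (not the paper's spherical-sampler model): breathing flow divides between filter and leak in inverse proportion to their flow resistances, and each path transmits its own particle fraction. All quantities here are dimensionless illustrative assumptions.

```python
def total_inward_leakage(p_filter, r_filter, r_leak, p_leak=1.0):
    """Toy two-path TIL model. p_filter/p_leak are the particle
    penetration fractions of each path (leak assumed fully penetrating);
    r_filter/r_leak are relative flow resistances. Flow splits in
    inverse proportion to resistance, as in parallel conductances."""
    q_filter = (1 / r_filter) / (1 / r_filter + 1 / r_leak)  # filter flow share
    q_leak = 1.0 - q_filter                                  # leak flow share
    return q_filter * p_filter + q_leak * p_leak
```

Even this crude model reproduces two trends from the study: a larger leak (lower leak resistance) raises TIL, and a more permeable filter (lower filter resistance) draws flow away from the leak, reducing the leak's contribution for particles the filter medium collects well.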