Error minimizing algorithms for nearest neighbor classifiers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, Reid B; Hush, Don; Zimmer, G. Beate
2011-01-03
Stack Filters define a large class of discrete nonlinear filters first introduced in image and signal processing for noise removal. In recent years we have suggested their application to classification problems, and investigated their relationship to other types of discrete classifiers such as Decision Trees. In this paper we focus on a continuous domain version of Stack Filter Classifiers which we call Ordered Hypothesis Machines (OHM), and investigate their relationship to Nearest Neighbor classifiers. We show that OHM classifiers provide a novel framework in which to train Nearest Neighbor type classifiers by minimizing empirical error based loss functions. We use the framework to investigate a new cost sensitive loss function that allows us to train a Nearest Neighbor type classifier for low false alarm rate applications. We report results on both synthetic data and real-world image data.
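The defining property of a stack filter is that it applies one positive Boolean function (PBF) to every level of the input's threshold decomposition and sums the results. A minimal sketch of this construction (the window size, test signal, and the median PBF are illustrative, not taken from the paper):

```python
import numpy as np

def stack_filter(x, window, pbf):
    """Apply a stack filter: threshold-decompose x into binary levels,
    run the positive Boolean function `pbf` on each level, sum the levels."""
    x = np.asarray(x, dtype=int)
    half = window // 2
    padded = np.pad(x, half, mode='edge')
    out = np.zeros_like(x)
    for t in range(1, x.max() + 1):
        binary = (padded >= t).astype(int)        # one binary level per threshold
        for i in range(len(x)):
            out[i] += pbf(binary[i:i + window])
    return out

# Median is a positive Boolean function: output 1 iff a majority of the
# window samples are 1. Stacking it reproduces the ordinary median filter.
median_pbf = lambda w: int(w.sum() > len(w) // 2)

x = [3, 1, 4, 1, 5, 9, 2, 6]
print(stack_filter(x, 3, median_pbf))
```

Swapping in a different PBF yields a different member of the stack filter class; the design problem in the papers above is choosing that PBF to minimize a loss.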
Álvarez, Aitor; Sierra, Basilio; Arruti, Andoni; López-Gil, Juan-Miguel; Garay-Vitoria, Nestor
2015-01-01
In this paper, a new supervised classification paradigm, called classifier subset selection for stacked generalization (CSS stacking), is presented to deal with speech emotion recognition. The new approach consists of an improvement of a bi-level multi-classifier system known as stacked generalization by means of an integration of an estimation of distribution algorithm (EDA) in the first layer to select the optimal subset from the standard base classifiers. The good performance of the proposed new paradigm was demonstrated over different configurations and datasets. First, several CSS stacking classifiers were constructed on the RekEmozio dataset, using some specific standard base classifiers and a total of 123 spectral, quality and prosodic features computed using in-house feature extraction algorithms. These initial CSS stacking classifiers were compared to other multi-classifier systems and the employed standard classifiers built on the same set of speech features. Then, new CSS stacking classifiers were built on RekEmozio using a different set of both acoustic parameters (extended version of the Geneva Minimalistic Acoustic Parameter Set (eGeMAPS)) and standard classifiers and employing the best meta-classifier of the initial experiments. The performance of these two CSS stacking classifiers was evaluated and compared. Finally, the new paradigm was tested on the well-known Berlin Emotional Speech database. We compared the performance of single, standard stacking and CSS stacking systems using the same parametrization of the second phase. All of the classifications were performed at the categorical level, including the six primary emotions plus the neutral one. PMID:26712757
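The CSS idea can be illustrated without an EDA: for a small pool of base classifiers, exhaustive search over subsets plays the role the EDA plays at larger scales. A sketch using scikit-learn (the classifier pool, combiner, and synthetic dataset are placeholders, not the paper's setup):

```python
from itertools import combinations

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
base = [GaussianNB(), KNeighborsClassifier(), DecisionTreeClassifier(random_state=0)]

# Layer 0: out-of-fold class-probability predictions from each base classifier.
meta = [cross_val_predict(c, X, y, cv=5, method='predict_proba') for c in base]

# Subset selection (exhaustive here; the paper uses an EDA for larger pools):
# keep the subset whose stacked meta-features score best under the combiner.
best_score, best_subset = -1.0, None
for r in range(1, len(base) + 1):
    for subset in combinations(range(len(base)), r):
        Z = np.hstack([meta[i] for i in subset])
        score = cross_val_score(LogisticRegression(), Z, y, cv=5).mean()
        if score > best_score:
            best_score, best_subset = score, subset

print('selected base classifiers:', best_subset, 'cv accuracy:', round(best_score, 3))
```

With k base classifiers there are 2^k - 1 subsets, which is why the paper resorts to an EDA rather than enumeration.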
Distortion analysis of subband adaptive filtering methods for FMRI active noise control systems.
Milani, Ali A; Panahi, Issa M; Briggs, Richard
2007-01-01
Delayless subband filtering structure, as a high-performance frequency-domain filtering technique, is used for canceling broadband fMRI noise (8 kHz bandwidth). In this method, adaptive filtering is done in subbands and the coefficients of the main canceling filter are computed by stacking the subband weights together. There are two types of stacking methods, called FFT and FFT-2. In this paper, we analyze the distortion introduced by these two stacking methods. The effect of the stacking distortion on the performance of different adaptive filters in the FXLMS algorithm with a non-minimum phase secondary path is explored. The investigation is done for different adaptive algorithms (nLMS, APA and RLS), different weight stacking methods, and different numbers of subbands.
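Of the adaptive algorithms compared, nLMS is the simplest to sketch. Below is a minimal fullband normalized-LMS system-identification example; the filter order, step size, and the toy "unknown path" are illustrative, and the paper's delayless subband structure with FFT/FFT-2 weight stacking is not reproduced here:

```python
import numpy as np

def nlms(x, d, order=8, mu=0.5, eps=1e-8):
    """Normalized LMS: adapt FIR weights w so that w @ [x[n], x[n-1], ...]
    tracks the desired signal d[n]."""
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]    # newest sample first
        e = d[n] - w @ u                    # a-priori error
        w += mu * e * u / (eps + u @ u)     # step normalized by window energy
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
h = np.array([0.6, -0.3, 0.1])              # toy "unknown path" to identify
d = np.convolve(x, h)[:len(x)]
w = nlms(x, d, order=3)
print(np.round(w, 3))
```

In the subband variant, one such update runs per subband at a decimated rate, and the subband weights are then stacked (via FFT or FFT-2) into the single fullband canceling filter.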
Tunable electro-optic filter stack
Fontecchio, Adam K.; Shriyan, Sameet K.; Bellingham, Alyssa
2017-09-05
A holographic polymer dispersed liquid crystal (HPDLC) tunable filter exhibits switching times of no more than 20 microseconds. The HPDLC tunable filter can be utilized in a variety of applications. An HPDLC tunable filter stack can be utilized in a hyperspectral imaging system capable of spectrally multiplexing hyperspectral imaging data acquired while the hyperspectral imaging system is airborne. HPDLC tunable filter stacks can be utilized in high speed switchable optical shielding systems, for example as a coating for a visor or an aircraft canopy. These HPDLC tunable filter stacks can be fabricated using a spin coating apparatus and associated fabrication methods.
Method for monitoring stack gases for uranium activity
Beverly, C.R.; Ernstberger, E.G.
1985-07-03
A method for monitoring the stack gases of a purge cascade of gaseous diffusion plant for uranium activity. A sample stream is taken from the stack gases and contacted with a volume of moisture-laden air for converting trace levels of uranium hexafluoride, if any, in the stack gases into particulate uranyl fluoride. A continuous strip of filter paper from a supply roll is passed through this sampling stream to intercept and gather any uranyl fluoride in the sampling stream. This filter paper is then passed by an alpha scintillation counting device where any radioactivity on the filter paper is sensed so as to provide a continuous monitoring of the gas stream for activity indicative of the uranium content in the stack gases. 1 fig.
Method for monitoring stack gases for uranium activity
Beverly, Claude R.; Ernstberger, Harold G.
1988-01-01
A method for monitoring the stack gases of a purge cascade of a gaseous diffusion plant for uranium activity. A sample stream is taken from the stack gases and contacted with a volume of moisture-laden air for converting trace levels of uranium hexafluoride, if any, in the stack gases into particulate uranyl fluoride. A continuous strip of filter paper from a supply roll is passed through this sampling stream to intercept and gather any uranyl fluoride in the sampling stream. This filter paper is then passed by an alpha scintillation counting device where any radioactivity on the filter paper is sensed so as to provide a continuous monitoring of the gas stream for activity indicative of the uranium content in the stack gases.
Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.
Liu, Min; Wang, Xueping; Zhang, Hongzhong
2018-03-01
In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We propose a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine relevant information of multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. Besides, multi-focal images within a stack are fused along 3 orthogonal directions, and multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects - we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential in building an automated nematode taxonomy system for nematologists and is effective in classifying multi-focal image stacks.
Thermally controlled femtosecond pulse shaping using metasurface based optical filters
NASA Astrophysics Data System (ADS)
Rahimi, Eesa; Şendur, Kürşat
2018-02-01
Shaping the temporal distribution of ultrashort pulses and compensating pulse deformations caused by phase shifts during transmission and amplification are of interest in various optical applications. To address these problems, in this study we demonstrate an ultra-thin reconfigurable localized surface plasmon (LSP) band-stop optical filter driven by the insulator-metal phase transition of vanadium dioxide. A Joule heating mechanism is proposed to control the thermal phase transition of the material. The resulting permittivity variation of vanadium dioxide tailors the spectral response of the pulse transmitted through the stack. Depending on where the pulse's spectrum lies with respect to the resonance of the band-stop filter, the thin film stack can dynamically compress or expand the output pulse span by up to 20% or shift its phase by up to 360°. Multi-stacked filters have shown the ability to dynamically compensate input carrier frequency shifts and pulse span variations, in addition to offering higher span expansion rates.
Improving ECG Classification Accuracy Using an Ensemble of Neural Network Modules
Javadi, Mehrdad; Ebrahimpour, Reza; Sajedin, Atena; Faridi, Soheil; Zakernejad, Shokoufeh
2011-01-01
This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner to obtain knowledge about the input space and, as a result, perform better on the same task. Experimental results support our claim that this additional knowledge about the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization and Stacked Generalization. PMID:22046232
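The modification is essentially a one-line change to standard stacking: concatenate the input pattern with the base classifiers' outputs before training the combiner. A sketch (the dataset and classifier choices are placeholders, not the paper's ECG setup):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

bases = [GaussianNB(), KNeighborsClassifier()]

# Out-of-fold base predictions on the training set avoid label leakage.
Ztr = np.hstack([cross_val_predict(b, Xtr, ytr, cv=5, method='predict_proba')
                 for b in bases])
Zte = np.hstack([b.fit(Xtr, ytr).predict_proba(Xte) for b in bases])

# Modified Stacked Generalization: the combiner sees the input pattern
# concatenated with the base classifiers' outputs.
combiner = LogisticRegression(max_iter=2000)
combiner.fit(np.hstack([Xtr, Ztr]), ytr)
print(round(combiner.score(np.hstack([Xte, Zte]), yte), 3))
```

Conventional stacking would fit the combiner on `Ztr` alone; the concatenation is the paper's claimed improvement.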
Kim, Jeremie S; Senol Cali, Damla; Xin, Hongyi; Lee, Donghyuk; Ghose, Saugata; Alser, Mohammed; Hassan, Hasan; Ergin, Oguz; Alkan, Can; Mutlu, Onur
2018-05-09
Seed location filtering is critical in DNA read mapping, a process where billions of DNA fragments (reads) sampled from a donor are mapped onto a reference genome to identify genomic variants of the donor. State-of-the-art read mappers 1) quickly generate possible mapping locations for seeds (i.e., smaller segments) within each read, 2) extract reference sequences at each of the mapping locations, and 3) check similarity between each read and its associated reference sequences with a computationally-expensive algorithm (i.e., sequence alignment) to determine the origin of the read. A seed location filter comes into play before alignment, discarding seed locations that alignment would deem a poor match. The ideal seed location filter would discard all poor match locations prior to alignment such that there is no wasted computation on unnecessary alignments. We propose a novel seed location filtering algorithm, GRIM-Filter, optimized to exploit 3D-stacked memory systems that integrate computation within a logic layer stacked under memory layers, to perform processing-in-memory (PIM). GRIM-Filter quickly filters seed locations by 1) introducing a new representation of coarse-grained segments of the reference genome, and 2) using massively-parallel in-memory operations to identify read presence within each coarse-grained segment. Our evaluations show that for a sequence alignment error tolerance of 0.05, GRIM-Filter 1) reduces the false negative rate of filtering by 5.59x-6.41x, and 2) provides an end-to-end read mapper speedup of 1.81x-3.65x, compared to a state-of-the-art read mapper employing the best previous seed location filtering algorithm. GRIM-Filter exploits 3D-stacked memory, which enables the efficient use of processing-in-memory, to overcome the memory bandwidth bottleneck in seed location filtering. We show that GRIM-Filter significantly improves the performance of a state-of-the-art read mapper. 
GRIM-Filter is a universal seed location filter that can be applied to any read mapper. We hope that our results provide inspiration for new works to design other bioinformatics algorithms that take advantage of emerging technologies and new processing paradigms, such as processing-in-memory using 3D-stacked memory devices.
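The core filtering idea, a per-bin k-mer presence test over coarse-grained reference segments, can be sketched in software; the paper's in-memory bitvectors and 3D-stacked hardware are replaced here by ordinary Python sets, and all sizes and thresholds are illustrative:

```python
import random

def build_bin_index(reference, bin_size=64, k=4):
    """One k-mer presence set per coarse-grained reference bin (the paper
    keeps these as per-bin bitvectors in 3D-stacked memory)."""
    bins = []
    for start in range(0, len(reference), bin_size):
        seg = reference[start:start + bin_size + k - 1]   # overlap bins by k-1
        bins.append({seg[i:i + k] for i in range(len(seg) - k + 1)})
    return bins

def grim_style_filter(read, location, bins, bin_size=64, k=4, threshold=0.8):
    """Keep a candidate seed location only if enough of the read's k-mers
    are present in the bin covering that location."""
    kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
    present = bins[location // bin_size]
    hits = sum(km in present for km in kmers)
    return hits / len(kmers) >= threshold

random.seed(0)
ref = ''.join(random.choice('ACGT') for _ in range(1024))
read = ref[200:232]                         # a true mapping location
bins = build_bin_index(ref)
print(grim_style_filter(read, 200, bins))   # true location passes
print(grim_style_filter(read, 800, bins))   # unrelated location likely fails
```

Locations that survive the test go on to full sequence alignment; everything else is discarded without any alignment work.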
Boosting Contextual Information for Deep Neural Network Based Voice Activity Detection
2015-02-01
multi-resolution stacking (MRS), which is a stack of ensemble classifiers. Each classifier in a building block inputs the concatenation of the predictions ...a base classifier in MRS, named boosted deep neural network (bDNN). bDNN first generates multiple base predictions from different contexts of a single...frame by only one DNN and then aggregates the base predictions for a better prediction of the frame, and it is different from computationally
NASA Astrophysics Data System (ADS)
Wan, Xiaoqing; Zhao, Chunhui; Gao, Bing
2017-11-01
The integration of an edge-preserving filtering technique in the classification of a hyperspectral image (HSI) has been proven effective in enhancing classification performance. This paper proposes an ensemble strategy for HSI classification using an edge-preserving filter along with a deep learning model and edge detection. First, an adaptive guided filter is applied to the original HSI to reduce the noise in degraded images and to extract powerful spectral-spatial features. Second, the extracted features are fed as input to a stacked sparse autoencoder to adaptively exploit more invariant and deep feature representations; then, a random forest classifier is applied to fine-tune the entire pretrained network and determine the classification output. Third, a Prewitt compass operator is further performed on the HSI to extract the edges of the first principal component after dimension reduction. Moreover, the regional growth rule is applied to the resulting edge logical image to determine the local region for each unlabeled pixel. Finally, the categories of the corresponding neighborhood samples are determined in the original classification map; then, the majority voting mechanism is implemented to generate the final output. Extensive experiments demonstrate that the proposed method achieves competitive performance compared with several traditional approaches.
Designer Infrared Filters using Stacked Metal Lattices
NASA Technical Reports Server (NTRS)
Smith, Howard A.; Rebbert, M.; Sternberg, O.
2003-01-01
We have designed and fabricated infrared filters for use at wavelengths greater than or equal to 15 microns. Unlike conventional dielectric filters used at the short wavelengths, ours are made from stacked metal grids, spaced at a very small fraction of the performance wavelengths. The individual lattice layers are gold, the spacers are polyimide, and they are assembled using integrated circuit processing techniques; they resemble some metallic photonic band-gap structures. We simulate the filter performance accurately, including the coupling of the propagating, near-field electromagnetic modes, using computer aided design codes. We find no anomalous absorption. The geometrical parameters of the grids are easily altered in practice, allowing for the production of tuned filters with predictable useful transmission characteristics. Although developed for astronomical instrumentation, the filters are broadly applicable in systems across infrared and terahertz bands.
An exact algorithm for optimal MAE stack filter design.
Dellamonica, Domingos; Silva, Paulo J S; Humes, Carlos; Hirata, Nina S T; Barrera, Junior
2007-02-01
We propose a new algorithm for optimal MAE stack filter design. It is based on three main ingredients. First, we show that the dual of the integer programming formulation of the filter design problem is a minimum cost network flow problem. Next, we present a decomposition principle that can be used to break this dual problem into smaller subproblems. Finally, we propose a specialization of the network Simplex algorithm based on column generation to solve these smaller subproblems. Using our method, we were able to efficiently solve instances of the filter problem with window size up to 25 pixels. To the best of our knowledge, this is the largest dimension for which this problem was ever solved exactly.
Code of Federal Regulations, 2012 CFR
2012-07-01
... analyses Acid filters Baghouse bags Clothing (e.g., coveralls, aprons, shoes, hats, gloves) Sweepings Air filter bags and cartridges Respiratory cartridge filters Shop abrasives Stacking boards Waste shipping... pallets Water treatment sludges, filter cakes, residues, and solids Emission control dusts, sludges...
Code of Federal Regulations, 2013 CFR
2013-07-01
... analyses Acid filters Baghouse bags Clothing (e.g., coveralls, aprons, shoes, hats, gloves) Sweepings Air filter bags and cartridges Respiratory cartridge filters Shop abrasives Stacking boards Waste shipping... pallets Water treatment sludges, filter cakes, residues, and solids Emission control dusts, sludges...
Code of Federal Regulations, 2014 CFR
2014-07-01
... analyses Acid filters Baghouse bags Clothing (e.g., coveralls, aprons, shoes, hats, gloves) Sweepings Air filter bags and cartridges Respiratory cartridge filters Shop abrasives Stacking boards Waste shipping... pallets Water treatment sludges, filter cakes, residues, and solids Emission control dusts, sludges...
Code of Federal Regulations, 2011 CFR
2011-07-01
... analyses Acid filters Baghouse bags Clothing (e.g., coveralls, aprons, shoes, hats, gloves) Sweepings Air filter bags and cartridges Respiratory cartridge filters Shop abrasives Stacking boards Waste shipping... pallets Water treatment sludges, filter cakes, residues, and solids Emission control dusts, sludges...
Code of Federal Regulations, 2010 CFR
2010-07-01
... analyses Acid filters Baghouse bags Clothing (e.g., coveralls, aprons, shoes, hats, gloves) Sweepings Air filter bags and cartridges Respiratory cartridge filters Shop abrasives Stacking boards Waste shipping... pallets Water treatment sludges, filter cakes, residues, and solids Emission control dusts, sludges...
A tunable electrochromic fabry-perot filter for adaptive optics applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaich, Jonathan David; Kammler, Daniel R.; Ambrosini, Andrea
2006-10-01
The potential for electrochromic (EC) materials to be incorporated into a Fabry-Perot (FP) filter to allow modest amounts of tuning was evaluated by both experimental methods and modeling. A combination of chemical vapor deposition (CVD), physical vapor deposition (PVD), and electrochemical methods was used to produce an EC-FP film stack consisting of an EC WO3/Ta2O5/NiOxHy film stack (with indium-tin-oxide electrodes) sandwiched between two Si3N4/SiO2 dielectric reflector stacks. A process to produce a NiOxHy charge storage layer that freed the EC stack from dependence on atmospheric humidity and allowed construction of this complex EC-FP stack was developed. The refractive index (n) and extinction coefficient (k) for each layer in the EC-FP film stack were measured between 300 and 1700 nm. A prototype EC-FP filter was produced that had a transmission at 500 nm of 36% and a FWHM of 10 nm. A general modeling approach was developed that takes into account the desired pass band location, pass band width, required transmission, and EC optical constants in order to estimate the maximum tuning from an EC-FP filter. Modeling shows that minor thickness changes in the prototype stack developed in this project should yield a filter with a transmission at 600 nm of 33% and a FWHM of 9.6 nm, which could be tuned to 598 nm with a FWHM of 12.1 nm and a transmission of 16%. Additional modeling shows that if the EC WO3 absorption centers were optimized, then a shift from 600 nm to 598 nm could be made with a FWHM of 11.3 nm and a transmission of 20%. If (at 600 nm) the FWHM is decreased to 1 nm and transmission maintained at a reasonable level (e.g. 30%), only fractions of a nm of tuning would be possible with the film stack considered in this study. These tradeoffs may improve at other wavelengths or with EC materials different from those considered here.
Finally, based on our limited investigation and material set, the severe absorption associated with the refractive index change suggests that incorporating EC materials into phase-correcting spatial light modulators (SLMs) would allow only negligible phase correction before transmission losses became too severe. However, we emphasize that other EC materials may allow sufficient phase correction with limited absorption, which could make this approach attractive.
Properties of multilayer filters
NASA Technical Reports Server (NTRS)
Baumeister, P. W.
1973-01-01
New methods were investigated of using optical interference coatings to produce bandpass filters for the spectral region 110 nm to 200 nm. The types of filter are: triple cavity metal dielectric filters; all dielectric reflection filters; and all dielectric Fabry Perot type filters. The latter two types use thorium fluoride and either cryolite films or magnesium fluoride films in the stacks. The optical properties of the thorium fluoride were also measured.
Fabrication of artificially stacked ultrathin ZnS/MgF2 multilayer dielectric optical filters.
Kedawat, Garima; Srivastava, Subodh; Jain, Vipin Kumar; Kumar, Pawan; Kataria, Vanjula; Agrawal, Yogyata; Gupta, Bipin Kumar; Vijay, Yogesh K
2013-06-12
We report a design and fabrication strategy for creating artificially stacked multilayered optical filters using a thermal evaporation technique. We selectively chose a zinc sulphide (ZnS) lattice for the high refractive index (n = 2.35) layer and a magnesium fluoride (MgF2) lattice as the low refractive index (n = 1.38) layer. Furthermore, the microstructures of the ZnS/MgF2 multilayer films are also investigated through TEM and HRTEM imaging. The fabricated filters consist of 7 and 13 alternating high- and low-refractive-index layers, which exhibit reflectances of 89.60% and 99%, respectively. The optical microcavity achieved an average transmittance of 85.13% within the visible range. The obtained results suggest that these filters could be an exceptional choice for next-generation antireflection coatings, high-reflection mirrors, and polarized interference filters.
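The trend in those reflectance figures, more alternating layers giving higher reflectance, can be checked with a standard normal-incidence transfer-matrix calculation. The sketch below assumes lossless, dispersion-free indices (2.35 for ZnS, 1.38 for MgF2), a quarter-wave design at an assumed 550 nm center wavelength, and a glass substrate (n = 1.52); none of these design details are stated in the abstract, so the computed values will not exactly match the measured 89.60% and 99%:

```python
import numpy as np

def stack_reflectance(n_layers, d_layers, wavelength, n_in=1.0, n_sub=1.52):
    """Normal-incidence reflectance of a dielectric stack via the
    characteristic-matrix (transfer-matrix) method."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        phi = 2 * np.pi * n * d / wavelength        # phase thickness of the layer
        M = M @ np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                          [1j * n * np.sin(phi), np.cos(phi)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

lam0 = 550.0   # assumed design wavelength (not given in the abstract)

def mirror(pairs):
    """(HL)^pairs H quarter-wave stack: ZnS high / MgF2 low index."""
    ns, ds = [], []
    for _ in range(pairs):
        ns += [2.35, 1.38]
        ds += [lam0 / (4 * 2.35), lam0 / (4 * 1.38)]
    ns.append(2.35)                   # finish on a high-index layer
    ds.append(lam0 / (4 * 2.35))
    return ns, ds

for pairs in (3, 6):                  # 7- and 13-layer stacks
    ns, ds = mirror(pairs)
    print(2 * pairs + 1, 'layers: R =', round(stack_reflectance(ns, ds, lam0), 3))
```

Each added high/low pair multiplies the effective admittance mismatch, so reflectance climbs rapidly toward unity with layer count, which is the behavior the fabricated 7- and 13-layer filters exhibit.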
Filters for Submillimeter Electromagnetic Waves
NASA Technical Reports Server (NTRS)
Berdahl, C. M.
1986-01-01
New manufacturing process produces filters that are strong yet have the small, precise dimensions and smooth surface finish essential for dichroic filtering at submillimeter wavelengths. Many filters are made at the same time, each essentially a wafer containing a fine metal grid. Stacked square wires are plated, fused, and etched to form arrays of holes. Grid of nickel and tin is held in a brass ring. Wall thickness, thickness of filter (hole depth), and lateral hole dimensions all depend upon operating frequency and filter characteristics.
NASA Astrophysics Data System (ADS)
Wan, Xiaoqing; Zhao, Chunhui; Wang, Yanchun; Liu, Wu
2017-11-01
This paper proposes a novel classification paradigm for hyperspectral image (HSI) using feature-level fusion and deep learning-based methodologies. Operation is carried out in three main steps. First, during a pre-processing stage, wave atoms are introduced into a bilateral filter to smooth the HSI; this strategy can effectively attenuate noise and restore texture information. Meanwhile, high quality spectral-spatial features can be extracted from the HSI by taking geometric closeness and photometric similarity among pixels into consideration simultaneously. Second, higher order statistics techniques are introduced into hyperspectral data classification for the first time to characterize the phase correlations of spectral curves. Third, multifractal spectrum features are extracted to characterize the singularities and self-similarities of spectra shapes. To this end, a feature-level fusion is applied to the extracted spectral-spatial features along with the higher order statistics and multifractal spectrum features. Finally, a stacked sparse autoencoder is utilized to learn more abstract and invariant high-level features from the multiple feature sets, and then a random forest classifier is employed to perform supervised fine-tuning and classification. Experimental results on two real hyperspectral data sets demonstrate that the proposed method outperforms some traditional alternatives.
40 CFR 52.2276 - Control strategy and regulations: Particulate matter.
Code of Federal Regulations, 2010 CFR
2010-07-01
... its limestone quarry facilities near New Braunfels, Comal County, Texas shall install fabric filters... of the fabric filters, Parker Brothers and Co., Inc., shall not emit particulate matter in excess of 0.03 grains per standard cubic foot from the exhaust stack of the fabric filter on its primary...
40 CFR 52.2276 - Control strategy and regulations: Particulate matter.
Code of Federal Regulations, 2013 CFR
2013-07-01
... its limestone quarry facilities near New Braunfels, Comal County, Texas shall install fabric filters... of the fabric filters, Parker Brothers and Co., Inc., shall not emit particulate matter in excess of 0.03 grains per standard cubic foot from the exhaust stack of the fabric filter on its primary...
40 CFR 52.2276 - Control strategy and regulations: Particulate matter.
Code of Federal Regulations, 2012 CFR
2012-07-01
... its limestone quarry facilities near New Braunfels, Comal County, Texas shall install fabric filters... of the fabric filters, Parker Brothers and Co., Inc., shall not emit particulate matter in excess of 0.03 grains per standard cubic foot from the exhaust stack of the fabric filter on its primary...
Liu, Jinping; Tang, Zhaohui; Xu, Pengfei; Liu, Wenzhong; Zhang, Jin; Zhu, Jianyong
2016-06-29
The topic of online product quality inspection (OPQI) with smart visual sensors is attracting increasing interest in both the academic and industrial communities on account of the natural connection between the visual appearance of products and their underlying qualities. Visual images captured from granulated products (GPs), e.g., cereal products and fabric textiles, are comprised of a large number of independent particles or stochastically stacking locally homogeneous fragments, whose analysis and understanding remains challenging. A method of image statistical modeling-based OPQI for GP quality grading and monitoring by a Weibull distribution (WD) model with a semi-supervised learning classifier is presented. WD model parameters (WD-MPs) of the spatial structures of GP images, obtained with omnidirectional Gaussian derivative filtering (OGDF) and demonstrated theoretically to obey a specific WD model of integral form, were extracted as the visual features. Then, a co-training-style semi-supervised classifier algorithm, named COSC-Boosting, was exploited for semi-supervised GP quality grading, integrating two independent classifiers with complementary natures in the face of scarce labeled samples. The effectiveness of the proposed OPQI method was verified and compared in the field of automated rice quality grading with commonly-used methods; it showed superior performance, which lays a foundation for the quality control of GPs on assembly lines.
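The co-training-style step can be sketched generically: two complementary classifiers trained on scarce labels take turns pseudo-labeling the unlabeled pool. This is not the paper's COSC-Boosting algorithm; the classifiers, dataset, confidence rule, and round counts below are all placeholders:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=1)
rng = np.random.default_rng(1)
labeled = rng.choice(len(y), size=40, replace=False)   # scarce labeled samples
mask = np.zeros(len(y), dtype=bool)
mask[labeled] = True

c1, c2 = GaussianNB(), KNeighborsClassifier()          # complementary learners
Xl, yl = X[mask], y[mask]
pool = X[~mask]

for _ in range(5):
    c1.fit(Xl, yl)
    c2.fit(Xl, yl)
    p1, p2 = c1.predict_proba(pool), c2.predict_proba(pool)
    conf = np.maximum(p1.max(axis=1), p2.max(axis=1))
    take = np.argsort(conf)[-20:]                      # most confident samples
    # whichever classifier is more confident supplies the pseudo-label
    pseudo = np.where(p1.max(axis=1)[take] >= p2.max(axis=1)[take],
                      p1.argmax(axis=1)[take], p2.argmax(axis=1)[take])
    Xl = np.vstack([Xl, pool[take]])
    yl = np.concatenate([yl, pseudo])
    pool = np.delete(pool, take, axis=0)

print(round(c1.score(X, y), 3), round(c2.score(X, y), 3))
```

The complementary nature of the two learners matters: each one's confident pseudo-labels correct regions where the other is weak, which is the property COSC-Boosting exploits.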
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elder, J.C.; Littlefield, L.G.; Tillery, M.I.
1978-06-01
A preliminary design of a prototype particulate stack sampler (PPSS) has been prepared, and development of several components is under way. The objective of this Environmental Protection Agency (EPA)-sponsored program is to develop and demonstrate a prototype sampler with capabilities similar to EPA Method 5 apparatus but without some of the more troublesome aspects. Features of the new design include higher sampling flow; display (on demand) of all variables and periodic calculation of percent isokinetic, sample volume, and stack velocity; automatic control of probe and filter heaters; stainless steel surfaces in contact with the sample stream; single-point particle size separation in the probe nozzle; null-probe capability in the nozzle; and lower weight in the components of the sampling train. Design considerations will limit use of the PPSS to stack gas temperatures under approximately 300 °C, which will exclude sampling some high-temperature stacks such as incinerators. Although the need for filter weighing has not been eliminated in the new design, introduction of a variable-slit virtual impactor nozzle may eliminate the need for mass analysis of particles washed from the probe. Component development has shown some promise for continuous humidity measurement by an in-line wet-bulb, dry-bulb psychrometer.
NASA Astrophysics Data System (ADS)
Chang, Chun-Chieh; Huang, Li; Nogan, John; Chen, Hou-Tong
2018-05-01
We experimentally demonstrate high-performance narrowband terahertz (THz) bandpass filters through cascading multiple bilayer metasurface antireflection structures. Each bilayer metasurface, consisting of a square array of silicon pillars with a self-aligned top gold resonator-array and a complementary bottom gold slot-array, enables near-zero reflection and simultaneously close-to-unity single-band transmission at designed operational frequencies in the THz spectral region. The THz bandpass filters based on stacked bilayer metasurfaces allow a fairly narrow, high-transmission passband, and a fast roll-off to an extremely clean background outside the passband, thereby providing superior bandpass performance. The demonstrated scheme of narrowband THz bandpass filtering is of great importance for a variety of applications where spectrally clean, high THz transmission over a narrow bandwidth is desired, such as THz spectroscopy and imaging, molecular detection and monitoring, security screening, and THz wireless communications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Chun-Chieh; Huang, Li; Nogan, John
2018-02-01
We experimentally demonstrate high-performance narrowband terahertz (THz) bandpass filters through cascading multiple bilayer metasurface antireflection structures. Each bilayer metasurface, consisting of a square array of silicon pillars with a self-aligned top gold resonator-array and a complementary bottom gold slot-array, enables near-zero reflection and simultaneously close-to-unity single-band transmission at designed operational frequencies in the THz spectral region. The THz bandpass filters based on stacked bilayer metasurfaces allow a fairly narrow, high-transmission passband, and a fast roll-off to an extremely clean background outside the passband, thereby providing superior bandpass performance. The demonstrated scheme of narrowband THz bandpass filtering is of great importance for a variety of applications where spectrally clean, high THz transmission over a narrow bandwidth is desired, such as THz spectroscopy and imaging, molecular detection and monitoring, security screening, and THz wireless communications.
Recognition of emotions using multimodal physiological signals and an ensemble deep learning model.
Yin, Zhong; Zhao, Mengyuan; Wang, Yongxiong; Yang, Jingdong; Zhang, Jianhua
2017-03-01
Using deep-learning methodologies to analyze multimodal physiological signals is becoming increasingly attractive for recognizing human emotions. However, conventional deep emotion classifiers suffer from two drawbacks: determining the model structure requires expertise that is often unavailable, and multimodal feature abstractions are combined in an oversimplified way. In this study, a multiple-fusion-layer based ensemble classifier of stacked autoencoders (MESAE) is proposed for recognizing emotions, in which the deep structure is identified based on a physiological-data-driven approach. Each SAE consists of three hidden layers that filter unwanted noise from the physiological features and derive stable feature representations. An additional deep model is used to form the SAE ensemble. The physiological features are split into several subsets according to the feature extraction approach, with each subset separately encoded by an SAE. The derived SAE abstractions are combined according to physiological modality to create six sets of encodings, which are then fed to a three-layer, adjacent-graph-based network for feature fusion. The fused features are used to recognize binary arousal or valence states. The DEAP multimodal database was employed to validate the performance of the MESAE. Compared with the best existing emotion classifier, the mean classification rate and F-score improve by 5.26%. The superiority of the MESAE over state-of-the-art shallow and deep emotion classifiers has been demonstrated under different sizes of the available physiological instances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
The Stack of Yang-Mills Fields on Lorentzian Manifolds
NASA Astrophysics Data System (ADS)
Benini, Marco; Schenkel, Alexander; Schreiber, Urs
2018-03-01
We provide an abstract definition and an explicit construction of the stack of non-Abelian Yang-Mills fields on globally hyperbolic Lorentzian manifolds. We also formulate a stacky version of the Yang-Mills Cauchy problem and show that its well-posedness is equivalent to a whole family of parametrized PDE problems. Our work is based on the homotopy theoretical approach to stacks proposed in Hollander (Isr. J. Math. 163:93-124, 2008), which we shall extend by further constructions that are relevant for our purposes. In particular, we will clarify the concretification of mapping stacks to classifying stacks such as BG_con.
NASA Astrophysics Data System (ADS)
Pu, Yunti; Ma, Ping; Lv, Liang; Zhang, Mingxiao; Lu, Zhongwen; Qiao, Zhao; Qiu, Fuming
2018-05-01
Ta2O5-SiO2 quasi-rugate filters, designed by reasonable optimization of a rugate notch filter design, were prepared by ion-beam sputtering, and their optical properties and laser-induced damage thresholds were studied. Compared with the spectrum of the HL stacks, the spectra of the quasi-rugate filters have weaker second harmonic peaks and narrower stopbands. Owing to the effect of functionally graded layers (FGLs), the 1-on-1 and S-on-1 laser-induced damage thresholds (LIDTs) of the quasi-rugate filters are about 22% and 50% higher, respectively, than those of the HL stacks. Analysis of the damage morphologies shows that laser-induced damage of the films under nanosecond multi-pulse irradiation is dominated by a combination of thermal shock stress and thermomechanical instability due to nodules. Compared with catastrophic damage, the damage sites of the quasi-rugate filters develop in a moderate way, and the growth of defect-induced damage sites is effectively restrained by the FGL structure. Generally, FGLs reduce thermal stress through the similar thermal-expansion coefficients of neighboring layers and mitigate problems such as instability and cracking raised by the interface discontinuity of nodular boundaries.
Finding Kuiper Belt Objects Below the Detection Limit
NASA Astrophysics Data System (ADS)
Whidden, Peter; Kalmbach, Bryce; Bektesevic, Dino; Connolly, Andrew; Jones, Lynne; Smotherman, Hayden; Becker, Andrew
2018-01-01
We demonstrate a novel approach for uncovering the signatures of moving objects (e.g., Kuiper Belt Objects) below the detection thresholds of single astronomical images. To do so, we employ a matched filter moving at the specific rates of proposed orbits through a time-domain dataset. This is analogous to the better-known "shift-and-stack" method; however, it uses neither direct shifting nor stacking of the image pixels. Instead of resampling the raw pixels to create an image stack, we integrate the object detection probabilities across multiple single-epoch images to accrue support for a proposed orbit. The filtering kernel provides a measure of the probability that an object is present along a given orbit, and enables the user to make principled decisions about when the search has been successful and when it may be terminated. The results we present here utilize GPUs to speed up the search by two orders of magnitude over CPU implementations.
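The accumulation step described in this abstract can be illustrated with a short sketch. Everything below (array sizes, the linear-trajectory model, the `trajectory_score` function) is an invented toy, not the authors' GPU code; it only shows how summing per-epoch likelihood values along a candidate orbit lifts a below-threshold source above the noise.

```python
import numpy as np

def trajectory_score(likelihood_stack, x0, y0, vx, vy, times):
    """Sum matched-filter (likelihood) images at the positions predicted
    by a candidate orbit with start (x0, y0) and pixel rates (vx, vy)."""
    score = 0.0
    for img, t in zip(likelihood_stack, times):
        x, y = int(round(x0 + vx * t)), int(round(y0 + vy * t))
        if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
            score += img[y, x]
    return score

# Toy data: a faint source (well below a single-image threshold relative to
# real noise) moving at +1 px/epoch through five frames of weak noise.
rng = np.random.default_rng(0)
times = np.arange(5)
frames = [rng.normal(0.0, 0.1, (32, 32)) for _ in times]
for t in times:
    frames[t][10, 5 + t] += 2.0

good = trajectory_score(frames, 5, 10, 1.0, 0.0, times)   # true motion
bad = trajectory_score(frames, 5, 10, -1.0, 0.0, times)   # wrong motion
```

Support accumulates only for the trajectory that actually tracks the object, which is the basis for principled detection decisions.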
A mass filter based on an accelerating traveling wave.
Wiedenbeck, Michael; Kasemset, Bodin; Kasper, Manfred
2008-01-01
We describe a novel mass filtering concept based on the acceleration of a pulsed ion beam through a stack of electrostatic plates. A precisely controlled traveling wave generated within such an ion guide will induce a mass-selective ion acceleration, with mass separation ultimately accomplished via a simple energy-filtering system. Crucial for successful filtering is that the velocity with which the traveling wave passes through the ion guide must be dynamically controlled in order to accommodate the acceleration of the target ion species. Mass selection is determined by the velocity and acceleration with which the wave traverses the ion guide, whereby the target species will acquire a higher kinetic energy than all other lighter as well as heavier species. Finite element simulations of this design demonstrate that for small masses a mass resolution M/ΔM ≈ 1000 can be achieved within an electrode stack containing as few as 20 plates. Some of the possible advantages and drawbacks that distinguish this concept from established mass spectrometric technologies are discussed.
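A minimal 1-D toy of the mass-selection principle, under simplifying assumptions of our own rather than the paper's model: a constant force acts on an ion only while it stays within a half-width W of the moving wave centre, the wave velocity is ramped to match the target mass, and an ion that falls out of the bucket once is not recaptured.

```python
import numpy as np

qE, W, dt, T = 1.0, 1.0, 1e-3, 10.0     # illustrative units throughout
a_wave = qE / 1.0                        # wave ramp matched to target mass m = 1

def final_kinetic_energy(m):
    """Euler-integrate one ion riding (or losing) the traveling wave."""
    x = v = x_wave = v_wave = 0.0
    captured = True
    for _ in range(int(T / dt)):
        if captured and abs(x - x_wave) <= W:
            v += (qE / m) * dt           # accelerated while inside the bucket
        elif captured:
            captured = False             # fell out of the bucket: coasts from now on
        x += v * dt
        v_wave += a_wave * dt
        x_wave += v_wave * dt
    return 0.5 * m * v * v

ke_light, ke_target, ke_heavy = (final_kinetic_energy(m) for m in (0.5, 1.0, 2.0))
```

The synchronized (target) mass stays in the bucket for the full ramp and ends with the highest kinetic energy, so a downstream energy filter can select it; lighter ions outrun the wave and heavier ions fall behind.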
Liu, Jinping; Tang, Zhaohui; Xu, Pengfei; Liu, Wenzhong; Zhang, Jin; Zhu, Jianyong
2016-01-01
The topic of online product quality inspection (OPQI) with smart visual sensors is attracting increasing interest in both the academic and industrial communities on account of the natural connection between the visual appearance of products and their underlying qualities. Visual images captured from granulated products (GPs), e.g., cereal products or fabric textiles, comprise a large number of independent particles or stochastically stacked, locally homogeneous fragments, whose analysis and understanding remain challenging. A method of image-statistical-modeling-based OPQI for GP quality grading and monitoring, using a Weibull distribution (WD) model with a semi-supervised learning classifier, is presented. WD-model parameters (WD-MPs) of the GP images' spatial structures, obtained with omnidirectional Gaussian derivative filtering (OGDF) and demonstrated theoretically to obey a specific WD model of integral form, were extracted as the visual features. Then, a co-training-style semi-supervised classifier algorithm, named COSC-Boosting, was exploited for semi-supervised GP quality grading by integrating two independent classifiers of complementary nature in the face of scarce labeled samples. The effectiveness of the proposed OPQI method was verified in the field of automated rice quality grading, where it was compared with commonly used methods and showed superior performance, laying a foundation for the quality control of GPs on assembly lines. PMID:27367703
Time-frequency peak filtering for random noise attenuation of magnetic resonance sounding signal
NASA Astrophysics Data System (ADS)
Lin, Tingting; Zhang, Yang; Yi, Xiaofeng; Fan, Tiehu; Wan, Ling
2018-05-01
When measuring in a geomagnetic field, the method of magnetic resonance sounding (MRS) is often limited because of the notably low signal-to-noise ratio (SNR). Most current studies focus on discarding spiky noise and power-line harmonic noise cancellation. However, the effects of random noise should not be underestimated. The common method for random noise attenuation is stacking, but collecting multiple recordings merely to suppress random noise is time-consuming. Moreover, stacking is insufficient to suppress high-level random noise. Here, we propose the use of time-frequency peak filtering for random noise attenuation, which is performed after the traditional de-spiking and power-line harmonic removal method. By encoding the noisy signal with frequency modulation and estimating the instantaneous frequency using the peak of the time-frequency representation of the encoded signal, the desired MRS signal can be acquired from only one stack. The performance of the proposed method is tested on synthetic envelope signals and field data from different surveys. Good estimations of the signal parameters are obtained at different SNRs. Moreover, an attempt to use the proposed method to handle a single recording provides better results compared to 16 stacks. Our results suggest that the number of stacks can be appropriately reduced to shorten the measurement time and improve the measurement efficiency.
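The encode-then-peak-pick core of time-frequency peak filtering can be sketched as follows. The window length and modulation index below are illustrative choices of ours, not values from the study, and the windowed-FFT peak search stands in for whatever time-frequency representation the authors use.

```python
import numpy as np

def tfpf(noisy, mu=0.3, win=33):
    """Encode the noisy signal as a frequency-modulated analytic signal, then
    estimate its instantaneous frequency at each sample from the peak of a
    windowed spectrum; that frequency estimate is the filtered signal."""
    phase = 2.0 * np.pi * mu * np.cumsum(noisy)   # FM encoding
    z = np.exp(1j * phase)
    half = win // 2
    zp = np.pad(z, half, mode="edge")
    w = np.hanning(win)
    freqs = np.fft.fftfreq(win)                   # cycles per sample
    out = np.empty(len(noisy))
    for i in range(len(noisy)):
        spec = np.abs(np.fft.fft(zp[i:i + win] * w))
        out[i] = freqs[np.argmax(spec)] / mu      # peak frequency -> amplitude
    return out

# Toy MRS-like envelope: a slow oscillation buried in strong random noise.
rng = np.random.default_rng(1)
n = 400
clean = np.sin(2 * np.pi * np.arange(n) / 200.0)
noisy = clean + rng.normal(0.0, 0.5, n)
filtered = tfpf(noisy)
```

Because the spectral peak of the encoded signal is robust to additive noise, a single recording filtered this way can approach the quality otherwise obtained only by stacking many recordings.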
Ali, Safdar; Majid, Abdul
2015-04-01
The diagnosis of human breast cancer is an intricate process, and specific indicators may produce negative results. In order to avoid misleading results, an accurate and reliable diagnostic system for breast cancer is indispensable. Recently, several interesting machine-learning (ML) approaches have been proposed for the prediction of breast cancer. To this end, we developed a novel classifier-stacking based evolutionary ensemble system, "Can-Evo-Ens", for predicting amino acid sequences associated with breast cancer. In this paper, we first selected four diverse ML algorithms, Naïve Bayes, K-Nearest Neighbor, Support Vector Machines, and Random Forest, as base-level classifiers. These classifiers are trained individually in different feature spaces using physicochemical properties of amino acids. In order to exploit the decision spaces, the preliminary predictions of the base-level classifiers are stacked. Genetic programming (GP) is then employed to develop a meta-classifier that optimally combines the predictions of the base classifiers. The most suitable threshold value of the best-evolved predictor is computed using the Particle Swarm Optimization technique. Our experiments have demonstrated the robustness of the Can-Evo-Ens system on an independent validation dataset. The proposed system achieved the highest area under the ROC curve (AUC), 99.95%, for cancer prediction. The comparative results revealed that the proposed approach is better than individual ML approaches and the conventional ensemble approaches AdaBoostM1, Bagging, GentleBoost, and Random Subspace. It is expected that the proposed novel system will have a major impact on the fields of biomedicine, genomics, proteomics, bioinformatics, and drug development. Copyright © 2015 Elsevier Inc. All rights reserved.
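The stacked-generalization pipeline described above (base classifiers trained in different feature spaces, their predictions stacked as input to a meta-classifier) can be sketched in a numpy-only toy. Nearest-centroid bases and a gradient-descent logistic meta-classifier stand in for the paper's NB/KNN/SVM/RF bases and GP-evolved meta-classifier; all data are synthetic, and for brevity it trains and evaluates on the same toy set, which a real system would never do.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
y = np.repeat([0, 1], n // 2)
# Two "feature spaces" (views) of the same samples, with different separations.
view_a = rng.normal(0, 1, (n, 2)) + np.where(y[:, None] == 1, 2.0, -2.0)
view_b = rng.normal(0, 1, (n, 2)) + np.where(y[:, None] == 1, 1.5, -1.5)

def centroid_score(view):
    """Base-level output: signed pull toward the positive-class centroid."""
    c0, c1 = view[y == 0].mean(axis=0), view[y == 1].mean(axis=0)
    return np.linalg.norm(view - c0, axis=1) - np.linalg.norm(view - c1, axis=1)

# Level-1 data: stack the base-level predictions.
stacked = np.column_stack([centroid_score(view_a), centroid_score(view_b)])

# Meta-classifier: logistic regression fitted by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(stacked @ w + b)))
    g = p - y
    w -= 0.1 * stacked.T @ g / n
    b -= 0.1 * g.mean()

pred = (stacked @ w + b) > 0
accuracy = (pred == y).mean()
```

The meta-classifier learns how much to trust each base's score, which is the essential mechanism the evolved GP meta-classifier generalizes.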
Segmentation and analysis of mouse pituitary cells with graphic user interface (GUI)
NASA Astrophysics Data System (ADS)
González, Erika; Medina, Lucía.; Hautefeuille, Mathieu; Fiordelisio, Tatiana
2018-02-01
In this work we present a method to perform pituitary cell segmentation in image stacks acquired by fluorescence microscopy from pituitary slice preparations. Although many procedures have been developed for cell segmentation, they are generally based on edge detection and require high-resolution images. In the biological preparations we worked on, however, the cells are not well defined: experts identify them by their intracellular calcium activity, seen as fluorescence intensity changes in different regions over time. These intensity changes were associated with time series over regions and, because they exhibit a characteristic behavior, were used in a classification procedure to perform cell segmentation. Two logistic regression classifiers were implemented for the time-series classification task, using as features the area under the curve and skewness in the first classifier, and skewness and kurtosis in the second. Once both decision boundaries were found in two different feature spaces by training on 120 time series, they were tested over 12 image stacks through a Python graphical user interface (GUI), generating binary images where white pixels correspond to cells and black pixels to background. Results show that the area-skewness classifier reduces the time an expert dedicates to locating cells by up to 75% in some stacks, versus 92% for the kurtosis-skewness classifier, as evaluated on the number of regions the method found. Given these promising results, we expect this method to be improved by adding more relevant features to the classifier.
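Why area under the curve and skewness separate active cells from background can be checked on synthetic series. The toy below (transient shape, amplitudes, noise level all invented) compares a calcium-activity-like series, baseline noise plus decaying transients, against pure background noise on those two features.

```python
import numpy as np

def skewness(x):
    """Sample skewness: third central moment over variance^1.5."""
    d = x - x.mean()
    return (d ** 3).mean() / (d ** 2).mean() ** 1.5

rng = np.random.default_rng(3)
t = np.arange(300)
background = rng.normal(0.0, 1.0, t.size)          # no cell: noise only
active = rng.normal(0.0, 1.0, t.size)              # cell: noise + transients
for start in (50, 150, 250):                       # three fluorescence transients
    active[start:start + 30] += 5.0 * np.exp(-np.arange(30) / 8.0)

# Discrete area under the curve and skewness as level-one features.
auc_active, auc_bg = active.sum(), background.sum()
skew_active, skew_bg = skewness(active), skewness(background)
```

Positive-going transients inflate both the integrated signal and the right tail of the intensity distribution, so both features shift upward for active regions, which is what lets a simple logistic boundary separate the two classes.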
NASA Technical Reports Server (NTRS)
Lin, Qian; Allebach, Jan P.
1990-01-01
An adaptive vector linear minimum mean-squared error (LMMSE) filter for multichannel images with multiplicative noise is presented. It is shown theoretically that the mean-squared error in the filter output is reduced by making use of the correlation between image bands. The vector and conventional scalar LMMSE filters are applied to a three-band SIR-B SAR image, and their performance is compared. Based on a multiplicative noise model, the per-pel maximum likelihood classifier was derived. The authors extend this to the design of sequential and robust classifiers. These classifiers are also applied to the three-band SIR-B SAR image.
Stacked, Filtered Multi-Channel X-Ray Diode Array
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacNeil, Lawrence P.; Dutra, Eric C.; Raphaelian, Mark
2015-08-01
This system meets the need for a low-cost, robust X-ray diode array to use for experiments in hostile environments on multiple platforms, and for experiments utilizing forces that may destroy the diode(s). Since these uses require a small size with a minimal single line-of-sight, a parallel array often cannot be used. So a stacked, filtered multi-channel X-ray diode array, called the MiniXRD, was developed. The design was modeled, built, and tested at National Security Technologies, LLC (NSTec) Livermore Operations (LO) to determine fundamental characteristics. Then, several different systems were fielded as ancillary "ride-along" diagnostics at several national facilities to allow us to iteratively improve the design and usability. Presented here are design considerations and experimental results. This filtered diode array is currently at Technical Readiness Level (TRL) 6.
, Steven; et al.; May 3, 1988
An ion energy filter of the type useful in connection with secondary ion mass spectrometry is disclosed. The filter is composed of a stack of 20 thin metal plates, each plate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahle, J.J.; Buettner, L.C.; Mauer, S.
A series of experimental results are reported for breakthrough of the agent simulants DMMP and DIMP on coconut carbon. This adsorbent is used in filters for the Chemical Demilitarization program. The conditions were appropriate for a post-treatment stack-gas filter. Results indicate that high capacity and long filtration times are achievable under moderate humidity conditions up to 180 °F.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buonanno, Giorgio, E-mail: buonanno@unicas.it; Stabile, Luca; Avino, Pasquale
2011-11-15
Highlights: > Particle size distribution and total concentration measurements at the stack and before the fabric filter of an incinerator. > Chemical characterization of UFPs in terms of heavy metal concentration through a nuclear method. > Mineralogical investigation through a Transmission Electron Microscope equipped with an Energy Dispersive Spectrometer. > Heavy metal concentrations on UFPs as a function of the boiling temperature. > Different mineralogical and morphological composition between samples collected before the fabric filter and at the stack. - Abstract: Waste combustion processes are responsible for particle and gaseous emissions. Regarding particle emissions, in recent years specific attention has been paid to ultrafine particles (UFPs, diameter less than 0.1 µm), mainly emitted by combustion processes. In fact, recent findings of toxicological and epidemiological studies indicate that fine and ultrafine particles could represent a risk for health and the environment. Therefore, it is necessary to quantify particle emissions from incinerators, also to perform an exposure assessment for the human populations living in their surrounding areas. To these purposes, in the present work an experimental campaign aimed at monitoring UFPs was carried out at the incineration plant in San Vittore del Lazio (Italy). Particle size distributions and total concentrations were measured both at the stack and before the fabric filter inlet in order to evaluate the removal efficiency of the filter in terms of UFPs. A chemical characterization of UFPs in terms of heavy metal concentration was performed through a nuclear method, i.e., Instrumental Neutron Activation Analysis (INAA), and a mineralogical investigation was carried out through a Transmission Electron Microscope (TEM) equipped with an Energy Dispersive Spectrometer (EDS) in order to evaluate the shape, crystalline state, and mineral compounds of the sampled particles. Maximum number concentrations of 2.7 × 10^7 part. cm^-3 and 2.0 × 10^3 part. cm^-3 were found before and after the fabric filter, respectively, showing a very high particle-removal efficiency of the fabric filter. With regard to heavy metal concentrations, the elements with higher boiling temperatures present higher concentrations at lower diameters, indicating incomplete evaporation in the combustion section and the consequent condensation of semi-volatile compounds on solid nuclei. In terms of mineralogical and morphological analysis, the most abundant compounds found in samples collected before the fabric filter are Na-K-Pb oxides followed by phyllosilicates, whereas different oxides of comparable abundance were detected in the samples collected at the stack.
40 CFR Appendix M to Part 51 - Recommended Test Methods for State Implementation Plans
Code of Federal Regulations, 2010 CFR
2010-07-01
... different temperature within 60 °C (108 °F) of the temperature at which the cyclone is to be used and... properly sized and shaped for cleaning the nozzle, cyclone, filter holder, and probe or probe liner, with... stack temperatures from 38 to 260 °C (100 to 500 °F) and stack moisture up to 50 percent. Also, the...
Multispectral Digital Image Analysis of Varved Sediments in Thin Sections
NASA Astrophysics Data System (ADS)
Jäger, K.; Rein, B.; Dietrich, S.
2006-12-01
An update of the recently developed method COMPONENTS (Rein, 2003; Rein & Jäger, subm.) for the discrimination of sediment components in thin sections is presented here. COMPONENTS uses a 6-band (multispectral) image analysis. To derive six-band spectral information of the sediments, thin sections are scanned with a digital camera mounted on a polarizing microscope. The thin sections are scanned twice, under polarized and under unpolarized plain light. During each run RGB images are acquired which are subsequently stacked into a six-band file. The first three bands (Blue=1, Green=2, Red=3) result from the spectral behaviour in the blue, green and red bands under unpolarized light, and bands 4 to 6 (Blue=4, Green=5, Red=6) from the polarized light run. The next step is the discrimination of the sediment components by their transmission behaviour. Automatic classification algorithms broadly used in remote sensing applications cannot be used due to unavoidable variations of sediment particle or thin section thicknesses that change absolute grey values of the sediment components. Thus, we use an approach based on band ratios, also known as indices. By using band ratios, the grey values measured in different bands are normalized against each other and illumination variations (e.g. thickness variations) are eliminated. By combining specific ratios we are able to detect all seven major components in the investigated sediments (carbonates, diatoms, fine clastic material, plant remains, pyrite, quartz and resin). Then, the classification results (compositional maps) are validated. Although the automatic classification and the analogous classification show high concordances, some systematic errors could be identified.
For example, the transition zone between the sediment and resin-filled cracks is classified as fine clastic material, and very coarse carbonates are partly classified as quartz because coarse carbonates can be very bright and their spectra partly saturated (grey value 255). With reduced illumination intensity, "carbonate image pixels" become unsaturated and can be well distinguished from quartz grains. During the evaluation process we identify all falsely classified areas using neighbourhood matrices and reclassify them. Finally, we use filter techniques to derive downcore component frequencies from the classified thin-section images for virtual samples of variable thickness. The filter conducts neighbourhood analyses: after filtering, each pixel of the filtered images carries information about the frequency of a given component in a defined neighbourhood around it (virtual sampling). References: Rein, B. (2003) In-situ Reflektionsspektroskopie und digitale Bildanalyse: Gewinnung hochauflösender Paläoumweltdaten mit fernerkundlichen Methoden. Habilitation Thesis, Univ. Mainz, 104 p. Jäger, K. and Rein, B. (2005) Identifying varve components using digital image analysis techniques. In: Heidi Haas, Karl Ramseyer & Fritz Schlunegger (eds.): Sediment 2005 (18th-20th July 2005), Schriftenreihe der Deutschen Gesellschaft für Geowissenschaften, 38, p. 81. Rein, B. and Jäger, K. (subm.) COMPONENTS - Sediment component detection in thin sections by multispectral digital image analysis. Sedimentology.
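The key property behind the band-ratio approach, that a per-pixel thickness or illumination factor scales all six bands equally and therefore cancels in any ratio, can be verified in a few lines. The band values and scaling range below are arbitrary illustrative data.

```python
import numpy as np

rng = np.random.default_rng(4)
spectrum = rng.uniform(50.0, 200.0, (16, 16, 6))    # "true" six-band image
thickness = rng.uniform(0.6, 1.4, (16, 16, 1))      # per-pixel thickness factor
measured = spectrum * thickness                     # what the camera records

def band_ratios(img):
    """All 15 pairwise band ratios per pixel (the 'indices')."""
    i, j = np.triu_indices(6, k=1)
    return img[..., i] / img[..., j]
```

Since the scalar factor multiplies numerator and denominator alike, the ratios of the measured image equal those of the true spectrum, so classification rules built on ratios are insensitive to thickness variations.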
Agarwal, Krishna; Macháň, Radek; Prasad, Dilip K
2018-03-21
Localization microscopy and the multiple signal classification algorithm use a temporal stack of image frames of sparse emissions from fluorophores to provide super-resolution images. Localization microscopy localizes emissions in each image independently and later collates the localizations from all the frames, giving the same weight to each frame irrespective of its signal-to-noise ratio. This results in a bias towards frames with low signal-to-noise ratio and causes a cluttered background in the super-resolved image. User-defined heuristic computational filters are employed to remove a set of localizations in an attempt to overcome this bias. Multiple signal classification performs eigen-decomposition of the entire stack, irrespective of the relative signal-to-noise ratios of the frames, and uses a threshold to classify eigenimages into signal and null subspaces. This results in under-representation of frames with low signal-to-noise ratio in the signal space and over-representation in the null space. Thus, the multiple signal classification algorithm is biased against frames with low signal-to-noise ratio, resulting in suppression of the corresponding fluorophores. This paper presents techniques to automatically debias localization microscopy and the multiple signal classification algorithm of these biases without compromising their resolution and without employing heuristic, user-defined criteria. The effect of debiasing is demonstrated through five datasets of in vitro and fixed-cell samples.
Detection of long duration cloud contamination in hyper-temporal NDVI imagery
NASA Astrophysics Data System (ADS)
Ali, A.; de Bie, C. A. J. M.; Skidmore, A. K.; Scarrott, R. G.
2012-04-01
NDVI time-series imagery is commonly used as a reliable source for land use and land cover mapping and monitoring. However, long-duration cloud can significantly reduce its precision in areas where persistent cloud prevails. Quantifying errors related to cloud contamination is therefore essential for accurate land cover mapping and monitoring. This study aims to detect long-duration cloud contamination in hyper-temporal NDVI imagery used for land cover mapping and monitoring. MODIS-Terra NDVI imagery (250 m; 16-day; Feb '03-Dec '09) was used after the necessary pre-processing with quality flags and an upper-envelope filter (ASAVOGOL). Subsequently, the stacked MODIS-Terra NDVI image (161 layers) was classified into 10 to 100 clusters using ISODATA. After classification, the 97-cluster image was selected as the best classification with the help of divergence statistics. To detect long-duration cloud contamination, the mean NDVI class profiles of the 97-cluster image were analyzed for temporal artifacts. Results showed that long-duration clouds disturb the normal temporal progression of NDVI and cause anomalies. Of the 97 clusters, 32 showed cloud contamination, which was more prominent in areas of high rainfall. This study can help to stop error propagation, caused by long-duration cloud contamination, in regional land cover mapping and monitoring.
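One simple way to flag the temporal artifact described here, a sharp NDVI drop followed by recovery in a class's mean profile, can be sketched as follows. The drop threshold and the rule itself are illustrative assumptions, not the study's actual criterion.

```python
import numpy as np

def cloud_suspect(profile, drop=0.2):
    """Flag a mean-NDVI class profile containing a sudden dip that later
    recovers, the signature of long-duration cloud rather than phenology."""
    d = np.diff(profile)
    for i in range(len(d) - 1):
        # a steep fall at step i, and a later recovery well above the dip
        if d[i] < -drop and np.max(profile[i + 1:]) > profile[i + 1] + drop:
            return True
    return False

# Toy profiles over one seasonal cycle (46 sixteen-day composites per two years
# in the study; 46 points here for illustration).
t = np.linspace(0.0, 2.0 * np.pi, 46)
clean = 0.5 + 0.3 * np.sin(t)             # smooth phenological curve
cloudy = clean.copy()
cloudy[20:23] -= 0.35                     # three composites hit by cloud
```

Phenological change is gradual at a 16-day cadence, so a steep dip-and-recover pattern is a reasonable first-pass indicator of contamination before visual inspection.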
Boiler Briquette Coal versus Raw Coal: Part I-Stack Gas Emissions.
Ge, Su; Bai, Zhipeng; Liu, Weili; Zhu, Tan; Wang, Tongjian; Qing, Sheng; Zhang, Junfeng
2001-04-01
Stack gas emissions were characterized for a steam-generating boiler commonly used in China. The boiler was tested when fired with a newly formulated boiler briquette coal (BB-coal) and when fired with conventional raw coal (R-coal). The stack gas emissions were analyzed to determine emission rates and emission factors and to develop chemical source profiles. A dilution source sampling system was used to collect PM on both Teflon membrane filters and quartz fiber filters. The Teflon filters were analyzed gravimetrically for PM10 and PM2.5 mass concentrations and by X-ray fluorescence (XRF) for trace elements. The quartz fiber filters were analyzed for organic carbon (OC) and elemental carbon (EC) using a thermal/optical reflectance technique. Sulfur dioxide was measured using the standard wet chemistry method. Carbon monoxide was measured using an Orsat combustion analyzer. The emission rates of the R-coal combustion (in kg/hr), determined using the measured stack gas concentrations and the stack gas emission rates, were 0.74 for PM10, 0.38 for PM2.5, 20.7 for SO2, and 6.8 for CO, while those of the BB-coal combustion were 0.95 for PM10, 0.30 for PM2.5, 7.5 for SO2, and 5.3 for CO. The fuel-mass-based emission factors (in g/kg) of the R-coal, determined using the emission rates and the fuel burn rates, were 1.68 for PM10, 0.87 for PM2.5, 46.7 for SO2, and 15 for CO, while those of the BB-coal were 2.51 for PM10, 0.79 for PM2.5, 19.9 for SO2, and 14 for CO. The task-based emission factors (in g/ton steam generated) of the R-coal, determined using the fuel-mass-based emission factors and the coal/steam conversion factors, were 0.23 for PM10, 0.12 for PM2.5, 6.4 for SO2, and 2.0 for CO, while those of the BB-coal were 0.30 for PM10, 0.094 for PM2.5, 2.4 for SO2, and 1.7 for CO. PM10 and PM2.5 elemental compositions are also presented for both types of coal tested in the study.
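Since each fuel-mass-based emission factor is the emission rate divided by one common fuel burn rate, the R-coal numbers above can be checked for internal consistency: the implied burn rate should agree across pollutants. This is our own sanity check on the reported figures, not a computation from the paper (the burn rate itself is not stated in the abstract).

```python
# R-coal emission rates (converted from kg/hr to g/hr) and factors (g/kg)
rates_g_hr = {"PM10": 0.74e3, "PM2.5": 0.38e3, "SO2": 20.7e3, "CO": 6.8e3}
factors_g_kg = {"PM10": 1.68, "PM2.5": 0.87, "SO2": 46.7, "CO": 15.0}

# implied fuel burn rate, kg coal per hour, from each pollutant independently
implied_burn_kg_hr = {k: rates_g_hr[k] / factors_g_kg[k] for k in rates_g_hr}
```

All four implied burn rates cluster near 440 kg coal per hour, so the reported rates and factors are mutually consistent.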
Rapid spontaneous Raman light sheet microscopy using cw-lasers and tunable filters
Rocha-Mendoza, Israel; Licea-Rodriguez, Jacob; Marro, Mónica; Olarte, Omar E.; Plata-Sanchez, Marcos; Loza-Alvarez, Pablo
2015-01-01
We perform rapid spontaneous Raman 2D imaging in light-sheet microscopy using continuous-wave lasers and interferometric tunable filters. By angularly tuning the filter, the cut-on/off edge transitions are scanned along the excited Stokes wavelengths. This allows obtaining cumulative intensity profiles of the scanned vibrational bands, which are recorded as image stacks, resembling a spectral version of the knife-edge technique for measuring intensity profiles. A subsequent differentiation of the stack retrieves the Raman spectrum at each pixel of the image, which inherits the 3D resolution of the host light-sheet system. We demonstrate this technique using solvent solutions and composites of polystyrene beads and lipid droplets immersed in agar, and by imaging the C-H (2800-3100 cm^-1) region in a C. elegans worm. Image acquisition is four orders of magnitude faster than in confocal point-scanning Raman systems, opening the possibility of performing fast spontaneous Raman 3D imaging of biological samples. PMID:26417514
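The knife-edge idea behind this measurement is simple to state numerically: each slice of the stack integrates all Stokes light below the filter's moving cut-on edge, so differentiating the stack along the edge position recovers the spectrum. The band shape and grid below are an invented toy, not data from the paper.

```python
import numpy as np

wavenumber = np.linspace(2800.0, 3100.0, 61)              # C-H region, cm^-1
spectrum = np.exp(-((wavenumber - 2930.0) / 25.0) ** 2)   # toy Raman band

# Each stack slice records the cumulative intensity up to the current edge...
cumulative = np.cumsum(spectrum)
# ...and differentiating the stack retrieves the band shape at every pixel.
recovered = np.diff(cumulative)
```

Because `diff` undoes `cumsum` exactly, the per-pixel spectrum is recovered without any spectrometer, at the cost of one stack acquisition per band scan.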
Pixelated coatings and advanced IR coatings
NASA Astrophysics Data System (ADS)
Pradal, Fabien; Portier, Benjamin; Oussalah, Meihdi; Leplan, Hervé
2017-09-01
Reosc developed pixelated infrared coatings directly on detectors. Thick pixelated multilayer stacks were manufactured on IR focal-plane arrays for bi-spectral imaging systems, demonstrating high filter performance, low crosstalk, and no deterioration of device sensitivity. More recently, a 5-pixel filter matrix was designed and fabricated. These developments show that high-performance infrared filters can be coated directly on detectors for multispectral imaging. Next-generation space instruments can benefit from this technology to reduce their weight and power consumption.
User Driven Image Stacking for ODI Data and Beyond via a Highly Customizable Web Interface
NASA Astrophysics Data System (ADS)
Hayashi, S.; Gopu, A.; Young, M. D.; Kotulla, R.
2015-09-01
While some astronomical archives have begun serving standard calibrated data products, the process of producing stacked images remains a challenge left to the end-user. The benefits of astronomical image stacking are well established, and dither patterns are recommended for almost all observing targets. Some archives automatically produce stacks of limited scientific usefulness without any fine-grained user or operator configurability. In this paper, we present PPA Stack, a web based stacking framework within the ODI - Portal, Pipeline, and Archive system. PPA Stack offers a web user interface with built-in heuristics (based on pointing, filter, and other metadata information) to pre-sort images into a set of likely stacks while still allowing the user or operator complete control over the images and parameters for each of the stacks they wish to produce. The user interface, designed using AngularJS, provides multiple views of the input dataset and parameters, all of which are synchronized in real time. A backend consisting of a Python application optimized for ODI data, wrapped around the SWarp software, handles the execution of stacking workflow jobs on Indiana University's Big Red II supercomputer, and the subsequent ingestion of the combined images back into the PPA archive. PPA Stack is designed to enable seamless integration of other stacking applications in the future, so users can select the most appropriate option for their science.
Raposo, Letícia M; Nobre, Flavio F
2017-08-30
Resistance to antiretrovirals (ARVs) is a major problem faced by HIV-infected individuals. Different rule-based algorithms were developed to infer HIV-1 susceptibility to antiretrovirals from genotypic data. However, there is discordance between them, resulting in difficulties for clinical decisions about which treatment to use. Here, we developed ensemble classifiers integrating three interpretation algorithms: Agence Nationale de Recherche sur le SIDA (ANRS), Rega, and the genotypic resistance interpretation system from Stanford HIV Drug Resistance Database (HIVdb). Three approaches were applied to develop a classifier with a single resistance profile: stacked generalization, a simple plurality vote scheme and the selection of the interpretation system with the best performance. The strategies were compared with the Friedman's test and the performance of the classifiers was evaluated using the F-measure, sensitivity and specificity values. We found that the three strategies had similar performances for the selected antiretrovirals. For some cases, the stacking technique with naïve Bayes as the learning algorithm showed a statistically superior F-measure. This study demonstrates that ensemble classifiers can be an alternative tool for clinical decision-making since they provide a single resistance profile from the most commonly used resistance interpretation systems.
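The simple plurality vote scheme mentioned above can be sketched as follows; the tie-break rule (fall back to the most resistant-leaning call) is an assumption for illustration, not necessarily the paper's choice.

```python
from collections import Counter

def plurality_vote(calls):
    """Combine resistance calls ('S', 'I', 'R') from several interpretation
    systems into one profile by majority; ties fall back to the more
    conservative (resistant-leaning) call."""
    order = {'R': 0, 'I': 1, 'S': 2}  # tie-break preference is an assumption
    counts = Counter(calls)
    best = max(counts.values())
    tied = [c for c, n in counts.items() if n == best]
    return min(tied, key=order.get)

# e.g. ANRS, Rega and HIVdb disagree on one sample:
print(plurality_vote(['R', 'R', 'S']))  # prints R
```

The stacked-generalization alternative studied in the paper would replace this fixed rule with a learner (e.g. naïve Bayes) trained on the three systems' outputs.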
NASA Astrophysics Data System (ADS)
Jemberie, A.; Dugda, M. T.; Reusch, D.; Nyblade, A.
2006-12-01
Neural networks are decision-making mathematical/engineering tools which, if trained properly, can automatically (and objectively) do jobs that normally require particular expertise and/or tedious repetition. Here we explore two techniques from the field of artificial neural networks (ANNs) that seek to reduce the time requirements and increase the objectivity of quality control (QC) and event identification (EI) on seismic datasets. We apply multilayer feed-forward (FF) ANNs and self-organizing maps (SOMs) in combination with Hk stacking of receiver functions to test the usefulness of automatic classification of receiver functions for crustal parameter determination. Feed-forward ANNs (FFNNs) are a supervised classification tool, while self-organizing maps (SOMs) provide unsupervised classification of large, complex geophysical data sets into a fixed number of distinct generalized patterns or modes. Hk stacking is a methodology that stacks receiver functions based on the relative arrival times of the P-to-S converted phase and the next two reverberations to determine crustal thickness H and the Vp-to-Vs ratio (k). We use receiver functions from teleseismic events recorded by the 2000-2002 Ethiopia Broadband Seismic Experiment. Preliminary results of applying an FFNN and Hk stacking for automatic receiver function classification, as a step toward automatic crustal parameter determination, look encouraging. After training, the FFNN could separate the best receiver functions from bad ones with a success rate of about 75 to 95%. Applying Hk stacking to the receiver functions classified by this FFNN as the best, we obtained a crustal thickness and Vp/Vs ratio of 31±4 km and 1.75±0.05, respectively, for the crust beneath station ARBA in the Main Ethiopian Rift.
For comparison, we applied Hk stacking to the receiver functions that we ourselves classified as the best set and found a crustal thickness and Vp/Vs ratio of 31±2 km and 1.75±0.02, respectively.
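The Hk grid search can be sketched on a synthetic radial receiver function. The P velocity, ray parameter, stacking weights and pulse shape below are illustrative assumptions, not values from the Ethiopia experiment; only the 31 km / 1.75 "true" crust used to build the synthetic echoes the abstract.

```python
import numpy as np

Vp, p = 6.5, 0.06                      # km/s and s/km, assumed values
t = np.linspace(0, 30, 3001)           # time axis (s)

def phase_times(H, k):
    """Predicted Ps, PpPs and PpSs+PsPs delay times for thickness H, k=Vp/Vs."""
    eta_s = np.sqrt((k / Vp) ** 2 - p ** 2)   # vertical S slowness
    eta_p = np.sqrt(1.0 / Vp ** 2 - p ** 2)   # vertical P slowness
    return H * (eta_s - eta_p), H * (eta_s + eta_p), 2 * H * eta_s

def pulse(t0, amp, width=0.3):
    return amp * np.exp(-0.5 * ((t - t0) / width) ** 2)

tPs, tPpPs, tMult = phase_times(31.0, 1.75)          # "true" crust
rf = pulse(tPs, 1.0) + pulse(tPpPs, 0.5) + pulse(tMult, -0.4)
r = lambda tt: np.interp(tt, t, rf)

# Grid search: stack RF amplitude at the predicted arrival times; the
# multiple enters with a minus sign because its polarity is reversed.
w1, w2, w3 = 0.5, 0.3, 0.2
Hs, ks = np.arange(25, 40, 0.1), np.arange(1.6, 1.9, 0.005)
S = np.zeros((len(Hs), len(ks)))
for i, H in enumerate(Hs):
    for j, k in enumerate(ks):
        t1, t2, t3 = phase_times(H, k)
        S[i, j] = w1 * r(t1) + w2 * r(t2) - w3 * r(t3)

i, j = np.unravel_index(np.argmax(S), S.shape)
H_best, k_best = Hs[i], ks[j]
```

The stack maximum recovers the crustal parameters used to build the synthetic, because only one (H, k) pair aligns all three predicted times with the observed phases.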
[A Double Split Ring Terahertz Filter on Polyimide Substrate].
He, Jun; Zhang, Tie-jun; Xiong, Wei; Zhang, Bo; He, Ting; Shen, Jing-ling
2015-11-01
Metamaterials are artificial composites that acquire their electromagnetic properties from embedded subwavelength metallic structures. With proper design, metamaterials can realize numerous intriguing phenomena not exhibited in nature, such as invisibility cloaking, perfect absorption, and negative refractive index. In recent years, with the development of THz technology, research on THz metamaterial devices has attracted increasing attention. Since silicon (Si) has a high transmittance for THz waves, it is usually selected as the substrate in metamaterial structures. However, Si is hard, difficult to bend, and fragile, which limits the application of THz metamaterials. In this work, we use polyimide as the substrate to overcome these shortcomings. Polyimide is flexible, smooth, suitable for conventional lithography, and its THz transmittance can compete with that of Si. First, we measured the THz optical properties of polyimide, obtaining a refractive index of 1.9 and a transmittance of 80%. Second, we designed double split-ring resonators (DSRRs) and studied their transmission properties while changing the THz incidence angle and the curvature of the sample; the resonant amplitudes and resonant frequencies were unchanged. Fabricating metamaterial structures on a thin plastic substrate is thus a possible way to extend plane-surface filtering to curved-surface filtering. Third, we made a broadband filter by stacking two samples, achieving a bandwidth of 181 GHz at 50% transmission. By stacking several planar plastic metamaterial layers with different resonance responses into a multi-layer structure, a broadband THz filter can be built. The broadband filter has the advantages of simple manufacture and an obvious filtering effect, providing a new idea for the production of terahertz-band filters.
Kim, Jihoon; Grillo, Janice M; Boxwala, Aziz A; Jiang, Xiaoqian; Mandelbaum, Rose B; Patel, Bhakti A; Mikels, Debra; Vinterbo, Staal A; Ohno-Machado, Lucila
2011-01-01
Our objective is to facilitate semi-automated detection of suspicious access to EHRs. Previously we have shown that a machine learning method can play a role in identifying potentially inappropriate access to EHRs. However, the problem of sampling informative instances to build a classifier still remained. We developed an integrated filtering method leveraging both anomaly detection based on symbolic clustering and signature detection, a rule-based technique. We applied the integrated filtering to 25.5 million access records in an intervention arm, and compared this with 8.6 million access records in a control arm where no filtering was applied. On the training set with cross-validation, the AUC was 0.960 in the control arm and 0.998 in the intervention arm. The difference in false negative rates on the independent test set was significant, P=1.6×10(-6). Our study suggests that utilization of integrated filtering strategies to facilitate the construction of classifiers can be helpful.
Maggioni, Matteo; Boracchi, Giacomo; Foi, Alessandro; Egiazarian, Karen
2012-09-01
We propose a powerful video filtering algorithm that exploits temporal and spatial redundancy characterizing natural video sequences. The algorithm implements the paradigm of nonlocal grouping and collaborative filtering, where a higher dimensional transform-domain representation of the observations is leveraged to enforce sparsity, and thus regularize the data: 3-D spatiotemporal volumes are constructed by tracking blocks along trajectories defined by the motion vectors. Mutually similar volumes are then grouped together by stacking them along an additional fourth dimension, thus producing a 4-D structure, termed group, where different types of data correlation exist along the different dimensions: local correlation along the two dimensions of the blocks, temporal correlation along the motion trajectories, and nonlocal spatial correlation (i.e., self-similarity) along the fourth dimension of the group. Collaborative filtering is then realized by transforming each group through a decorrelating 4-D separable transform and then by shrinkage and inverse transformation. In this way, the collaborative filtering provides estimates for each volume stacked in the group, which are then returned and adaptively aggregated to their original positions in the video. The proposed filtering procedure addresses several video processing applications, such as denoising, deblocking, and enhancement of both grayscale and color data. Experimental results prove the effectiveness of our method in terms of both subjective and objective visual quality, and show that it outperforms the state of the art in video denoising.
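A much-simplified 1-D sketch of the grouping / collaborative-filtering / aggregation paradigm described above is given below. Hard thresholding stands in for the paper's shrinkage, non-overlapping blocks of a noisy signal stand in for motion-tracked spatiotemporal volumes, and the block size, group size and threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D stand-in for a video frame: clean signal plus white noise (sigma = 0.5).
n, B, K = 1024, 16, 8                 # signal length, block size, group size
clean = np.sin(2 * np.pi * np.arange(n) / 64)
noisy = clean + 0.5 * rng.standard_normal(n)

blocks = noisy.reshape(-1, B)         # non-overlapping blocks for brevity
est = np.zeros_like(blocks)
weight = np.zeros(len(blocks))
thr = 3.0 * 0.5 * np.sqrt(K * B)      # ~3x the typical noise-coefficient modulus

for i, ref in enumerate(blocks):
    # Nonlocal grouping: stack the K blocks most similar to the reference.
    order = np.argsort(np.sum((blocks - ref) ** 2, axis=1))[:K]
    group = blocks[order]                          # K x B group
    # Collaborative filtering: separable 2-D transform + hard threshold.
    coeff = np.fft.fft2(group)
    coeff[np.abs(coeff) < thr] = 0.0
    filt = np.real(np.fft.ifft2(coeff))
    # Aggregation: return each filtered block to its position, then average.
    for g, row in zip(order, filt):
        est[g] += row
        weight[g] += 1

denoised = (est / weight[:, None]).ravel()
```

Because similar blocks share content but not noise, the decorrelating transform concentrates the signal into a few large coefficients, and thresholding removes most of the noise energy.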
Vision-based posture recognition using an ensemble classifier and a vote filter
NASA Astrophysics Data System (ADS)
Ji, Peng; Wu, Changcheng; Xu, Xiaonong; Song, Aiguo; Li, Huijun
2016-10-01
Posture recognition is a very important Human-Robot Interaction (HRI) modality. To segment an effective posture from an image, we propose an improved region-grow algorithm combined with a Single Gauss Color Model. Experiments show that the improved region-grow algorithm extracts a more complete and accurate posture than the traditional Single Gauss Model and region-grow algorithm, while also eliminating similar regions from the background. For the posture recognition part, in order to improve the recognition rate we propose a CNN ensemble classifier, and in order to reduce misjudgments during continuous gesture control, a vote filter is applied to the sequence of recognition results. The proposed CNN ensemble classifier yields a 96.27% recognition rate, better than that of a single CNN classifier, and the proposed vote filter improves the recognition results and reduces misjudgments during consecutive gesture switches.
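A minimal sketch of such a vote filter over the stream of per-frame labels follows; the window length and label names are assumptions, and the paper's exact filter may differ.

```python
from collections import Counter, deque

def vote_filter(labels, window=5):
    """Smooth a stream of per-frame posture labels by majority vote over a
    sliding window, suppressing isolated misclassifications."""
    buf, out = deque(maxlen=window), []
    for lab in labels:
        buf.append(lab)
        out.append(Counter(buf).most_common(1)[0][0])
    return out

# A single spurious 'fist' inside a steady 'palm' gesture is voted away:
print(vote_filter(['palm', 'palm', 'fist', 'palm', 'palm']))
# -> ['palm', 'palm', 'palm', 'palm', 'palm']
```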
NASA Astrophysics Data System (ADS)
Liu, Weiqiang; Chen, Rujun; Cai, Hongzhu; Luo, Weibin
2016-12-01
In this paper, we investigate the robust processing of noisy spread spectrum induced polarization (SSIP) data. SSIP is a new frequency-domain induced polarization method that transmits a pseudo-random m-sequence, a broadband signal, as the source current; potential information at multiple frequencies can be obtained from a single measurement. Removing noise is a crucial problem in SSIP data processing: the ordinary mean stack and digital filtering are not capable of reducing impulse noise effectively, so its impact remains in the complex resistivity spectrum and affects the interpretation of profile anomalies. We implemented a robust statistical scheme for SSIP data processing. Robust least-squares regression is used to fit and remove the linear trend from the original data before stacking, a robust M-estimate is used to stack the data of all periods, and a robust smoothing filter is used to suppress the residual noise after stacking. The most appropriate influence function and iterative algorithm were chosen by testing on simulated data to suppress the influence of outliers. We demonstrate the benefits of robust SSIP processing using SSIP data recorded at a test site beside a mine in Gansu province, China.
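A minimal sketch of robust stacking across repeated periods with a Huber M-estimate, solved by iteratively reweighted least squares: the tuning constant c=1.345 is a standard textbook choice and the synthetic data are assumptions, not the influence function or records actually used in the paper.

```python
import numpy as np

def huber_stack(x, c=1.345, iters=20):
    """Robust M-estimate of the stacked value of repeated measurements.
    Huber weights with a MAD scale estimate; c=1.345 gives ~95% efficiency
    at the Gaussian."""
    mu = np.median(x)
    for _ in range(iters):
        r = x - mu
        scale = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust sigma (MAD)
        u = np.abs(r) / (c * scale)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)            # Huber weights
        mu = np.sum(w * x) / np.sum(w)
    return mu

# 19 clean periods near 1.0 plus one impulse-corrupted period at 50.0:
periods = np.concatenate([np.linspace(0.9, 1.1, 19), [50.0]])
print(periods.mean(), huber_stack(periods))
```

The ordinary mean is dragged toward the impulse, while the M-estimate stays near the inlier level, which is exactly the failure mode of mean stacking that the paper addresses.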
The Pan-STARRS pipeline and data products
NASA Astrophysics Data System (ADS)
Flewelling, Heather
2018-01-01
I will give a brief overview of the pipeline, database, and data products for Pan-STARRS1 data release 1 (DR1) and data release 2 (DR2). DR1 and DR2 provide access to data from the Pan-STARRS1 3pi survey, which covers ¾ of the sky (everything with a declination greater than -30) over 4 years (2010-2014) in 5 filters (g,r,i,z,y), with at least 12 epochs per filter per area of sky. DR1, released in December 2016 and available to the public at http://stsci.panstarrs.edu, consists of two parts: the stacked images, with a 5-sigma depth of (23.3, 23.2, 23.1, 22.3, 21.3) in (g,r,i,z,y), and the catalog database, which contains 10 billion distinct objects, their mean properties from single exposures, and stack photometry. DR2, to be released in early 2018, will contain the individual exposure images, with a 5-sigma depth of (22.0, 21.8, 21.5, 20.9, 19.7) in (g,r,i,z,y), and the time-domain catalogs from the 374k individual exposures taken for the 3pi survey. I will primarily focus on the catalog database, describing a subset of the tables and different use cases for them. Specifically, I will describe the major tables and metadata of DR1 - objects, their mean properties, and stack photometry - when the different tables should be used, and the basics of how to filter the data.
An Ensemble Framework Coping with Instability in the Gene Selection Process.
Castellanos-Garzón, José A; Ramos, Juan; López-Sánchez, Daniel; de Paz, Juan F; Corchado, Juan M
2018-03-01
This paper proposes an ensemble framework for gene selection, which is aimed at addressing instability problems presented in the gene filtering task. The complex process of gene selection from gene expression data faces different instability problems arising from the informative gene subsets found by different filter methods, which makes the identification of significant genes by experts difficult. The instability of results can come from filter methods, gene classifier methods, different datasets of the same disease, and multiple valid groups of biomarkers. Even though there are many proposals, the complexity imposed by this problem remains a challenge today. This work proposes a framework involving five stages of gene filtering to discover biomarkers for diagnosis and classification tasks. The framework performs a process of stable feature selection, facing the problems above and thus providing a more suitable and reliable solution for clinical and research purposes. Our proposal involves a process of multistage gene filtering, in which several ensemble strategies for gene selection are combined in such a way that different classifiers simultaneously assess gene subsets to face instability. First, we apply an ensemble of recent gene selection methods to obtain diversity in the genes found (stability with respect to filter methods). Next, we apply an ensemble of known classifiers to retain genes relevant to all classifiers at once (stability with respect to classification methods). The achieved results were evaluated on two different datasets of the same disease (pancreatic ductal adenocarcinoma), in search of stability with respect to the disease, and promising results were achieved.
PCDD/F emissions during startup and shutdown of a hazardous waste incinerator.
Li, Min; Wang, Chao; Cen, Kefa; Ni, Mingjiang; Li, Xiaodong
2017-08-01
Compared with municipal solid waste incineration, studies on the PCDD/F emissions of hazardous waste incineration (HWI) under transient conditions are rather few. This study investigates the PCDD/F emission levels, congener profiles and removal efficiencies recorded during startup and shutdown by collecting flue gas samples at the bag filter inlet and outlet and at the stack. The PCDD/F concentrations measured in the stack gas during startup and shutdown were 0.56-4.16 ng I-TEQ Nm⁻³ and 1.09-3.36 ng I-TEQ Nm⁻³, respectively, far exceeding the present codes in China. The total amount of PCDD/F emitted during three shutdown-startup cycles of this HWI unit is almost equal to that generated during one year of normal operation. Upstream of the filter, the PCDD/F in the flue gas is mainly in the particle phase; after filtering, however, PCDD/F prevails in the gas phase, and the gas-phase fraction even exceeds 98% after passing through the alkaline scrubber. Higher-chlorinated PCDD/F in particular accumulate on the inner walls of filters and ducts during startup periods and can be released again during normal operation, significantly increasing PCDD/F emissions. Copyright © 2017. Published by Elsevier Ltd.
FPGA-Based Optical Cavity Phase Stabilization for Coherent Pulse Stacking
Xu, Yilun; Wilcox, Russell; Byrd, John; ...
2017-11-20
Coherent pulse stacking (CPS) is a new time-domain coherent addition technique that stacks several optical pulses into a single output pulse, enabling high pulse energy from fiber lasers. We develop a robust, scalable, and distributed digital control system, with integrated firmware and software for the algorithms, to support the CPS application. We model CPS as a digital filter in the Z domain and implement a pulse-pattern-based cavity phase detection algorithm on a field-programmable gate array (FPGA). A two-stage (2+1 cavities) 15-pulse stacking system achieves an 11.0 peak-power enhancement factor. Each optical cavity is fed back at 1.5 kHz and stabilized at an individually prescribed round-trip phase, with 0.7 and 2.1 degrees rms phase error for Stages 1 and 2, respectively. Optical cavity phase control with nanometer accuracy ensures 1.2% intensity stability of the stacked pulse over 12 h. The FPGA-based feedback control system can be scaled to large numbers of optical cavities.
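To illustrate why a stacking cavity can be treated as a digital filter in the Z domain, a single recirculating cavity can be modeled as a one-pole IIR system: the stored field s obeys s[n] = r·e^{iφ}·s[n-1] + t·x[n], i.e. a single pole at r·e^{iφ}. The reflectivity, coupling and pulse train below are assumptions for the sketch, not the experiment's parameters.

```python
import numpy as np

def stack_pulses(x, phi, r=0.9):
    """Stored intensity in a model stacking cavity after a pulse train x,
    for round-trip phase phi; lossless coupling t = sqrt(1 - r^2) assumed."""
    t = np.sqrt(1.0 - r ** 2)
    s = 0.0 + 0.0j
    for xn in x:
        s = r * np.exp(1j * phi) * s + t * xn   # one-pole IIR recursion
    return abs(s) ** 2

train = np.ones(15)                    # 15 equal input pulses
good = stack_pulses(train, phi=0.0)    # cavity locked at the right phase
bad = stack_pulses(train, phi=np.pi)   # half-wave round-trip phase error
```

With the correct phase the recirculating field adds coherently and the stored intensity greatly exceeds a single pulse's, while a phase error makes successive round trips interfere destructively; this is the behavior the FPGA phase lock maintains.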
NASA Astrophysics Data System (ADS)
Vo, Martin
2017-08-01
Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished using attributes of light curves or any time series, including shapes, histograms, or variograms, or using other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying the features which describe the objects to be searched, the software trains on a given training sample and can then be used for unsupervised clustering to visualize the natural separation of the sample. The package can also be used for automatic tuning of method parameters (for example, the number of hidden neurons or the binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. Light Curves Classifier can also be used for simple downloading of light curves and all available information about queried stars. It can natively connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and the command-line UI, the program can be used through a web interface. Users can create jobs for training methods on given objects, querying databases, and filtering outputs with trained filters. Preimplemented descriptors, classifiers and connectors can be picked with simple clicks and their parameters can be tuned by giving ranges of values; all combinations are then calculated and the best one is used to create the filter. The natural separation of the data can be visualized by unsupervised clustering.
Imaging the Moon's Core with Seismology
NASA Technical Reports Server (NTRS)
Weber, Renee C.; Lin, Pei-Ying Patty; Garnero, Ed J.; Williams, Quetin C.; Lognonne, Philippe
2011-01-01
Constraining the structure of the lunar core is necessary to improve our understanding of the present-day thermal structure of the interior and the history of a lunar dynamo, as well as the origin and thermal and compositional evolution of the Moon. We analyze Apollo deep moonquake seismograms using terrestrial array processing methods to search for the presence of reflected and converted energy from the lunar core. Although moonquake fault parameters are not constrained, we first explore a suite of theoretical focal spheres to verify that fault planes exist that can produce favorable core reflection amplitudes relative to direct up-going energy at the Apollo stations. Beginning with stacks of event seismograms from the known distribution of deep moonquake clusters, we apply a polarization filter to account for the effects of seismic scattering that (a) partitions energy away from expected components of ground motion, and (b) obscures all but the main P- and S-wave arrivals. The filtered traces are then shifted to the predicted arrival time of a core phase (e.g. PcP) and stacked to enhance subtle arrivals associated with the Moon's core. This combination of filtering and array processing is well suited for detecting deep lunar seismic reflections, since we do not expect scattered wave energy from near-surface (or deeper) structure, recorded at varying epicentral distances and stations from varying moonquakes at varying depths, to stack coherently. Our results indicate the presence of a solid inner and fluid outer core, overlain by a partial-melt-containing boundary layer (Table 1). These layers are consistently observed among stacks from four classes of reflections: P-to-P, S-to-P, P-to-S, and S-to-S, and are consistent with current indirect geophysical estimates of core and deep mantle properties, including mass, moment of inertia, lunar laser ranging, and electromagnetic induction.
Future refinements are expected following the successful launch of the GRAIL lunar orbiter and SELENE 2 lunar lander missions.
Polarization-Insensitive Tunable Optical Filters based on Liquid Crystal Polarization Gratings
NASA Astrophysics Data System (ADS)
Nicolescu, Elena
Tunable optical filters are widely used for a variety of applications including spectroscopy, optical communication networks, remote sensing, and biomedical imaging and diagnostics. All of these application areas can greatly benefit from improvements in the key characteristics of the tunable optical filters embedded in them. Some of these key parameters include peak transmittance, bandwidth, tuning range, and transition width. In recent years research efforts have also focused on miniaturizing tunable optical filters into physically small packages for compact portable spectroscopy and hyperspectral imaging applications such as real-time medical diagnostics and defense applications. However, it is important that miniaturization not have a detrimental effect on filter performance. The overarching theme of this dissertation is to explore novel configurations of Polarization Gratings (PGs) as simple, low-cost, polarization-insensitive alternatives to conventional optical filtering technologies for applications including hyperspectral imaging and telecommunications. We approach this goal from several directions with a combination of theory and experimental demonstration leading to, in our opinion, a significant contribution to the field. We present three classes of tunable optical filters, the first of which is an angle-filtering scheme where the stop-band wavelengths are redirected off axis and the passband is transmitted on-axis. This is achieved using a stacked configuration of polarization gratings of various thicknesses. To improve this class of filter, we also introduce a novel optical element, the Bilayer Polarization Grating, exhibiting unique optical properties and demonstrating complex anchoring conditions with high quality. The second class of optical filter is analogous to a Lyot filter, utilizing stacks of static or tunable waveplates sandwiched with polarizing elements. 
However, we introduce a new configuration using PGs and static waveplates to replace the polarizers in the system, thereby greatly increasing the filter throughput. We then turn our attention to a Fourier filtering technique. This is a fundamentally different filtering approach involving a single PG where the filtering functionality involves selecting a spectral band with a movable aperture or slit and a diffractive element (PG in our case). Finally, we study the integration of a PG in a multi-channel wavelength blocker system focusing on the practical and fundamental limitations of using a PG as a variable optical attenuator/wavelength blocker in a commercial optical telecommunications network.
Efficient composite broadband polarization retarders and polarization filters
NASA Astrophysics Data System (ADS)
Dimova, E.; Ivanov, S. S.; Popkirov, G.; Vitanov, N. V.
2014-12-01
A new type of broadband polarization half-wave retarder and narrowband polarization filters are described and experimentally tested. Both the retarders and the filters are designed as composite stacks of standard optical half-wave plates, each twisted at a specific angle. The theoretical background of the proposed optical devices was obtained by analogy with the method of composite pulses known from nuclear and quantum physics. We show that by combining two composite filters built from different numbers and types of waveplates, the transmission spectrum is reduced from about 700 nm to about 10 nm in width. We experimentally demonstrate that this method can be applied to different types of waveplates (broadband, zero-order, multiple-order, etc.).
Narrowband diode laser pump module for pumping alkali vapors.
Rotondaro, M D; Zhdanov, B V; Shaffer, M K; Knize, R J
2018-04-16
We describe a method of line-narrowing and frequency-locking a diode laser stack to an alkali atomic line for use as a pump module for diode-pumped alkali lasers. The pump module consists of a 600 W antireflection-coated diode laser stack configured to lase in an external cavity. Line narrowing and frequency locking are accomplished by introducing into the external cavity a narrowband polarization filter based on the magneto-optical Faraday effect, which selectively transmits only the frequencies that are in resonance with the 6²S₁/₂ → 6²P₃/₂ transition of Cs atoms. The resulting pump module demonstrates that a diode laser stack that lases with a linewidth of 3 THz without narrowbanding can be narrowed to 10 GHz. The line-narrowed pump module produced 518 W, which is 80% of the power generated by the original broadband diode laser stack.
NASA Astrophysics Data System (ADS)
Takehara, Hironari; Miyazawa, Kazuya; Noda, Toshihiko; Sasagawa, Kiyotaka; Tokuda, Takashi; Kim, Soo Hyeon; Iino, Ryota; Noji, Hiroyuki; Ohta, Jun
2014-01-01
A CMOS image sensor with stacked photodiodes was fabricated using 0.18 µm mixed-signal CMOS process technology. Two photodiodes were stacked at the same position in each pixel of the CMOS image sensor. The stacked photodiodes consist of a shallow high-concentration N-type layer (N+), a P-type well (PW), a deep N-type well (DNW), and the P-type substrate (P-sub). PW and P-sub are shorted to ground. By monitoring the voltages of N+ and DNW individually, we can observe two monochromatic colors simultaneously without using any color filters. The CMOS image sensor is suitable for fluorescence imaging, especially contact imaging such as a lensless observation system for digital enzyme-linked immunosorbent assay (ELISA). Since the fluorescence increases with time in digital ELISA, it is possible to observe fluorescence accurately by calculating the difference from the initial relation between the pixel values of both photodiodes.
Stacked, filtered multi-channel X-ray diode array
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacNeil, Lawrence; Dutra, Eric; Raphaelian, Mark
2015-08-01
There are many types of X-ray diodes used for X-ray flux or spectroscopic measurements and for estimating the spectral shape of the VUV to soft X-ray spectrum. However, a need exists for a low-cost, robust X-ray diode for experiments in hostile environments on multiple platforms, and for experiments that involve forces that may destroy the diode(s). Since the typical proposed use required a small size with a minimal single line of sight, a parallel array could not be used, so a stacked, filtered multi-channel X-ray diode array, called the MiniXRD, was developed. To achieve significant cost savings while maintaining robustness and ease of field setup, repair, and replacement, we designed the system to be modular. The filters were manufactured in-house and cover the range from 450 eV to 5000 eV. To achieve the needed line-of-sight accuracy, we developed mounts and laser alignment techniques. We modeled and tested elements of the diode design at NSTec Livermore Operations (NSTec/LO) to determine temporal response and dynamic range, leading to diode shape and circuitry changes that optimize impedance and charge storage. The authors fielded individual and stacked systems at several national facilities as ancillary "ride-along" diagnostics to test and improve the design usability. This paper presents the MiniXRD system performance, which supports its consideration as a viable low-cost alternative for multiple-channel low-energy X-ray measurements. This diode array is currently at Technology Readiness Level (TRL) 6.
Joint deconvolution and classification with applications to passive acoustic underwater multipath.
Anderson, Hyrum S; Gupta, Maya R
2008-11-01
This paper addresses the problem of classifying signals that have been corrupted by noise and unknown linear time-invariant (LTI) filtering, such as multipath, given labeled uncorrupted training signals. A maximum a posteriori approach to joint deconvolution and classification is considered, which produces estimates of the desired signal, the unknown channel, and the class label. For cases in which only a class label is needed, classification accuracy can be improved by not committing to an estimate of the channel or signal. A variant of the quadratic discriminant analysis (QDA) classifier is proposed that probabilistically accounts for the unknown LTI filtering and avoids deconvolution. The proposed QDA classifier can work either directly on the signal or on features whose transformation under LTI filtering can be analyzed; as an example, a classifier for subband-power features is derived. Results on simulated data and real bowhead whale vocalizations show that jointly considering deconvolution with classification can dramatically improve classification performance over traditional methods across a range of signal-to-noise ratios.
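For background, a plain QDA classifier assigns each input to the class with the highest Gaussian log-posterior; the paper's variant additionally accounts for the unknown channel, which is not reproduced here. A minimal numpy sketch of the baseline decision rule (function names and the two-class demo are our assumptions):

```python
import numpy as np

def fit_qda(X, y):
    """Fit per-class Gaussian parameters: mean, covariance, and prior."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False), len(Xc) / len(X))
    return params

def qda_predict(X, params):
    """Assign each row of X to the class with the highest Gaussian
    log-posterior (quadratic discriminant rule)."""
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, cov, prior = params[c]
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = X - mu
        # Mahalanobis term d^T inv(cov) d for every row, plus log|cov| and prior
        ll = -0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet) + np.log(prior)
        scores.append(ll)
    return np.array(classes)[np.argmax(scores, axis=0)]
```

The channel-compensated variant in the paper replaces the fixed class Gaussians with distributions marginalized over the unknown LTI filter.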
Studies on spatio-temporal filtering of GNSS-derived coordinates
NASA Astrophysics Data System (ADS)
Gruszczynski, Maciej; Bogusz, Janusz; Kłos, Anna; Figurski, Mariusz
2015-04-01
Information about lithospheric deformations may nowadays be obtained by analysis of velocity fields derived from permanent GNSS (Global Navigation Satellite System) observations. Despite the development of ever more reliable models, permanent-station residuals must still be treated as coloured noise. To meet the GGOS (Global Geodetic Observing System) requirements, we are obliged to investigate the correlations between residuals, which are the result of common mode error (CME). This type of error may arise from mismodelling of satellite orbits, the Earth Orientation Parameters, or satellite antenna phase centre variations, or from unmodelled large-scale atmospheric effects. Together, these cause correlations between the stochastic parts of coordinate time series obtained at stations located even a few thousand kilometres from each other. Permanent stations that meet the aforementioned terms form regional (EPN - EUREF Permanent Network) or local sub-networks of the global (IGS - International GNSS Service) network. Other authors (Wdowinski et al., 1997; Dong et al., 2006) dealt with spatio-temporal filtering and indicated three major regional filtering approaches: stacking, Principal Component Analysis (PCA) based on the empirical orthogonal function, and the Karhunen-Loeve expansion. The need for spatio-temporal filtering is evident today, but the question of whether the size of the network affects the accuracy of a station's position and velocity still remains unanswered. With the aim of determining the network size for which the assumption of a spatially uniform distribution of CME is retained, we used the stacking approach. We analyzed time series of IGS stations with daily network solutions processed by the Military University of Technology EPN Local Analysis Centre in Bernese 5.0 software and compared them with the JPL (Jet Propulsion Laboratory) PPP (Precise Point Positioning) solutions.
The method we propose is based on the division of local GNSS networks into concentric ring-shaped areas. Such an approach allows us to specify the maximum size of the network for which an evidently uniform spatial response can still be noticed. In terms of reliable CME extraction, the local networks have to be up to 500-600 kilometres in extent, depending on their character (location). In this study we examined three approaches to spatio-temporal filtering based on the stacking procedure. The first was based on a non-weighted average formula (Wdowinski et al., 1997) and the second on a weighted average formula, where the weights are formed by the RMS of the individual station position in the corresponding epoch (Nikolaidis, 2002). The third stacking approach, proposed here, was previously unused: it combines weighted stacking with the distance between the station and the network barycentre in one approach. The analysis allowed us to determine the optimal size of a local GNSS network and to select the appropriate stacking method for obtaining the most stable solutions, e.g. for geodynamical studies. The values of L1 and L2 norms, RMS values of the time series (describing their stability) and Pearson correlation coefficients were calculated for the North, East and Up components from more than 200 permanent stations twice: before filtration and after the weighted stacking approach. We showed the improvement in the quality of time series analysis using MLE (Maximum Likelihood Estimation) to estimate noise parameters. We demonstrated that relative RMS improvements of 10, 20 and 30% reduce the noise amplitudes by about 20, 35 and 45%, respectively, which reduces the velocity uncertainty by 0.3 mm/yr (assuming 7 years of data and flicker noise). The relative decrease of the spectral index kappa is 25, 35 and 45%, which means a velocity uncertainty lower by up to 0.2 mm/yr (assuming 7 years of data and a noise amplitude of 15 mm/yr^(-kappa/4)).
These results respond to the growing demands on the stability of such series due to their use in realizing kinematic reference frames and in geodynamical studies.
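The weighted stacking step described above can be sketched as follows. This is an illustrative reading of the RMS-weighted average: the 1/RMS² weighting, array shapes, and function names are our assumptions, not taken from the record.

```python
import numpy as np

def weighted_stack_cme(residuals, rms):
    """Epoch-by-epoch common mode error as a weighted mean across stations.
    residuals: (epochs, stations) detrended coordinate residuals
    rms: (epochs, stations) per-epoch position RMS, used as weights 1/rms**2."""
    w = 1.0 / rms**2
    return np.nansum(w * residuals, axis=1) / np.nansum(w, axis=1)

def filter_series(residuals, rms):
    """Subtract the stacked CME from every station's residual series."""
    cme = weighted_stack_cme(residuals, rms)
    return residuals - cme[:, None]
```

The distance-weighted variant proposed in the study would additionally scale each station's weight by its distance from the network barycentre.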
Stacking with stochastic cooling
NASA Astrophysics Data System (ADS)
Caspers, Fritz; Möhl, Dieter
2004-10-01
Accumulation of large stacks of antiprotons or ions with the aid of stochastic cooling is more delicate than cooling a constant-intensity beam. Basically, the difficulty stems from the fact that the optimized gain and the cooling rate are inversely proportional to the number of particles 'seen' by the cooling system. Therefore, to maintain fast stacking, the newly injected batch has to be strongly 'protected' from the Schottky noise of the stack. Vice versa, the stack has to be efficiently 'shielded' against the high-gain cooling system for the injected beam. In antiproton accumulators with stacking ratios up to 10^5, the problem is solved by radial separation of the injection and stack orbits in a region of large dispersion. An array of several tapered cooling systems with a matched gain profile provides a continuous particle flux towards the high-density stack core. Shielding of the different systems from each other is obtained both through the spatial separation and via the revolution frequencies (filters). In the 'old AA', where antiproton collection and stacking were done in one single ring, the injected beam was further shielded during cooling by means of a movable shutter. The complexity of these systems is very high. For more modest stacking ratios, one might use azimuthal rather than radial separation of stack and injected beam. Schematically, half of the circumference would be used to accept and cool new beam and the remainder to house the stack. Fast gating is then required between the high-gain cooling of the injected beam and the low-gain stack cooling. RF gymnastics are used to merge the pre-cooled batch with the stack, to re-create free space for the next injection, and to capture the new batch. This scheme is less demanding for the storage ring lattice, but at the expense of some reduction in stacking rate. The talk reviews the 'radial' separation schemes and also gives some consideration to the 'azimuthal' schemes.
Facial expression recognition based on improved local ternary pattern and stacked auto-encoder
NASA Astrophysics Data System (ADS)
Wu, Yao; Qiu, Weigen
2017-08-01
In order to enhance the robustness of facial expression recognition, we propose a method based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). The method extracts features with the improved LTP operator and then uses an improved deep belief network as the detector and classifier on the extracted LTP features, realizing the combination of improved LTP and an improved deep belief network for facial expression recognition. The recognition rate on the CK+ database improves significantly.
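As background on the feature, a standard (non-improved) Local Ternary Pattern codes each neighbor of a pixel as +1, 0, or -1 against a threshold band around the center value and splits the ternary code into two binary patterns. A minimal sketch (our own illustration, not the paper's improved operator):

```python
def ltp_codes(center, neighbors, t):
    """Local Ternary Pattern for one pixel: each neighbor is coded
    +1 / 0 / -1 against the band [center - t, center + t], and the ternary
    code is split into 'upper' and 'lower' binary patterns (one bit per
    neighbor, neighbor i contributing bit i)."""
    upper = lower = 0
    for i, p in enumerate(neighbors):
        if p >= center + t:
            upper |= 1 << i   # neighbor clearly brighter than the band
        elif p <= center - t:
            lower |= 1 << i   # neighbor clearly darker than the band
    return upper, lower
```

The tolerance band is what makes LTP more robust to noise than the binary Local Binary Pattern, since small intensity fluctuations around the center fall into the 0 state.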
Finding Malicious Cyber Discussions in Social Media
2015-12-11
automatically filter cyber discussions from Stack Exchange, Reddit, and Twitter posts written in English. Criminal hackers often use social media...monitoring hackers on Facebook and in private chat rooms. As a result, system administrators were prepared to counter distributed denial-of-service
Automatic target recognition and detection in infrared imagery under cluttered background
NASA Astrophysics Data System (ADS)
Gundogdu, Erhan; Koç, Aykut; Alatan, A. Aydın.
2017-10-01
Visual object classification has long been studied in the visible spectrum using conventional cameras. Since labeled images have recently increased in number, it is possible to train deep Convolutional Neural Networks (CNNs) with significant numbers of parameters. As infrared (IR) sensor technology has improved over the last two decades, labeled images extracted from IR sensors have started to be used for object detection and recognition tasks. We address the problem of infrared object recognition and detection by exploiting 15K real-field images from long-wave and mid-wave IR sensors. For feature learning, a stacked denoising autoencoder is trained on this IR dataset. To recognize the objects, the trained stacked denoising autoencoder is fine-tuned according to the binary classification loss of the target object. Once the training is completed, the test samples are propagated through the network, and the probability of a test sample belonging to a class is computed. Moreover, the trained classifier is utilized in a detect-by-classification method, where classification is performed on a set of candidate object boxes and the maximum confidence score at a particular location is accepted as the score of the detected object. To decrease the computational complexity, the detection step at every frame is avoided by running an efficient correlation-filter-based tracker. The detection step is performed only when the tracker confidence falls below a pre-defined threshold. The experiments conducted on the real-field images demonstrate that the proposed detection and tracking framework presents satisfactory results for detecting tanks under cluttered background.
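The tracker-gated detect-by-classification logic can be sketched as follows; this is a schematic of the control flow only (the classifier, box representation, and function names are our placeholders):

```python
def track_or_detect(frame_boxes, tracker_conf, threshold, classify):
    """Run the cheap correlation-filter tracker unless its confidence drops
    below the threshold; only then score every candidate box with the
    classifier and return the highest-confidence detection."""
    if tracker_conf >= threshold:
        return None  # keep the tracker's current estimate, skip detection
    scores = [(classify(box), box) for box in frame_boxes]
    best_score, best_box = max(scores, key=lambda s: s[0])
    return best_box, best_score
```

Gating the expensive sliding-box classification on tracker confidence is what keeps the per-frame cost low in the framework described above.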
RDTC [Restricted Data Transmission Controller] global variable definitions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grambihler, A.J.; O'Callaghan, P.B.
The purpose of the Restricted Data Transmission Controller (RDTC) is to demonstrate a methodology for transmitting data between computers which have different levels of classification. The RDTC does this by logically filtering the data being transmitted between the two computers. This prototype is set up to filter data from the classified computer so that only numeric data is passed to the unclassified computer. The RDTC allows all data from the unclassified computer to be sent to the classified computer. The classified system is referred to as LUA and the unclassified system is referred to as LUB. 9 tabs.
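The numeric-only outbound filter described for the RDTC prototype can be illustrated with a short sketch. This is our own toy illustration of the concept, not the RDTC implementation; note that a production guard would need to be far stricter (e.g. rejecting tokens like "nan"/"inf" and enforcing field formats).

```python
def filter_outbound(fields):
    """One-way guard from the classified side (LUA) to the unclassified
    side (LUB): pass only tokens that parse as numbers, drop the rest."""
    passed = []
    for token in fields:
        try:
            float(token)          # token must parse as a number
            passed.append(token)
        except ValueError:
            pass                  # non-numeric data is blocked
    return passed
```

Traffic in the reverse direction (LUB to LUA) is unrestricted in the prototype, so no corresponding filter is needed there.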
ISO Guest Observer Data Analysis and LWS Instrument Team Activities
NASA Technical Reports Server (NTRS)
Oliversen, Ronald J. (Technical Monitor); Smith, Howard A.
2003-01-01
We have designed and fabricated infrared filters for use at wavelengths greater than or equal to 15 microns. Unlike conventional dielectric filters used at the short wavelengths, ours are made from stacked metal grids, spaced at a very small fraction of the performance wavelengths. The individual lattice layers are gold, the spacers are polyimide, and they are assembled using integrated circuit processing techniques; they resemble some metallic photonic band-gap structures. We simulate the filter performance accurately, including the coupling of the propagating, near-field electromagnetic modes, using computer aided design codes. We find no anomalous absorption. The geometrical parameters of the grids are easily altered in practice, allowing for the production of tuned filters with predictable useful transmission characteristics. Although developed for astronomical instrumentation, the filters are broadly applicable in systems across infrared and terahertz bands.
NASA Astrophysics Data System (ADS)
Zhang, Xiaoli; Zou, Jie; Le, Daniel X.; Thoma, George
2010-01-01
"Investigator Names" is a newly required field in MEDLINE citations. It consists of personal names listed as members of corporate organizations in an article. Extracting investigator names automatically is necessary because of the increasing volume of articles reporting collaborative biomedical research in which a large number of investigators participate. In this paper, we present an SVM-based stacked sequential learning method in a novel application - recognizing named entities such as the first and last names of investigators from online medical journal articles. Stacked sequential learning is a meta-learning algorithm which can boost any base learner. It exploits contextual information by adding the predicted labels of the surrounding tokens as features. We apply this method to tag words in text paragraphs containing investigator names, and demonstrate that stacked sequential learning improves the performance of a nonsequential base learner such as an SVM classifier.
42 CFR 84.170 - Non-powered air-purifying particulate respirators; description.
Code of Federal Regulations, 2013 CFR
2013-10-01
... inhalation pressure to draw the ambient air through the air-purifying filter elements (filters) to remove... classified into three series, N-, R-, and P-series. The N-series filters are restricted to use in those workplaces free of oil aerosols. The R- and P-series filters are intended for removal of any particulate that...
42 CFR 84.170 - Non-powered air-purifying particulate respirators; description.
Code of Federal Regulations, 2011 CFR
2011-10-01
... inhalation pressure to draw the ambient air through the air-purifying filter elements (filters) to remove... classified into three series, N-, R-, and P-series. The N-series filters are restricted to use in those workplaces free of oil aerosols. The R- and P-series filters are intended for removal of any particulate that...
42 CFR 84.170 - Non-powered air-purifying particulate respirators; description.
Code of Federal Regulations, 2014 CFR
2014-10-01
... inhalation pressure to draw the ambient air through the air-purifying filter elements (filters) to remove... classified into three series, N-, R-, and P-series. The N-series filters are restricted to use in those workplaces free of oil aerosols. The R- and P-series filters are intended for removal of any particulate that...
42 CFR 84.170 - Non-powered air-purifying particulate respirators; description.
Code of Federal Regulations, 2012 CFR
2012-10-01
... inhalation pressure to draw the ambient air through the air-purifying filter elements (filters) to remove... classified into three series, N-, R-, and P-series. The N-series filters are restricted to use in those workplaces free of oil aerosols. The R- and P-series filters are intended for removal of any particulate that...
42 CFR 84.170 - Non-powered air-purifying particulate respirators; description.
Code of Federal Regulations, 2010 CFR
2010-10-01
... inhalation pressure to draw the ambient air through the air-purifying filter elements (filters) to remove... classified into three series, N-, R-, and P-series. The N-series filters are restricted to use in those workplaces free of oil aerosols. The R- and P-series filters are intended for removal of any particulate that...
Blocking Filters with Enhanced Throughput for X-Ray Microcalorimetry
NASA Technical Reports Server (NTRS)
Grove, David; Betcher, Jacob; Hagen, Mark
2012-01-01
New and improved blocking filters (see figure), made of a high-transmission polyimide support mesh, have been developed for microcalorimeters on several mission payloads; they can replace the nickel mesh used in previous blocking-filter flight designs. To realize the resolution and signal sensitivity of today's x-ray microcalorimeters, significant improvements in the blocking filter stack are needed. Using a high-transmission polyimide support mesh, it is possible to improve overall throughput on a typical microcalorimeter, such as Suzaku's X-ray Spectrometer, by 11% compared to previous flight designs. Using polyimide to replace standard metal mesh means the mesh will be transparent to energies of 3 keV and higher. Incorporating polyimide's advantageous strength-to-weight ratio, thermal stability, and transmission characteristics permits thinner filter materials, significantly enhancing throughput. A prototype contamination blocking filter for ASTRO-H has passed QT-level acoustic testing. Resistive traces can also be incorporated to provide decontamination capability to actively restore filter performance in orbit.
Fluoride coatings for vacuum ultraviolet reflection filters.
Guo, Chun; Kong, Mingdong; Lin, Dawei; Li, Bincheng
2015-12-10
LaF3/MgF2 reflection filters with a high spectral-discrimination capacity for the atomic-oxygen lines at 130.4 and 135.6 nm, which are employed in vacuum ultraviolet imagers, were prepared by molybdenum-boat thermal evaporation. The optical properties of the reflection filters were characterized with a high-precision vacuum ultraviolet spectrophotometer. The vulnerability of the filters' microstructures to environmental contamination, and the recovery of the optical properties of stored filter samples by ultraviolet ozone cleaning, were experimentally demonstrated. For reflection filters with the optimized nonquarter-wave multilayer structures, reflectance ratios R135.6 nm/R130.4 nm of 92.7 and 20.6 were achieved for 7° and 45° angles of incidence (AOI), respectively. In contrast, an R135.6 nm/R130.4 nm ratio of only 12.4 was obtained for a reflection filter with a standard π-stack multilayer structure with H/L = 1/4 at 7° AOI.
Adaptive attenuation of aliased ground roll using the shearlet transform
NASA Astrophysics Data System (ADS)
Hosseini, Seyed Abolfazl; Javaherian, Abdolrahim; Hassani, Hossien; Torabi, Siyavash; Sadri, Maryam
2015-01-01
Attenuation of ground roll is an essential step in seismic data processing. Spatial aliasing of the ground roll may cause it to overlap with reflections in the f-k domain. The shearlet transform is a directional and multidimensional transform that separates events with different dips and generates subimages at different scales and directions. In this study, the shearlet transform was used adaptively to attenuate aliased and non-aliased ground roll. After defining a filtering zone, an input shot record is divided into segments, each overlapping its adjacent segments. After applying the shearlet transform to each segment, the subimages containing aliased and non-aliased ground roll, and the locations of these events on each subimage, are selected adaptively. Based on these locations, a mute is applied to the selected subimages. After applying the inverse shearlet transform, the filtered segments are merged together using the Hanning function. This adaptive ground roll attenuation procedure was tested on synthetic data and on field shot records from western Iran. Analysis of the results using f-k spectra revealed that the non-aliased and most of the aliased ground roll were attenuated by the proposed adaptive procedure. We also applied the method to shot records of a 2D land survey, and the data sets before and after ground roll attenuation were stacked and compared. The stacked section after ground roll attenuation contained less linear ground roll noise and more continuous reflections than the stacked section before attenuation. The proposed method has some drawbacks, such as a longer run time than traditional methods such as f-k filtering, and reduced performance when the dip and frequency content of the aliased ground roll are the same as those of the reflections.
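The Hanning-based merging of overlapping filtered segments can be sketched as an overlap-add with taper normalization. This is one plausible reading of the merging step (segment layout, normalization, and names are our assumptions):

```python
import numpy as np

def merge_segments(segments, starts, total_len):
    """Overlap-add filtered segments back into one trace, tapering each
    segment with a Hann window so that overlapping regions blend smoothly;
    the accumulated window weights are divided out at the end."""
    out = np.zeros(total_len)
    wsum = np.zeros(total_len)
    for seg, start in zip(segments, starts):
        w = np.hanning(len(seg))
        out[start:start + len(seg)] += w * seg
        wsum[start:start + len(seg)] += w
    return out / np.maximum(wsum, 1e-12)  # avoid division by zero at edges
```

The taper suppresses the discontinuities that hard segment boundaries would otherwise introduce after per-segment muting in the shearlet domain.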
Use of electrochromic materials in adaptive optics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kammler, Daniel R.; Sweatt, William C.; Verley, Jason C.
Electrochromic (EC) materials are used in 'smart' windows that can be darkened by applying a voltage across an EC stack on the window. The associated change in refractive index (n) in the EC materials might allow their use in tunable or temperature-insensitive Fabry-Perot filters and transmissive spatial light modulators (SLMs). The authors are conducting a preliminary evaluation of these materials in many applications, including target-in-the-loop systems. Data on tungsten oxide, WO3, the workhorse EC material, indicate that it's possible to achieve modest changes in n with only slight increases in absorption between the visible and ~10 µm. This might enable construction of a tunable Fabry-Perot filter consisting of an active EC layer (e.g. WO3) and a proton conductor (e.g. Ta2O5) sandwiched between two gold electrodes. A SLM might be produced by replacing the gold with a transparent conductor (e.g. ITO). This SLM would allow broad-band operation like a micromirror array. Since it's a transmission element, simple optical designs like those in liquid-crystal systems would be possible. Our team has fabricated EC stacks and characterized their switching speed and optical properties (n, k). We plan to study the interplay between process parameters, film properties, and performance characteristics associated with the FP filter and then extend what we learn to SLMs. Our goals are to understand whether the changes in absorption associated with changes in n are acceptable, and whether it's possible to design an EC stack that's fast enough to be interesting. We'll present our preliminary findings regarding the potential viability of EC materials for target-in-the-loop applications.
A broadband permeability measurement of FeTaN lamination stack by the shorted microstrip line method
NASA Astrophysics Data System (ADS)
Chen, Xin; Ma, Yungui; Xu, Feng; Wang, Peng; Ong, C. K.
2009-01-01
In this paper, the microwave characteristics of a FeTaN lamination stack are studied with a shorted microstrip line method. The FeTaN lamination stack was fabricated by gluing 54 layers of FeTaN units together with epoxy. The FeTaN units were deposited by rf magnetron sputtering on both sides of an 8 μm polyethylene terephthalate (Mylar) film serving as the substrate. On each side of the Mylar substrate, three 100-nm FeTaN layers are laminated with two 8 nm Al2O3 layers. The complex permeability of the FeTaN lamination stack is calculated from the scattering parameters using the shorted-load transmission line model based on the quasi-transverse-electromagnetic approximation. A full wave analysis combined with an optimization process is employed to determine accurate effective permeability values. The optimized complex permeability data can be used for microwave filter design.
NASA Astrophysics Data System (ADS)
Seo, Hokuto; Aihara, Satoshi; Watabe, Toshihisa; Ohtake, Hiroshi; Sakai, Toshikatsu; Kubota, Misao; Egami, Norifumi; Hiramatsu, Takahiro; Matsuda, Tokiyoshi; Furuta, Mamoru; Hirao, Takashi
2011-02-01
A color image was produced by a vertically stacked image sensor with blue (B)-, green (G)-, and red (R)-sensitive organic photoconductive films, each having a thin-film transistor (TFT) array that uses a zinc oxide (ZnO) channel to read out the signal generated in each organic film. The number of pixels of the fabricated image sensor is 128×96 for each color, and the pixel size is 100×100 µm². The current on/off ratio of the ZnO TFT is over 10⁶, and the B-, G-, and R-sensitive organic photoconductive films show excellent wavelength selectivity. The stacked image sensor can produce a color image at 10 frames per second with a resolution corresponding to the pixel number. This result clearly shows that color separation is achieved without using any conventional color separation optical system such as a color filter array or a prism.
Emissions of polycyclic aromatic hydrocarbons from batch hot mix asphalt plants.
Lee, Wen-Jhy; Chao, Wen-Hui; Shih, Minliang; Tsai, Cheng-Hsien; Chen, Thomas Jeng-Ho; Tsai, Perng-Jy
2004-10-15
This study set out to assess the characteristics of polycyclic aromatic hydrocarbon (PAH) emissions from batch hot mix asphalt (HMA) plants and the PAH removal efficiencies associated with their installed air pollution control devices. Field samplings were conducted on six randomly selected batch HMA plants. For each selected plant, stack flue gas samples were collected from the stacks of both the batch mixer (n = 5) and the preheating boiler (n = 5). PAH samples were also collected in the field to assess PAHs emitted directly from the discharging chute (n = 3). To assess the PAH removal efficiencies of the installed air pollution control devices, the PAH contents in both cyclone fly ash (n = 3) and bag filter fly ash (n = 3) were analyzed. Results show that the total PAH concentration (mean; RSD) in the stack flue gas of the batch mixer (354 µg/Nm³; 78.5%) was higher than that emitted from the discharging chute (107 µg/Nm³; 70.1%) and that in the stack flue gas of the preheating boiler (83.7 µg/Nm³; 77.6%). However, the total BaPeq concentration emitted from the discharging chute (0.950 µg/Nm³; 84.4%) was higher than that in the stack flue gas of the batch mixer (0.629 µg/Nm³; 86.8%) and of the preheating boiler (0.112 µg/Nm³; 80.3%). The mean total PAH emission factor for all selected batch mix plants (139 mg/ton of product) was much higher than that reported by the U.S. EPA for drum mix asphalt plants (range 11.8-79.0 mg/ton of product). We found the overall removal efficiencies of the installed air pollution control devices (i.e., cyclone + bag filter) for total PAHs and total BaPeq to be 22.1% and 93.7%, respectively. This implies that the installed air pollution control devices, although they have a very limited effect on the removal of total PAHs, do significantly reduce the carcinogenic potencies associated with PAH emissions from batch HMA plants.
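The removal-efficiency arithmetic used above can be made explicit with a short sketch; this is generic illustration of how series-connected devices combine, not the paper's measurement protocol.

```python
def removal_efficiency(inlet, outlet):
    """Fractional removal across one control device: 1 - outlet/inlet."""
    return 1.0 - outlet / inlet

def overall_efficiency(stages):
    """Combined removal of devices in series (e.g. cyclone followed by a
    bag filter), each stage given as a removal fraction: the material
    remaining is the product of each stage's pass-through fraction."""
    remaining = 1.0
    for eff in stages:
        remaining *= (1.0 - eff)
    return 1.0 - remaining
```

For example, two stages that each remove half of the incoming load remove 75% overall, which is why the cyclone-plus-bag-filter combination is characterized as a single overall efficiency in the study.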
Pre-stack depth Migration imaging of the Hellenic Subduction Zone
NASA Astrophysics Data System (ADS)
Hussni, S. G.; Becel, A.; Schenini, L.; Laigle, M.; Dessa, J. X.; Galve, A.; Vitard, C.
2017-12-01
In 365 AD, a major M>8 tsunamigenic earthquake occurred along the southwestern segment of the Hellenic subduction zone. Although this is the largest seismic event ever reported in Europe, some fundamental questions remain regarding the deep geometry of the interplate megathrust, as well as other faults within the overriding plate potentially connected to it. The main objective here is to image those deep structures, whose depths range between 15 and 45 km, using leading-edge seismic reflection equipment. To this end, a 210-km-long multichannel seismic profile was acquired with the 8-km-long streamer and the 6600 cu. in. source of R/V Marcus Langseth. This was realized at the end of 2015, during the SISMED cruise. The survey was made possible through a collective effort gathering several labs (Géoazur, LDEO, ISTEP, ENS-Paris, EOST, LDO, Dpt. Geosciences of Pau Univ). A preliminary processing sequence was first applied using the Geovation software of CGG, which yielded a post-stack time migration of the collected data, as well as a pre-stack time migration obtained with a model derived from velocity analyses. Using Paradigm software, a pre-stack depth migration was subsequently carried out. This step required some tuning of the pre-processing sequence in order to improve multiple removal and noise suppression and to better reveal the true geometry of reflectors in depth. This iteration of pre-processing included the use of a parabolic Radon transform, FK filtering, and time-variant band-pass filtering. An initial velocity model was built using depth-converted RMS velocities obtained from SISMED data for the sedimentary layer, complemented at depth with a smooth version of the tomographic velocities derived from coincident wide-angle data acquired during the 2012 ULYSSE survey. Then, we performed a Kirchhoff pre-stack depth migration with traveltimes calculated using the Eikonal equation.
The velocity model was then tuned through residual velocity analyses to flatten reflections in common reflection point gathers. These new results improve the imaging of deep reflectors and even reveal some new features. We will present this work, a comparison with our previously obtained post-stack time migration, and some insights into the new geological structures revealed by the depth imaging.
On Orbit ISS Oxygen Generation System Operation Status
NASA Technical Reports Server (NTRS)
Diderich, Greg S.; Polis, Pete; VanKeuren, Steven P.; Erickson, Robert; Mason, Richard
2011-01-01
The International Space Station (ISS) United States Orbital Segment (USOS) Oxygen Generation System (OGS) has accumulated almost a year of operation at varied oxygen production rates within the US Laboratory Module (LAB) since it was first activated in July 2007. It was operated intermittently through 2009 and 2010 due to filter clogging and acid accumulation in the recirculation loop. Since the installation of a deionizing bed in the recirculation loop in May 2011, the OGA has been operated continuously. Filters in the recirculation loop have clogged and have been replaced. Hydrogen sensors have drifted apart, and a power failure may have condensed water on a hydrogen sensor. A pump delta-pressure sensor failed, and a replacement new spare pump failed to start. Finally, the voltage across the cell stack increased out of tolerance due to cation contamination, and the cell stack was replaced. This paper discusses the operating experience and characteristics of the OGS, as well as operational issues and their resolution.
Nondestructive assay of EBR-II blanket elements using resonance transmission analysis
NASA Astrophysics Data System (ADS)
Klann, Raymond Todd
1998-10-01
Resonance transmission analysis utilizing a filtered reactor beam was examined as a means of determining the 239Pu content in Experimental Breeder Reactor-II (EBR-II) depleted uranium blanket elements. The technique uses cadmium and gadolinium filters along with a 239Pu fission chamber to isolate the 0.3 eV resonance in 239Pu. In the energy range of this resonance (0.1 eV to 0.5 eV), the total microscopic cross-section of 239Pu is significantly greater than the cross-sections of 238U and 235U. This large difference allows small changes in the 239Pu content of a sample to produce large changes in the mass signal response. Tests with small stacks of depleted uranium and 239Pu foils indicate a significant change in response based on the 239Pu content of the foil stack. In addition, the tests indicate good agreement between the measured and predicted values of 239Pu content up to approximately two weight percent.
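The sensitivity described above follows from the exponential attenuation law: transmission through a foil goes as exp(-Nσ), so a much larger σ for 239Pu near the 0.3 eV resonance makes the signal respond strongly to small 239Pu mass changes. A minimal sketch (units and function names are our assumptions, not the thesis's code):

```python
import math

def transmission(n_areal, sigma_barns):
    """Beam transmission through one foil: exp(-N*sigma), where N is the
    areal atom density in atoms/barn and sigma the total cross-section in
    barns, so the product N*sigma is dimensionless."""
    return math.exp(-n_areal * sigma_barns)

def stack_transmission(layers):
    """Total transmission of a foil stack: the product of the layers'
    individual transmissions (attenuation exponents add)."""
    total = 1.0
    for n_areal, sigma in layers:
        total *= transmission(n_areal, sigma)
    return total
```

Because the exponents add across the stack, doubling the 239Pu content in the filtered resonance window roughly squares the plutonium contribution to the transmission, which is what makes the measured signal so sensitive at low weight percent.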
An application of LOTEM around salt dome near Houston, Texas
NASA Astrophysics Data System (ADS)
Paembonan, Andri Yadi; Arjwech, Rungroj; Davydycheva, Sofia; Smirnov, Maxim; Strack, Kurt M.
2017-07-01
A salt dome is an important large geologic structure for hydrocarbon exploration; it may seal porous rocks that form petroleum reservoirs. Several techniques, such as seismic, gravity, and electromagnetics, including magnetotellurics, have successfully yielded salt dome interpretations. Seismic surveying has difficulty seeing through salt because the seismic energy gets trapped by the salt's high velocity. Gravity and electromagnetics are more suitable methods. Long Offset Transient Electromagnetic (LOTEM) and Focused Source Electromagnetic (FSEM) methods were tested over a salt dome near Houston, Texas. LOTEM data were recorded at several stations with varying offsets, and FSEM tests were also made at some receiver locations near a suspected salt overhang. The data were processed using KMS's processing software: first, quality assurance, including calibration and header checking; then merging of transmitter and receiver data and separation of the microseismic data; and finally, data analysis and processing. LOTEM processing leads to inversion or, in the FSEM case, 3D modeling. Various 3D models verify the sensitivity under the salt dome. In addition, the processing was conducted pre-stack, at stack, and post-stack. After pre-stacking, the noise was reduced, but the data showed ringing due to a low-pass filter. Applying a recursive average during stacking and post-stacking reduced the Gibbs effect and produced smooth data.
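One common reading of "recursive average" is an exponential (first-order recursive) moving average; the abstract does not define the filter precisely, so the following sketch and its smoothing constant are assumptions of ours.

```python
def recursive_average(samples, alpha=0.1):
    """Exponential (recursive) moving average: each output blends the new
    sample with the previous average, damping the high-frequency ringing
    (Gibbs oscillations) left by a sharp low-pass filter."""
    out = []
    avg = samples[0]  # seed with the first sample to avoid start-up bias
    for x in samples:
        avg = alpha * x + (1.0 - alpha) * avg
        out.append(avg)
    return out
```

Smaller alpha gives stronger smoothing at the cost of slower response to genuine transient decay in the stacked data.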
Dadisman, Shawn V.; Ryan, Holly F.; Mann, Dennis M.
1987-01-01
During 1984, over 2300 km of multichannel seismic-reflection data were recorded by the U.S. Geological Survey in the western Ross Sea and Iselin Bank regions. A temporary loss and sinking of the streamer led to increasing the streamer tow depth to 20 m, which resulted in some attenuation of frequencies in the 30-50 Hz range but no significant difference in resolution of the stacked data. Severe water bottom multiples were encountered and removed by dip-filtering, weighted stacking, and severe post-NMO muting.
NASA Astrophysics Data System (ADS)
Baratin, Laura-May; Chamberlain, Calum J.; Townend, John; Savage, Martha K.
2017-04-01
Characterising the seismicity associated with slow deformation in the vicinity of the Alpine Fault may provide constraints on the state of stress of this major transpressive margin prior to a large (≥M8) earthquake. Here, we use recently detected tectonic tremor and low-frequency earthquakes (LFEs) to examine how slow tectonic deformation is loading the Alpine Fault toward an anticipated large rupture. We initially work with a continuous seismic dataset collected between 2009 and 2012 from an array of short-period seismometers, the Southern Alps Microearthquake Borehole Array. Fourteen primary LFE templates, found through visual inspection within previously identified tectonic tremor, are used in an iterative matched-filter and stacking routine. This method allows the detection of similar signals and establishes LFE families with common locations. We thus generate a 36-month catalogue of 10,718 LFEs. The detections are then combined for each LFE family using phase-weighted stacking to yield a signal with the highest possible signal-to-noise ratio. We found phase-weighted stacking to be successful in increasing the number of LFE detections by roughly 20%. Phase-weighted stacking also provides cleaner, apparently impulsive phase arrivals, allowing more precise phase picks. We then compute non-linear earthquake locations using a 3D velocity model and find LFEs to occur below the seismogenic zone at depths of 18-34 km, locating on or near the proposed deep extent of the Alpine Fault. To gain insight into deep fault slip behaviour, a detailed study of the spatial-temporal evolution of LFEs is required. We thus generate a more extensive catalogue of LFEs spanning the years 2009 to 2016 using a different technique to detect LFEs more efficiently. This time 638 synthetic waveforms are used as primary templates in the matched-filter routine. Of those, 38 templates yield no detections over our 7-yr study period.
The remaining 600 templates each detect between 370 and 730 events, totalling ~310,000 detections. We then focus on keeping only the detections that robustly stack (i.e., representing real LFEs) for each synthetic template, thereby creating new LFE templates. From there, we rerun the matched-filter routine with our new LFE templates. Finally, each LFE template and its subsequent detections form an LFE family, itself associated with a single source. Initial testing shows that this technique, paired with phase-weighted stacking, increases the number of LFE families and overall detected events roughly thirtyfold. Our next step is to study in detail the spatial and temporal activity of our LFEs. This new catalogue should provide new insight into the deep central Alpine Fault structure and its slip behaviour.
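Phase-weighted stacking, used throughout the routine above, scales a linear stack by the coherence of instantaneous phases across traces, so incoherent noise is suppressed while consistent LFE arrivals survive. A minimal sketch, assuming NumPy/SciPy and pre-aligned traces (the `power` exponent follows the common Schimmel-and-Paulssen formulation, not necessarily the authors' exact implementation):

```python
import numpy as np
from scipy.signal import hilbert

def phase_weighted_stack(traces, power=2):
    """Phase-weighted stack of time-aligned traces (n_traces, n_samples)."""
    analytic = hilbert(traces, axis=1)
    phasors = analytic / np.abs(analytic)          # unit phasors per sample
    # coherence is 1 where phases agree across traces, near 0 where random
    coherence = np.abs(phasors.mean(axis=0)) ** power
    return traces.mean(axis=0) * coherence         # down-weighted linear stack
```

Because the coherence factor lies in [0, 1], the phase-weighted stack can never exceed the linear stack in amplitude; it only attenuates incoherent samples.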
Search for Long Period Solar Normal Modes in Ambient Seismic Noise
NASA Astrophysics Data System (ADS)
Caton, R.; Pavlis, G. L.
2016-12-01
We search for evidence of solar free oscillations (normal modes) in long period seismic data through multitaper spectral analysis of array stacks. This analysis is similar to that of Thomson & Vernon (2015), who used data from the quietest single stations of the global seismic network. Our approach is to use stacks of large arrays of noisier stations to reduce noise. Arrays have the added advantage of permitting the use of nonparametric statistics (jackknife errors) to provide objective error estimates. We used data from the Transportable Array, the broadband borehole array at Pinyon Flat, and the 3D broadband array in Homestake Mine in Lead, SD. The Homestake Mine array has 15 STS-2 sensors deployed in the mine that are extremely quiet at long periods due to stable temperatures and stable piers anchored to hard rock. The length of time series used ranged from 50 days to 85 days. We processed the data by low-pass filtering with a corner frequency of 10 mHz, followed by an autoregressive prewhitening filter and a median stack. We elected to use the median instead of the mean in order to get a more robust stack. We then used G. Prieto's mtspec library to compute multitaper spectrum estimates on the data. We produce delete-one jackknife estimates of the uncertainty at each frequency by computing median stacks of all data with one station removed. The results from the TA data show tentative evidence for several lines between 290 μHz and 400 μHz, including a recurring line near 379 μHz. This 379 μHz line is near the Earth mode 0T2 and the solar mode 5g5, suggesting that 5g5 could be coupling into the Earth mode. Current results suggest more statistically significant lines may be present in Pinyon Flat data, but additional processing of the data is underway to confirm this observation.
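The median stack with delete-one jackknife errors described above can be sketched as follows (NumPy assumed; this illustrates only the stacking statistics, not the multitaper spectral estimation):

```python
import numpy as np

def jackknife_median_stack(spectra):
    """Median stack across stations plus delete-one jackknife errors.
    spectra: (n_stations, n_freqs) array of per-station spectra."""
    n = spectra.shape[0]
    full = np.median(spectra, axis=0)
    # delete-one replicates: median with each station left out in turn
    reps = np.array([np.median(np.delete(spectra, i, axis=0), axis=0)
                     for i in range(n)])
    # standard jackknife variance formula across the replicates
    var = (n - 1) / n * ((reps - reps.mean(axis=0)) ** 2).sum(axis=0)
    return full, np.sqrt(var)
```

A spectral line is then judged significant only if it stands above the jackknife error band at that frequency.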
Far infrared filters for the Galileo-Jupiter and other missions
NASA Technical Reports Server (NTRS)
Seeley, J. S.; Hunneman, R.; Whatley, A.
1981-01-01
Progress in the development of FIR multilayer interference filters for the net flux radiometer and photopolarizing radiometer to be carried on board the Galileo mission to Jupiter is reported. The multilayer interference technique has been extended to the region above 40 microns by the use of PbTe/II-VI materials in hard-coated combination, with the thickest layers composed of CdSe QWOT at 74 microns and PbTe QWOT. Improvements have also been obtained in filters below 20 microns on the basis of the Chebyshev stack design. A composite filter cutting on steeply at 40 microns has been designed which employs a thin crystal quartz substrate, shorter wavelength absorption in ZnS and As2S3 thin films, and supplementary multilayer interference. Finally, absorptive filters have been developed based on II-VI compounds in multilayer combination with KRS-5 (or 6) on a KRS-5 (or 6) substrate.
CHAMP - Camera, Handlens, and Microscope Probe
NASA Technical Reports Server (NTRS)
Mungas, G. S.; Beegle, L. W.; Boynton, J.; Sepulveda, C. A.; Balzer, M. A.; Sobel, H. R.; Fisher, T. A.; Deans, M.; Lee, P.
2005-01-01
CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution, from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As an arm-mounted imager, CHAMP supports stereo-imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision range-finding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. The current design has a four-position filter wheel, so that color and black-and-white images can be obtained over the entire field of view; future designs will increase the number of filter positions to eight. Finally, CHAMP incorporates controlled white and UV illumination so that images can be obtained regardless of sun position, and any potentially fluorescent species can be identified so that the most astrobiologically interesting samples can be selected.
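Z-stacking combines a focus series by keeping, for each pixel, the slice that is locally sharpest; the gradient-magnitude sharpness measure in the NumPy sketch below is a common illustrative choice, not necessarily CHAMP's algorithm:

```python
import numpy as np

def focus_stack(slices):
    """Combine a focus series (list of 2D grayscale arrays) into one image
    by choosing, per pixel, the slice with the largest gradient magnitude."""
    stack = np.stack([np.asarray(s, dtype=float) for s in slices])
    # per-slice sharpness map: local gradient magnitude
    sharp = np.stack([np.hypot(*np.gradient(s)) for s in stack])
    best = np.argmax(sharp, axis=0)        # (h, w) index of sharpest slice
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Real focus-stacking pipelines typically smooth the sharpness maps and blend across slice boundaries to avoid seams; this sketch shows only the per-pixel selection idea.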
Radiation Hard Bandpass Filters for Mid- to Far-IR Planetary Instruments
NASA Technical Reports Server (NTRS)
Brown, Ari D.; Aslam, Shahid; Chervenack, James A.; Huang, Wei-Chung; Merrell, Willie C.; Quijada, Manuel; Steptoe-Jackson, Rosalind; Wollack, Edward J.
2012-01-01
We present a novel method to fabricate compact metal mesh bandpass filters for use in mid- to far-infrared planetary instruments operating in the 20-600 micron wavelength spectral regime. Our target applications include thermal mapping instruments on ESA's JUICE as well as on a de-scoped JEO. These filters are novel because they are compact, customizable, free-standing copper mesh resonant bandpass filters with micromachined silicon support frames. The filters are well suited for thermal mapping missions to the outer planets and their moons because the filter material is radiation hard. Furthermore, the silicon support frame allows for effective hybridization with sensors made on silicon substrates. Using a Fourier Transform Spectrometer, we have demonstrated high transmittance within the passband as well as good out-of-band rejection [1]. In addition, we have developed a unique method of filter stacking in order to increase the bandwidth and sharpen the roll-off of the filters. This method allows one to reliably control the spacing between filters to within 2 microns. Furthermore, our method allows for reliable control over the relative position and orientation between the shared faces of the filters.
Stacking metal nano-patterns and fabrication of moth-eye structure
NASA Astrophysics Data System (ADS)
Taniguchi, Jun
2018-01-01
Nanoimprint lithography (NIL) can be used as a tool for three-dimensional nanoscale fabrication. In particular, complex metal pattern structures in polymer materials are in demand as plasmonic devices and metamaterials. To fabricate a metallic color filter, we used silver ink and NIL techniques. The metallic color filter is composed of stacked nanoscale silver disc patterns and polymer layers, so controlling the polymer layer thickness is necessary. To control the thickness of the polymer layer, we used spin-coating of a UV-curable polymer and NIL. As a result, ten stacked layers with 1000 nm layer thickness were obtained and a red color was observed. Ultraviolet nanoimprint lithography (UV-NIL) is the most effective technique for mass fabrication of antireflection structure (ARS) films. For the use of ARS films in mobile phones and tablet PCs, which are touch-screen devices, it is important to protect the films from fingerprints and dust. In addition, because the nanoscale ARS that is touched by the hand is fragile, high abrasion resistance is very important. To solve these problems, a UV-curable epoxy resin has been developed that exhibits antifouling properties and high hardness. The high-abrasion-resistance ARS films withstand a load of 250 g/cm2 in the steel wool scratch test, and their reflectance is less than 0.4%.
Waste separation: Does it influence municipal waste combustor emissions?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chandler, A.J.; Rigo, H.G.
1996-09-01
It has been suggested that MSW incinerator emissions show significant variations because of the heterogeneous nature of the waste fed to the furnace. This argument has even been used to propose banning certain materials from incinerators. However, data previously reported by the authors suggest that a large portion of the trace metals come from natural sources. Furthermore, full scale incinerator spiking experiments suggest that certain forms of trace metals have minimal effects on stack emissions. Similar studies with chlorinated plastics have failed to identify a significant effect on incinerator dioxin emissions. The implication of segregating the lawn and garden waste and other fines from the furnace feed is explored using data from a 400 tpd mass burn facility equipped with a conditioning tower, dry reactor and fabric filter air pollution control system (APCS) preceded by an NRT separation system. The stack emissions have been tested periodically since commissioning to characterize emissions for various seasons using both processed fuel and raw MSW. Front end processing to remove selected portions of the waste stream based upon size or physical properties, i.e. fines, grass, or ferrous materials, did not result in a statistically significant difference in stack emissions. System operating regimes, in particular those that affect the effective air-to-cloth ratio in the fabric filter, appear to be the principal influence on emission levels.
Rine, J.M.; Shafer, J.M.; Covington, E.; Berg, R.C.
2006-01-01
Published information on the correlation and field-testing of the technique of stack-unit/aquifer sensitivity mapping with documented subsurface contaminant plumes is rare. The inherent characteristic of stack-unit mapping, which makes it a superior technique to other analyses that amalgamate data, is the ability to deconstruct the sensitivity analysis on a unit-by-unit basis. An aquifer sensitivity map, delineating the relative sensitivity of the Crouch Branch aquifer of the Administrative/Manufacturing Area (A/M) at the Savannah River Site (SRS) in South Carolina, USA, incorporates six hydrostratigraphic units, surface soil units, and relevant hydrologic data. When this sensitivity map is compared with the distribution of the contaminant tetrachloroethylene (PCE), PCE is present within the Crouch Branch aquifer within an area classified as highly sensitive, even though the PCE was primarily released on the ground surface within areas classified with low aquifer sensitivity. This phenomenon is explained through analysis of the aquifer sensitivity map, the groundwater potentiometric surface maps, and the plume distributions within the area on a unit-by-unit basis. The results of this correlation show how the paths of the PCE plume are influenced by both the geology and the groundwater flow. © Springer-Verlag 2006.
Deferred discrimination algorithm (nibbling) for target filter management
NASA Astrophysics Data System (ADS)
Caulfield, H. John; Johnson, John L.
1999-07-01
A new method of classifying objects is presented. Rather than trying to form the classifier in one step or in one training algorithm, it is done in a series of small steps, or nibbles. This leads to an efficient and versatile system that is trained in series with single one-shot examples but applied in parallel, is implemented with single layer perceptrons, yet maintains its fully sequential hierarchical structure. Based on the nibbling algorithm, a basic new method of target reference filter management is described.
Air gap resonant tunneling bandpass filter and polarizer.
Melnyk, A; Bitarafan, M H; Allen, T W; DeCorby, R G
2016-04-15
We describe a bandpass filter based on resonant tunneling through an air layer in the frustrated total internal reflection regime, and show that the concept of induced transmission can be applied to the design of thin film matching stacks. Experimental results are reported for Si/SiO2-based devices exhibiting a polarization-dependent passband, with bandwidth on the order of 10 nm in the 1550 nm wavelength range, peak transmittance on the order of 80%, and optical density greater than 5 over most of the near infrared region.
On-board multicarrier demodulator for mobile applications using DSP implementation
NASA Astrophysics Data System (ADS)
Yim, W. H.; Kwan, C. C. D.; Coakley, F. P.; Evans, B. G.
1990-11-01
This paper describes the design and implementation of an on-board multicarrier demodulator using commercial digital signal processors, for use in a mobile satellite communication system employing an up-link SCPC/FDMA scheme. Channels are separated by a flexible multistage digital filter bank followed by a channel-multiplexed digital demodulator array. The cross/dot-product design approach to the error detector leads to a new QPSK frequency-control algorithm that allows fast acquisition without a special preamble pattern. Timing correction is performed digitally using an extended stack of polyphase sub-filters.
Understanding of the naive Bayes classifier in spam filtering
NASA Astrophysics Data System (ADS)
Wei, Qijia
2018-05-01
Along with the development of the Internet, the information stream is experiencing an unprecedented burst. The methods of information transmission become more and more important, and how people receive effective information is a hot topic in both research and industry. As one of the most common methods of information communication, email has its own advantages. However, spam always floods the inbox, and automatic filtering is needed. This paper discusses this issue from the perspective of the Naive Bayes Classifier, one of the applications of Bayes' Theorem. The concepts and process of the Naive Bayes Classifier are introduced, followed by two examples. A discussion of its relation to machine learning is made in the last section. The Naive Bayes Classifier has proved to be surprisingly effective, despite its limiting assumption of independence among attributes, which are usually email words or phrases.
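The classifier discussed above scores a message by combining a class prior with per-word likelihoods under the independence assumption. A minimal sketch with Laplace smoothing and log probabilities to avoid underflow (the toy corpus below is purely illustrative):

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (tokens, label) pairs, label in {"spam", "ham"}."""
    counts = {"spam": Counter(), "ham": Counter()}
    priors = Counter()
    for tokens, label in docs:
        counts[label].update(tokens)
        priors[label] += 1
    vocab = set(counts["spam"]) | set(counts["ham"])
    return counts, priors, vocab

def classify(tokens, counts, priors, vocab):
    scores = {}
    for label in ("spam", "ham"):
        total = sum(counts[label].values())
        score = math.log(priors[label] / sum(priors.values()))  # log prior
        for w in tokens:              # log likelihood with Laplace smoothing
            score += math.log((counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)
```

The smoothing term `+1` keeps unseen words from driving a log likelihood to negative infinity, which is what makes the independence model usable on real mail.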
DOT National Transportation Integrated Search
2013-08-01
As a means of controlling mercury (Hg) stack emissions at cement kiln operations, some facilities have proposed or have instituted a practice known as dust shuttling, where baghouse filter dust (BFD) is routed to be blended with the final cement prod...
Foo, Brian; van der Schaar, Mihaela
2010-11-01
In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream and thus, the classification performance achieved by an ensemble of classifiers, as well as the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.
Bai, Ou; Lin, Peter; Vorbach, Sherry; Li, Jiang; Furlani, Steve; Hallett, Mark
2007-12-01
To explore effective combinations of computational methods for the prediction of movement intention preceding the production of self-paced right and left hand movements from single trial scalp electroencephalogram (EEG). Twelve naïve subjects performed self-paced movements consisting of three key strokes with either hand. EEG was recorded from 128 channels. The exploration was performed offline on single trial EEG data. We proposed that a successful computational procedure for classification would consist of spatial filtering, temporal filtering, feature selection, and pattern classification. A systematic investigation was performed with combinations of spatial filtering using principal component analysis (PCA), independent component analysis (ICA), common spatial patterns analysis (CSP), and surface Laplacian derivation (SLD); temporal filtering using power spectral density estimation (PSD) and discrete wavelet transform (DWT); pattern classification using linear Mahalanobis distance classifier (LMD), quadratic Mahalanobis distance classifier (QMD), Bayesian classifier (BSC), multi-layer perceptron neural network (MLP), probabilistic neural network (PNN), and support vector machine (SVM). A robust multivariate feature selection strategy using a genetic algorithm was employed. The combinations of spatial filtering using ICA and SLD, temporal filtering using PSD and DWT, and classification methods using LMD, QMD, BSC and SVM provided higher performance than those of other combinations. Utilizing one of the better combinations of ICA, PSD and SVM, the discrimination accuracy was as high as 75%. Further feature analysis showed that beta band EEG activity of the channels over right sensorimotor cortex was most appropriate for discrimination of right and left hand movement intention. Effective combinations of computational methods provide possible classification of human movement intention from single trial EEG. 
Such a method could be the basis for a potential brain-computer interface based on human natural movement, which might reduce the requirement of long-term training. Effective combinations of computational methods can classify human movement intention from single trial EEG with reasonable accuracy.
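One of the better-performing combinations above (spatial filtering, spectral features, then a Mahalanobis-distance classifier) can be sketched as follows; the PCA spatial filter, crude log band power feature, and pooled-covariance linear Mahalanobis classifier are illustrative stand-ins for the study's tuned components, not its exact parameters:

```python
import numpy as np

def pca_spatial_filter(X, k):
    """X: (trials, channels, samples). Project channels onto top-k PCs."""
    flat = X.transpose(1, 0, 2).reshape(X.shape[1], -1)
    flat = flat - flat.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(flat, full_matrices=False)
    return np.einsum('ck,tcs->tks', U[:, :k], X)

def psd_features(X):
    """Mean log power per component: a crude PSD-style feature."""
    P = np.abs(np.fft.rfft(X, axis=-1)) ** 2
    return np.log(P.mean(axis=-1) + 1e-12)        # (trials, k)

class LinearMahalanobis:
    """Linear Mahalanobis distance classifier with pooled covariance."""
    def fit(self, F, y):
        self.classes = np.unique(y)
        self.means = np.array([F[y == c].mean(axis=0) for c in self.classes])
        diffs = np.concatenate([F[y == c] - F[y == c].mean(axis=0)
                                for c in self.classes])
        self.icov = np.linalg.pinv(np.cov(diffs.T))
        return self
    def predict(self, F):
        d = [np.einsum('ij,jk,ik->i', F - m, self.icov, F - m)
             for m in self.means]                 # squared distance per class
        return self.classes[np.argmin(d, axis=0)]
```

Because both class distances share one pooled covariance, the decision boundary is linear in the features, which is what "linear" means in the LMD of the abstract.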
Stack of Layers at 'Payson' in Meridiani Planum
NASA Technical Reports Server (NTRS)
2006-01-01
The stack of fine layers exposed at a ledge called 'Payson' on the western edge of 'Erebus Crater' in Mars' Meridiani Planum shows a diverse range of primary and secondary sedimentary textures formed billions of years ago. These structures likely result from an interplay between windblown and water-involved processes. The panoramic camera (Pancam) on NASA's Mars Exploration Rover Opportunity acquired the exposures for this image on the rover's 749th Martian day (March 3, 2006). This view is an approximately true-color rendering mathematically generated from separate images taken through all of the left Pancam's 432-nanometer to 753-nanometer filters.
NASA Astrophysics Data System (ADS)
da Silva, Flávio Altinier Maximiano; Pedrini, Helio
2015-03-01
Facial expressions are an important demonstration of human moods and emotions. Algorithms capable of recognizing facial expressions and associating them with emotions were developed and employed to compare the expressions that different cultural groups use to show their emotions. Static pictures of predominantly occidental and oriental subjects from public datasets were used to train machine learning algorithms, whereas local binary patterns, histograms of oriented gradients (HOGs), and Gabor filters were employed to describe the facial expressions for six different basic emotions. The most consistent combination, formed by the association of the HOG descriptor and support vector machines, was then used to classify the other cultural group: there was a strong drop in accuracy, meaning that the subtle differences in the facial expressions of each culture affected the classifier performance. Finally, a classifier was trained with images from both occidental and oriental subjects and its accuracy was higher on multicultural data, evidencing the need for a multicultural training set to build an efficient classifier.
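The HOG descriptor used in the winning combination reduces each image cell to a histogram of gradient orientations weighted by gradient magnitude. A minimal NumPy sketch, without the block normalization that full HOG implementations add:

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Minimal HOG: per-cell histograms of unsigned gradient orientation,
    weighted by gradient magnitude (no block normalization, for brevity)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180     # unsigned orientation
    h, w = img.shape
    hist = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            b = (ang[y:y+cell, x:x+cell] / (180 / bins)).astype(int) % bins
            hist.append(np.bincount(b.ravel(),
                                    mag[y:y+cell, x:x+cell].ravel(), bins))
    return np.concatenate(hist)
```

The concatenated per-cell histograms form the feature vector that is then fed to an SVM, as in the combination the abstract found most consistent.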
Graphical classification of DNA sequences of HLA alleles by deep learning.
Miyake, Jun; Kaneshita, Yuhei; Asatani, Satoshi; Tagawa, Seiichi; Niioka, Hirohiko; Hirano, Takashi
2018-04-01
Alleles of human leukocyte antigen (HLA)-A DNAs are classified and expressed graphically by using artificial intelligence, namely deep learning (a stacked autoencoder). Nucleotide sequence data corresponding to a length of 822 bp, collected from the Immuno Polymorphism Database, were compressed to a two-dimensional representation and plotted. Profiles of the two-dimensional plots indicate that the alleles can be classified, as clusters are formed. The two-dimensional plot of HLA-A DNAs gives a clear outlook for characterizing the various alleles.
Optical Coatings With Graded Index Layers For High Power Laser Applications: Design
NASA Astrophysics Data System (ADS)
Zukic, Muamer; Guenther, Karl H.
1988-06-01
Graded index layers provide a greater flexibility for the design of optical coatings than "homogeneous" layers. A graded index layer can replace the whole or a part of a traditional multilayer stack of alternating thin films of high and low refractive index. This paper presents design examples for broadband antireflection coatings, narrowband high reflectors (also referred to as minus filters or rejection line filters), and non-polarizing beam splitters. Optimized refractive index profiles are derived for broadband antireflection coatings for various combinations of incident medium and substrate. The rejection line filter example uses a sinusoidal (rugate) index profile. The non-polarizing beamsplitter summarizes the topical contents of a paper presented in another conference at the same symposium.
NASA Astrophysics Data System (ADS)
Xu, Chao; Zhou, Dongxiang; Zhai, Yongping; Liu, Yunhui
2015-12-01
This paper realizes the automatic segmentation and classification of Mycobacterium tuberculosis with conventional light microscopy. First, the candidate bacillus objects are segmented by the marker-based watershed transform. The markers are obtained by an adaptive threshold segmentation based on an adaptive-scale Gaussian filter, whose scale is determined according to the color model of the bacillus objects. The candidate objects are then extracted integrally after region merging and contamination elimination. Second, the shape features of the bacillus objects are characterized by the Hu moments, compactness, eccentricity, and roughness, which are used to classify the single, touching and non-bacillus objects. We evaluated the logistic regression, random forest, and intersection kernel support vector machine classifiers in classifying the bacillus objects. Experimental results demonstrate that the proposed method yields high robustness and accuracy. The logistic regression classifier performs best with an accuracy of 91.68%.
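The marker-generation step above (adaptive thresholding against a Gaussian-smoothed local background, followed by removal of small contaminations) can be sketched with SciPy; the scale, offset, and minimum-size parameters here are illustrative, not the paper's values:

```python
import numpy as np
from scipy import ndimage

def adaptive_threshold(img, sigma, offset=0.0):
    """Pixel is foreground if it exceeds a Gaussian-smoothed local
    background estimate by `offset` (sketch of the marker step)."""
    background = ndimage.gaussian_filter(img.astype(float), sigma)
    return img > background + offset

def extract_markers(mask, min_size=5):
    """Label connected components and drop tiny ones (contaminations)."""
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_size]
    return np.where(np.isin(labels, keep), labels, 0)
```

The surviving labeled components would then seed the watershed transform that delineates the full candidate objects.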
The Design of Exhaust Systems and Discharge Stacks [With Comments].
ERIC Educational Resources Information Center
Clarke, John H.
1963-01-01
An important part of ventilating for safety consists of providing the necessary exhaust systems to remove building contaminants safely. Further, the effluent must be cleaned within practical limits by means of filters, collectors, and scrubbers. Where recirculation is not safe or feasible, the effluent must be discharged to the outside in a manner…
40 CFR 60.4930 - What definitions must I know?
Code of Federal Regulations, 2011 CFR
2011-07-01
... stack means a device used for discharging combustion gases to avoid severe damage to the air pollution..., 2010. Fabric filter means an add-on air pollution control device used to capture particulate matter by... sludge prior to incineration). (2) A change in the air pollution control devices used to comply with the...
40 CFR 60.4930 - What definitions must I know?
Code of Federal Regulations, 2013 CFR
2013-07-01
... stack means a device used for discharging combustion gases to avoid severe damage to the air pollution..., 2010. Fabric filter means an add-on air pollution control device used to capture particulate matter by... sludge prior to incineration). (2) A change in the air pollution control devices used to comply with the...
40 CFR 60.4930 - What definitions must I know?
Code of Federal Regulations, 2012 CFR
2012-07-01
... stack means a device used for discharging combustion gases to avoid severe damage to the air pollution..., 2010. Fabric filter means an add-on air pollution control device used to capture particulate matter by... sludge prior to incineration). (2) A change in the air pollution control devices used to comply with the...
40 CFR 60.4930 - What definitions must I know?
Code of Federal Regulations, 2014 CFR
2014-07-01
... stack means a device used for discharging combustion gases to avoid severe damage to the air pollution..., 2010. Fabric filter means an add-on air pollution control device used to capture particulate matter by... sludge prior to incineration). (2) A change in the air pollution control devices used to comply with the...
NASA Astrophysics Data System (ADS)
Bingi, J.; Hemalatha, M.; Anita, R. W.; Vijayan, C.; Murukeshan, V. M.
2015-11-01
Light transport and the physical phenomena related to light propagation in random media are very intriguing; they also provide scope for new paradigms of device functionality, most of which remain unexplored. Here we demonstrate, experimentally and by simulation, a novel kind of asymmetric light transmission (diffusion) in a stack of random media (SRM) with a graded transport mean free path. The structure is studied in terms of the transmission of photons propagated through, and photons generated within, the SRM. It is observed that the SRM exhibits asymmetric transmission with a transmission contrast of 0.25. In addition, it is shown that the SRM works as a perfect optical low-pass filter with a well-defined cutoff wavelength at 580 nm. Further, the photons generated within the SRM were found to exhibit functionality similar to an optical diode, with a transmission contrast of 0.62. The basis of this functionality is explained in terms of wavelength-dependent photon randomization and the graded transport mean free path of the SRM.
NASA Astrophysics Data System (ADS)
Ma, Hongchao; Cai, Zhan; Zhang, Liang
2018-01-01
This paper discusses airborne light detection and ranging (LiDAR) point cloud filtering (a binary classification problem) from the machine learning point of view. We compared three supervised classifiers for point cloud filtering, namely, Adaptive Boosting, support vector machine, and random forest (RF). Nineteen features were generated from the raw LiDAR point cloud based on height and other geometric information within a given neighborhood. The test datasets issued by the International Society for Photogrammetry and Remote Sensing (ISPRS) were used to evaluate the performance of the three filtering algorithms; RF showed the best results with an average total error of 5.50%. The paper also makes a tentative exploration of applying transfer learning theory to point cloud filtering, which, to the authors' knowledge, has not previously been introduced into the LiDAR field. We performed filtering of three datasets from real projects carried out in China with RF models constructed by learning from the 15 ISPRS datasets and then transferred with little to no change of the parameters. Reliable results were achieved, especially in rural areas (overall accuracy reached 95.64%), indicating the feasibility of model transfer in the context of point cloud filtering for both easy automation and acceptable accuracy.
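Height-based geometric features of the kind fed to the classifiers above can be computed per point from a local neighborhood. A naive O(n²) NumPy sketch of two such features (real pipelines use spatial indexing and many more descriptors; the radius is an illustrative parameter):

```python
import numpy as np

def height_features(points, radius=5.0):
    """Two per-point features from a cylindrical neighborhood:
    height above the local minimum, and local height range.
    points: (n, 3) array of x, y, z coordinates."""
    xy, z = points[:, :2], points[:, 2]
    feats = np.empty((len(points), 2))
    for i, p in enumerate(xy):
        nb = z[np.linalg.norm(xy - p, axis=1) <= radius]  # neighbor heights
        feats[i] = (z[i] - nb.min(), nb.max() - nb.min())
    return feats
```

A point far above the local minimum is a strong non-ground candidate, which is why such features discriminate well in the rural scenes the abstract highlights.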
Assessment of the Revised 3410 Building Filtered Exhaust Stack Sampling Probe Location
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Xiao-Ying; Recknagle, Kurtis P.; Glissmeyer, John A.
2013-12-01
In order to support the air emissions permit for the 3410 Building, Pacific Northwest National Laboratory performed a series of tests in the exhaust air discharge from the reconfigured 3410 Building Filtered Exhaust Stack. The objective was to determine whether the location of the air sampling probe for emissions monitoring meets the applicable regulatory criteria governing such effluent monitoring systems. In particular, the capability of the air sampling probe location to meet the acceptance criteria of ANSI/HPS N13.1-2011, Sampling and Monitoring Releases of Airborne Radioactive Substances from the Stack and Ducts of Nuclear Facilities, was determined. The qualification criteria for these types of stacks address 1) uniformity of air velocity, 2) sufficiently small flow angle with respect to the axis of the duct, 3) uniformity of tracer gas concentration, and 4) uniformity of tracer particle concentration. Testing was performed to conform to the quality requirements of NQA-1-2000. Fan configurations tested included all combinations of any two fans at a time. Most of the tests were conducted at the normal flow rate, while a small subset of tests was performed at a slightly higher flow rate achieved with the laboratory hood sashes fully open. The qualification criteria for an air monitoring probe location are taken from ANSI/HPS N13.1-2011 and are paraphrased as follows, with key results summarized: 1. Angular Flow—The average air velocity angle must not deviate from the axis of the stack or duct by more than 20°. Our test results show that the mean flow angles at the center two-thirds of the ducts are smaller than 4.5° for all testing conditions. 2. Uniform Air Velocity—The acceptance criterion is that the COV of the air velocity must be ≤ 20% across the center two-thirds of the area of the stack. Our results show that the COVs of the air velocity across the center two-thirds of the stack are smaller than 2.9% for all testing conditions. 3.
Uniform Concentration of Tracer Gases—The uniformity of the concentration of potential contaminants is first tested using a tracer gas to represent gaseous effluents. The tracer is injected downstream of the fan outlets, at the junction where the fan discharges meet. The acceptance criteria are that 1) the COV of the measured tracer gas concentration is ≤20% across the center two-thirds of the sampling plane and 2) at no point in the sampling plane does the concentration vary from the mean by >30%. Our test results show that 1) the COV of the measured tracer gas concentration is <2.9% for all test conditions and 2) at no point in the sampling plane does the concentration vary from the mean by >6.5%. 4. Uniform Concentration of Tracer Particles—Tracer particles of 10-μm aerodynamic diameter are used for the second demonstration of concentration uniformity. The acceptance criterion is that the COV of particle concentration is ≤20% across the center two-thirds of the sampling plane. Our test results indicate that the COV of particle concentration is <9.9% across the center two-thirds of the sampling plane for all testing conditions. Thus, the reconfigured 3410 Building Filtered Exhaust Stack was determined to meet the qualification criteria given in the ANSI/HPS N13.1-2011 standard. Changes to the system configuration or operations outside the bounds described in this report (e.g., exhaust stack velocity changes, relocation of the sampling probe, and addition of fans) may require re-testing or re-evaluation to determine compliance.
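The uniformity checks above reduce to simple statistics over traverse-point measurements. A minimal sketch in Python (the function name and sample values are illustrative, not measured data from this report):

```python
import numpy as np

def qualifies(values, cov_limit=0.20, point_dev_limit=None):
    """Check ANSI/HPS N13.1-style uniformity criteria over the center
    two-thirds of a sampling plane. `values` are velocity or tracer
    concentration measurements at the traverse points.
    Illustrative sketch; the interface is an assumption."""
    v = np.asarray(values, dtype=float)
    mean = v.mean()
    cov = v.std(ddof=1) / mean          # coefficient of variation
    ok = cov <= cov_limit
    if point_dev_limit is not None:     # e.g. 0.30 for the tracer-gas test
        max_dev = np.abs(v - mean).max() / mean
        ok = ok and max_dev <= point_dev_limit
    return cov, ok

# Example: a nearly uniform tracer-gas traverse passes both criteria
cov, ok = qualifies([100, 101, 99, 102, 98, 100], point_dev_limit=0.30)
```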
Guided filter and convolutional network based tracking for infrared dim moving target
NASA Astrophysics Data System (ADS)
Qian, Kun; Zhou, Huixin; Qin, Hanlin; Rong, Shenghui; Zhao, Dong; Du, Juan
2017-09-01
A dim moving target is usually submerged in strong noise, and its motion observability is degraded by the numerous false alarms that accompany a low signal-to-noise ratio. A tracking algorithm that integrates the guided image filter (GIF) and a convolutional neural network (CNN) into the particle filter framework is presented to cope with the uncertainty of dim targets. First, the initial target template is treated as a guidance to filter incoming templates depending on similarities between the guidance and candidate templates. The GIF algorithm exploits the structure in the guidance and acts as an edge-preserving smoothing operator. The guidance therefore preserves the detail of valuable templates while blurring inaccurate ones, effectively alleviating tracking deviation. In addition, a two-layer CNN is adopted to obtain a powerful appearance representation, and a Bayesian classifier is trained on these discriminative features. Moreover, an adaptive learning factor is introduced to prevent updates of the classifier's parameters when the target undergoes severe background clutter. Finally, the classifier responses of the particles are used to generate particle importance weights, and a resampling procedure retains samples according to these weights. In the prediction stage, a second-order transition model incorporates the target velocity to estimate the current position. Experimental results demonstrate that the presented algorithm outperforms several related algorithms in accuracy.
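The weight-and-resample step described above can be sketched as follows; the classifier responses here stand in for the CNN/Bayesian-classifier scores, and the systematic-resampling variant is an assumption (the abstract does not specify the resampling scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

def resample(particles, responses):
    """Systematic resampling driven by classifier responses, as in the
    particle-filter stage sketched above. The real feature extraction
    and classifier are omitted."""
    w = np.asarray(responses, dtype=float)
    w = w / w.sum()                              # normalized importance weights
    positions = (np.arange(len(w)) + rng.random()) / len(w)
    idx = np.searchsorted(np.cumsum(w), positions)
    return particles[idx]

# Two plausible particles near the target and one far off it
particles = np.array([[10.0, 12.0], [11.0, 12.5], [30.0, 40.0]])
responses = np.array([0.9, 0.8, 0.01])           # classifier scores per particle
survivors = resample(particles, responses)       # low-weight particle is culled
```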
NASA Astrophysics Data System (ADS)
Lopez Maurino, Sebastian; Badano, Aldo; Cunningham, Ian A.; Karim, Karim S.
2016-03-01
We propose a new design of a stacked three-layer flat-panel x-ray detector for dual-energy (DE) imaging. Each layer consists of its own scintillator of individual thickness and an underlying thin-film-transistor-based flat panel. Three images are obtained simultaneously in the detector during the same x-ray exposure, thereby eliminating any motion artifacts. The detector operation is two-fold: a conventional radiography image can be obtained by combining all three layers' images, while a DE subtraction image can be obtained from the front and back layers' images, where the middle layer acts as a mid-filter that helps achieve spectral separation. We proceed to optimize the detector parameters for two sample imaging tasks that could particularly benefit from this new detector by obtaining the best possible signal-to-noise ratio per root entrance exposure using well-established theoretical models adapted to fit our new design. These results are compared to a conventional DE temporal subtraction detector and a single-shot DE subtraction detector with a copper mid-filter, both of which underwent the same theoretical optimization. The findings are then validated using advanced Monte Carlo simulations for all optimized detector setups. Given the performance expected from initial results and the recent decrease in price for digital x-ray detectors, the simplicity of the three-layer stacked imager approach appears promising to usher in a new generation of multi-spectral digital x-ray diagnostics.
Tarumi, Toshiyasu; Small, Gary W; Combs, Roger J; Kroutil, Robert T
2004-04-01
Finite impulse response (FIR) filters and finite impulse response matrix (FIRM) filters are evaluated for use in the detection of volatile organic compounds with wide spectral bands by direct analysis of interferogram data obtained from passive Fourier transform infrared (FT-IR) measurements. Short segments of filtered interferogram points are classified by support vector machines (SVMs) to implement the automated detection of heated plumes of the target analyte, ethanol. The interferograms employed in this study were acquired with a downward-looking passive FT-IR spectrometer mounted on a fixed-wing aircraft. Classifiers are trained with data collected on the ground and subsequently used for the airborne detection. The success of the automated detection depends on the effective removal of background contributions from the interferogram segments. Removing the background signature is complicated when the analyte spectral bands are broad because there is significant overlap between the interferogram representations of the analyte and background. Methods to implement the FIR and FIRM filters while excluding background contributions are explored in this work. When properly optimized, both filtering procedures provide satisfactory classification results for the airborne data. Missed detection rates of 8% or smaller for ethanol and false positive rates of at most 0.8% are realized. The optimization of filter design parameters, the starting interferogram point for filtering, and the length of the interferogram segments used in the pattern recognition is discussed.
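The FIR filtering step applied to interferogram segments can be illustrated on toy data; the 3-point moving-average taps below are purely illustrative, not the optimized filters of the study:

```python
import numpy as np

def fir_filter_segment(segment, taps):
    """Apply an FIR digital filter to a short interferogram segment
    (sketch; real tap design would target the analyte's spectral band
    while suppressing background contributions)."""
    return np.convolve(segment, taps, mode="same")

# A toy 3-point moving-average filter smoothing an alternating segment
taps = np.ones(3) / 3
segment = np.array([0.0, 3.0, 0.0, 3.0, 0.0, 3.0])
filtered = fir_filter_segment(segment, taps)
```

The filtered segment would then be handed to a trained SVM for plume detection, as described above.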
Spatio-temporal filtering for determination of common mode error in regional GNSS networks
NASA Astrophysics Data System (ADS)
Bogusz, Janusz; Gruszczynski, Maciej; Figurski, Mariusz; Klos, Anna
2015-04-01
The spatial correlation between different stations for individual components in regional GNSS networks appears to be significant. Mismodelling in satellite orbits, the Earth orientation parameters (EOP), large-scale atmospheric effects or satellite antenna phase centre corrections can all cause regionally correlated errors. These errors in GPS time series are referred to as common mode errors (CMEs). They are usually estimated with regional spatial filtering, such as "stacking". In this paper, we show the stacking approach for the set of ASG-EUPOS permanent stations, assuming that the spatial distribution of the CME is uniform over the whole region of Poland (more than 600 km in extent). ASG-EUPOS is a multifunctional precise positioning system based on a reference network designed for Poland. We used a 5-year span of time series (2008-2012) of daily solutions in the ITRF2008 from Bernese 5.0, processed by the Military University of Technology EPN Local Analysis Centre (MUT LAC). At the beginning of our analyses of spatial dependencies, the correlation coefficients between each pair of stations in the GNSS network were calculated. This analysis shows that the spatio-temporal behaviour of the GPS-derived time series is not purely random; there is an evident uniform spatial response. In order to quantify the influence of CME filtering, the L1 and L2 norms were determined. The values of these norms were calculated for the North, East and Up components twice: before filtering and after stacking. The observed reduction of the L1 and L2 norms was up to 30%, depending on the dimension of the network. However, the question of how to define an optimal size of the CME-analysed subnetwork remains unanswered in this research, because our network is not extended enough.
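Under the assumption of a spatially uniform CME, stacking amounts to subtracting the epoch-wise network mean of the residuals; a minimal sketch with synthetic numbers (not ASG-EUPOS data):

```python
import numpy as np

def stack_cme(residuals):
    """Estimate the common mode error (CME) at each epoch as the mean of
    the residuals over all stations in the network, then subtract it.
    Sketch of the stacking approach; per-station weighting is omitted."""
    cme = np.nanmean(residuals, axis=1, keepdims=True)  # epochs x 1
    return residuals - cme, cme

# 4 epochs x 3 stations: a shared (common mode) signal plus local noise
shared = np.array([[2.0], [-1.0], [0.5], [1.5]])
noise = np.array([[0.1, -0.1, 0.0],
                  [0.0, 0.1, -0.1],
                  [-0.1, 0.0, 0.1],
                  [0.1, 0.0, -0.1]])
res = shared + noise
filtered, cme = stack_cme(res)

# The reduction of the L2 norm quantifies the filtering gain
l2_before = np.linalg.norm(res)
l2_after = np.linalg.norm(filtered)
```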
Cebeci Maltaş, Derya; Kwok, Kaho; Wang, Ping; Taylor, Lynne S; Ben-Amotz, Dor
2013-06-01
Identifying pharmaceutical ingredients is a routine procedure required during industrial manufacturing. Here we show that a recently developed Raman compressive detection strategy can be employed to classify various widely used pharmaceutical materials using a hybrid supervised/unsupervised strategy in which only two ingredients are used for training and yet six other ingredients can also be distinguished. More specifically, our liquid crystal spatial light modulator (LC-SLM) based compressive detection instrument is trained using only the active ingredient, tadalafil, and the excipient, lactose, but is tested using these and various other excipients: microcrystalline cellulose, magnesium stearate, titanium (IV) oxide, talc, sodium lauryl sulfate and hydroxypropyl cellulose. Partial least squares discriminant analysis (PLS-DA) is used to generate the compressive detection filters necessary for fast chemical classification. Although the filters used in this study are trained on only lactose and tadalafil, we show that all the pharmaceutical ingredients mentioned above can be differentiated and classified using PLS-DA compressive detection filters with an accumulation time of 10 ms per filter.
Jong, Victor L; Novianti, Putri W; Roes, Kit C B; Eijkemans, Marinus J C
2014-12-01
The literature shows that classifiers perform differently across datasets and that correlations within datasets affect the performance of classifiers. The question that arises is whether the correlation structure within datasets differs significantly across diseases. In this study, we evaluated the homogeneity of correlation structures within and between datasets of six etiological disease categories: inflammatory, immune, infectious, degenerative, hereditary and acute myeloid leukemia (AML). We also assessed the effect of two filtering methods, detection call and variance filtering, on correlation structures. We downloaded microarray datasets from ArrayExpress for experiments meeting predefined criteria and ended up with 12 datasets for non-cancerous diseases and six for AML. The datasets were preprocessed by a common procedure incorporating platform-specific recommendations and the two filtering methods mentioned above. Homogeneity of correlation matrices between and within datasets of etiological diseases was assessed using Box's M statistic on permuted samples. We found that correlation structures differ significantly between datasets of the same and/or different etiological disease categories and that variance filtering eliminates more uncorrelated probesets than detection call filtering and thus renders the data highly correlated.
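Box's M compares the log-determinant of the pooled covariance matrix with those of the per-group covariances; a sketch on synthetic groups (the chi-square correction factor used for significance testing, and the permutation procedure, are omitted):

```python
import numpy as np

def box_m(groups):
    """Box's M statistic for homogeneity of covariance matrices across
    groups of samples (rows = observations, columns = variables).
    M >= 0, with larger values indicating less homogeneous covariances."""
    k = len(groups)
    ns = [g.shape[0] for g in groups]
    covs = [np.cov(g, rowvar=False) for g in groups]
    N = sum(ns)
    pooled = sum((n - 1) * c for n, c in zip(ns, covs)) / (N - k)
    m = (N - k) * np.log(np.linalg.det(pooled))
    m -= sum((n - 1) * np.log(np.linalg.det(c)) for n, c in zip(ns, covs))
    return m

rng = np.random.default_rng(1)
g1 = rng.normal(size=(50, 3))
g2 = rng.normal(size=(50, 3))               # same covariance: M small
g3 = rng.normal(scale=3.0, size=(50, 3))    # inflated covariance: M large
m_same = box_m([g1, g2])
m_diff = box_m([g1, g3])
```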
Otto, N; Platz, S; Fink, T; Wutscherk, M; Menzel, U
2016-01-01
One key technology to eliminate organic micropollutants (OMP) from wastewater effluent is adsorption on powdered activated carbon (PAC). To avoid discharging highly loaded PAC particles into natural water bodies, a separation stage has to be implemented. Commonly, large settling tanks and flocculation filters with the application of coagulants and flocculation aids are used. In this study, a multi-hydrocyclone classifier with a downstream cloth filter was investigated on a pilot plant as a space-saving alternative with no need for dosing of chemical additives. To improve the separation, a coarser-ground PAC type was compared to a standard PAC type with regard to OMP elimination results as well as separation performance. With a PAC dosing rate of 20 mg/l, an average of 64.7 wt% of the standard PAC and 79.5 wt% of the coarse-ground PAC could be separated in the hydrocyclone classifier. A total average separation efficiency of 93-97 wt% could be reached with the combination of hydrocyclone classifier and cloth filter. Nonetheless, the OMP elimination of the coarse-ground PAC was not sufficient to compete with the standard PAC. Further research and development is necessary to find applicable coarse-grained PAC types with adequate OMP elimination capabilities.
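For two separation stages in series, the overall efficiency follows from a simple mass balance: particles missed by the hydrocyclone may still be caught by the cloth filter. The cloth-filter capture fraction below is an assumed figure, chosen to reproduce the reported 93-97 wt% range:

```python
def combined_efficiency(e_cyclone, e_cloth):
    """Overall separation of two stages in series: the second stage sees
    only the fraction that escaped the first."""
    return 1.0 - (1.0 - e_cyclone) * (1.0 - e_cloth)

# Coarse-ground PAC: 79.5 wt% captured in the cyclone (reported value);
# an assumed ~85% capture of the remainder in the cloth filter lands
# the total inside the reported 93-97 wt% range.
total = combined_efficiency(0.795, 0.85)
```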
Waleń, Tomasz; Chojnowski, Grzegorz; Gierski, Przemysław; Bujnicki, Janusz M.
2014-01-01
The understanding of folding and function of RNA molecules depends on the identification and classification of interactions between ribonucleotide residues. We developed a new method named ClaRNA for computational classification of contacts in RNA 3D structures. Unique features of the program are the ability to identify imperfect contacts and to process coarse-grained models. Each doublet of spatially close ribonucleotide residues in a query structure is compared to clusters of reference doublets obtained by analysis of a large number of experimentally determined RNA structures, and assigned a score that describes its similarity to one or more known types of contacts, including pairing, stacking, base–phosphate and base–ribose interactions. The accuracy of ClaRNA is 0.997 for canonical base pairs, 0.983 for non-canonical pairs and 0.961 for stacking interactions. The generalized squared correlation coefficient (GC2) for ClaRNA is 0.969 for canonical base pairs, 0.638 for non-canonical pairs and 0.824 for stacking interactions. The classifier can be easily extended to include new types of spatial relationships between pairs or larger assemblies of nucleotide residues. ClaRNA is freely available via a web server that includes an extensive set of tools for processing and visualizing structural information about RNA molecules. PMID:25159614
Analytical Tools for Cloudscope Ice Measurement
NASA Technical Reports Server (NTRS)
Arnott, W. Patrick
1998-01-01
The cloudscope is a ground or aircraft instrument for viewing ice crystals impacted on a sapphire window. It is essentially a simple optical microscope with an attached compact CCD video camera whose output is recorded on a Hi-8 mm video cassette recorder equipped with digital time and date recording capability. In aircraft operation the window is at a stagnation point of the flow, so adiabatic compression heats the window to sublimate the ice crystals, allowing later impacting crystals to be imaged as well. A film heater is used for ground-based operation to provide sublimation, and it can also provide extra heat for aircraft operation. The compact video camera can be focused manually by the operator, and a beam splitter and miniature bulb combination provides illumination for night operation. Several shutter speeds are available to accommodate daytime illumination conditions in direct sunlight. The video images can be used directly to qualitatively assess the crystal content of cirrus clouds and contrails. Quantitative size spectra are obtained with the tools described in this report. Selected portions of the video images are digitized using a PCI bus frame grabber to form a short movie segment or stack using NIH (National Institutes of Health) Image software with custom macros developed at DRI. The stack can be Fourier-transform filtered with custom, easy-to-design filters to reduce the most objectionable video artifacts. Particle quantification of each slice of the stack is performed using digital image analysis. Data recorded for each particle include particle number and centroid, frame number in the stack, particle area, perimeter, equivalent-ellipse maximum and minimum radii, ellipse angle, and pixel number. Each valid particle in the stack is stamped with a unique number. This output can be used to obtain a semiquantitative appreciation of the crystal content.
The particle information becomes the raw input for a subsequent program (FORTRAN) that synthesizes each slice and separates the new from the sublimating particles. The new particle information is used to generate quantitative particle concentration, area, and mass size spectra along with total concentration, solar extinction coefficient, and ice water content. This program directly creates output in html format for viewing with a web browser.
Laboratory Testing of Volcanic Gas Sampling Techniques
NASA Astrophysics Data System (ADS)
Kress, V. C.; Green, R.; Ortiz, M.; Delmelle, P.; Fischer, T.
2003-12-01
A series of laboratory experiments was performed, designed to calibrate several commonly used methods for field measurement of volcanic gas composition. H2, CO2, SO2 and CHCl2F gases were mixed through carefully calibrated rotameters to form mixtures representative of the types of volcanic compositions encountered at Kilauea and Showa-Shinzan. Gas mixtures were passed through a horizontal furnace at 700 °C to break down CHCl2F and form an equilibrium high-temperature mixture. With the exception of Giggenbach bottle samples, all gas sampling was performed adjacent to the furnace exit in order to roughly simulate the air-contaminated samples encountered in nature. Giggenbach bottle samples were taken from just beyond the hot spot, 10 cm down the furnace tube, to minimize atmospheric contamination. Alkali-trap measurements were performed by passing gases over, or bubbling gases through, 6N KOH, NaOH or LiOH solution for 10 minutes. Results were highly variable, with errors in measured S/Cl varying from +1600% to -19%. In general, reduced Kilauea compositions showed smaller errors than the more oxidized Showa-Shinzan compositions. Results were not resolvably different in experiments where gas was bubbled through the alkaline solution. In a second set of experiments, 25-mm circles of Whatman 42 filter paper were impregnated with NaHCO3 or KHCO3 alkaline solutions stabilized with glycerol. Some filters also included alizarin (pH 5.6-7.2) and neutral red (pH 6.8-8.0) indicators to provide a visual monitor of gas absorption. Filters were mounted in individual holders and used in stacks of 3. Durations were adjusted to maximize reaction in the first filter in the stack and minimize reaction in the final filter. Errors in filter pack measurements were smaller and more systematic than those of the alkali-trap measurements. S/Cl was overestimated in oxidized gas mixtures and underestimated in reduced mixtures. 
Alkali-trap methods allow extended unattended monitoring of volcanic gases, but our results suggest that they are poor recorders of gas composition. Filter pack methods are somewhat better, but are more difficult to interpret than previously recognized. We suggest several refinements to the filter-pack technique that can improve accuracy. Giggenbach bottles remain the best method for volcanic gas sampling, despite the inherent difficulty and danger of obtaining samples in active volcanic environments. Relative merits of different alkali solutions and indicators are discussed.
Design of an S band narrow-band bandpass BAW filter
NASA Astrophysics Data System (ADS)
Gao, Yang; Zhao, Kun-li; Han, Chao
2017-11-01
An S band narrowband bandpass BAW filter, with a center frequency of 2.460 GHz, a bandwidth of 41 MHz, an in-band insertion loss of 1.154 dB, a passband ripple of 0.9 dB, and out-of-band rejection of about -42.5 dB at 2.385 GHz and -45.5 dB at 2.506 GHz, was designed for potential UAV measurement and control applications. The design proceeded as follows: each FBAR's stack in the BAW filter was designed using the Mason model; each FBAR's shape was designed with the apodized-electrode method; and the layout of the BAW filter was designed. An acoustic-electromagnetic co-simulation model was built to validate the performance of the designed BAW filter. The presented design procedure is a general one, with two notable characteristics: 1) an acoustic and electromagnetic (A and EM) co-simulation method is used for final BAW filter performance validation in the design stage, which ensures that over-optimistic designs produced by the bare 1D Mason model are found and rejected in time; 2) an in-house-developed auto-layout method is used to obtain a compact BAW filter layout, which simplifies iterative trial-and-error work and outputs the necessary in-plane geometry information to the A and EM co-simulation model.
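As a rough illustration of the 1D lumped modelling behind the Mason-model design step, the Butterworth-Van Dyke (BVD) equivalent circuit of a single FBAR can be sketched; the element values are assumed orders of magnitude chosen to land near the 2.46 GHz band, not the paper's actual design:

```python
import numpy as np

def fbar_impedance(f, c0, cm, lm, rm=0.0):
    """Impedance of one FBAR in the (lossless by default) Butterworth-Van
    Dyke model: a motional Rm-Lm-Cm branch in parallel with the plate
    capacitance C0. A common simplified stand-in for the Mason model."""
    w = 2 * np.pi * f
    z_motional = rm + 1j * w * lm + 1 / (1j * w * cm)
    z_c0 = 1 / (1j * w * c0)
    return z_motional * z_c0 / (z_motional + z_c0)

# Assumed element values (illustrative orders of magnitude)
c0, cm, lm = 1e-12, 50e-15, 84e-9
fs = 1 / (2 * np.pi * np.sqrt(lm * cm))   # series resonance (|Z| minimum)
fp = fs * np.sqrt(1 + cm / c0)            # parallel resonance (|Z| maximum)
```

Ladder filters place series FBARs at fs and shunt FBARs at fp to shape the passband; the spacing fp - fs sets the achievable bandwidth.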
A cardiorespiratory classifier of voluntary and involuntary electrodermal activity
2010-01-01
Background Electrodermal reactions (EDRs) can be attributed to many origins, including spontaneous fluctuations of electrodermal activity (EDA) and stimuli such as deep inspirations, voluntary mental activity and startling events. In fields that use EDA as a measure of psychophysiological state, the fact that EDRs may be elicited from many different stimuli is often ignored. This study attempts to classify observed EDRs as voluntary (i.e., generated from intentional respiratory or mental activity) or involuntary (i.e., generated from startling events or spontaneous electrodermal fluctuations). Methods Eight able-bodied participants were subjected to conditions that would cause a change in EDA: music imagery, startling noises, and deep inspirations. A user-centered cardiorespiratory classifier consisting of 1) an EDR detector, 2) a respiratory filter and 3) a cardiorespiratory filter was developed to automatically detect a participant's EDRs and to classify the origin of their stimulation as voluntary or involuntary. Results Detected EDRs were classified with a positive predictive value of 78%, a negative predictive value of 81% and an overall accuracy of 78%. Without the classifier, EDRs could only be correctly attributed as voluntary or involuntary with an accuracy of 50%. Conclusions The proposed classifier may enable investigators to form more accurate interpretations of electrodermal activity as a measure of an individual's psychophysiological state. PMID:20184746
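The reported figures of merit follow directly from the 2x2 confusion matrix; the counts below are assumed for illustration, chosen to give rates close to the reported values, and are not the study's raw data:

```python
def classifier_metrics(tp, fp, tn, fn):
    """Positive predictive value, negative predictive value and accuracy
    from true/false positive and negative counts."""
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    acc = (tp + tn) / (tp + fp + tn + fn)
    return ppv, npv, acc

# Assumed counts yielding PPV ~78%, NPV ~81%, accuracy ~79%
ppv, npv, acc = classifier_metrics(tp=39, fp=11, tn=42, fn=10)
```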
The shape of velocity dispersion profiles and the dynamical state of galaxy clusters
NASA Astrophysics Data System (ADS)
Costa, A. P.; Ribeiro, A. L. B.; de Carvalho, R. R.
2018-01-01
Motivated by the existence of the relationship between the dynamical state of clusters and the shape of the velocity dispersion profiles (VDPs), we study the VDPs for Gaussian (G) and non-Gaussian (NG) systems for a subsample of clusters from the Yang catalogue. The groups cover a redshift interval of 0.03 ≤ z ≤ 0.1 with halo mass ≥1014 M⊙. We use a robust statistical method, Hellinger Distance, to classify the dynamical state of the systems according to their velocity distribution. The stacked VDP of each class, G and NG, is then determined using either Bright or Faint galaxies. The stacked VDP for G groups displays a central peak followed by a monotonically decreasing trend which indicates a predominance of radial orbits, with the Bright stacked VDP showing lower velocity dispersions in all radii. The distinct features we find in NG systems are manifested not only by the characteristic shape of VDP, with a depression in the central region, but also by a possible higher infall rate associated with galaxies in the Faint stacked VDP.
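A stacked VDP is essentially the dispersion of line-of-sight velocities in radial bins over the combined galaxy sample; a sketch on a toy declining-dispersion system (the binning and normalization conventions here are assumptions, not the study's):

```python
import numpy as np

def stacked_vdp(radii, velocities, bins):
    """Velocity dispersion profile: sample standard deviation of
    line-of-sight velocities in each radial bin of the stacked sample."""
    idx = np.digitize(radii, bins)
    return np.array([velocities[idx == i].std(ddof=1)
                     for i in range(1, len(bins))])

rng = np.random.default_rng(2)
r = rng.uniform(0.0, 1.0, 3000)                 # normalized radii
# Toy Gaussian system: dispersion declining monotonically with radius,
# as described for the G-group stacked VDP above
v = rng.normal(0.0, 900.0 - 500.0 * r)          # km/s
vdp = stacked_vdp(r, v, bins=np.linspace(0.0, 1.0, 6))
```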
40 CFR 60.2165 - What monitoring equipment must I install and what parameters must I monitor?
Code of Federal Regulations, 2010 CFR
2010-07-01
... a bag leak detection system as specified in paragraphs (b)(1) through (8) of this section. (1) You must install and operate a bag leak detection system for each exhaust stack of the fabric filter. (2) Each bag leak detection system must be installed, operated, calibrated, and maintained in a manner...
40 CFR 62.14690 - What monitoring equipment must I install and what parameters must I monitor?
Code of Federal Regulations, 2010 CFR
2010-07-01
... subpart, you must install, calibrate, maintain, and continuously operate a bag leak detection system as... detection system for each exhaust stack of the fabric filter. (2) Each bag leak detection system must be... specifications and recommendations. (3) The bag leak detection system must be certified by the manufacturer to be...
40 CFR 1065.145 - Gaseous and PM probes, transfer lines, and sampling system components.
Code of Federal Regulations, 2012 CFR
2012-07-01
... measuring sample flows by designing a passive sampling system that meets the following requirements: (A) The... number of bends, and have no filters. (B) If probes are designed such that they are sensitive to stack... design and construction. Use sample probes with inside surfaces of 300 series stainless steel or, for raw...
40 CFR 1065.145 - Gaseous and PM probes, transfer lines, and sampling system components.
Code of Federal Regulations, 2010 CFR
2010-07-01
... measuring sample flows by designing a passive sampling system that meets the following requirements: (A) The... number of bends, and have no filters. (B) If probes are designed such that they are sensitive to stack... design and construction. Use sample probes with inside surfaces of 300 series stainless steel or, for raw...
40 CFR 1065.145 - Gaseous and PM probes, transfer lines, and sampling system components.
Code of Federal Regulations, 2014 CFR
2014-07-01
... measuring sample flows by designing a passive sampling system that meets the following requirements: (A) The... number of bends, and have no filters. (B) If probes are designed such that they are sensitive to stack... design and construction. Use sample probes with inside surfaces of 300 series stainless steel or, for raw...
40 CFR 1065.145 - Gaseous and PM probes, transfer lines, and sampling system components.
Code of Federal Regulations, 2013 CFR
2013-07-01
... measuring sample flows by designing a passive sampling system that meets the following requirements: (A) The... number of bends, and have no filters. (B) If probes are designed such that they are sensitive to stack... design and construction. Use sample probes with inside surfaces of 300 series stainless steel or, for raw...
40 CFR 1065.145 - Gaseous and PM probes, transfer lines, and sampling system components.
Code of Federal Regulations, 2011 CFR
2011-07-01
... measuring sample flows by designing a passive sampling system that meets the following requirements: (A) The... number of bends, and have no filters. (B) If probes are designed such that they are sensitive to stack... design and construction. Use sample probes with inside surfaces of 300 series stainless steel or, for raw...
Collaborative Information Filtering in Cooperative Communities.
ERIC Educational Resources Information Center
Okamoto, T.; Miyahara, K.
1998-01-01
The purpose of this study was to develop an information filtering system which collects, classifies, selects, and stores various kinds of information found through the Internet. A collaborative form of information gathering was examined and a model was built and implemented in the Internet information space. (AEF)
Applying six classifiers to airborne hyperspectral imagery for detecting giant reed
USDA-ARS?s Scientific Manuscript database
This study evaluated and compared six different image classifiers, including minimum distance (MD), Mahalanobis distance (MAHD), maximum likelihood (ML), spectral angle mapper (SAM), mixture tuned matched filtering (MTMF) and support vector machine (SVM), for detecting and mapping giant reed (Arundo...
Rational engineering of nanoporous anodic alumina optical bandpass filters
NASA Astrophysics Data System (ADS)
Santos, Abel; Pereira, Taj; Law, Cheryl Suwen; Losic, Dusan
2016-08-01
Herein, we present a rationally designed advanced nanofabrication approach aiming at producing a new type of optical bandpass filters based on nanoporous anodic alumina photonic crystals. The photonic stop band of nanoporous anodic alumina (NAA) is engineered in depth by means of a pseudo-stepwise pulse anodisation (PSPA) approach consisting of pseudo-stepwise asymmetric current density pulses. This nanofabrication method makes it possible to tune the transmission bands of NAA at specific wavelengths and bandwidths, which can be broadly modified across the UV-visible-NIR spectrum through the anodisation period (i.e. time between consecutive pulses). First, we establish the effect of the anodisation period as a means of tuning the position and width of the transmission bands of NAA across the UV-visible-NIR spectrum. To this end, a set of nanoporous anodic alumina bandpass filters (NAA-BPFs) are produced with different anodisation periods, ranging from 500 to 1200 s, and their optical properties (i.e. characteristic transmission bands and interferometric colours) are systematically assessed. Then, we demonstrate that the rational combination of stacked NAA-BPFs consisting of layers of NAA produced with different PSPA periods can be readily used to create a set of unique and highly selective optical bandpass filters with characteristic transmission bands, the position, width and number of which can be precisely engineered by this rational anodisation approach. Finally, as a proof-of-concept, we demonstrate that the superposition of stacked NAA-BPFs produced with slight modifications of the anodisation period enables the fabrication of NAA-BPFs with unprecedented broad transmission bands across the UV-visible-NIR spectrum. The results obtained from our study constitute the first comprehensive rationale towards advanced NAA-BPFs with fully controllable photonic properties. 
These photonic crystal structures could become a promising alternative to traditional optical bandpass filters based on glass and plastic. Electronic supplementary information (ESI) available: An example demonstrating the effect of pore widening on the position and width of the transmission band of a NAA-BPF and a comprehensive table summarising the position and FWHM of the different bands of the NAA-BPFs produced in this study. See DOI: 10.1039/c6nr03490j
Leske, David A; Hatt, Sarah R; Liebermann, Laura; Holmes, Jonathan M
2016-02-01
We compare two methods of analysis for Rasch scoring pre- to postintervention data: Rasch lookup table versus de novo stacked Rasch analysis using the Adult Strabismus-20 (AS-20). One hundred forty-seven subjects completed the AS-20 questionnaire prior to surgery and 6 weeks postoperatively. Subjects were classified 6 weeks postoperatively as "success," "partial success," or "failure" based on angle and diplopia status. Postoperative change in AS-20 scores was compared for all four AS-20 domains (self-perception, interactions, reading function, and general function) overall and by success status using two methods: (1) applying historical Rasch threshold measures from lookup tables and (2) performing a stacked de novo Rasch analysis. Change was assessed by analyzing effect size, improvement exceeding 95% limits of agreement (LOA), and score distributions. Effect sizes were similar for all AS-20 domains whether obtained from lookup tables or stacked analysis. Similar proportions exceeded 95% LOAs using lookup tables versus stacked analysis. Improvement in median score was observed for all AS-20 domains using lookup tables and stacked analysis ( P < 0.0001 for all comparisons). The Rasch-scored AS-20 is a responsive and valid instrument designed to measure strabismus-specific health-related quality of life. When analyzing pre- to postoperative change in AS-20 scores, Rasch lookup tables and de novo stacked Rasch analysis yield essentially the same results. We describe a practical application of lookup tables, allowing the clinician or researcher to score the Rasch-calibrated AS-20 questionnaire without specialized software.
Leske, David A.; Hatt, Sarah R.; Liebermann, Laura; Holmes, Jonathan M.
2016-01-01
Purpose We compare two methods of analysis for Rasch scoring pre- to postintervention data: Rasch lookup table versus de novo stacked Rasch analysis using the Adult Strabismus-20 (AS-20). Methods One hundred forty-seven subjects completed the AS-20 questionnaire prior to surgery and 6 weeks postoperatively. Subjects were classified 6 weeks postoperatively as “success,” “partial success,” or “failure” based on angle and diplopia status. Postoperative change in AS-20 scores was compared for all four AS-20 domains (self-perception, interactions, reading function, and general function) overall and by success status using two methods: (1) applying historical Rasch threshold measures from lookup tables and (2) performing a stacked de novo Rasch analysis. Change was assessed by analyzing effect size, improvement exceeding 95% limits of agreement (LOA), and score distributions. Results Effect sizes were similar for all AS-20 domains whether obtained from lookup tables or stacked analysis. Similar proportions exceeded 95% LOAs using lookup tables versus stacked analysis. Improvement in median score was observed for all AS-20 domains using lookup tables and stacked analysis (P < 0.0001 for all comparisons). Conclusions The Rasch-scored AS-20 is a responsive and valid instrument designed to measure strabismus-specific health-related quality of life. When analyzing pre- to postoperative change in AS-20 scores, Rasch lookup tables and de novo stacked Rasch analysis yield essentially the same results. Translational Relevance We describe a practical application of lookup tables, allowing the clinician or researcher to score the Rasch-calibrated AS-20 questionnaire without specialized software. PMID:26933524
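The lookup-table approach described above can be sketched in a few lines: a raw domain sum score maps directly to a precalibrated Rasch measure, so no specialized Rasch software is needed at scoring time. The table values below are hypothetical placeholders, not the published AS-20 calibration.

```python
# Minimal sketch of lookup-table Rasch scoring. The raw sum score on a
# questionnaire domain is converted to an interval-scale Rasch measure via
# a precalibrated table. Values below are hypothetical, for illustration only.

RASCH_LOOKUP = {  # raw score -> Rasch measure (logits, hypothetical)
    0: -4.2, 1: -3.1, 2: -2.4, 3: -1.8, 4: -1.2,
    5: -0.6, 6: 0.0, 7: 0.7, 8: 1.5, 9: 2.6, 10: 4.0,
}

def rasch_score(item_responses):
    """Sum the item responses and convert via the lookup table."""
    raw = sum(item_responses)
    return RASCH_LOOKUP[raw]

def rasch_change(pre_items, post_items):
    """Pre- to post-intervention change on the Rasch (interval) scale."""
    return rasch_score(post_items) - rasch_score(pre_items)
```

A clinician can compute pre- to postoperative change with `rasch_change` given only the published threshold table for the instrument.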
Li, Jiangeng; Su, Lei; Pang, Zenan
2015-12-01
Feature selection techniques have been widely applied to tumor gene expression data analysis in recent years. A filter feature selection method named marginal Fisher analysis score (MFA score), based on graph embedding, has been proposed and widely used, mainly because it outperforms the Fisher score. Considering the heavy redundancy in gene expression data, we propose a new filter feature selection technique in this paper. Named MFA score+, it combines the MFA score with redundancy exclusion. We applied it to an artificial dataset and eight tumor gene expression datasets to select important features, and then used a support vector machine as the classifier. Compared with the MFA score, the t-test and the Fisher score, it achieved higher classification accuracy.
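The idea of pairing a relevance score with redundancy exclusion can be illustrated as follows. This sketch uses the plain Fisher score as the relevance measure (not the graph-embedding MFA score of the paper) and a greedy correlation-based redundancy filter; the threshold and names are illustrative.

```python
import numpy as np

def fisher_score(X, y):
    """Fisher score per feature: between-class scatter over within-class scatter."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def select_with_redundancy_exclusion(X, y, k, max_corr=0.9):
    """Greedy selection: take features in decreasing score order, skipping any
    feature whose absolute correlation with an already-chosen one is too high."""
    scores = fisher_score(X, y)
    chosen = []
    for j in np.argsort(scores)[::-1]:
        if all(abs(np.corrcoef(X[:, j], X[:, i])[0, 1]) < max_corr for i in chosen):
            chosen.append(j)
        if len(chosen) == k:
            break
    return chosen
```

With a perfectly redundant copy of a discriminative gene in the data, only one of the pair survives selection, which is the behavior the redundancy-excluding step is meant to provide.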
Automatic Classification Using Supervised Learning in a Medical Document Filtering Application.
ERIC Educational Resources Information Center
Mostafa, J.; Lam, W.
2000-01-01
Presents a multilevel model of the information filtering process that permits document classification. Evaluates a document classification approach based on a supervised learning algorithm, measures the accuracy of the algorithm in a neural network that was trained to classify medical documents on cell biology, and discusses filtering…
DEVELOPMENT OF A LAMINATED DISK FOR THE SPIN TEK ROTARY MICROFILTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herman, D.
2011-06-03
Funded by the Department of Energy Office of Environmental Management, EM-31, the Savannah River National Laboratory (SRNL) partnered with SpinTek Filtration™ to develop a filter disk that would withstand a reverse pressure or flow during operation of the rotary microfilter. The ability to withstand a reverse pressure and flow eliminates a potential accident scenario that could have resulted in damage to the filter membranes. While the original welded filter disks have been shown to withstand a reverse pressure/flow in the static condition, the filter disk design discussed in this report will allow a reverse pressure/flow while the disks are rotating. In addition, the laminated disk increases the flexibility during filter startup and cleaning operations. The new filter disk developed by SRNL and SpinTek is manufactured with a more open structure, significantly reducing internal flow restrictions in the disk. The prototype was tested at the University of Maryland and demonstrated to withstand the reverse pressure due to the centrifugal action of the rotary filter. The tested water flux of the disk was 1.34 gpm in a single-disk test. By comparison, the water flux of the current disk was 0.49 gpm per disk during a 25-disk test. The disk also demonstrated rejection of solids by filtering a 5 wt % strontium carbonate slurry with a filtrate clarity of less than 1.4 Nephelometric Turbidity Units (NTU) throughout the two-hour test. SRNL has been working with SpinTek Filtration™ to adapt the rotary microfilter for radioactive service in the Department of Energy (DOE) complex. One potential weakness is the loose nature of the membrane on the filter disks. The current disk is constructed by welding the membrane at the outer edge of the disk. The seal for the center of the membrane is accomplished by an o-ring in compression for the assembled stack.
The remainder of the membrane is free floating on the disk. This construction requires that a positive pressure be applied to the rotary filter tank to prevent the membrane from rising from the disk structure and potentially contacting the filter turbulence promoter. In addition, one accident scenario is a reverse flow through the filtrate line due to misalignment of valves, resulting in the membrane rising from the disk structure. The structural integrity of the current disk has been investigated, and it has been shown that the disk can withstand a significant reverse pressure in a static condition. However, the disk will likely incur damage if the filter stack is rotated during a reverse pressure. The development of a laminated disk would have several significant benefits for the operation of the rotary filter, including the prevention of a compromise in filter disk integrity during a reverse flow accident, increased operational flexibility, and increased self-cleaning ability of the filter. A laminated disk would allow filter rotor operation prior to a positive pressure in the filter tank. This would prevent the initial dead-head of the filter and the resulting initial filter cake buildup. The laminated disk would allow rotor operation with cleaning fluid, eliminating the need for a recirculation pump. Additionally, a laminated disk would allow a reverse flow of fluid through the membrane pores, removing trapped particles.
Learnable despeckling framework for optical coherence tomography images
NASA Astrophysics Data System (ADS)
Adabi, Saba; Rashedi, Elaheh; Clayton, Anne; Mohebbi-Kalkhoran, Hamed; Chen, Xue-wen; Conforto, Silvia; Nasiriavanaki, Mohammadreza
2018-01-01
Optical coherence tomography (OCT) is a prevalent, interferometric, high-resolution imaging method with broad biomedical applications. Nonetheless, OCT images suffer from an artifact called speckle, which degrades the image quality. Digital filters offer an opportunity for image improvement in clinical OCT devices, where hardware modification to enhance images is expensive. To reduce speckle, a wide variety of digital filters have been proposed; selecting the most appropriate filter for an OCT image/image set is a challenging decision, especially in dermatology applications of OCT, where a wide variety of tissues is imaged. To tackle this challenge, we propose an expandable learnable despeckling framework, which we call LDF. LDF decides which speckle reduction algorithm is most effective on a given image by learning a figure of merit (FOM) as a single quantitative image assessment measure. LDF is learnable, meaning that, when implemented on an OCT machine, it is retrained on each given image/image set and its performance improves. LDF is also expandable, meaning that any despeckling algorithm can easily be added to it. The architecture of LDF includes two main parts: (i) an autoencoder neural network and (ii) a filter classifier. The autoencoder learns the FOM based on several quality assessment measures obtained from the OCT image, including signal-to-noise ratio, contrast-to-noise ratio, equivalent number of looks, edge preservation index, and mean structural similarity index. Subsequently, the filter classifier identifies the most efficient filter from the following categories: (a) sliding window filters, including median, mean, and symmetric nearest neighborhood; (b) adaptive statistical-based filters, including Wiener, homomorphic Lee, and Kuwahara; and (c) edge-preserving patch- or pixel-correlation-based filters, including non-local means, total variation, and block-matching three-dimensional filtering.
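The filter-selection stage of such a framework can be caricatured without the autoencoder: apply each candidate filter and keep the one that maximizes a figure of merit. In this sketch the FOM is a crude SNR measure and only two candidate filters are included, as stand-ins for the learned FOM and the full filter bank of the paper.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def snr_db(img):
    """A crude signal-to-noise figure of merit: mean over std, in dB."""
    return 20 * np.log10(img.mean() / (img.std() + 1e-12))

# Two stand-in candidates for a larger despeckling filter bank.
CANDIDATES = {
    "median": lambda im: median_filter(im, size=3),
    "mean": lambda im: uniform_filter(im, size=3),
}

def pick_filter(img, fom=snr_db):
    """Apply each candidate filter and keep the one with the best FOM."""
    scored = {name: fom(f(img)) for name, f in CANDIDATES.items()}
    best = max(scored, key=scored.get)
    return best, CANDIDATES[best](img)
```

Swapping `fom` for a learned quality model and extending `CANDIDATES` recovers the expandable structure the abstract describes.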
Three-Category Classification of Magnetic Resonance Hearing Loss Images Based on Deep Autoencoder.
Jia, Wenjuan; Yang, Ming; Wang, Shui-Hua
2017-09-11
Hearing loss, a partial or total inability to hear, is also known as hearing impairment. Untreated hearing loss can adversely affect normal social communication and cause psychological problems in patients. Therefore, we design a three-category classification system to detect the specific category of hearing loss, which helps patients receive timely treatment. Before the training and test stages, we use data augmentation to produce a balanced dataset. Then we use a deep autoencoder neural network to classify the magnetic resonance brain images. In the deep autoencoder stage, we use a stacked sparse autoencoder to generate visual features and a softmax layer to classify the brain images into three categories of hearing loss. Our method obtains good experimental results: the overall accuracy is 99.5%, and the processing time is 0.078 s per brain image. Our proposed method based on a stacked sparse autoencoder works well in the classification of hearing loss images; its overall accuracy is 4% higher than the best of the state-of-the-art approaches.
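The stacked-autoencoder-plus-softmax architecture can be sketched as a forward pass. The weights below are random placeholders: in the paper each layer would be pretrained as a sparse autoencoder (reconstruction loss plus sparsity penalty) and the whole stack fine-tuned with the softmax layer; layer sizes are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class StackedSparseAutoencoderClassifier:
    """Forward pass of a stacked autoencoder with a softmax output layer.
    Weights are random stand-ins for pretrained sparse-autoencoder layers."""

    def __init__(self, layer_sizes, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        dims = list(layer_sizes) + [n_classes]
        self.weights = [rng.standard_normal((dims[i], dims[i + 1])) * 0.1
                        for i in range(len(dims) - 1)]

    def predict_proba(self, X):
        h = X
        for W in self.weights[:-1]:           # stacked encoder layers
            h = sigmoid(h @ W)
        return softmax(h @ self.weights[-1])  # softmax classification layer
```

The three output units here correspond to the three hearing-loss categories; each row of `predict_proba` is a probability distribution over them.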
Tunable graphene-based mid-infrared plasmonic multispectral and narrow band-stop filter
NASA Astrophysics Data System (ADS)
Wang, Xianjun; Meng, Hongyun; Liu, Shuai; Deng, Shuying; Jiao, Tao; Wei, Zhongchao; Wang, Faqiang; Tan, Chunhua; Huang, Xuguang
2018-04-01
In this paper, we numerically investigate the band-stop properties of single- or few-layer doped graphene ribbon arrays operating in the mid-infrared region by the finite-difference time-domain (FDTD) method. A perfect band-stop filter with an extinction ratio (ER) of ∼17 dB, a 3 dB bandwidth of ∼200 nm and a resonance notch located at 6.64 μm can be achieved. Desired working regions can be obtained by tuning the Fermi level (E f ) of the graphene ribbons and the geometrical parameters of the structure. Besides, by tuning the Fermi level of odd or even graphene ribbons with a terminal gate voltage, we can achieve a dual-circuit switch with four on/off state combinations. Furthermore, multiple filter notches can be achieved by stacking few-layer structures, and the filter dips can be dynamically tuned to achieve tunability and selective characteristics by tuning the Fermi level of the graphene ribbons in the system. We believe that our proposal has potential applications in selective filters and active plasmonic switching in the mid-infrared region.
Shrestha, Vivek Raj; Lee, Sang-Shin; Kim, Eun-Soo; Choi, Duk-Yong
2014-01-01
Nanostructure-based color filtering has been considered an attractive replacement for current colorant pigmentation in display technologies, in view of its increased efficiency, ease of fabrication and eco-friendliness. For such structural filtering, iridescence, i.e. the angular dependence of the color, which poses a detrimental barrier to the practical development of high-performance display and sensing devices, should be mitigated. We report on a non-iridescent transmissive structural color filter, fabricated in a large area of 76.2 × 25.4 mm2, taking advantage of a stack of three etalon resonators in dielectric films based on a high-index cavity in amorphous silicon. The proposed filter features a high transmission above 80%, a high excitation purity of 0.93 and non-iridescence over a range of 160°, exhibiting no significant change in the center wavelength, dominant wavelength or excitation purity, which implies no change in hue and saturation of the output color. The proposed structure may find potential applications in large-scale display and imaging sensor systems. PMID:24815530
NASA Astrophysics Data System (ADS)
Salem, Mohamed Shaker; Abdelaleem, Asmaa Mohamed; El-Gamal, Abear Abdullah; Amin, Mohamed
2017-01-01
One-dimensional silicon-based photonic crystals are formed by the electrochemical anodization of silicon substrates in a hydrofluoric acid-based solution using an appropriate current density profile. In order to create a multi-band optical filter, two fabrication approaches are compared and discussed. The first approach utilizes a current profile composed of a linear combination of sinusoidal current waveforms having different frequencies. Each individual frequency in the waveform maps to a characteristic stop band in the reflectance spectrum. The stop bands of the optical filter created by the second approach, on the other hand, are controlled by stacking multiple porous silicon rugate multilayers fabricated under different conditions. The morphology of the resulting optical filters is tuned by controlling the electrolyte composition and the type of the silicon substrate. The reduction of sidelobes arising from interference in the multilayers is achieved by applying an index-matching profile to the anodizing current waveform. In order to stabilize the resulting optical filters against natural oxidation, atomic layer deposition of silicon dioxide on the pore walls is employed.
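The first fabrication approach, a current profile built as a linear combination of sinusoidal waveforms with one frequency per stop band, can be sketched numerically. All amplitudes, periods and offsets below are illustrative, not the paper's fabrication conditions.

```python
import numpy as np

def multiband_current_profile(t, j_offset, components):
    """Anodization current as a linear combination of sinusoids.
    Each (amplitude, period_s) pair adds one sinusoidal component, and each
    component frequency maps to one characteristic stop band in the
    resulting rugate multilayer. Values here are illustrative only."""
    j = np.full_like(t, float(j_offset))
    for amplitude, period in components:
        j += amplitude * np.sin(2 * np.pi * t / period)
    return j

t = np.linspace(0, 600, 6001)  # 10 min anodization time axis (illustrative)
# Two components -> a two-band filter (amplitudes in mA/cm^2, periods in s).
profile = multiband_current_profile(t, j_offset=20.0,
                                    components=[(5.0, 50.0), (5.0, 80.0)])
```

The offset keeps the current strictly positive throughout etching; an index-matching ramp would additionally be applied to this waveform to suppress sidelobes.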
40 CFR 60.2730 - What monitoring equipment must I install and what parameters must I monitor?
Code of Federal Regulations, 2010 CFR
2010-07-01
... continuously operate a bag leak detection system as specified in paragraphs (b)(1) through (8) of this section. (1) You must install and operate a bag leak detection system for each exhaust stack of the fabric filter. (2) Each bag leak detection system must be installed, operated, calibrated, and maintained in a...
NASA Astrophysics Data System (ADS)
Flewelling, Heather
2017-01-01
We present an overview of the first and second Pan-STARRS data releases (DR1 and DR2), and how to use the Published Science Products Subsystem (PSPS) and the Pan-STARRS Science Interface (PSI) to access the images and the catalogs. The data will be available from the STScI MAST archive. The PSPS is an SQL Server database that can be queried via script or web interface. This database has relative photometry and astrometry and object associations, making it easy to do searches across the entire sky, as well as tools to generate light curves of individual objects as a function of time. Both data releases use the 3pi survey, which has 5 filters (g,r,i,z,y) and roughly 60 epochs (12 per filter), and covers 3/4 of the sky: everything north of -30 degrees declination. The first data release (DR1) will contain stack images, mean attribute catalogs and static sky catalogs based off of the stacks. The second data release (DR2) will contain the time domain data. For the images, this will include single exposures that have been detrended and warped. For the catalogs, this will include catalogs of all exposures as well as forced photometry.
Accessing the diffracted wavefield by coherent subtraction
NASA Astrophysics Data System (ADS)
Schwarz, Benjamin; Gajewski, Dirk
2017-10-01
Diffractions have unique properties which are still rarely exploited in common practice. Aside from containing subwavelength information on the scattering geometry or indicating small-scale structural complexity, they provide superior illumination compared to reflections. While diffraction occurs arguably on all scales and in most realistic media, the respective signatures typically have low amplitudes and are likely to be masked by more prominent wavefield components. It has been widely observed that automated stacking acts as a directional filter favouring the most coherent arrivals. In contrast to other works, which commonly aim at steering the summation operator towards fainter contributions, we utilize this directional selection to coherently approximate the most dominant arrivals and subtract them from the data. Supported by additional filter functions which can be derived from wave front attributes gained during the stacking procedure, this strategy allows for a fully data-driven recovery of faint diffractions and makes them accessible for further processing. A complex single-channel field data example recorded in the Aegean sea near Santorini illustrates that the diffracted background wavefield is surprisingly rich and despite the absence of a high channel count can still be detected and characterized, suggesting a variety of applications in industry and academia.
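The core idea, using the stack as a directional filter whose output is subtracted to expose fainter arrivals, can be illustrated on a toy gather. Here a flat event stands in for the dominant coherent arrival and a weak dipping event for the diffracted energy; a real implementation steers the stack with wavefront attributes rather than assuming zero moveout.

```python
import numpy as np

# Toy gather: strong flat reflection plus a weak dipping "diffraction".
n_traces, n_samples = 24, 200
t = np.arange(n_samples)
data = np.zeros((n_traces, n_samples))
data += np.exp(-0.5 * ((t - 80) / 3.0) ** 2)             # strong flat event
for i in range(n_traces):                                 # weak dipping event
    data[i] += 0.1 * np.exp(-0.5 * ((t - 40 - 2 * i) / 3.0) ** 2)

# Stacking across traces approximates the most coherent (flat) arrival;
# subtracting it leaves the fainter wavefield accessible in the residual.
coherent = data.mean(axis=0, keepdims=True)
residual = data - coherent
```

In the residual, the strong flat arrival is largely cancelled while the weak dipping event survives, which is the recovery behavior the coherent-subtraction strategy relies on.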
Results from Two Low Mass Cosmic Ray Experiments Flown on the HASP Platform
NASA Astrophysics Data System (ADS)
Fontenot, R. S.; Hollerman, W. A.; Tittsworth, M.; Fountain, W.; Christl, M.; Thibodaux, C.; Broussard, B. M.
2009-03-01
The High Altitude Student Payload (HASP) program is designed to carry twelve student experiments to an altitude of about 123,000 feet (~37 km). In 2006, students participated in the first HASP launch to measure cosmic ray intensities using traditional film and absorbers. This 10 kg payload flew from Fort Sumner, New Mexico in early September 2006 and was a great success. In 2007, students participated in the second HASP flight to measure the cosmic ray intensity and flux using a traditional film and absorber stack with five layers of optically stimulated luminescent (OSL) dosimeters. Results from both payloads showed that the cosmic ray flux decreases as a function of payload depth. As the cosmic rays go through the stack, they deposit their energy in the payload material. Determining cosmic ray flux is a tedious task. It involves digitizing the film and determining the real cosmic ray density. For the first HASP payload, students used a program known as GlobalLab to count particles. For the second payload, the students decided to use a combination of the GREYCStoration image regularization algorithm, an embossing filter, and a depth-merging filter to reconstruct the paths of the cosmic rays.
NASA Astrophysics Data System (ADS)
Abdelzaher, Tarek; Roy, Heather; Wang, Shiguang; Giridhar, Prasanna; Al Amin, Md. Tanvir; Bowman, Elizabeth K.; Kolodny, Michael A.
2016-05-01
Signal processing techniques such as filtering, detection, estimation and frequency domain analysis have long been applied to extract information from noisy sensor data. This paper describes the exploitation of these signal processing techniques to extract information from social networks, such as Twitter and Instagram. Specifically, we view social networks as noisy sensors that report events in the physical world. We then present a data processing stack for detection, localization, tracking, and veracity analysis of reported events using social network data. We show using a controlled experiment that the behavior of social sources as information relays varies dramatically depending on context. In benign contexts, there is general agreement on events, whereas in conflict scenarios, a significant amount of collective filtering is introduced by conflicted groups, creating a large data distortion. We describe signal processing techniques that mitigate such distortion, resulting in meaningful approximations of actual ground truth, given noisy reported observations. Finally, we briefly present an implementation of the aforementioned social network data processing stack in a sensor network analysis toolkit, called Apollo. Experiences with Apollo show that our techniques are successful at identifying and tracking credible events in the physical world.
Detection of circuit-board components with an adaptive multiclass correlation filter
NASA Astrophysics Data System (ADS)
Diaz-Ramirez, Victor H.; Kober, Vitaly
2008-08-01
A new method for reliable detection of circuit-board components is proposed. The method is based on an adaptive multiclass composite correlation filter. The filter is designed with the help of an iterative algorithm using complex synthetic discriminant functions. The impulse response of the filter contains information needed to localize and classify geometrically distorted circuit-board components belonging to different classes. Computer simulation results obtained with the proposed method are provided and compared with those of known multiclass correlation based techniques in terms of performance criteria for recognition and classification of objects.
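The localization stage of correlation filtering can be sketched with a plain FFT cross-correlation. The composite synthetic-discriminant-function filter design itself is beyond this sketch, so an exact template stands in for the filter's impulse response; the scene and template are toy arrays.

```python
import numpy as np

def correlate2d_fft(scene, template):
    """Circular cross-correlation via FFT. The correlation peak marks the
    location of the template (a stand-in for a composite correlation
    filter's impulse response) in the scene."""
    F = np.fft.fft2(scene)
    H = np.fft.fft2(template, s=scene.shape)  # zero-pad template to scene size
    return np.fft.ifft2(F * np.conj(H)).real

scene = np.zeros((64, 64))
component = np.ones((5, 5))        # toy "circuit-board component"
scene[20:25, 30:35] = component    # place it at row 20, column 30

corr = correlate2d_fft(scene, component)
peak = np.unravel_index(corr.argmax(), corr.shape)
```

In a multiclass composite filter, the shape and sign of the correlation peak would additionally encode the class of the detected, possibly distorted, component.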
Wan, Xiaoqing; Zhao, Chunhui
2017-06-01
As a competitive machine learning algorithm, the stacked sparse autoencoder (SSA) has become highly popular for exploiting high-level features for classification of hyperspectral images (HSIs). In general, in the SSA architecture, the nodes between adjacent layers are fully connected and need to be iteratively fine-tuned during the pretraining stage; however, nodes of previous layers that are further away may be less likely to have a dense correlation to a given node of subsequent layers. Therefore, to reduce the classification error and increase the learning rate, this paper proposes a general framework of locally connected SSA; that is, a biologically inspired local receptive field (LRF) constrained SSA architecture is employed to simultaneously characterize the local correlations of spectral features and extract high-level feature representations of hyperspectral data. In addition, the appropriate receptive field constraint is concurrently updated by measuring the spatial distances from the neighbor nodes to the corresponding node. Finally, an efficient random forest classifier is cascaded to the last hidden layer of the SSA architecture as a benchmark classifier. Experimental results on two real HSI datasets demonstrate that the proposed hierarchical LRF-constrained stacked sparse autoencoder and random forest (SSARF) provides encouraging results compared with other methods, for instance, improvements in overall accuracy in the range of 0.72%-10.87% for the Indian Pines dataset and 0.74%-7.90% for the Kennedy Space Center dataset; moreover, it requires less running time than a similar SSARF-based method.
NASA Astrophysics Data System (ADS)
Modiri, M.; Salehabadi, A.; Mohebbi, M.; Hashemi, A. M.; Masumi, M.
2015-12-01
UAV photogrammetry, which acquires overlapping cover images to achieve the main objectives of photogrammetric mapping, has become a booming approach. Images of the REGGIOLO region in the province of Reggio-Emilia, Italy, taken by a UAV with a non-metric Canon Ixus camera at an average height of 139.42 m, were used to classify urban features. Using the SURE software and the cover images of the study area, a dense point cloud, a DSM and an orthophoto with a spatial resolution of 10 cm were produced. The DTM of the area was generated using an adaptive TIN filtering algorithm. The nDSM was obtained as the difference between the DSM and the DTM and added as a separate feature to the image stack. For feature extraction, the co-occurrence matrix features mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment and correlation were computed for each RGB band of the orthophoto. The classes used for the urban classification were buildings, trees and tall vegetation, grass and short vegetation, paved roads, and impervious surfaces. The impervious surfaces class includes objects such as pavement, cement, cars and roofs. Pixel-based classification with selection of optimal features was performed using a GA-SVM. To achieve classification results with higher accuracy, spectral, textural and shape information of the orthophoto was combined in an object-based approach, using a multi-scale segmentation method. The results of the proposed classification of urban features suggest the suitability of this method for classifying urban scenes from UAV images. The overall accuracy and kappa coefficient of the proposed method were 93.47% and 91.84%, respectively.
A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data
NASA Astrophysics Data System (ADS)
Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei
2013-08-01
We presented a multiresolution hierarchical classification (MHC) algorithm for differentiating ground from non-ground LiDAR point clouds based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with the cell resolution and residual threshold simultaneously increasing from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using thin plate splines (TPS) until no additional ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen benchmark datasets provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission were used to compare the performance of MHC with those of 17 other published filtering methods. Results indicate that MHC, with an average total error of 4.11% and an average Cohen's kappa coefficient of 86.27%, performs better than all the other filtering methods.
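The iterative interpolate-and-classify loop can be sketched as follows. This simplification keeps the residual-threshold hierarchy but omits the per-level cell-resolution handling; the seed rule and thresholds are illustrative, and SciPy's thin-plate-spline interpolator stands in for the paper's TPS surface.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def classify_ground(points, levels):
    """Simplified sketch of multiresolution hierarchical classification:
    at each level a thin plate spline surface is fit through the current
    ground set, and points whose residual to the surface is below the
    level's threshold are added as ground. `points` is (n, 3) with columns
    x, y, z; `levels` is a list of residual thresholds, coarse to fine."""
    xy, z = points[:, :2], points[:, 2]
    ground = z <= np.percentile(z, 25)   # seed: lowest quartile as ground
    for threshold in levels:
        surface = RBFInterpolator(xy[ground], z[ground],
                                  kernel="thin_plate_spline")
        residual = z - surface(xy)
        ground = ground | (np.abs(residual) <= threshold)
    return ground
```

On a flat synthetic terrain with a few raised "buildings", the buildings keep large residuals at every level and are never absorbed into the ground class.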
NASA Astrophysics Data System (ADS)
Brakensiek, Nickolas L.; Kidd, Brian; Mesawich, Michael; Stevens, Don, Jr.; Gotlinsky, Barry
2003-06-01
A design of experiments (DOE) was implemented to show the effects of various point-of-use filters on the coat process. The DOE takes into account the filter media, pore size, and pumping means, such as dispense pressure, time, and spin speed. The coating was executed on a TEL Mark 8 coat track, with an IDI M450 pump and PALL 16-stack Falcon filters. A KLA 2112 set at 0.69 μm pixel size was used to scan the wafers to detect and identify the defects. The process found to maintain a low-defect DUV42P coating, irrespective of the filter or pore size, is a high start pressure, low end pressure, low dispense time, and high dispense speed. The IDI M450 pump can compensate for bubble-type defects by venting them out of the filter before they reach the dispense line, and the variable dispense rate allows the material in the dispense line to slow down at the end of dispense and not create microbubbles in the dispense line or tip. Also, the differential pressure sensor will alarm if the pressure differential across the filter increases over a user-determined setpoint. The pleat design allows more surface area in the same footprint to reduce the differential pressure across the filter and transport defects to the vent tube. The correct low-defect coating process will maximize the advantage of reducing filter pore size or changing the filter media.
Classification of Hyperspectral Data Based on Guided Filtering and Random Forest
NASA Astrophysics Data System (ADS)
Ma, H.; Feng, W.; Cao, X.; Wang, L.
2017-09-01
Hyperspectral images usually consist of more than one hundred spectral bands, which have the potential to provide rich spatial and spectral information. However, the application of hyperspectral data is still challenging due to "the curse of dimensionality". In this context, many techniques that aim to make full use of both the spatial and spectral information are investigated. In order to preserve the geometrical information with fewer spectral bands, we propose a novel method which combines principal component analysis (PCA), guided image filtering and the random forest classifier (RF). In detail, PCA is first employed to reduce the dimension of the spectral bands. Second, the guided image filtering technique is introduced to smooth land objects while preserving their edges. Finally, the features are fed into the RF classifier. To illustrate the effectiveness of the method, we carry out experiments on the popular Indian Pines data set, which was collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. Comparing the proposed method with methods using only PCA or the guided image filter, we find that the proposed method performs better.
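The PCA, guided filtering and RF pipeline can be sketched end to end on a toy data cube. The guided filter below follows He et al.'s box-filter formulation; all sizes and parameters are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def guided_filter(I, p, radius=2, eps=1e-4):
    """Edge-preserving guided filter (He et al.): smooths p, guided by I."""
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I ** 2
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def classify_cube(cube, labels, n_components=3):
    """PCA -> guided filtering of each component -> random forest."""
    h, w, bands = cube.shape
    pcs = PCA(n_components).fit_transform(cube.reshape(-1, bands))
    pcs = pcs.reshape(h, w, n_components)
    guide = pcs[:, :, 0]                       # first PC guides the smoothing
    feats = np.dstack([guided_filter(guide, pcs[:, :, i])
                       for i in range(n_components)])
    X = feats.reshape(-1, n_components)
    rf = RandomForestClassifier(n_estimators=50, random_state=0)
    rf.fit(X, labels.ravel())
    return rf.predict(X).reshape(h, w)
```

For brevity the sketch trains and predicts on the same pixels; a real evaluation would hold out test pixels as in the Indian Pines experiments.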
Optimization of Adaboost Algorithm for Sonar Target Detection in a Multi-Stage ATR System
NASA Technical Reports Server (NTRS)
Lin, Tsung Han (Hank)
2011-01-01
JPL has developed a multi-stage Automated Target Recognition (ATR) system to locate objects in images. First, input images are preprocessed and sent to a Grayscale Optical Correlator (GOC) filter to identify possible regions-of-interest (ROIs). Second, feature extraction operations are performed using Texton filters and Principal Component Analysis (PCA). Finally, the features are fed to a classifier, to identify ROIs that contain the targets. Previous work used the Feed-forward Back-propagation Neural Network for classification. In this project we investigate a version of Adaboost as a classifier for comparison. The version we used is known as GentleBoost. We used the boosted decision tree as the weak classifier. We have tested our ATR system against real-world sonar images using the Adaboost approach. Results indicate an improvement in performance over a single Neural Network design.
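A minimal GentleBoost round, fitting a weighted least-squares regression tree to labels in {-1, +1} with weights exp(-y*F), can be sketched as follows. This illustrates the classifier stage only, not the full ATR pipeline or JPL's exact implementation; round counts and depths are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class GentleBoost:
    """Minimal GentleBoost sketch: at each round, fit a weighted
    least-squares regression tree to labels y in {-1, +1} with weights
    exp(-y * F(x)), then add its prediction to the additive model F."""

    def __init__(self, n_rounds=20, max_depth=2):
        self.n_rounds, self.max_depth = n_rounds, max_depth
        self.stages = []

    def fit(self, X, y):
        F = np.zeros(len(y))
        for _ in range(self.n_rounds):
            w = np.exp(-y * F)          # upweight currently misclassified points
            w /= w.sum()
            tree = DecisionTreeRegressor(max_depth=self.max_depth,
                                         random_state=0)
            tree.fit(X, y, sample_weight=w)
            self.stages.append(tree)
            F += tree.predict(X)        # additive model update
        return self

    def predict(self, X):
        F = sum(t.predict(X) for t in self.stages)
        return np.where(F >= 0, 1, -1)
```

Shallow regression trees here play the role of the boosted decision trees used as weak classifiers in the ATR experiments.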
Decoding grating orientation from microelectrode array recordings in monkey cortical area V4.
Manyakov, Nikolay V; Van Hulle, Marc M
2010-04-01
We propose an invasive brain-machine interface (BMI) that decodes the orientation of a visual grating from spike train recordings made with a 96-microelectrode array chronically implanted into the prelunate gyrus (area V4) of a rhesus monkey. The orientation is decoded irrespective of the grating's spatial frequency. Since pyramidal cells are less prominent in visual areas than in (pre)motor areas, the recordings contain spikes with smaller amplitudes relative to the noise level. Hence, rather than performing spike decoding, feature selection algorithms are applied to extract the required information for the decoder. Two types of feature selection procedures are compared: filter and wrapper. The wrapper is combined with a linear discriminant analysis classifier, and the filter is followed by a radial-basis function support vector machine classifier. In addition, since we have a multiclass classification problem, different methods for combining pairwise classifiers are compared.
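One standard way of combining pairwise classifiers, one-vs-one majority voting, can be sketched as follows. The paper compares several combination methods; this sketch shows only the voting variant, with an RBF-kernel SVM base learner standing in for the filter-then-SVM pipeline, and toy data in place of spike-train features.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

class PairwiseVoting:
    """One-vs-one combination of binary classifiers: one classifier per class
    pair, each trained only on the samples of its two classes; the class
    winning the most pairwise duels is returned."""

    def __init__(self):
        self.models = {}

    def fit(self, X, y):
        for a, b in combinations(np.unique(y), 2):
            mask = (y == a) | (y == b)
            self.models[(a, b)] = SVC(kernel="rbf").fit(X[mask], y[mask])
        return self

    def predict(self, X):
        classes = sorted({c for pair in self.models for c in pair})
        votes = {c: np.zeros(len(X), dtype=int) for c in classes}
        for (a, b), model in self.models.items():
            pred = model.predict(X)
            votes[a] += pred == a       # each pairwise duel casts one vote
            votes[b] += pred == b
        tally = np.stack([votes[c] for c in classes], axis=1)
        return np.array(classes)[tally.argmax(axis=1)]
```

For k grating orientations this trains k(k-1)/2 binary classifiers, which is the pairwise decomposition the multiclass decoder builds on.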
LOFT complex in 1975 awaits renewed mission. Aerial view. Camera ...
LOFT complex in 1975 awaits renewed mission. Aerial view. Camera facing southwesterly. Left to right: stack, entry building (TAN-624), door shroud, duct shroud and filter hatches, dome (painted white), pre-amp building, equipment and piping building, shielded control room (TAN-630), airplane hangar (TAN-629). Date: 1975. INEEL negative no. 75-3690 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Diffraction Seismic Imaging of the Chalk Group Reservoir Rocks
NASA Astrophysics Data System (ADS)
Montazeri, M.; Fomel, S.; Nielsen, L.
2016-12-01
In this study we investigate seismic diffracted waves instead of seismic reflected waves, which are usually much stronger and carry most of the information regarding subsurface structures. The goal of this study is to improve imaging of small subsurface features such as faults and fractures. Moreover, we focus on the Chalk Group, which contains important groundwater resources onshore and oil and gas reservoirs in the Danish sector of the North Sea. Finding optimum seismic velocity models for the Chalk Group and estimating high-quality stacked sections with conventional processing methods are challenging tasks. Here, we filter out as many undesired arrivals as possible before stacking the seismic data. Further, a plane-wave destruction method is applied to the seismic stack in order to dampen the reflection events and thereby enhance the visibility of the diffraction events. After this initial processing, we estimate the optimum migration velocity using diffraction events in order to obtain a better-resolution stack. The results from this study demonstrate how diffraction imaging can be used as an additional tool for improving the images of small-scale features in the Chalk Group reservoir, in particular faults and fractures. Moreover, we discuss the potential of applying this approach in future studies focused on such reservoirs.
Brecko, Jonathan; Mathys, Aurore; Dekoninck, Wouter; Leponce, Maurice; VandenSpiegel, Didier; Semal, Patrick
2014-01-01
In this manuscript we present a focus stacking system, composed of commercial photographic equipment. The system is inexpensive compared to high-end commercial focus stacking solutions. We tested this system and compared the results with several different software packages (CombineZP, Auto-Montage, Helicon Focus and Zerene Stacker). We tested our final stacked picture with a picture obtained from two high-end focus stacking solutions: a Leica MZ16A with DFC500 and a Leica Z6APO with DFC290. Zerene Stacker and Helicon Focus both provided satisfactory results. However, Zerene Stacker gives the user more possibilities in terms of control of the software, batch processing and retouching. The outcome of the test on high-end solutions demonstrates that our approach performs better in several ways. The resolution of the tested extended focus pictures is much higher than those from the Leica systems. The flash lighting inside the Ikea closet creates an evenly illuminated picture, without struggling with filters, diffusers, etc. The largest benefit is the price of the set-up, which is approximately € 3,000, about one-eighth and one-tenth the price of the Leica Z6APO and Leica MZ16A set-ups, respectively. Overall, this enables institutions to purchase multiple solutions or to start digitising the type collection on a large scale even with a small budget. PMID:25589866
Compact "diode-based" multi-energy soft x-ray diagnostic for NSTX.
Tritz, K; Clayton, D J; Stutman, D; Finkenthal, M
2012-10-01
A novel and compact, diode-based, multi-energy soft x-ray (ME-SXR) diagnostic has been developed for the National Spherical Torus Experiment (NSTX). The new edge ME-SXR system tested on NSTX consists of a set of vertically stacked diode arrays, each viewing the plasma tangentially through independent pinholes and filters. The arrays provide an overlapping view of the plasma midplane, allowing simultaneous SXR measurements with coarse sub-sampling of the x-ray spectrum. Using computed x-ray spectral emission data, combinations of filters can provide fast (>10 kHz) measurements of changes in the electron temperature and density profiles, offering a method to "fill in" the gaps of the multi-point Thomson scattering system.
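The idea of inferring electron-temperature changes from signals behind different filters can be sketched with a two-filter ratio. The spectrum model, filter cutoffs and grids below are illustrative assumptions, not the NSTX calibration: a bremsstrahlung-like spectrum ~exp(-E/Te) is integrated above each filter's idealized cutoff, and the signal ratio is inverted for Te via a lookup table.

```python
import numpy as np

def filter_signal(Te_keV, cutoff_keV, E=np.linspace(0.1, 20, 2000)):
    """Detector signal for a spectrum ~exp(-E/Te) behind an ideal high-pass filter."""
    spectrum = np.exp(-E / Te_keV)
    return float(np.sum(spectrum * (E > cutoff_keV)) * (E[1] - E[0]))

def te_from_ratio(ratio, cut_lo=1.0, cut_hi=3.0):
    """Invert the two-filter signal ratio to Te via a precomputed lookup table.
    The ratio is monotone in Te because a hotter spectrum is harder."""
    Te_grid = np.linspace(0.2, 5.0, 500)
    ratios = [filter_signal(t, cut_hi) / filter_signal(t, cut_lo) for t in Te_grid]
    return float(np.interp(ratio, ratios, Te_grid))
```

In the real diagnostic the filter responses are not ideal step functions and impurity line emission matters, which is why the paper relies on computed spectral emission data rather than this closed-form toy.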
Lin, Jiuning; Tong, Qing; Lei, Yu; Xin, Zhaowei; Wei, Dong; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng
2017-03-01
An electrically tunable infrared (IR) filter based on a cascaded liquid-crystal Fabry-Perot (C-LC-FP) structure working in the 3-5 μm wavelength range is presented. The C-LC-FP is constructed by closely stacking two FP microcavities with different depths of 12 and 15 μm, fully filled with nematic LC material. Through continuous wavelength selection of both microcavities, radiation with a high transmittance and narrow bandwidth can pass through the filter. According to the electrically controlled birefringence characteristics of nematic LC molecules, the transmission spectrum can be shifted by applying a dual voltage signal over the C-LC-FP. Compared with common LC-FPs with a single microcavity, the C-LC-FP demonstrates better transmittance peak morphology and spectral selection performance. More specifically, the number of IR transmission peaks can be decreased and their shifted scope widened.
Two-Dimensional Planar Lightwave Circuit Integrated Spatial Filter Array and Method of Use Thereof
NASA Technical Reports Server (NTRS)
Dimov, Fedor (Inventor); Ai, Jun (Inventor)
2015-01-01
A large coherent two-dimensional (2D) spatial filter array (SFA), 30 by 30 or larger, is produced by coupling a 2D planar lightwave circuit (PLC) array with a pair of lenslet arrays at the input and output side. The 2D PLC array is produced by stacking a plurality of chips, each chip with a plurality of straight PLC waveguides. A pupil array is coated onto the focal plane of the lenslet array. The PLC waveguides are produced by deposition of a plurality of silica layers on the silicon wafer, followed by photolithography and reactive ion etching (RIE) processes. A plurality of mode filters are included in the silica-on-silicon waveguide such that the PLC waveguide is transparent to the fundamental mode but higher order modes are attenuated by 40 dB or more.
The Expansion of the Pulmonary Rib Cage during Breath Stacking Is Influenced by Age in Obese Women
Barcelar, Jacqueline de Melo; Aliverti, Andrea; Rattes, Catarina; Ximenes, Maria Eduarda; Campos, Shirley Lima; Brandão, Daniella Cunha; Fregonezi, Guilherme; de Andrade, Armèle Dornelas
2014-01-01
Objective To analyze in obese women the acute effects of the breath stacking technique on thoraco-abdominal expansion. Design and Methods Nineteen obese women (BMI≥30 kg/m2) were evaluated by anthropometry, spirometry and maximal respiratory muscle pressures and successively analyzed by Opto-Electronic Plethysmography and a Wright respirometer during quiet breathing and breath stacking maneuvers, and compared with a group of 15 normal-weight healthy women. The acute effects of the maneuvers were assessed in terms of total and compartmental chest wall volumes at baseline, at the end of the breath stacking maneuver and after the maneuver. Obese subjects were successively classified into two groups according to the response during the maneuver: group 1 = prevalent rib cage expansion, or group 2 = prevalent abdominal expansion. Results Age was significantly lower in group 1 than group 2. When considering the two obese groups, FEV1 was lower and minute ventilation was higher only in group 2 compared to the control group. During breath stacking, inspiratory capacity showed significant differences in obese subjects, with a smaller expansion of the pulmonary rib cage and a greater expansion of the abdomen compared to controls, and also between groups 1 and 2. A significant inverse linear relationship was found between age and inspiratory capacity of the pulmonary rib cage but not of the abdomen. Conclusions In obese women the maximal expansion of the rib cage and abdomen is influenced by age, and the breath stacking maneuver could be a possible therapy for preventing respiratory complications. PMID:25372469
Color sensitive silicon photomultipliers with micro-cell level encoding for DOI PET detectors
NASA Astrophysics Data System (ADS)
Shimazoe, Kenji; Koyama, Akihiro; Takahashi, Hiroyuki; Ganka, Thomas; Iskra, Peter; Marquez Seco, Alicia; Schneider, Florian; Wiest, Florian
2017-11-01
There have been many studies on Depth Of Interaction (DOI) identification for high resolution Positron Emission Tomography (PET) systems, including those on phoswich detectors, double-sided readout, light sharing methods, and wavelength discrimination. The wavelength discrimination method utilizes the difference in wavelength of stacked scintillators and requires a color sensitive photodetector. Here, a new silicon photomultiplier (SiPM) coupled to a color filter (colorSiPM) was designed and fabricated for DOI detection. The fabricated colorSiPM has two anode readouts that are sensitive to blue and green light, respectively. The colorSiPM's response and DOI identification capability for stacked GAGG and LYSO crystals are characterized. The fabricated colorSiPM is sensitive enough to detect the 662 keV peak of a 137Cs source.
Language Classification using N-grams Accelerated by FPGA-based Bloom Filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacob, A; Gokhale, M
N-Gram (n-character sequences in text documents) counting is a well-established technique used in classifying the language of text in a document. In this paper, n-gram processing is accelerated through the use of reconfigurable hardware on the XtremeData XD1000 system. Our design employs parallelism at multiple levels, with parallel Bloom Filters accessing on-chip RAM, parallel language classifiers, and parallel document processing. In contrast to another hardware implementation (the HAIL algorithm) that uses off-chip SRAM for lookup, our highly scalable implementation uses only on-chip memory blocks. Our implementation of end-to-end language classification runs 85x faster than comparable software and 1.45x faster than the competing hardware design.
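The software side of this approach can be sketched as follows: one Bloom filter per language is trained on character n-grams, and a document is assigned to the language whose filter contains the most of its n-grams. This is a minimal pure-Python sketch (the FPGA design instead maps each hash to a parallel on-chip RAM bank); the parameters and sample sentences are illustrative.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions in an m-bit array."""
    def __init__(self, m=8192, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)
    def _hashes(self, s):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{s}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, s):
        for h in self._hashes(s):
            self.bits[h] = 1
    def __contains__(self, s):
        return all(self.bits[h] for h in self._hashes(s))

def ngrams(text, n=3):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def classify(text, filters):
    """Score each language by how many of the document's n-grams its filter contains."""
    scores = {lang: sum(g in bf for g in ngrams(text)) for lang, bf in filters.items()}
    return max(scores, key=scores.get)
```

Bloom filters admit false positives but no false negatives, which only inflates scores slightly and uniformly; this is why the on-chip-RAM variant can replace exact SRAM lookup tables.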
NASA Astrophysics Data System (ADS)
Lu, Xinguo; Chen, Dan
2017-08-01
Traditional supervised classifiers work only with labeled data and neglect the large amount of data that lacks sufficient follow-up information. Consequently, the small sample size limits the design of an appropriate classifier. In this paper, a transductive learning method is addressed that combines a filtering strategy within the transductive framework with a progressive labeling strategy. The progressive labeling strategy does not need to consider the distribution of labeled samples in order to evaluate the distribution of unlabeled samples, and can effectively solve the problem of evaluating the proportion of positive and negative samples in the working set. Our experimental results demonstrate that the proposed technique has great potential in cancer prediction based on gene expression.
Tajiri, Shinya; Tashiro, Mutsumi; Mizukami, Tomohiro; Tsukishima, Chihiro; Torikoshi, Masami; Kanai, Tatsuaki
2017-11-01
Carbon-ion therapy by layer-stacking irradiation for static targets has been practised in clinical treatments. In order to apply this technique to a moving target, disturbances of carbon-ion dose distributions due to respiratory motion have been studied based on the measurement using a respiratory motion phantom, and the margin estimation given by √(Internal margin² + Setup margin²) has been assessed. We assessed the volume in which the variation in the ratio of the dose for a target moving due to respiration relative to the dose for a static target was within 5%. The margins were insufficient for use with layer-stacking irradiation of a moving target, and an additional margin was required. The lateral movement of a target converts to the range variation, as the thickness of the range compensator changes with the movement of the target. Although the additional margin changes according to the shape of the ridge filter, dose uniformity of 5% can be achieved for a spherical target 93 mm in diameter when the upward range variation is limited to 5 mm and the additional margin of 2.5 mm is applied in case of our ridge filter. Dose uniformity in a clinical target largely depends on the shape of the mini-peak as well as on the bolus shape. We have shown the relationship between range variation and dose uniformity. In actual therapy, the upper limit of target movement should be considered by assessing the bolus shape. © The Author 2017. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
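The margin recipe above combines the internal and setup margins in quadrature; treating the extra allowance found necessary for moving targets as a linear add-on is an assumption made here for illustration only.

```python
import math

def combined_margin(internal_mm, setup_mm, additional_mm=0.0):
    """Quadrature sum of internal and setup margins, plus an (assumed linear)
    additional margin of the kind the study found necessary for
    layer-stacking irradiation of moving targets."""
    return math.sqrt(internal_mm ** 2 + setup_mm ** 2) + additional_mm
```

For example, internal and setup margins of 3 mm and 4 mm give a 5 mm quadrature margin, to which the study's 2.5 mm additional margin would be appended.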
Thomas, Evan A; Barstow, Christina K; Rosa, Ghislaine; Majorin, Fiona; Clasen, Thomas
2013-01-01
Remotely reporting electronic sensors offer the potential to reduce bias in monitoring use of environmental health interventions. In the context of a five-month randomized controlled trial of household water filters and improved cookstoves in rural Rwanda, we collected data from intervention households on product compliance using (i) monthly surveys and direct observations by community health workers and environmental health officers, and (ii) sensor-equipped filters and cookstoves deployed for about two weeks in each household. The adoption rate interpreted by the sensors varied from the household reporting: 90.5% of households reported primarily using the intervention stove, while the sensors interpreted 73.2% use, and 96.5% of households reported using the intervention filter regularly, while the sensors interpreted no more than 90.2%. The sensor-collected data estimated use to be lower than conventionally collected data both for water filters (approximately 36% less water volume per day) and cookstoves (approximately 40% fewer uses per week). An evaluation of intrahousehold consistency in use suggests that households are not using their filters or stoves on an exclusive basis, and may be both drinking untreated water at times and using other stoves ("stove-stacking"). These results provide additional evidence that surveys and direct observation may exaggerate compliance with household-based environmental interventions.
DigiLens color sequential filtering for microdisplay-based projection applications
NASA Astrophysics Data System (ADS)
Sagan, Stephen F.; Smith, Ronald T.; Popovich, Milan M.
2000-10-01
Application Specific Integrated Filters (ASIFs), based on a unique holographic polymer dispersed liquid crystal (H-PDLC) material system offering high efficiency, fast switching and low power, are being developed for microdisplay-based projection applications. This new photonics technology, based on H-PDLC materials that can be electrically switched on and off, offers a new approach to color sequential filtering of a white light source for microdisplay-based front and rear projection display applications. Switchable Bragg gratings created in the PDLC are the fundamental building blocks. Combined with the well-defined spectral and angular characteristics of Bragg gratings, these selectable filters can provide a large color gamut and a dynamically adjustable white balance. These switchable Bragg gratings can be reflective or transmissive and in each case can be designed to operate in either additive or subtractive mode. The spectral characteristics of filters made from a stack of these Bragg gratings can be configured for a specific lamp spectrum to give high diffraction efficiency over the broad bandwidths required for an illumination system. When it is necessary to reduce the spectral bandwidth, the properties of reflection Bragg holograms can be used to construct very narrow band high efficiency filters. The basic properties and key benefits of ASIFs in projection displays are reviewed.
Applying machine-learning techniques to Twitter data for automatic hazard-event classification.
NASA Astrophysics Data System (ADS)
Filgueira, R.; Bee, E. J.; Diaz-Doce, D.; Poole, J., Sr.; Singh, A.
2017-12-01
The constant flow of information offered by tweets provides valuable information about all sorts of events at a high temporal and spatial resolution. Over the past year, as part of the GeoSocial project, we have been analyzing geological hazards/phenomena in real time, such as earthquakes, volcanic eruptions, landslides, floods or the aurora, by geo-locating keyword-filtered tweets on a web map. However, not all the filtered tweets are related to hazard/phenomenon events. This work explores two classification techniques for automatic hazard-event categorization based on tweets about the "Aurora". First, tweets were filtered using aurora-related keywords, removing stop words and selecting those written in English. To classify the remaining tweets into "aurora-event" and "no-aurora-event" categories, we compared two state-of-the-art techniques: Support Vector Machine (SVM) and deep Convolutional Neural Network (CNN) algorithms. Both belong to the family of supervised learning algorithms, which make predictions based on a labelled training dataset. Therefore, we created a training dataset by tagging 1200 tweets across the two categories. The general form of SVM separates two classes by a function (kernel). We compared the performance of four different kernels (Linear Regression, Logistic Regression, Multinomial Naïve Bayes and Stochastic Gradient Descent) provided by the Scikit-Learn library, using our training dataset to build the SVM classifier. The results showed that Logistic Regression (LR) achieved the best accuracy (87%). We therefore selected the SVM-LR classifier to categorise a large collection of tweets using the "dispel4py" framework. Later, we developed a CNN classifier, where the first layer embeds words into low-dimensional vectors. The next layer performs convolutions over the embedded word vectors. Results from the convolutional layer are max-pooled into a long feature vector, which is classified using a softmax layer.
The CNN's accuracy is lower (83%) than the SVM-LR's, since the algorithm needs a bigger training dataset to increase its accuracy. We used the TensorFlow framework to apply the CNN classifier to the same collection of tweets. In future work we will modify both classifiers to work with other geo-hazards, use larger training datasets and apply them in real time.
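The supervised workflow described above can be illustrated with a toy stand-in (not the authors' Scikit-Learn or TensorFlow pipelines): a bag-of-words representation of labelled tweets and a logistic-regression classifier trained by batch gradient descent. The sample tweets and vocabulary below are invented for illustration.

```python
import numpy as np

def bag_of_words(texts, vocab=None):
    """Map tweets to term-count vectors over a shared vocabulary."""
    if vocab is None:
        vocab = sorted({w for t in texts for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(texts), len(vocab)))
    for r, t in enumerate(texts):
        for w in t.lower().split():
            if w in index:
                X[r, index[w]] += 1
    return X, vocab

def train_logreg(X, y, lr=0.5, epochs=300):
    """Plain batch gradient descent on the logistic loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        grad = p - y            # gradient of the cross-entropy w.r.t. the logits
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b
```

The example also shows why keyword filtering alone is insufficient: "aurora" appears in both event and non-event tweets, and only the learned weights on the surrounding words separate them.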
Shi, Z.; Tian, G.; Dong, S.; Xia, J.; He, H.
2004-01-01
In a desert area, it is difficult to couple geophones with dry sands. A low-velocity near-surface layer can seriously attenuate high-frequency components of seismic data. Therefore, resolution and signal-to-noise (S/N) ratio of seismic data deteriorate. To enhance resolution and S/N ratio of seismic data, we designed a coupling compensatory inverse filter by using single-trace seismic data from the Seismic Wave Detect System (SWDS) and common receivers under equal conditions. We designed an attenuating compensatory inverse filter by using seismic data from a microseismogram log. Finally, in order to convert a shot gather from common receivers to a shot gather from SWDS, we applied the coupling compensatory inverse filter to the shot gather from common receivers. Then we applied the attenuating compensatory inverse filter to the coupling-compensated stacked seismic data to increase its resolution and S/N ratio. The results show that the resolution of seismic data from common receivers after processing with the coupling compensatory inverse filter is nearly comparable with that of data from SWDS. It is also found that the resolution and S/N ratio were enhanced after the use of the attenuating compensatory inverse filter. From the results, we conclude that the filters can compensate the high-frequency components of seismic data, while the low frequencies remain nearly unchanged.
New Details of the Human Corneal Limbus Revealed With Second Harmonic Generation Imaging.
Park, Choul Yong; Lee, Jimmy K; Zhang, Cheng; Chuck, Roy S
2015-09-01
To report novel findings of the human corneal limbus by using second harmonic generation (SHG) imaging. The corneal limbus was imaged using an inverted two-photon excitation fluorescence microscope, with a Ti:Sapphire laser tuned to 850 nm for two-photon excitation. Backscatter signals of SHG and autofluorescence (AF) were collected through a 425/30-nm emission filter and a 525/45-nm emission filter, respectively. Multiple, consecutive, and overlapping image stacks (z-stacks) were acquired for the corneal limbal area. Two novel collagen structures were revealed by SHG imaging at the limbus: an anterior limbal cribriform layer and presumed anchoring fibers. The anterior limbal cribriform layer is an intertwined reticular collagen architecture just beneath the limbal epithelial niche, located between the peripheral cornea and Tenon's/scleral tissue. Autofluorescence imaging revealed high vascularity in this structure. Central to the anterior limbal cribriform layer, radial strands of collagen were found to connect the peripheral cornea to the limbus. These presumed anchoring fibers contain both collagen and elastin, were found more extensively in the superficial layers than in the deep layers, and were absent in the very deep limbus near Schlemm's canal. By using SHG imaging, new details of the collagen architecture of the human corneal limbal area were elucidated. High resolution images with volumetric analysis revealed two novel collagen structures.
NASA Astrophysics Data System (ADS)
Jeong, Jeong-Won; Kim, Tae-Seong; Shin, Dae-Chul; Do, Synho; Marmarelis, Vasilis Z.
2004-04-01
Recently it was shown that soft tissue can be differentiated with spectral unmixing and detection methods that utilize multi-band information obtained from a High-Resolution Ultrasonic Transmission Tomography (HUTT) system. In this study, we focus on tissue differentiation using the spectral target detection method based on Constrained Energy Minimization (CEM). We have developed a new tissue differentiation method called "CEM filter bank". Statistical inference on the output of each CEM filter of a filter bank is used to make a decision based on the maximum statistical significance rather than the magnitude of each CEM filter output. We validate this method through 3-D inter/intra-phantom soft tissue classification where target profiles obtained from an arbitrary single slice are used for differentiation in multiple tomographic slices. Also spectral coherence between target and object profiles of an identical tissue at different slices and phantoms is evaluated by conventional cross-correlation analysis. The performance of the proposed classifier is assessed using Receiver Operating Characteristic (ROC) analysis. Finally we apply our method to classify tiny structures inside a beef kidney such as Styrofoam balls (~1mm), chicken tissue (~5mm), and vessel-duct structures.
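The CEM filter underlying the proposed filter bank has a well-known closed form: it minimizes the average output energy subject to a unit response to the target spectral signature d, giving w = R⁻¹d / (dᵀR⁻¹d). A NumPy sketch of that core step follows (the paper's statistical-inference stage over the filter-bank outputs is omitted; the data below are synthetic):

```python
import numpy as np

def cem_filter(X, d):
    """Constrained Energy Minimization filter.
    X: (N, B) observed spectra (rows are pixels/voxels); d: (B,) target signature.
    Returns w minimizing the mean of (w @ x)**2 subject to w @ d == 1."""
    R = X.T @ X / len(X)              # sample correlation matrix
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d @ Rinv_d)

def cem_scores(X, d):
    """Filter output per observation; exact matches to d score exactly 1."""
    return X @ cem_filter(X, d)
```

Because the constraint pins the response to the target signature at 1 while suppressing everything correlated with the background, thresholding (or, as in the paper, statistical inference on) the scores separates target-like spectra from the rest.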
Diagnosis of Chronic Kidney Disease Based on Support Vector Machine by Feature Selection Methods.
Polat, Huseyin; Danaei Mehr, Homay; Cetin, Aydin
2017-04-01
As Chronic Kidney Disease progresses slowly, early detection and effective treatment are the only cure to reduce the mortality rate. Machine learning techniques are gaining significance in medical diagnosis because of their classification ability with high accuracy rates. The accuracy of classification algorithms depends on the use of correct feature selection algorithms to reduce the dimension of datasets. In this study, the Support Vector Machine classification algorithm was used to diagnose Chronic Kidney Disease. To diagnose the disease, two essential types of feature selection methods, namely wrapper and filter approaches, were chosen to reduce the dimension of the Chronic Kidney Disease dataset. In the wrapper approach, the classifier subset evaluator with the greedy stepwise search engine and the wrapper subset evaluator with the Best First search engine were used. In the filter approach, the correlation feature selection subset evaluator with the greedy stepwise search engine and the filtered subset evaluator with the Best First search engine were used. The results showed that the Support Vector Machine classifier using the filtered subset evaluator with the Best First search engine has a higher accuracy rate (98.5%) in the diagnosis of Chronic Kidney Disease compared to the other selected methods.
NASA Astrophysics Data System (ADS)
Cui, Binge; Ma, Xiudan; Xie, Xiaoyun; Ren, Guangbo; Ma, Yi
2017-03-01
The classification of hyperspectral images with a few labeled samples is a major challenge which is difficult to meet unless some spatial characteristics can be exploited. In this study, we proposed a novel spectral-spatial hyperspectral image classification method that exploits the spatial autocorrelation of hyperspectral images. First, image segmentation is performed on the hyperspectral image to assign each pixel to a homogeneous region. Second, the visible and infrared bands of the hyperspectral image are partitioned into multiple subsets of adjacent bands, and each subset is merged into one band. Recursive edge-preserving filtering is performed on each merged band, which utilizes the spectral information of neighborhood pixels. Third, the resulting spectral and spatial feature band set is classified using the SVM classifier. Finally, bilateral filtering is performed to remove "salt-and-pepper" noise in the classification result. To preserve the spatial structure of the hyperspectral image, edge-preserving filtering is applied independently before and after the classification process. Experimental results on different hyperspectral images prove that the proposed spectral-spatial classification approach is robust and offers higher classification accuracy than state-of-the-art methods when the number of labeled samples is small.
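The band-partitioning step described above (splitting the spectral axis into subsets of adjacent bands and merging each subset into one band) can be sketched as follows; merging by simple averaging is an assumption here, and the edge-preserving and bilateral filtering stages are not reproduced.

```python
import numpy as np

def merge_band_subsets(cube, n_subsets):
    """Partition the spectral axis into contiguous subsets and average each,
    turning an (H, W, B) hyperspectral cube into an (H, W, n_subsets) stack."""
    H, W, B = cube.shape
    edges = np.linspace(0, B, n_subsets + 1).astype(int)
    return np.stack([cube[:, :, a:b].mean(axis=2)
                     for a, b in zip(edges[:-1], edges[1:])], axis=2)
```

Reducing hundreds of correlated bands to a handful of merged bands is what makes the subsequent per-band spatial filtering and SVM training tractable with few labeled samples.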
Processing of Fear and Anger Facial Expressions: The Role of Spatial Frequency
Comfort, William E.; Wang, Meng; Benton, Christopher P.; Zana, Yossi
2013-01-01
Spatial frequency (SF) components encode a portion of the affective value expressed in face images. The aim of this study was to estimate the relative weight of specific frequency spectrum bandwidth on the discrimination of anger and fear facial expressions. The general paradigm was a classification of the expression of faces morphed at varying proportions between anger and fear images in which SF adaptation and SF subtraction are expected to shift classification of facial emotion. A series of three experiments was conducted. In Experiment 1 subjects classified morphed face images that were unfiltered or filtered to remove either low (<8 cycles/face), middle (12–28 cycles/face), or high (>32 cycles/face) SF components. In Experiment 2 subjects were adapted to unfiltered or filtered prototypical (non-morphed) fear face images and subsequently classified morphed face images. In Experiment 3 subjects were adapted to unfiltered or filtered prototypical fear face images with the phase component randomized before classifying morphed face images. Removing mid frequency components from the target images shifted classification toward fear. The same shift was observed under adaptation condition to unfiltered and low- and middle-range filtered fear images. However, when the phase spectrum of the same adaptation stimuli was randomized, no adaptation effect was observed. These results suggest that medium SF components support the perception of fear more than anger at both low and high level of processing. They also suggest that the effect at high-level processing stage is related more to high-level featural and/or configural information than to the low-level frequency spectrum. PMID:23637687
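The SF filtering used in these experiments can be approximated with a ring mask in the Fourier domain; the cutoffs below are in cycles/image, standing in for the cycles/face units of the paper, and the implementation is an illustrative sketch rather than the authors' stimulus-generation code.

```python
import numpy as np

def sf_bandstop(img, low_cpf, high_cpf):
    """Remove spatial-frequency components with radial frequency between
    low_cpf and high_cpf (cycles/image) via an FFT ring mask."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    r = np.hypot(yy, xx)                       # radial frequency of each bin
    mask = ~((r >= low_cpf) & (r <= high_cpf))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

Low-pass, high-pass and mid-band-removed stimuli like those in Experiment 1 are all special cases of choosing the ring's inner and outer radii.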
Widely tunable Fabry-Perot filter based MWIR and LWIR microspectrometers
NASA Astrophysics Data System (ADS)
Ebermann, Martin; Neumann, Norbert; Hiller, Karla; Gittler, Elvira; Meinig, Marco; Kurth, Steffen
2012-06-01
As is generally known, miniature infrared spectrometers have great potential, e.g., for process and environmental analytics or in medical applications. Many efforts are being made to shrink conventional spectrometers, such as FTIR or grating based devices. A more rigorous approach to miniaturization is the use of MEMS technologies. Based on an established design for the MWIR, new MEMS Fabry-Perot filters and sensors with expanded spectral ranges in the LWIR have been developed. The range 5.5 - 8 μm is particularly suited for the analysis of liquids. A dual-band sensor, which can be simultaneously tuned from 4 - 5 μm and 8 - 11 μm for the measurement of anesthetics and carbon dioxide, has also been developed. A new material system is used to reduce internal stress in the reflector layer stack. Good results in terms of finesse (up to 60) and transmittance (up to 80%) could be demonstrated. The hybrid integration of the filter in a pyroelectric detector results in very compact, robust and cost-effective microspectrometers. FP filters with two moveable reflectors instead of only one significantly reduce the acceleration sensitivity and actuation voltage.
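The finesse and transmittance figures above follow from the ideal Fabry-Perot (Airy) transmission function. The sketch below assumes a lossless cavity with equal mirror reflectance R; real reflector layer stacks have absorption and dispersion, which is why the measured transmittance stays below 100%.

```python
import numpy as np

def fp_transmittance(wavelength_um, gap_um, R, n=1.0):
    """Airy transmission of an ideal lossless Fabry-Perot cavity."""
    delta = 4 * np.pi * n * gap_um / wavelength_um   # round-trip phase
    F = 4 * R / (1 - R) ** 2                         # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(delta / 2) ** 2)

def reflective_finesse(R):
    """Reflective finesse: ratio of free spectral range to peak FWHM."""
    return np.pi * np.sqrt(R) / (1 - R)
```

For example, a mirror reflectance of about 0.95 already yields a finesse of order 60, the figure reported for these MEMS filters; tuning the gap shifts the resonance condition gap = m·λ/2.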
Design of Multilayer Dual-Band BPF and Diplexer with Zeros Implantation Using Suspended Stripline
NASA Astrophysics Data System (ADS)
Ho, Min-Hua; Hsu, Wei-Hong
In this paper, a dual-band bandpass filter (BPF) of multilayer suspended stripline (SSL) structure and an SSL diplexer composed of a low-pass filter (LPF) and a high-pass filter (HPF) are proposed. A bandstop structure creating transmission zeros is adopted in both the BPF and the diplexer, enhancing the signal selectivity of the former and increasing the isolation between the diverting ports of the latter. The dual-band BPF possesses two distinct bandpass structures and a bandstop circuit, all laid on different metallic layers. The metallic layers together with the supporting substrates are vertically stacked to save circuit dimension. The LPF and HPF used in the diplexer are designed by a quasi-lumped approach, in which LC lumped-element circuit models are developed to analyze the filters' characteristics and to emulate their frequency responses. Half-wavelength resonating slots are employed in the diplexer's structure to increase the isolation between its two signal diverting ports. Experiments are conducted to verify the multilayer dual-band BPF and the diplexer design. Agreement is observed between simulation and measurement.
Trends in Correlation-Based Pattern Recognition and Tracking in Forward-Looking Infrared Imagery
Alam, Mohammad S.; Bhuiyan, Sharif M. A.
2014-01-01
In this paper, we review the recent trends and advancements in correlation-based pattern recognition and tracking in forward-looking infrared (FLIR) imagery. In particular, we discuss matched filter-based correlation techniques for target detection and tracking, which are widely used in various real-time applications. We analyze and present test results involving recently reported matched filters such as the maximum average correlation height (MACH) filter and its variants, and the distance classifier correlation filter (DCCF) and its variants. Test results are presented for both single/multiple target detection and tracking using various real-life FLIR image sequences. PMID:25061840
Non-Mutually Exclusive Deep Neural Network Classifier for Combined Modes of Bearing Fault Diagnosis.
Duong, Bach Phi; Kim, Jong-Myon
2018-04-07
The simultaneous occurrence of various types of defects in bearings makes their diagnosis more challenging owing to the resultant complexity of the constituent parts of the acoustic emission (AE) signals. To address this issue, a new approach is proposed in this paper for the detection of multiple combined faults in bearings. The proposed methodology uses a deep neural network (DNN) architecture to effectively diagnose the combined defects. The DNN structure is based on the stacked denoising autoencoder non-mutually exclusive classifier (NMEC) method for combined modes. The NMEC-DNN is trained using data for a single fault and it classifies both single faults and multiple combined faults. The results of experiments conducted on AE data collected through an experimental test-bed demonstrate that the DNN achieves good classification performance with a maximum accuracy of 95%. The proposed method is compared with a multi-class classifier based on support vector machines (SVMs). The NMEC-DNN yields better diagnostic performance in comparison to the multi-class classifier based on SVM. The NMEC-DNN reduces the number of necessary data collections and improves the bearing fault diagnosis performance.
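The key property of a non-mutually exclusive classifier is that several fault labels can be active at once. A minimal sketch of such a decision layer, assuming independent per-mode sigmoid outputs with a fixed threshold (the logits and threshold are illustrative, not the paper's trained network):

```python
import numpy as np

def nme_classify(logits, threshold=0.5):
    """Non-mutually exclusive decision: one independent sigmoid per fault
    mode, so several labels can fire at once (a softmax output would
    instead force exactly one winning class)."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return probs >= threshold

# One output unit per single fault mode (e.g. outer race, inner race,
# roller); a combined defect should activate several units at once.
print(nme_classify([2.3, 1.1, -3.0]))
```

Here sigmoid(2.3) ≈ 0.91 and sigmoid(1.1) ≈ 0.75 both clear the threshold, so the first two fault modes are reported simultaneously while the third is not.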
Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.
2010-01-01
The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284
Traffic sign recognition by color segmentation and neural network
NASA Astrophysics Data System (ADS)
Surinwarangkoon, Thongchai; Nitsuwat, Supot; Moore, Elvin J.
2011-12-01
An algorithm is proposed for traffic sign detection and identification based on color filtering, color segmentation and neural networks. Traffic signs in Thailand are classified by color into four types: prohibitory signs (red or blue), general warning signs (yellow) and construction area warning signs (amber). A color filtering method is first used to detect traffic signs and classify them by type. Then color segmentation methods adapted to each color type are used to extract inner features (e.g., arrows and bars). Finally, neural networks trained to recognize the signs of each color type are used to identify any given traffic sign. Experiments show that the algorithm can improve the accuracy of traffic sign detection and recognition for the traffic signs used in Thailand.
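The first stage, classifying a sign by its dominant color, can be sketched with a simple hue gate. The hue ranges below are illustrative guesses, not the paper's calibrated thresholds:

```python
import colorsys

def sign_color_type(rgb):
    """Crude color-type gate from a sign's dominant RGB color (0-255).

    Hue ranges are illustrative placeholders, not calibrated values.
    """
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    deg = h * 360.0
    if s < 0.2:                      # too gray to classify by color
        return "unclassified"
    if deg < 20 or deg >= 340:
        return "prohibitory (red)"
    if 200 <= deg < 260:
        return "prohibitory (blue)"
    if 45 <= deg < 70:
        return "general warning (yellow)"
    if 20 <= deg < 45:
        return "construction warning (amber)"
    return "unclassified"

print(sign_color_type((220, 30, 30)), sign_color_type((240, 200, 30)))
```

In a full pipeline this gate would run on the dominant color of a detected sign region, and the result would select which type-specific segmentation and neural network to apply.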
NASA Astrophysics Data System (ADS)
Baratin, L. M.; Townend, J.; Chamberlain, C. J.; Savage, M. K.
2015-12-01
Characterising seismicity in the vicinity of the Alpine Fault, a major transform boundary late in its typical earthquake cycle, may provide constraints on the state of stress preceding a large earthquake. Here, we use recently detected tremor and low-frequency earthquakes (LFEs) to examine how slow tectonic deformation is loading the Alpine Fault toward an anticipated major rupture. We work with a continuous seismic dataset collected between 2009 and 2012 from a network of short-period seismometers, the Southern Alps Microearthquake Borehole Array (SAMBA). Fourteen primary LFE templates were used to scan the dataset using a matched-filter technique based on an iterative cross-correlation routine. This method allows the detection of similar signals and establishes LFE families with common hypocenter locations. The detections are then combined for each LFE family using phase-weighted stacking (Thurber et al., 2014) to produce a signal with the highest possible signal-to-noise ratio. We find this method to be successful in increasing the number of LFE detections by roughly 10% in comparison with linear stacking. Our next step is to manually pick polarities on first arrivals of the phase-weighted stacked signals and compute preliminary locations. We are working to estimate LFE focal mechanism parameters and refine the focal mechanism solutions using an amplitude-ratio technique applied to the linear stacks. LFE focal mechanisms should provide new insight into the geometry and rheology of the Alpine Fault and the stress field prevailing in the central Southern Alps.
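Phase-weighted stacking scales the linear stack by the coherence of instantaneous phases across traces, suppressing incoherent noise. A minimal sketch on synthetic traces (the data and parameters are illustrative; the exponent ν = 2 is a common choice):

```python
import numpy as np
from scipy.signal import hilbert

def phase_weighted_stack(traces, nu=2.0):
    """Phase-weighted stack: linear stack scaled by phase coherence**nu."""
    analytic = hilbert(traces, axis=1)
    inst_phase = np.exp(1j * np.angle(analytic))
    coherence = np.abs(inst_phase.mean(axis=0)) ** nu   # in [0, 1]
    return traces.mean(axis=0) * coherence

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 8 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
traces = np.array([signal + 0.5 * rng.standard_normal(t.size)
                   for _ in range(20)])
linear = traces.mean(axis=0)
pws = phase_weighted_stack(traces)
```

Away from the coherent arrival the instantaneous phases are random, so the coherence weight is near zero and the weighted stack damps residual noise far more strongly than a plain average does.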
NASA Astrophysics Data System (ADS)
Shen, Yannan; Istock, André; Zaman, Anik; Woidt, Carsten; Hillmer, Hartmut
2018-05-01
Miniaturization of optical spectrometers can be achieved by Fabry-Pérot (FP) filter arrays. Each FP filter consists of two parallel highly reflecting mirrors and a resonance cavity in between. Owing to the different individual cavity heights, each filter transmits a narrow spectral band (transmission line) at a different wavelength. For fabrication efficiency, plasma-enhanced chemical vapor deposition (PECVD) technology is applied to implement the high-optical-quality distributed Bragg reflectors (DBRs), while substrate conformal imprint lithography (a type of nanoimprint technology) is utilized to achieve the multiple cavities in just a single step. The FP filter array fabricated by nanoimprint, combined with a corresponding detector array, builds a so-called "nanospectrometer". However, the silicon nitride and silicon dioxide stacks deposited by PECVD result in a limited DBR stopband width (i.e., < 100 nm), which then limits the sensing range of the filter arrays. An extension of the spectral range of the filter arrays is therefore desired and is the topic of this investigation. In this work, multiple DBRs with different central wavelengths (λc) are structured, deposited, and combined on a single substrate to enlarge the entire stopband. Cavity arrays are successfully aligned and imprinted over such a terrace-like surface in a single step. With this method, the small chip size of the filter arrays is preserved, and the fabrication procedure for the multiple resonance cavities remains efficient as well. The detecting range of the filter arrays is increased from roughly 50 nm with a single DBR to 163 nm with three different DBRs.
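The cavity-height-to-wavelength mapping underlying such filter arrays follows the ideal Airy transmission of a lossless Fabry-Pérot cavity, with peaks where 2nL = mλ. A minimal sketch (the mirror reflectivity and cavity heights are illustrative, and the wavelengths are visible-range for readability):

```python
import numpy as np

def fp_transmission(wavelength_nm, cavity_nm, reflectivity, n_cav=1.0):
    """Airy transmission of an ideal lossless Fabry-Perot cavity."""
    F = 4 * reflectivity / (1 - reflectivity) ** 2   # coefficient of finesse
    delta = 2 * np.pi * n_cav * cavity_nm / wavelength_nm  # half round-trip phase
    return 1.0 / (1.0 + F * np.sin(delta) ** 2)

# Each cavity height transmits a different line: peaks where 2*n*L = m*lambda.
wl = np.linspace(400, 700, 3001)
for L in (250.0, 275.0):            # two illustrative cavity heights (nm)
    T = fp_transmission(wl, L, reflectivity=0.9)
    print(L, wl[np.argmax(T)])
```

With the two cavity heights above, the transmission peaks fall at 500 nm and 550 nm respectively (m = 1), which is exactly the mechanism that lets an array of cavity heights sample a spectrum.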
Behrens, R; Ambrosi, P
2002-01-01
A few-channel spectrometer for mixed photon, electron and ion radiation fields has been developed. It consists of a front layer of an etched-track detector foil for detecting protons and ions, a stack of PMMA with thermoluminescent detectors at different depths for gaining spectral information about electrons, and a stack of metallic filters with increasing cut-off photon energies, interspersed with thermoluminescent detectors for gaining spectral information about photons. From the readings of the TL detectors, the spectral fluence of the electrons (400 keV to 9 MeV) and photons (20 keV to 2 MeV) can be determined by an unfolding procedure. The spectrometer can be used in pulsed radiation fields with extremely high momentary values of the fluence rate. The design and calibration of the spectrometer are described.
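The unfolding procedure amounts to solving a discretized system, readings = response × fluence, under a non-negativity constraint. A minimal sketch with non-negative least squares, assuming a toy upper-triangular response matrix mimicking filters with increasing cut-off energies (the numbers are illustrative, not calibration data):

```python
import numpy as np
from scipy.optimize import nnls

# Toy response matrix: reading_i = sum_j R[i, j] * fluence[j].
# Filters with increasing cut-off energies respond to progressively
# fewer low-energy bins (values are illustrative, not calibration data).
R = np.array([
    [1.0, 0.8, 0.6, 0.4],
    [0.0, 0.9, 0.7, 0.5],
    [0.0, 0.0, 0.8, 0.6],
    [0.0, 0.0, 0.0, 0.7],
])
true_fluence = np.array([2.0, 1.0, 0.5, 0.25])
readings = R @ true_fluence

# Unfold: recover a non-negative spectrum consistent with the readings.
fluence_est, residual = nnls(R, readings)
print(fluence_est)
```

Real unfolding works with a measured, generally ill-conditioned response matrix and noisy readings, so regularization is usually added; the NNLS call only shows the shape of the inverse problem.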
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushizima, Daniela M.; Bianchi, Andrea G. C.; DeBianchi, Christina
We introduce a computational analysis workflow to assess properties of solid objects using nondestructive imaging techniques that rely on X-ray imaging. The goal is to process and quantify structures from material science sample cross sections. The algorithms can differentiate the porous media (high-density material) from the void (background, low-density media) using a Boolean classifier, so that we can extract features such as volume, surface area, granularity spectrum, and porosity, among others. Our workflow, Quant-CT, leverages several algorithms from ImageJ, such as statistical region merging and the 3D object counter. It also includes schemes for bilateral filtering that use a 3D kernel, for parallel processing of sub-stacks, and for handling over-segmentation using histogram similarities. Quant-CT supports fast user interaction, providing the ability for the user to train the algorithm via subsamples to feed its core algorithms with automated parameterization. The Quant-CT plugin is currently available for testing by personnel at the Advanced Light Source and Earth Sciences Divisions and the Energy Frontier Research Center (EFRC), LBNL, as part of their research on porous materials. The goal is to understand the processes in fluid-rock systems for the geologic sequestration of CO2, and to develop technology for the safe storage of CO2 in deep subsurface rock formations. We describe our implementation and demonstrate our plugin on porous material images. This paper targets end-users, with relevant information for developers to extend its current capabilities.
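The Boolean solid/void classification at the core of such a workflow reduces, in its simplest form, to thresholding the reconstructed gray values and taking the void fraction. A minimal sketch on a synthetic stack (the threshold and geometry are illustrative, not part of Quant-CT):

```python
import numpy as np

def porosity(volume, threshold):
    """Boolean classification of a grayscale stack into solid vs. void.

    Voxels at or above `threshold` count as high-density material;
    porosity is the void fraction of the whole volume.
    """
    solid = volume >= threshold
    return 1.0 - solid.mean()

# Synthetic 3-D stack: bright solid matrix with a dark spherical pore.
z, y, x = np.mgrid[:40, :40, :40]
vol = np.full((40, 40, 40), 200.0)
pore = (z - 20) ** 2 + (y - 20) ** 2 + (x - 20) ** 2 < 10 ** 2
vol[pore] = 20.0
print(porosity(vol, threshold=100))
```

The measured void fraction matches the analytic volume of the embedded sphere to within discretization error; real data would first pass through the bilateral filtering and segmentation stages described above.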
Nonlinear estimation theory applied to orbit determination
NASA Technical Reports Server (NTRS)
Choe, C. Y.
1972-01-01
The development of an approximate nonlinear filter using Martingale theory and appropriate smoothing properties is considered. Both the first-order and the second-order moments were estimated. The filter developed can be classified as a modified Gaussian second-order filter. Its performance was evaluated in a simulated study of the problem of estimating the state of an interplanetary space vehicle during both a simulated Jupiter flyby and a simulated Jupiter orbiter mission. In addition to the modified Gaussian second-order filter, the modified truncated second-order filter was also evaluated in the simulated study. Results obtained with each of these filters were compared with numerical results obtained with the extended Kalman filter, and the performance of each filter was determined by comparison with the actual estimation errors. The simulations were designed to determine the effects of the second-order terms in the dynamic state relations, the observation state relations, and the Kalman gain compensation term. It is shown that the Kalman gain-compensated filter, which includes only the Kalman gain compensation term, is superior to all of the other filters.
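For reference, a generic extended Kalman filter predict/update cycle is sketched below; the paper's modified Gaussian second-order filter additionally carries second-order expansion terms that are not shown here. The toy square-root measurement model is purely illustrative:

```python
import numpy as np

def ekf_step(x, P, f, F_jac, h, H_jac, Q, R, z):
    """One predict/update cycle of an extended Kalman filter."""
    # Predict: propagate the state through the nonlinear dynamics.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update: linearize the measurement model about the prediction.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1-D example: a static state observed through a square-root sensor.
f = lambda x: x
F_jac = lambda x: np.eye(1)
h = lambda x: np.sqrt(x)
H_jac = lambda x: np.array([[0.5 / np.sqrt(x[0])]])
x, P = np.array([4.0]), np.eye(1) * 1.0
Q, R = np.eye(1) * 1e-6, np.eye(1) * 0.01
for z in (3.05, 2.98, 3.01):        # noisy measurements of sqrt(9) = 3
    x, P = ekf_step(x, P, f, F_jac, h, H_jac, Q, R, np.array([z]))
print(x, P)
```

After three updates the estimate has moved from the prior of 4 toward the true state of about 9, and the covariance has contracted accordingly.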
NASA Astrophysics Data System (ADS)
Wutsqa, D. U.; Marwah, M.
2017-06-01
In this paper, we apply a spatial median filter operation to reduce the noise in cervical images yielded by a colposcopy tool. The backpropagation neural network (BPNN) model is applied to the colposcopy images to classify cervical cancer. The classification process requires image feature extraction using a gray-level co-occurrence matrix (GLCM) method to obtain image features that are used as inputs to the BPNN model. The benefit of noise reduction is evaluated by comparing the performance of BPNN models with and without the spatial median filter. The experimental results show that the spatial median filter can improve the accuracy of the BPNN model for cervical cancer classification.
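A spatial median filter replaces each pixel by the median of its neighborhood, which removes impulse-like noise while preserving edges. A minimal sketch using SciPy (the synthetic image and noise level are illustrative):

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(1)
image = np.full((64, 64), 120.0)
image[20:44, 20:44] = 200.0                 # a bright region of interest

# Salt-and-pepper noise: a classic case where median filtering shines.
noisy = image.copy()
mask = rng.random(image.shape) < 0.05
noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())

denoised = median_filter(noisy, size=3)     # 3x3 spatial median
print(np.abs(denoised - image).mean(), np.abs(noisy - image).mean())
```

The mean absolute error drops by more than an order of magnitude while the edges of the bright region survive, which is why a median stage before GLCM feature extraction tends to stabilize the texture features fed to the classifier.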
Matching Matched Filtering with Deep Networks for Gravitational-Wave Astronomy
NASA Astrophysics Data System (ADS)
Gabbard, Hunter; Williams, Michael; Hayes, Fergus; Messenger, Chris
2018-04-01
We report on the construction of a deep convolutional neural network that can reproduce the sensitivity of a matched-filtering search for binary black hole gravitational-wave signals. The standard method for the detection of well-modeled transient gravitational-wave signals is matched filtering. We use only whitened time series of measured gravitational-wave strain as an input, and we train and test on simulated binary black hole signals in synthetic Gaussian noise representative of Advanced LIGO sensitivity. We show that our network can classify signal from noise with a performance that emulates that of matched filtering applied to the same data sets when considering the sensitivity defined by receiver operating characteristic curves.
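Matched filtering correlates the (whitened) data stream against a normalized template and uses the peak correlation as the detection statistic. A minimal sketch with a toy chirp-like template in white Gaussian noise (all parameters are illustrative and unrelated to the LIGO pipelines):

```python
import numpy as np

def matched_filter_snr(data, template):
    """Peak correlation of the data against a unit-norm template."""
    tpl = template - template.mean()
    tpl = tpl / np.linalg.norm(tpl)
    corr = np.correlate(data, tpl, mode="valid")
    return corr.max()

rng = np.random.default_rng(42)
template = np.sin(2 * np.pi * np.linspace(0, 8, 256) ** 1.5)  # toy chirp
noise = rng.standard_normal(4096)
data = noise.copy()
data[1000:1256] += 3.0 * template           # inject a weak signal

with_signal = matched_filter_snr(data, template)
noise_only = matched_filter_snr(noise, template)
print(with_signal, noise_only)
```

Because the template has unit norm, the correlation of pure noise has unit variance per sample, so the injected signal stands out by a wide margin; this is the statistic whose receiver operating characteristic the network above is trained to reproduce.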
Optical monitoring of rugate filters
NASA Astrophysics Data System (ADS)
Lappschies, Marc; Görtz, Björn; Ristau, Detlev
2005-09-01
Rugate filters have a high potential for solving specific design problems in many applications of modern optics and lighting technology. However, the exact manufacture of these gradual layer systems remains a challenge that has not yet been completely solved. One of the prominent approaches for the production of rugate filters is based on independent quartz crystal devices measuring the rate of the different coating materials. As an alternative, optical broadband monitoring has already been qualified for controlling the deposition of complicated non-quarterwave stacks. In the present study, promising results of this deposition control concept as a direct monitoring of rugate filters are presented. In a first step, the continuous change of refractive indices in the graded layers was transformed into a set of discrete homogeneous sub-layers with thickness values of around 5 nm. These discrete layers are realized by defined mixtures of two materials. A database for the dispersion behavior was created for the different mixing ratios and is employed for the production of such quasi-rugate filters. The optical monitor is operated in the routine mode, determining the switching points of the layers. Selected examples are presented for quasi-rugate coatings produced by ion beam sputtering from a movable zone target. Different designs are discussed considering production problems as well as achievable optical properties.
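The discretization step described above, turning a continuous index profile into homogeneous sub-layers of roughly 5 nm, can be sketched as follows, assuming a sinusoidal rugate profile (the index range and period are illustrative, not from the study):

```python
import numpy as np

def discretize_rugate(n_low, n_high, period_nm, total_nm, step_nm=5.0):
    """Sample a sinusoidal rugate index profile into homogeneous sub-layers.

    Each sub-layer gets the profile's index at its midpoint; in practice
    that index is realized as a two-material mixture whose dispersion is
    taken from a database of mixing ratios.
    """
    z = np.arange(0.0, total_nm, step_nm)        # sub-layer start depths
    mid = z + step_nm / 2
    n_mean = (n_low + n_high) / 2
    n_amp = (n_high - n_low) / 2
    indices = n_mean + n_amp * np.sin(2 * np.pi * mid / period_nm)
    return z, indices

z, n = discretize_rugate(n_low=1.46, n_high=2.10,
                         period_nm=180.0, total_nm=1800.0)
print(len(n), n.min(), n.max())
```

Each (depth, index) pair then corresponds to one homogeneous mixture layer whose endpoint the broadband monitor must detect as a switching point during deposition.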
Nonlinear multilayers as optical limiters
NASA Astrophysics Data System (ADS)
Turner-Valle, Jennifer Anne
1998-10-01
In this work we present a non-iterative technique for computing the steady-state optical properties of nonlinear multilayers, and we examine nonlinear multilayer designs for optical limiters. Optical limiters are filters with intensity-dependent transmission designed to curtail the transmission of incident light above a threshold irradiance value in order to protect optical sensors from damage due to intense light. Thin-film multilayers composed of nonlinear materials exhibiting an intensity-dependent refractive index are used as the basis for optical limiter designs in order to enhance the nonlinear filter response by magnifying the electric field in the nonlinear materials through interference effects. The nonlinear multilayer designs considered in this work are based on linear optical interference filter designs, which are selected for their spectral properties and electric field distributions. Quarter-wave stacks and cavity filters are examined for their suitability as sensor protectors and their manufacturability. The underlying non-iterative technique used to calculate the optical response of these filters derives from recognizing that the multi-valued calculation of output irradiance as a function of incident irradiance may be turned into a single-valued calculation of incident irradiance as a function of output irradiance. Finally, the benefits and drawbacks of using nonlinear multilayers for optical limiting are examined and future research directions are proposed.
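The central computational trick, evaluating incident irradiance as a single-valued function of output irradiance, can be illustrated with a toy limiter model. Here a two-photon absorber with forward map I_out = I_in / (1 + q·I_in) stands in for the multilayer; this toy map happens to be invertible in closed form and only demonstrates the direction of the calculation, not the paper's multilayer technique:

```python
import numpy as np

# For many limiter models the forward map I_out(I_in) is awkward or even
# multi-valued, but I_in(I_out) is single-valued. Toy example: a
# two-photon absorber of effective strength q = beta * L.
q = 0.05
i_out = np.linspace(0.0, 15.0, 151)     # must stay below 1/q = 20
i_in = i_out / (1.0 - q * i_out)        # closed-form inversion

# Sanity check: pushing i_in back through the forward map recovers i_out.
roundtrip = i_in / (1.0 + q * i_in)
print(np.max(np.abs(roundtrip - i_out)))
```

Sweeping the output irradiance and computing the required input irradiance traces the limiter's transfer curve without any iteration, which is exactly the reformulation the abstract describes.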
Means for limiting and ameliorating electrode shorting
Van Konynenburg, Richard A.; Farmer, Joseph C.
1999-01-01
A fuse and filter arrangement for limiting and ameliorating electrode shorting in capacitive deionization water purification systems utilizing carbon aerogel, for example. This arrangement limits and ameliorates the effects of conducting particles or debonded carbon aerogel in shorting the electrodes of a system such as a capacitive deionization water purification system. This is important because of the small interelectrode spacing and the finite possibility of debonding or fragmentation of carbon aerogel in a large system. The fuse and filter arrangement electrically protects the entire system from shutting down if a single pair of electrodes is shorted, and mechanically prevents a conducting particle from migrating through the electrode stack and shorting a series of electrode pairs in sequence. It also limits the amount of energy released in a shorting event. The arrangement consists of a set of circuit breakers or fuses, with one fuse or breaker in the power line connected to one electrode of each electrode pair, and a set of screens or filters in the water flow channels between each set of electrode pairs.
An Efficient Conflict Detection Algorithm for Packet Filters
NASA Astrophysics Data System (ADS)
Lee, Chun-Liang; Lin, Guan-Yu; Chen, Yaw-Chung
Packet classification is essential for supporting advanced network services such as firewalls, quality-of-service (QoS), virtual private networks (VPN), and policy-based routing. The rules that routers use to classify packets are called packet filters. If two or more filters overlap, a conflict occurs and leads to ambiguity in packet classification. This study proposes an algorithm that can efficiently detect and resolve filter conflicts using tuple-based search. The time complexity of the proposed algorithm is O(nW+s), and the space complexity is O(nW), where n is the number of filters, W is the number of bits in a header field, and s is the number of conflicts. This study uses synthetic filter databases generated by ClassBench to evaluate the proposed algorithm. Simulation results show that the proposed algorithm achieves better performance than existing conflict detection algorithms in both time and space, particularly for databases with large numbers of conflicts.
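The conflict condition itself, two filters whose match sets intersect while neither is contained in the other, can be sketched with a naive O(n²) pairwise check over prefix-style filters. This only illustrates the definition; the paper's tuple-based algorithm reaches O(nW+s):

```python
def prefixes_overlap(p1, p2):
    """Two binary prefixes overlap iff one is a prefix of the other."""
    n = min(len(p1), len(p2))
    return p1[:n] == p2[:n]

def is_subset(f, g):
    """f is a subset of g: every field of f is at least as specific as,
    and consistent with, the corresponding field of g."""
    return all(len(a) >= len(b) and a[: len(b)] == b for a, b in zip(f, g))

def find_conflicts(filters):
    """A conflict: two filters overlap in every field, but neither one
    is a subset of the other, so neither cleanly takes priority."""
    conflicts = []
    for i in range(len(filters)):
        for j in range(i + 1, len(filters)):
            f, g = filters[i], filters[j]
            overlap = all(prefixes_overlap(a, b) for a, b in zip(f, g))
            if overlap and not is_subset(f, g) and not is_subset(g, f):
                conflicts.append((i, j))
    return conflicts

# (source-prefix, destination-prefix) pairs as bit strings:
filters = [
    ("00", "1"),    # 0: src 00*, dst 1*
    ("0", "10"),    # 1: wider src, narrower dst -> conflicts with 0
    ("11", "0"),    # 2: disjoint source -> no conflict
]
print(find_conflicts(filters))  # → [(0, 1)]
```

Filters 0 and 1 conflict because a packet with source in 00* and destination in 10* matches both, yet each filter is the more specific one in a different field.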
NASA Astrophysics Data System (ADS)
Flewelling, Heather
2018-01-01
On December 19, 2016, Pan-STARRS released the stacked images, mean attributes catalogs, and static sky catalogs for the 3pi survey, in 5 filters (g,r,i,z,y), covering 3/4 of the sky (everything north of -30 degrees in declination). This set of data is called Data Release 1 (DR1), and it is available to all at http://panstarrs.stsci.edu. It contains more than 10 billion objects, 3 billion of which have stack photometry. We give an update on the progress of the forthcoming Data Release 2 (DR2) database, which will provide time-domain catalogs and single exposures for the 3pi survey. This includes 3pi data taken between 2010 and 2014, covering approximately 60 epochs per patch of sky, and includes measurements detected in the single exposures as well as forced photometry measurements (photometry measured on single exposures using the positions of sources detected in the stacks). We also provide information on future releases (DR3 and beyond), which will contain the rest of the 3pi database (specifically, the data products related to difference imaging), as well as the data products for the Medium Deep (MD) survey.
Boston Community Information System 1987-1988 Experimental Test Results
1989-05-01
criteria which users can put in their filter lines and advertisers can target. The users largely regarded BCIS as an effective medium for advertisement... financial service industries. BCIS would be effective for advertisement of: classified advertisements; employment opportunities (as a job mart); books and... of ads that can be filtered for personal interests. I think this could be a very effective advertising method - possibly very profitable. Ads can be
Dashtban, M; Balafar, Mohammadali
2017-03-01
Gene selection is a demanding task for microarray data analysis. The diverse complexity of different cancers makes this issue still challenging. In this study, a novel evolutionary method based on genetic algorithms and artificial intelligence is proposed to identify predictive genes for cancer classification. A filter method was first applied to reduce the dimensionality of the feature space, followed by employing an integer-coded genetic algorithm with dynamic-length genotype, intelligent parameter settings, and modified operators. The algorithmic behaviors, including convergence trends, mutation and crossover rate changes, and running time, were studied, conceptually discussed, and shown to be coherent with literature findings. Two well-known filter methods, Laplacian and Fisher score, were examined considering similarities, the quality of selected genes, and their influence on the evolutionary approach. Several statistical tests concerning choice of classifier, choice of dataset, and choice of filter method were performed, and they revealed significant differences between the performance of different classifiers and filter methods over the datasets. The proposed method was benchmarked on five popular high-dimensional cancer datasets; for each, the top explored genes are reported. Comparing the experimental results with several state-of-the-art methods reveals that the proposed method outperforms previous methods on the DLBCL dataset.
DeepGene: an advanced cancer type classifier based on deep learning and somatic point mutations.
Yuan, Yuchen; Shi, Yi; Li, Changyang; Kim, Jinman; Cai, Weidong; Han, Zeguang; Feng, David Dagan
2016-12-23
With the developments of DNA sequencing technology, large amounts of sequencing data have become available in recent years, providing unprecedented opportunities for advanced association studies between somatic point mutations and cancer types/subtypes, which may contribute to more accurate somatic point mutation based cancer classification (SMCC). However, in existing SMCC methods, issues such as high data sparsity, small sample size, and the use of simple linear classifiers are major obstacles to improving the classification performance. To address these obstacles, we propose DeepGene, an advanced deep neural network (DNN) based classifier that consists of three steps: firstly, clustered gene filtering (CGF) concentrates the gene data by mutation occurrence frequency, filtering out the majority of irrelevant genes; secondly, indexed sparsity reduction (ISR) converts the gene data into indexes of its non-zero elements, thereby significantly suppressing the impact of data sparsity; finally, the data after CGF and ISR are fed into a DNN classifier, which extracts high-level features for accurate classification. Experimental results on our curated TCGA-DeepGene dataset, a reformulated subset of the TCGA dataset containing 12 selected types of cancer, show that CGF, ISR and DNN all contribute to improving the overall classification performance. We further compare DeepGene with three widely adopted classifiers and demonstrate that DeepGene achieves at least 24% improvement in testing accuracy. Based on deep learning and somatic point mutation data, we devise DeepGene, an advanced cancer type classifier that addresses the obstacles in existing SMCC studies. Experiments indicate that DeepGene outperforms the three widely adopted existing classifiers, which is mainly attributed to its deep learning module being able to extract high-level features between combinatorial somatic point mutations and cancer types.
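The indexed sparsity reduction step can be read as replacing a long, mostly-zero mutation vector by a short list of the indexes of its non-zero entries. A minimal sketch of one plausible implementation (the fixed output length and 1-based padding convention are assumptions, not taken from the paper):

```python
import numpy as np

def indexed_sparsity_reduction(gene_vector, max_len):
    """Replace a sparse binary mutation vector by the indexes of its
    non-zero entries, padded/truncated to a fixed length. Padding is 0,
    so real gene indexes are stored 1-based to stay distinguishable."""
    idx = np.flatnonzero(gene_vector) + 1
    out = np.zeros(max_len, dtype=np.int64)
    out[: min(len(idx), max_len)] = idx[:max_len]
    return out

# A 10,000-dimensional sparse sample with 3 mutated genes collapses
# into just 8 numbers, sidestepping the sparsity problem.
sample = np.zeros(10_000)
sample[[7, 512, 9031]] = 1
print(indexed_sparsity_reduction(sample, max_len=8))
```

The DNN then consumes this dense index vector instead of the raw 10,000-dimensional input, which is the sparsity suppression the abstract credits ISR with.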
Enhancement of TEM Data and Noise Characterization by Principal Component Analysis
2010-05-01
include simply thresholding a noise level and ignoring any signal below the chosen value (Pasion and Oldenburg, 2001b), stacking, and median filters... to de-trend the data (Pasion and Oldenburg, 2001a). To date, there has not been a concentrated research effort focused on separating the various... Magnetic soil at Kaho'olawe (and in general) exhibits a t^-1 decay in TEM surveys (Pasion et al., 2002). This signal
Lithographically-Scribed Planar Holographic Optical CDMA Devices and Systems
2007-02-15
operate with quite high refractive index contrast (order 0.5). Thin-film filter devices are viewed as relatively low in chromatic dispersion. We have... stack consists of planar interfaces between materials of refractive index n1 and n2. Let Δn = |n2 - n1| and n̄ = (n1 + n2)/2. The planar interfaces are... index). It may be desirable to have a relatively large refractive index differential when diffractive elements are formed from cladding material at a
Project W-320, 241-C-106 sluicing HVAC calculations, Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, J.W.
1998-08-07
This supporting document has been prepared to make the FDNW calculations for Project W-320 readily retrievable. The report contains the following calculations: Exhaust airflow sizing for Tank 241-C-106; Equipment sizing and selection of recirculation fan; Sizing high-efficiency mist eliminator; Sizing electric heating coil; Equipment sizing and selection of recirculation condenser; Chiller skid system sizing and selection; High-efficiency metal filter shielding input and flushing frequency; and Exhaust skid stack sizing and fan sizing.
NASA Astrophysics Data System (ADS)
Ravkin, Ilya; Temov, Vladimir
1998-04-01
The detection and genetic analysis of fetal cells in maternal blood will permit noninvasive prenatal screening for genetic defects. Applied Imaging has developed and is currently evaluating a system for semiautomatic detection of fetal nucleated red blood cells on slides and acquisition of their DNA probe FISH images. The specimens are blood smears from pregnant women (9 - 16 weeks gestation) enriched for nucleated red blood cells (NRBC). The cells are identified by using labeled monoclonal antibodies directed to different types of hemoglobin chains (gamma, epsilon); the nuclei are stained with DAPI. The Applied Imaging system has been implemented with both Olympus BX and Nikon Eclipse series microscopes which were equipped with transmission and fluorescence optics. The system includes the following motorized components: stage, focus, transmission, and fluorescence filter wheels. A video camera with light integration (COHU 4910) permits low light imaging. The software capabilities include scanning, relocation, autofocusing, feature extraction, facilities for operator review, and data analysis. Detection of fetal NRBCs is achieved by employing a combination of brightfield and fluorescence images of nuclear and cytoplasmic markers. The brightfield and fluorescence images are all obtained with a single multi-bandpass dichroic mirror. A Z-stack of DNA probe FISH images is acquired by moving focus and switching excitation filters. This stack is combined to produce an enhanced image for presentation and spot counting.
Time Series of Images to Improve Tree Species Classification
NASA Astrophysics Data System (ADS)
Miyoshi, G. T.; Imai, N. N.; de Moraes, M. V. A.; Tommaselli, A. M. G.; Näsi, R.
2017-10-01
Tree species classification provides valuable information for forest monitoring and management. The high floristic variation of tree species is a challenging issue in tree species classification because vegetation characteristics change according to the season. To help monitor this complex environment, imaging spectroscopy has been widely applied since the development of miniaturized sensors attached to Unmanned Aerial Vehicles (UAVs). Considering the seasonal changes in forests and the higher spectral and spatial resolution acquired with sensors attached to UAVs, we present the use of time series of images to classify four tree species. The study area is an Atlantic Forest area located in the western part of São Paulo State. Images were acquired in August 2015 and August 2016, generating three data sets: one with only the image spectra of 2015; one with only the image spectra of 2016; and one with the layer stacking of the images from 2015 and 2016. Four tree species were classified using the spectral angle mapper (SAM), spectral information divergence (SID) and random forest (RF). The results showed that SAM and SID caused an overfitting of the data, whereas RF showed better results, and the use of layer stacking improved the classification, achieving a kappa coefficient of 18.26 %.
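Of the three classifiers, the spectral angle mapper is simple enough to sketch directly: each pixel is assigned to the reference spectrum with which it makes the smallest angle, a rule that is insensitive to overall illumination scaling. The toy endmember spectra below are illustrative, not the study's data:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference spectrum."""
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(pixel, references):
    """Assign the class whose reference spectrum makes the smallest angle."""
    angles = [spectral_angle(pixel, r) for r in references]
    return int(np.argmin(angles))

# Toy endmembers for two species; the test pixel is a scaled version of
# the second one, which SAM still classifies correctly because scaling
# does not change the angle between spectra.
refs = [np.array([0.10, 0.30, 0.50]), np.array([0.40, 0.20, 0.10])]
pixel = 2.5 * refs[1] + 0.01
print(sam_classify(pixel, refs))
```

With layer stacking, the 2015 and 2016 spectra of each pixel are simply concatenated into one longer vector before the same angle computation is applied.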
NASA Astrophysics Data System (ADS)
Wyer, P.; Zurek, B.
2017-12-01
Extensive additions to the Royal Dutch Meteorological Institute (KNMI) seismic monitoring network over recent years have yielded corresponding gains in detection of low magnitude seismicity induced by production of the Groningen gas field. A review of the weakest events in the seismic catalog demonstrates that waveforms from individual stations in the 30 x 35 km network area overlap sufficiently for normalized analytic envelopes to be constructively stacked without compensation for moveout, detection of individual station triggers or the need for more advanced approaches such as template matching. This observation opens the possibility of updating the historical catalog to current detection levels without having to implement more computationally expensive steps when reprocessing the legacy continuous data. A more consistent long term catalog would better constrain the frequency-size distribution (Gutenberg-Richter relationship) and provide a richer dataset for calibration of geomechanical and seismological models. To test the viability of a direct stacking approach, normalized waveform envelopes are partitioned by station into two discrete RMS stacks. Candidate seismic events are then identified as simultaneous STA/LTA triggers on both stacks. This partitioning has a minor impact on signal, but avoids the majority of false detections otherwise obtained on a single stack. Undesired detection of anthropogenic sources and earthquakes occurring outside the field can be further minimized by tuning the waveform frequency filters and trigger configuration. After minimal optimization, data from as few as 14 legacy stations are sufficient for robust automatic detection of known events approaching ML0 from the recent catalog. Ongoing work will determine residual false detection rates and whether previously unknown past events can be detected with sensitivities comparable to the modern KNMI catalog.
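The two-stack coincidence trigger described above can be sketched with a classic STA/LTA detector run on each RMS stack and AND-ed together; requiring both half-network stacks to trigger suppresses most false detections. All windows, thresholds and the synthetic envelopes below are illustrative:

```python
import numpy as np

def sta_lta(x, nsta, nlta):
    """Classic STA/LTA ratio on a non-negative envelope."""
    sta = np.convolve(x, np.ones(nsta) / nsta, mode="same")
    lta = np.convolve(x, np.ones(nlta) / nlta, mode="same")
    return sta / np.maximum(lta, 1e-12)

rng = np.random.default_rng(7)
n = 5000
event = np.zeros(n)
event[2500:2600] = 3.0                      # coherent envelope transient

# Two discrete RMS stacks built from disjoint halves of the network:
# the event appears in both, while noise bursts rarely coincide.
stack_a = np.abs(rng.standard_normal(n)) + event
stack_b = np.abs(rng.standard_normal(n)) + event

trig = 2.0
both = (sta_lta(stack_a, 50, 1000) > trig) & (sta_lta(stack_b, 50, 1000) > trig)
print(both.any())
```

A spurious STA/LTA excursion on one stack alone never fires the detector, which is the partitioning benefit the abstract reports over triggering on a single combined stack.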
Multilayer coating of optical substrates by ion beam sputtering
NASA Astrophysics Data System (ADS)
Daniel, M. V.; Demmler, M.
2017-10-01
Ion beam sputtering is well established in research and industry, despite its relatively low deposition rates compared to electron beam evaporation. Typical applications are coatings for precision optics, such as filters, mirrors and beam splitters. Anti-reflective or high-reflective multilayer stacks benefit from the high mobility of the sputtered particles on the substrate surface and the good mechanical characteristics of the layers. This work presents the basic route from single-layer optimization of reactive ion beam sputtered Ta2O5 and SiO2 thin films towards complex multilayer stacks for high-reflective mirrors and anti-reflective coatings. To this end, films were deposited using different oxygen flows into the deposition chamber. Afterwards, mechanical (density, stress, surface morphology, crystalline phases) and optical properties (reflectivity, absorption and refractive index) were characterized. This knowledge was then used to deposit a multilayer coating for a high-reflective mirror.
Bio-Inspired Asynchronous Pixel Event Tricolor Vision Sensor.
Lenero-Bardallo, Juan Antonio; Bryn, D H; Hafliger, Philipp
2014-06-01
This article investigates the potential of the first ever prototype of a vision sensor that combines tricolor stacked photo diodes with the bio-inspired asynchronous pixel event communication protocol known as Address Event Representation (AER). The stacked photo diodes are implemented in a 22 × 22 pixel array in a standard STM 90 nm CMOS process. Dynamic range is larger than 60 dB and the pixel fill factor is 28%. The pixels employ either simple pulse frequency modulation (PFM) or a Time-to-First-Spike (TFS) mode. A heuristic linear combination of the chip's inherent pseudo colors serves to approximate RGB color representation. Furthermore, the sensor outputs can be processed to represent the radiation in the near infrared (NIR) band without employing external filters, and to color-encode direction of motion due to an asymmetry in the update rates of the different diode layers.
NASA Astrophysics Data System (ADS)
Qian, Kun; Zhou, Huixin; Rong, Shenghui; Wang, Bingjian; Cheng, Kuanhong
2017-05-01
Infrared small target tracking plays an important role in applications including military reconnaissance, early warning and terminal guidance. In this paper, an effective algorithm based on Singular Value Decomposition (SVD) and an improved Kernelized Correlation Filter (KCF) is presented for infrared small target tracking. Firstly, the strength of the SVD-based step is that it takes advantage of the target's global information to obtain a background estimation of an infrared image. A dim target is enhanced by subtracting the continually updated background estimate from the original image. Secondly, the KCF algorithm is combined with a Gaussian Curvature Filter (GCF) to eliminate the excursion problem. The GCF is adopted to preserve edges and suppress noise in the base sample of the KCF algorithm, helping to calculate the classifier parameters for a small target. Finally, the target position is estimated from a response map, which is obtained via the kernelized classifier. Experimental results demonstrate that the presented algorithm performs favorably in terms of efficiency and accuracy, compared with several state-of-the-art algorithms.
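The background-estimation step can be illustrated with a toy numpy sketch (not the paper's implementation; the rank, scene and target here are arbitrary choices): a truncated-SVD, low-rank reconstruction of the image captures the smooth background, and subtracting it enhances the dim point target.

```python
import numpy as np

def svd_background(image, rank=2):
    """Estimate the background as a truncated-SVD (low-rank) reconstruction."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Synthetic infrared scene: smooth low-rank background + one dim point target.
rng = np.random.default_rng(1)
y, x = np.mgrid[0:64, 0:64]
background = 100 + 0.5 * x + 0.3 * y          # smooth gradient (rank-2 structure)
image = background + rng.normal(0, 0.2, (64, 64))
image[40, 25] += 8.0                          # dim small target

residual = image - svd_background(image, rank=2)
target = np.unravel_index(np.argmax(residual), residual.shape)  # target location
```

The low-rank model absorbs the dominant smooth structure, so the target survives in the residual; a tracker like the paper's KCF would then operate on this enhanced image.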
The Use of Fuzzy Set Classification for Pattern Recognition of the Polygraph
1993-12-01
actual feature extraction was done, it was decided to use the K-nearest neighbor (KNN) the data was preprocessed. The electrocardiogram classifier in...showing heart pulse, and a low frequency not known beforehand, and the KNN classifier does not component showing blood volume. The derivative of...the characteristics of the conventional KNN these six derived signals were detrended and filtered, classification method is that it assigns each
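The excerpt above is fragmentary, but the classifier it references is the standard K-nearest-neighbor rule: assign each sample the majority label among its K closest training points. A minimal numpy sketch (illustrative only; the report's polygraph features and preprocessing are not reproduced):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test point by majority vote among its k nearest neighbors."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)      # Euclidean distances
        nearest = y_train[np.argsort(dists)[:k]]         # labels of k closest
        values, counts = np.unique(nearest, return_counts=True)
        preds.append(values[np.argmax(counts)])          # majority vote
    return np.array(preds)

# Two well-separated clusters as a toy training set.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [-0.1, 0.1],
                    [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
y_train = np.array([0, 0, 0, 1, 1, 1])
pred = knn_predict(X_train, y_train, np.array([[0.05, 0.05], [5.05, 5.0]]), k=3)
```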
NASA Astrophysics Data System (ADS)
Krippner, Wolfgang; Wagner, Felix; Bauer, Sebastian; Puente León, Fernando
2017-06-01
Using appropriately designed spectral filters makes it possible to optically determine material abundances. While an infinite number of possible spectral filters exists, we take advantage of neural networks to derive spectral filters that lead to precise estimations. To overcome some drawbacks that regularly affect the determination of material abundances from hyperspectral data, we incorporate the spectral variability of the raw materials into the training of the considered neural networks. As a main result, we successfully classify quantized material abundances optically. Thus, the main part of the high computational load associated with the use of neural networks is avoided. In addition, the derived material abundances become invariant to spatially varying illumination intensity, a remarkable benefit in comparison with spectral filters based on, for instance, the Moore-Penrose pseudoinverse.
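The pseudoinverse baseline the authors compare against can be sketched as linear unmixing: with the endmember spectra as columns of E, abundances are estimated as a = E⁺s. A toy numpy example with made-up spectra; it also shows the illumination-scaling sensitivity that the abstract says the learned filters avoid:

```python
import numpy as np

rng = np.random.default_rng(2)
n_bands, n_materials = 50, 3
E = rng.uniform(0.1, 1.0, (n_bands, n_materials))   # endmember spectra (columns)
a_true = np.array([0.5, 0.3, 0.2])                   # true material abundances
s = E @ a_true                                       # observed mixed spectrum

E_pinv = np.linalg.pinv(E)                           # Moore-Penrose pseudoinverse
a_est = E_pinv @ s                                   # recovered abundances

# Spatially varying illumination scales the spectrum and hence the estimate,
# which is exactly the invariance problem the paper addresses:
a_scaled = E_pinv @ (0.5 * s)
```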
Detection of urban expansion in an urban-rural landscape with multitemporal QuickBird images
Lu, Dengsheng; Hetrick, Scott; Moran, Emilio; Li, Guiying
2011-01-01
Accurately detecting urban expansion with remote sensing techniques is a challenge due to the complexity of urban landscapes. This paper explored methods for detecting urban expansion with multitemporal QuickBird images in Lucas do Rio Verde, Mato Grosso, Brazil. Different techniques, including image differencing, principal component analysis (PCA), and comparison of impervious surface images classified with the matched filtering method, were examined for urbanization detection. An impervious surface image classified with the hybrid method was used to refine the urbanization detection results. As a comparison, the original multispectral image and segmentation-based mean-spectral images were both used during the detection of urbanization. This research indicates that the comparison of classified impervious surface images based on the matched filtering method provides the best change detection performance, followed by the image differencing method based on segmentation-based mean-spectral images. PCA is not a good method for urban change detection in this study. Shadows and high spectral variation within the impervious surfaces represent major challenges to the detection of urban expansion when high spatial resolution images are used. PMID:21799706
Non-Mutually Exclusive Deep Neural Network Classifier for Combined Modes of Bearing Fault Diagnosis
Kim, Jong-Myon
2018-01-01
The simultaneous occurrence of various types of defects in bearings makes their diagnosis more challenging owing to the resultant complexity of the constituent parts of the acoustic emission (AE) signals. To address this issue, a new approach is proposed in this paper for the detection of multiple combined faults in bearings. The proposed methodology uses a deep neural network (DNN) architecture to effectively diagnose the combined defects. The DNN structure is based on the stacked denoising autoencoder non-mutually exclusive classifier (NMEC) method for combined modes. The NMEC-DNN is trained using data for a single fault and it classifies both single faults and multiple combined faults. The results of experiments conducted on AE data collected through an experimental test-bed demonstrate that the DNN achieves good classification performance with a maximum accuracy of 95%. The proposed method is compared with a multi-class classifier based on support vector machines (SVMs). The NMEC-DNN yields better diagnostic performance in comparison to the multi-class classifier based on SVM. The NMEC-DNN reduces the number of necessary data collections and improves the bearing fault diagnosis performance. PMID:29642466
Esteban, Santiago; Rodríguez Tablado, Manuel; Peper, Francisco; Mahumud, Yamila S; Ricci, Ricardo I; Kopitowski, Karin; Terrasa, Sergio
2017-01-01
Precision medicine requires extremely large samples. Electronic health records (EHR) are thought to be a cost-effective source of data for that purpose. Phenotyping algorithms help reduce classification errors, making EHR a more reliable source of information for research. Four algorithm development strategies for classifying patients according to their diabetes status (diabetic; non-diabetic; inconclusive) were tested: one codes-only algorithm; one Boolean algorithm; four statistical learning algorithms; and six stacked generalization meta-learners. The best performing algorithms within each strategy were tested on the validation set. The stacked generalization algorithm yielded the highest Kappa coefficient in the validation set (0.95; 95% CI 0.91, 0.98). The implementation of these algorithms allows data from thousands of patients to be exploited accurately, greatly reducing the costs of constructing retrospective cohorts for research.
A stacking ensemble learning framework for annual river ice breakup dates
NASA Astrophysics Data System (ADS)
Sun, Wei; Trevor, Bernard
2018-06-01
River ice breakup dates (BDs) are not merely a proxy indicator of climate variability and change, but a direct concern in the management of local ice-caused flooding. A stacking ensemble learning framework for annual river ice BDs was developed, with two levels of components: member models and combining models. The member models describe the relations between BD and its affecting indicators; the combining models link the BDs predicted by each member model with the observed BD. In particular, Bayesian regularization back-propagation artificial neural networks (BRANN) and adaptive neuro-fuzzy inference systems (ANFIS) were employed as both member and combining models. The candidate combining models also included the simple average method (SAM). The input variables for the member models were selected by a hybrid filter-and-wrapper method. The performances of these models were examined using leave-one-out cross validation. As the largest unregulated river in Alberta, Canada, with ice jams frequently occurring in the vicinity of Fort McMurray, the Athabasca River at Fort McMurray was selected as the study area. The breakup dates and candidate affecting indicators in 1980-2015 were collected. The results showed that the BRANN member models generally outperformed the ANFIS member models, with better performance and simpler structures. The difference between the R and MI rankings of inputs in the optimal member models may imply that a linear-correlation-based filter method is feasible for generating a range of candidate inputs for further screening through other wrapper or embedded IVS methods. The SAM and BRANN combining models generally outperformed all member models. The optimal SAM combining model combined two BRANN member models and improved upon them in terms of average squared error by 14.6% and 18.1%, respectively.
In this study, for the first time, the stacking ensemble learning was applied to forecasting of river ice breakup dates, which appeared promising for other river ice forecasting problems.
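The two-level structure above (member models whose predictions feed a combining model) can be sketched with plain least-squares members and a least-squares combiner. This is a minimal illustration, not the BRANN/ANFIS models of the study, and a proper stacking setup would fit the combiner on held-out (e.g., leave-one-out) member predictions rather than in-sample ones:

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares with an intercept column."""
    Xb = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict_linear(w, X):
    return np.column_stack([X, np.ones(len(X))]) @ w

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))                        # candidate indicators
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=200)

# Level 1: two member models, each seeing a different subset of indicators.
w1 = fit_linear(X[:, :2], y)
w2 = fit_linear(X[:, 2:], y)
p1, p2 = predict_linear(w1, X[:, :2]), predict_linear(w2, X[:, 2:])

# Level 2: a combining model over the members' predictions, plus the
# simple-average combiner (SAM) for comparison.
P = np.column_stack([p1, p2])
wc = fit_linear(P, y)
p_stack = predict_linear(wc, P)
p_avg = P.mean(axis=1)

mse = lambda p: float(np.mean((y - p) ** 2))
```

On the training data the fitted combiner can never do worse than any single member (or their average), since each of those is a special case of its linear combination; the study's leave-one-out evaluation tests whether that advantage survives out of sample.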
Detection and analysis of microseismic events using a Matched Filtering Algorithm (MFA)
NASA Astrophysics Data System (ADS)
Caffagni, Enrico; Eaton, David W.; Jones, Joshua P.; van der Baan, Mirko
2016-07-01
A new Matched Filtering Algorithm (MFA) is proposed for detecting and analysing microseismic events recorded by downhole monitoring of hydraulic fracturing. This method requires a set of well-located template (`parent') events, which are obtained using conventional microseismic processing and selected on the basis of high signal-to-noise (S/N) ratio and representative spatial distribution of the recorded microseismicity. Detection and extraction of `child' events are based on stacked, multichannel cross-correlation of the continuous waveform data, using the parent events as reference signals. The location of a child event relative to its parent is determined using an automated process, by rotation of the multicomponent waveforms into the ray-centred co-ordinates of the parent and maximizing the energy of the stacked amplitude envelope within a search volume around the parent's hypocentre. After correction for geometrical spreading and attenuation, the relative magnitude of the child event is obtained automatically using the ratio of stacked envelope peak with respect to its parent. Since only a small number of parent events require interactive analysis such as picking P- and S-wave arrivals, the MFA approach offers the potential for significant reduction in effort for downhole microseismic processing. Our algorithm also facilitates the analysis of single-phase child events, that is, microseismic events for which only one of the S- or P-wave arrivals is evident due to unfavourable S/N conditions. A real-data example using microseismic monitoring data from four stages of an open-hole slickwater hydraulic fracture treatment in western Canada demonstrates that a sparse set of parents (in this case, 4.6 per cent of the originally located events) yields a significant (more than fourfold) increase in the number of located events compared with the original catalogue.
Moreover, analysis of the new MFA catalogue suggests that this approach leads to more robust interpretation of the induced microseismicity and novel insights into dynamic rupture processes based on the average temporal (foreshock-aftershock) relationship of child events to parents.
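The core of the detection step, normalized cross-correlation of a parent template against continuous data, can be sketched as follows. This is a single-channel numpy toy, not the multichannel, multicomponent MFA of the paper; the waveform shape, noise level and event position are arbitrary:

```python
import numpy as np

def normalized_xcorr(data, template):
    """Sliding normalized cross-correlation of a template over continuous data."""
    m = len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)
    out = np.empty(len(data) - m + 1)
    for i in range(len(out)):
        w = data[i:i + m]
        w = (w - w.mean()) / (w.std() + 1e-12)       # per-window normalization
        out[i] = np.dot(w, t) / m                    # correlation coefficient
    return out

rng = np.random.default_rng(4)
template = np.sin(2 * np.pi * np.arange(100) / 20) * np.hanning(100)  # 'parent'

data = rng.normal(0, 0.5, 5000)                      # continuous noise record
data[3000:3100] += 1.2 * template                    # buried 'child' event

cc = normalized_xcorr(data, template)
child_onset = int(np.argmax(cc))                     # detection at the peak
peak = float(cc[child_onset])
```

In the MFA such correlograms are computed per channel and stacked before thresholding, which is what lets children well below the single-channel detection level emerge.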
NASA Astrophysics Data System (ADS)
Setyan, Ari; Patrick, Michael; Wang, Jing
2017-10-01
A field campaign has been performed in two municipal solid waste incineration (MSWI) plants in Switzerland, at Hinwil (ZH) and Giubiasco (TI). The aim was to measure airborne pollutants at different locations of the abatement systems (including those released from the stacks into the atmosphere) and at a near-field (∼1 km) downwind site, in order to assess the efficiency of the abatement systems and the environmental impact of these plants. During this study, we measured the particle number concentration with a condensation particle counter (CPC), and the size distribution with a scanning mobility particle sizer (SMPS) and an aerodynamic particle sizer (APS). We also sampled particles on filters for subsequent analyses of the morphology, size and elemental composition with a scanning electron microscope coupled to an energy dispersive X-ray spectroscope (SEM/EDX), and of water soluble ions by ion chromatography (IC). Finally, volatile organic compounds (VOCs) were sampled on adsorbing cartridges and analyzed by thermal desorption-gas chromatography/mass spectrometry (TD-GC/MS), and a portable gas analyzer was used to monitor NO, SO2, CO, CO2, and O2. The particle concentration decreased significantly at two locations of the plants: at the electrostatic precipitator and the bag-house filter. The particle concentrations measured at the stacks were very low (<100 #/cm3), underscoring the efficiency of the abatement systems of the two plants. At Hinwil, particles sampled at the stack were mainly composed of NaCl and KCl, two salts known to be involved in the corrosion process in incinerators. At Giubiasco, no significant differences were observed for the morphology and chemical composition of the particles collected in the ambient background and at the downwind site, suggesting that the incineration plant released very limited amounts of particles to the surrounding areas.
Palmprint authentication using multiple classifiers
NASA Astrophysics Data System (ADS)
Kumar, Ajay; Zhang, David
2004-08-01
This paper investigates the performance improvement for palmprint authentication using multiple classifiers. The proposed methods on personal authentication using palmprints can be divided into three categories: appearance-, line-, and texture-based. A combination of these approaches can be used to achieve higher performance. We propose to simultaneously extract palmprint features with PCA, line detectors and Gabor filters and to combine their corresponding matching scores. This paper also investigates the comparative performance of simple combination rules and a hybrid fusion strategy for achieving performance improvement. Our experimental results on a database of 100 users demonstrate the usefulness of such an approach over those based on individual classifiers.
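Score-level fusion of multiple matchers is typically done by normalizing each matcher's scores and applying a simple combination rule such as the sum rule. A small illustration with made-up numbers (hypothetical scores, not the paper's data): each individual matcher misranks one genuine/impostor pair, but the fused scores separate the classes.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matching scores to [0, 1] before fusion."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def sum_rule(score_matrix):
    """Sum-rule fusion: average the normalized scores of all matchers."""
    normed = np.array([min_max_normalize(row) for row in score_matrix])
    return normed.mean(axis=0)

# Hypothetical matching scores for 4 probe attempts (first two genuine,
# last two impostor) from three matchers: PCA, line, and Gabor features.
pca_scores   = [0.80, 0.55, 0.60, 0.20]   # confuses attempts 2 and 3
line_scores  = [0.70, 0.55, 0.30, 0.65]   # confuses attempts 2 and 4
gabor_scores = [0.90, 0.85, 0.40, 0.35]   # ranks correctly on its own

fused = sum_rule([pca_scores, line_scores, gabor_scores])
```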
Memari, Nogol; Ramli, Abd Rahman; Bin Saripan, M Iqbal; Mashohor, Syamsiah; Moghbel, Mehrdad
2017-01-01
The structure and appearance of the blood vessel network in retinal fundus images is an essential part of diagnosing various problems associated with the eyes, such as diabetes and hypertension. In this paper, an automatic retinal vessel segmentation method utilizing matched filter techniques coupled with an AdaBoost classifier is proposed. The fundus image is enhanced using morphological operations, the contrast is increased using the contrast limited adaptive histogram equalization (CLAHE) method, and the inhomogeneity is corrected using a Retinex approach. Then, the blood vessels are enhanced using a combination of B-COSFIRE and Frangi matched filters. From this preprocessed image, different statistical features are computed on a pixel-wise basis and used in an AdaBoost classifier to extract the blood vessel network inside the image. Finally, the segmented images are postprocessed to remove the misclassified pixels and regions. The proposed method was validated using the publicly accessible Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE) and Child Heart and Health Study in England (CHASE_DB1) datasets commonly used for determining the accuracy of retinal vessel segmentation methods. The accuracy of the proposed segmentation method was comparable to other state-of-the-art methods while being very close to the manual segmentation provided by the second human observer, with an average accuracy of 0.972, 0.951 and 0.948 on the DRIVE, STARE and CHASE_DB1 datasets, respectively.
A method for detecting fungal contaminants in wall cavities.
Spurgeon, Joe C
2003-01-01
This article describes a practical method for detecting the presence of both fungal spores and culturable fungi in wall cavities. Culturable fungi were collected in 25 mm cassettes containing 0.8 microm mixed cellulose ester filters using aggressive sampling conditions. Both culturable fungi and fungal spores were collected in modified slotted-disk cassettes. The sample volume was 4 L. The filters were examined microscopically and dilution plated onto multiple culture media. Collecting airborne samples in filter cassettes was an effective method for assessing wall cavities for fungal contaminants, especially because this method allowed the sample to be analyzed by both microscopy and culture media. Assessment criteria were developed that allowed the sample results to be used to classify wall cavities as either uncontaminated or contaminated. As a criterion, wall cavities with concentrations of culturable fungi below the limit of detection (LOD) were classified as uncontaminated, whereas those cavities with detectable concentrations of culturable fungi were classified as contaminated. A total of 150 wall cavities was sampled as part of a field project. The concentrations of culturable fungi were below the LOD in 34% of the samples, whereas Aspergillus and/or Penicillium were the only fungal genera detected in 69% of the samples in which culturable fungi were detected. Spore counting resulted in the detection of Stachybotrys-like spores in 25% of the samples that were analyzed, whereas Stachybotrys chartarum colonies were only detected on 2% of malt extract agar plates and on 6% of corn meal agar plates.
Mixing apples with oranges: Visual attention deficits in schizophrenia.
Caprile, Claudia; Cuevas-Esteban, Jorge; Ochoa, Susana; Usall, Judith; Navarra, Jordi
2015-09-01
Patients with schizophrenia usually present cognitive deficits. We investigated possible anomalies at filtering out irrelevant visual information in this psychiatric disorder. Associations between these anomalies and positive and/or negative symptomatology were also addressed. A group of individuals with schizophrenia and a control group of healthy adults performed a Garner task. In Experiment 1, participants had to rapidly classify visual stimuli according to their colour while ignoring their shape. These two perceptual dimensions are reported to be "separable" by visual selective attention. In Experiment 2, participants classified the width of other visual stimuli while trying to ignore their height. These two visual dimensions are considered as being "integral" and cannot be attended separately. While healthy perceivers were, in Experiment 1, able to exclusively respond to colour, an irrelevant variation in shape increased colour-based reaction times (RTs) in the group of patients. In Experiment 2, RTs when classifying width increased in both groups as a consequence of perceiving a variation in the irrelevant dimension (height). However, this interfering effect was larger in the group of schizophrenic patients than in the control group. Further analyses revealed that these alterations in filtering out irrelevant visual information correlated with positive symptoms in PANSS scale. A possible limitation of the study is the relatively small sample. Our findings suggest the presence of attention deficits in filtering out irrelevant visual information in schizophrenia that could be related to positive symptomatology. Copyright © 2015 Elsevier Ltd. All rights reserved.
Modified Coaxial Probe Feeds for Layered Antennas
NASA Technical Reports Server (NTRS)
Fink, Patrick W.; Chu, Andrew W.; Dobbins, Justin A.; Lin, Greg Y.
2006-01-01
In a modified configuration of a coaxial probe feed for a layered printed-circuit antenna (e.g., a microstrip antenna), the outer conductor of the coaxial cable extends through the thickness of at least one dielectric layer and is connected to both the ground-plane conductor and a radiator-plane conductor. This modified configuration simplifies the incorporation of such radio-frequency integrated circuits as power dividers, filters, and low-noise amplifiers. It also simplifies the design and fabrication of stacked antennas with aperture feeds.
LOFT. Mobile test building (TAN624) is recycled from ANP program ...
LOFT. Mobile test building (TAN-624) is recycled from ANP program for placement before LOFT containment building door. It has not yet been connected to containment building. Note borated water tank at right of dome. Narrow, vertical structure at right of door is shroud for air exhaust duct. Filter vaults lie between duct shroud and stack. Camera facing westerly. Date: 1974. INEEL negative no. 74-1072 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
NASA Astrophysics Data System (ADS)
Henze, M.; Sala, G.; Jose, J.; Figueira, J.; Hernanz, M.
2016-06-01
We report the discovery of a new nova candidate in the M81 galaxy on 16x200s stacked R filter CCD images, obtained with the 80 cm Ritchey-Chretien F/9.6 Joan Oro telescope at Observatori Astronomic del Montsec, owned by the Catalan Government and operated by the Institut d'Estudis Espacials de Catalunya, Spain, using a Finger Lakes PL4240-1-BI CCD Camera (with a Class 1 Basic Broadband coated 2k x 2k chip with 13.5 microns sq. pixels).
Recent progress in plasmonic colour filters for image sensor and multispectral applications
NASA Astrophysics Data System (ADS)
Pinton, Nadia; Grant, James; Choubey, Bhaskar; Cumming, David; Collins, Steve
2016-04-01
Using nanostructured thin metal films as colour filters offers several important advantages, in particular high tunability across the entire visible spectrum and part of the infrared region, as well as compatibility with conventional CMOS processes. Since 2003, the field of plasmonic colour filters has evolved rapidly, and several different designs and materials, or combinations of materials, have been proposed and studied. In this paper we present a simulation study of a single-step lithographically patterned multilayer structure able to provide competitive transmission efficiencies above 40% and, at the same time, a FWHM of the order of 30 nm across the visible spectrum. The total thickness of the proposed filters is less than 200 nm and is constant for every wavelength, unlike, e.g., resonant-cavity-based filters such as Fabry-Perot, which require a variable stack of several layers according to the working frequency; the passband characteristics of the proposed filters are entirely controlled by changing the lithographic pattern. It will also be shown that a key to obtaining a narrow-band optical response lies in the dielectric environment of a nanostructure, and that a symmetric structure is not necessary to ensure good coupling between the SPPs at the top and bottom interfaces. Moreover, an analytical method to evaluate the periodicity, given a specific structure and a desired working wavelength, will be proposed and its accuracy demonstrated. This method conveniently eliminates the need to optimize the design of a filter numerically, i.e., by running several time-consuming simulations with different periodicities.
Development of an Indexing Media Filtration System for Long Duration Space Missions
NASA Technical Reports Server (NTRS)
Agui, Juan H.; Vijayakumar, R.
2013-01-01
The effective maintenance of air quality aboard spacecraft cabins will be vital to future human exploration missions. A key component will be the air cleaning filtration system, which will need to remove a broad size range of particles derived from multiple biological and material sources. In addition, during surface missions any extraterrestrial planetary dust, including dust generated by nearby ISRU equipment, which is tracked into the habitat will also need to be managed by the filtration system inside the pressurized habitat compartments. An indexing media filter system is being developed to meet the demand for long-duration missions; it will provide dramatic increases in filter service life and loading capacity, and will require minimal crew involvement. The filtration system consists of three stages: an inertial impactor stage, an indexing media stage, and a high-efficiency filter stage, packaged in a stacked modular cartridge configuration. Each stage targets a specific range of particle sizes that optimizes the filtration and regeneration performance of the system. A 1/8th-scale and a full-scale prototype of the filter system have been fabricated and tested in laboratory and reduced-gravity environments that simulate conditions on spacecraft, landers and habitats. Results from recent laboratory and reduced-gravity flight tests will be presented. The features of the new filter system may also benefit other closed systems, such as submarines, and remote-location terrestrial installations where servicing and replacement of filter units is not practical.
Du, Tianchuan; Liao, Li; Wu, Cathy H; Sun, Bilin
2016-11-01
Protein-protein interactions play essential roles in many biological processes. Acquiring knowledge of the residue-residue contact information of two interacting proteins is not only helpful in annotating functions for proteins, but also critical for structure-based drug design. The prediction of the protein residue-residue contact matrix of the interfacial regions is challenging. In this work, we introduced deep learning techniques (specifically, stacked autoencoders) to build deep neural network models to tackle the residue-residue contact prediction problem. In tandem with interaction profile Hidden Markov Models, which were first used to extract Fisher score features from protein sequences, stacked autoencoders were deployed to extract and learn hidden abstract features. The deep learning model showed significant improvement over the traditional machine learning model, Support Vector Machines (SVM), with the overall accuracy increased by 15% from 65.40% to 80.82%. We showed that the stacked autoencoders could extract novel features out of the Fisher score features, which can be utilized by deep neural networks and other classifiers to enhance learning. It is further shown that deep neural networks have significant advantages over SVM in making use of the newly extracted features. Copyright © 2016. Published by Elsevier Inc.
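A stacked denoising autoencoder pretrains each layer to reconstruct its input from a noise-corrupted copy; the learned hidden activations then serve as features for a downstream classifier. A minimal single-layer numpy sketch of the denoising objective (illustrative only; the paper's architecture, Fisher-score inputs and hyperparameters are not reproduced, and the data here are synthetic):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Low-rank synthetic data standing in for Fisher-score feature vectors.
rng = np.random.default_rng(5)
latent = rng.normal(size=(100, 4))
X = sigmoid(latent @ rng.normal(0, 1.5, (4, 20)))    # 100 samples, 20 features

n_in, n_hidden, lr, noise = 20, 8, 0.5, 0.1
W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)   # encoder
W2 = rng.normal(0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)       # decoder

def recon_error():
    H = sigmoid(X @ W1 + b1)
    return float(np.mean((sigmoid(H @ W2 + b2) - X) ** 2))

err_before = recon_error()
for _ in range(300):
    Xn = X + rng.normal(0, noise, X.shape)     # denoising: corrupt the input
    H = sigmoid(Xn @ W1 + b1)                  # encode
    R = sigmoid(H @ W2 + b2)                   # decode (reconstruct clean X)
    dR = 2 * (R - X) / X.size * R * (1 - R)    # backprop through squared loss
    dH = (dR @ W2.T) * H * (1 - H)
    W2 -= lr * (H.T @ dR);  b2 -= lr * dR.sum(axis=0)
    W1 -= lr * (Xn.T @ dH); b1 -= lr * dH.sum(axis=0)
err_after = recon_error()
```

In a stacked version, the hidden code H of one trained layer becomes the input of the next, and the final codes feed the supervised network.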
Accurate mask-based spatially regularized correlation filter for visual tracking
NASA Astrophysics Data System (ADS)
Gu, Xiaodong; Xu, Xinping
2017-01-01
Recently, discriminative correlation filter (DCF)-based trackers have achieved extremely successful results in many competitions and benchmarks. These methods exploit a periodic assumption on the training samples to efficiently learn a classifier. However, this assumption produces unwanted boundary effects, which severely degrade tracking performance. Correlation filters with limited boundaries and spatially regularized DCFs were proposed to reduce boundary effects. However, these methods use a fixed mask or a predesigned weight function, respectively, which is unsuitable for large appearance variations. We propose an accurate mask-based spatially regularized correlation filter for visual tracking. Our augmented objective can reduce the boundary effect even under large appearance variation. In our algorithm, the masking matrix is converted into a regularization function that acts on the correlation filter in the frequency domain, which makes the algorithm converge quickly. Our online tracking algorithm performs favorably against state-of-the-art trackers on the OTB-2015 benchmark in terms of efficiency, accuracy, and robustness.
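The frequency-domain machinery behind DCF trackers can be illustrated with a minimal MOSSE-style correlation filter in numpy. This is a toy without the mask and spatial regularization terms the paper adds: the filter is learned in one shot from a template, and the peak of the response map localizes a (circularly) shifted copy of the target.

```python
import numpy as np

def learn_filter(template, lam=0.01):
    """One-shot MOSSE-style filter; the desired response G is a delta at (0, 0).
    Returns the conjugate filter H* = G . conj(F) / (F . conj(F) + lam)."""
    F = np.fft.fft2(template)
    G = np.ones_like(F)                 # FFT of a delta at the origin
    return G * np.conj(F) / (F * np.conj(F) + lam)

def response(frame, H_conj):
    """Correlation response map, computed entirely in the frequency domain."""
    return np.real(np.fft.ifft2(np.fft.fft2(frame) * H_conj))

rng = np.random.default_rng(6)
template = rng.normal(size=(32, 32))    # appearance of the tracked patch
H_conj = learn_filter(template)

shifted = np.roll(template, shift=(5, -3), axis=(0, 1))   # target moved
resp = response(shifted, H_conj)
dy, dx = np.unravel_index(np.argmax(resp), resp.shape)    # (5, 29): -3 mod 32
```

The circular (periodic) correlation is exactly the assumption that causes the boundary effects discussed above; the paper's masking matrix suppresses the wrapped-around content while keeping this fast frequency-domain solution.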
Healy, M G; Burke, P; Rodgers, M
2010-10-01
The aim of this study was to examine the performance of intermittently loaded, 150 mm-diameter stratified filter columns of 2 depths (0.65 and 0.375 m) comprising different media--sand, crushed glass and soil--in polishing the effluent from a laboratory horizontal flow biofilm reactor (HFBR) treating synthetic domestic-strength wastewater. The HFBR has been successfully used to remove organic carbon and ammonium-nitrogen (NH4-N) from domestic wastewater. In this treatment method, wastewater is allowed to flow over and back along a stack of polyvinyl chloride (PVC) sheets. Biofilms on the sheets reduce organic carbon, suspended matter, and nutrients in the wastewater, but to achieve the quality of a septic tank system, additional treatment is required. In all filters, at a hydraulic loading rate of 100 L m(-2) d(-1), 40-65% of chemical oxygen demand (COD) and practically 100% of total suspended solids (TSS) were removed, nitrification was complete, and bacterial numbers were reduced by over 80%, with best removals achieved in the soil filters (93%). Soil polishing filters with the depth of 0.65 m performed best in terms of organic carbon, total nitrogen (Tot-N) and bacterial removal. Data from this preliminary study are useful in the design of treatment systems to polish secondary wastewaters with similar water quality characteristics.
Texture classification of normal tissues in computed tomography using Gabor filters
NASA Astrophysics Data System (ADS)
Dettori, Lucia; Bashir, Alia; Hasemann, Julie
2007-03-01
The research presented in this article is aimed at developing an automated imaging system for classification of normal tissues in medical images obtained from Computed Tomography (CT) scans. Texture features based on a bank of Gabor filters are used to classify the following tissues of interest: liver, spleen, kidney, aorta, trabecular bone, lung, muscle, IP fat, and SQ fat. The approach consists of three steps: convolution of the regions of interest with a bank of 32 Gabor filters (4 frequencies and 8 orientations), extraction of two Gabor texture features per filter (mean and standard deviation), and creation of a Classification and Regression Tree-based classifier that automatically identifies the various tissues. The data set used consists of approximately 1000 DICOM images from normal chest and abdominal CT scans of five patients. The regions of interest were labeled by expert radiologists. Optimal trees were generated using two techniques: 10-fold cross-validation and splitting of the data set into a training and a testing set. In both cases, perfect classification rules were obtained provided enough images were available for training (~65%). All performance measures (sensitivity, specificity, precision, and accuracy) for all regions of interest were at 100%. This significantly improves previous results that used Wavelet, Ridgelet, and Curvelet texture features, which yielded accuracy values in the 85%-98% range. The Gabor filters' ability to isolate features at different frequencies and orientations allows for a multi-resolution analysis of texture, essential when dealing with, at times, very subtle differences in the texture of tissues in CT scans.
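The filter-bank step can be sketched in numpy: build oriented Gabor kernels, convolve each region with every kernel, and keep the mean and standard deviation of each response as features. This is an illustration, not the article's pipeline; the bank here uses 2 frequencies and 4 orientations (8 filters) instead of the paper's 32, and all parameter values are arbitrary.

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=4.0, size=21):
    """Real (even) Gabor kernel: an oriented sinusoid under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def filter_response(image, kernel):
    """Correlate via FFT (circular boundary, adequate for feature statistics)."""
    K = np.fft.fft2(kernel, s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(K)))

def gabor_features(image, freqs=(0.1, 0.2), thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Mean and standard deviation of each filter response, as in the paper."""
    feats = []
    for f in freqs:
        for t in thetas:
            r = filter_response(image, gabor_kernel(f, t))
            feats += [np.abs(r).mean(), r.std()]
    return np.array(feats)

# Orientation selectivity: vertical stripes (intensity varies along x) excite
# the matched-orientation filter far more than the orthogonal one.
y, x = np.mgrid[0:64, 0:64]
stripes = np.cos(2 * np.pi * 0.1 * x)
r_match = filter_response(stripes, gabor_kernel(0.1, 0.0))       # tuned
r_ortho = filter_response(stripes, gabor_kernel(0.1, np.pi / 2)) # orthogonal
feats = gabor_features(stripes)
```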
NASA Technical Reports Server (NTRS)
Hoffer, Roger M.; Hussin, Yousif Ali
1989-01-01
Multipolarized aircraft L-band radar data are classified using two different image classification algorithms: (1) a per-point classifier, and (2) a contextual, or per-field, classifier. Due to the distinct variations in radar backscatter as a function of incidence angle, the data are stratified into three incidence-angle groupings, and training and test data are defined for each stratum. A low-pass digital mean filter with varied window size (i.e., 3x3, 5x5, and 7x7 pixels) is applied to the data prior to the classification. A predominantly forested area in northern Florida was the study site. The results obtained by using these image classifiers are then presented and discussed.
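The pre-classification smoothing step (a low-pass mean filter at 3x3, 5x5, and 7x7 windows) can be sketched with SciPy; `uniform_filter` is a stand-in for whatever implementation the authors used.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_radar(image, window):
    """Low-pass mean filter applied to radar backscatter before classification."""
    return uniform_filter(image.astype(float), size=window)

# Produce the three smoothing levels compared in the study:
# smoothed = [smooth_radar(img, w) for w in (3, 5, 7)]
```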
Su, Gui-yang; Li, Jian-hua; Ma, Ying-hua; Li, Sheng-hong
2004-09-01
With the flooding of pornographic information on the Internet, how to keep people away from such offensive information is becoming one of the most important research areas in network information security. Several applications that block or filter such information are in use. Approaches in those systems can be roughly classified into two kinds: metadata based and content based. With the development of distributed technologies, content based filtering technologies will play a more and more important role in filtering systems. Keyword matching is a content based method used widely in harmful text filtering. Experiments to evaluate the recall and precision of the method showed that the precision of the method is not satisfactory, though the recall of the method is rather high. According to the results, a new pornographic text filtering model based on reconfirming is put forward. Experiments showed that the model is practical, has less loss of recall than the single keyword matching method, and has higher precision.
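A minimal sketch of keyword matching followed by a reconfirmation pass. The particular reconfirmation rule used here (requiring a second, context keyword before blocking) is a hypothetical stand-in for the model described in the abstract, which is not specified in detail.

```python
def keyword_filter(text, block_words):
    """First pass: flag text containing any blocked keyword (high recall)."""
    t = text.lower()
    return [w for w in block_words if w in t]

def reconfirm(text, hits, context_words):
    """Second pass (reconfirmation): uphold the block decision only if a
    context keyword also appears, cutting false positives from the
    high-recall but low-precision first pass."""
    t = text.lower()
    return bool(hits) and any(c in t for c in context_words)
```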
Herman, David T [Aiken, SC; Maxwell, David N [Aiken, SC
2011-04-19
A rotary filtration apparatus for filtering a feed fluid into permeate is provided. The rotary filtration apparatus includes a container that has a feed fluid inlet. A shaft is at least partially disposed in the container and has a passageway for the transport of permeate. A disk stack made of a plurality of filtration disks is mounted onto the shaft so that rotation of the shaft causes rotation of the filtration disks. The filtration disks may be made of steel components and may be welded together. The shaft may penetrate a filtering section of the container at a single location. The rotary filtration apparatus may also incorporate a bellows seal to prevent leakage along the shaft, and an around the shaft union rotary joint to allow for removal of permeate. Various components of the rotary filtration apparatus may be removed as a single assembly.
NASA Astrophysics Data System (ADS)
Gallego, E. E.; Ascorbe, J.; Del Villar, I.; Corres, J. M.; Matias, I. R.
2018-05-01
This work describes the process of nanofabrication of phase-shifted Bragg gratings on the end facet of a multimode optical fiber with a pulsed DC sputtering system based on a single target. Several structures have been explored as a function of parameters such as the number of layers or the phase-shift. The experimental results, corroborated with simulations based on plane-wave propagation in a stack of homogeneous layers, indicate that the phase-shift can be controlled with a high degree of accuracy. The device could be used both in communications, as a filter, or in the sensors domain. As an example of application, a humidity sensor with wavelength shifts of 12 nm in the range of 30 to 90% relative humidity (200 pm/% relative humidity) is presented.
Visual terrain mapping for traversable path planning of mobile robots
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir; Amrani, Rachida; Tunstel, Edward W.
2004-10-01
In this paper, we have primarily discussed technical challenges and navigational skill requirements of mobile robots for traversability path planning in natural terrain environments similar to Mars surface terrains. We have described different methods for detection of salient terrain features based on imaging texture analysis techniques. We have also presented three competing techniques for terrain traversability assessment of mobile robots navigating in unstructured natural terrain environments. These three techniques include: a rule-based terrain classifier, a neural network-based terrain classifier, and a fuzzy-logic terrain classifier. Each proposed terrain classifier divides a region of natural terrain into finite sub-terrain regions and classifies terrain condition exclusively within each sub-terrain region based on terrain visual clues. The Kalman Filtering technique is applied for aggregative fusion of sub-terrain assessment results. The last two terrain classifiers are shown to have remarkable capability for terrain traversability assessment of natural terrains. We have conducted a comparative performance evaluation of all three terrain classifiers and presented the results in this paper.
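The aggregative fusion step can be illustrated with a scalar Kalman-style update. Treating each sub-terrain traversability assessment as an independent noisy measurement with a known variance is an assumption, since the abstract does not give the filter equations.

```python
def kalman_fuse(scores, variances):
    """Fuse independent sub-terrain traversability estimates by
    sequential Kalman-style (inverse-variance) updates."""
    est, var = scores[0], variances[0]
    for z, r in zip(scores[1:], variances[1:]):
        k = var / (var + r)          # Kalman gain
        est = est + k * (z - est)    # updated fused estimate
        var = (1 - k) * var          # reduced uncertainty
    return est, var
```

Each fused estimate ends up weighting the more certain sub-terrain assessments more heavily.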
Soft computing-based terrain visual sensing and data fusion for unmanned ground robotic systems
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir
2006-05-01
In this paper, we have primarily discussed technical challenges and navigational skill requirements of mobile robots for traversability path planning in natural terrain environments similar to Mars surface terrains. We have described different methods for detection of salient terrain features based on imaging texture analysis techniques. We have also presented three competing techniques for terrain traversability assessment of mobile robots navigating in unstructured natural terrain environments. These three techniques include: a rule-based terrain classifier, a neural network-based terrain classifier, and a fuzzy-logic terrain classifier. Each proposed terrain classifier divides a region of natural terrain into finite sub-terrain regions and classifies terrain condition exclusively within each sub-terrain region based on terrain visual clues. The Kalman Filtering technique is applied for aggregative fusion of sub-terrain assessment results. The last two terrain classifiers are shown to have remarkable capability for terrain traversability assessment of natural terrains. We have conducted a comparative performance evaluation of all three terrain classifiers and presented the results in this paper.
Acoustic Type-II Weyl Nodes from Stacking Dimerized Chains
NASA Astrophysics Data System (ADS)
Yang, Zhaoju; Zhang, Baile
2016-11-01
Lorentz-violating type-II Weyl fermions, which were missed in Weyl's original prediction of what are now classified as type-I Weyl fermions in quantum field theory, have recently been proposed in condensed matter systems. The semimetals hosting type-II Weyl fermions offer a rare platform for realizing many exotic physical phenomena that are different from type-I Weyl systems. Here we construct the acoustic version of a type-II Weyl Hamiltonian by stacking one-dimensional dimerized chains of acoustic resonators. This acoustic type-II Weyl system exhibits distinct features in a finite density of states and unique transport properties of Fermi-arc-like surface states. In a certain momentum space direction, the velocity of these surface states is determined by the tilting direction of the type-II Weyl nodes rather than the chirality dictated by the Chern number. Our study also provides an approach of constructing acoustic topological phases at different dimensions with the same building blocks.
Grigoryan, Artyom M; Dougherty, Edward R; Kononen, Juha; Bubendorf, Lukas; Hostetter, Galen; Kallioniemi, Olli
2002-01-01
Fluorescence in situ hybridization (FISH) is a molecular diagnostic technique in which a fluorescent labeled probe hybridizes to a target nucleotide sequence of deoxyribonucleic acid (DNA). Upon excitation, each chromosome containing the target sequence produces a fluorescent signal (spot). Because fluorescent spot counting is tedious and often subjective, automated digital algorithms to count spots are desirable. New technology provides a stack of images on multiple focal planes throughout a tissue sample. Multiple-focal-plane imaging helps overcome the biases and imprecision inherent in single-focal-plane methods. This paper proposes an algorithm for global spot counting in stacked three-dimensional slice FISH images without the necessity of nuclei segmentation. It is designed to work in complex backgrounds, when there are agglomerated nuclei, and in the presence of illumination gradients. It is based on the morphological top-hat transform, which locates intensity spikes on irregular backgrounds. After finding signals in the slice images, the algorithm groups these together to form three-dimensional spots. Filters are employed to separate legitimate spots from fluorescent noise. The algorithm is set in a comprehensive toolbox that provides visualization and analytic facilities. It includes simulation software that allows examination of algorithm performance for various image and algorithm parameter settings, including signal size, signal density, and the number of slices.
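The core per-slice step, top-hat detection of intensity spikes on an irregular background, can be sketched with SciPy. The structuring-element size and threshold are illustrative assumptions; the grouping of per-slice signals into 3D spots is omitted.

```python
import numpy as np
from scipy.ndimage import white_tophat, label

def detect_spots(slice_img, struct_size=5, thresh=10.0):
    """Locate intensity spikes on an irregular background with the
    morphological top-hat transform, then label connected spots."""
    th = white_tophat(slice_img, size=struct_size)  # removes slowly varying background
    mask = th > thresh
    labels, n = label(mask)
    return labels, n
```

Because the top-hat subtracts a morphological opening, a smooth illumination gradient is removed while narrow bright spikes survive the threshold.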
Kanai, Tatsuaki; Kanematsu, Nobuyuki; Minohara, Shinichi; Komori, Masataka; Torikoshi, Masami; Asakura, Hiroshi; Ikeda, Noritoshi; Uno, Takayuki; Takei, Yuka
2006-08-01
The commissioning of the conformal radiotherapy system using heavy-ion beams at the Heavy Ion Medical Accelerator in Chiba (HIMAC) is described in detail. The system at HIMAC was upgraded for a clinical trial using a new technique: large spot uniform scanning with conformal layer stacking. The system was developed to localize the irradiation dose to the target volume more effectively than with the old system. With the present passive irradiation method using a ridge filter, a scatterer, a pair of wobbler magnets, and a multileaf collimator, the width of the spread-out Bragg peak (SOBP) in the radiation field could not be changed. With dynamic control of the beam-modifying devices during irradiation, a more conformal radiotherapy could be achieved. In order to safely perform treatments with this conformal therapy, the moving devices should be monitored during irradiation and their synchronization verified. This system, which has to be safe for patient irradiations, was constructed and tested for safety and for the quality of the dose localization realized. Through these commissioning tests, we were successfully able to prepare the conformal technique using layer stacking for patients. Subsequent to commissioning, the technique has been applied to patients in clinical trials.
Imaging the Lower Crust and Moho Beneath Long Beach, CA Using Autocorrelations
NASA Astrophysics Data System (ADS)
Clayton, R. W.
2017-12-01
Three-dimensional images of the lower crust and Moho in a 10x10 km region beneath Long Beach, CA are constructed from autocorrelations of ambient noise. The results show the Moho at a depth of 15 km at the coast and dipping at 45 degrees inland to a depth of 25 km. The shape of the Moho interface is irregular in both the coast perpendicular and parallel directions. The lower crust appears as a zone of enhanced reflectivity with numerous small-scale structures. The autocorrelations are constructed from virtual source gathers that were computed from the dense Long Beach array that was used in the Lin et al (2013) study. All near zero-offset traces within a 200 m disk are stacked to produce a single autocorrelation at that point. The stack typically is over 50-60 traces. To convert the autocorrelation to reflectivity as in Claerbout (1968), the noise source autocorrelation, which is estimated as the average of all autocorrelations, is subtracted from each trace. The subsurface image is then constructed with a 0.1-2 Hz filter and AGC scaling. The main features of the image are confirmed with broadband receiver functions from the LASSIE survey (Ma et al, 2016). The use of stacked autocorrelations extends ambient noise into the lower crust.
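The Claerbout-style conversion from autocorrelation to reflectivity, subtracting the average autocorrelation as an estimate of the noise-source term, can be sketched as follows; the bandpass filtering and AGC scaling steps are omitted.

```python
import numpy as np

def reflectivity_traces(traces):
    """Autocorrelate each noise trace (non-negative lags), then subtract the
    average autocorrelation, the estimate of the common source term,
    leaving the reflectivity contribution."""
    n = traces.shape[1]
    acs = np.array([np.correlate(t, t, mode='full')[n - 1:] for t in traces])
    return acs - acs.mean(axis=0)
```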
NASA Astrophysics Data System (ADS)
de Denus-Baillargeon, Marie-Maude
2007-05-01
Light coming from far-away astronomical objects carries a variety of information ranging from chemical composition to distance and kinematics. Amongst these astronomical bodies, galaxies are widely studied objects: they are slowly rotating entities made of gas, stars and dark matter, and their properties are broadly distributed. Rotation velocities of galaxies yield very important information, namely the mass enclosed in the rotation radius, and thus the respective distribution of luminous and dark matter. To determine the rotation velocity, the Doppler effect is a convenient tool. As an emission or absorption line shifts from its reference position, it is possible to calculate the approaching or receding velocity. The maximal rotation velocity difference between the approaching and receding sides is at most a few hundreds of km/s, which translates in a few nm shift from the rest wavelength at most, thus calling for very precise spectral information. Due to their distance, the objects observed with astronomical instrumentation are very faint. Optical instruments for astronomy thus require high throughput optical film systems, particularly those based on notch/bandpass filters with low/high in-band transmission and high/low out-of-band blocking power. This calls for very high film uniformity and high precision of film monitoring and process control. Such filters must also survive extreme environmental conditions ranging from fresh and humid climate to cryogenic temperatures. In the present work, we describe all steps leading from filter design to filter fabrication, process monitoring, and characterization.
In particular, we focus on the comparison of the performance of graded-index (rugate) filters and quarter-wave stack narrowband filters deposited by plasma enhanced chemical vapor deposition and dual ion beam sputtering using SiO2, TiO2 and Ta2O5. Optical and mechanical properties of the individual films have been evaluated and are consistent with those found in the literature reporting on the same techniques. Namely, we find values of compressive stress of 160 and 410 MPa for layers of Ta2O5 and SiO2 deposited by DIBS and of 150 and 60 MPa for PECVD-deposited SiO2/TiO2 mixtures rich in SiO2 and TiO2 respectively. Young's moduli of 109, 73, 55 and 94 GPa and refractive indices of 2.13, 1.49, 1.59 and 2.09 have also been measured for those same materials. Properties of material mixtures behave qualitatively as the ones reported in references on the subject. Attention is paid to the effect of temperature on the variation of the central wavelength and bandpass width. The results are discussed in terms of film material and filter design. We report variations of ~0.04 Å/°C for DIBS-produced multilayer filters and -0.0041/°C and 0.19 Å/°C for PECVD-deposited quarter-wave stacks and rugate filters respectively. These results match the predictions made by Takahashi's formulae. The bandwidth varies as well with temperature, and the extent of the variation seems related to the number of cavities in the filter. Further work is still needed in order to clearly establish the relation between the number of cavities and the bandpass narrowing/widening with temperature.
Bayesian learning for spatial filtering in an EEG-based brain-computer interface.
Zhang, Haihong; Yang, Huijuan; Guan, Cuntai
2013-07-01
Spatial filtering for EEG feature extraction and classification is an important tool in brain-computer interface. However, there is generally no established theory that links spatial filtering directly to Bayes classification error. To address this issue, this paper proposes and studies a Bayesian analysis theory for spatial filtering in relation to Bayes error. Following the maximum entropy principle, we introduce a gamma probability model for describing single-trial EEG power features. We then formulate and analyze the theoretical relationship between Bayes classification error and the so-called Rayleigh quotient, which is a function of spatial filters and basically measures the ratio in power features between two classes. This paper also reports our extensive study that examines the theory and its use in classification, using three publicly available EEG data sets and state-of-the-art spatial filtering techniques and various classifiers. Specifically, we validate the positive relationship between Bayes error and Rayleigh quotient in real EEG power features. Finally, we demonstrate that the Bayes error can be practically reduced by applying a new spatial filter with lower Rayleigh quotient.
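A sketch of the Rayleigh quotient for a spatial filter, together with the standard generalized-eigenvector (CSP-style) construction that extremizes it; the exact covariance estimator and normalization used in the paper may differ.

```python
import numpy as np

def rayleigh_quotient(w, cov_a, cov_b):
    """Ratio of band power passed by spatial filter w between two classes.
    In the paper's analysis, this quotient is linked to Bayes error."""
    return (w @ cov_a @ w) / (w @ cov_b @ w)

def best_csp_filter(cov_a, cov_b):
    """Generalized-eigenvector filter minimizing the Rayleigh quotient,
    i.e. the usual Common Spatial Patterns construction."""
    vals, vecs = np.linalg.eig(np.linalg.solve(cov_b, cov_a))
    return np.real(vecs[:, np.argmin(np.real(vals))])
```

For class covariances diag(1, 4) and diag(4, 1), the minimizing filter picks the first channel and attains a quotient of 0.25.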
Feature Selection for Chemical Sensor Arrays Using Mutual Information
Wang, X. Rosalind; Lizier, Joseph T.; Nowotny, Thomas; Berna, Amalia Z.; Prokopenko, Mikhail; Trowell, Stephen C.
2014-01-01
We address the problem of feature selection for classifying a diverse set of chemicals using an array of metal oxide sensors. Our aim is to evaluate a filter approach to feature selection with reference to previous work, which used a wrapper approach on the same data set, and established best features and upper bounds on classification performance. We selected feature sets that exhibit the maximal mutual information with the identity of the chemicals. The selected features closely match those found to perform well in the previous study using a wrapper approach to conduct an exhaustive search of all permitted feature combinations. By comparing the classification performance of support vector machines (using features selected by mutual information) with the performance observed in the previous study, we found that while our approach does not always give the maximum possible classification performance, it always selects features that achieve classification performance approaching the optimum obtained by exhaustive search. We performed further classification using the selected feature set with some common classifiers and found that, for the selected features, Bayesian Networks gave the best performance. Finally, we compared the observed classification performances with the performance of classifiers using randomly selected features. We found that the selected features consistently outperformed randomly selected features for all tested classifiers. The mutual information filter approach is therefore a computationally efficient method for selecting near optimal features for chemical sensor arrays. PMID:24595058
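A minimal sketch of the filter approach: score each discretized sensor feature by its mutual information with the chemical identity and keep the top k, with no classifier in the selection loop. The plug-in MI estimator and the assumption of pre-discretized features are simplifications of the method in the paper.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Plug-in estimate of MI between two discrete variables, in bits."""
    n = len(x)
    cx, cy, cxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum(c / n * np.log2((c / n) / ((cx[a] / n) * (cy[b] / n)))
               for (a, b), c in cxy.items())

def select_features(features, labels, k):
    """Rank discretized sensor features (columns) by MI with the chemical
    identity and keep the indices of the top k."""
    scores = [mutual_information(col, labels) for col in features.T]
    return np.argsort(scores)[::-1][:k]
```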
Joseph, Adrian; Kenty, Brian; Mollet, Michael; Hwang, Kenneth; Rose, Steven; Goldrick, Stephen; Bender, Jean; Farid, Suzanne S.
2016-01-01
In the production of biopharmaceuticals disk‐stack centrifugation is widely used as a harvest step for the removal of cells and cellular debris. Depth filters followed by sterile filters are often then employed to remove residual solids remaining in the centrate. Process development of centrifugation is usually conducted at pilot‐scale so as to mimic the commercial scale equipment but this method requires large quantities of cell culture and significant levels of effort for successful characterization. A scale‐down approach based upon the use of a shear device and a bench‐top centrifuge has been extended in this work towards a preparative methodology that successfully predicts the performance of the continuous centrifuge and polishing filters. The use of this methodology allows the effects of cell culture conditions and large‐scale centrifugal process parameters on subsequent filtration performance to be assessed at an early stage of process development where material availability is limited. Biotechnol. Bioeng. 2016;113: 1934–1941. © 2016 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc. PMID:26927621
Nanotechnology Infrared Optics for Astronomy Missions
NASA Technical Reports Server (NTRS)
Smith, Howard A.; Stringfellow, Guy (Technical Monitor)
2002-01-01
The program "Nanotechnology Infrared Optics for Astronomy Missions" will design and develop new, nanotechnology techniques for infrared optical devices suitable for use in NASA space missions. The proposal combines expertise from the Smithsonian Astrophysical Observatory, the Naval Research Laboratory, the Goddard Space Flight Center, and the Physics Department at the Queen Mary and Westfield College in London, now relocated to the University of Cardiff, Cardiff, Wales. The method uses individually tailored metal grids, and layered stacks of metal mesh grids, both inductive (free-standing) and capacitive (substrate-mounted), to produce various kinds of filters. The program has the following goals: (1) Model FIR filter properties using electric-circuit analogs, and near-field, EM diffraction calculations; (2) Prototype fabrication of meshes on various substrates, with various materials, and of various dimensions; (3) Test of filter prototypes, and iterate with the modeling programs; (4) Travel to related sites, including trips to Washington, D.C. (location of NRL and GSFC), London (location of QMW), Cardiff, Wales, and Rome (location of ISO PMS project headquarters); (5) Produce ancillary science, including publication of both testing on mesh performance and infrared astronomical science.
Nanotechnology Infrared Optics for Astronomy Missions
NASA Technical Reports Server (NTRS)
Smith, Howard A.; Frogel, Jay (Technical Monitor)
2003-01-01
The program "Nanotechnology Infrared Optics for Astronomy Missions" will design and develop new, nanotechnology techniques for infrared optical devices suitable for use in NASA space missions. The proposal combines expertise from the Smithsonian Astrophysical Observatory, the Naval Research Laboratory, the Goddard Space Flight Center, and the Physics Department at the Queen Mary and Westfield College in London, now relocated to the University of Cardiff, Cardiff, Wales. The method uses individually tailored metal grids and layered stacks of metal mesh grids, both inductive (freestanding) and capacitive (substrate-mounted), to produce various kinds of filters. The program has the following goals: 1) Model FIR filter properties using electric-circuit analogs and near-field, EM diffraction calculations. 2) Prototype fabrication of meshes on various substrates, with various materials, and of various dimensions. 3) Test filter prototypes and iterate with the modeling programs. 4) Travel to related sites, including trips to Washington, D.C. (location of NRL and GSFC), London (location of QMW), Cardiff, Wales, and Rome (location of ISO PMS project headquarters). 5) Produce ancillary science, including both publication of testing on mesh performance and infrared astronomical science.
NASA Astrophysics Data System (ADS)
Gillespie, M. I.; Kriek, R. J.
2017-12-01
A membraneless Divergent Electrode-Flow-Through (DEFT™) alkaline electrolyser combines a simple, inexpensive, modular and durable design capable of overcoming the current density thresholds of existing technology, making it well suited to decentralised renewable hydrogen production; its only requirement is electrolytic flow to facilitate high-purity product gas separation. Scale-up of the technology was performed, representing a deviation from the originally tested stack design, incorporating elongated electrodes housed in a filter press assembly. The pilot plant operating parameters were limited to a low flow velocity range (0.03 m s-1 to 0.04 m s-1) with an electrode gap of 2.5 mm. The performance of this pilot plant was repeatable against results previously obtained. Mesh electrodes with a geometric area of 344.32 cm2 were used for plant performance testing. A NiO anode and Ni cathode combination performed best, yielding 508 mA cm-2 at 2 VDC, in contrast to a Ni anode and cathode combination providing 467 mA cm-2 at 2.26 VDC, at 0.04 m s-1, 30% KOH and 80 °C. An IrO2/RuO2/TiO2 anode and Pt cathode combination underwent catalyst deactivation. Owing to the nature of the gas/liquid separation system, gas qualities were inadequate compared to results achieved previously. Future improvements will provide qualities similar to results achieved before.
Improved space object detection using short-exposure image data with daylight background.
Becker, David; Cain, Stephen
2018-05-10
Space object detection is of great importance in the highly dependent yet competitive and congested space domain. The detection algorithms employed play a crucial role in fulfilling the detection component in the space situational awareness mission to detect, track, characterize, and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator on long-exposure data to make a detection decision at a single pixel point of a spatial image based on the assumption that the data follow a Gaussian distribution. Long-exposure imaging is critical to detection performance in these algorithms; however, for imaging under daylight conditions, it becomes necessary to create a long-exposure image as the sum of many short-exposure images. This paper explores the potential for increasing detection capabilities for small and dim space objects in a stack of short-exposure images dominated by a bright background. The algorithm proposed in this paper improves the traditional stack and average method of forming a long-exposure image by selectively removing short-exposure frames of data that do not positively contribute to the overall signal-to-noise ratio of the averaged image. The performance of the algorithm is compared to a traditional matched filter detector using data generated in MATLAB as well as laboratory-collected data. The results are illustrated on a receiver operating characteristic curve to highlight the increased probability of detection associated with the proposed algorithm.
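The proposed improvement over plain stack-and-average can be sketched as a greedy pass that keeps a short-exposure frame only if it raises the SNR of the running average. Both the SNR proxy and the greedy ordering are assumptions, as the abstract does not give the exact selection criterion.

```python
import numpy as np

def snr(img, signal_mask):
    """Simple SNR proxy: mean signal level over background standard deviation."""
    return img[signal_mask].mean() / img[~signal_mask].std()

def select_and_average(frames, signal_mask):
    """Average only the short-exposure frames that raise the SNR of the
    running average, instead of blindly stacking all of them."""
    kept = [frames[0]]
    best = snr(frames[0], signal_mask)
    for f in frames[1:]:
        candidate = np.mean(kept + [f], axis=0)
        s = snr(candidate, signal_mask)
        if s > best:          # keep only frames that help
            kept.append(f)
            best = s
    return np.mean(kept, axis=0)
```

A matched filter or correlator would then be run on the selected-frame average rather than on the full stack.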
NASA Astrophysics Data System (ADS)
Bal, A.; Alam, M. S.; Aslan, M. S.
2006-05-01
Often sensor ego-motion or fast target movement causes the target to temporarily go out of the field-of-view, leading to the reappearing-target detection problem in target tracking applications. Since the target leaves the current frame and reenters at a later frame, the reentering location and variations in rotation, scale, and other 3D orientations of the target are not known, complicating detection. To address this, a detection algorithm has been developed using the Fukunaga-Koontz Transform (FKT) and a distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then introduced into the second algorithm, the DCCF-based clutter rejection module; once clutter is rejected, the target coordinates are determined and the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward looking infrared (FLIR) video sequences.
Promoter classifier: software package for promoter database analysis.
Gershenzon, Naum I; Ioshikhes, Ilya P
2005-01-01
Promoter Classifier is a package of seven stand-alone Windows-based C++ programs allowing the following basic manipulations with a set of promoter sequences: (i) calculation of positional distributions of nucleotides averaged over all promoters of the dataset; (ii) calculation of the averaged occurrence frequencies of the transcription factor binding sites and their combinations; (iii) division of the dataset into subsets of sequences containing or lacking certain promoter elements or combinations; (iv) extraction of the promoter subsets containing or lacking CpG islands around the transcription start site; and (v) calculation of spatial distributions of the promoter DNA stacking energy and bending stiffness. All programs have a user-friendly interface and provide the results in a convenient graphical form. The Promoter Classifier package is an effective tool for various basic manipulations with eukaryotic promoter sequences that usually are necessary for analysis of large promoter datasets. The program Promoter Divider is described in more detail as a representative component of the package.
Zhang, Y N
2017-01-01
Parkinson's disease (PD) is primarily diagnosed by clinical examinations, such as walking test, handwriting test, and MRI diagnostic. In this paper, we propose a machine learning based PD telediagnosis method for smartphone. Classification of PD using speech records is a challenging task owing to the fact that the classification accuracy is still lower than doctor-level. Here we demonstrate automatic classification of PD using time frequency features, stacked autoencoders (SAE), and a K nearest neighbor (KNN) classifier. The KNN classifier can produce promising classification results from useful representations learned by the SAE. Empirical results show that the proposed method achieves better performance across all tested classification tasks, demonstrating that machine learning is capable of classifying PD with a level of competence comparable to that of a doctor. We conclude that a smartphone can therefore potentially provide low-cost PD diagnostic care. This paper also gives an implementation on a browser/server system and reports the running time cost. Both advantages and disadvantages of the proposed telediagnosis system are discussed.
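The final classification stage can be sketched as a KNN vote in the learned feature space. The stacked-autoencoder encoder is assumed to be trained separately and to have already mapped the speech records to feature vectors; k=3 is an illustrative choice.

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """KNN majority vote in the learned (e.g. stacked-autoencoder) feature
    space; the encoder itself is assumed to be trained elsewhere."""
    d = np.linalg.norm(train_x - query, axis=1)   # Euclidean distances
    nearest = np.argsort(d)[:k]                   # indices of k nearest
    vals, counts = np.unique(train_y[nearest], return_counts=True)
    return vals[np.argmax(counts)]                # majority label
```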
Symbolic dynamic filtering and language measure for behavior identification of mobile robots.
Mallapragada, Goutham; Ray, Asok; Jin, Xin
2012-06-01
This paper presents a procedure for behavior identification of mobile robots, which requires limited or no domain knowledge of the underlying process. While the features of robot behavior are extracted by symbolic dynamic filtering of the observed time series, the behavior patterns are classified based on language measure theory. The behavior identification procedure has been experimentally validated on a networked robotic test bed by comparison with commonly used tools, namely, principal component analysis for feature extraction and Bayesian risk analysis for pattern classification.
Force Sensor Based Tool Condition Monitoring Using a Heterogeneous Ensemble Learning Model
Wang, Guofeng; Yang, Yinwei; Li, Zhimeng
2014-01-01
Tool condition monitoring (TCM) plays an important role in improving machining efficiency and guaranteeing workpiece quality. In order to realize reliable recognition of the tool condition, a robust classifier needs to be constructed to depict the relationship between tool wear states and sensory information. However, because of the complexity of the machining process and the uncertainty of the tool wear evolution, it is hard for a single classifier to fit all the collected samples without sacrificing generalization ability. In this paper, heterogeneous ensemble learning is proposed to realize tool condition monitoring, in which the support vector machine (SVM), hidden Markov model (HMM) and radial basis function (RBF) are selected as base classifiers and a stacking ensemble strategy is further used to reflect the relationship between the outputs of these base classifiers and tool wear states. Based on the heterogeneous ensemble learning classifier, an online monitoring system is constructed in which harmonic features are extracted from force signals and a minimal redundancy and maximal relevance (mRMR) algorithm is utilized to select the most prominent features. To verify the effectiveness of the proposed method, a titanium alloy milling experiment was carried out and samples with different tool wear states were collected to build the proposed heterogeneous ensemble learning classifier. Moreover, the homogeneous ensemble learning model and majority voting strategy are also adopted for comparison. The analysis and comparison results show that the proposed heterogeneous ensemble learning classifier performs better in both classification accuracy and stability. PMID:25405514
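Stacked generalization of the kind described trains a level-1 (meta) model on the outputs of the level-0 base classifiers. A toy numpy sketch follows, with two single-feature threshold rules standing in for the paper's SVM/HMM/RBF base classifiers and logistic regression as the meta-model; all data and model choices here are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-feature data standing in for harmonic features of force signals.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # 1 = "worn", 0 = "sharp"

# Level-0: two deliberately weak base classifiers, one feature each.
def base_outputs(X):
    return np.column_stack([X[:, 0] > 0, X[:, 1] > 0]).astype(float)

# Level-1: logistic-regression meta-classifier trained on base outputs.
Z = base_outputs(X)
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))     # sigmoid
    g = p - y                                  # gradient of log-loss
    w -= 0.1 * (Z.T @ g) / len(y)
    b -= 0.1 * g.mean()

stacked = ((Z @ w + b) > 0).astype(int)
voted = (Z.sum(axis=1) >= 1).astype(int)       # simple voting baseline
print("stacking acc:", (stacked == y).mean())
print("voting acc:  ", (voted == y).mean())
```

The meta-model learns how much to trust each base classifier, which is what distinguishes stacking from the fixed-weight majority vote used as the comparison baseline in the paper.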
Real-time Java simulations of multiple interference dielectric filters
NASA Astrophysics Data System (ADS)
Kireev, Alexandre N.; Martin, Olivier J. F.
2008-12-01
An interactive Java applet for real-time simulation and visualization of the transmittance properties of multiple interference dielectric filters is presented. The most commonly used interference filters as well as state-of-the-art ones are embedded in this platform-independent applet, which can serve research and education purposes. The Transmittance applet can be freely downloaded from the site http://cpc.cs.qub.ac.uk.
Program summary. Program title: Transmittance. Catalogue identifier: AEBQ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBQ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 5778. No. of bytes in distributed program, including test data, etc.: 90474. Distribution format: tar.gz. Programming language: Java. Computer: Developed on PC-Pentium platform. Operating system: Any Java-enabled OS; the applet was tested on Windows ME, XP, Sun Solaris and Mac OS. RAM: Variable. Classification: 18.
Nature of problem: Sophisticated wavelength-selective multiple interference filters can include tens or even hundreds of dielectric layers. The spectral response of such a stack is not obvious. On the other hand, there is a strong demand from application designers and students to get a quick insight into the properties of a given filter.
Solution method: A Java applet was developed for the computation and the visualization of the transmittance of multilayer interference filters. It is simple to use, and the embedded filter library can serve educational purposes. Its ability to handle complex structures will also be appreciated as a useful research and development tool.
Running time: Real-time simulations
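The transmittance of such a dielectric stack is conventionally computed with the characteristic (transfer) matrix method. The applet's internals are not described in the record, so the numpy sketch below only illustrates the underlying physics at normal incidence, with illustrative refractive indices:

```python
import numpy as np

def transmittance(n_layers, d_layers, wavelength, n_in=1.0, n_out=1.0):
    """Normal-incidence transmittance of a dielectric multilayer via the
    characteristic (transfer) matrix method."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / wavelength      # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    t = 2 * n_in / (n_in * M[0, 0] + n_in * n_out * M[0, 1]
                    + M[1, 0] + n_out * M[1, 1])
    return (n_out / n_in) * abs(t) ** 2

# No layers between matched media: everything is transmitted.
print(transmittance([], [], 550e-9))            # -> 1.0

# 8 high/low quarter-wave pairs (TiO2/MgF2-like indices) block 550 nm.
n_stack = [2.35, 1.38] * 8
d_stack = [550e-9 / (4 * ni) for ni in n_stack]
print(transmittance(n_stack, d_stack, 550e-9))  # small (high reflectance)
```

Sweeping the wavelength in a loop reproduces the kind of transmittance curve the applet visualizes interactively.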
E-Nose Vapor Identification Based on Dempster-Shafer Fusion of Multiple Classifiers
NASA Technical Reports Server (NTRS)
Li, Winston; Leung, Henry; Kwan, Chiman; Linnell, Bruce R.
2005-01-01
Electronic nose (e-nose) vapor identification is an efficient approach to monitor air contaminants in space stations and shuttles in order to ensure the health and safety of astronauts. Data preprocessing (measurement denoising and feature extraction) and pattern classification are important components of an e-nose system. In this paper, a wavelet-based denoising method is applied to filter the noisy sensor measurements. Transient-state features are then extracted from the denoised sensor measurements, and are used to train multiple classifiers such as multi-layer perceptrons (MLP), support vector machines (SVM), k-nearest neighbor (KNN), and the Parzen classifier. The Dempster-Shafer (DS) technique is used at the end to fuse the results of the multiple classifiers to get the final classification. Experimental analysis based on real vapor data shows that the wavelet denoising method can remove both random noise and outliers successfully, and the classification rate can be improved by using classifier fusion.
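Dempster's rule of combination, the fusion step described above, can be sketched in a few lines. The frame of discernment and the mass values below are made up for illustration:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: pairwise intersections of focal elements, with
    mass on empty intersections (conflict) renormalised away."""
    fused, conflict = {}, 0.0
    for (A, wa), (B, wb) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in fused.items()}

# Two classifiers each lending partial support to vapor "A".
theta = frozenset({"A", "B"})                  # frame of discernment
m1 = {frozenset({"A"}): 0.6, theta: 0.4}
m2 = {frozenset({"A"}): 0.5, theta: 0.5}
fused = dempster_combine(m1, m2)
print(fused[frozenset({"A"})])  # combined belief in "A": 0.8 (up to rounding)
```

Unlike a plain vote, each classifier can leave mass on the whole frame to express ignorance, and agreement between classifiers concentrates belief on the shared hypothesis.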
Classifying EEG for Brain-Computer Interface: Learning Optimal Filters for Dynamical System Features
Song, Le; Epps, Julien
2007-01-01
Classification of multichannel EEG recordings during motor imagination has been exploited successfully for brain-computer interfaces (BCI). In this paper, we consider EEG signals as the outputs of a networked dynamical system (the cortex), and exploit synchronization features from the dynamical system for classification. Herein, we also propose a new framework for learning optimal filters automatically from the data, by employing a Fisher ratio criterion. Experimental evaluations comparing the proposed dynamical system features with the CSP and the AR features reveal their competitive performance during classification. Results also show the benefits of employing the spatial and the temporal filters optimized using the proposed learning approach. PMID:18364986
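The Fisher ratio criterion used for filter learning scores a feature by between-class separation over within-class scatter. A toy numpy sketch follows; the "features" are synthetic, and the paper's actual optimization of spatial and temporal filters is not attempted here:

```python
import numpy as np

def fisher_ratio(a, b):
    """Between-class separation over within-class scatter for one feature."""
    return (a.mean() - b.mean()) ** 2 / (a.var() + b.var())

rng = np.random.default_rng(1)
# Candidate features for left- vs right-hand motor imagery (synthetic).
good_l, good_r = rng.normal(0.0, 1.0, 500), rng.normal(3.0, 1.0, 500)
bad_l, bad_r = rng.normal(0.0, 1.0, 500), rng.normal(0.1, 1.0, 500)
print(fisher_ratio(good_l, good_r) > fisher_ratio(bad_l, bad_r))  # -> True
```

In the filter-learning setting, the filter coefficients are adjusted so that the resulting feature maximizes this ratio between the two imagery classes.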
Prototyping of MWIR MEMS-based optical filter combined with HgCdTe detector
NASA Astrophysics Data System (ADS)
Kozak, Dmitry A.; Fernandez, Bautista; Velicu, Silviu; Kubby, Joel
2010-02-01
In the past decades, there have been several attempts to create a tunable optical detector with operation in the infrared. The drive for creating such a filter is its wide range of applications, from passive night vision to biological and chemical sensors. Such a device would combine a tunable optical filter with a wide-range detector. In this work, we propose using a Fabry-Perot interferometer centered in the mid-wave infrared (MWIR) spectrum with an HgCdTe detector. Using a MEMS-based interferometer with an integrated Bragg stack will allow in-plane operation over a wide range. Because such devices have a tendency to warp, creating less-than-perfect optical surfaces, the Fabry-Perot interferometer is prototyped using the SOI-MUMPS process to ensure desirable operation. The mechanical design is aimed at optimal optical flatness of the moving membranes and a low operating voltage. The prototype is tested for these requirements. An HgCdTe detector provides greater performance than the pyroelectric detector used in some previous work, allowing for lower noise, greater detection speed and higher sensitivity. Both a custom HgCdTe detector and a commercially available pyroelectric detector are tested with a commercial optical filter. In previous work, monolithic integration of HgCdTe detectors with optical filters proved to be problematic. Part of this work investigates the best approach to combining these two components, either monolithically in HgCdTe or using a hybrid packaging approach where a silicon MEMS Fabry-Perot filter is bonded at low temperature to a HgCdTe detector.
Lahmiri, Salim; Boukadoum, Mounir
2013-01-01
A new methodology for automatic feature extraction from biomedical images and subsequent classification is presented. The approach exploits the spatial orientation of high-frequency textural features of the processed image as determined by a two-step process. First, the two-dimensional discrete wavelet transform (DWT) is applied to obtain the HH high-frequency subband image. Then, a Gabor filter bank is applied to the latter at different frequencies and spatial orientations to obtain new Gabor-filtered image whose entropy and uniformity are computed. Finally, the obtained statistics are fed to a support vector machine (SVM) binary classifier. The approach was validated on mammograms, retina, and brain magnetic resonance (MR) images. The obtained classification accuracies show better performance in comparison to common approaches that use only the DWT or Gabor filter banks for feature extraction. PMID:27006906
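A sketch of the second step of the feature pipeline: filter an image with a real Gabor kernel and compute the entropy and uniformity of the response. The toy "texture" is random, the correlation is done naively, and the DWT step and SVM classifier are omitted:

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma=2.0, gamma=0.5):
    """Real (cosine-phase) Gabor kernel at orientation theta, wavelength lam."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2)) * \
        np.cos(2 * np.pi * xr / lam)

def entropy_uniformity(img, bins=32):
    """Histogram entropy and uniformity (energy) of an image or response map."""
    counts, _ = np.histogram(img, bins=bins)
    p = counts / counts.sum()
    p_nz = p[p > 0]
    return -(p_nz * np.log2(p_nz)).sum(), (p ** 2).sum()

rng = np.random.default_rng(2)
img = rng.normal(size=(16, 16))               # toy texture patch
k = gabor_kernel(7, theta=0.0, lam=4.0)
# Naive valid-mode correlation (real code would use scipy or an FFT).
resp = np.array([[(img[i:i + 7, j:j + 7] * k).sum()
                  for j in range(img.shape[1] - 6)]
                 for i in range(img.shape[0] - 6)])
h, u = entropy_uniformity(resp)
print(h, u)
```

Repeating this over a bank of orientations and frequencies yields the statistics fed to the SVM in the described approach.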
Stacked bilayer phosphorene: strain-induced quantum spin Hall state and optical measurement
Zhang, Tian; Lin, Jia-He; Yu, Yan-Mei; Chen, Xiang-Rong; Liu, Wu-Ming
2015-01-01
Bilayer phosphorene has attracted considerable interest, with potential applications in nanoelectronics owing to its natural bandgap and high carrier mobility. However, very little is known regarding its possible usefulness in spintronics as a quantum spin Hall (QSH) state of matter characterized by a bulk energy gap and gapless spin-filtered edge states. Here, we report a strain-induced topological phase transition from the normal to the QSH state in bilayer phosphorene, accompanied by a band inversion that changes the topological invariant from 0 to 1, which is highly dependent on the interlayer stacking. When the bottom layer is shifted by 1/2 unit-cell along the zigzag/armchair direction with respect to the top layer, the maximum topological bandgap of 92.5 meV is sufficiently large to realize the QSH effect even at room temperature. An optical measurement of the QSH effect is therefore suggested in view of the wide optical absorption spectrum extending to the far infra-red, making bilayer phosphorene a promising candidate for opto-spintronic devices. PMID:26370771
Second Harmonic Generation Imaging Analysis of Collagen Arrangement in Human Cornea.
Park, Choul Yong; Lee, Jimmy K; Chuck, Roy S
2015-08-01
To describe the horizontal arrangement of human corneal collagen bundles by using second harmonic generation (SHG) imaging. Human corneas were imaged with an inverted two-photon excitation fluorescence microscope. The excitation laser (Ti:Sapphire) was tuned to 850 nm. Backscatter signals of SHG were collected through a 425/30-nm bandpass emission filter. Multiple, consecutive, and overlapping image stacks (z-stacks) were acquired to generate three-dimensional data sets. ImageJ software was used to analyze the arrangement pattern (irregularity) of collagen bundles at each image plane. Collagen bundles in the corneal lamellae demonstrated a complex layout, merging and splitting within a single lamellar plane. The patterns were significantly different in the superficial and limbal cornea when compared with deep and central regions. Collagen bundles were smaller in the superficial layer and larger in deep lamellae. By using SHG imaging, the horizontal arrangement of corneal collagen bundles was elucidated at different depths and focal regions of the human cornea.
Khalid, Shehzad; Arshad, Sannia; Jabbar, Sohail; Rho, Seungmin
2014-01-01
We present a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of the various classes while identifying and filtering noisy training data. This noise-free data is then used to learn models for other classifiers such as GMM and SVM. A weight-learning method is then introduced to learn per-class weights for the different classifiers to construct an ensemble. For this purpose, we apply a genetic algorithm to search for an optimal weight vector for which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets. It is also compared with existing standard ensemble techniques such as AdaBoost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method over its competitors, especially in the presence of class label noise and imbalanced classes.
Obscenity detection using haar-like features and Gentle Adaboost classifier.
Mustafa, Rashed; Min, Yang; Zhu, Dingju
2014-01-01
An image with a large exposed skin area is commonly judged obscene. Relying on this fact alone, however, can flag many innocuous images containing skin-like objects while missing images with little exposed skin that nevertheless show erotogenic body parts. This paper presents a novel method for detecting nipples in pornographic image content, the nipple being treated as an erotogenic organ for identifying pornographic images. In this research, a Gentle AdaBoost (GAB) Haar-cascade classifier and Haar-like features are used to ensure detection accuracy. A skin filter applied prior to detection makes the system more robust. The experiments showed that the Haar-cascade classifier performs well in terms of accuracy, whereas the train-cascade classifier is preferable when detection time matters. To validate the results, we used 1198 positive samples containing nipple objects and 1995 negative images. The detection rates for the Haar-cascade and train-cascade classifiers are 0.9875 and 0.8429, respectively. The detection time is 0.162 seconds for the Haar-cascade and 0.127 seconds for the train-cascade classifier.
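Haar-like features of the kind used by such cascades are rectangle-sum differences, evaluated in constant time from an integral image. A minimal numpy sketch on a synthetic edge image (not the paper's nipple detector) follows:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[r, c] = sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of the h-by-w rectangle with top-left corner (r, c), in O(1)."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect(ii, r, c, h, w):
    """Edge-type Haar-like feature: left half minus right half."""
    return rect_sum(ii, r, c, h, w // 2) - rect_sum(ii, r, c + w // 2, h, w // 2)

img = np.zeros((8, 8))
img[:, :4] = 1.0                      # bright left / dark right edge
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 8, 8))  # -> 32.0
```

A boosted cascade such as GAB evaluates thousands of these features per window, which is only feasible because each one costs a handful of lookups.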
Broadband diffractive lens or imaging element
Ceglio, Natale M.; Hawryluk, Andrew M.; London, Richard A.; Seppala, Lynn G.
1991-01-01
A broadband diffractive lens or imaging element produces a sharp focus and/or a high resolution image with broad bandwidth illuminating radiation. The diffractive lens is sectored or segmented into regions, each of which focuses or images a distinct narrowband of radiation but all of which have a common focal length. Alternatively, a serial stack of minus filters, each with a diffraction pattern which focuses or images a distinct narrowband of radiation but all of which have a common focal length, is used. The two approaches can be combined. Multifocal broadband diffractive elements can also be formed.
New disk nova candidate in M 31
NASA Astrophysics Data System (ADS)
Henze, M.; Pietsch, W.; Burwitz, V.; Rodriguez, J.; Bochinski, J.; Busuttil, R.; Haswell, C. A.; Holmes, S.; Kolb, U.
2012-02-01
We report the discovery of a possible nova in the south-western disk of M 31 on a 5x120s dithered stacked CCD image obtained with the Open University PIRATE Planewave CDK17 0.43m Dall-Kirkham f/6.7 telescope at the Observatorio Astronomico de Mallorca (Costitx, Spain), using an SBIG STX 16803 CCD Camera (with a Kodak 4k x 4k chip with 9 microns sq. pixels) and Baader clear filter, on 2012 Feb 15.803 UT with an R magnitude of 17.5 (accuracy of 0.2 mag).
1990-03-01
Design, Bog Creek Farm Site, Superfund Project, Howell Township, New Jersey, U.S. Army Corps of Engineers, DACW 41-88-R-0162. Sources: California List, 40 CFR Part 268, Subpart D. No stack testing was done to numerically specify the performance of this unit. Dust collected by the fabric filter was minimal. Since condensation
New optical nova candidate in the M 31 disk
NASA Astrophysics Data System (ADS)
Henze, M.; Sala, G.; Jose, J.; Figueira, J.; Hernanz, M.; Pietsch, W.,
2014-07-01
We report the discovery of a possible nova in the disk of M 31 on two 4x200s stacked R filter CCD images, obtained with the 80 cm Ritchey-Chretien F/9.6 Joan Oro telescope at Observatori Astronomic del Montsec, owned by the Catalan Government and operated by the Institut d'Estudis Espacials de Catalunya, Spain, using a Finger Lakes PL4240-1-BI CCD Camera (with a Class 1 Basic Broadband coated 2k x 2k chip with 13.5 microns sq.
Designing a Small-Sized Engineering Model of Solar EUV Telescope for a Korean Satellite
NASA Astrophysics Data System (ADS)
Han, Jung-Hoon; Jang, Min-Hwan; Kim, Sang-Joon
2001-11-01
For the research of solar EUV (extreme ultraviolet) radiation, we have designed a small-sized engineering model of a solar EUV telescope, which is suitable for a Korean satellite. The EUV solar telescope was designed to observe the Sun at 584.3 Å (He I) and 629.7 Å (O V). The optical system is an f/8 Ritchey-Chrétien, and the effective diameter and focal length are 80 mm and 640 mm, respectively. The He I and O V filters are loaded in a filter wheel. In the detection part, the MCP (MicroChannel Plate) type is Z-stack, and the channel length-to-diameter ratio is 40:1. The MCP and CCD are connected by a fiber-optic taper. A commercial optical design software package is used for the analysis of the optical system design.
Denoising time-domain induced polarisation data using wavelet techniques
NASA Astrophysics Data System (ADS)
Deo, Ravin N.; Cull, James P.
2016-05-01
Time-domain induced polarisation (TDIP) methods are routinely used for near-surface evaluations in quasi-urban environments harbouring networks of buried civil infrastructure. A conventional technique for improving signal to noise ratio in such environments is by using analogue or digital low-pass filtering followed by stacking and rectification. However, this induces large distortions in the processed data. In this study, we have conducted the first application of wavelet based denoising techniques for processing raw TDIP data. Our investigation included laboratory and field measurements to better understand the advantages and limitations of this technique. It was found that distortions arising from conventional filtering can be significantly avoided with the use of wavelet based denoising techniques. With recent advances in full-waveform acquisition and analysis, incorporation of wavelet denoising techniques can further enhance surveying capabilities. In this work, we present the rationale for utilising wavelet denoising methods and discuss some important implications, which can positively influence TDIP methods.
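A minimal illustration of wavelet denoising of the kind discussed: a one-level Haar transform with soft thresholding of the detail coefficients. The decay curve and noise level are synthetic, and production code would use multi-level transforms (e.g. via PyWavelets) rather than this single-level sketch:

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar DWT, soft-threshold the detail band, then invert.
    len(x) must be even."""
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0.0)
    out = np.empty_like(x, dtype=float)
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 256)
clean = np.exp(-3.0 * t)                     # idealised IP decay curve
noisy = clean + rng.normal(0.0, 0.05, t.size)
denoised = haar_denoise(noisy, thresh=0.1)
print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())
```

Unlike a low-pass filter, thresholding in the wavelet domain suppresses noise without smearing the sharp early-time portion of the decay, which is the distortion issue the abstract raises about conventional filtering.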
A design multifunctional plasmonic optical device by micro ring system
NASA Astrophysics Data System (ADS)
Pornsuwancharoen, N.; Youplao, P.; Amiri, I. S.; Ali, J.; Yupapin, P.
2018-03-01
A multi-function electronic device based on the plasmonic circuit is designed and simulated by using the micro-ring system. From which a nonlinear micro-ring resonator is employed and the selected electronic devices such as rectifier, amplifier, regulator and filter are investigated. A system consists of a nonlinear micro-ring resonator, which is known as a modified add-drop filter and made of an InGaAsP/InP material. The stacked waveguide of an InGaAsP/InP - graphene -gold/silver is formed as a part of the device, the required output signals are formed by the specific control of input signals via the input and add ports. The material and device aspects are reviewed. The simulation results are obtained using the Opti-wave and MATLAB software programs, all device parameters are based on the fabrication technology capability.
NASA Astrophysics Data System (ADS)
Baratin, L. M.; Chamberlain, C. J.; Townend, J.; Savage, M. K.
2016-12-01
Characterising the seismicity associated with slow deformation in the vicinity of the Alpine Fault may provide constraints on the state of stress of this major transpressive margin prior to a large (≥M8) earthquake. Here, we use recently detected tremor and low-frequency earthquakes (LFEs) to examine how slow tectonic deformation is loading the Alpine Fault toward an anticipated large rupture. We initially work with a continuous seismic dataset collected between 2009 and 2012 from an array of short-period seismometers, the Southern Alps Microearthquake Borehole Array. Fourteen primary LFE templates are used in an iterative matched-filter and stacking routine. This method allows the detection of similar signals and establishes LFE families with common locations. We thus generate a 36-month catalogue of 10,718 LFEs. The detections are then combined for each LFE family using phase-weighted stacking to yield a signal with the highest possible signal-to-noise ratio. We found phase-weighted stacking to be successful in increasing the number of LFE detections by roughly 20%. Phase-weighted stacking also provides cleaner phase arrivals of an apparently impulsive nature, allowing more precise phase and polarity picks. We then compute improved non-linear earthquake locations using a 3D velocity model. We find LFEs to occur below the seismogenic zone at depths of 18-34 km, locating on or near the proposed deep extent of the Alpine Fault. Our next step is to estimate seismic source parameters by implementing a moment tensor inversion technique. Our focus is currently on generating a more extensive catalogue (spanning the years 2009 to 2016) using synthetic waveforms as primary templates with which to detect LFEs. Initial testing shows that this technique, paired with phase-weighted stacking, increases the number of LFE families and overall detected events roughly sevenfold.
This catalogue should provide new insight into the geometry of the Alpine Fault and the prevailing stress field in the central Southern Alps.
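Phase-weighted stacking, as used in the catalogue-building described above, weights the linear stack by the inter-trace phase coherence computed from the analytic signal. A numpy sketch on a synthetic pulse follows; real LFE processing would first window and align the traces:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert transform."""
    X = np.fft.fft(x)
    h = np.zeros(x.size)
    h[0] = 1.0
    h[1:(x.size + 1) // 2] = 2.0
    if x.size % 2 == 0:
        h[x.size // 2] = 1.0
    return np.fft.ifft(X * h)

def phase_weighted_stack(traces, nu=2):
    """Linear stack modulated by instantaneous phase coherence across traces."""
    traces = np.asarray(traces, dtype=float)
    phasors = np.array([analytic_signal(tr) for tr in traces])
    phasors /= np.abs(phasors)                 # keep only the phase
    coherence = np.abs(phasors.mean(axis=0)) ** nu
    return traces.mean(axis=0) * coherence

t = np.linspace(0.0, 1.0, 128, endpoint=False)
pulse = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
pws = phase_weighted_stack([pulse, pulse, pulse])
print(np.allclose(pws, pulse))  # identical traces are fully coherent -> True
```

Samples where the traces agree in phase pass through almost unchanged, while incoherent noise is down-weighted, which is why the stacked arrivals look cleaner and more impulsive.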
NASA Astrophysics Data System (ADS)
Bauer, Klaus; Pussak, Marcin; Stiller, Manfred; Bujakowski, Wieslaw
2014-05-01
Self-organizing maps (SOM) are neural network techniques which can be used for the joint interpretation of multi-disciplinary data sets. In this investigation we apply SOM within a geothermal exploration project using 3D seismic reflection data. The study area is located in the central part of the Polish basin. Several sedimentary target horizons were identified at this location based on fluid flow rate measurements in the geothermal research well Kompina-2. The general objective is a seismic facies analysis and characterization of the major geothermal target reservoir. A 3D seismic reflection experiment with a sparse acquisition geometry was carried out around well Kompina-2. Conventional signal processing (amplitude corrections, filtering, spectral whitening, deconvolution, static corrections, muting) was followed by normal-moveout (NMO) stacking, and, alternatively, by common-reflection-surface (CRS) stacking. Different signal attributes were then derived from the stacked images including root-mean-square (RMS) amplitude, instantaneous frequency and coherency. Furthermore, spectral decomposition attributes were calculated based on the continuous wavelet transform. The resulting attribute maps along major target horizons appear noisy after the NMO stack and clearly structured after the CRS stack. Consequently, the following SOM-based multi-parameter signal attribute analysis was applied only to the CRS images. We applied our SOM work flow, which includes data preparation, unsupervised learning, segmentation of the trained SOM using image processing techniques, and final application of the learned knowledge. For the Lower Jurassic target horizon Ja1 we derived four different clusters with distinct seismic attribute signatures. As the most striking feature, a corridor parallel to a fault system was identified, which is characterized by decreased RMS amplitudes and low frequencies. 
In our interpretation we assume that this combination of signal properties can be explained by increased fracture porosity and enhanced fluid saturation within this part of the Lower Jurassic sandstone horizon. Hence, we suggest that a future drilling should be carried out within this compartment of the reservoir.
Casero, Ramón; Siedlecka, Urszula; Jones, Elizabeth S; Gruscheski, Lena; Gibb, Matthew; Schneider, Jürgen E; Kohl, Peter; Grau, Vicente
2017-05-01
Traditional histology is the gold standard for tissue studies, but it is intrinsically reliant on two-dimensional (2D) images. Study of volumetric tissue samples such as whole hearts produces a stack of misaligned and distorted 2D images that need to be reconstructed to recover a congruent volume with the original sample's shape. In this paper, we develop a mathematical framework called Transformation Diffusion (TD) for stack alignment refinement as a solution to the heat diffusion equation. This general framework does not require contour segmentation, is independent of the registration method used, and is trivially parallelizable. After the first stack sweep, we also replace registration operations by operations in the space of transformations, several orders of magnitude faster and less memory-consuming. Implementing TD with operations in the space of transformations produces our Transformation Diffusion Reconstruction (TDR) algorithm, applicable to general transformations that are closed under inversion and composition. In particular, we provide formulas for translation and affine transformations. We also propose an Approximated TDR (ATDR) algorithm that extends the same principles to tensor-product B-spline transformations. Using TDR and ATDR, we reconstruct a full mouse heart at pixel size 0.92µm×0.92µm, cut 10µm thick, spaced 20µm (84G). Our algorithms employ only local information from transformations between neighboring slices, but the TD framework allows theoretical analysis of the refinement as applying a global Gaussian low-pass filter to the unknown stack misalignments. We also show that reconstruction without an external reference produces large shape artifacts in a cardiac specimen while still optimizing slice-to-slice alignment. 
To overcome this problem, we use a pre-cutting blockface imaging process previously developed by our group that takes advantage of Brewster's angle and a polarizer to capture the outline of only the topmost layer of wax in the block containing embedded tissue for histological sectioning. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
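The Transformation Diffusion idea, restricted to 1-D translations for illustration, amounts to explicit heat-equation smoothing of the per-slice alignment parameters. This toy sketch is not the paper's TDR/ATDR algorithm, which operates in the space of general transformations closed under inversion and composition:

```python
import numpy as np

def diffuse_translations(shifts, alpha=0.4, n_iter=30):
    """Relax per-slice translation estimates by explicit heat-equation steps:
    high-frequency slice-to-slice jitter decays much faster than the smooth,
    anatomy-induced drift (alpha <= 0.5 keeps the scheme stable)."""
    s = np.asarray(shifts, dtype=float).copy()
    for _ in range(n_iter):
        lap = np.empty_like(s)
        lap[1:-1] = s[:-2] - 2.0 * s[1:-1] + s[2:]
        lap[0] = s[1] - s[0]                   # free (Neumann) ends
        lap[-1] = s[-2] - s[-1]
        s += alpha * lap
    return s

# Smooth through-stack drift plus per-slice jitter (the misalignment).
z = np.linspace(0.0, 1.0, 50)
drift = 5.0 * z ** 2
jitter = np.random.default_rng(4).normal(0.0, 0.5, z.size)
refined = diffuse_translations(drift + jitter)
print(np.abs(refined - drift).mean(), np.abs(jitter).mean())
```

This mirrors the paper's analysis of refinement as a Gaussian low-pass filter on the unknown misalignments; it also shows why an external reference is needed, since a sufficiently smooth "banana" drift survives the diffusion and cannot be distinguished from true shape.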
Patient-Specific Deep Architectural Model for ECG Classification
Luo, Kan; Cuschieri, Alfred
2017-01-01
Heartbeat classification is a crucial step for arrhythmia diagnosis during electrocardiographic (ECG) analysis. The new scenario of wireless body sensor network- (WBSN-) enabled ECG monitoring puts forward a higher-level demand for this traditional ECG analysis task. Previously reported methods mainly addressed this requirement with the application of a shallow structured classifier and expert-designed features. In this study, the modified frequency slice wavelet transform (MFSWT) was first employed to produce the time-frequency image for the heartbeat signal. A deep learning (DL) method was then applied for heartbeat classification. Here, we propose a novel model incorporating automatic feature abstraction and a deep neural network (DNN) classifier. Features were automatically abstracted by the stacked denoising auto-encoder (SDA) from the transformed time-frequency image. The DNN classifier was constructed from an encoder layer of the SDA and a softmax layer. In addition, a deterministic patient-specific heartbeat classifier was achieved by fine-tuning on heartbeat samples, which included a small subset of individual samples. The performance of the proposed model was evaluated on the MIT-BIH arrhythmia database. Results showed that an overall accuracy of 97.5% was achieved using the proposed model, confirming that the proposed DNN model is a powerful tool for heartbeat pattern recognition. PMID:29065597
Detection of chewing from piezoelectric film sensor signals using ensemble classifiers.
Farooq, Muhammad; Sazonov, Edward
2016-08-01
Selection and use of pattern recognition algorithms is application dependent. In this work, we explored the use of several ensembles of weak classifiers to classify signals captured from a wearable sensor system to detect food intake based on chewing. Three sensor signals (piezoelectric sensor, accelerometer, and hand-to-mouth gesture) were collected from 12 subjects in free-living conditions for 24 hrs. Sensor signals were divided into 10-second epochs, and a combination of time- and frequency-domain features was computed for each epoch. We present a comparison of three different ensemble techniques: boosting (AdaBoost), bootstrap aggregation (bagging), and stacking, each trained with three different weak classifiers (Decision Trees, Linear Discriminant Analysis (LDA), and Logistic Regression). The type of feature normalization used can also impact the classification results, so for each ensemble method three feature normalization techniques (no normalization, z-score normalization, and min-max normalization) were tested. A 12-fold cross-validation scheme was used to evaluate each model in terms of precision, recall, and accuracy. The best results achieved here show an improvement of about 4% over our previous algorithms.
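Of the three ensemble techniques compared above, bootstrap aggregation is the simplest to sketch. The toy below uses depth-1 decision stumps as the weak learner (the paper used decision trees, LDA, and logistic regression); the synthetic data and estimator count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_stump(X, y):
    """Weak learner: the best single-feature threshold rule (a depth-1
    decision tree). Labels are in {-1, +1}."""
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(X[:, j] <= t, s, -s)
                err = np.mean(pred != y)
                if err < best_err:
                    best_err, best = err, (j, t, s)
    j, t, s = best
    return lambda Z: np.where(Z[:, j] <= t, s, -s)

def bagging(X, y, n_estimators=25):
    """Bootstrap aggregation: fit each stump on a resample drawn with
    replacement; predict by the sign of the summed (majority) vote."""
    stumps = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(y), len(y))
        stumps.append(fit_stump(X[idx], y[idx]))
    return lambda Z: np.sign(sum(s(Z) for s in stumps))

X = rng.normal(size=(60, 3))
y = np.where(X[:, 0] > 0, 1, -1)     # ground truth depends on feature 0 only
predict = bagging(X, y)
```

An odd number of estimators guarantees the vote never ties; boosting and stacking differ only in how the resamples are weighted and how the votes are combined.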
CrossTalk. The Journal of Defense Software Engineering. Volume 16, Number 11, November 2003
2003-11-01
memory area, and stack pointer. These systems are classified as preemptive or nonpreemptive depending on whether they can preempt an existing task or not...of charge. The Software Technology Support Center was established at Ogden Air Logistics Center (AFMC) by Headquarters U.S. Air Force to help Air...device. A script file could be a list of commands for a command interpreter such as a batch file [15]. A communications port consists of a queue to hold
Classification of Salmonella serotypes with hyperspectral microscope imagery
USDA-ARS?s Scientific Manuscript database
Previous research has demonstrated an optical method with acousto-optic tunable filter (AOTF) based hyperspectral microscope imaging (HMI) had potential for classifying gram-negative from gram-positive foodborne pathogenic bacteria rapidly and nondestructively with a minimum sample preparation. In t...
Development of an Indexing Media Filtration System for Long Duration Space Missions
NASA Technical Reports Server (NTRS)
Agui, Juan H.; Vijayakumar, R.
2013-01-01
The effective maintenance of air quality aboard spacecraft cabins will be vital to future human exploration missions. A key component will be the air cleaning filtration system, which will need to remove a broad size range of particles including skin flakes, hair and clothing fibers, other biological matter, and particulate matter derived from material and equipment wear. In addition, during surface missions any extraterrestrial planetary dust, including dust generated by nearby ISRU equipment, that is tracked into the habitat will also need to be managed by the filtration system inside the pressurized habitat compartments. An indexing media filter system is being developed to meet the demands of long-duration missions; it will provide dramatic increases in filter service life and loading capacity, and will require minimal crew involvement. These features may also benefit other closed systems, such as submarines, and remote terrestrial installations where servicing and replacement of filter units is not practical. The filtration system consists of three stages: an inertial impactor stage, an indexing media stage, and a high-efficiency filter stage, packaged in a stacked modular cartridge configuration. Each stage targets a specific range of particle sizes to optimize the filtration and regeneration performance of the system. A 1/8th-scale and a full-scale prototype of the filter system have been fabricated and tested in laboratory and reduced-gravity environments that simulate conditions on spacecraft, landers, and habitats. Results from recent laboratory and reduced-gravity flight tests will be presented.
Recent developments of film bulk acoustic resonators
NASA Astrophysics Data System (ADS)
Gao, Junning; Liu, Guorong; Li, Jie; Li, Guoqiang
2016-06-01
The film bulk acoustic wave resonator (FBAR) has undergone skyrocketing development over the past 15 years, owing to the explosive growth of mobile communication. It stands out among acoustic filters mainly because of its high quality factor, which enables low insertion loss and sharp roll-off. Beyond their massive application in wireless communication, FBARs are also promising sensors because of their high sensitivity and their ready integration into miniaturized circuits. After summarizing FBAR applications as filters in wireless communication and as sensors in electronic noses, biosensing, and pressure sensing, this paper reviews the main challenges faced by each application. The number of filters installed in mobile phones has been growing explosively, which leads to overcrowded bands and puts harsh requirements on component size and power consumption for each unit; data flow and rate are becoming increasingly demanding as well. This paper discusses three promising technical strategies addressing these issues. Among them, the coupled resonator filter is given particular attention because it can greatly reduce filter size by stacking two or more resonators together, and it is an effective technique for increasing data flow and rate. Temperature compensation methods are discussed, considering their vital influence on frequency stability. Finally, materials improvement and the exploration of novel materials for bandwidth modulation, tunable band acquisition, and quality factor improvement are discussed. The authors call the attention of the academic community to bringing AlN epitaxial thin films into FBAR fabrication and propose a configuration to implement this idea.
NASA Astrophysics Data System (ADS)
Xiao, Zhongxiu
2018-04-01
A method of measuring and correcting the tilt of anti-vibration wind turbines based on a screening algorithm is proposed in this paper. First of all, we design a device whose core is the ADXL203 acceleration sensor; the inclination is measured by installing the device on the tower of the wind turbine as well as in the nacelle. Next, a Kalman filter algorithm is used to filter the signal effectively by establishing a state-space model for signal and noise, and the filter is simulated in MATLAB. Considering the impact of tower and nacelle vibration on the collected data, the original data and the filtered data are classified and stored by the screening algorithm, and the filtered data are then filtered again to make the output data more accurate. Finally, installation errors are eliminated algorithmically to achieve the tilt correction. A device based on this method has the advantages of high precision, low cost, and vibration resistance, and it has a wide range of applications and promotion value.
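The Kalman filtering step described above can be sketched for a scalar tilt signal. This is a generic textbook filter, not the authors' implementation; the process variance `q`, measurement variance `r`, and the simulated vibration noise are assumed values.

```python
import numpy as np

def kalman_smooth(z, q=1e-4, r=0.5):
    """Scalar Kalman filter for a slowly varying tilt angle.
    State model:  x_k = x_{k-1} + w,  w ~ N(0, q)   (tilt drifts slowly)
    Measurement:  z_k = x_k + v,      v ~ N(0, r)   (vibration noise)
    """
    x, p = z[0], 1.0
    out = np.empty_like(z, dtype=float)
    for k, zk in enumerate(z):
        p = p + q                    # predict: state uncertainty grows
        kgain = p / (p + r)          # Kalman gain
        x = x + kgain * (zk - x)     # update with the new measurement
        p = (1 - kgain) * p
        out[k] = x
    return out

rng = np.random.default_rng(0)
true_tilt = 2.0                                    # degrees, assumed constant
z = true_tilt + rng.normal(0, 0.7, 500)            # vibration-corrupted readings
est = kalman_smooth(z)
```

With `q` much smaller than `r`, the steady-state gain is small and the filter behaves like a long exponential average, which is appropriate when the true inclination changes far more slowly than the vibration noise.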
NASA Astrophysics Data System (ADS)
Paul, Subir; Nagesh Kumar, D.
2018-04-01
Hyperspectral (HS) data comprise continuous spectral responses of hundreds of narrow spectral bands with very fine spectral resolution or bandwidth, which enable feature identification and classification with high accuracy. In the present study, a Mutual Information (MI) based Segmented Stacked Autoencoder (S-SAE) approach for spectral-spatial classification of HS data is proposed to reduce the complexity and computational time of Stacked Autoencoder (SAE) based feature extraction. A non-parametric dependency measure (MI) is proposed for spectral segmentation instead of linear, parametric dependency measures, so that both linear and nonlinear inter-band dependency are accounted for when segmenting the HS bands. Morphological profiles are then created from the segmented spectral features to assimilate spatial information into the spectral-spatial classification approach. Two non-parametric classifiers, Support Vector Machine (SVM) with a Gaussian kernel and Random Forest (RF), are used for classification of the three most popular HS datasets. Results of the numerical experiments carried out in this study show that SVM with a Gaussian kernel provides better results for the Pavia University and Botswana datasets, whereas RF performs better for the Indian Pines dataset. The experiments performed with the proposed methodology provide encouraging results compared to numerous existing approaches.
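The MI-based spectral segmentation can be sketched with a histogram MI estimate between adjacent bands. This is an illustrative sketch, not the authors' code: the bin count, threshold, and synthetic cube are assumptions, and in the paper each resulting segment would feed its own stacked autoencoder.

```python
import numpy as np

rng = np.random.default_rng(2)

def mutual_information(x, y, bins=16):
    """Non-parametric (histogram) estimate of MI in nats between two bands."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def segment_bands(cube, threshold=0.5):
    """Cut the spectral axis wherever adjacent-band MI drops below the
    threshold; each contiguous segment then feeds its own small SAE."""
    flat = cube.reshape(-1, cube.shape[-1])
    cuts = [0]
    for b in range(flat.shape[1] - 1):
        if mutual_information(flat[:, b], flat[:, b + 1]) < threshold:
            cuts.append(b + 1)
    return list(zip(cuts, cuts[1:] + [flat.shape[1]]))

# Synthetic 32x32 cube: bands 0-2 share one source, bands 3-5 another.
base1, base2 = rng.normal(size=(2, 32, 32))
cube = np.stack([base1 + 0.1 * rng.normal(size=(32, 32)) for _ in range(3)] +
                [base2 + 0.1 * rng.normal(size=(32, 32)) for _ in range(3)], axis=-1)
```

Because the histogram estimator makes no distributional assumption, the same cut criterion captures nonlinear as well as linear inter-band dependency, which is the stated motivation for preferring MI over correlation.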
NASA Astrophysics Data System (ADS)
Alvandipour, Mehrdad; Umbaugh, Scott E.; Mishra, Deependra K.; Dahal, Rohini; Lama, Norsang; Marino, Dominic J.; Sackman, Joseph
2017-05-01
Thermography and pattern classification techniques are used to classify three different pathologies in veterinary images. Thermographic images of both normal and diseased animals were provided by the Long Island Veterinary Specialists (LIVS). The three pathologies are ACL rupture, bone cancer, and feline hyperthyroidism. Diagnosis of these diseases usually involves radiology and laboratory tests, whereas the method we propose uses thermographic images and image analysis techniques and is intended for use as a prescreening tool. Images in each pathology category are first filtered by Gabor filters, and then various features are extracted and used for classification into normal and abnormal classes. Gabor filters are linear filters characterized by two parameters: wavelength λ and orientation θ. With two different wavelengths and five different orientations, a total of ten filters were studied. Different combinations of camera views, filters, feature vectors, normalization methods, and classification methods produce different tests, each of which was examined to determine its sensitivity, specificity, and success rate. Using the Gabor features alone, sensitivity, specificity, and overall success rates of 85% were achieved for each of the pathologies.
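A ten-filter Gabor bank of the kind described (two wavelengths × five orientations) can be sketched as follows. The specific wavelengths, envelope width `sigma`, and kernel size are assumptions; the paper does not state them.

```python
import numpy as np

def gabor_kernel(wavelength, theta, size=21, sigma=4.0):
    """Real part of a Gabor filter: a Gaussian envelope times a cosine
    carrier of the given wavelength, rotated by orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / wavelength))

# The ten-filter bank from the study: 2 wavelengths x 5 orientations.
bank = [gabor_kernel(lam, th)
        for lam in (4.0, 8.0)
        for th in np.linspace(0, np.pi, 5, endpoint=False)]

def gabor_features(img):
    """Mean and std of each filter-response magnitude -> 20-element vector."""
    feats = []
    for k in bank:
        resp = np.abs(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, img.shape)))
        feats += [resp.mean(), resp.std()]
    return np.array(feats)

img = np.random.default_rng(0).normal(size=(32, 32))
feats = gabor_features(img)
```

Each thermographic image is thereby reduced to a short feature vector that any of the tested classifiers can consume.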
Wishart Deep Stacking Network for Fast POLSAR Image Classification.
Jiao, Licheng; Liu, Fang
2016-05-11
Inspired by the popular deep learning architecture - the Deep Stacking Network (DSN) - a specific deep model for polarimetric synthetic aperture radar (POLSAR) image classification is proposed in this paper, named the Wishart Deep Stacking Network (W-DSN). First, a fast implementation of the Wishart distance is achieved by a special linear transformation, which speeds up the classification of POLSAR images and makes it possible to use this polarimetric information in the subsequent neural network (NN). A single-hidden-layer neural network based on the fast Wishart distance, named the Wishart Network (WN), is then defined for POLSAR image classification and improves the classification accuracy. Finally, a multi-layer neural network is formed by stacking WNs; this is the proposed deep learning architecture W-DSN for POLSAR image classification, and it improves the classification accuracy further. In addition, the structure of a WN can be expanded in a straightforward way by adding hidden units if necessary, as can the structure of the W-DSN. As a preliminary exploration of formulating a specific deep learning architecture for POLSAR image classification, the proposed methods may establish a simple but effective connection between POLSAR image interpretation and deep learning. Experimental results on real POLSAR images show that the fast implementation of the Wishart distance is very efficient (a POLSAR image with 768,000 pixels can be classified in 0.53 s), and that both the single-hidden-layer architecture WN and the deep architecture W-DSN perform well and work efficiently.
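The fast Wishart distance rests on the fact that d(T, Σ) = ln|Σ| + Tr(Σ⁻¹T) is linear in the entries of the pixel coherency matrix T, so the per-class distances for a whole image collapse into one matrix product. A minimal sketch follows (real-valued matrices for simplicity; actual POLSAR coherency matrices are complex Hermitian, and the exact linear transformation in the paper may differ):

```python
import numpy as np

def wishart_linear(centers):
    """Precompute the linear map for the Wishart distance
    d(T, Sigma) = ln|Sigma| + Tr(Sigma^{-1} T).
    One row of A per class center, plus a per-class constant b."""
    A = np.stack([np.linalg.inv(S).T.reshape(-1) for S in centers])
    b = np.array([np.log(np.linalg.det(S)).real for S in centers])
    return A, b

def classify(pixels, A, b):
    """pixels: (N, 3, 3) coherency matrices -> (N,) class labels.
    Tr(Sigma^{-1} T) = vec(Sigma^{-T}) . vec(T), so all distances for the
    whole image come from a single matrix product."""
    d = (pixels.reshape(len(pixels), -1) @ A.T).real + b
    return d.argmin(axis=1)

centers = [np.eye(3), 4 * np.eye(3)]      # two hypothetical class centers
A, b = wishart_linear(centers)
labels = classify(np.stack(centers), A, b)
```

Because the distance computation is now a matrix product, it can also serve directly as the (fixed) first linear layer of the WN, which is the connection the paper exploits.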
Lessons learned in preparing method 29 filters for compliance testing audits.
Martz, R F; McCartney, J E; Bursey, J T; Riley, C E
2000-01-01
Companies conducting compliance testing are required to analyze audit samples at the time they collect and analyze the stack samples if audit samples are available. Eastern Research Group (ERG) provides technical support to the EPA's Emission Measurements Center's Stationary Source Audit Program (SSAP) for developing, preparing, and distributing performance evaluation samples and audit materials. These audit samples are requested through the regulatory agency and include spiked audit materials for EPA Method 29 (Metals Emissions from Stationary Sources), as well as other methods. To provide appropriate audit materials to federal, state, tribal, and local governments, as well as agencies performing environmental activities and conducting emission compliance tests, ERG has recently performed testing of blank filter materials and preparation of spiked filters for EPA Method 29. For sampling stationary sources using an EPA Method 29 sampling train, the use of filters without organic binders containing less than 1.3 µg/in.² of each of the metals to be measured is required. Risk assessment testing imposes even stricter requirements for clean filter background levels. Three vendor sources of quartz fiber filters were evaluated for background contamination to ensure that audit samples would be prepared using filters with the lowest metal background levels. A procedure was developed to test new filters, and a cleaning procedure was evaluated to see if a greater level of cleanliness could be achieved using an acid rinse with new filters. Background levels for filters supplied by different vendors and within lots of filters from the same vendor showed a wide variation, confirmed through contact with several analytical laboratories that frequently perform EPA Method 29 analyses. It has been necessary to repeat more than one compliance test because of suspect metals background contamination levels.
An acid cleaning step produced improvement in contamination level, but the difference was not significant for most of the Method 29 target metals. As a result of our studies, we conclude: Filters for Method 29 testing should be purchased in lots as large as possible. Testing firms should pre-screen new boxes and/or new lots of filters used for Method 29 testing. Random analysis of three filters (top, middle, bottom of the box) from a new box of vendor filters before allowing them to be used in field tests is a prudent approach. A box of filters from a given vendor should be screened, and filters from this screened box should be used both for testing and as field blanks in each test scenario to provide the level of quality assurance required for stationary source testing.
Capela, Nicole A; Lemaire, Edward D; Baddour, Natalie
2015-01-01
Human activity recognition (HAR), using wearable sensors, is a growing area with the potential to provide valuable information on patient mobility to rehabilitation specialists. Smartphones with accelerometer and gyroscope sensors are a convenient, minimally invasive, and low cost approach for mobility monitoring. HAR systems typically pre-process raw signals, segment the signals, and then extract features to be used in a classifier. Feature selection is a crucial step in the process to reduce potentially large data dimensionality and provide viable parameters to enable activity classification. Most HAR systems are customized to an individual research group, including a unique data set, classes, algorithms, and signal features. These data sets are obtained predominantly from able-bodied participants. In this paper, smartphone accelerometer and gyroscope sensor data were collected from populations that can benefit from human activity recognition: able-bodied, elderly, and stroke patients. Data from a consecutive sequence of 41 mobility tasks (18 different tasks) were collected for a total of 44 participants. Seventy-six signal features were calculated and subsets of these features were selected using three filter-based, classifier-independent, feature selection methods (Relief-F, Correlation-based Feature Selection, Fast Correlation Based Filter). The feature subsets were then evaluated using three generic classifiers (Naïve Bayes, Support Vector Machine, j48 Decision Tree). Common features were identified for all three populations, although the stroke population subset had some differences from both able-bodied and elderly sets. Evaluation with the three classifiers showed that the feature subsets produced similar or better accuracies than classification with the entire feature set. Therefore, since these feature subsets are classifier-independent, they should be useful for developing and improving HAR systems across and within populations.
PMID:25885272
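The filter-based, classifier-independent selection idea can be sketched with a simple correlation ranking. Relief-F, CFS, and FCBF, used in the paper, are more sophisticated; this stand-in only illustrates that the ranking happens before, and independently of, any classifier.

```python
import numpy as np

rng = np.random.default_rng(3)

def filter_select(X, y, k):
    """Classifier-independent filter selection: rank features by absolute
    (point-biserial) correlation with the class label, keep the top k."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(-np.abs(corr))[:k]

# 10 candidate signal features, only two of which carry class information.
X = rng.normal(size=(200, 10))
y = rng.integers(0, 2, 200)
X[:, 3] += 2.0 * y            # informative feature
X[:, 7] -= 2.0 * y            # informative feature
selected = filter_select(X, y, k=2)
```

Because the ranking never consults a classifier, the same selected subset can then be handed to Naïve Bayes, an SVM, or a decision tree, exactly as in the evaluation above.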
Ball, M.M.; Soderberg, N.K.
1989-01-01
In August 1979, the U.S. Geological Survey (USGS), aboard the M/V SEISMIC EXPLORER of Seismic Explorations International (SEI), ran 17 lines (1,270 km) of multichannel seismic-reflection profiles on the western Florida Shelf. The main features of the SEI system were (1) a digital recorder with an instantaneous-floating-point-gain constant of 24 dB, (2) a 64-channel hydrophone streamer, 3,200 m long, and (3) a 21-airgun array that had a total volume of 2,000 in³ and a pressure of 2,000 psi. The offset from the array to the center of the farthest phone group was 3,338 m and to the nearest phone group, 188 m. Shot points were 50 m apart to obtain a 32-fold stack. Navigation was by an integrated satellite/Loran/doppler-sonar system. The SEI data were processed by Geophysical Data Processing Center, Inc. of Houston, Texas. Processing procedures were standard with the following exceptions: (1) A deringing deconvolution that had a 128-ms operator length was done prior to stacking. (2) A time-variant predictive deconvolution that had a filter operator length of 100 ms and automatic picking of the second zero-crossing was applied after stacking to further suppress multiple energy. (3) Velocity analyses were performed every 3 km, using a technique that included the determination and consideration of both the amount and direction of apparent dip. (4) Automatic gain ranging using a 750-ms window was applied pre- and post-stack. (5) Lines affected by the sea floor's angle of slope were deconvolved again before stacking, and time-variant filter parameters were adjusted to follow the sea-floor geometry. The data taken with the 3,200-m streamer and 2,000-in³ airgun array aboard M/V SEISMIC EXPLORER (Arabic numerals) are vastly superior to those obtained by R/V GYRE using a much smaller streamer and source (Roman numerals).
The former consistently show coherent primary events from within the units underlying the Mesozoic section on the western Florida Shelf, while the latter tend to do so only in the inshore area where pre-Mesozoic basement occurs at depths of less than 2 km. The R/V GYRE data were open-filed previously (Ball and others, 1987). A synthesis of both sets of data is included in Ball and others (1988). Reflectors correlate to the full 8-s duration of recording time. A number of lines were restarted due to equipment failure; no areas were omitted, but shotpoints overlap. The original records may be seen at the USGS Branch of Atlantic Marine Geology offices in Woods Hole, Mass. Copies of the multichannel data may be purchased only from the National Geophysical Data Center, NOAA, Code E64, 325 Broadway, Boulder, CO 80303 (tel. 303/497-6345).
radon daughters are associated has a greater ability to penetrate the various filter media than has the fission product debris in the atmosphere; therefore the former is associated with aerosols of smaller size. A preliminary evaluation of the technique of employing packs of filters of different retentivity characteristics to determine the particle size and/or particle size distribution of radioactive aerosols has been made, which indicates the feasibility of the method. It is recommended that a series of measurements be undertaken to determine the relative particle size
Geometric subspace methods and time-delay embedding for EEG artifact removal and classification.
Anderson, Charles W; Knight, James N; O'Connor, Tim; Kirby, Michael J; Sokolov, Artem
2006-06-01
Generalized singular-value decomposition is used to separate multichannel electroencephalogram (EEG) into components found by optimizing a signal-to-noise quotient. These components are used to filter out artifacts. Short-time principal components analysis of time-delay embedded EEG is used to represent windowed EEG data to classify EEG according to which mental task is being performed. Examples are presented of the filtering of various artifacts and results are shown of classification of EEG from five mental tasks using committees of decision trees.
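Time-delay embedding followed by short-time PCA can be sketched as follows; the embedding dimension and the pure-sine test signal are illustrative assumptions, not the authors' EEG settings.

```python
import numpy as np

def delay_embed(x, dim, tau=1):
    """Time-delay embedding: each row is (x[t], x[t+tau], ..., x[t+(dim-1)tau])."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau: i * tau + n] for i in range(dim)], axis=1)

def window_pca(x, dim=8):
    """Short-time PCA of the embedded window: the singular-value spectrum
    reveals the effective dimensionality of the local dynamics."""
    E = delay_embed(np.asarray(x, float), dim)
    E = E - E.mean(axis=0)
    return np.linalg.svd(E, compute_uv=False)

# A pure oscillation embeds onto a 2D ellipse: two dominant components.
t = np.linspace(0, 1, 512)
s = window_pca(np.sin(2 * np.pi * 10 * t))
```

For EEG, the leading components of each window would serve as the compact representation fed to the decision-tree committees; artifacts with a distinctive signal-to-noise structure can be projected out before this step.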
Biophotonic markers of malignancy: Discriminating cancers using wavelength-specific biophotons.
Murugan, Nirosha J; Rouleau, Nicolas; Karbowski, Lukasz M; Persinger, Michael A
2018-03-01
Early detection is a critically important factor in successfully diagnosing and treating cancer. Whereas contemporary molecular techniques are capable of identifying biomarkers associated with cancer, surgical interventions are required to biopsy tissue. The common imaging alternative, positron-emission tomography (PET), involves the use of nuclear material, which poses some risks. Novel, non-invasive techniques to assess the degree to which tissues express malignant properties are now needed. Recent developments in biophoton research have made it possible to discriminate cancerous cells from normal cells both in vitro and in vivo. The current study expands upon a growing body of literature in which we classified and characterized malignant and non-malignant cell types according to their biophotonic activity. Using wavelength-exclusion filters, we demonstrate that ratios between infrared and ultraviolet photon emissions differentiate cancer and non-cancer cell types. Further, we identified photon sources associated with three filters (420 nm, 620 nm, and 950 nm) which classified cancer and non-cancer cell types. The temporal increases in biophoton emission within these wavelength bandwidths are shown to be coupled with intrinsic biomolecular events using Cosic's resonant recognition model. Together, the findings suggest that the use of wavelength-exclusion filters in biophotonic measurement can be employed to detect cancer in vitro.
NASA Astrophysics Data System (ADS)
Jen, Yi-Jun; Jhang, Yi-Ciang; Liu, Wei-Chih
2017-08-01
A multilayer comprising ultra-thin metal and dielectric films has been investigated and applied as a layered metamaterial. By arranging metal and dielectric films alternately and symmetrically, the equivalent admittance and refractive index can be tailored separately. The tailored admittance and refractive index enable the design of optical filters with more flexibility. Admittance matching is achieved via admittance tracing in the normalized admittance diagram. In this work, an ultra-thin light absorber is designed as a multilayer composed of one or several cells, each cell being a seven-layered film stack. The design concept is to make the extinction as large as possible under the condition of admittance matching. For a seven-layered symmetrical film stack arranged as Ta2O5 (45 nm)/ a-Si (17 nm)/ Cr (30 nm)/ Al (30 nm)/ Cr (30 nm)/ a-Si (17 nm)/ Ta2O5 (45 nm), the mean equivalent admittance and extinction coefficient over the visible regime are 1.4+0.2i and 2.15, respectively. The unit cell on a transparent BK7 glass substrate absorbs 99% of normally incident light energy when the incident medium is glass. In addition, a transmission-induced metal-dielectric film stack is investigated using the admittance matching method, and the equivalent anisotropic property of the metal-dielectric multilayer, as it varies with wavelength and nanostructure, is investigated here.
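The equivalent-admittance calculation behind such designs is the standard thin-film characteristic-matrix method (normal incidence shown). This is a generic sketch, not the authors' code; a real design would use measured complex indices for Cr, Al, a-Si, and Ta2O5 at each wavelength, and the quarter-wave check below uses assumed lossless indices.

```python
import numpy as np

def char_matrix(n, d, wavelength):
    """2x2 characteristic matrix of one layer at normal incidence;
    n may be complex (metal layers), d is the physical thickness."""
    delta = 2 * np.pi * n * d / wavelength      # phase thickness
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def equivalent_admittance(layers, wavelength, n_sub):
    """Multiply the layer matrices in order of incidence, then read off
    the input admittance Y = C/B seen from the incident medium."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ char_matrix(n, d, wavelength)
    B, C = M @ np.array([1.0, n_sub])
    return C / B

# Sanity check: a quarter-wave layer (n=2) on glass (n=1.5) transforms
# the substrate admittance to n^2 / n_sub.
Y = equivalent_admittance([(2.0, 550.0 / 8.0)], 550.0, 1.5)
```

Tracing Y as layers are added is exactly the admittance-tracing procedure mentioned above; absorption is maximized by matching Y to the incident-medium admittance while keeping the stack's extinction large.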
Pre-Processing and Cross-Correlation Techniques for Time-Distance Helioseismology
NASA Astrophysics Data System (ADS)
Wang, N.; de Ridder, S.; Zhao, J.
2014-12-01
In chaotic wave fields excited by a random distribution of noise sources, cross-correlation of the recordings made at two stations yields the interstation wave-field response. After early successes in helioseismology, laboratory studies, and earth seismology, this technique found broad application in global and regional seismology. This development came with an increasing understanding of pre-processing and cross-correlation workflows that yield an optimal signal-to-noise ratio (SNR). Helioseismologists, by contrast, have not yet studied different spectral-whitening and cross-correlation workflows, and instead rely heavily on stacking to increase the SNR. The recordings vary considerably between sunspots and regular portions of the sun; within a sunspot, the periodic effects of the observation satellite's orbit are difficult to remove. We remove a running alpha-mean from the data and apply a soft clip to deal with data glitches. The recordings contain energy of both flows and waves, so a frequency-domain filter selects the wave energy. The data are then input to several pre-processing and cross-correlation techniques common in earth seismology. We anticipate that spectral whitening will flatten the energy spectrum of the cross-correlations. We also expect the cross-correlations to converge faster to their expected value when the data are processed over overlapping windows. The results of this study are expected to aid in decreasing the stacking required while maintaining good SNR.
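Spectral whitening plus correlation over overlapping windows, as proposed above, can be sketched in a few lines. This is a generic noise-interferometry sketch, not the authors' pipeline; the window length, step, and the synthetic delayed-copy test signal are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def whitened_xcorr(a, b, eps=1e-8):
    """Cross-correlate two records after spectral whitening: dividing each
    spectrum by its own magnitude flattens the energy spectrum of the
    correlation, so no narrow frequency band dominates."""
    A, B = np.fft.rfft(a), np.fft.rfft(b)
    A /= np.abs(A) + eps
    B /= np.abs(B) + eps
    return np.fft.irfft(A * np.conj(B))

def stacked_xcorr(a, b, win=512, step=256):
    """Average correlations over 50%-overlapping windows; overlap extracts
    more independent estimates from the same record, so the stack
    converges to its expected value faster."""
    starts = range(0, len(a) - win + 1, step)
    out = sum(whitened_xcorr(a[s:s + win], b[s:s + win]) for s in starts)
    return out / len(starts)

a = rng.normal(size=4096)                          # "station 1" noise record
b = np.roll(a, -5) + 0.3 * rng.normal(size=4096)   # delayed copy plus noise
cc = stacked_xcorr(a, b)
```

The stacked correlation peaks at the interstation travel time (here the 5-sample synthetic delay), which is the quantity time-distance helioseismology inverts for.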
Temporal changes in shear velocity from ambient noise at New Zealand geothermal fields
NASA Astrophysics Data System (ADS)
Civilini, F.; Savage, M. K.; Townend, J.
2016-12-01
We use ambient noise to compare shear velocity changes with geothermal production processes at the Ngatamariki and Rotokawa geothermal fields, located in the central North Island of New Zealand. We calculate shear velocity changes through an analysis of cross correlation functions of diffusive seismic wavefields between stations, which are proportional to Green's functions of the station path. Electricity production at Ngatamariki uses an 82 MW binary-type power station manufactured by Ormat Technologies, which began operations in mid-2013 and is owned and operated by Mighty River Power. The "Nga Awa Purua" triple-flash power plant at the Rotokawa geothermal field was established in 2010 in partnership between Mighty River Power and the Tauhara North No. 2 Trust and currently operates 174 MW of generation. The seismometers of both networks, deployed primarily to observe microseismicity within the field, were installed prior to well stimulation and the start of production. Although cultural noise dominates the energy spectrum, a strong natural ambient noise signal can be detected when filtering below 1 Hz. Despite similar noise settings, the signal-to-noise ratio of cross correlation stacks at Rotokawa was more than two times greater than at Ngatamariki. We use stacks of cross correlations between stations prior to the onset of production as references, and compare them with cross correlations of moving stacks in time periods of well stimulation and the onset of electricity production.
NASA Astrophysics Data System (ADS)
Chen, Yangyang; Huang, Guoliang
2017-04-01
A great deal of research has been devoted to controlling the dynamic behaviors of phononic crystals and metamaterials by directly tuning the frequency regions and/or widths of their inherent band gaps. Here, we present a novel approach to achieve extremely broadband flexural wave/vibration attenuation based on tunable local resonators made of piezoelectric stacks shunted by hybrid negative capacitance and negative inductance circuits, with proof masses attached to a host beam. First, wave dispersion relations of the adaptive metamaterial beam are calculated analytically by using the transfer matrix method. The unique modulus tuning properties induced by the hybrid shunting circuits are then characterized conceptually, from which the frequency-dependent modulus tuning curves of the piezoelectric stack located within wave attenuation frequency regions are quantitatively identified. As an example, a flexural wave high-pass band filter with a wave attenuation region from 0 to 23.0 kHz is demonstrated analytically and numerically by using the hybrid shunting circuit, in which the two electric components are connected in series. By changing the connection pattern to be parallel, another super-wide wave attenuation region from 13.5 to 73.0 kHz is demonstrated to function as a low-pass filter at a subwavelength scale. The proposed adaptive metamaterial possesses a super-wide band gap created both naturally and artificially. Therefore, it can be used for transient wave mitigation at extremely broadband frequencies such as blast or impact loadings. We envision that the proposed design and approach can open many possibilities in broadband vibration and wave control.
Advancements to the planogram frequency–distance rebinning algorithm
Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E
2010-01-01
In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. 
Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact reconstruction) and planogram filtered backprojection image reconstruction algorithms. We show that the PFDRX algorithm produces images that are nearly as accurate as images reconstructed with the planogram filtered backprojection algorithm and more accurate than images reconstructed with the PFDR+FBP algorithm. Both the PFDR+FBP and PFDRX algorithms provide a dramatic improvement in computation time over the planogram filtered backprojection algorithm. PMID:20436790
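The ramp-filter-plus-backprojection step that PFDR+FBP builds on can be sketched for the ordinary parallel-beam case. This is a generic minimal illustration, not the planogram/linogram geometry of the paper:

```python
import numpy as np

def ramp_filter(projection):
    # Frequency-domain ramp (|f|) filter -- the "filtered" part of FBP.
    freqs = np.fft.fftfreq(len(projection))
    return np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))

def backproject(sinogram, angles, size):
    # Smear each filtered projection back across the image along its view angle.
    recon = np.zeros((size, size))
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size] - c
    for proj, theta in zip(sinogram, angles):
        t = x * np.cos(theta) + y * np.sin(theta) + c   # detector coordinate
        recon += np.interp(t.ravel(), np.arange(size), proj).reshape(size, size)
    return recon * np.pi / len(angles)
```

The exact-reconstruction variant (PFDRX) described in the abstract corresponds to replacing the ramp with the explicit modified filter derived there.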
Determining Pu-239 content by resonance transmission analysis using a filtered reactor beam.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klann, R. T.
A novel technique has been developed at Argonne National Laboratory to determine the ²³⁹Pu content in EBR-II blanket elements using resonance transmission analysis (RTA) with a filtered reactor beam. The technique uses cadmium and gadolinium filters along with a ²³⁹Pu fission chamber to isolate the 0.3 eV resonance in ²³⁹Pu. In the energy range from 0.1 to 0.5 eV, the total microscopic cross-section of ²³⁹Pu is significantly larger than the cross-sections of ²³⁸U and ²³⁵U. This large difference in cross-section allows small amounts of ²³⁹Pu to be detected in uranium samples. Tests using a direct beam from a 250 kW TRIGA reactor have been performed with stacks of depleted uranium and ²³⁹Pu foils. Preliminary measurement results are in good agreement with the predicted results up to about two weight percent of ²³⁹Pu in the sample. In addition, measured ²³⁹Pu masses were in agreement with actual sample masses with uncertainties less than 3.8 percent.
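The sensitivity that the resonance exploits follows directly from Beer-Lambert attenuation. The sketch below illustrates the idea with hypothetical cross-section values (the real evaluated values near 0.3 eV differ; only the orders of magnitude matter for the illustration):

```python
import math

# Hypothetical total cross-sections (barns) near the 0.3 eV resonance; the
# real evaluated values differ, but the contrast illustrates why a small
# Pu-239 admixture is visible against a uranium matrix.
SIGMA = {"Pu239": 3000.0, "U238": 12.0, "U235": 90.0}

def transmission(areal_densities):
    # Beer-Lambert attenuation: T = exp(-sum_i N_i * sigma_i),
    # with N_i in atoms/barn so the exponent is dimensionless.
    return math.exp(-sum(n * SIGMA[iso] for iso, n in areal_densities.items()))
```

Even a small ²³⁹Pu areal density measurably lowers the transmission relative to uranium alone, which is what the fission-chamber measurement detects.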
ISO Guest Observer Data Analysis and LWS Instrument Team Activities
NASA Technical Reports Server (NTRS)
Smith, Howard
2002-01-01
This project was granted a no-cost extension prompted by the request of the major subcontractor, the Naval Research Laboratory (NRL), which had not yet completed its tasks. As of July 2002, they had made substantial progress. They have successfully fabricated a metal mesh grid on polyimide, and also successfully fabricated a 2-layer metal mesh infrared filter using stacks of these metal mesh grids on polyimide; the actual layering was done at SAO. Both warm and cold spectroscopic tests were performed on these fabricated devices. The measurements were in good agreement with theory, and also showed reasonable performance in absolute terms. NRL is now working on fabricating a 3-layer metal mesh infrared filter, and a prototype is expected in the next month. Testing should occur before the end of the fiscal year. Finally, NRL has preliminarily agreed to hire a new postdoctoral researcher to refine the modeling of the filters based on the new measurements. This person should arrive in the fall. NRL has a new Fourier Transform Spectrometer which will be delivered in the next month and will be used to facilitate the testing, which has up to now been done in collaboration with NASA Goddard Space Flight Center.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trichandi, Rahmantara, E-mail: rachmantara.tri@gmail.com; Yudistira, Tedi; Nugraha, Andri Dian
Ambient noise tomography is a relatively new method for imaging the shallow structure of the Earth's subsurface. We present the application of this method to produce Rayleigh wave group velocity maps around the Merapi Volcano, Central Java. Rayleigh wave group velocity maps were reconstructed from the cross-correlation of ambient noise recorded by the DOMERAPI array, which consists of 43 broadband seismometers. In the processing stage, we first filtered the observation data to separate the noise from the signal dominated by the strong volcanic activities. Next, we cross-correlated the filtered data and stacked the results to obtain the Green's function for all possible station pairs. Then we carefully picked the peak of each Green's function to estimate the dispersion trend and applied the Multiple Filter Technique to obtain the dispersion curve. Inter-station group velocity curves are inverted to produce Rayleigh wave group velocity maps for periods of 1 to 10 s. The resulting Rayleigh group velocity maps show interesting features around the Merapi Volcano which generally agree with previous studies. The Merapi-Lawu Anomaly (MLA) emerges as a relatively low-velocity anomaly in our group velocity maps.
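The core cross-correlate-and-stack step can be sketched with synthetic data: two "stations" record the same random noise wavefield with a fixed delay, and the stacked cross-correlation peaks at that delay (the empirical Green's function arrival). All numbers below are illustrative assumptions, not DOMERAPI parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_lag = 4096, 25                  # samples per window, inter-station delay
stack = np.zeros(2 * n - 1)
for _ in range(20):                     # stack many noise windows
    src = rng.normal(size=n + true_lag)
    sta_a = src[:n]                     # station A
    sta_b = src[true_lag:true_lag + n]  # station B sees the wavefield 25 samples earlier
    stack += np.correlate(sta_a, sta_b, mode="full")
lag = int(np.argmax(stack)) - (n - 1)   # peak lag ~ inter-station travel time
```

Stacking over windows is what suppresses the incoherent part of the noise; with real data, the peak lag at each period yields the inter-station group velocity.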
Benchmark studies on the building blocks of DNA. 3. Watson-Crick and stacked base pairs.
Szalay, Péter G; Watson, Thomas; Perera, Ajith; Lotrich, Victor; Bartlett, Rodney J
2013-04-18
Excited states of stacked adenine-thymine and guanine-cytosine pairs as well as the Watson-Crick pair of guanine-thymine have been investigated using the equation of motion coupled-cluster (EOM-CC) method with single and double as well as approximate triple excitations. Transitions have been assigned, and the form of the excitations has been analyzed. The majority of the excitations could be classified as localized on the nucleobases, but for all three studied systems, charge-transfer (CT) transitions could also be identified. The main aim of this study was to compare the performance of lower-level methods (ADC(2) and TDDFT) to the high-level EOM-CC ones. It was shown that both ADC(2) and TDDFT with long-range correction have nonsystematic error in excitation energies, causing alternation of the energetic ordering of the excitations. Considering the high costs of the EOM-CC calculations, there is a need for reliable new approximate methods.
Kalman Filter for Calibrating a Telescope Focal Plane
NASA Technical Reports Server (NTRS)
Kang, Bryan; Bayard, David
2006-01-01
The instrument-pointing frame (IPF) Kalman filter, and an algorithm that implements this filter, have been devised for calibrating the focal plane of a telescope. As used here, calibration signifies, more specifically, a combination of measurements and calculations directed toward ensuring accuracy in aiming the telescope and determining the locations of objects imaged in various arrays of photodetectors in instruments located on the focal plane. The IPF Kalman filter was originally intended for application to a spaceborne infrared astronomical telescope, but can also be applied to other spaceborne and ground-based telescopes. In the traditional approach to calibration of a telescope, (1) one team of experts concentrates on estimating parameters (e.g., pointing alignments and gyroscope drifts) that are classified as being of primarily an engineering nature, (2) another team of experts concentrates on estimating calibration parameters (e.g., plate scales and optical distortions) that are classified as being primarily of a scientific nature, and (3) the two teams repeatedly exchange data in an iterative process in which each team refines its estimates with the help of the data provided by the other team. This iterative process is inefficient and uneconomical because it is time-consuming and entails the maintenance of two survey teams and the development of computer programs specific to the requirements of each team. Moreover, theoretical analysis reveals that the engineering/science iterative approach is not optimal in that it does not yield the best estimates of focal-plane parameters and, depending on the application, may not even enable convergence toward a set of estimates.
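The estimation machinery the IPF filter generalizes is the standard Kalman update. As a minimal sketch (a scalar constant-state filter, far simpler than the joint engineering/science parameter estimation described above; the noise values are arbitrary):

```python
import numpy as np

def kalman_constant(measurements, r, x0=0.0, p0=1.0):
    # Scalar Kalman filter for a constant state: x_k = x_{k-1}, z_k = x_k + v_k,
    # where the measurement noise v_k has variance r.
    x, p = x0, p0
    for z in measurements:
        g = p / (p + r)        # Kalman gain
        x = x + g * (z - x)    # measurement update
        p = (1.0 - g) * p      # covariance update
    return x

rng = np.random.default_rng(1)
true_value = 3.0
z = true_value + 0.5 * rng.normal(size=500)
est = kalman_constant(z, r=0.25)
```

The point of the IPF approach is that one such filter can carry engineering and science parameters in a single state vector, so both are estimated jointly and optimally rather than by iterating between two teams.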
Compact Focal Plane Assembly for Planetary Science
NASA Technical Reports Server (NTRS)
Brown, Ari; Aslam, Shahid; Huang, Wei-Chung; Steptoe-Jackson, Rosalind
2013-01-01
A compact radiometric focal plane assembly (FPA) has been designed in which the filters are individually co-registered over compact thermopile pixels. This allows for construction of an ultralightweight and compact radiometric instrument. The FPA also incorporates micromachined baffles in order to mitigate crosstalk and low-pass filter windows in order to eliminate high-frequency radiation. Compact metal mesh bandpass filters were fabricated for the far infrared (FIR) spectral range (17 to 100 microns), a game-changing technology for future planetary FIR instruments. This fabrication approach allows the dimensions of individual metal mesh filters to be tailored with better than 10- micron precision. In contrast, conventional compact filters employed in recent missions and in near-term instruments consist of large filter sheets manually cut into much smaller pieces, which is a much less precise and much more labor-intensive, expensive, and difficult process. Filter performance was validated by integrating them with thermopile arrays. Demonstration of the FPA will require the integration of two technologies. The first technology is compact, lightweight, robust against cryogenic thermal cycling, and radiation-hard micromachined bandpass filters. They consist of a copper mesh supported on a deep reactive ion-etched silicon frame. This design architecture is advantageous when constructing a lightweight and compact instrument because (1) the frame acts like a jig and facilitates filter integration with the FPA, (2) the frame can be designed so as to maximize the FPA field of view, (3) the frame can be simultaneously used as a baffle for mitigating crosstalk, and (4) micron-scale alignment features can be patterned so as to permit high-precision filter stacking and, consequently, increase the filter bandwidth and sharpen the out-of-band rolloff. The second technology consists of leveraging, from another project, compact and lightweight Bi0.87Sb0.13/Sb arrayed thermopiles. 
These detectors consist of 30-layer thermopiles deposited in series upon a silicon nitride membrane. At 300 K, the thermopile arrays are highly linear over many orders of magnitude of incident IR power, and have a reported specific detectivity that exceeds the requirements imposed on future mission concepts. The bandpass filter array board is integrated with a thermopile array board by mounting both boards on a machined aluminum jig.
High-temperature sapphire optical sensor fiber coatings
NASA Astrophysics Data System (ADS)
Desu, Seshu B.; Claus, Richard O.; Raheem, Ruby; Murphy, Kent A.
1990-10-01
Advanced coal-fired power generation systems, such as pressurized fluidized-bed combustors and integrated gasifier-combined cycles, may provide cost effective future alternatives for power generation, improve our utilization of coal resources, and decrease our dependence upon oil and gas. When coal is burned or converted to combustible gas to produce energy, mineral matter and chemical compounds are released as solid and gaseous contaminants. The control of contaminants is mandatory to prevent pollution as well as degradation of equipment in advanced power generation. To eliminate the need for expensive heat recovery equipment and to avoid efficiency losses it is desirable to develop a technology capable of cleaning the hot gas. For this technology the removal of particle contaminants is of major concern. Several prototype high temperature particle filters have been developed, including ceramic candle filters, ceramic bag filters, and ceramic cross-flow (CXF) filters. Ceramic candle filters are rigid, tubular filters typically made by bonding silicon carbide or alumina-silica grains with clay bonding materials and perhaps including alumina-silica fibers. Ceramic bag filters are flexible and are made from long ceramic fibers such as alumina-silica. CXF filters are rigid filters made of stacks of individual lamina through which the dirty and clean gases flow in cross-wise directions. CXF filters are advantageous for hot gas cleanup applications since they offer a large effective filter surface per unit volume. The relatively small size of the filters allows the pressurized vessel containing them to be small, thus reducing potential equipment costs. CXF filters have shown promise but have experienced degradation at normal operational high temperatures (close to 1173K) and high pressures (up to 24 bars). 
Observed degradation modes include delamination of the individual tile layers, cracking at either the tile-torid interface or at the mounting flange, or plugging of the filter. These modes may be attributed to a number of material degradation mechanisms, such as thermal shock, oxidation corrosion of the material, mechanical loads, or phase changes in the filter material. Development of high temperature optical fiber (sapphire) sensors embedded in the CXF filters would be very valuable for both monitoring the integrity of the filter during its use and understanding the mechanisms of degradation such that durable filter development will be facilitated. Since the filter operating environment is very harsh, the high temperature sapphire optical fibers need to be protected and for some sensing techniques the fiber must also be coated with low refractive index film (cladding). The objective of the present study is to identify materials and develop process technologies for the application of claddings and protective coatings that are stable and compatible with sapphire fibers at both high temperatures and pressures.
Obscenity Detection Using Haar-Like Features and Gentle Adaboost Classifier
Min, Yang; Zhu, Dingju
2014-01-01
Images with large areas of exposed skin are commonly flagged as obscene. This criterion alone may yield many false positives on images containing skin-like objects, and may miss images with only partially exposed skin that nonetheless show erotogenic human body parts. This paper presents a novel method for detecting nipples in pornographic image contents. The nipple is treated as an erotogenic organ for identifying pornographic content in images. In this research, a Gentle AdaBoost (GAB) Haar-cascade classifier and Haar-like features were used to ensure detection accuracy. A skin filter applied prior to detection made the system more robust. The experiments showed that the Haar-cascade classifier performs well in terms of accuracy, while the train-cascade classifier is preferable where detection time matters. To validate the results, we used 1198 positive samples containing nipple objects and 1995 negative images. The detection rates for the Haar-cascade and train-cascade classifiers are 0.9875 and 0.8429, respectively. The detection time is 0.162 seconds for the Haar-cascade classifier and 0.127 seconds for the train-cascade classifier. PMID:25003153
NASA Astrophysics Data System (ADS)
Sasaki, Kenya; Mitani, Yoshihiro; Fujita, Yusuke; Hamamoto, Yoshihiko; Sakaida, Isao
2017-02-01
In this paper, in order to classify liver cirrhosis on region-of-interest (ROI) images from B-mode ultrasound images, we propose the use of higher order local autocorrelation (HLAC) features. In a previous study, we attempted to classify liver cirrhosis using a Gabor filter based approach; however, our preliminary experiments showed that the classification performance of the Gabor feature was poor. To classify liver cirrhosis accurately, we therefore examined the use of HLAC features. The experimental results show the effectiveness of HLAC features compared with the Gabor feature. Furthermore, using a binary image produced by an adaptive thresholding method further improved the classification performance of the HLAC features.
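HLAC features are well defined and easy to sketch: for a displacement set {a₁, …, a_N}, the feature is the sum over pixels of the product of the image at the pixel and at each displaced position. A minimal version with three of the standard masks (the full set for a 3×3 neighborhood and order ≤ 2 has 25; the masks chosen here are just examples):

```python
import numpy as np

def hlac_features(img, masks):
    # Higher-order local autocorrelation: for a displacement set {a_1..a_N},
    # the feature is sum over pixels r of f(r) * f(r+a_1) * ... * f(r+a_N),
    # evaluated on the interior so every shifted index stays in bounds.
    h, w = img.shape
    feats = []
    for offsets in masks:
        prod = img[1:h-1, 1:w-1].astype(float).copy()
        for dy, dx in offsets:
            prod = prod * img[1+dy:h-1+dy, 1+dx:w-1+dx]
        feats.append(prod.sum())
    return np.array(feats)

# order-0 mask plus two order-1 masks (horizontal and diagonal neighbor)
masks = [[], [(0, 1)], [(1, 1)]]
```

On a binarized ROI (as in the adaptive-thresholding variant above), each feature counts co-occurrences of foreground pixels in a particular local configuration.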
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polan, D; Brady, S; Kaufman, R
2016-06-15
Purpose: Develop an automated Random Forest algorithm for tissue segmentation of CT examinations. Methods: Seven materials were classified for segmentation: background, lung/internal gas, fat, muscle, solid organ parenchyma, blood/contrast, and bone using Matlab and the Trainable Weka Segmentation (TWS) plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a pixel radius of 2^n (n = 0–4). Also noise reduction and edge preserving filters, Gaussian, bilateral, Kuwahara, and anisotropic diffusion, were evaluated. The algorithm used 200 trees with 2 features per node. A training data set was established using an anonymized patient’s (male, 20 yr, 72 kg) chest-abdomen-pelvis CT examination. To establish segmentation ground truth, the training data were manually segmented using Eclipse planning software, and an intra-observer reproducibility test was conducted. Six additional patient data sets were segmented based on classifier data generated from the training data. Accuracy of segmentation was determined by calculating the Dice similarity coefficient (DSC) between manual and auto segmented images. Results: The optimized autosegmentation algorithm resulted in 16 features calculated using maximum, mean, variance, and Gaussian blur filters with kernel radii of 1, 2, and 4 pixels, in addition to the original CT number, and Kuwahara filter (linear kernel of 19 pixels). Ground truth had a DSC of 0.94 (range: 0.90–0.99) for adult and 0.92 (range: 0.85–0.99) for pediatric data sets across all seven segmentation classes. The automated algorithm produced segmentation with an average DSC of 0.85 ± 0.04 (range: 0.81–1.00) for the adult patients, and 0.86 ± 0.03 (range: 0.80–0.99) for the pediatric patients.
Conclusion: The TWS Random Forest auto-segmentation algorithm was optimized for the CT environment, and was able to segment seven material classes over a range of body habitus and CT protocol parameters with an average DSC of 0.86 ± 0.04 (range: 0.80–0.99).
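The accuracy metric used throughout the study, the Dice similarity coefficient, is simple to compute per class. A minimal sketch for binary masks:

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient for binary masks: 2|A∩B| / (|A| + |B|).
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

Applied per material class (e.g. the manual mask vs. the Random Forest mask for "bone"), averaging these values over classes and patients gives the reported scores.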
USDA-ARS?s Scientific Manuscript database
An acousto-optic tunable filter-based hyperspectral microscope imaging method has potential for identification of foodborne pathogenic bacteria from microcolony rapidly with a single cell level. We have successfully developed the method to acquire quality hyperspectral microscopic images from variou...
supernovae: Photometric classification of supernovae
NASA Astrophysics Data System (ADS)
Charnock, Tom; Moss, Adam
2017-05-01
Supernovae classifies supernovae using their light curves directly as inputs to a deep recurrent neural network, which learns information from the sequence of observations. Observational time and filter fluxes are used as inputs; since the inputs are agnostic, additional data such as host galaxy information can also be included.
On-Board Cryospheric Change Detection By The Autonomous Sciencecraft Experiment
NASA Astrophysics Data System (ADS)
Doggett, T.; Greeley, R.; Castano, R.; Cichy, B.; Chien, S.; Davies, A.; Baker, V.; Dohm, J.; Ip, F.
2004-12-01
The Autonomous Sciencecraft Experiment (ASE) is operating on-board Earth Observing-1 (EO-1) with the Hyperion hyper-spectral visible/near-IR spectrometer. ASE science activities include autonomous monitoring of cryospheric changes, triggering the collection of additional data when change is detected and filtering of null data such as no change or cloud cover. This would have application to the study of cryospheres on Earth, Mars and the icy moons of the outer solar system. A cryosphere classification algorithm, in combination with a previously developed cloud algorithm [1], has been tested on-board ten times from March through August 2004. The cloud algorithm correctly screened out three scenes with total cloud cover, while the cryosphere algorithm detected alpine snow cover in the Rocky Mountains, lake thaw near Madison, Wisconsin, and the presence and subsequent break-up of sea ice in the Barrow Strait of the Canadian Arctic. Hyperion has 220 bands ranging from 400 to 2400 nm, with a spatial resolution of 30 m/pixel and a spectral resolution of 10 nm. Limited on-board memory and processing speed imposed the constraint that only partially processed Level 0.5 data could be used, with dark image subtraction and gain factors applied but not full radiometric calibration. In addition, a maximum of 12 bands could be used for any stacked sequence of algorithms run for a scene on-board. The cryosphere algorithm was developed to classify snow, water, ice and land, using six Hyperion bands at 427, 559, 661, 864, 1245 and 1649 nm. Of these, only the 427 nm band overlaps with those used by the cloud algorithm. The cloud algorithm was developed with Level 1 data, which introduces complications because of the incomplete calibration of the SWIR in Level 0.5 data, including a high level of noise in the 1377 nm band used by the cloud algorithm.
Development of a more robust cryosphere classifier, including cloud classification specifically adapted to Level 0.5, is in progress for deployment on EO-1 as part of continued ASE operations. [1] Griffin, M.K. et al., Cloud Cover Detection Algorithm For EO-1 Hyperion Imagery, SPIE 17, 2003.
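A band-threshold classifier of the kind described can be sketched per pixel. The rules below are illustrative assumptions in the spirit of the six-band algorithm, using an NDSI-like green-vs-SWIR index; the thresholds are not the flight values:

```python
def classify_pixel(r559, r864, r1649):
    # Toy per-pixel rules in the spirit of the six-band classifier; the
    # thresholds here are illustrative assumptions, not the flight values.
    ndsi = (r559 - r1649) / (r559 + r1649 + 1e-9)  # snow/ice index (green vs SWIR)
    if ndsi > 0.4 and r864 > 0.11:
        return "snow"
    if r864 < 0.05:
        return "water"
    return "land"
```

Rules of this form are cheap enough to run per pixel within the on-board memory and processing constraints described above.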
Optimization of a Multi-Stage ATR System for Small Target Identification
NASA Technical Reports Server (NTRS)
Lin, Tsung-Han; Lu, Thomas; Braun, Henry; Edens, Western; Zhang, Yuhan; Chao, Tien- Hsin; Assad, Christopher; Huntsberger, Terrance
2010-01-01
An Automated Target Recognition (ATR) system was developed to locate and target small objects in images and videos. The data are preprocessed and sent to a grayscale optical correlator (GOC) filter to identify possible regions of interest (ROIs). Next, features are extracted from the ROIs based on Principal Component Analysis (PCA) and sent to a neural network (NN) for classification. The NN classifier analyzes the features and indicates whether each ROI contains the desired target. The ATR system was found useful for identifying small boats in the open sea. However, due to "noisy background," such as weather conditions, background buildings, or water wakes, some false targets are misclassified. Feedforward backpropagation and radial basis neural networks are optimized for generalization of representative features to reduce the false-alarm rate. The neural networks are compared for their performance in classification accuracy, classification time, and training time.
Local feature saliency classifier for real-time intrusion monitoring
NASA Astrophysics Data System (ADS)
Buch, Norbert; Velastin, Sergio A.
2014-07-01
We propose a texture saliency classifier to detect people in a video frame by identifying salient texture regions. The image is classified into foreground and background in real time. No temporal image information is used during the classification. The system is used for the task of detecting people entering a sterile zone, which is a common scenario for visual surveillance. Testing is performed on the Imagery Library for Intelligent Detection Systems sterile zone benchmark dataset of the United Kingdom's Home Office. The basic classifier is extended by fusing its output with simple motion information, which significantly outperforms standard motion tracking. A lower detection time can be achieved by combining texture classification with Kalman filtering. The fusion approach running at 10 fps gives the highest result of F1=0.92 for the 24-h test dataset. The paper concludes with a detailed analysis of the computation time required for the different parts of the algorithm.
Broadband diffractive lens or imaging element
Ceglio, Natale M.; Hawryluk, Andrew M.; London, Richard A.; Seppala, Lynn G.
1993-01-01
A broadband diffractive lens or imaging element produces a sharp focus and/or a high resolution image with broad bandwidth illuminating radiation. The diffractive lens is sectored or segmented into regions, each of which focuses or images a distinct narrowband of radiation but all of which have a common focal length. Alternatively, a serial stack of minus filters, each with a diffraction pattern which focuses or images a distinct narrowband of radiation but all of which have a common focal length, is used. The two approaches can be combined. Multifocal broadband diffractive elements can also be formed. Thin film embodiments are described.
Broadband diffractive lens or imaging element
Ceglio, N.M.; Hawryluk, A.M.; London, R.A.; Seppala, L.G.
1993-10-26
A broadband diffractive lens or imaging element produces a sharp focus and/or a high resolution image with broad bandwidth illuminating radiation. The diffractive lens is sectored or segmented into regions, each of which focuses or images a distinct narrowband of radiation but all of which have a common focal length. Alternatively, a serial stack of minus filters, each with a diffraction pattern which focuses or images a distinct narrowband of radiation but all of which have a common focal length, is used. The two approaches can be combined. Multifocal broadband diffractive elements can also be formed. Thin film embodiments are described. 21 figures.
16. Contextual view of the 100B Area, looking toward the ...
16. Contextual view of the 100-B Area, looking toward the northeast in December 1944. The River Pump House is in the distance on the river (left of center); the 184-B Power House stands with its two tall stacks, its Coal Storage Pond (to its left), and its 188-B Ash Disposal Basin (towards the river). Also seen are the 182-B Reservoir (foreground on the left), the 183-B Filter Plant (foreground right of center), and the 107-B Retention Basin (upper right near the river). P-7835 - B Reactor, Richland, Benton County, WA
Bauer, Klaus; Ryberg, Trond; Fuis, Gary S.; Lüth, Stefan
2013-01-01
Near-vertical faults can be imaged using reflected refractions identified in controlled-source seismic data. Often these phases are observed on only a few neighboring shot or receiver gathers, resulting in a low-fold data set. Imaging can be carried out with Kirchhoff prestack depth migration, in which migration noise is suppressed by constructive stacking of large amounts of multifold data. Fresnel volume migration can be used for low-fold data without severe migration noise, as the smearing along isochrones is limited to the first Fresnel zone around the reflection point. We developed a modified Fresnel volume migration technique to enhance imaging of steep faults and to suppress noise and undesired coherent phases. The modifications include target-oriented filters to separate reflected refractions from steep-dipping faults and reflections with hyperbolic moveout. Undesired phases like multiple reflections, mode conversions, direct P and S waves, and surface waves are suppressed by these filters. As an alternative approach, we developed a new prestack line-drawing migration method, which can be considered a proxy to an infinite-frequency approximation of the Fresnel volume migration. The line-drawing migration does not consider waveform information but requires significantly shorter computational time. Target-oriented filters were extended by dip filters in the line-drawing migration method. The migration methods were tested with synthetic data and applied to real data from the Waltham Canyon fault, California. The two techniques are best applied in combination, to design filters and to generate complementary images of steep faults.
NASA Astrophysics Data System (ADS)
Swain, Sushree Diptimayee; Ray, Pravat Kumar; Mohanty, K. B.
2016-06-01
This paper presents the design of a shunt passive power filter (PPF) in a hybrid series active power filter (HSAPF) that employs a novel analytic methodology superior to FFT analysis. The approach consists of the estimation, detection and classification of signals, and is applied to estimate, detect and classify power quality (PQ) disturbances such as harmonics. The work combines three methods: harmonic detection through the wavelet transform, harmonic estimation by a Kalman filter algorithm, and harmonic classification by a decision tree. Among the different mother wavelets available for the wavelet transform, db8 is selected because of its strength on transient response and its low oscillation in the frequency domain. In the harmonic compensation process, the detected harmonic is compensated through the hybrid series active power filter based on the instantaneous reactive power theory (IRPT). The efficacy of the proposed method is verified in the MATLAB/Simulink environment as well as with an experimental setup. The obtained results confirm the superiority of the proposed methodology over FFT analysis. The newly proposed PPF makes the conventional HSAPF more robust and stable.
Filtering big data from social media--Building an early warning system for adverse drug reactions.
Yang, Ming; Kiang, Melody; Shang, Wei
2015-04-01
Adverse drug reactions (ADRs) are believed to be a leading cause of death in the world. Pharmacovigilance systems are aimed at the early detection of ADRs. With the popularity of social media, Web forums and discussion boards have become important venues for consumers to share their drug use experience and, as a result, may provide useful information on drugs and their adverse reactions. In this study, we propose an automated mechanism for filtering ADR-related posts using text classification methods. In real-life settings, ADR-related messages are highly distributed in social media, while non-ADR-related messages are unspecific and topically diverse. It is expensive to manually label a large amount of ADR-related messages (positive examples) and non-ADR-related messages (negative examples) to train classification systems. To mitigate this challenge, we examine the use of a partially supervised learning classification method to automate the process. We propose a novel pharmacovigilance system leveraging a Latent Dirichlet Allocation modeling module and a partially supervised classification approach. We selected drugs with more than 500 threads of discussion, and collected all the original posts and comments on these drugs using an automatic Web spidering program as the text corpus. Various classifiers were trained by varying the number of positive examples and the number of topics. The trained classifiers were applied to 3000 posts published over 60 days. Top-ranked posts from each classifier were pooled, and the resulting set of 300 posts was reviewed by a domain expert to evaluate the classifiers. Compared to alternative approaches using supervised learning methods and three general-purpose partially supervised learning methods, our approach performs significantly better in terms of precision, recall, and the F measure (the harmonic mean of precision and recall), based on a computational experiment using online discussion threads from MedHelp.
Our design provides satisfactory performance in identifying ADR related posts for post-marketing drug surveillance. The overall design of our system also points out a potentially fruitful direction for building other early warning systems that need to filter big data from social media networks. Copyright © 2015 Elsevier Inc. All rights reserved.
Marucci-Wellman, Helen R; Corns, Helen L; Lehto, Mark R
2017-01-01
Injury narratives are now available in real time and include useful information for injury surveillance and prevention. However, manual classification of the cause or events leading to injury found in large batches of narratives, such as workers' compensation claims databases, can be prohibitive. In this study we compare the utility of four machine learning algorithms (Naïve Bayes with single-word and bi-gram models, Support Vector Machine, and Logistic Regression) for classifying narratives into Bureau of Labor Statistics Occupational Injury and Illness event-leading-to-injury classifications for a large workers' compensation database. These algorithms are known to do well classifying narrative text and are fairly easy to implement with off-the-shelf software packages such as Python. We propose human-machine learning ensemble approaches which maximize the power and accuracy of the algorithms for machine-assigned codes and allow for strategic filtering of rare, emerging or ambiguous narratives for manual review. We compare human-machine approaches based on filtering on the prediction strength of the classifier vs. agreement between algorithms. Regularized Logistic Regression (LR) was the best performing algorithm alone. Using this algorithm and filtering out the bottom 30% of predictions for manual review resulted in high accuracy (overall sensitivity/positive predictive value of 0.89) of the final machine-human coded dataset. The best pairings of algorithms included Naïve Bayes with Support Vector Machine, whereby the triple ensemble NB-SW = NB-BIGRAM = SVM (filtering on agreement among the three) had very high performance (0.93 overall sensitivity/positive predictive value, i.e. high accuracy) across both large and small categories, leaving 41% of the narratives for manual review. Integrating LR into this ensemble mix improved performance only slightly.
For large administrative datasets we propose incorporating human-machine pairing methods such as those used here, utilizing readily available off-the-shelf machine learning techniques, so that only a fraction of narratives require manual review. Human-machine ensemble methods are likely to improve performance over fully manual coding. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Zhang; Peng, Zhenming; Peng, Lingbing; Liao, Dongyi; He, Xin
2011-11-01
With the rapid growth of the Multimedia Messaging Service (MMS), filtering Multimedia Message (MM) spam effectively in real time has become an urgent task. Because most MMs contain images or videos, this paper presents an image-retrieval-based method for filtering MM spam. The detection method combines skin-color detection, texture detection, and face detection, and the classifier for this imbalanced problem is a very fast multi-class scheme combining a Support Vector Machine (SVM) with a unilateral binary decision tree. Experiments on three test sets show that the proposed method is effective, with an interception rate of up to 60% and an average detection time of less than 1 second per image.
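The paper does not give its skin-color detector, but a common stand-in is a fixed chrominance rule in YCbCr space (the classic Cb/Cr box of Chai and Ngan); the sketch below uses that rule purely for illustration, with invented test pixels.

```python
import numpy as np

def skin_mask(rgb):
    """Classify pixels as skin-toned using a fixed YCbCr chrominance box.

    rgb: uint8 array of shape (h, w, 3). This is a generic heuristic,
    not the detector used in the paper."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # ITU-R BT.601 RGB -> Cb, Cr conversion
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)

img = np.zeros((1, 2, 3), dtype=np.uint8)
img[0, 0] = [220, 170, 140]   # skin-like tone
img[0, 1] = [0, 200, 0]       # saturated green
print(skin_mask(img).tolist())  # [[True, False]]
```

A real pipeline would feed the skin-pixel ratio (with texture and face cues) into the SVM stage.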
The influence of passband limitation on the waveform of extracellular action potential.
Mizuhiki, Takashi; Inaba, Kiyonori; Setogawa, Tsuyoshi; Toda, Koji; Ozaki, Shigeru; Shidara, Muneteka
2012-03-01
The duration of the extracellular action potential (EAP) in single-neuron recording has often been used as a clue to infer biochemical, physiological or functional properties of the recorded neurons, e.g. their neurochemical type. However, when recording neuronal activity, a high-pass filter is routinely used to achieve a higher signal-to-noise ratio. Signal processing theory predicts that passband limitation stretches the waveform of a discrete brief impulse. To examine whether the duration of the filtered EAP is a reliable measure, we investigated the influence of the high-pass filter both by simulation and with unfiltered unit recording data from monkey dorsal raphe. Consistent with the findings of a recent theoretical study, the unfiltered EAPs displayed a sharp wave without following bumps. The duration of the unfiltered EAP was not correlated with that of the filtered EAP; thus the duration of the original EAP cannot be estimated from the filtered EAP. EAP durations measured for classifying neurons recorded under passband limitation therefore need to be reexamined in the related studies. Copyright © 2011 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
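The waveform-stretching effect is easy to reproduce numerically. The sketch below is illustrative, not the paper's simulation: a monophasic 0.2 ms "spike" is passed through a 300 Hz high-pass Butterworth filter (a typical spike-band low cut; the sampling rate and cutoff are assumed values), and the filter adds the trailing undershoot, the "following bump" that unfiltered EAPs lack.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 30000.0                                    # assumed sampling rate, Hz
t = np.arange(300) / fs
x = np.exp(-0.5 * ((t - 0.005) / 0.0002) ** 2)  # monophasic 0.2 ms pulse

# 2nd-order high-pass at 300 Hz, as commonly applied in spike recording
b, a = butter(2, 300.0 / (fs / 2), btype="highpass")
y = lfilter(b, a, x)

undershoot = float(y.min())
print(undershoot < 0)   # True: filtering appends a negative bump
```

Because a high-pass filter has zero gain at DC, the filtered waveform of any positive pulse must dip below zero, lengthening the apparent EAP.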
Annunziata, Roberto; Trucco, Emanuele
2016-11-01
Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation, as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the number of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore it is different from recent methods improving the optimisation itself. Our warm-start strategy is based on carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures, which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method significantly reduces the time taken to learn convolutional filter banks (by up to 82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation, when used as input to a random forest classifier.
B-Spline Filtering for Automatic Detection of Calcification Lesions in Mammograms
NASA Astrophysics Data System (ADS)
Bueno, G.; Sánchez, S.; Ruiz, M.
2006-10-01
Breast cancer continues to be an important health problem among women. Early detection is the only way to improve breast cancer prognosis and significantly reduce mortality. Using CAD systems, radiologists can improve their ability to detect and classify lesions in mammograms. In this study, the usefulness of B-spline filtering based on a gradient scheme, compared to wavelet and adaptive filtering, has been investigated for calcification lesion detection as part of a CAD system. The technique has been applied to tissues of different densities. A qualitative validation shows the success of the method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erlangga, Mokhammad Puput
Separation between signal and noise, incoherent or coherent, is important in seismic data processing. Even after processing, coherent noise may remain mixed with the primary signal; multiple reflections are one kind of coherent noise. In this research, we processed seismic data to attenuate multiple reflections in both synthetic and real seismic data from Mentawai. There are several methods to attenuate multiple reflections; one of them is the Radon filter method, which discriminates between primary and multiple reflections in the τ-p domain based on the move-out difference between them. However, in cases where the move-out difference is too small, the Radon filter is not enough to attenuate the multiple reflections, and it also produces artifacts in the gathers. In addition to the Radon filter, we use the Wave Equation Multiple Elimination (WEMR) method to attenuate long-period multiple reflections. The WEMR method attenuates long-period multiples based on wave-equation inversion: from the inversion of the wave equation and the magnitude of the seismic wave amplitude observed at the free surface, we obtain the water-bottom reflectivity, which is used to eliminate the multiple reflections. Because the WEMR method does not depend on the move-out difference, it can be applied to seismic data with small move-out differences such as the Mentawai data, where the small move-out difference is caused by the restricted far offset of only 705 m. We compared the multiple-free stacked data after Radon filter and WEMR processing. The conclusion is that the WEMR method attenuates long-period multiple reflections better than the Radon filter method on the real (Mentawai) seismic data.
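The τ-p discrimination at the heart of the Radon method can be illustrated with a naive slant stack. The sketch below is a toy, not a production Radon demultiple: a synthetic gather contains one linear event with slowness p = 0.0004 s/m, and stacking along t = τ + p·x focuses all its energy at that single (τ, p) point, which is where a mute would separate primaries from multiples.

```python
import numpy as np

def slant_stack(data, dt, offsets, slownesses):
    """Naive linear tau-p transform: stack samples along t = tau + p*x."""
    nt = data.shape[0]
    taup = np.zeros((nt, len(slownesses)))
    for j, p in enumerate(slownesses):
        for k, x in enumerate(offsets):
            shift = int(round(p * x / dt))   # move-out in samples
            if 0 <= shift < nt:
                taup[: nt - shift, j] += data[shift:, k]
    return taup

dt = 0.004                          # 4 ms sampling
offsets = 50.0 * np.arange(10)      # 10 traces, 50 m spacing
data = np.zeros((100, 10))
for k in range(10):                 # event: t = 0.1 s + 0.0004 * x
    data[25 + 5 * k, k] = 1.0

taup = slant_stack(data, dt, offsets, [0.0, 0.0002, 0.0004, 0.0006])
peak = np.unravel_index(np.argmax(taup), taup.shape)
print(peak)   # (25, 2): tau = 0.1 s, p = 0.0004 s/m
```

When the move-out difference between primary and multiple is tiny, their τ-p focal points nearly coincide, which is exactly the failure mode the abstract describes.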
Stacked Multilayer Self-Organizing Map for Background Modeling.
Zhao, Zhenjie; Zhang, Xuebo; Fang, Yongchun
2015-09-01
In this paper, a new background modeling method called the stacked multilayer self-organizing map background model (SMSOM-BM) is proposed, which offers several merits such as strong representational ability for complex scenarios and ease of use. In order to enhance the representational ability of the background model and make the parameters learn automatically, the recently developed idea of representation learning (or deep learning) is elegantly employed to extend the existing single-layer self-organizing map background model to a multilayer one (namely, the proposed SMSOM-BM). As a consequence, the SMSOM-BM gains strong representational ability to learn background models of challenging scenarios and automatic determination of most network parameters. More specifically, every pixel is modeled by a SMSOM, and spatial consistency is considered at each layer. By introducing a novel over-layer filtering process, we can train the background model layer by layer in an efficient manner. Furthermore, for real-time performance, we have implemented the proposed method on the NVIDIA CUDA platform. Comparative experimental results show the superior performance of the proposed approach.
Challenges in discriminating profanity from hate speech
NASA Astrophysics Data System (ADS)
Malmasi, Shervin; Zampieri, Marcos
2018-03-01
In this study, we approach the problem of distinguishing general profanity from hate speech in social media, something which has not been widely considered. Using a new dataset annotated specifically for this task, we employ supervised classification along with a set of features that includes n-grams, skip-grams and clustering-based word representations. We apply approaches based on single classifiers as well as more advanced ensemble classifiers and stacked generalisation, achieving a best result of ? accuracy for this 3-class classification task. Analysis of the results reveals that discriminating hate speech and profanity is not a simple task, which may require features that capture a deeper understanding of the text than is possible with surface n-grams. The variability of gold labels in the annotated data, due to differences in the subjective adjudications of the annotators, is also an issue. Other directions for future work are discussed.
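A surface n-gram baseline for a 3-class task of this kind can be sketched in a few lines. The texts, labels and feature settings below are invented placeholders, not the paper's dataset or feature set; the point is only the shape of the pipeline (character n-grams into a linear classifier).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy 3-class corpus; class names stand in for hate / profanity / neither.
texts = ["you are awful", "I love this", "totally fine day",
         "awful awful thing", "love love love", "fine by me"]
labels = ["neg", "pos", "neu", "neg", "pos", "neu"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # surface n-grams
    LinearSVC(),
)
clf.fit(texts, labels)
preds = clf.predict(["awful stuff", "love it"])
print(list(preds))
```

As the abstract notes, such surface features plateau quickly on this task; ensembling and stacked generalisation operate on top of several such base models.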
An Investigation into the Use of Spatially-Filtered Fourier Transforms to Classify Mammary Lesions.
difference in Fourier space between lesioned breast tissue which would enable accurate computer classification of benign and malignant lesions. Low... separate benign and malignant breast tissue. However, no success was achieved when using the two-dimensional Fourier transform and power spectrum analysis. (Author)
van Dongen, M J; Mooren, M M; Willems, E F; van der Marel, G A; van Boom, J H; Wijmenga, S S; Hilbers, C W
1997-01-01
The three-dimensional structure of the hairpin formed by d(ATCCTA-GTTA-TAGGAT) has been determined by means of two-dimensional NMR studies, distance geometry and molecular dynamics calculations. The first and the last residues of the tetraloop of this hairpin form a sheared G-A base pair on top of the six Watson-Crick base pairs in the stem. The glycosidic torsion angles of the guanine and adenine residues in the G-A base pair reside in the anti and high-anti (approximately -60 degrees) domains, respectively. Several dihedral angles in the loop adopt non-standard values to accommodate this base pair. The first and second residues in the loop are stacked in a more or less normal helical fashion; the fourth loop residue also stacks upon the stem, while the third residue is directed away from the loop region. The loop structure can be classified as a so-called type-I loop, in which the bases at the 5'-end of the loop stack in a continuous fashion. In this situation, loop stability is unlikely to depend heavily on the nature of the unpaired bases in the loop. Moreover, the present study indicates that the influence of the polarity of a closing A.T pair is much less significant than that of a closing C.G base pair. PMID:9092659
MScanner: a classifier for retrieving Medline citations
Poulter, Graham L; Rubin, Daniel L; Altman, Russ B; Seoighe, Cathal
2008-01-01
Background Keyword searching through PubMed and other systems is the standard means of retrieving information from Medline. However, ad-hoc retrieval systems do not meet all of the needs of databases that curate information from literature, or of text miners developing a corpus on a topic that has many terms indicative of relevance. Several databases have developed supervised learning methods that operate on a filtered subset of Medline, to classify Medline records so that fewer articles have to be manually reviewed for relevance. A few studies have considered generalisation of Medline classification to operate on the entire Medline database in a non-domain-specific manner, but existing applications lack speed, available implementations, or a means to measure performance in new domains. Results MScanner is an implementation of a Bayesian classifier that provides a simple web interface for submitting a corpus of relevant training examples in the form of PubMed IDs and returning results ranked by decreasing probability of relevance. For maximum speed it uses the Medical Subject Headings (MeSH) and journal of publication as a concise document representation, and takes roughly 90 seconds to return results against the 16 million records in Medline. The web interface provides interactive exploration of the results, and cross-validated performance evaluation on the relevant input against a random subset of Medline. We describe the classifier implementation, cross-validate it on three domain-specific topics, and compare its performance to that of an expert PubMed query for a complex topic. In cross validation on the three sample topics against 100,000 random articles, the classifier achieved excellent separation of relevant and irrelevant article score distributions, ROC areas between 0.97 and 0.99, and averaged precision between 0.69 and 0.92.
Conclusion MScanner is an effective non-domain-specific classifier that operates on the entire Medline database, and is suited to retrieving topics for which many features may indicate relevance. Its web interface simplifies the task of classifying Medline citations, compared to building a pre-filter and classifier specific to the topic. The data sets and open source code used to obtain the results in this paper are available on-line and as supplementary material, and the web interface may be accessed at . PMID:18284683
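The core ranking mechanism MScanner describes can be sketched as follows. This is a hedged illustration, not MScanner's code: documents are reduced to sparse binary indicator vectors (here standing in for MeSH headings plus journal), a Bayesian classifier is trained on relevant versus background examples, and unseen records are ranked by decreasing probability of relevance. All data below are synthetic.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)
n_feat = 50   # pretend feature space of MeSH terms + journals

# Relevant corpus: five "topical" headings always present, plus noise.
relevant = (rng.random((30, n_feat)) < 0.05).astype(int)
relevant[:, :5] = 1
# Background: random sample standing in for the rest of Medline.
background = (rng.random((300, n_feat)) < 0.05).astype(int)

X = np.vstack([relevant, background])
y = np.array([1] * 30 + [0] * 300)
clf = BernoulliNB().fit(X, y)

# Two unseen records: one topical-looking, one generic.
query = np.zeros((2, n_feat), dtype=int)
query[0, :5] = 1
scores = clf.predict_proba(query)[:, 1]
order = np.argsort(-scores)            # rank by decreasing relevance
print(order.tolist())                  # [0, 1]
```

The concise binary representation is what makes scoring the whole database tractable in seconds.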
NASA Astrophysics Data System (ADS)
Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.
2009-08-01
Motivated by the non-linear interpolation and generalization abilities of the hybrid optical neural network filter between the reference and non-reference images of the true-class object, we designed the modified hybrid optical neural network filter. We applied an optical mask to the hybrid optical neural network filter's input. The mask was built with the constant weight connections of a randomly chosen image included in the training set. The resulting design of the modified hybrid optical neural network filter is optimized to perform best in cluttered scenes of the true-class object. Due to the shift-invariance properties inherited from its correlator unit, the filter can accommodate multiple objects of the same class to be detected within an input cluttered image. Additionally, the architecture of the neural network unit of the general hybrid optical neural network filter allows the recognition of multiple objects of different classes within the input cluttered image by modifying the output layer of the unit. We test the modified hybrid optical neural network filter for the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The filter is shown to exhibit, with a single pass over the input data, simultaneous out-of-plane rotation and shift invariance and good clutter tolerance. It is able to successfully detect and correctly classify the true-class objects within background clutter for which there has been no previous training.
Data quality enhancement and knowledge discovery from relevant signals in acoustic emission
NASA Astrophysics Data System (ADS)
Mejia, Felipe; Shyu, Mei-Ling; Nanni, Antonio
2015-10-01
The increasing popularity of structural health monitoring has brought with it a growing need for automated data management and data analysis tools. Of great importance are filters that can systematically detect unwanted signals in acoustic emission datasets. This study presents a semi-supervised data mining scheme that detects data belonging to unfamiliar distributions. This type of outlier detection scheme is useful for detecting the presence of new acoustic emission sources, given a training dataset of unwanted signals. In addition to classifying new observations (herein referred to as "outliers") within a dataset, the scheme generates a decision tree that classifies sub-clusters within the outlier context set. The obtained tree can be interpreted as a series of characterization rules for newly observed data, which can potentially describe the basic structure of different modes within the outlier distribution. The data mining scheme is first validated on a synthetic dataset, and an attempt is made to confirm the algorithm's ability to discriminate outlier acoustic emission sources from a controlled pencil-lead-break experiment. Finally, the scheme is applied to data from two fatigue crack-growth steel specimens, where it is shown that the extracted rules can adequately describe crack-growth-related acoustic emission sources while filtering out background "noise." Results show promising performance in filter generation, thereby allowing analysts to extract, characterize, and focus only on meaningful signals.
NASA Astrophysics Data System (ADS)
Moody, D.; Brumby, S. P.; Chartrand, R.; Franco, E.; Keisler, R.; Kelton, T.; Kontgis, C.; Mathis, M.; Raleigh, D.; Rudelis, X.; Skillman, S.; Warren, M. S.; Longbotham, N.
2016-12-01
The recent computing performance revolution has driven improvements in sensor, communication, and storage technology. Historical, multi-decadal remote sensing datasets at the petabyte scale are now available in commercial clouds, with new satellite constellations generating petabytes per year of high-resolution imagery with daily global coverage. Cloud computing and storage, combined with recent advances in machine learning and open software, are enabling understanding of the world at an unprecedented scale and detail. We have assembled all available satellite imagery from the USGS Landsat, NASA MODIS, and ESA Sentinel programs, as well as commercial PlanetScope and RapidEye imagery, and have analyzed over 2.8 quadrillion multispectral pixels. We leveraged the commercial cloud to generate a tiled, spatio-temporal mosaic of the Earth for fast iteration and development of new algorithms combining analysis techniques from remote sensing, machine learning, and scalable compute infrastructure. Our data platform enables processing at petabyte-per-day rates using multi-source data to produce calibrated, georeferenced imagery stacks at desired points in time and space that can be used for pixel-level or global-scale analysis. We demonstrate our data platform capability by using the European Space Agency's (ESA) published 2006 and 2009 GlobCover 20+ category label maps to train and test a Land Cover Land Use (LCLU) classifier, and generate current self-consistent LCLU maps in Brazil. We train a standard classifier on 2006 GlobCover categories using temporal imagery stacks, and we validate our results on co-registered 2009 GlobCover LCLU maps and 2009 imagery. We then extend the derived LCLU model to current imagery stacks to generate an updated, in-season label map. Changes in LCLU labels can now be seamlessly monitored for a given location across the years in order to track, for example, cropland expansion, forest growth, and urban developments.
An example of change monitoring is illustrated in the included figure showing rainfed cropland change in the Mato Grosso region of Brazil between 2006 and 2009.
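The label-map training loop described above reduces to per-pixel supervised classification. The sketch below is a hedged stand-in, not the platform's code: each pixel's multi-date spectral stack is a feature vector, a co-registered GlobCover-style label map supplies the class, and the fitted model predicts an in-season label map. All imagery here is synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
h, w, bands = 20, 20, 8                 # e.g. 4 spectral bands x 2 dates
stack = rng.random((h, w, bands))       # synthetic imagery stack
# Synthetic "GlobCover" label map: a simple rule on band 0.
labels = (stack[..., 0] > 0.5).astype(int)

X = stack.reshape(-1, bands)            # one row per pixel
y = labels.ravel()
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

pred_map = clf.predict(X).reshape(h, w)  # predicted label map
acc = float((pred_map == labels).mean())
print(acc)
```

In production, training pixels come from the 2006 label map, validation from the 2009 one, and the fitted model is pushed forward onto current imagery stacks.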
Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area
NASA Astrophysics Data System (ADS)
Min, Li; Xin, Yang; Liyang, Xiong
2016-06-01
The shoulder line is a significant terrain line in the hilly areas of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because the point cloud vegetation removal methods for P-N terrains differ, there is an imperative need for shoulder line extraction. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) ground points are selected using a grid filter in order to remove most noisy points. (ii) Based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the Natural Breaks classification method. (iii) The common boundary between the two slope classes is extracted as the shoulder line candidate. (iv) The filter grid size is adjusted and steps i-iii are repeated until the shoulder line candidate matches its real location. (v) The shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County, Shaanxi Province, China. A total of 600 million points were acquired in the 0.23 km2 test area using a Riegl VZ400 3D laser scanner in August 2014. Due to limited computing capacity, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for filter grid size optimization. The experimental results show that the optimal filter grid size varies across sample areas, and a power-function relation exists between filter grid size and point density. The optimal grid size was determined by the above relation, and shoulder lines of the 60 blocks were then extracted. Compared with manual interpretation results, the accuracy of the whole result reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.
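Steps (ii)-(iii) of the workflow can be sketched on a toy DEM. This is a minimal illustration under simplifying assumptions: a mean-slope threshold stands in for the Natural Breaks split, and the shoulder line candidate is taken as the steep cells adjacent to gentle cells.

```python
import numpy as np

# Toy DEM: a flat plateau (10 columns) meeting a uniform slope (10 columns).
dem = np.tile(np.concatenate([np.zeros(10), np.arange(10) * 5.0]), (20, 1))

gy, gx = np.gradient(dem)
slope = np.hypot(gx, gy)

# Two-class slope map (stand-in for Natural Breaks on a real slope histogram)
steep = slope > slope.mean()

# Boundary cells: steep cells with at least one gentle 4-neighbour
b = np.zeros_like(steep)
b[:, 1:]  |= steep[:, 1:]  & ~steep[:, :-1]
b[:, :-1] |= steep[:, :-1] & ~steep[:, 1:]
b[1:, :]  |= steep[1:, :]  & ~steep[:-1, :]
b[:-1, :] |= steep[:-1, :] & ~steep[1:, :]

shoulder = np.argwhere(b)   # shoulder line candidate cells
print(len(shoulder))        # 20: one boundary cell per row, at the break
```

On real data, step (iv) re-runs this with different filter grid sizes until the candidate matches the field-verified location.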
Astronomy with the color blind
NASA Astrophysics Data System (ADS)
Smith, Donald A.; Melrose, Justyn
2014-12-01
The standard method to create dramatic color images in astrophotography is to record multiple black and white images, each with a different color filter in the optical path, and then tint each frame with a color appropriate to the corresponding filter. When combined, the resulting image conveys information about the sources of emission in the field, although one should be cautious in assuming that such an image shows what the subject would "really look like" if a person could see it without the aid of a telescope. The details of how the eye processes light have a significant impact on how such images should be understood, and the step from perception to interpretation is even more problematic when the viewer is color blind. We report here on an approach to manipulating stacked tricolor images that, while abandoning attempts to portray the color distribution "realistically," does enable those suffering from deuteranomaly (the most common form of color blindness) to perceive color distinctions they would otherwise not be able to see.
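One simple way to realize this idea (illustrative only; the article's actual mapping is not reproduced here) is to re-encode the red-green difference, which deuteranomalous viewers distinguish poorly, into the blue channel, which they distinguish well, deliberately abandoning "realistic" color.

```python
import numpy as np

def reencode_rg(img):
    """img: float RGB array in [0, 1], shape (h, w, 3).

    Writes the normalised R-G difference into the blue channel so that
    red/green distinctions survive for deuteranomalous viewers."""
    out = img.copy()
    rg = (img[..., 0] - img[..., 1] + 1.0) / 2.0   # R-G mapped to [0, 1]
    out[..., 2] = rg
    return out

frame = np.zeros((2, 2, 3))
frame[0, 0] = [1.0, 0.0, 0.0]   # pure red pixel
frame[0, 1] = [0.0, 1.0, 0.0]   # pure green pixel
out = reencode_rg(frame)
print(out[0, 0, 2], out[0, 1, 2])   # 1.0 0.0: red and green now differ in blue
```

Applied to a stacked tricolor frame, the transform trades color fidelity for perceptual separability.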
Climatic variability in Princess Elizabeth Land (East Antarctica) over the last 350 years
NASA Astrophysics Data System (ADS)
Ekaykin, Alexey A.; Vladimirova, Diana O.; Lipenkov, Vladimir Y.; Masson-Delmotte, Valérie
2017-01-01
We use isotopic composition (δD) data from six sites in Princess Elizabeth Land (PEL) in order to reconstruct air temperature variability in this sector of East Antarctica over the last 350 years. First, we use the present-day instrumental mean annual surface air temperature data to demonstrate that the studied region (between Russia's Progress, Vostok and Mirny research stations) is characterized by uniform temperature variability. We thus construct a stacked record of the temperature anomaly for the whole sector for the period 1958-2015. A comparison of this series with the Southern Hemisphere climatic indices shows that the short-term inter-annual temperature variability is primarily governed by the Antarctic Oscillation (AAO) and Interdecadal Pacific Oscillation (IPO) modes of atmospheric variability. However, the low-frequency temperature variability (with period > 27 years) is mainly related to anomalies of the Indian Ocean Dipole (IOD) mode. We then construct a stacked record of δD for the PEL for the period 1654-2009 (the PEL2016 stacked record) from individual normalized and filtered isotopic records obtained at six different sites. We use a linear regression of this record and the stacked PEL temperature record (with an apparent slope of 9 ± 5.4 ‰ °C-1) to convert PEL2016 into a temperature scale. Analysis of PEL2016 shows a 1 ± 0.6 °C warming in this region over the last 3 centuries, with a particularly cold period from the mid-18th to the mid-19th century. A peak of cooling occurred in the 1840s, a feature previously observed in other Antarctic records. We reveal that PEL2016 correlates with a low-frequency component of the IOD and suggest that the IOD mode influences the Antarctic climate by modulating the activity of cyclones that bring heat and moisture to Antarctica. We also compare PEL2016 with other Antarctic stacked isotopic records. This work is a contribution to the PAGES (Past Global Changes) and IPICS (International Partnerships in Ice Core Sciences) Antarctica 2k projects.
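The stack-then-calibrate procedure can be sketched numerically. This is a hedged toy, not the PEL2016 pipeline: six synthetic δD series share a common temperature signal plus site noise, their anomalies are averaged into a stack, and the stack is converted to temperature with the 9 ‰ °C-1 slope quoted above. Stacking raises the signal-to-noise ratio relative to any single record.

```python
import numpy as np

rng = np.random.default_rng(5)
temp = rng.normal(size=100)   # "true" temperature anomaly, degC

# Six site records: dD = slope * T + site noise (all values synthetic)
slope = 9.0                   # permil per degC, as in the abstract
records = [slope * temp + 4.0 * rng.normal(size=100) for _ in range(6)]

anom = [r - r.mean() for r in records]     # normalise each record
stack = np.mean(anom, axis=0)              # stacked dD record
temp_rec = stack / slope                   # calibrated temperature record

corr = lambda a, b: np.corrcoef(a, b)[0, 1]
print(corr(stack, temp) > corr(records[0], temp))  # True: stacking helps
```

The real study additionally filters each record and fits the slope (with its ±5.4 ‰ °C-1 uncertainty) against an instrumental stack.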
Nitrogen oxides from waste incineration: control by selective non-catalytic reduction.
Zandaryaa, S; Gavasci, R; Lombardi, F; Fiore, A
2001-01-01
An experimental study of the selective non-catalytic reduction (SNCR) process was carried out to determine the NOx removal efficiency and the mass balance of NH3, the NOx-reducing reagent used. Experimental tests were conducted on a full-scale SNCR system installed in a hospital waste incineration plant. Anhydrous NH3 was injected at the boiler entrance for NOx removal. Ammonia was analyzed after each flue-gas treatment unit in order to establish its mass balance, and the NH3 slip in the stack gas was monitored as well. The effective fraction of NH3 used for thermal NOx reduction was calculated from measured values of injected and residual NH3. Results show that a NOx reduction efficiency in the range of 46.7-76.7% is possible at an NH3/NO molar ratio of 0.9-1.5. The fraction of NH3 used in NOx removal was found to decrease with rising NH3/NO molar ratio. The NH3 slip in the stack gas was very low, below permitted limits, even at the higher NH3 dosages used. No direct correlation was found between the NH3/NO molar ratio and the NH3 slip in the stack gas, since the major part of the residual NH3 was converted into ammonium salts in the dry scrubbing reactor and subsequently collected in the fabric filter; another fraction of the NH3 was dissolved in the scrubbing liquor.
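The two quantities reported above follow from simple balances; a back-of-envelope sketch with illustrative numbers (not plant measurements) is:

```python
def nox_reduction_efficiency(nox_in, nox_out):
    """Percent NOx removed across the SNCR zone (same units in and out)."""
    return 100.0 * (nox_in - nox_out) / nox_in

def nh3_utilisation(nh3_injected, nh3_residual):
    """Effective fraction of injected NH3 consumed by NOx reduction,
    from measured injected and residual NH3 (mol basis)."""
    return (nh3_injected - nh3_residual) / nh3_injected

eff = nox_reduction_efficiency(300.0, 100.0)   # e.g. mg/Nm3 in and out
util = nh3_utilisation(1.5, 0.3)               # e.g. mol NH3 per mol NO
print(round(eff, 1), round(util, 2))           # 66.7 0.8
```

An efficiency of 66.7% falls inside the reported 46.7-76.7% window, and the abstract's finding is that util drops as the NH3/NO molar ratio rises.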
Cervantes-Sanchez, Fernando; Hernandez-Aguirre, Arturo; Solorio-Meza, Sergio; Ornelas-Rodriguez, Manuel; Torres-Cisneros, Miguel
2016-01-01
This paper presents a novel method for improving the training step of single-scale Gabor filters by using the Boltzmann univariate marginal distribution algorithm (BUMDA) in X-ray angiograms. Since the single-scale Gabor filters (SSG) are governed by three parameters, optimal selection of the SSG parameters is highly desirable in order to maximize the detection performance of coronary arteries while reducing the computational time. To obtain the best set of parameters for the SSG, the area (Az) under the receiver operating characteristic curve is used as the fitness function. Moreover, to classify vessel and nonvessel pixels from the Gabor filter response, the interclass variance thresholding method has been adopted. The experimental results using the proposed method obtained the highest detection rate, with Az = 0.9502 over a training set of 40 images and Az = 0.9583 with a test set of 40 images. In addition, the experimental results of vessel segmentation provided an accuracy of 0.944 on the test set of angiograms. PMID:27738422
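The interclass (between-class) variance threshold used to binarise the Gabor response is Otsu's criterion; a self-contained sketch on a synthetic bimodal response (not angiogram data) is shown below.

```python
import numpy as np

def otsu_threshold(values, nbins=64):
    """Threshold maximising the between-class (interclass) variance."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, nbins):
        w0, w1 = p[:k].sum(), p[k:].sum()      # class weights
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0  # class means
        m1 = (p[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[k]
    return best_t

rng = np.random.default_rng(2)
resp = np.concatenate([rng.normal(0.2, 0.05, 900),   # background response
                       rng.normal(0.8, 0.05, 100)])  # vessel response
t = otsu_threshold(resp)
print(0.3 < t < 0.7)   # True: threshold lands in the gap between modes
```

In the paper this thresholding is applied to the magnitude response of the BUMDA-tuned Gabor filters, and Az over ground-truth vessel masks drives the parameter search.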
Finding knowledge translation articles in CINAHL.
Lokker, Cynthia; McKibbon, K Ann; Wilczynski, Nancy L; Haynes, R Brian; Ciliska, Donna; Dobbins, Maureen; Davis, David A; Straus, Sharon E
2010-01-01
The process of moving research into practice has a number of names, including knowledge translation (KT). Researchers and decision makers need to be able to readily access the literature on KT for the field to grow and to evaluate the existing evidence. We sought to develop and validate search filters for finding KT articles in the database Cumulative Index to Nursing and Allied Health (CINAHL). A gold standard database was constructed by hand searching and classifying articles from 12 journals as KT Content, KT Application and KT Theory, and the filters were evaluated for sensitivity, specificity, precision, and accuracy. Optimized search filters had fairly low sensitivity and specificity for KT Content (58.4% and 64.9%, respectively), while sensitivity and specificity increased for retrieving KT Application (67.5% and 70.2%) and KT Theory articles (70.4% and 77.8%). Search filter performance was suboptimal, reflecting the broad base of disciplines and vocabularies used by KT researchers. Such diversity makes retrieval of KT studies in CINAHL difficult.
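The filter-performance figures above are standard retrieval diagnostics computed from a confusion matrix against the gold standard; a minimal sketch with invented counts (not the study's data) is:

```python
def diagnostics(tp, fp, fn, tn):
    """Retrieval diagnostics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # fraction of gold-standard KT found
    specificity = tn / (tn + fp)   # fraction of non-KT correctly excluded
    precision   = tp / (tp + fp)
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, precision, accuracy

# Illustrative counts only, chosen to echo the KT Content figures above
sens, spec, prec, acc = diagnostics(tp=59, fp=35, fn=41, tn=65)
print(round(sens, 2), round(spec, 2))   # 0.59 0.65
```

Filter optimization trades these off: broadening a query raises sensitivity at the cost of specificity and precision.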
Spatiotemporal source tuning filter bank for multiclass EEG based brain computer interfaces.
Acharya, Soumyadipta; Mollazadeh, Moshen; Murari, Kartikeya; Thakor, Nitish
2006-01-01
Non-invasive brain-computer interfaces (BCI) allow people to communicate by modulating features of their electroencephalogram (EEG). Spatiotemporal filtering has a vital role in multi-class, EEG-based BCI. In this study, we used a novel combination of principal component analysis, independent component analysis and dipole source localization to design a spatiotemporal multiple source tuning (SPAMSORT) filter bank, each channel of which was tuned to the activity of an underlying dipole source. Changes in the event-related spectral perturbation (ERSP) were measured and used to train a linear support vector machine to classify between four classes of motor imagery tasks (left hand, right hand, foot and tongue) for one subject. ERSP values were significantly (p<0.01) different across tasks and better (p<0.01) than conventional spatial filtering methods (large Laplacian and common average reference). Classification resulted in an average accuracy of 82.5%. This approach could lead to promising BCI applications such as control of a prosthesis with multiple degrees of freedom.
Dimensional Representation and Gradient Boosting for Seismic Event Classification
NASA Astrophysics Data System (ADS)
Semmelmayer, F. C.; Kappedal, R. D.; Magana-Zook, S. A.
2017-12-01
In this research, we conducted experiments on representational structures for 5009 seismic signals with the intent of finding a method to classify signals as either an explosion or an earthquake in an automated fashion, and applied a gradient-boosted classifier. While perfect classification was not attained (approximately 88% for our best model), in many cases events can be filtered out as having a very high probability of being explosions or earthquakes, diminishing subject-matter experts' (SME) workload for first-stage analysis. It is our hope that these methods can be refined, further increasing classification accuracy.
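The triage idea, auto-labelling only high-confidence events and routing the rest to an SME, can be sketched as follows. This is a hedged illustration: the features are synthetic stand-ins for the signal representations discussed above, and the 0.9 confidence cut-off is an assumed value.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 5))                  # synthetic signal features
y = (X[:, 0] + 0.3 * rng.normal(size=400) > 0).astype(int)  # eq vs. expl

clf = GradientBoostingClassifier(random_state=0).fit(X[:300], y[:300])

proba = clf.predict_proba(X[300:])             # class probabilities
confident = proba.max(axis=1) > 0.9            # assumed triage threshold
print(f"{confident.mean():.0%} auto-classified, remainder to SME review")
```

Raising the threshold shrinks the auto-classified set but makes its labels safer, which is the workload/accuracy trade-off the abstract describes.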
Wadhwa, Vibhor; Trivedi, Premal S; Ali, Sumera; Ryu, Robert K; Pezeshkmehr, Amir
2018-02-01
Inferior vena cava (IVC) filter placement in children has been described in the literature, but there is variability with regard to its indications, and no nationally representative study has compared practice patterns of filter placement at adult and children's hospitals. Our aim was to perform a nationally representative comparison of IVC filter placement practices in children at adult and children's hospitals. The 2012 Kids' Inpatient Database was searched for IVC filter placements in children <18 years of age, identified using the International Classification of Diseases, 9th Revision (ICD-9) code for filter insertion (38.7). A small number of children with congenital cardiovascular anomaly codes were excluded to improve the specificity of the code used to identify filter placement. Filter placements were further classified by patient demographics, hospital type (children's or adult), United States geographic region, urban/rural location, and teaching status. Statistical significance of differences between children's and adult hospitals was determined using the Wilcoxon rank sum test. A total of 618 IVC filter placements were identified in children <18 years (367 males, 251 females, age range: 5-18 years) during 2012. The majority of placements occurred in adult hospitals (573/618, 92.7%). Significantly more filters were placed in the setting of venous thromboembolism in children's hospitals (40/44, 90%) compared to adult hospitals (246/573, 43%) (P<0.001). Prophylactic filters comprised 327/573 (57%) at adult hospitals, with trauma being the most common indication (301/327, 92%). The mean length of stay for patients receiving filters was 24.5 days in children's hospitals and 18.4 days in adult hospitals. The majority of IVC filters in children are placed in adult hospital settings.
Children's hospitals are more likely to place therapeutic filters for venous thromboembolism, compared to adult hospitals where the prophylactic setting of trauma predominates.
NASA Astrophysics Data System (ADS)
Gonzalez, Pablo J.
2017-04-01
Automatic interferometric processing of satellite radar data has emerged as a solution to the increasing amount of acquired SAR data. Automatic SAR and InSAR processing ranges from focusing raw echoes to the computation of displacement time series using large stacks of co-registered radar images. However, this type of interferometric processing approach demands the pre-defined or adaptive selection of multiple processing parameters. One of the interferometric processing steps that most strongly influences the final results (displacement maps) is interferometric phase filtering. There are a large number of phase filtering methods; however, the so-called Goldstein filtering method is the most popular [Goldstein and Werner, 1998; Baran et al., 2003]. The Goldstein filter basically needs two parameters: the size of the filter window and a parameter indicating the filter smoothing intensity. The modified Goldstein method removes the need to select the smoothing parameter by basing it on the local interferometric coherence level, but still requires specifying the dimension of the filtering window. Optimal filtered phase quality usually requires careful selection of those parameters. Therefore, there is a strong need to develop filtering methods suited to automatic processing while maximizing filtered phase quality. In this paper, I present a recursive adaptive phase filtering algorithm for accurate estimation of differential interferometric ground deformation and local coherence measurements. The proposed filter is based upon the modified Goldstein filter [Baran et al., 2003]. This filtering method improves the quality of the interferograms by performing a recursive iteration using variable (cascade) kernel sizes, and improves the coherence estimation by locally defringing the interferometric phase. The method has been tested using simulations and real cases relevant to the characteristics of the Sentinel-1 mission.
Here, I present real examples from C-band interferograms showing strong and weak deformation gradients, with moderate baselines (~100-200 m) and variable temporal baselines of 70 and 190 days over variably vegetated volcanoes (Mt. Etna, Hawaii and Nyiragongo-Nyamulagira). The differential phase of those examples shows intense localized volcano deformation as well as vast areas of small differential phase variation. The proposed method outperforms the classical Goldstein and modified Goldstein filters by preserving subtle phase variations where the deformation fringe rate is high, and effectively suppressing phase noise in regions of smooth phase variation. Finally, this method also has the additional advantage of not requiring input parameters, except for the maximum filtering kernel size. References: Baran, I., Stewart, M.P., Kampes, B.M., Perski, Z., Lilly, P. (2003) A modification to the Goldstein radar interferogram filter, IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 9, doi:10.1109/TGRS.2003.817212. Goldstein, R.M., Werner, C.L. (1998) Radar interferogram filtering for geophysical applications, Geophysical Research Letters, vol. 25, no. 21, 4035-4038, doi:10.1029/1998GL900033.
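The core of the Goldstein approach described above can be stated compactly: the spectrum of each interferogram patch is weighted by its own magnitude raised to a power alpha, and the modified variant ties alpha to local coherence. A minimal NumPy sketch, under simplifying assumptions (non-overlapping, untapered patches; the classic filter also smooths the spectral magnitude before exponentiation, which is omitted here):

```python
import numpy as np

def goldstein_filter(ifg, alpha=0.5, win=32):
    """Goldstein-style patch filter: weight each patch's spectrum by its
    magnitude raised to the power alpha. Illustrative sketch only;
    production filters use overlapping, tapered patches."""
    out = np.zeros_like(ifg, dtype=complex)
    rows, cols = ifg.shape
    for r in range(0, rows, win):
        for c in range(0, cols, win):
            patch = ifg[r:r + win, c:c + win]
            spec = np.fft.fft2(patch)
            weight = np.abs(spec) ** alpha        # spectral weighting
            out[r:r + win, c:c + win] = np.fft.ifft2(spec * weight)
    return out

def adaptive_alpha(coherence):
    """Modified Goldstein (Baran et al., 2003): low local coherence
    implies stronger filtering."""
    return 1.0 - coherence
```

Raising the spectral weight amplifies dominant (fringe) components relative to broadband noise, which is why the filtered phase is smoother where coherence is low.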
Public Release of Pan-STARRS Data
NASA Astrophysics Data System (ADS)
Flewelling, Heather; Consortium, panstarrs
2015-08-01
Pan-STARRS 1 is a 1.8 meter survey telescope, located on Haleakala, Hawaii, with a 1.4 Gigapixel camera, a 7 square degree field of view, and 5 filters (g,r,i,z,y). The public release of data, which is available to everyone, consists of 4 years of data taken between May 2010 and April 2014. Two of the surveys available in the public release are the 3pi survey and the Medium Deep (MD) survey. The 3pi survey has roughly 60 epochs (12 per filter) covering 3/4 of the sky, i.e. everything north of -30 degrees declination. The MD survey consists of 10 fields, observed in a couple of filters each night, usually 8 exposures per filter per field, for about 4000 epochs per MD field. The available data products are accessed through the "Postage Stamp Server" and through the Published Science Products Subsystem (PSPS); both are available through the Pan-STARRS Science Interface (PSI). The Postage Stamp Server provides images and catalogs for different stages of processing on single exposures, stack images, difference images, and forced photometry. The PSPS is a SQL Server database that can be queried via script or web interface, with a database for each MD field and a large database for the 3pi survey. It contains relative photometry, astrometry and object associations, making it easy to run searches across the entire sky, as well as tools to generate lightcurves of individual objects as a function of time.
USDA-ARS?s Scientific Manuscript database
Biodiesel is composed of mono-alkyl fatty acid esters made from the transesterification of vegetable oil or animal fat with methanol or ethanol. Biodiesel must meet rigorous standard fuel specifications (ASTM D 6751; CEN EN 14214) to be classified as an alternative fuel. Nevertheless, biodiesel that...
A Machine Learning Classifier for Fast Radio Burst Detection at the VLBA
NASA Astrophysics Data System (ADS)
Wagstaff, Kiri L.; Tang, Benyang; Thompson, David R.; Khudikyan, Shakeh; Wyngaard, Jane; Deller, Adam T.; Palaniswamy, Divya; Tingay, Steven J.; Wayth, Randall B.
2016-08-01
Time domain radio astronomy observing campaigns frequently generate large volumes of data. Our goal is to develop automated methods that can identify events of interest buried within the larger data stream. The V-FASTR fast transient system was designed to detect rare fast radio bursts within data collected by the Very Long Baseline Array. The resulting event candidates constitute a significant burden in terms of subsequent human reviewing time. We have trained and deployed a machine learning classifier that marks each candidate detection as a pulse from a known pulsar, an artifact due to radio frequency interference, or a potential new discovery. The classifier maintains high reliability by restricting its predictions to those with at least 90% confidence. We have also implemented several efficiency and usability improvements to the V-FASTR web-based candidate review system. Overall, we found that time spent reviewing decreased and the fraction of interesting candidates increased. The classifier now classifies (and therefore filters) 80%-90% of the candidates, with an accuracy greater than 98%, leaving only the 10%-20% most promising candidates to be reviewed by humans.
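The 90% confidence gate described above can be sketched as a simple triage step: predictions above the threshold are auto-classified, everything else is deferred to human review. This is an illustrative sketch, not the actual V-FASTR code; the label names are invented:

```python
import numpy as np

def triage(probabilities, labels, threshold=0.9):
    """Keep only predictions whose top-class probability reaches the
    threshold; defer everything else to human review."""
    auto, review = [], []
    for i, p in enumerate(probabilities):
        k = int(np.argmax(p))
        if p[k] >= threshold:
            auto.append((i, labels[k]))   # confident: auto-classify
        else:
            review.append(i)              # uncertain: human reviews
    return auto, review
```

With a 0.9 threshold, a candidate scored [0.95, 0.03, 0.02] is auto-labeled, while one scored [0.50, 0.30, 0.20] lands in the review queue, matching the 10%-20% human workload quoted above.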
Rashid, Nasir; Iqbal, Javaid; Javed, Amna; Tiwana, Mohsin I; Khan, Umar Shahbaz
2018-01-01
Brain Computer Interface (BCI) systems determine the intent of the user from a variety of electrophysiological signals: Slow Cortical Potentials are recorded from the scalp, while cortical neuronal activity is recorded by implanted electrodes. This paper is focused on the design of an embedded system used to control the finger movements of an upper limb prosthesis using Electroencephalogram (EEG) signals. This is a follow-up of our previous research, which explored the best method to classify three movements (thumb movement, index finger movement, and fist movement). A two-stage logistic regression classifier exhibited the highest classification accuracy when Power Spectral Density (PSD) was used as a feature of the filtered signal. The EEG signal data set was recorded using a 14-channel electrode headset (a noninvasive BCI system) from right-handed, neurologically intact volunteers. Mu (commonly known as alpha waves) and Beta rhythms (8-30 Hz), containing most of the movement data, were retained through filtering using an "Arduino Uno" microcontroller, followed by two-stage logistic regression to obtain a mean classification accuracy of 70%.
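The feature-extraction stage described above, band-limiting the EEG to the mu/beta range and computing a PSD feature, might be sketched as follows. The sampling rate, filter order, and Welch parameters are assumptions for illustration, not values taken from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def mu_beta_psd(eeg, fs=128.0):
    """Band-pass one EEG channel to the mu/beta range (8-30 Hz) and
    return its Welch power spectral density as a feature vector.
    fs, filter order and nperseg are illustrative choices."""
    b, a = butter(4, [8.0, 30.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, eeg)            # zero-phase filtering
    freqs, psd = welch(filtered, fs=fs, nperseg=128)
    return freqs, psd
```

The PSD vector (or summary statistics of it) would then feed the two-stage logistic regression classifier.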
A Bio Medical Waste Identification and Classification Algorithm Using Mltrp and Rvm.
Achuthan, Aravindan; Ayyallu Madangopal, Vasumathi
2016-10-01
We aimed to extract histogram features for texture analysis and to classify the types of Bio Medical Waste (BMW) for garbage disposal and management. The given BMW image was preprocessed using the median filtering technique, which efficiently reduced the noise in the image. After that, the histogram features of the filtered image were extracted with the help of the proposed Modified Local Tetra Pattern (MLTrP) technique. Finally, the Relevance Vector Machine (RVM) was used to classify the BMW into human body parts, plastics, cotton and liquids. The BMW images were collected from a garbage image dataset for analysis. The performance of the proposed BMW identification and classification system was evaluated in terms of sensitivity, specificity, classification rate and accuracy with the help of MATLAB. When compared to the existing techniques, the proposed techniques provided better results. This work proposes a new texture analysis and classification technique for BMW management and disposal. It can be used in many real-time applications, such as hospital and healthcare management systems, for proper BMW disposal.
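The median-filter preprocessing step can be sketched in a few lines (the 3x3 window size is an assumption; the paper does not state its kernel):

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise(image, size=3):
    """Median filtering removes impulse ('salt and pepper') noise while
    preserving edges, which is why it is a common first step before
    texture-feature extraction."""
    return median_filter(image, size=size)
```

For example, a single saturated pixel in an otherwise dark region is replaced by the local median and disappears, while genuine edges survive.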
Kim, Seongjung; Kim, Jongman; Ahn, Soonjae; Kim, Youngho
2018-04-18
Deaf people use sign or finger languages for communication, but these methods of communication are very specialized. For this reason, the deaf can suffer from social inequalities and financial losses due to their communication restrictions. In this study, we developed a finger language recognition algorithm based on an ensemble artificial neural network (E-ANN) using an armband system with 8-channel electromyography (EMG) sensors. The developed algorithm was composed of signal acquisition, filtering, segmentation, feature extraction and an E-ANN-based classifier, and was evaluated with the Korean finger language (14 consonants, 17 vowels and 7 numbers) in 17 subjects. E-ANN configurations were categorized according to the number of classifiers (1 to 10) and the size of the training data (50 to 1500). The accuracy of the E-ANN-based classifier was obtained by 5-fold cross validation and compared with an artificial neural network (ANN)-based classifier. As the number of classifiers (1 to 8) and the size of the training data (50 to 300) increased, the average accuracy of the E-ANN-based classifier increased and the standard deviation decreased. The optimal E-ANN was composed of eight classifiers and a training-data size of 300, and its accuracy was significantly higher than that of the general ANN.
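An ensemble classifier must combine its members' outputs somehow; a majority-vote combiner is one common choice and might look like the sketch below. This is illustrative only; the paper may instead average class probabilities across its member networks:

```python
import numpy as np

def ensemble_vote(member_predictions):
    """Combine integer label predictions from several classifiers by
    majority vote. Input shape: (n_members, n_samples)."""
    preds = np.asarray(member_predictions)
    n_classes = preds.max() + 1
    # For each sample (column), count votes per class and take the mode.
    return np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes).argmax(), 0, preds)
```

Averaging over bootstrap-trained members is also what drives the reported drop in standard deviation as the ensemble grows: individual networks' errors partially cancel.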
Filtering algorithm for dotted interferences
NASA Astrophysics Data System (ADS)
Osterloh, K.; Bücherl, T.; Lierse von Gostomski, Ch.; Zscherpel, U.; Ewert, U.; Bock, S.
2011-09-01
An algorithm has been developed to reliably remove dotted interferences impairing the perceptibility of objects within a radiographic image. This is a particular challenge with neutron radiographs collected at the NECTAR facility, Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II): the resulting images are dominated by features resembling a snow flurry. These artefacts are caused by scattered neutrons, gamma radiation, cosmic radiation, etc., all hitting the detector CCD directly in spite of a sophisticated shielding, and they make such images rather useless for further direct evaluation. One approach to this problem of random effects would be to collect a vast number of single images, combine them appropriately and process them with common image filtering procedures. However, it has been shown that, e.g., median filtering, depending on the kernel size in the plane and/or the number of single shots to be combined, is either insufficient or tends to blur sharp lined structures. This inevitably makes visually controlled image-by-image processing unavoidable. Particularly in tomographic studies, it would be far too tedious to treat each single projection in this way. Alternatively, it would be not only more convenient but in many cases the only reasonable approach to filter a stack of images in a batch procedure to get rid of the disturbing interferences. The algorithm presented here meets all these requirements. It reliably frees the images from the snowy pattern described above without the loss of fine structures and without a general blurring of the image. It consists of an iterative, parameter-free filtering algorithm, suitable for batch processing, that aims to eliminate the often complex interfering artefacts while leaving the original information untouched as far as possible.
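A filter in the spirit described, replacing only outlier pixels and iterating so that the rest of the image stays untouched, could be sketched as below. The detection rule (deviation from the local median in units of a robust scale), the threshold k, and the kernel size are all assumptions, not the published NECTAR algorithm:

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_dots(image, k=5.0, size=3, max_iter=5):
    """Iteratively replace only those pixels deviating from the local
    median by more than k robust sigmas; all other pixels keep their
    original values (sketch in the spirit of the NECTAR filter)."""
    out = image.astype(float).copy()
    for _ in range(max_iter):
        med = median_filter(out, size=size)
        resid = out - med
        sigma = 1.4826 * np.median(np.abs(resid))   # robust scale (MAD)
        hits = np.abs(resid) > k * max(sigma, 1e-12)
        if not hits.any():
            break
        out[hits] = med[hits]                       # replace outliers only
    return out
```

Unlike plain median filtering of the whole frame, this touches only the flagged "snow" pixels, so fine structures away from the dots are passed through unchanged.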
Retrieval characteristics of the Bard Denali and Argon Option inferior vena cava filters.
Dowell, Joshua D; Semaan, Dominic; Makary, Mina S; Ryu, John; Khayat, Mamdouh; Pan, Xueliang
2017-11-01
The purpose of this study was to compare the retrieval characteristics of the Option Elite (Argon Medical, Plano, Tex) and Denali (Bard, Tempe, Ariz) retrievable inferior vena cava filters (IVCFs), two filters that share a similar conical design. A single-center, retrospective study reviewed all Option and Denali IVCF removals during a 36-month period. Attempted retrievals were classified as advanced if the routine "snare and sheath" technique was initially unsuccessful despite multiple attempts or an alternative endovascular maneuver or access site was used. Patient and filter characteristics were documented. In our study, 63 Option and 45 Denali IVCFs were retrieved, with an average dwell time of 128.73 and 99.3 days, respectively. Significantly higher median fluoroscopy times were experienced in retrieving the Option filter compared with the Denali filter (12.18 vs 6.85 minutes; P = .046). Use of adjunctive techniques was also higher in comparing the Option filter with the Denali filter (19.0% vs 8.7%; P = .079). No significant difference was noted between these groups in regard to gender, age, or history of malignant disease. Option IVCF retrieval procedures required significantly longer retrieval fluoroscopy time compared with Denali IVCFs. Although procedure time was not analyzed in this study, as a surrogate, the increased fluoroscopy time may also have an impact on procedural direct costs and throughput. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
OCT image segmentation of the prostate nerves
NASA Astrophysics Data System (ADS)
Chitchian, Shahab; Weldon, Thomas P.; Fried, Nathaniel M.
2009-08-01
The cavernous nerves course along the surface of the prostate and are responsible for erectile function. Improvements in identification, imaging, and visualization of the cavernous nerves during prostate cancer surgery may improve nerve preservation and postoperative sexual potency. In this study, 2-D OCT images of the rat prostate were segmented to differentiate the cavernous nerves from the prostate gland. Three image features were employed: Gabor filter, Daubechies wavelet, and Laws filter. The features were segmented using a nearest-neighbor classifier. N-ary morphological post-processing was used to remove small voids. The cavernous nerves were differentiated from the prostate gland with a segmentation error rate of only 0.058 +/- 0.019.
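One of the three features, the Laws filter, computes local texture energy from separable 1-D masks. A sketch of the idea, with an illustrative mask pair and smoothing window (the study does not state which Laws masks it used):

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

L5 = np.array([1, 4, 6, 4, 1], float)     # "level" vector
E5 = np.array([-1, -2, 0, 2, 1], float)   # "edge" vector

def laws_energy(image, v=L5, h=E5, win=15):
    """Laws texture energy: convolve with a 2-D mask built as the outer
    product of 1-D Laws vectors, then smooth the absolute response."""
    mask = np.outer(v, h)
    response = convolve(image.astype(float), mask)
    return uniform_filter(np.abs(response), size=win)
```

Per-pixel energies from several mask pairs form the feature vector fed to the nearest-neighbor classifier: textured regions (e.g. nerve) produce high energy, smooth regions low energy.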
Pattern recognition invariant under changes of scale and orientation
NASA Astrophysics Data System (ADS)
Arsenault, Henri H.; Parent, Sebastien; Moisan, Sylvain
1997-08-01
We have used a modified version of the method proposed by Neiberg and Casasent to successfully classify five kinds of military vehicles. The method uses a wedge filter to achieve scale invariance; lines in a multi-dimensional feature space correspond to each target over out-of-plane orientations spanning 360 degrees around a vertical axis. The images were not binarized, but were filtered in a preprocessing step to reduce aliasing. The feature vectors were normalized and orthogonalized by means of a neural network. Out-of-plane rotations of 360 degrees and scale changes of a factor of four were considered. Error-free classification was achieved.
The First Pan-Starrs Medium Deep Field Variable Star Catalog
NASA Astrophysics Data System (ADS)
Flewelling, Heather
2013-01-01
We present the first Pan-Starrs 1 Medium Deep Field Variable Star Catalog (PS1-MDF-VSC). The Pan-Starrs 1 (PS1) telescope is a 1.8 meter survey telescope with a 1.4 Gigapixel camera, and is located in Haleakala, Hawaii. The Medium Deep survey, which consists of 10 fields located uniformly across the sky, totalling 70 square degrees, is observed each night, in 2-3 filters per field, with 8 exposures per filter. We have located and classified several hundred periodic variable stars within the Medium Deep fields, and we present the first catalog listing the properties of these variable stars.
Testing Saliency Parameters for Automatic Target Recognition
NASA Technical Reports Server (NTRS)
Pandya, Sagar
2012-01-01
A bottom-up visual attention model (the saliency model) is tested to enhance the performance of Automated Target Recognition (ATR). JPL has developed an ATR system that identifies regions of interest (ROI) using a trained OT-MACH filter, and then classifies potential targets as true- or false-positives using machine-learning techniques. In this project, saliency is used as a pre-processing step to reduce the search space for OT-MACH filtering. Saliency parameters, such as output level and orientation weight, are tuned to detect known target features. Preliminary results are promising, and future work entails a rigorous, parameter-based search to gain maximum insight into this method.
Study of the photostability of 18 sunscreens in creams by measuring the SPF in vitro.
Couteau, Céline; Faure, Aurélie; Fortin, June; Paparis, Eva; Coiffard, Laurence J M
2007-05-09
The target of this research was to evaluate the photostability of various sunscreen agents incorporated into an O/W emulsion. The concept of photostability is very important in the field of solar protection. The effectiveness of sun-protection products is quantified using a universal indicator: the sun protection factor (SPF). This number, which can be found on packaging, can be determined in two different ways: by in vivo methods (Colipa method) or in vitro; the latter was adopted for this study. Depending on the UVB filter selected (the amended Directive 76/768/EEC currently authorizes 18 UVB filters), creams of varying effectiveness can be obtained. We irradiated sun lotions formulated with the authorized filters, each at its maximum permitted concentration, in a Suntest at an irradiance of 650 W/m(2) for variable times. At regular intervals, the SPF was measured in order to establish, for each filter, the kinetics SPF = f(time). A stability indicator (t(90)) was then derived. In this way, we could rank the filters in order of increasing photostability.
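The t(90) indicator, the irradiation time at which the SPF has dropped to 90% of its initial value, can be computed from the measured kinetics by interpolating between the bracketing measurements. A minimal sketch (the paper does not state its exact interpolation scheme, so linear interpolation is assumed):

```python
def t90(times, spf):
    """Return the time at which SPF falls to 90% of its initial value,
    by linear interpolation between successive measurements, or None if
    the product never drops below 90% over the measured interval."""
    target = 0.9 * spf[0]
    for i in range(1, len(spf)):
        if spf[i] <= target:
            # interpolate inside the bracketing interval
            f = (spf[i - 1] - target) / (spf[i - 1] - spf[i])
            return times[i - 1] + f * (times[i] - times[i - 1])
    return None
```

Ranking filters by their t90 values then orders them by increasing photostability, as done in the study.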
Surface Fitting Filtering of LIDAR Point Cloud with Waveform Information
NASA Astrophysics Data System (ADS)
Xing, S.; Li, P.; Xu, Q.; Wang, D.; Li, P.
2017-09-01
Full-waveform LiDAR is an active technology of photogrammetry and remote sensing. It provides more detailed information about objects along the path of a laser pulse than discrete-return topographic LiDAR. High-quality point clouds and waveform information can be obtained by waveform decomposition, which can contribute to accurate filtering. A surface-fitting filtering method that uses waveform information is proposed to exploit this advantage. Firstly, the discrete point cloud and waveform parameters are resolved by globally convergent Levenberg-Marquardt decomposition. Secondly, ground seed points are selected, and abnormal ones are detected by waveform parameters and robust estimation. Thirdly, the terrain surface is fitted and the height-difference threshold is determined in consideration of window size and mean square error. Finally, the points are classified gradually as the window size rises; the filtering process finishes once the window size exceeds a threshold. Waveform data over urban, farmland and mountain areas from "WATER (Watershed Allied Telemetry Experimental Research)" were selected for experiments. Results show that, compared with the traditional method, the accuracy of point cloud filtering is further improved and the proposed method has high practical value.
Fluorescence intensity positivity classification of Hep-2 cells images using fuzzy logic
NASA Astrophysics Data System (ADS)
Sazali, Dayang Farzana Abang; Janier, Josefina Barnachea; May, Zazilah Bt.
2014-10-01
Indirect Immunofluorescence (IIF) is the gold standard for the antinuclear autoantibody (ANA) test, which uses Hep-2 cells to determine specific diseases. Different classifier algorithms have been proposed in previous works; however, there is still no validated standard for classifying fluorescence intensity. This paper presents the use of fuzzy logic to classify fluorescence intensity and to determine the positivity of Hep-2 cell serum samples. The fuzzy algorithm involves image pre-processing, by filtering out noise and smoothing the image, and conversion from the red, green and blue (RGB) color space to the LAB color space (a luminosity layer and chromaticity layers "a" and "b"), from which the mean values of the lightness and of chromaticity layer "a" were extracted and classified using a fuzzy logic algorithm based on the standard score ranges of ANA fluorescence intensity. Using 100 data sets of positive and intermediate fluorescence intensity to test the performance, the fuzzy logic obtained accuracies of 85% and 87% for the intermediate and positive classes, respectively.
Foreign object detection via texture recognition and a neural classifier
NASA Astrophysics Data System (ADS)
Patel, Devesh; Hannah, I.; Davies, E. R.
1993-10-01
It is rare to find pieces of stone, wood, metal, or glass in food packets, but when they do occur, these foreign objects (FOs) cause distress to the consumer and concern to the manufacturer. In x-ray images used to detect FOs within food bags, hard contaminants such as stone or metal appear darker, whereas soft contaminants such as wood or rubber appear slightly lighter than the food substrate. In this paper we concentrate on the detection of soft contaminants such as small pieces of wood in bags of frozen corn kernels. Convolution masks are used to generate textural features, which are then classified into corresponding homogeneous regions of the image using an artificial neural network (ANN) classifier. The separate ANN outputs are combined using a majority operator, and region discrepancies are removed by a median filter. Comparisons with classical classifiers showed the ANN approach to have the best overall combination of characteristics for our particular problem. The detected boundaries are in good agreement with the visually perceived segmentations.
A generalized adaptive mathematical morphological filter for LIDAR data
NASA Astrophysics Data System (ADS)
Cui, Zheng
Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous LIDAR point dataset using a filter, because the DTM is created by interpolating ground points. As one of the widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying the LIDAR data at the point level, linear computational complexity, and preservation of the geometric shapes of terrain features. The filter works well in an urban setting with a gentle slope and a mixture of vegetation and buildings. However, the PM filter often incorrectly removes ground measurements at topographic highs, along with large non-ground objects, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes of topographic slopes and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points incorrectly removed by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average, compared with the method used by the popular commercial software TerraScan.
The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to effectively and efficiently identify ground measurements for the complex terrains in a large LIDAR data set. The GAPM filter is highly automatic and requires little human input. Therefore, it can significantly reduce the effort of manually processing voluminous LIDAR measurements.
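The progressive morphological idea underlying both the PM and GAPM filters can be sketched on a gridded surface: open the surface with growing windows and flag points rising above the opened surface by more than an elevation threshold as non-ground. The fixed thresholds below are assumptions; the GAPM contribution is precisely to adapt them to local slope:

```python
import numpy as np
from scipy.ndimage import grey_opening

def progressive_morphological(z, windows=(3, 5, 9),
                              thresholds=(0.5, 1.0, 2.0)):
    """Progressive morphological ground filter (sketch): grey-opening
    with growing windows removes ever-larger objects; points rising
    above the opened surface by more than the window's threshold are
    flagged non-ground. Returns a boolean ground mask."""
    nonground = np.zeros(z.shape, bool)
    surface = z.astype(float).copy()
    for w, t in zip(windows, thresholds):
        opened = grey_opening(surface, size=w)
        nonground |= (surface - opened) > t
        surface = opened                 # progressive: reuse opened surface
    return ~nonground
```

Small windows remove vegetation, larger ones remove buildings; the "cut-off" error arises exactly when a large window also flattens a genuine hilltop, which the cluster and trend analysis above is designed to prevent.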
Frequency domain analysis of errors in cross-correlations of ambient seismic noise
NASA Astrophysics Data System (ADS)
Liu, Xin; Ben-Zion, Yehuda; Zigone, Dimitri
2016-12-01
We analyse random errors (variances) in cross-correlations of ambient seismic noise in the frequency domain, which differs from previous time-domain methods. Extending previous theoretical results on the ensemble-averaged cross-spectrum, we estimate the confidence interval of the stacked cross-spectrum of a finite amount of data at each frequency, using non-overlapping windows of fixed length. The extended theory also connects amplitude and phase variances with the variance of each complex spectrum value. Analysis of synthetic stationary ambient noise is used to estimate the confidence interval of the stacked cross-spectrum obtained with different lengths of noise data, corresponding to different numbers of evenly spaced windows of the same duration. This method allows estimating the signal-to-noise ratio (SNR) of a noise cross-correlation in the frequency domain, without specifying the filter bandwidth or the signal/noise windows needed for time-domain SNR estimation. Based on synthetic ambient noise data, we also compare the probability distributions, causal-part amplitude and SNR of the stacked cross-spectrum obtained with one-bit normalization or pre-whitening against those obtained without these pre-processing steps. Natural continuous noise records contain both ambient noise and small earthquakes that are inseparable from the noise with the existing pre-processing steps. Using probability distributions of random cross-spectrum values based on the theoretical results provides an effective way to exclude such small earthquakes, as well as additional data segments (outliers) contaminated by signals of different statistics (e.g. rain, cultural noise), from continuous noise waveforms. This technique is applied to constrain the values and uncertainties of the amplitude and phase velocity of the stacked noise cross-spectrum at different frequencies, using data from southern California at both regional scale (~35 km) and a dense linear array (~20 m) across the plate-boundary faults.
A block bootstrap resampling method is used to account for temporal correlation of noise cross-spectrum at low frequencies (0.05-0.2 Hz) near the ocean microseismic peaks.
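The frequency-domain stacking described above, averaging the cross-spectrum over non-overlapping windows of fixed length, can be sketched as follows (no taper or pre-whitening applied in this simplified version):

```python
import numpy as np

def stacked_cross_spectrum(x, y, nwin, fs=1.0):
    """Average the cross-spectrum of two records over non-overlapping
    windows of nwin samples; any trailing partial window is dropped."""
    n = (len(x) // nwin) * nwin
    X = np.fft.rfft(np.reshape(x[:n], (-1, nwin)), axis=1)
    Y = np.fft.rfft(np.reshape(y[:n], (-1, nwin)), axis=1)
    cross = (X * np.conj(Y)).mean(axis=0)       # stack over windows
    freqs = np.fft.rfftfreq(nwin, d=1.0 / fs)
    return freqs, cross
```

The per-frequency scatter of the individual window cross-spectra around this stacked mean is what yields the confidence intervals, and hence the frequency-domain SNR, discussed in the abstract.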
F3D Image Processing and Analysis for Many - and Multi-core Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
F3D is written in OpenCL, so it achieves platform-portable parallelism on modern multi-core CPUs and many-core GPUs. The interface and mechanisms to access the F3D core are written in Java as a plugin for Fiji/ImageJ, delivering several key image-processing algorithms necessary to remove artifacts from micro-tomography data. The algorithms consist of data-parallel-aware filters that can efficiently utilize resources, work on out-of-core datasets, and scale efficiently across multiple accelerators. Optimizing for data-parallel filters, streaming out-of-core datasets, and efficient resource, memory and data management over complex execution sequences of filters greatly expedites any scientific workflow with image processing requirements. F3D performs several different types of 3D image processing operations, such as non-linear filtering using bilateral filtering, median filtering and/or morphological operators (MM). F3D gray-level MM operators are one-pass, constant-time methods that can perform morphological transformations with a line structuring element oriented in discrete directions. Additionally, MM operators can be applied to gray-scale images, and consist of two parts: (a) a reference shape, or structuring element, which is translated over the image, and (b) a mechanism, or operation, that defines the comparisons to be performed between the image and the structuring element. This tool provides a critical component within many complex pipelines, such as those for performing automated segmentation of image stacks. F3D is also a "descendant" of Quant-CT, another software package we developed in the past; the two modules are to be integrated in a future version. Further details were reported in: D.M. Ushizima, T. Perciano, H. Krishnan, B. Loring, H. Bale, D. Parkinson, and J. Sethian. Structure recognition from high-resolution images of ceramic composites. IEEE International Conference on Big Data, October 2014.
NASA Astrophysics Data System (ADS)
Pande-Chhetri, Roshan
High resolution hyperspectral imagery (airborne or ground-based) is gaining momentum as a useful analytical tool in various fields, including agriculture and aquatic systems. These images are often contaminated with stripes and noise, resulting in a lower signal-to-noise ratio, especially in aquatic regions where the signal is naturally low. This research investigates effective methods for filtering high spatial resolution hyperspectral imagery and the use of the imagery in water quality parameter estimation and aquatic vegetation classification. The striping pattern of the hyperspectral imagery is non-parametric and difficult to filter. In this research, a de-striping algorithm based on wavelet analysis and adaptive Fourier-domain normalization was examined. The result of this algorithm was found superior to other available algorithms and yielded the highest peak signal-to-noise ratio improvement. The algorithm was implemented on individual image bands and on selected bands of the Maximum Noise Fraction (MNF) transformed images. The results showed that image filtering in the MNF domain was efficient and produced the best results. The study investigated methods of analyzing hyperspectral imagery to estimate water quality parameters and to map aquatic vegetation in case-2 waters. Ground-based hyperspectral imagery was analyzed to determine chlorophyll-a (Chl-a) concentrations in aquaculture ponds. Two-band and three-band indices were implemented, and the effect of using submerged reflectance targets was evaluated. Laboratory-measured values were found to be in strong correlation with two-band and three-band spectral indices computed from the hyperspectral image. Coefficients of determination (R2) were 0.833 and 0.862 without submerged targets, and stronger values of 0.975 and 0.982 were obtained using submerged targets. Airborne hyperspectral images were used to detect and classify aquatic vegetation in a black-river estuarine system.
Image normalization for water surface reflectance and water depths was conducted, and non-parametric classifiers such as ANN, SVM and SAM were tested and compared. Quality assessment indicated better classification and detection when non-parametric classifiers were applied to normalized or depth-invariant transformed images. The best classification accuracy of 73% was achieved when the ANN was applied to the normalized image, and the best detection accuracy of around 92% was obtained when SVM or SAM was applied to the depth-invariant images.
Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Changsen; Liu, Feixiang
2017-02-15
Common spatial pattern (CSP) is the most widely used method in motor imagery based brain-computer interface (BCI) systems. In the conventional CSP algorithm, pairs of eigenvectors corresponding to both extreme eigenvalues are selected to construct the optimal spatial filter. In addition, an appropriate selection of subject-specific time segments and frequency bands plays an important role in its successful application. This study proposes to optimize spatial-frequency-temporal patterns for discriminative feature extraction. Spatial optimization is implemented by channel selection and by finding discriminative spatial filters adaptively on each time-frequency segment. A novel Discernibility of Feature Sets (DFS) criterion is designed for spatial filter optimization. In addition, discriminative features located in multiple time-frequency segments are selected automatically by the proposed sparse time-frequency segment common spatial pattern (STFSCSP) method, which exploits sparse regression for significant feature selection. Finally, a weight determined by the sparse coefficient is assigned to each selected CSP feature, and we propose a Weighted Naïve Bayesian Classifier (WNBC) for classification. Experimental results on two public EEG datasets demonstrate that optimizing spatial-frequency-temporal patterns in a data-driven manner for discriminative feature extraction greatly improves the classification performance. The proposed method gives significantly better classification accuracies in comparison with several competing methods in the literature. The proposed approach is a promising candidate for future BCI systems. Copyright © 2016 Elsevier B.V. All rights reserved.
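The extreme-eigenvalue filter selection described in this abstract is the core of any CSP implementation. A minimal sketch of that step (generic whitening-plus-eigendecomposition CSP, not the authors' STFSCSP code; the covariance inputs and `n_pairs` parameter are illustrative assumptions):

```python
import numpy as np

def csp_filters(cov_a, cov_b, n_pairs=1):
    """Return 2*n_pairs CSP spatial filters from two per-class covariance matrices."""
    # Whiten the composite covariance cov_a + cov_b.
    d, U = np.linalg.eigh(cov_a + cov_b)
    P = U @ np.diag(d ** -0.5) @ U.T
    # Eigendecompose the whitened class-A covariance; eigenvalues near 1
    # favour class A, eigenvalues near 0 favour class B.
    lam, V = np.linalg.eigh(P @ cov_a @ P.T)
    W = V.T @ P                                  # rows are spatial filters
    # Keep the pairs corresponding to both extreme eigenvalues.
    idx = np.argsort(lam)
    keep = np.concatenate([idx[:n_pairs], idx[-n_pairs:]])
    return W[keep]
```

A filter from the large-eigenvalue end maximizes the variance ratio for class A; one from the small end does the same for class B.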
On the influence of high-pass filtering on ICA-based artifact reduction in EEG-ERP.
Winkler, Irene; Debener, Stefan; Müller, Klaus-Robert; Tangermann, Michael
2015-01-01
Standard artifact removal methods for electroencephalographic (EEG) signals are either based on Independent Component Analysis (ICA) or they regress out ocular activity measured at electrooculogram (EOG) channels. Successful ICA-based artifact reduction relies on suitable pre-processing. Here we systematically evaluate the effects of high-pass filtering at different frequencies. Offline analyses were based on event-related potential data from 21 participants performing a standard auditory oddball task and an automatic artifactual component classification method (MARA). As a pre-processing step for ICA, high-pass filtering with cutoffs between 1 and 2 Hz consistently produced good results in terms of signal-to-noise ratio (SNR), single-trial classification accuracy and the percentage of 'near-dipolar' ICA components. Relative to no artifact reduction, ICA-based artifact removal significantly improved SNR and classification accuracy. This was not the case for a regression-based approach to removing EOG artifacts.
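The high-pass pre-processing step evaluated in this study can be illustrated with a crude zero-phase FFT mask. This is a sketch only; real EEG pipelines use proper FIR/IIR filters, and the 1-2 Hz range is simply the cutoff band found to work well above:

```python
import numpy as np

def highpass(signal, fs, cutoff):
    """Zero out FFT bins below `cutoff` Hz (crude high-pass, for illustration)."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spec[freqs < cutoff] = 0.0
    return np.fft.irfft(spec, n=signal.size)
```

Applied to an EEG channel sampled at `fs`, a call like `highpass(x, fs, 1.0)` removes slow drifts before the data are passed to ICA.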
3D Gabor wavelet based vessel filtering of photoacoustic images.
Haq, Israr Ul; Nagoaka, Ryo; Makino, Takahiro; Tabata, Takuya; Saijo, Yoshifumi
2016-08-01
Filtering and segmentation of vasculature is an important issue in medical imaging. The visualization of vasculature is crucial for early diagnosis and therapy in numerous medical applications. This paper investigates the use of Gabor wavelets to enhance vasculature while eliminating the noise due to the size, sensitivity and aperture of the detector in 3D Optical Resolution Photoacoustic Microscopy (OR-PAM). A detailed multi-scale analysis of wavelet filtering and a Hessian-based method is presented for extracting vessels of different sizes, since blood vessels usually vary within a range of radii. The proposed algorithm first enhances the vasculature in the image, and then tubular structures are classified by eigenvalue decomposition of the local Hessian matrix at each voxel. The algorithm was tested on non-invasive experiments and shows appreciable results in enhancing vasculature in photoacoustic images.
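The per-voxel Hessian eigenanalysis can be illustrated in 2-D with a Frangi-style vesselness measure. This is a simplified single-scale sketch with illustrative `beta` and `c` parameters, not the authors' 3-D multi-scale implementation:

```python
import numpy as np

def vesselness2d(image, beta=0.5, c=0.5):
    """Frangi-style vesselness from the eigenvalues of the local Hessian (2-D, one scale)."""
    gy, gx = np.gradient(image.astype(float))
    hxx = np.gradient(gx, axis=1)
    hxy = np.gradient(gx, axis=0)
    hyy = np.gradient(gy, axis=0)
    # Closed-form eigenvalues of the symmetric 2x2 Hessian at every pixel.
    mu = (hxx + hyy) / 2
    root = np.sqrt(((hxx - hyy) / 2) ** 2 + hxy ** 2)
    la, lb = mu + root, mu - root
    # Order so that |l_small| <= |l_big|.
    swap = np.abs(la) > np.abs(lb)
    l_small = np.where(swap, lb, la)
    l_big = np.where(swap, la, lb)
    rb = np.abs(l_small) / (np.abs(l_big) + 1e-12)   # blobness: small for tubes
    s = np.sqrt(la ** 2 + lb ** 2)                   # second-order structureness
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    v[l_big > 0] = 0.0   # bright tubes on a dark background need l_big < 0
    return v
```

A multi-scale version would smooth the image with Gaussians of several widths and take the maximum response per pixel.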
Litwin, Robert J; Huang, Steven Y; Sabir, Sharjeel H; Hoang, Quoc B; Ahrar, Kamran; Ahrar, Judy; Tam, Alda L; Mahvash, Armeen; Ensor, Joe E; Kroll, Michael; Gupta, Sanjay
2017-09-01
Our primary purpose was to assess the impact of an inferior vena cava filter retrieval algorithm in a cancer population. Because cancer patients are at persistently elevated risk for development of venous thromboembolism (VTE), our secondary purpose was to assess the incidence of recurrent VTE in patients who underwent filter retrieval. Patients with malignant disease who had retrievable filters placed at a tertiary care cancer hospital from August 2010 to July 2014 were retrospectively studied. A filter retrieval algorithm was established in August 2012. Patients and referring physicians were contacted in the postintervention period when review of the medical record indicated that filter retrieval was clinically appropriate. Patients were classified into preintervention (August 2010-July 2012) and postintervention (August 2012-July 2014) study cohorts. Retrieval rates and clinical pathologic records were reviewed. Filter retrieval was attempted in 34 (17.4%) of 195 patients in the preintervention cohort and 66 (32.8%) of 201 patients in the postintervention cohort (P < .01). The median time to filter retrieval in the preintervention and postintervention cohorts was 60 days (range, 20-428 days) and 107 days (range, 9-600 days), respectively (P = .16). In the preintervention cohort, 49 of 195 (25.1%) patients were lost to follow-up compared with 24 of 201 (11.9%) patients in the postintervention cohort (P < .01). Survival was calculated from the date of filter placement to death, when available. The overall survival for patients whose filters were retrieved was longer compared with the overall survival for patients whose filters were not retrieved (P < .0001). Of the 80 patients who underwent successful filter retrieval, two patients (2.5%) suffered from recurrent VTE (n = 1 nonfatal pulmonary embolism; n = 1 deep venous thrombosis). Both patients were treated with anticoagulation without filter replacement. 
Inferior vena cava filter retrieval rates can be significantly increased in patients with malignant disease with a low rate (2.5%) of recurrent VTE after filter retrieval. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Detecting natural occlusion boundaries using local cues
DiMattina, Christopher; Fox, Sean A.; Lewicki, Michael S.
2012-01-01
Occlusion boundaries and junctions provide important cues for inferring three-dimensional scene organization from two-dimensional images. Although several investigators in machine vision have developed algorithms for detecting occlusions and other edges in natural images, relatively few psychophysics or neurophysiology studies have investigated what features are used by the visual system to detect natural occlusions. In this study, we addressed this question using a psychophysical experiment where subjects discriminated image patches containing occlusions from patches containing surfaces. Image patches were drawn from a novel occlusion database containing labeled occlusion boundaries and textured surfaces in a variety of natural scenes. Consistent with related previous work, we found that relatively large image patches were needed to attain reliable performance, suggesting that human subjects integrate complex information over a large spatial region to detect natural occlusions. By defining machine observers using a set of previously studied features measured from natural occlusions and surfaces, we demonstrate that simple features defined at the spatial scale of the image patch are insufficient to account for human performance in the task. To define machine observers using a more biologically plausible multiscale feature set, we trained standard linear and neural network classifiers on the rectified outputs of a Gabor filter bank applied to the image patches. We found that simple linear classifiers could not match human performance, while a neural network classifier combining filter information across location and spatial scale compared well. These results demonstrate the importance of combining a variety of cues defined at multiple spatial scales for detecting natural occlusions. PMID:23255731
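The multiscale feature set used for the machine observers in this study rests on a standard Gabor filter bank. A minimal real-valued Gabor kernel (illustrative parameters; the study's actual bank spans multiple locations and spatial scales) can be written as:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real Gabor kernel: an oriented sinusoid under a Gaussian envelope."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the grating
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xr / wavelength)
```

Correlating an image patch with kernels at several orientations and wavelengths, then rectifying the outputs, gives the kind of feature vector fed to the linear and neural network classifiers above.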
Complications and Retrieval Data of Vena Cava Filters Based on Specific Infrarenal Location.
Tullius, Thomas G; Bos, Aaron S; Patel, Mikin V; Funaki, Brian; Van Ha, Thuong G
2018-02-01
Although the recommended placement of IVC filters is with their tips positioned at the level of the renal vein inflow, in practice adherence is limited due to clinical situation or IVC anatomy. We sought to evaluate the indwelling and retrieval complications of IVC filters based on their specific position within the infrarenal IVC. A retrospective, single-institution study of 333 consecutive infrarenal vena cava filters placed by interventional radiologists between 2013 and 2015 in patients with an average age of 62.2 ± 15.7 years was performed. The primary indication was venous thromboembolic disease (n = 320, 96.1%). Filters were classified based on the location of the apex below the lowest renal vein inflow on the procedural venogram: less than 1 cm (n = 180, 54.1%), 1-2 cm (n = 96, 28.8%), and greater than 2 cm (n = 57, 17.1%). Denali (n = 171, 51.4%) and Celect (n = 162, 48.6%) filters were evaluated. CT follow-up, indwelling complications, and retrieval data were obtained. Follow-up CT imaging performed for symptomatic indications occurred for 38.3% of filters placed < 1 cm below the lowest renal vein, 27.1% of filters placed 1-2 cm, and 36.8% placed > 2 cm (p = .16). There was no difference in caval strut penetration, penetration of adjacent viscera, time to penetration, filter migration, or tilt (p = .15, .27, .41, .57, .93). No filter fractures occurred. There was no difference in the incidence of breakthrough PE or complex filter retrieval (p = .83, .59). Only one retrieval failure occurred. This study suggests that filter apex location within the infrarenal IVC, including placement > 2 cm below the level of the renal vein inflow, is not associated with differences in indwelling or retrieval complications. Level 3 non-randomized controlled follow-up study.
A stackable, two-chambered, paper-based microbial fuel cell.
Fraiwan, Arwa; Choi, Seokheun
2016-09-15
We developed a stackable and integrable paper-based microbial fuel cell (MFC) for potentially powering on-chip paper-based devices. Four MFCs were prepared on a T-shaped filter paper, which was then folded three times to connect the MFCs in series. Each MFC was fabricated by sandwiching multifunctional paper layers in a two-chambered fuel cell configuration. One drop of bacteria-containing anolyte in the anodic inlet and another drop of potassium ferricyanide for the cathodic reaction flowed through patterned fluidic pathways within the paper matrix, both vertically and horizontally, reaching each of the four MFCs and filling the reservoir of each device. Bacterial respiration then transferred electrons to the anode, which traveled across an external load to the cathode, where they combined with protons. The MFC stack connected in series generated a high power density (1.2 μW/cm²), two orders of magnitude higher than the previous report on a paper-based MFC stack. This work represents the fusion of the art of origami with paper-based MFC technology, which could provide a paradigm shift in the architecture and design of paper-based batteries. Copyright © 2016 Elsevier B.V. All rights reserved.
Uncertainty analysis of depth predictions from seismic reflection data using Bayesian statistics
NASA Astrophysics Data System (ADS)
Michelioudakis, Dimitrios G.; Hobbs, Richard W.; Caiado, Camila C. S.
2018-03-01
Estimating the depths of target horizons from seismic reflection data is an important task in exploration geophysics. To constrain these depths we need a reliable and accurate velocity model. Here, we build an optimum 2D seismic reflection data processing flow focused on pre-stack deghosting filters and velocity model building, and apply Bayesian methods, including Gaussian process emulation and Bayesian History Matching (BHM), to estimate the uncertainties of the depths of key horizons near the borehole DSDP-258 located in the Mentelle Basin, southwest of Australia, and compare the results with the drilled core from that well. Following this strategy, the tie between the modelled and observed depths from the DSDP-258 core was in accordance with the ±2σ posterior credibility intervals, and predictions for depths to key horizons were made for the two new drill sites adjacent to the existing borehole of the area. The probabilistic analysis allowed us to generate multiple realizations of pre-stack depth migrated images, which can be directly used to better constrain interpretation and identify potential risk at drill sites. The method will be applied to constrain the drilling targets for the upcoming International Ocean Discovery Program (IODP), leg 369.
Uncertainty analysis of depth predictions from seismic reflection data using Bayesian statistics
NASA Astrophysics Data System (ADS)
Michelioudakis, Dimitrios G.; Hobbs, Richard W.; Caiado, Camila C. S.
2018-06-01
Estimating the depths of target horizons from seismic reflection data is an important task in exploration geophysics. To constrain these depths we need a reliable and accurate velocity model. Here, we build an optimum 2-D seismic reflection data processing flow focused on pre-stack deghosting filters and velocity model building and apply Bayesian methods, including Gaussian process emulation and Bayesian History Matching, to estimate the uncertainties of the depths of key horizons near the Deep Sea Drilling Project (DSDP) borehole 258 (DSDP-258) located in the Mentelle Basin, southwest of Australia, and compare the results with the drilled core from that well. Following this strategy, the tie between the modelled and observed depths from DSDP-258 core was in accordance with the ±2σ posterior credibility intervals and predictions for depths to key horizons were made for the two new drill sites, adjacent to the existing borehole of the area. The probabilistic analysis allowed us to generate multiple realizations of pre-stack depth migrated images, which can be directly used to better constrain interpretation and identify potential risk at drill sites. The method will be applied to constrain the drilling targets for the upcoming International Ocean Discovery Program, leg 369.
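The Bayesian History Matching step described above rests on an implausibility measure: a candidate input is ruled out when the emulator's predicted depth sits too many standard deviations from the observed one. A minimal sketch of the generic history-matching formula (variance terms here are illustrative, not the authors' values):

```python
import numpy as np

def implausibility(z, emulator_mean, emulator_var, obs_var):
    """History-matching implausibility: |observation - emulator mean| in units of
    the combined (emulator + observation) standard deviation."""
    return np.abs(z - emulator_mean) / np.sqrt(emulator_var + obs_var)
```

A common convention is to discard inputs with implausibility above about 3 and refit the emulator on the space that survives.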
Microseismic Event Location Improvement Using Adaptive Filtering for Noise Attenuation
NASA Astrophysics Data System (ADS)
de Santana, F. L., Sr.; do Nascimento, A. F.; Leandro, W. P. D. N., Sr.; de Carvalho, B. M., Sr.
2017-12-01
In this work we show how adaptive filtering noise suppression improves the effectiveness of the Source Scanning Algorithm (SSA; Kao & Shan, 2004) for microseismic event location in the context of fracking operations. The SSA discretizes the time and region of interest into a 4D grid and, for each grid point and origin time, a brightness value (a seismogram stack) is calculated. For a given set of velocity model parameters, when the origin time and hypocenter of the seismic event are correct, a maximum value of coherence (or brightness) is achieved. The result is displayed on brightness maps for each origin time. Location methods such as SSA are most effective when the noise present in the seismograms is incoherent; however, the method may produce false positives when the noise in the data is coherent, as occurs in fracking operations. To remove from the seismograms the coherent noise from the pumps and engines used in the operation, we use an adaptive filter. As the noise reference, we use the seismogram recorded at the station closest to the machinery employed. Our methodology was tested on semi-synthetic data. The microseismic events were represented by Ricker pulses (with a central frequency of 30 Hz) on synthetic seismograms, and to simulate real seismograms in a surface microseismic monitoring situation, we added real noise recorded in a fracking operation to these synthetic seismograms. The results show that after filtering the seismograms, we were able to improve our detection threshold and achieve better resolution on the brightness maps of the located events.
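The adaptive cancellation step, subtracting machinery noise using the closest station as a reference, can be sketched with a standard LMS filter. This is a generic textbook LMS, not the authors' implementation; the tap count and step size are illustrative assumptions:

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """Remove the component of `primary` that is coherent with `reference` (LMS)."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(len(primary)):
        # Most recent n_taps reference samples, newest first, zero-padded at start.
        x = reference[max(0, n - n_taps + 1):n + 1][::-1]
        x = np.pad(x, (0, n_taps - len(x)))
        e = primary[n] - w @ x      # error = cleaned sample
        w += 2 * mu * e * x         # LMS weight update
        out[n] = e
    return out
```

After the weights converge, the output contains whatever in the seismogram is uncorrelated with the machinery reference, which is what the SSA then stacks.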
Ensemble candidate classification for the LOTAAS pulsar survey
NASA Astrophysics Data System (ADS)
Tan, C. M.; Lyon, R. J.; Stappers, B. W.; Cooper, S.; Hessels, J. W. T.; Kondratiev, V. I.; Michilli, D.; Sanidas, S.
2018-03-01
One of the biggest challenges arising from modern large-scale pulsar surveys is the number of candidates generated. Here, we implemented several improvements to the machine learning (ML) classifier previously used by the LOFAR Tied-Array All-Sky Survey (LOTAAS) to look for new pulsars by filtering the candidates obtained during periodicity searches. To assist the ML algorithm, we have introduced new features which capture the frequency and time evolution of the signal and improved the signal-to-noise calculation to account for broad profiles. We enhanced the ML classifier by including a third class characterizing RFI instances, allowing candidates arising from RFI to be isolated and reducing the false positive return rate. We also introduced a new training data set used by the ML algorithm that includes a large sample of pulsars misclassified by the previous classifier. Lastly, we developed an ensemble classifier composed of five different Decision Trees. Taken together, these updates improve the pulsar recall rate by 2.5 per cent, while also improving the ability to identify pulsars with wide pulse profiles, often misclassified by the previous classifier. The new ensemble classifier is also able to reduce the percentage of false positive candidates identified from each LOTAAS pointing from 2.5 per cent (˜500 candidates) to 1.1 per cent (˜220 candidates).
Classifier for gravitational-wave inspiral signals in nonideal single-detector data
NASA Astrophysics Data System (ADS)
Kapadia, S. J.; Dent, T.; Dal Canton, T.
2017-11-01
We describe a multivariate classifier for candidate events in a templated search for gravitational-wave (GW) inspiral signals from neutron-star-black-hole (NS-BH) binaries, in data from ground-based detectors where sensitivity is limited by non-Gaussian noise transients. The standard signal-to-noise ratio (SNR) and chi-squared test for inspiral searches use only properties of a single matched filter at the time of an event; instead, we propose a classifier using features derived from a bank of inspiral templates around the time of each event, and also from a search using approximate sine-Gaussian templates. The classifier thus extracts additional information from strain data to discriminate inspiral signals from noise transients. We evaluate a random forest classifier on a set of single-detector events obtained from realistic simulated advanced LIGO data, using simulated NS-BH signals added to the data. The new classifier detects a factor of 1.5-2 more signals at low false positive rates as compared to the standard "reweighted SNR" statistic, and does not require the chi-squared test to be computed. Conversely, if only the SNR and chi-squared values of single-detector events are available, random forest classification performs nearly identically to the reweighted SNR.
Online particle detection with Neural Networks based on topological calorimetry information
NASA Astrophysics Data System (ADS)
Ciodaro, T.; Deva, D.; de Seixas, J. M.; Damazio, D.
2012-06-01
This paper presents the latest results from the Ringer algorithm, which is based on artificial neural networks, for electron identification at the online filtering system of the ATLAS particle detector, in the context of the LHC experiment at CERN. The algorithm performs topological feature extraction using the ATLAS calorimetry information (energy measurements). The extracted information is presented to a neural network classifier. Studies showed that the Ringer algorithm achieves high detection efficiency while keeping the false alarm rate low. Optimizations, guided by detailed analysis, reduced the algorithm execution time by 59%. Also, the total memory necessary to store the Ringer algorithm information represents less than 6.2% of the total available to the filtering system.
Folmsbee, Martha; Lentine, Kerry Roche; Wright, Christine; Haake, Gerhard; Mcburnie, Leesa; Ashtekar, Dilip; Beck, Brian; Hutchison, Nick; Okhio-Seaman, Laura; Potts, Barbara; Pawar, Vinayak; Windsor, Helena
2014-01-01
Mycoplasma are bacteria that can penetrate 0.2 and 0.22 μm rated sterilizing-grade filters and even some 0.1 μm rated filters. Primary applications for mycoplasma filtration include large-scale mammalian and bacterial cell culture media and serum filtration. The Parenteral Drug Association recognized the absence of standard industry test parameters for testing and classifying 0.1 μm rated filters for mycoplasma clearance and formed a task force to formulate consensus test parameters. The task force established some test parameters by common agreement, based upon general industry practices, without the need for additional testing. However, the culture medium and incubation conditions for generating test mycoplasma cells varied from filter company to filter company, and this was recognized as a serious gap by the task force. Standardization of the culture medium and incubation conditions required collaborative testing in both commercial filter company laboratories and in an independent laboratory (Table I). The use of consensus test parameters will facilitate the ultimate cross-industry goal of standardization of 0.1 μm filter claims for mycoplasma clearance. However, it is still important to recognize that filter performance will depend on the actual conditions of use. Therefore end users should consider, using a risk-based approach, whether process-specific evaluation of filter performance may be warranted for their application. Mycoplasma are small bacteria that have the ability to penetrate sterilizing-grade filters. Filtration of large-scale mammalian and bacterial cell culture media is an example of an industry process where effective filtration of mycoplasma is required. The Parenteral Drug Association recognized the absence of industry standard test parameters for evaluating mycoplasma clearance filters by filter manufacturers and formed a task force to formulate such a consensus among manufacturers.
The use of standardized test parameters by filter manufacturers, including the preparation of the culture broth, will facilitate the end user's evaluation of the mycoplasma clearance claims provided by filter vendors. However, it is still important to recognize that filter performance will depend on the actual conditions of use; therefore, end users should consider, using a risk-based approach, whether process-specific evaluation of filter performance may be warranted for their application. © PDA, Inc. 2014.
The Universe Going Green: Extraordinarily Strong [OIII]5007 in Typical Dwarf Galaxies at z~3
NASA Astrophysics Data System (ADS)
Malkan, Matthew Arnold; Cohen, Daniel
2017-01-01
We constructed the average SEDs of U-dropout galaxies in the Subaru Deep Field. This sample contains more than 5000 Lyman-break galaxies at z~3. Their average near- and mid-IR colors were obtained by stacking JHK and IRAC imaging, in bins of stellar mass. At the lowest mass bins an increasingly strong excess flux is seen in the K filter. This excess can reach 1 magnitude in the broadband filter, and we attribute it to strong [OIII] λ5007 line emission. The equivalent width is extraordinarily high, reaching almost 1000 Å for the average z=3 galaxy at an i magnitude of 27. Such extreme [OIII] emission is very rare in the current epoch, seen only in a handful of metal-deficient dwarf starbursts sometimes referred to as "Green Peas". In contrast, extreme [OIII] emission, strong enough to dominate the entire broad-band SED, was evidently the norm for faint galaxies at high redshift. We present evidence that these small but numerous galaxies were primarily responsible for the reionization of the Universe.
Violet and blue light-induced green fluorescence emissions from dental caries.
Shakibaie, F; Walsh, L J
2016-12-01
The objective of this laboratory study was to compare violet and visible blue LED light-elicited green fluorescence emissions from enamel and dentine in healthy or carious states. Microscopic digital photography was undertaken using violet and blue LED illumination (405 nm and 455 nm wavelengths) of tooth surfaces, which were photographed through a custom-made stack of green compensating filters that removed the excitation light and passed green fluorescence emissions. Green channel pixel data were analysed. Dry sound enamel and sound root surfaces showed strong green fluorescence when excited by violet or blue lights. Regions of cavitated dental caries gave lower green fluorescence, and this was similar whether the dentine in the lesions was the same colour as normal dentine or was darkly coloured. The presence of saliva on the surface did not significantly change the green fluorescence, while the presence of blood diluted in saliva depressed green fluorescence. Using violet or blue illumination in combination with green compensating filters could potentially aid in the assessment of areas of mineral loss. © 2016 Australian Dental Association.
Planar waveguide integrated spatial filter array
NASA Astrophysics Data System (ADS)
Ai, Jun; Dimov, Fedor; Lyon, Richard; Rakuljic, Neven; Griffo, Chris; Xia, Xiaowei; Arik, Engin
2013-09-01
An innovative integrated spatial filter array (iSFA) was developed for the nulling interferometer for the detection of earth-like planets and life beyond our solar system. The coherent iSFA comprised a 2D planar lightwave circuit (PLC) array coupled with a pair of 2D lenslet arrays in a hexagonal grid to achieve the optimum fill factor and throughput. The silica-on-silicon waveguide mode field diameter and numerical aperture (NA) were designed to match the Airy disc and NA of the microlens for optimum coupling. The lenslet array was coated with a chromium pinhole array at the focal plane to pass the single-mode waveguide mode but attenuate higher-order modes. We assembled a 32 by 30 array by stacking 32 chips that were produced by photolithography from a 6-in. silicon wafer. Each chip has 30 planar waveguides. The PLC array is inherently polarization-maintaining (PM) and requires much less alignment in contrast to a fiber array, where each PM fiber must be placed individually and oriented correctly. The PLC array offers better scalability than the fiber bundle array for large arrays of over 1,000 waveguides.
Further Constraints and Uncertainties on the Deep Seismic Structure of the Moon
NASA Technical Reports Server (NTRS)
Lin, Pei-Ying Patty; Weber, Renee C.; Garnero, Ed J.; Schmerr, Nicholas C.
2011-01-01
The Apollo Passive Seismic Experiment (APSE) consisted of four 3-component seismometers deployed between 1969 and 1972, which continuously recorded lunar ground motion until late 1977. The APSE data provide a unique opportunity for investigating the interior of a planet other than Earth, generating the most direct constraints on the elastic structure, and hence the thermal and compositional evolution, of the Moon. Owing to the lack of far side moonquakes, past seismic models of the lunar interior were unable to constrain the lowermost 500 km of the interior. Recently, array methodologies aimed at detecting deep lunar seismic reflections found evidence for a lunar core, providing an elastic model of the deepest lunar interior consistent with geodetic parameters. Here we study the uncertainties in these models associated with the double array stacking of deep moonquakes for imaging deep reflectors in the Moon. We investigate the dependency of the array stacking results on a suite of parameters, including amplitude normalization assumptions, polarization filters, assumed velocity structure, and seismic phases that interfere with our desired target phases. These efforts are facilitated by the generation of synthetic seismograms at high frequencies (approx. 1 Hz), allowing us to directly study the trade-offs between different parameters. We also investigate expected amplitudes of deep reflections relative to direct P and S arrivals, including predictions from arbitrarily oriented focal mechanisms in our synthetics. Results from separate versus combined station stacking help to establish the robustness of stacks. Synthetics for every path geometry in the data were processed identically to the data. Different experiments were aimed at examining various processing assumptions, such as adding random noise to synthetics and mixing 3 components to some degree.
The principal stacked energy peaks put forth in recent work persist, but their amplitude (which maps into reflector impedance contrast) and timing (which maps into reflector depth) depend on factors that are not well constrained -- most notably, the velocity structure of the overlying lunar interior. Thus, while evidence for the lunar core remains strong, the depths of imaged reflectors have associated uncertainties that will require new seismic data and observations to constrain. These results strongly advocate further investigations on the Moon to better resolve the interior (e.g., Selene missions), for the Moon apparently has a rich history of construction and evolution that is inextricably tied to that of Earth.
Andrew Lister; Rachel Riemann; Tonya Lister; Will McWilliams
2005-01-01
Forest fragmentation is thought to impact many biotic and abiotic processes important to ecosystem function. We assessed forest fragmentation in 13 Northeastern States to gain a greater understanding of the trends in and status of this region's forests. We reclassified and then statistically filtered and updated classified Landsat imagery from the early 1990s, and...
Application of deep learning to the classification of images from colposcopy.
Sato, Masakazu; Horie, Koji; Hara, Aki; Miyamoto, Yuichiro; Kurihara, Kazuko; Tomio, Kensuke; Yokota, Harushige
2018-03-01
The objective of the present study was to investigate whether deep learning could be applied successfully to the classification of images from colposcopy. For this purpose, a total of 158 patients who underwent conization were enrolled, and medical records and data from the gynecological oncology database were retrospectively reviewed. Deep learning was performed with the Keras neural network and TensorFlow libraries. Using preoperative images from colposcopy as the input data and deep learning technology, the patients were classified into three groups [severe dysplasia, carcinoma in situ (CIS) and invasive cancer (IC)]. A total of 485 images were obtained for the analysis, of which 142 images were of severe dysplasia (2.9 images/patient), 257 were of CIS (3.3 images/patient), and 86 were of IC (4.1 images/patient). Of these, 233 images were captured with a green filter, and the remaining 252 were captured without a green filter. Following the application of L2 regularization, L1 regularization, dropout and data augmentation, the classification accuracy on the validation dataset was ~50%. Although the present study is preliminary, the results indicated that deep learning may be applied to classify colposcopy images.
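Of the regularizers listed above, dropout is the easiest to show in isolation. An inverted-dropout sketch in plain NumPy (a generic illustration of the technique, not the Keras layer the authors used):

```python
import numpy as np

def dropout(activations, rate, rng):
    """Inverted dropout: zero a fraction `rate` of units and rescale survivors
    by 1/(1-rate), so the expected activation is unchanged at training time."""
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)
```

At inference time the layer is simply skipped; the rescaling during training makes that consistent.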
SVM Pixel Classification on Colour Image Segmentation
NASA Astrophysics Data System (ADS)
Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.
2018-04-01
The aim of image segmentation is to simplify the representation of an image, with the help of cluster pixels, into something meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image, and more precisely to label every pixel so that each pixel has an independent identity. SVM pixel classification for colour image segmentation is the topic highlighted in this paper. It holds useful application in the fields of concept-based image retrieval, machine vision, medical imaging and object detection. The process is accomplished step by step. At first we need to recognize the type of colour and texture used as input to the SVM classifier. These inputs are extracted via a local spatial similarity measure model and a steerable filter, also known as a Gabor filter. The classifier is then trained by using FCM (Fuzzy C-Means). Both the pixel-level information of the image and the ability of the SVM classifier undergo a sophisticated algorithm to form the final image. The method yields a well-developed segmented image, with increased quality and faster processing compared with the segmentation methods proposed earlier. One of the latest application results is the Light L16 camera.
Ambert, Kyle H; Cohen, Aaron M
2009-01-01
OBJECTIVE Free-text clinical reports serve as an important part of patient care management and clinical documentation of patient disease and treatment status. Free-text notes are commonplace in medical practice, but remain an under-used source of information for clinical and epidemiological research, as well as personalized medicine. The authors explore the challenges associated with automatically extracting information from clinical reports using their submission to the Integrating Informatics with Biology and the Bedside (i2b2) 2008 Natural Language Processing Obesity Challenge Task. DESIGN A text mining system for classifying patient comorbidity status, based on the information contained in clinical reports. The approach of the authors incorporates a variety of automated techniques, including hot-spot filtering, negated concept identification, zero-vector filtering, weighting by inverse class-frequency, and error-correcting output codes with linear support vector machines. MEASUREMENTS Performance was evaluated in terms of the macroaveraged F1 measure. RESULTS The automated system performed well against manual expert rule-based systems, finishing fifth in the Challenge's intuitive task, and 13th in the textual task. CONCLUSIONS The system demonstrates that effective comorbidity status classification by an automated system is possible.
NASA Astrophysics Data System (ADS)
Elbakary, M. I.; Alam, M. S.; Aslan, M. S.
2008-03-01
In a FLIR image sequence, a target may disappear permanently or may reappear after some frames, and crucial information such as the direction, position and size of the target is lost. If the target reappears at a later frame, it may not be tracked again because the 3D orientation, size and location of the target might have changed. To obtain information about the target before it disappears and to detect the target after it reappears, the distance classifier correlation filter (DCCF) is trained manually by selecting a number of chips at random. This paper introduces a novel idea that eliminates the manual intervention in the training phase of DCCF. Instead of selecting the training chips manually and choosing their number at random, we adopt the K-means algorithm to cluster the training frames and, based on the number of clusters, select one training chip per cluster. To detect and track the target after it reappears in the field of view, TBF and DCCF are employed. Experiments conducted using real FLIR sequences show results similar to the traditional algorithm, while eliminating the manual intervention, which is the advantage of the proposed algorithm.
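The one-chip-per-cluster selection step can be sketched as follows. The frame features and group structure are synthetic assumptions; in the paper the clustered objects are FLIR training frames.

```python
# Illustrative sketch: cluster candidate training frames with K-means and
# keep one representative "chip" per cluster (the member nearest the cluster
# centre), instead of picking chips manually.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# 30 candidate frames, each summarised by a 5-dim feature vector,
# drawn from 3 distinct groups (e.g. different target poses).
frames = np.vstack([rng.normal(c, 0.1, size=(10, 5)) for c in (0.0, 1.0, 2.0)])

k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(frames)

# One training chip per cluster: the frame closest to each cluster centre.
chips = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    d = np.linalg.norm(frames[members] - km.cluster_centers_[c], axis=1)
    chips.append(int(members[d.argmin()]))
print("selected chip indices:", sorted(chips))
```

Choosing the member nearest the centre (rather than the centre itself) keeps each chip an actual frame from the sequence, which is what a correlation filter must be trained on.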
He, Jian; Bai, Shuang; Wang, Xiaoyi
2017-06-16
Falls are one of the main health risks among the elderly. A fall detection system based on inertial sensors can automatically detect a fall event and alert a caregiver for immediate assistance, so as to reduce injuries caused by falls. Nevertheless, most inertial sensor-based fall detection technologies have focused on the accuracy of detection while neglecting the quantization noise caused by inertial sensors. In this paper, an activity model based on tri-axial acceleration and gyroscope data is proposed, and the difference between activities of daily living (ADLs) and falls is analyzed. Meanwhile, a Kalman filter is proposed to preprocess the raw data so as to reduce noise. A sliding window and a Bayes network classifier are introduced to develop a wearable fall detection system, which is composed of a wearable motion sensor and a smartphone. The experiment shows that the proposed system distinguishes simulated falls from ADLs with a high accuracy of 95.67%, while sensitivity and specificity are 99.0% and 95.0%, respectively. Furthermore, the smartphone can issue an alarm to caregivers so as to provide timely and accurate help for the elderly, as soon as the system detects a fall.
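The Kalman-filter preprocessing step can be illustrated with a minimal scalar filter on one accelerometer axis. The process and measurement noise covariances (`q`, `r`) and the constant-signal model are sketch assumptions, not the paper's values.

```python
# A minimal scalar Kalman filter for smoothing one accelerometer axis,
# in the spirit of the preprocessing step described above.
import numpy as np

def kalman_smooth(z, q=1e-3, r=1e-1):
    """Constant-signal model: x_k = x_{k-1} + w,  z_k = x_k + v."""
    x, p = z[0], 1.0           # initial state estimate and its variance
    out = []
    for zk in z:
        p = p + q              # predict: variance grows by process noise
        k = p / (p + r)        # Kalman gain
        x = x + k * (zk - x)   # update with the new measurement
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(2)
true = np.ones(200)                       # 1 g at rest
noisy = true + rng.normal(0, 0.3, 200)    # simulated sensor noise
smooth = kalman_smooth(noisy)
print("filter reduces error:",
      np.abs(smooth - true).mean() < np.abs(noisy - true).mean())
```

A full system would run one such filter per axis (or a vector filter over all six inertial channels) before the sliding window and Bayes classifier.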
Application of deep learning to the classification of images from colposcopy
Sato, Masakazu; Horie, Koji; Hara, Aki; Miyamoto, Yuichiro; Kurihara, Kazuko; Tomio, Kensuke; Yokota, Harushige
2018-01-01
The objective of the present study was to investigate whether deep learning could be applied successfully to the classification of images from colposcopy. For this purpose, a total of 158 patients who underwent conization were enrolled, and medical records and data from the gynecological oncology database were retrospectively reviewed. Deep learning was performed with the Keras neural network and TensorFlow libraries. Using preoperative images from colposcopy as the input data and deep learning technology, the patients were classified into three groups [severe dysplasia, carcinoma in situ (CIS) and invasive cancer (IC)]. A total of 485 images were obtained for the analysis, of which 142 images were of severe dysplasia (2.9 images/patient), 257 were of CIS (3.3 images/patient), and 86 were of IC (4.1 images/patient). Of these, 233 images were captured with a green filter, and the remaining 252 were captured without a green filter. Following the application of L2 regularization, L1 regularization, dropout and data augmentation, the accuracy of the validation dataset was ~50%. Although the present study is preliminary, the results indicated that deep learning may be applied to classify colposcopy images. PMID:29456725
Morphological classification of bioaerosols from composting using scanning electron microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tamer Vestlund, A.; FIRA International Ltd., Maxwell Road, Stevenage, Herts SG1 2EW; Al-Ashaab, R.
2014-07-15
Highlights: • Bioaerosols were captured using the filter method. • Bioaerosols were analysed using scanning electron microscopy. • Bioaerosols were classified on the basis of morphology. • Single small cells were found more frequently than aggregates and larger cells. • Smaller cells may disperse further than heavier aggregate structures. - Abstract: This research classifies the physical morphology (form and structure) of bioaerosols emitted from open windrow composting. Aggregation state, shape and size of the particles captured are reported alongside the implications for bioaerosol dispersal after release. Bioaerosol sampling took place at a composting facility using personal air filter samplers. Samples were analysed using scanning electron microscopy. Particles were released mainly as small (<1 μm) single, spherical cells, followed by larger (>1 μm) single cells, with aggregates occurring in smaller proportions. Most aggregates consisted of clusters of 2–3 particles as opposed to chains, and were <10 μm in size. No cells were attached to soil debris or wood particles. These small single cells or small aggregates are more likely to disperse further downwind from the source, and cell viability may be reduced due to increased exposure to environmental factors.
Javed, Amna; Tiwana, Mohsin I.; Khan, Umar Shahbaz
2018-01-01
Brain Computer Interface (BCI) determines the intent of the user from a variety of electrophysiological signals. These signals, such as slow cortical potentials, are recorded from the scalp, while cortical neuronal activity is recorded by implanted electrodes. This paper is focused on the design of an embedded system that is used to control the finger movements of an upper limb prosthesis using Electroencephalogram (EEG) signals. This is a follow-up of our previous research, which explored the best method to classify three movements of the fingers (thumb movement, index finger movement, and fist movement). A two-stage logistic regression classifier exhibited the highest classification accuracy when Power Spectral Density (PSD) was used as a feature of the filtered signal. The EEG signal data set was recorded using a 14-channel electrode headset (a noninvasive BCI system) from right-handed, neurologically intact volunteers. Mu (commonly known as alpha waves) and beta rhythms (8–30 Hz), containing most of the movement-related information, were retained through filtering using an "Arduino Uno" microcontroller, followed by two-stage logistic regression to obtain a mean classification accuracy of 70%. PMID:29888252
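The PSD-feature-plus-logistic-regression step can be sketched on synthetic epochs. The sampling rate, band edges, and signal model are assumptions for the sketch, not the paper's recording parameters.

```python
# Illustrative sketch: estimate band power (a simple PSD feature) in the
# 8-30 Hz mu/beta range for synthetic "EEG" epochs, then classify with
# logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
fs, n = 128, 256                       # 2-second epochs at 128 Hz
t = np.arange(n) / fs

def band_power(x, lo=8, hi=30):
    """Periodogram power summed over the 8-30 Hz (mu + beta) band."""
    freqs = np.fft.rfftfreq(n, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / n
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

# Class 0: strong 10 Hz mu rhythm; class 1: suppressed rhythm (as during
# imagined movement, when the mu rhythm desynchronises).
epochs, labels = [], []
for k in range(100):
    amp = 2.0 if k < 50 else 0.5
    x = amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, n)
    epochs.append([band_power(x)])
    labels.append(0 if k < 50 else 1)

clf = LogisticRegression().fit(epochs, labels)
print("training accuracy:", clf.score(epochs, labels))
```

A real BCI would compute such band-power features per channel and per sub-band, then stack them into the feature vector fed to the two-stage classifier.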
Estimating air chemical emissions from research activities using stack measurement data.
Ballinger, Marcel Y; Duchsherer, Cheryl J; Woodruff, Rodger K; Larson, Timothy V
2013-03-01
Current methods of estimating air emissions from research and development (R&D) activities use a wide range of release fractions or emission factors with bases ranging from empirical to semi-empirical. Although considered conservative, the uncertainties and confidence levels of the existing methods have not been reported. Chemical emissions were estimated from sampling data taken from four research facilities over 10 years. The approach was to use a Monte Carlo technique to create distributions of annual emission estimates for target compounds detected in source test samples. Distributions were created for each year and building sampled for compounds with sufficient detection frequency to qualify for the analysis. The results using the Monte Carlo technique without applying a filter to remove negative emission values showed almost all distributions spanning zero, and 40% of the distributions having a negative mean. This indicates that emissions are so low as to be indistinguishable from building background. Application of a filter to allow only positive values in the distribution provided a more realistic value for emissions and increased the distribution mean by an average of 16%. Release fractions were calculated by dividing the emission estimates by a building chemical inventory quantity. Two variations were used for this quantity: chemical usage, and chemical usage plus one-half standing inventory. Filters were applied so that only release fraction values from zero to one were included in the resulting distributions. Release fractions had a wide range among chemicals and among data sets for different buildings and/or years for a given chemical. Regressions of release fractions to molecular weight and vapor pressure showed weak correlations. Similarly, regressions of mean emissions to chemical usage, chemical inventory, molecular weight, and vapor pressure also gave weak correlations. 
These results highlight the difficulties in estimating emissions from R&D facilities using chemical inventory data. Air emissions from research operations are difficult to estimate because of the changing nature of research processes and the small quantity and wide variety of chemicals used. Analysis of stack measurements taken over multiple facilities and a 10-year period using a Monte Carlo technique provided a method to quantify the low emissions and to estimate release fractions based on chemical inventories. The variation in release fractions did not correlate well with factors investigated, confirming the complexities in estimating R&D emissions.
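The effect of the positivity filter on the Monte Carlo distributions can be sketched numerically. All values here are synthetic; the study's distributions came from stack measurement data.

```python
# Sketch of the Monte Carlo idea: build a distribution of emission estimates
# from noisy net concentrations (sample minus background), then compare the
# unfiltered mean with the mean after keeping only positive draws.
import numpy as np

rng = np.random.default_rng(3)
# True emissions near zero, so many draws go negative purely from noise -
# the situation the abstract describes as "indistinguishable from background".
draws = rng.normal(loc=0.05, scale=0.5, size=10_000)

unfiltered_mean = draws.mean()
positive_mean = draws[draws > 0].mean()   # positivity filter, as in the study

print(f"unfiltered mean: {unfiltered_mean:.3f}")
print(f"positive-only mean: {positive_mean:.3f}")
```

Truncating at zero necessarily raises the mean, which is why the filtered estimate is described as more conservative but more physically realistic (emissions cannot be negative).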
NASA Astrophysics Data System (ADS)
Matsumoto, Atsushi; Matsushita, Asuka; Takei, Yuki; Akahane, Kouichi; Matsushima, Yuichi; Ishikawa, Hiroshi; Utaka, Katsuyuki
2014-09-01
In this study, we investigated quantum dot intermixing (QDI) for InAs/InGaAlAs highly stacked QDs on an InP(311)B substrate with low-temperature annealing at 650 °C in order to realize integrated photonic devices with QDs and passive waveguides. In particular, we adopted the method of introducing point defects by ICP-RIE to realize a blue shift of the PL peak wavelength by about 150 nm. Moreover, we successfully fabricated double micro-ring resonators by QDI. The output power contrasts of the devices were found to be 9.0 and 8.6 dB for TE and TM modes, respectively.
Applications of multi-spectral imaging: failsafe industrial flame detector
NASA Astrophysics Data System (ADS)
Wing Au, Kwong; Larsen, Christopher; Cole, Barry; Venkatesha, Sharath
2016-05-01
Industrial and petrochemical facilities present unique challenges for fire protection and safety. Typical scenarios include detection of an unintended fire in a scene, wherein the scene also includes a flare stack in the background. Maintaining a high level of process and plant safety is a critical concern. In this paper, we present a failsafe industrial flame detector which has significant performance benefits compared to current flame detectors. The design involves the use of a microbolometer operating in the MWIR and LWIR spectra and a dual-band filter. This novel flame detector can help industrial facilities to meet their plant safety and critical infrastructure protection requirements while ensuring operational and business readiness at project start-up.
Kim, Min-Gab; Kim, Jin-Yong
2018-05-01
In this paper, we introduce a method to overcome the limitation of thickness measurement of a micro-patterned thin film. A spectroscopic imaging reflectometer system that consists of an acousto-optic tunable filter, a charge-coupled-device camera, and a high-magnification objective lens was proposed, and a stack of multispectral images was generated. To secure improved accuracy and lateral resolution in the reconstruction of a two-dimensional thin film thickness, prior to the analysis of spectral reflectance profiles from each pixel of the multispectral images, image restoration based on an iterative deconvolution algorithm was applied to compensate for image degradation caused by blurring.
Trace elements by instrumental neutron activation analysis for pollution monitoring
NASA Technical Reports Server (NTRS)
Sheibley, D. W.
1975-01-01
Methods and technology were developed to analyze 1000 samples/yr of coal and other pollution-related samples. The complete trace element analysis of 20-24 samples/wk averaged 3-3.5 man-hours/sample. The computerized data reduction scheme could identify and report data on as many as 56 elements. In addition to coal, samples of fly ash, bottom ash, crude oil, fuel oil, residual oil, gasoline, jet fuel, kerosene, filtered air particulates, ore, stack scrubber water, clam tissue, crab shells, river sediment and water, and corn were analyzed. Precision of the method was plus or minus 25% based on all elements reported in coal and other sample matrices. Overall accuracy was estimated at 50%.
A gradient-boosting approach for filtering de novo mutations in parent-offspring trios.
Liu, Yongzhuang; Li, Bingshan; Tan, Renjie; Zhu, Xiaolin; Wang, Yadong
2014-07-01
Whole-genome and -exome sequencing on parent-offspring trios is a powerful approach to identifying disease-associated genes by detecting de novo mutations in patients. Accurate detection of de novo mutations from sequencing data is a critical step in trio-based genetic studies. Existing bioinformatic approaches usually yield high error rates due to sequencing artifacts and alignment issues, which may either miss true de novo mutations or call too many false ones, making downstream validation and analysis difficult. In particular, current approaches have much worse specificity than sensitivity, and developing effective filters to discriminate genuine from spurious de novo mutations remains an unsolved challenge. In this article, we curated 59 sequence features in whole genome and exome alignment context which are considered to be relevant to discriminating true de novo mutations from artifacts, and then employed a machine-learning approach to classify candidates as true or false de novo mutations. Specifically, we built a classifier, named De Novo Mutation Filter (DNMFilter), using gradient boosting as the classification algorithm. We built the training set using experimentally validated true and false de novo mutations as well as collected false de novo mutations from an in-house large-scale exome-sequencing project. We evaluated DNMFilter's theoretical performance and investigated relative importance of different sequence features on the classification accuracy. Finally, we applied DNMFilter on our in-house whole exome trios and one CEU trio from the 1000 Genomes Project and found that DNMFilter could be coupled with commonly used de novo mutation detection approaches as an effective filtering approach to significantly reduce false discovery rate without sacrificing sensitivity. The software DNMFilter implemented using a combination of Java and R is freely available from the website at http://humangenome.duke.edu/software. © The Author 2014. 
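The core of the DNMFilter approach can be sketched with scikit-learn's gradient boosting. The features and labels below are synthetic stand-ins for the paper's 59 curated alignment-context features and validated mutation calls.

```python
# Hedged sketch: train a gradient-boosting classifier to separate true from
# spurious de novo mutation calls, given per-candidate sequence features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n, d = 600, 10
X = rng.normal(size=(n, d))
# Toy labelling rule: candidates with high values of two hypothetical
# features (think allele balance + mapping quality) are genuine.
y = (X[:, 0] + X[:, 1] > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("held-out accuracy:", round(clf.score(Xte, yte), 3))

# As in the paper, per-feature importances indicate which sequence features
# drive the classification.
print("top feature indices:", np.argsort(clf.feature_importances_)[::-1][:2])
```

Used as a post-filter, such a classifier scores each candidate call from an upstream caller, and only candidates above a probability threshold are retained.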
Yaacoub, Charles; Mhanna, Georges; Rihana, Sandy
2017-01-01
Electroencephalography is a non-invasive measure of the brain electrical activity generated by millions of neurons. Feature extraction in electroencephalography analysis is a core issue that may lead to accurate brain mental state classification. This paper presents a new feature selection method that improves left/right hand movement identification of a motor imagery brain-computer interface, based on genetic algorithms and artificial neural networks used as classifiers. Raw electroencephalography signals are first preprocessed using appropriate filtering. Feature extraction is carried out afterwards, based on spectral and temporal signal components, and thus a feature vector is constructed. As various features might be inaccurate and mislead the classifier, thus degrading the overall system performance, the proposed approach identifies a subset of features from a large feature space, such that the classifier error rate is reduced. Experimental results show that the proposed method is able to reduce the number of features to as low as 0.5% (i.e., the number of ignored features can reach 99.5%) while improving the accuracy, sensitivity, specificity, and precision of the classifier. PMID:28124985
Crop classification using temporal stacks of multispectral satellite imagery
NASA Astrophysics Data System (ADS)
Moody, Daniela I.; Brumby, Steven P.; Chartrand, Rick; Keisler, Ryan; Longbotham, Nathan; Mertes, Carly; Skillman, Samuel W.; Warren, Michael S.
2017-05-01
The increase in performance, availability, and coverage of multispectral satellite sensor constellations has led to a drastic increase in data volume and data rate. Multi-decadal remote sensing datasets at the petabyte scale are now available in commercial clouds, with new satellite constellations generating petabytes/year of daily high-resolution global coverage imagery. The data analysis capability, however, has lagged behind storage and compute developments, and has traditionally focused on individual scene processing. We present results from an ongoing effort to develop satellite imagery analysis tools that aggregate temporal, spatial, and spectral information and can scale with the high rate and dimensionality of imagery being collected. We investigate and compare the performance of pixel-level crop identification using tree-based classifiers and its dependence on both temporal and spectral features. Classification performance is assessed using Cropland Data Layer (CDL) crop masks generated by the US Department of Agriculture (USDA) as ground truth. The CDL maps provide pixel-level labels at 30 m spatial resolution for around 200 categories of land cover, but are only available after the growing season. The analysis focuses on McCook county in South Dakota and shows crop classification using a temporal stack of Landsat 8 (L8) imagery over the growing season, from April through October. Specifically, we consider the temporal L8 stack depth, as well as different normalized band difference indices, and evaluate their contribution to crop identification. We also show an extension of our algorithm to map corn and soy crops in the state of Mato Grosso, Brazil.
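The temporal-stack idea can be sketched as follows: per pixel, a band-difference index (e.g. NDVI = (NIR − Red)/(NIR + Red)) is computed at each date, the values are concatenated into one feature vector, and a tree-based classifier is trained on the stacks. The crop profiles and noise levels below are synthetic assumptions.

```python
# Sketch of pixel-level crop classification from a temporal stack of a
# normalized band difference index, using a tree-based classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
n_pixels, n_dates = 400, 7

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

# Two toy crops with different green-up timing across the 7 dates.
t = np.linspace(0, 1, n_dates)
profiles = {0: np.sin(np.pi * t), 1: np.sin(np.pi * t) ** 3}
X, y = [], []
for label, prof in profiles.items():
    nir = prof + 0.05 * rng.random((n_pixels // 2, n_dates)) + 0.2
    red = 0.2 * np.ones_like(nir)
    X.append(ndvi(nir, red))       # temporal NDVI stack, one row per pixel
    y.append(np.full(n_pixels // 2, label))
X, y = np.vstack(X), np.concatenate(y)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The classifier exploits the shape of each pixel's phenology curve, which is exactly what a single-date image cannot provide.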
Schell, Greggory J; Lavieri, Mariel S; Stein, Joshua D; Musch, David C
2013-12-21
Open-angle glaucoma (OAG) is a prevalent, degenerative ocular disease which can lead to blindness without proper clinical management. The tests used to assess disease progression are susceptible to process and measurement noise. The aim of this study was to develop a methodology which accounts for the inherent noise in the data and improves the identification of significant disease progression. Longitudinal observations from the Collaborative Initial Glaucoma Treatment Study (CIGTS) were used to parameterize and validate a Kalman filter model and logistic regression function. The Kalman filter estimates the true value of biomarkers associated with OAG and forecasts future values of these variables. We develop two logistic regression models via generalized estimating equations (GEE) for calculating the probability of experiencing significant OAG progression: one model based on the raw measurements from CIGTS and another model based on the Kalman filter estimates of the CIGTS data. Receiver operating characteristic (ROC) curves and associated area under the ROC curve (AUC) estimates are calculated using cross-validation. The logistic regression model developed using Kalman filter estimates as data input achieves higher sensitivity and specificity than the model developed using raw measurements. The mean AUC for the Kalman filter-based model is 0.961, while the mean AUC for the raw measurements model is 0.889. Hence, using the probability function generated via Kalman filter estimates and GEE for logistic regression, we are able to more accurately classify patients and instances as experiencing significant OAG progression. A Kalman filter approach for estimating the true value of OAG biomarkers resulted in data input which improved the accuracy of a logistic regression classification model compared to a model using raw measurements as input.
This methodology accounts for process and measurement noise to enable improved discrimination between progression and nonprogression in chronic diseases.
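The study's central comparison can be sketched numerically: the same logistic-regression classifier is fit once on noisy raw readings and once on denoised estimates, and the ROC AUCs are compared. The "denoising" below is a simple stand-in for the paper's Kalman filter, and all data are synthetic.

```python
# Sketch: compare ROC AUC of a progression classifier trained on raw noisy
# biomarker readings versus denoised estimates of the same latent signal.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n = 1000
true_biomarker = rng.normal(size=n)                 # latent disease signal
y = (true_biomarker > 0.5).astype(int)              # "progression" label
raw = true_biomarker + rng.normal(0, 1.0, n)        # noisy measurement
denoised = true_biomarker + rng.normal(0, 0.2, n)   # filtered estimate

def auc(x, labels):
    m = LogisticRegression().fit(x.reshape(-1, 1), labels)
    return roc_auc_score(labels, m.predict_proba(x.reshape(-1, 1))[:, 1])

print("raw AUC:", round(auc(raw, y), 3))
print("denoised AUC:", round(auc(denoised, y), 3))
```

Reducing measurement noise moves the input closer to the latent signal that defines the label, so the classifier built on denoised input achieves the higher AUC, mirroring the 0.961 vs 0.889 result reported above.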
Kwon, Se Hwan; Park, So Hyun; Oh, Joo Hyeong; Song, Myung Gyu; Seo, Tae-Seok
2016-05-01
To evaluate the effect of an inferior vena cava (IVC) filter during aspiration thrombectomy for acute deep vein thrombosis (DVT) in the lower extremity. From July 2004 to December 2013, a retrospective analysis of 106 patients with acute DVT was performed. All patients received an IVC filter and were treated initially with aspiration thrombectomy. Among the 106 patients, DVT extension into the IVC was noted in 27 but was not evident in 79. We evaluated the presence of trapped thrombi in the filters after the procedure. The sizes of the trapped thrombi were classified into 2 grades based on the ratio of the maximum transverse length of the trapped thrombus to the diameter of the IVC (Grades I [≤ 50%] and II [> 50%]). A trapped thrombus in the filter was detected in 46 (43%) of 106 patients on final venograms. The sizes of the trapped thrombi were grade I in 12 (26.1%) patients and grade II in 34 (73.9%). Among the 27 patients with DVT extension into the IVC, 20 (74.1%) showed a trapped thrombus in the filter, 75% (15 of 20) of which were grade II. Among the 79 patients without DVT extension into the IVC, 26 (32.9%) showed a trapped thrombus in the IVC filter, 73% (19 of 26) of which were grade II. Thrombus migration occurred frequently during aspiration thrombectomy of patients with acute DVT in the lower extremity. However, further studies are needed to establish a standard protocol for the prophylactic placement of an IVC filter during aspiration thrombectomy. © The Author(s) 2016.
Zhang, Shaodian; Qiu, Lin; Chen, Frank; Zhang, Weinan; Yu, Yong; Elhadad, Noémie
2017-01-01
Patients discuss complementary and alternative medicine (CAM) in online health communities. Sometimes, patients’ conflicting opinions toward CAM-related issues trigger debates in the community. The objectives of this paper are to identify such debates, identify controversial CAM therapies in a popular online breast cancer community, as well as patients’ stances towards them. To scale our analysis, we trained a set of classifiers. We first constructed a supervised classifier based on a long short-term memory neural network (LSTM) stacked over a convolutional neural network (CNN) to detect automatically CAM-related debates from a popular breast cancer forum. Members’ stances in these debates were also identified by a CNN-based classifier. Finally, posts automatically flagged as debates by the classifier were analyzed to explore which specific CAM therapies trigger debates more often than others. Our methods are able to detect CAM debates with F score of 77%, and identify stances with F score of 70%. The debate classifier identified about 1/6 of all CAM-related posts as debate. About 60% of CAM-related debate posts represent the supportive stance toward CAM usage. Qualitative analysis shows that some specific therapies, such as Gerson therapy and usage of laetrile, trigger debates frequently among members of the breast cancer community. This study demonstrates that neural networks can effectively locate debates on usage and effectiveness of controversial CAM therapies, and can help make sense of patients’ opinions on such issues under dispute. As to CAM for breast cancer, perceptions of their effectiveness vary among patients. Many of the specific therapies trigger debates frequently and are worth more exploration in future work. PMID:28967000
Feature selection for the classification of traced neurons.
López-Cabrera, José D; Lorenzo-Ginori, Juan V
2018-06-01
The great availability of computational tools to calculate the properties of traced neurons leads to the existence of many descriptors which allow the automated classification of neurons from these reconstructions. This situation determines the necessity to eliminate irrelevant features as well as to select the most appropriate among them, in order to improve the quality of the classification obtained. The dataset used contains a total of 318 traced neurons, classified by human experts into 192 GABAergic interneurons and 126 pyramidal cells. The features were extracted by means of the L-measure software, which is one of the most used computational tools in neuroinformatics to quantify traced neurons. We review some current feature selection techniques such as filter, wrapper, embedded and ensemble methods. The stability of the feature selection methods was measured. For the ensemble methods, several aggregation methods based on different metrics were applied to combine the subsets obtained during the feature selection process. The subsets obtained by applying feature selection methods were evaluated using supervised classifiers, among which Random Forest, C4.5, SVM, Naïve Bayes, kNN, Decision Table and the Logistic classifier were used as classification algorithms. Feature selection methods of the filter, embedded, wrapper and ensemble types were compared, and the subsets returned were tested in classification tasks for different classification algorithms. L-measure features EucDistanceSD, PathDistanceSD, Branch_pathlengthAve, Branch_pathlengthSD and EucDistanceAve were present in more than 60% of the selected subsets, which provides evidence of their importance in the classification of these neurons. Copyright © 2018 Elsevier B.V. All rights reserved.
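A filter-style selection step followed by a supervised classifier, as in the pipeline above, can be sketched in a few lines. The dataset is synthetic; the paper used L-measure morphology features of 318 traced neurons.

```python
# Minimal sketch: univariate (filter-type) feature selection with an F-test,
# then cross-validated evaluation of a supervised classifier on the reduced
# feature set versus the full set.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# 318 "neurons" with 40 candidate descriptors, only a handful informative.
X, y = make_classification(n_samples=318, n_features=40, n_informative=5,
                           n_redundant=5, random_state=0)

selector = SelectKBest(f_classif, k=10).fit(X, y)
X_sel = selector.transform(X)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
score_all = cross_val_score(clf, X, y, cv=5).mean()
score_sel = cross_val_score(clf, X_sel, y, cv=5).mean()
print(f"all 40 features: {score_all:.3f}  selected 10: {score_sel:.3f}")
```

Wrapper and embedded methods differ only in how the subset is chosen (by classifier performance, or inside the classifier's own training); the evaluation harness stays the same.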
Researches of fruit quality prediction model based on near infrared spectrum
NASA Astrophysics Data System (ADS)
Shen, Yulin; Li, Lian
2018-04-01
With the improvement in standards for food quality and safety, people pay more attention to the internal quality of fruits; therefore, the measurement of fruit internal quality is increasingly imperative. In general, nondestructive soluble solid content (SSC) and total acid content (TAC) analysis of fruits is vital and effective for quality measurement in global fresh produce markets, so in this paper we aim to establish a novel fruit internal quality prediction model based on SSC and TAC for near-infrared spectra. First, prediction models based on PCA + BP neural network, PCA + GRNN network, PCA + BP AdaBoost strong classifier, PCA + ELM and PCA + LS-SVM classifiers are designed and implemented. Then, in the NSCT domain, the median filter and the Savitzky-Golay filter are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to automatically select the training and test samples. Third, we obtain the optimal models by comparing 15 prediction models under a multi-classifier competition mechanism; specifically, nonparametric estimation is introduced to measure the effectiveness of each model, with the reliability and variance of the nonparametric evaluation used to assess the prediction results and the estimated value and confidence interval serving as references. The experimental results demonstrate that this approach can better achieve the optimal evaluation of the internal quality of fruit. Finally, we employ cat swarm optimization to optimize the two best models obtained from the nonparametric estimation; empirical testing indicates that the proposed method can provide more accurate and effective results than other forecasting methods.
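The preprocessing chain described above can be sketched on synthetic spectra: Savitzky-Golay smoothing followed by PCA dimensionality reduction. The spectrum shape, smoothing window, and polynomial order are illustrative assumptions.

```python
# Sketch: Savitzky-Golay smoothing of synthetic NIR spectra, then PCA to
# produce the low-dimensional scores a downstream predictor would consume.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
wavelengths = np.linspace(900, 1700, 200)
# 50 noisy synthetic spectra sharing one broad absorption band.
band = np.exp(-((wavelengths - 1300) / 120.0) ** 2)
spectra = band + 0.05 * rng.normal(size=(50, 200))

# Polynomial smoothing preserves band shape better than a plain moving average.
smoothed = savgol_filter(spectra, window_length=11, polyorder=3, axis=1)
scores = PCA(n_components=5).fit_transform(smoothed)
print("PCA score matrix shape:", scores.shape)
```

Each of the paper's PCA + classifier variants would then fit its regressor or classifier on these score vectors rather than on the raw 200-point spectra.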
Zhao, Guangjun; Wang, Xuchu; Niu, Yanmin; Tan, Liwen; Zhang, Shao-Xiang
2016-01-01
Cryosection brain images in the Chinese Visible Human (CVH) dataset contain rich anatomical structure information of tissues because of their high resolution (e.g., 0.167 mm per pixel). Fast and accurate segmentation of these images into white matter, gray matter, and cerebrospinal fluid plays a critical role in analyzing and measuring the anatomical structures of the human brain. However, most existing automated segmentation methods are designed for computed tomography or magnetic resonance imaging data, and they may not be applicable to cryosection images due to the imaging difference. In this paper, we propose a supervised learning-based CVH brain tissue segmentation method that uses a stacked autoencoder (SAE) to automatically learn deep feature representations. Specifically, our model includes two successive parts where two three-layer SAEs take image patches as input to learn the complex anatomical feature representation, and these features are then sent to a Softmax classifier for inferring the labels. Experimental results validated the effectiveness of our method and showed that it outperformed four other classical brain tissue detection strategies. Furthermore, we reconstructed three-dimensional surfaces of these tissues, which shows their potential in exploring the high-resolution anatomical structures of the human brain. PMID:27057543
Wei, Q; Hu, Y
2009-01-01
The major hurdle in segmenting lung lobes in computed tomographic (CT) images is identifying the fissure regions that encase the lobar fissures. Accurate identification of these regions is difficult due to the variable shape and appearance of the fissures, along with the low contrast and high noise associated with CT images. This paper studies the effectiveness of two texture analysis methods - the gray level co-occurrence matrix (GLCM) and the gray level run length matrix (GLRLM) - in identifying fissure regions from isotropic CT image stacks. To classify GLCM and GLRLM texture features, we applied a feed-forward back-propagation neural network and achieved the best classification accuracy using 16 quantized levels for computing the GLCM and GLRLM texture features and 64 neurons in the input/hidden layers of the network. Tested on isotropic CT image stacks of 24 patients with pathologic lungs, we obtained accuracies of 86% and 87% for identifying fissure regions using the GLCM and GLRLM methods, respectively. These accuracies compare favorably with surgeons' and radiologists' accuracy of 80% for identifying fissure regions in clinical settings, showing promising potential for segmenting lung lobes using the GLCM and GLRLM methods.
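A GLCM of the kind used above can be computed in a few lines. This is a minimal sketch with a random image quantized to 16 levels (matching the level count reported, but otherwise synthetic), deriving two standard Haralick-style features.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray level co-occurrence matrix for one pixel offset (dx, dy).
    img must already be quantized to integers in [0, levels)."""
    M = np.zeros((levels, levels))
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]
    b = img[dy:, dx:]
    np.add.at(M, (a.ravel(), b.ravel()), 1)   # count co-occurring pairs
    return M / M.sum()                         # normalize to probabilities

# Toy image quantized to 16 gray levels, as in the study above.
rng = np.random.default_rng(0)
img = (rng.random((64, 64)) * 16).astype(int)
P = glcm(img, 16)

i, j = np.indices(P.shape)
contrast = ((i - j) ** 2 * P).sum()   # heavy when neighbors differ
energy = (P ** 2).sum()               # heavy when few pairs dominate
print(round(float(energy), 4))
```

In practice one averages GLCMs over several offsets and directions and feeds the resulting feature vector to the classifier, as the neural network above does.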
NASA Astrophysics Data System (ADS)
Abbondanza, Claudio; Altamimi, Zuheir; Chin, Toshio; Collilieux, Xavier; Dach, Rolf; Gross, Richard; Heflin, Michael; König, Rolf; Lemoine, Frank; Macmillan, Dan; Parker, Jay; van Dam, Tonie; Wu, Xiaoping
2014-05-01
The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, we assess the impact of non-tidal atmospheric loading (NTAL) corrections on the TRF computation. Focusing on the a posteriori approach, (i) the NTAL model derived from the National Centre for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations; (ii) adopting a Kalman-filter-based approach, two distinct linear TRFs are estimated combining the four SG solutions with (corrected TRF solution) and without the NTAL displacements (standard TRF solution). Linear fits (offset and atmospheric velocity) of the NTAL displacements removed during step (i) are estimated, accounting for the station position discontinuities introduced in the SG solutions and adopting different weighting strategies. The NTAL-derived (atmospheric) velocity fields are compared to those obtained from the TRF reductions during step (ii). The consistency between the atmospheric and the TRF-derived velocity fields is examined. We show how the presence of station position discontinuities in SG solutions degrades the agreement between the velocity fields, and compare the effects of the different weighting structures adopted while estimating the linear fits to the NTAL displacements.
Finally, we evaluate the effect of restoring the atmospheric velocities determined through the linear fits of the NTAL displacements to the single-technique linear reference frames obtained by stacking the standard SG SINEX files. Differences between the velocity fields obtained restoring the NTAL displacements and the standard stacked linear reference frames are discussed.
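The linear fits (offset plus velocity) with station-position discontinuities described in step (i) reduce to ordinary least squares with a step regressor per discontinuity epoch. A minimal sketch, with synthetic epochs, rates and noise rather than NCEP-derived values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic loading-displacement series: offset + velocity + one known
# station position discontinuity (epoch and magnitudes are made up).
t = np.linspace(0, 10, 200)               # years
disc_epoch = 4.0
series = (2.0 + 1.5 * t + 3.0 * (t >= disc_epoch)
          + 0.2 * rng.normal(size=t.size))

# Design matrix: intercept, velocity, and a step at the discontinuity.
A = np.column_stack([np.ones_like(t), t, (t >= disc_epoch).astype(float)])
coef, *_ = np.linalg.lstsq(A, series, rcond=None)
offset, vel, step = coef
print(round(float(vel), 2))
```

Omitting the step column biases the estimated velocity, which is one way a discontinuity degrades the agreement between velocity fields; different weighting strategies correspond to solving the weighted normal equations instead.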
Shi, De-Zhi; Wu, Wei-Xiang; Lu, Sheng-Yong; Chen, Tong; Huang, Hui-Liang; Chen, Ying-Xu; Yan, Jian-Hua
2008-05-01
Municipal solid waste (MSW) source-classified collection represents a change in MSW management in China and other developing countries. Comparative experiments were performed to evaluate the effect of a newly established MSW source-classified collection system on the emission of PCDDs/Fs (polychlorinated dibenzo-p-dioxins and dibenzofurans) and heavy metals (HMs) from a full-scale incinerator in China. As a result of presorting and dewatering, the chlorine level, heavy metal and water content were lower, but heat value was higher in the source-classified MSW (classified MSW) as compared with the conventionally mixed collected MSW (mixed MSW). The generation of PCDDs/Fs in flue gas from the classified MSW incineration was 9.28 ng I-TEQ/Nm³, only 69.4% of that from the mixed MSW incineration, and the final emission of PCDDs/Fs was only 0.12 ng I-TEQ/Nm³, although activated carbon injection was reduced by 20%. The level of PCDDs/Fs in fly ash from the bag filter was 0.27 ng I-TEQ/g. These results indicated that the source-classified collection with pretreatment could improve the characteristics of MSW for incineration, and significantly decrease formation of PCDDs/Fs in MSW incineration. Furthermore, distributions of HMs such as Cd, Pb, Cu, Zn, Cr, As, Ni, Hg in bottom ash and fly ash were investigated to assess the need for treatment of residual ash.
Transient Deformation of Stable Continental Lithosphere by the 2011 M9.0 Tohoku-Oki Megathrust
NASA Astrophysics Data System (ADS)
Hong, T. K.; Chi, D.
2015-12-01
The Korean Peninsula was displaced laterally by 1-6 cm after the 11 March 2011 M9.0 Tohoku-Oki megathrust at a distance of ~1300 km. These lateral displacements produced apparent tensional stresses of 1-7 kPa in the crust of the peninsula, perturbing the medium. Temporal variation of seismic velocities is investigated to assess the lithospheric responses to the megathrust. Green's functions over inter-station paths are retrieved from ambient noise recorded at broadband seismic stations densely deployed over the peninsula. The ambient noise is bandpass-filtered between 0.03 and 0.08 Hz, and spectral whitening and one-bit normalization are applied. The fundamental-mode Rayleigh waves are retrieved by stacking the cross-correlation functions of 10-day-long ambient noise records from 2010 to 2015. The traveltime changes of Rayleigh waves with respect to the reference traveltimes are calculated by comparing the stacked cross-correlation functions. The reference Rayleigh waves are calculated by stacking the cross-correlation functions for 4 to 6 months before the megathrust. The traveltime changes are normalized by the inter-station distances. Abrupt traveltime delays are observed right after the megathrust, particularly strong along paths subparallel to the great-circle direction to the megathrust. The peak traveltime delay reaches 0.028 s/km, which corresponds to a shear velocity decrease of 8.9%. The traveltime delays are weak along paths deviating from the great-circle directions. The observation suggests that the transient tensional stress field caused longitudinal lithospheric perturbation with preferential mineral orientation and fluid migration, decreasing the seismic velocities. The traveltime delays recovered at rates of 0.000025 to 0.000059 s/km per day, completing the recovery within several hundred days after the megathrust.
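The cross-correlation stacking at the heart of this analysis can be illustrated with a toy two-station example; the delay, record lengths, and noise levels below are arbitrary stand-ins, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Green's-function retrieval by stacking noise cross-correlations:
# station B records station A's noise field with a 15-sample propagation
# delay plus incoherent local noise. The stack of daily cross-
# correlations peaks at the inter-station traveltime.
delay, n, n_days = 15, 2048, 30
stack = np.zeros(2 * n - 1)
for _ in range(n_days):
    s = rng.normal(size=n)                     # one day of ambient noise
    a = s + 0.5 * rng.normal(size=n)           # station A record
    b = (np.concatenate([np.zeros(delay), s[:-delay]])
         + 0.5 * rng.normal(size=n))           # delayed copy at station B
    stack += np.correlate(b, a, mode="full")
stack /= n_days

lag = np.argmax(stack) - (n - 1)               # peak lag = traveltime
print(lag)
```

A velocity change would shift this peak; comparing stacks before and after an event, as done above, turns the shift into a traveltime delay per kilometer.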
Neves, A A; Silva, E J; Roter, J M; Belladona, F G; Alves, H D; Lopes, R T; Paciornik, S; De-Deus, G A
2015-11-01
To propose an automated image processing routine based on free software to quantify root canal preparation outcomes in pairs of sound and instrumented roots after micro-CT scanning procedures. Seven mesial roots of human mandibular molars with different canal configuration systems were studied: (i) Vertucci's type 1, (ii) Vertucci's type 2, (iii) two individual canals, (iv) Vertucci's type 6, canals (v) with and (vi) without debris, and (vii) a canal with visible pulp calcification. All teeth were instrumented with the BioRaCe system and scanned in a Skyscan 1173 micro-CT before and after canal preparation. After reconstruction, the instrumented stack of images (IS) was registered against the preoperative sound stack of images (SS). Image processing included contrast equalization and noise filtering. Sound canal volumes were obtained by a minimum threshold. For the IS, a fixed conservative threshold was chosen as the best compromise between instrumented canal and dentine whilst avoiding debris, resulting in instrumented canal plus empty spaces. Arithmetic and logical operations between the sound and instrumented stacks were used to identify debris. Noninstrumented dentine was calculated using a minimum threshold in the IS and subtracting from the SS and total debris. Removed dentine volume was obtained by subtracting SS from IS. Quantitative data on total debris present in the root canal space after instrumentation, noninstrumented areas and removed dentine volume were obtained for each test case, as well as three-dimensional volume renderings. After standardization of the acquisition, reconstruction and image processing of micro-CT images, a quantitative approach for calculating root canal preparation outcomes was achieved using free software. © 2014 International Endodontic Journal. Published by John Wiley & Sons Ltd.
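The arithmetic and logical operations between the registered sound (SS) and instrumented (IS) stacks can be sketched with boolean arrays; the geometry and voxel size below are invented for illustration.

```python
import numpy as np

# Toy stand-in for the registered micro-CT stacks: boolean masks of the
# canal space before (sound) and after (instrumented) preparation.
shape = (4, 32, 32)                       # slices, rows, cols (synthetic)
sound = np.zeros(shape, bool)
sound[:, 10:22, 10:22] = True             # preoperative canal
instrumented = np.zeros(shape, bool)
instrumented[:, 8:22, 8:22] = True        # enlarged canal after shaping
instrumented[:, 12:14, 12:14] = False     # voxels still filled -> debris

# Logical operations between the registered stacks, as in the routine:
debris = sound & ~instrumented            # canal voxels still occupied
removed = instrumented & ~sound           # dentine removed by the files
voxel_vol = 0.01 ** 3                     # assumed isotropic voxel (mm^3)
print(debris.sum() * voxel_vol, removed.sum() * voxel_vol)
```

Multiplying voxel counts by the (scanner-reported) voxel volume converts the masks into the debris and removed-dentine volumes tabulated per test case.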
NASA Astrophysics Data System (ADS)
Mininni, Giuseppe; Sbrilli, Andrea; Maria Braguglia, Camilla; Guerriero, Ettore; Marani, Dario; Rotatori, Mauro
An experimental campaign was carried out on a hospital and cemetery waste incineration plant in order to assess the emissions of polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofurans (PCDFs) and polycyclic aromatic hydrocarbons (PAHs). Raw gases were sampled in the afterburning chamber, using a specifically designed device, after the heat recovery section and at the stack. Samples of slag from the combustion chamber and fly ash from the bag filter were also collected and analyzed. PCDD/F and PAH concentrations in exhaust gas after the heat exchanger (200-350 °C) decreased in comparison with the values detected in the afterburning chamber. A pollutant mass balance over the heat exchanger did not confirm literature findings on de novo synthesis of PCDD/Fs in the heat exchange process. Despite a substantial reduction of PCDD/Fs in the flue gas treatment system (from 77% up to 98%), the stack limit of 0.1 ng I-TEQ Nm⁻³ was not met. PCDD/F emission factors to air spanned from 2.3 up to 44 μg I-TEQ t⁻¹ of burned waste, whereas those through solid residues (mainly fly ash) were in the range 41-3700 μg I-TEQ t⁻¹. Tests run with cemetery wastes generally showed lower PCDD/F emission factors than those with hospital wastes. PAH total emission factors (91-414 μg kg⁻¹ of burned waste) were in the range of values reported for incineration of municipal and industrial wastes. In spite of the observed release from the scrubber, carcinogenic PAH concentrations at the stack (0.018-0.5 μg Nm⁻³) were below the Italian limit of 10 μg Nm⁻³.
NASA Technical Reports Server (NTRS)
Lyons, Suzanne; Sandwell, David
2003-01-01
Interferometric synthetic aperture radar (InSAR) provides a practical means of mapping creep along major strike-slip faults. The small amplitude of the creep signal (less than 10 mm/yr), combined with its short wavelength, makes it difficult to extract from long time span interferograms, especially in agricultural or heavily vegetated areas. We utilize two approaches to extract the fault creep signal from 37 ERS SAR images along the southern San Andreas Fault. First, amplitude stacking is utilized to identify permanent scatterers, which are then used to weight the interferogram prior to spatial filtering. This weighting improves correlation and also provides a mask for poorly correlated areas. Second, the unwrapped phase is stacked to reduce tropospheric and other short-wavelength noise. This combined processing enables us to recover the near-field (approximately 200 m) slip signal across the fault due to shallow creep. Displacement maps from 60 interferograms reveal a diffuse secular strain buildup, punctuated by localized interseismic creep of 4-6 mm/yr line of sight (LOS, 12-18 mm/yr horizontal). With the exception of Durmid Hill, this entire segment of the southern San Andreas experienced right-lateral triggered slip of up to 10 cm during the 3.5-year period spanning the 1992 Landers earthquake. The deformation change following the 1999 Hector Mine earthquake was much smaller (4 cm) and broader than for the Landers event. Profiles across the fault during the interseismic phase show peak-to-trough amplitude ranging from 15 to 25 mm/yr (horizontal component), and the minimum misfit models show a range of creeping/locking depth values that fit the data.
NASA Astrophysics Data System (ADS)
Sachs, Nicholas A.; Ruiz-Torres, Ricardo; Perreault, Eric J.; Miller, Lee E.
2016-02-01
Objective. It is quite remarkable that brain machine interfaces (BMIs) can be used to control complex movements with fewer than 100 neurons. Success may be due in part to the limited range of dynamical conditions under which most BMIs are tested. Achieving high-quality control that spans these conditions with a single linear mapping will be more challenging. Even for simple reaching movements, existing BMIs must reduce the stochastic noise of neurons by averaging the control signals over time, instead of over the many neurons that normally control movement. This forces a compromise between a decoder with dynamics allowing rapid movement and one that allows postures to be maintained with little jitter. Our current work presents a method for addressing this compromise, which may also generalize to more highly varied dynamical situations, including movements with more greatly varying speed. Approach. We have developed a system that uses two independent Wiener filters as individual components in a single decoder, one optimized for movement, and the other for postural control. We computed an LDA classifier using the same neural inputs. The decoder combined the outputs of the two filters in proportion to the likelihood assigned by the classifier to each state. Main results. We have performed online experiments with two monkeys using this neural-classifier, dual-state decoder, comparing it to a standard, single-state decoder as well as to a dual-state decoder that switched states automatically based on the cursor’s proximity to a target. The performance of both monkeys using the classifier decoder was markedly better than that of the single-state decoder and comparable to the proximity decoder. Significance. We have demonstrated a novel strategy for dealing with the need to make rapid movements while also maintaining precise cursor control when approaching and stabilizing within targets. 
Further gains can undoubtedly be realized by optimizing the performance of the individual movement and posture decoders.
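The core blending step, two linear (Wiener-style) decoders mixed in proportion to a classifier's state likelihood, can be sketched as follows. The weights and firing rates are synthetic, and the LDA classifier is replaced by a given probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linear (Wiener-style) decoders blended by a classifier's state
# probability. Weight matrices and firing rates are synthetic; in the
# study the probability would come from the LDA classifier.
n_neurons = 96
W_move = rng.normal(size=(2, n_neurons))   # movement decoder (x, y velocity)
W_post = 0.1 * W_move                      # low-gain postural decoder (toy)

def decode(rates, p_move):
    """Mix the two decoder outputs by the classifier's P(movement)."""
    return p_move * (W_move @ rates) + (1.0 - p_move) * (W_post @ rates)

rates = rng.random(n_neurons)
fast = decode(rates, p_move=0.95)   # classifier says "moving"
hold = decode(rates, p_move=0.05)   # classifier says "holding"
```

Because the mixture is continuous in the classifier probability, the output transitions smoothly between fast movement dynamics and low-jitter posture control rather than switching abruptly.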
Ensemble Semi-supervised Framework for Brain Magnetic Resonance Imaging Tissue Segmentation.
Azmi, Reza; Pishgoo, Boshra; Norozi, Narges; Yeganeh, Samira
2013-04-01
Brain magnetic resonance image (MRI) tissue segmentation is one of the most important parts of clinical diagnostic tools. Pixel classification methods have frequently been used in image segmentation with both supervised and unsupervised approaches. Supervised segmentation methods achieve high accuracy, but they need a large amount of labeled data, which is hard, expensive, and slow to obtain; moreover, they cannot use unlabeled data to train classifiers. On the other hand, unsupervised segmentation methods have no prior knowledge and yield a low level of performance. Semi-supervised learning, which uses a few labeled data together with a large amount of unlabeled data, achieves higher accuracy with less effort. In this paper, we propose an ensemble semi-supervised framework for segmenting brain MRI tissues that uses the results of several semi-supervised classifiers simultaneously. Selecting appropriate classifiers plays a significant role in the performance of this framework. Hence, we present two semi-supervised algorithms, expectation filtering maximization and MCo_Training, which are improved versions of the semi-supervised methods expectation maximization and Co_Training and increase segmentation accuracy. We then use these improved classifiers together with a graph-based semi-supervised classifier as components of the ensemble framework. Experimental results show that segmentation performance with this approach is higher than that of both supervised methods and the individual semi-supervised classifiers.
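The general semi-supervised idea, a few labels plus many unlabeled samples, can be illustrated with a minimal self-training loop around a nearest-centroid classifier. This is a generic sketch on toy 2-D data, not the expectation filtering maximization or MCo_Training algorithms themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated toy "tissue" clusters; only one labeled sample each.
X = np.vstack([rng.normal(loc=-2.0, size=(100, 2)),
               rng.normal(loc=2.0, size=(100, 2))])
y_true = np.array([0] * 100 + [1] * 100)
y_lab = np.full(200, -1)            # -1 marks unlabeled pixels
y_lab[0], y_lab[100] = 0, 1         # one labeled sample per class

for _ in range(5):
    c0 = X[y_lab == 0].mean(axis=0)             # class centroids from
    c1 = X[y_lab == 1].mean(axis=0)             # currently labeled data
    d = np.linalg.norm(X - c1, axis=1) - np.linalg.norm(X - c0, axis=1)
    pred = (d < 0).astype(int)                  # closer to c1 -> class 1
    conf = np.abs(d).astype(float)
    conf[y_lab != -1] = -np.inf                 # never re-select labeled
    idx = np.argsort(conf)[::-1][:20]           # 20 most confident
    y_lab[idx] = pred[idx]                      # promote to pseudo-labels

acc = float((pred == y_true).mean())
print(acc)
```

An ensemble framework like the one above would run several such semi-supervised learners and combine their label votes per pixel.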
[Clarity of flight information in the cockpit of the new aircraft generation].
Stern, C; Schwartz, R; Groenhoff, S; Draeger, J; Hüttig, G; Bernhard, H
1994-08-01
Fundamental changes in cockpit design in recent years, especially the transition from analogue to digital flight information systems and the use of colour-coded displays, place new demands on the visual system of the pilot. Twenty experienced pilots each participated in four 15-min sessions with a simulator program in the new Airbus 340 simulator of the Technical University of Berlin. The pilots were confronted with various flight situations and events. The simulation program was carried out with visual acuity of 1.0 or better, with acuity reduced to 0.5, and with red and green filters. The time between the display of information and the pilot's reaction was determined. The subjects were classified into two groups according to their age (≤ 45 years, ≥ 45 years). In both age groups a significant difference was found only with green filters. There was no difference with reduced visual acuity or with red filters, and no differences were seen between the two age groups.
Plasmonic- and dielectric-based structural coloring: from fundamentals to practical applications
NASA Astrophysics Data System (ADS)
Lee, Taejun; Jang, Jaehyuck; Jeong, Heonyeong; Rho, Junsuk
2018-01-01
Structural coloring is the production of color by surfaces with microstructure fine enough to interfere with visible light; the phenomenon provides a novel paradigm for color printing. Plasmonic color is an emergent property of the interaction between light and metallic surfaces; it can surpass the diffraction limit and achieve a nearly unlimited lifetime. We categorize plasmonic color filters according to their designs (hole, rod, metal-insulator-metal, grating), and also describe structures supported by Mie resonance. We discuss the principles, merits, and demerits of each color filter. We also discuss a new concept of color filters with tunability and reconfigurability, which enables printing of structural color for dynamic coloring at will. Approaches for dynamic coloring are classified as liquid crystal, chemical transition, and mechanical deformation. At the end of the review, we highlight scale-up fabrication methods, including nanoimprinting, self-assembly, and laser-induced processes, that may enable real-world application of structural coloring.
Data analysis using scale-space filtering and Bayesian probabilistic reasoning
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak; Kutulakos, Kiriakos; Robinson, Peter
1991-01-01
This paper describes a program for the analysis of output curves from a Differential Thermal Analyzer (DTA). The program first extracts probabilistic qualitative features from the DTA curve of a soil sample, and then uses Bayesian probabilistic reasoning to infer the minerals in the soil. The qualifier module employs a simple and efficient extension of scale-space filtering suitable for handling DTA data. We have observed that points can vanish from contours in the scale-space image when filtering operations are not highly accurate. To handle the problem of vanishing points, perceptual organization heuristics are used to group the points into lines. Next, these lines are grouped into contours by using additional heuristics. Probabilities are associated with these contours using domain-specific correlations. A Bayes tree classifier processes the probabilistic features to infer the presence of different minerals in the soil. Experiments show that the algorithm that uses domain-specific correlations to infer qualitative features outperforms a domain-independent algorithm that does not.
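The scale-space filtering step can be sketched by smoothing a DTA-like curve at increasing Gaussian scales and counting the surviving peaks; the curve, scales, and peak threshold below are illustrative choices, not the program's actual parameters.

```python
import numpy as np

def gaussian_smooth(sig, sigma):
    """Smooth a 1-D signal with a truncated Gaussian kernel."""
    r = int(4 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return np.convolve(sig, k / k.sum(), mode="same")

def n_peaks(sig, thresh=0.1):
    """Count strict local maxima above a fixed height threshold."""
    mid = sig[1:-1]
    return int(((mid > sig[:-2]) & (mid > sig[2:]) & (mid > thresh)).sum())

# A DTA-like curve: two thermal peaks plus noise. Smoothing at
# increasing scales suppresses noise-induced extrema, leaving only the
# coarse peaks -- the contours tracked in a scale-space image.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
curve = (np.exp(-((t - 0.3) / 0.03) ** 2)
         + 0.6 * np.exp(-((t - 0.7) / 0.05) ** 2)
         + 0.03 * rng.normal(size=t.size))

counts = [n_peaks(gaussian_smooth(curve, s)) for s in (1, 4, 16)]
print(counts)
```

Tracking where each peak appears and persists across scales yields the contours to which the program then attaches probabilities.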
NASA Astrophysics Data System (ADS)
Hutchison, A. A.; Ghosh, A.
2016-12-01
Very low frequency earthquakes (VLFEs) occur in transitional zones of faults, release seismic energy in the 0.02-0.05 Hz frequency band over a ~90 s duration, and typically have magnitudes in the range Mw 3.0-4.0. VLFEs can occur down-dip of the seismogenic zone, where they can transfer stress up-dip, potentially bringing the locked zone closer to a critical failure stress. VLFEs also occur up-dip of the seismogenic zone in a region along the plate interface that can rupture coseismically during large megathrust events, such as the 2011 Tohoku-Oki earthquake [Ide et al., 2011]. VLFEs were first detected in Cascadia during the 2011 episodic tremor and slip (ETS) event, occurring coincidentally with tremor [Ghosh et al., 2015]. However, during the 2014 ETS event, VLFEs were spatially and temporally asynchronous with tremor activity [Hutchison and Ghosh, 2016]. Such contrasting behaviors remind us that the mechanics behind such events remain elusive, yet they are responsible for the largest portion of the moment release during an ETS event. Here, we apply a match filter method using known VLFEs as template events to detect additional VLFEs. Using a grid-search centroid moment tensor inversion method, we invert stacks of the resulting match filter detections to ensure that the moment tensor solutions are similar to those of the respective template events. Our ability to successfully employ a match filter method for VLFE detection in Cascadia intrinsically indicates that these events can be repeating, implying that the same asperities are likely responsible for generating multiple VLFEs.
NASA Astrophysics Data System (ADS)
Arevalo-Lopez, H. S.; Levin, S. A.
2016-12-01
The vertical component of seismic wave reflections is contaminated by surface noise such as ground roll and secondary scattering from near-surface inhomogeneities. A common method for attenuating these, unfortunately often aliased, arrivals is via velocity filtering and/or multichannel stacking. 3D-3C acquisition technology provides two additional sources of information about the surface wave noise that we exploit here: (1) areal receiver coverage, and (2) a pair of horizontal components recorded at the same location as the vertical component. Areal coverage allows us to segregate arrivals at each individual receiver or group of receivers by direction. The horizontal components, having much less compressional reflection body wave energy than the vertical component, provide a template of where to focus our energies on attenuating the surface wave arrivals. (In the simplest setting, the vertical component is a scaled 90-degree phase-rotated version of the radial horizontal arrival, a potential third lever we have not yet tried to integrate.) The key to our approach is to use the magnitude of the horizontal components to outline a data-adaptive "velocity" filter region in the ω-Kx-Ky domain. The big advantage for us is that even in the presence of uneven receiver geometries, the filter automatically tracks through aliasing without manual sculpting and a priori velocity and dispersion estimation. The method was applied to an aliased synthetic dataset based on a five-layer earth model which also included shallow scatterers to simulate near-surface inhomogeneities, and successfully removed both the ground roll and scatterers from the vertical component (Figure 1).
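The conventional frequency-wavenumber velocity filtering being improved upon here can be sketched in 2-D (one horizontal coordinate) as a fan mask in the f-k plane. Geometry, velocities, and the cutoff below are synthetic choices; the authors' method replaces this fixed fan with a data-adaptive region outlined by the horizontal components.

```python
import numpy as np

# 2-D f-k fan filter sketch: reject arrivals with apparent velocity
# below a cutoff. The gather holds a fast "reflection" (3000 m/s) and a
# slow "ground roll" arrival (500 m/s); all numbers are synthetic.
nt, nx, dt, dx = 256, 32, 0.004, 10.0
t = np.arange(nt) * dt
data = np.zeros((nt, nx))
for ix in range(nx):
    data[:, ix] += np.exp(-((t - 0.1 - ix * dx / 3000.0) / 0.01) ** 2)
    data[:, ix] += np.exp(-((t - 0.2 - ix * dx / 500.0) / 0.01) ** 2)

D = np.fft.fft2(data)
f = np.fft.fftfreq(nt, dt)[:, None]     # temporal frequency axis
k = np.fft.fftfreq(nx, dx)[None, :]     # wavenumber axis
v_cut = 1500.0
keep = np.abs(f) >= v_cut * np.abs(k)   # keep apparent velocity >= v_cut
filtered = np.real(np.fft.ifft2(D * keep))
```

The hard fan edge and the need to choose v_cut a priori are exactly the limitations the data-adaptive region is meant to remove.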
Evaluating Internal Model Strength and Performance of Myoelectric Prosthesis Control Strategies.
Shehata, Ahmed W; Scheme, Erik J; Sensinger, Jonathon W
2018-05-01
On-going developments in myoelectric prosthesis control have provided prosthesis users with an assortment of control strategies that vary in reliability and performance. Many studies have focused on improving performance by providing feedback to the user, but have overlooked the effect of this feedback on internal model development, which is key to improving long-term performance. In this paper, the strength of the internal models developed for two commonly used myoelectric control strategies, raw control with raw feedback (using a regression-based approach) and filtered control with filtered feedback (using a classifier-based approach), was evaluated using two psychometric measures: trial-by-trial adaptation and just-noticeable difference. The performance of both strategies was also evaluated using a Schmidt-style target acquisition task. Results obtained from 24 able-bodied subjects showed that although filtered control with filtered feedback had better short-term performance in path efficiency, raw control with raw feedback resulted in stronger internal model development, which may lead to better long-term performance. Despite inherent noise in the control signals of the regression controller, these findings suggest that the rich feedback associated with regression control may be used to improve human understanding of the myoelectric control system.
Dier, Tobias K F; Egele, Kerstin; Fossog, Verlaine; Hempelmann, Rolf; Volmer, Dietrich A
2016-01-19
High resolution mass spectrometry was utilized to study the highly complex product mixtures resulting from electrochemical breakdown of lignin. As most of the chemical structures of the degradation products were unknown, enhanced mass defect filtering techniques were implemented to simplify the characterization of the mixtures. It was shown that the implemented ionization techniques had a major impact on the range of detectable breakdown products, with atmospheric pressure photoionization in negative ionization mode providing the widest coverage in our experiments. Different modified Kendrick mass plots were used as a basis for mass defect filtering, where Kendrick mass defect and the mass defect of the lignin-specific guaiacol (C7H7O2) monomeric unit were utilized, readily allowing class assignments independent of the oligomeric state of the product. The enhanced mass defect filtering strategy therefore provided rapid characterization of the sample composition. In addition, the structural similarities between the compounds within a degradation sequence were determined by comparison to a tentatively identified product of this compound series. In general, our analyses revealed that primarily breakdown products with low oxygen content were formed under electrochemical conditions using protic ionic liquids as solvent for lignin.
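The guaiacol-based mass defect filtering reduces to a one-line rescaling. In this sketch the exact mass of the C7H7O2 unit (~123.0446 Da; verify against an isotope table before relying on it) and the example m/z value are our own illustrative numbers, and the sign convention for the defect varies between groups.

```python
# Kendrick-style mass defect with the guaiacol-like C7H7O2 repeat unit
# (nominal mass 123, exact mass ~123.0446 Da). Members of one homologous
# series share (nearly) the same defect, so plotting defect vs. m/z
# groups them regardless of oligomeric state.
def mass_defect(mz, unit_exact=123.0446, unit_nominal=123):
    km = mz * unit_nominal / unit_exact    # Kendrick mass
    return round(km) - km                  # Kendrick mass defect

base = 301.0712                            # hypothetical product-ion m/z
d1 = mass_defect(base)
d2 = mass_defect(base + 123.0446)          # one repeat unit heavier
print(abs(d1 - d2))
```

Because adding exactly one repeat unit shifts the Kendrick mass by an integer, the defect is unchanged along the series, which is what allows class assignment independent of oligomer size.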
Liu, Guorui; Yang, Lili; Zhan, Jiayu; Zheng, Minghui; Li, Li; Jin, Rong; Zhao, Yuyang; Wang, Mei
2016-12-01
Cement kilns can be used to co-process fly ash from municipal solid waste incinerators. However, this might increase emission of organic pollutants like polychlorinated biphenyls (PCBs). Knowledge of PCB concentrations and homolog and congener patterns at different stages in this process could be used to assess the possibility of simultaneously controlling emissions of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) and "dioxin-like" compounds. To date, emissions from cement kilns co-processing fly ash from municipal solid waste incinerators have not been analyzed for PCBs. In this study, stack gas and particulate samples from two cement kilns co-processing waste incinerator fly ash were analyzed for PCBs. The average total tri- to deca-chlorinated biphenyl (∑3-10PCB) concentration in the stack gas samples was 10.15 ng m⁻³. The ∑3-10PCB concentration ranges in particulate samples from different stages were 0.83-41.79 ng g⁻¹ for cement kiln 1 and 0.13-1.69 ng g⁻¹ for cement kiln 2. The ∑3-10PCB concentrations were much higher in particulate samples from the suspension pre-heater boiler, humidifier tower, and kiln back-end bag filters than in particulate samples from other stages. For these three stages, PCBs contributed 15-18% of the total PCB, PCDD/F, and polychlorinated naphthalene toxic equivalents in stack gases and particulate matter. The PCB distributions were similar to those found in other studies for PCDD/Fs and polychlorinated naphthalenes, which suggests that it may be possible to simultaneously control emissions of multiple organic pollutants from cement kilns. Homolog patterns in the particulate samples were dominated by the pentachlorobiphenyls. CB-105, CB-118, and CB-123 were the dominant dioxin-like PCB congeners that formed at the back-end of the cement kiln. A mass balance of PCBs in the cement kilns indicated that the total mass of PCBs in the stack gases and clinker was about half the mass of PCBs in the raw materials.
Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Auer-Berger, Manuel; Tretnak, Veronika; Wenzl, Franz-Peter; Krenn, Joachim; List-Kratochvil, Emil J. W.
2017-02-01
With the invention of phosphorescent emitter materials, organic light emitting diodes with internal quantum yields of up to 100% can be realized. Still, the extraction of light from the OLED stack is a bottleneck, which hampers the availability of OLEDs with large external quantum efficiencies. In this contribution, we highlight the advantages of integrating aluminum nanodisc arrays into the OLED stack. By this, not only can the out-coupling of light be enhanced, but the emission color can also be tailored and controlled. By means of extinction and fluorescence spectroscopy measurements we are able to show how the sharp features observed in the extinction measurements correlate with a very selective fluorescence enhancement of the organic emitter materials used in these studies. At the same time, localized surface plasmon resonances of the individual nanodiscs further modify the emission spectrum, e.g., by filtering the green emission tail. A combination of these factors leads to a modification of the emission color between CIE1931 (x,y) chromaticity coordinates of (0.149, 0.225) and (0.152, 0.352). After accounting for the sensitivity of the human eye, we are able to demonstrate that this adjustment of the chromaticity coordinates is accompanied by an increase in device efficiency.
Andries, Erik; Hagstrom, Thomas; Atlas, Susan R; Willman, Cheryl
2007-02-01
Linear discrimination, from the point of view of numerical linear algebra, can be treated as solving an ill-posed system of linear equations. In order to generate a solution that is robust in the presence of noise, these problems require regularization. Here, we examine the ill-posedness involved in the linear discrimination of cancer gene expression data with respect to outcome and tumor subclasses. We show that a filter factor representation, based upon Singular Value Decomposition, yields insight into the numerical ill-posedness of the hyperplane-based separation when applied to gene expression data. We also show that this representation yields useful diagnostic tools for guiding the selection of classifier parameters, thus leading to improved performance.
A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras
NASA Astrophysics Data System (ADS)
Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.
2006-05-01
A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Detection of the face and facial features (eyes, nasal root, nose, and mouth) is performed first using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
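The cosine-distance nearest-neighbor step described in this abstract can be sketched as follows. This is a minimal illustration with made-up three-component feature vectors and labels, not the Gabor feature vectors or face models used in the actual system:

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity: 0 when the vectors point in the same direction
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def nearest_neighbor(query, models):
    # models: {expression label: stored feature vector};
    # return the label whose model vector is closest in cosine distance
    return min(models, key=lambda label: cosine_distance(query, models[label]))

# hypothetical feature vectors (the real ones are sampled Gabor responses)
models = {"happy": [0.9, 0.1, 0.2],
          "sad": [0.1, 0.8, 0.3],
          "neutral": [0.4, 0.4, 0.4]}
label = nearest_neighbor([0.85, 0.15, 0.25], models)  # "happy"
```

Because cosine distance ignores vector magnitude, this comparison is insensitive to a uniform scaling of the feature vector, which is one reason it pairs well with filter-bank features.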
STEAM PLANT, TRA609. SECTION A SHOWS FEATURES OF NORTH/SOUTH AXIS: ...
STEAM PLANT, TRA-609. SECTION A SHOWS FEATURES OF NORTH/SOUTH AXIS: STEAM GENERATOR AND CATWALK, STACK, DEGREASER FEED WATER HEATER IN PENTHOUSE, MEZZANINE, SURGE TANK PIT (BELOW GROUND LEVEL). UTILITY ROOM SHOWS DIESEL ENGINE GENERATORS, AIR TANKS, STARTING AIR COMPRESSORS. OUTSIDE SOUTH END ARE EXHAUST MUFFLER, AIR INTAKE OIL FILTER, RADIATOR COOLING UNIT, AIR SURGE TANK. SECTION B CROSSES WEST TO EAST NEAR SOUTH END OF BUILDING TO SHOW ARRANGEMENT OF DIESEL ENGINE GENERATOR, AIR DRIER, AFTER COOLER, AIR COMPRESSOR, AND BLOWDOWN TANK. BLAW-KNOX 3150-9-2, 6/1950. INL INDEX NO. 431-0609-00-098-100018, REV. 3. - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Dual light field and polarization imaging using CMOS diffractive image sensors.
Jayasuriya, Suren; Sivaramakrishnan, Sriram; Chuang, Ellen; Guruaribam, Debashree; Wang, Albert; Molnar, Alyosha
2015-05-15
In this Letter we present, to the best of our knowledge, the first integrated CMOS image sensor that can simultaneously perform light field and polarization imaging without the use of external filters or additional optical elements. Previous work has shown how photodetectors with two stacks of integrated metal gratings above them (called angle sensitive pixels) diffract light in a Talbot pattern to capture four-dimensional light fields. We show, in addition to diffractive imaging, that these gratings polarize incoming light and characterize the response of these sensors to polarization and incidence angle. Finally, we show two applications of polarization imaging: imaging stress-induced birefringence and identifying specular reflections in scenes to improve light field algorithms for these scenes.
CHAMP (Camera, Handlens, and Microscope Probe)
NASA Technical Reports Server (NTRS)
Mungas, Greg S.; Boynton, John E.; Balzer, Mark A.; Beegle, Luther; Sobel, Harold R.; Fisher, Ted; Klein, Dan; Deans, Matthew; Lee, Pascal; Sepulveda, Cesar A.
2005-01-01
CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As a robotic arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision rangefinding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP was originally developed through the Mars Instrument Development Program (MIDP) in support of robotic field investigations, but may also find application in new areas such as robotic in-orbit servicing and maintenance operations associated with spacecraft and human operations. We overview CHAMP's instrument performance and basic design considerations below.
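CHAMP's z-stacking implementation is not detailed in this abstract; the sketch below shows only the generic idea of focus stacking under a simple assumption: for each pixel, keep the value from the focal slice with the highest local contrast (here a 4-neighbour absolute-difference proxy, images as nested lists):

```python
def sharpness(img, r, c):
    # local contrast proxy: sum of absolute differences with the 4-neighbours
    h, w = len(img), len(img[0])
    s = 0
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < h and 0 <= cc < w:
            s += abs(img[r][c] - img[rr][cc])
    return s

def z_stack(stack):
    # per pixel, take the value from the slice with the highest local contrast
    h, w = len(stack[0]), len(stack[0][0])
    return [[max(stack, key=lambda im: sharpness(im, r, c))[r][c]
             for c in range(w)] for r in range(h)]

blurred = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]    # out-of-focus slice: flat
sharp = [[0, 10, 0], [10, 0, 10], [0, 10, 0]]  # in-focus slice: high contrast
composite = z_stack([blurred, sharp])          # picks every pixel from `sharp`
```

Production focus-stacking pipelines typically use a Laplacian-of-Gaussian focus measure and blend across slices rather than hard-selecting, but the per-pixel "sharpest slice wins" rule above is the core of the technique.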
Three-dimensional reconstruction of glycosomes in trypanosomatids of the genus Phytomonas.
Attias, M; de Souza, W
1995-02-01
Computer-aided three-dimensional (3-D) reconstructions of cells from two isolates of protozoa of the genus Phytomonas, trypanosomatids found in plants, were made from 0.3 μm thick sections, imaged on a Zeiss 902 electron microscope with an energy filter for inelastically scattered electrons, in order to obtain information about glycosomal shape diversity. Direct counts of peroxisomes (glycosomes) from Phytomonas sp. from Chamaesyce thymifolia indicated that there were fewer glycosomes per cell than the simple count of ultrathin section profiles would suggest and that these organelles could be long and branched. On the other hand, the stacked glycosomes observed in the isolate from Euphorbia characias were small individual structures and no connection was seen between them.
Mode Profiles in Waveguide-Coupled Resonators
NASA Technical Reports Server (NTRS)
Hunt, William D.; Cameron, Tom; Saw, John C. B.; Kim, Yoonkee
1993-01-01
Surface acoustic wave (SAW) waveguide-coupled resonators are of considerable interest for narrow-band filter applications, though to date there has been very little published on the acoustic details of their operation. As in any resonator, one must fully understand its mode structure and herein we study the SAW mode profiles in these devices. Transverse mode profiles in the resonant cavity of the device were measured at various frequencies of interest using a knife-edge laser probe. In addition we predict the mode profiles for the device structure by two independent methods. One is a stack-matrix approach adapted from integrated optics and the other is a conventional analytical eigenmode analysis of the Helmholtz equation. Both modeling techniques are in good agreement with the measured results.
Semi-transparent solar energy thermal storage device
McClelland, John F.
1986-04-08
A visually transmitting solar energy absorbing thermal storage module includes a thermal storage liquid containment chamber defined by an interior solar absorber panel, an exterior transparent panel having a heat mirror surface substantially covering the exterior surface thereof and associated top, bottom and side walls. Evaporation of the thermal storage liquid is controlled by a low vapor pressure liquid layer that floats on and seals the top surface of the liquid. Porous filter plugs are placed in filler holes of the module. An algicide and a chelating compound are added to the liquid to control biological and chemical activity while retaining visual clarity. A plurality of modules may be supported in stacked relation by a support frame to form a thermal storage wall structure.
Semi-transparent solar energy thermal storage device
McClelland, John F.
1985-06-18
A visually transmitting solar energy absorbing thermal storage module includes a thermal storage liquid containment chamber defined by an interior solar absorber panel, an exterior transparent panel having a heat mirror surface substantially covering the exterior surface thereof and associated top, bottom and side walls. Evaporation of the thermal storage liquid is controlled by a low vapor pressure liquid layer that floats on and seals the top surface of the liquid. Porous filter plugs are placed in filler holes of the module. An algicide and a chelating compound are added to the liquid to control biological and chemical activity while retaining visual clarity. A plurality of modules may be supported in stacked relation by a support frame to form a thermal storage wall structure.
Topological Valley Transport in Two-dimensional Honeycomb Photonic Crystals.
Yang, Yuting; Jiang, Hua; Hang, Zhi Hong
2018-01-25
Two-dimensional photonic crystals, in analogy to AB/BA-stacked bilayer graphene in electronic systems, are studied. Inequivalent valleys in the momentum space for photons can be manipulated by simply engineering the diameters of cylinders in a honeycomb lattice. The inequivalent valleys in the photonic crystal are selectively excited by a designed optical chiral source, and bulk valley polarizations are visualized. Unidirectional valley interface states are shown to exist on a domain wall connecting two photonic crystals with different valley Chern numbers. With a similar optical vortex index, interface states can couple with bulk valley polarizations, and thus a valley filter and valley coupler can be designed. Our simple dielectric photonic crystal scheme can help exploit the valley degree of freedom for future optical devices.
Han, Don-Hee; Lee, Jinheon
2005-10-01
Korean certification regulation for particulate filtering respirators requires inward leakage (IL) or total inward leakage (TIL) testing according to European Standard EN 13274-1, and the standard levels of compliance are similar to those of the European Standard. This study was conducted to evaluate particulate filtering respirators commercially used in the Korean market using an IL or TIL test, and to evaluate the validity of the standard level in Korea. Three half masks and 10 filtering facepieces (two top class, four 1st class and four 2nd class), a total of 13 brand-name respirators, were selected for the test with panels of 10 subjects. Each subject was classified using a nine-square facial dimension grid in accordance with face length and lip length. IL or TIL testing was conducted at the laboratory of the 3M Innovation Center, in which the experimental instruments and systems were established in compliance with European standards. The testing procedure followed EN 13274-1 (2001). As expected, leakages of half masks were less than those of filtering facepieces, and the latter differed significantly among brands. TILs of the 1st class filtering facepieces were found to be much higher than those of the 2nd class, a result that may confuse a wearer selecting a mask. The main leakage route for filtering facepieces may be not the filter medium but the face seal. Therefore, it is necessary to develop well-fitting filtering facepieces for Koreans. Because leakages were significantly different for different facial dimensions, a defined test panel for IL or TIL testing according to country or race should be developed. A more precise method to demonstrate fit, for example fit testing as in the US regulations, will be needed before IL or TIL testing or when selecting a respirator. Another finding implies that the geometric mean of the five exercises for IL or TIL may be better than the arithmetic mean for establishing a standard individual subject mean.
Real-time detection of transients in OGLE-IV with application of machine learning
NASA Astrophysics Data System (ADS)
Klencki, Jakub; Wyrzykowski, Łukasz
2016-06-01
The current bottleneck of transient detection in most surveys is the problem of rejecting numerous artifacts from detected candidates. We present a triple-stage hierarchical machine learning system for automated artifact filtering in difference imaging, based on self-organizing maps. The classifier, when tested on the OGLE-IV Transient Detection System, accepts 97% of real transients while removing up to 97.5% of artifacts.
Constructing and Classifying Email Networks from Raw Forensic Images
2016-09-01
...data mining for sequence and pattern mining; in medical imaging for image segmentation; and in computer vision for object recognition" [28]. ...machine learning and data mining suite that is written in Python. It provides a platform for experiment selection, recommendation systems, and predictive modeling. The Orange library is a hierarchically-organized toolbox of data mining components. Data filtering and probability assessment are at the...
Applying Image Matching to Video Analysis
2010-09-01
...image groups, classified by the background scene, are the flag, the kitchen, the telephone, the bookshelf, the title screen, the... Image counts per scene: Kitchen: 136, Telephone: 3, Bookshelf: 81, Title Screen: 10, Map 1: 24, Map 2: 16. ...command line. This implementation of a Bloom filter uses two arbitrary... with the Bookshelf images. This scene is a much closer shot than the Kitchen scene, so the host occupies much of the background. Algorithms for face...
Improvement of Liquefiable Foundation Conditions Beneath Existing Structures.
1985-08-01
...filter zones, and drains. Drilling fluids can cause hydraulic fracturing. These hazards can lead to piping and hydraulic fracturing... with results of piping and hydraulic fracturing (Continued)... Site conditions have been classified into three cases; Case 1 is for beneath... which could lead to piping and hydraulic fracturing. Soil Reinforcement: 16. Vibro-replacement (see methods 2 and 3), stone and sand columns applicable to...
Object Detection using the Kinect
2012-03-01
...Kinect camera and point cloud data from the Kinect's structured light stereo system (figure 1). We obtain reasonable results using a single prototype... same manner we present in this report. For example, at Willow Garage, Steder uses a 3-D feature he developed to classify objects directly from point... detecting backpacks using the data available from the Kinect sensor. 3.1 Point Cloud Filtering. Dense point clouds derived from stereo are notoriously...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, S.; Wong, K.V.; Nemerow, N.
Characterization of the following waste streams: air-classified light (ACL), digester slurry, filter cake, filtrate, washwater input and washwater effluent has been made for the Refcom facility in order to assess the effects of these waste streams, if discharged into the environment. Special laboratory studies to evaluate the effect of plastics on anaerobic digestion have been undertaken. A separate report has been furnished describing the studies of lab-model digesters. Data collected for ACL has been statistically analyzed.
Non-invasive Fetal ECG Signal Quality Assessment for Multichannel Heart Rate Estimation.
Andreotti, Fernando; Graser, Felix; Malberg, Hagen; Zaunseder, Sebastian
2017-12-01
The noninvasive fetal ECG (NI-FECG) from abdominal recordings offers novel prospects for prenatal monitoring. However, NI-FECG signals are corrupted by various nonstationary noise sources, making the processing of abdominal recordings a challenging task. In this paper, we present an online approach that dynamically assesses the quality of the NI-FECG to improve fetal heart rate (FHR) estimation. Using a naive Bayes classifier, state-of-the-art and novel signal quality indices (SQIs), and an existing adaptive Kalman filter, FHR estimation was improved. For the purpose of training and validating the proposed methods, a large annotated private clinical dataset was used. The suggested classification scheme demonstrated good accuracy (assessed with Krippendorff's alpha) in determining the overall quality of NI-FECG signals, and the proposed Kalman filter outperformed alternative methods for FHR estimation. The proposed algorithm was able to reliably reflect changes of signal quality and can be used to improve FHR estimation. NI-FECG signal quality estimation and multichannel information fusion are largely unexplored topics. Based on previous works, multichannel FHR estimation is a field that could strongly benefit from such methods. The developed SQI algorithms as well as the resulting classifier were made available under a GNU GPL open-source license and contributed to the FECGSYN toolbox.
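The interplay of a quality index and a Kalman filter can be sketched with a scalar random-walk filter: a low signal-quality score inflates the measurement-noise term, so poor-quality FHR measurements move the estimate less. The SQI-to-noise mapping below is a hypothetical stand-in, not the paper's adaptive scheme:

```python
def kalman_update(x, p, z, r, q=1e-3):
    # scalar random-walk Kalman step: x/p are the state estimate and its
    # variance, z is the new FHR measurement, r its (quality-derived) noise
    p = p + q                  # predict: variance grows by process noise
    k = p / (p + r)            # Kalman gain
    x = x + k * (z - x)        # correct toward the measurement
    p = (1.0 - k) * p
    return x, p

x, p = 140.0, 1.0              # initial FHR estimate in bpm
for z, quality in [(138.0, 0.9), (180.0, 0.1), (141.0, 0.8)]:
    r = 1.0 / quality          # hypothetical mapping: low quality -> high noise
    x, p = kalman_update(x, p, z, r)
```

With this mapping, the implausible 180 bpm reading carries a quality of 0.1 and therefore barely shifts the estimate, while the two high-quality readings dominate.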
Ashouri, Sajad; Abedi, Mohsen; Abdollahi, Masoud; Dehghan Manshadi, Farideh; Parnianpour, Mohamad; Khalaf, Kinda
2017-10-01
This paper presents a novel approach for evaluating low back pain (LBP) in various settings. The proposed system uses cost-effective inertial sensors, in conjunction with pattern recognition techniques, to identify classifiers that are sensitive for discriminating LBP patients. For validation, 24 healthy individuals and 28 low back pain patients performed trunk motion tasks in five different directions. Four combinations of these motions were selected based on the literature, and the corresponding kinematic data were collected. Upon filtering (4th order, low-pass Butterworth filter) and normalizing the data, Principal Component Analysis was used for feature extraction, while a Support Vector Machine classifier was applied for data classification. The results reveal that non-linear kernel classification can be adequately employed for low back pain identification. Our preliminary results demonstrate that using a single inertial sensor placed on the thorax, in conjunction with a relatively simple test protocol, can identify low back pain with an accuracy of 96%, a sensitivity of 100%, and a specificity of 92%. While our approach shows promising results, further validation in a larger population is required before the methodology can be used as a practical quantitative assessment tool for the detection of low back pain in clinical/rehabilitation settings. Copyright © 2017 Elsevier Ltd. All rights reserved.
Agnihotri, Deepak; Verma, Kesari; Tripathi, Priyanka
2016-01-01
The contiguous sequences of terms (N-Grams) in documents are symmetrically distributed among different classes. This symmetrical distribution raises uncertainty about which class an N-Gram belongs to. In this paper, we focus on the selection of the most discriminating N-Grams by reducing the effects of symmetrical distribution. In this context, a new text feature selection method named the symmetrical strength of the N-Grams (SSNG) is proposed using a two pass filtering based feature selection (TPF) approach. Initially, in the first pass of the TPF, the SSNG method chooses various informative N-Grams from the entire set of N-Grams extracted from the corpus. Subsequently, in the second pass the well-known Chi Square (χ²) method is used to select the few most informative N-Grams. Further, to classify the documents, two standard classifiers, Multinomial Naive Bayes and Linear Support Vector Machine, were applied to ten standard text data sets. In most of the datasets, the experimental results show that the performance and success rate of the SSNG method using the TPF approach are superior to state-of-the-art methods, viz. Mutual Information, Information Gain, Odds Ratio, Discriminating Feature Selection and χ².
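The second-pass χ² ranking used above can be illustrated with the standard 2×2 contingency form of the statistic. This is a textbook sketch with made-up document counts, not the authors' implementation:

```python
def chi_square(n11, n10, n01, n00):
    # 2x2 contingency statistic for one N-Gram/class pair:
    # n11 = class docs containing the N-Gram, n10 = class docs without it,
    # n01 = other-class docs containing it, n00 = other-class docs without it
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n10 + n00) * (n11 + n10) * (n01 + n00)
    return num / den if den else 0.0

# an N-Gram concentrated in one class scores high...
discriminating = chi_square(40, 10, 10, 40)   # 36.0
# ...while a symmetrically distributed N-Gram scores zero
symmetric = chi_square(25, 25, 25, 25)        # 0.0
```

The zero score for the evenly spread N-Gram is exactly the "symmetrical distribution" problem the abstract describes: χ² alone cannot rank such terms, which motivates the SSNG first pass.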
Human tracking in thermal images using adaptive particle filters with online random forest learning
NASA Astrophysics Data System (ADS)
Ko, Byoung Chul; Kwak, Joon-Young; Nam, Jae-Yeal
2013-11-01
This paper presents a fast and robust human tracking method for use with a moving long-wave infrared thermal camera under poor illumination, in the presence of shadows and cluttered backgrounds. To improve human tracking performance while minimizing the computation time, this study proposes online learning of classifiers based on particle filters and a combination of a local intensity distribution (LID) with oriented center-symmetric local binary patterns (OCS-LBP). Specifically, we design a real-time random forest (RF), an ensemble of decision trees for confidence estimation, and confidences of the RF are converted into a likelihood function of the target state. First, the target model is selected by the user and particles are sampled. Then, RFs are generated using the positive and negative examples with LID and OCS-LBP features by online learning. In the next stage, the learned RF classifiers are used to detect the most likely target position in the subsequent frame. Then, the RFs are learned again by means of fast retraining with the tracked object and background appearance in the new frame. The proposed algorithm is successfully applied to various thermal videos as tests and its tracking performance is better than those of other methods.
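Particle-filter trackers of this kind repeatedly resample candidate states in proportion to classifier confidence, so high-confidence hypotheses survive and low-confidence ones die out. A minimal multinomial resampling step is sketched below (generic, with hypothetical (x, y) states; not the paper's exact scheme):

```python
import random

def resample(particles, weights, rng=None):
    # multinomial resampling: draw len(particles) survivors with probability
    # proportional to their (classifier-confidence) weights
    rng = rng or random.Random(0)
    total = sum(weights)
    return rng.choices(particles, weights=[w / total for w in weights],
                       k=len(particles))

# hypothetical particle states (x, y) with RF confidences as weights;
# the zero-weight outlier can never be drawn
states = [(10, 20), (11, 21), (50, 60)]
survivors = resample(states, [0.5, 0.5, 0.0])
```

After resampling, the surviving particles are typically perturbed (a motion model plus noise) before the next frame's confidence evaluation.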
A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.
Suk, Heung-Il; Lee, Seong-Whan
2013-02-01
As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered as a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of the probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method by extending a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing the results and its success on three public databases.
High brightness diode lasers controlled by volume Bragg gratings
NASA Astrophysics Data System (ADS)
Glebov, Leonid
2017-02-01
Volume Bragg gratings (VBGs) recorded in photo-thermo-refractive (PTR) glass are holographic optical elements that are effective spectral and angular filters withstanding high power laser radiation. Reflecting VBGs are narrow-band spectral filters, while transmitting VBGs are narrow-band angular filters. The use of these optical elements in external resonators of semiconductor lasers enables extremely selective resonant feedback that provides dramatic spectral and angular narrowing of laser diode radiation without a significant power or efficiency penalty. Spectral narrowing of laser diodes by reflecting VBGs has been demonstrated in a wide spectral region from the near UV to 3 μm. Commercially available VBGs have spectral widths ranging from a few nanometers to a few tens of picometers. Efficient spectral locking was demonstrated for edge emitters (single diodes, bars, modules, and stacks), vertical cavity surface emitting lasers (VCSELs), grating coupled surface emitting lasers (GCSELs), and interband cascade lasers (ICLs). The use of multiplexed VBGs provides multiwavelength emission from a single emitter. Spectrally locked semiconductor lasers have demonstrated CW power from milliwatts to a kilowatt. Angular narrowing by transmitting VBGs enables single transverse mode emission from wide-aperture diode lasers having resonators with large Fresnel numbers. This feature provides close to diffraction-limited divergence along the slow axis of wide-stripe edge emitters. Radiation exchange between lasers by means of spatially profiled or multiplexed VBGs enables coherent combining of diode lasers. A sequence of VBGs or multiplexed VBGs enables spectral combining of spectrally narrowed diode lasers or laser modules. Thus the use of VBGs for diode laser beam control provides a dramatic increase in brightness.
NASA Astrophysics Data System (ADS)
Fereydooni, H.; Mojeddifar, S.
2017-09-01
This study introduced a different procedure to implement the matched filtering algorithm (MF) on ASTER images to obtain the distribution map of alteration minerals in the northwestern part of the Kerman Cenozoic Magmatic Arc (KCMA). This region contains many areas with porphyry copper mineralization, such as Meiduk, Abdar, Kader, Godekolvari, Iju, Serenu, Chahfiroozeh and Parkam. Argillization, sericitization and propylitization are the most common types of hydrothermal alteration in the area. Matched filtering results were provided for alteration minerals as a matched filtering score, called the MF image. To identify the pixels which contain only one material (endmember), an appropriate threshold value should be applied to the MF image; the chosen threshold classifies an MF image into background and target pixels. This article argues that this thresholding process (the choice of a single threshold) misclassifies pixels in the MF image. To address the issue, this paper introduced the directed matched filtering (DMF) algorithm, in which a spectral signature-based filter (SSF) was used instead of the thresholding process. SSF is a user-defined rule package which contains numerical descriptions of the spectral reflectance of alteration minerals; in the SSF filter, each spectral band is bounded by an upper and a lower limit for each alteration mineral. SSF rules were developed for chlorite, kaolinite, alunite, and muscovite to map alteration zones. The validation proved two points: first, selecting a contiguous range of MF values could not identify desirable results; second, unexpectedly, a considerable number of pure pixels was observed at MF scores below the threshold value. The comparison between DMF results and field studies showed an accuracy of 88.51%.
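The contrast between a single MF-score threshold and an SSF-style rule package can be sketched as follows. The MF score here is a simplified projection form, and the per-band reflectance windows are hypothetical, not the study's calibrated rules:

```python
def matched_filter_score(pixel, target, bg_mean):
    # simplified MF score: projection of the background-centred pixel spectrum
    # onto the background-centred target signature (score 1 at the target itself)
    c = [p - m for p, m in zip(pixel, bg_mean)]
    t = [s - m for s, m in zip(target, bg_mean)]
    return sum(ci * ti for ci, ti in zip(c, t)) / sum(ti * ti for ti in t)

def ssf_match(pixel, bounds):
    # SSF-style rule package: every band must fall inside its
    # [lower, upper] reflectance window (windows below are hypothetical)
    return all(lo <= p <= hi for p, (lo, hi) in zip(pixel, bounds))

target = [0.4, 0.7, 0.3]           # hypothetical 3-band mineral signature
bg = [0.2, 0.2, 0.2]               # hypothetical background mean spectrum
kaolinite_bounds = [(0.35, 0.45), (0.60, 0.75), (0.25, 0.35)]
```

A pixel can score above an MF threshold along a direction that mixes several materials, yet fail the per-band windows; conversely a genuinely pure pixel with a modest MF score can still satisfy every window, which matches the article's two validation findings.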
Detection of illicit online sales of fentanyls via Twitter
Mackey, Tim K.; Kalyanam, Janani
2017-01-01
A counterfeit fentanyl crisis is currently underway in the United States. Counterfeit versions of commonly abused prescription drugs laced with fentanyl are being manufactured, distributed, and sold globally, leading to an increase in overdose and death in countries like the United States and Canada. Despite concerns from the U.S. Drug Enforcement Agency regarding covert and overt sale of fentanyls online, no study has examined the role of the Internet and social media on fentanyl illegal marketing and direct-to-consumer access. In response, this study collected and analyzed five months of Twitter data (from June-November 2015) filtered for the keyword “fentanyl” using Amazon Web Services. We then analyzed 28,711 fentanyl-related tweets using text filtering and a machine learning approach called a Biterm Topic Model (BTM) to detect underlying latent patterns or “topics” present in the corpus of tweets. Using this approach we detected a subset of 771 tweets marketing the sale of fentanyls online and then filtered this down to nine unique tweets containing hyperlinks to external websites. Six hyperlinks were associated with online fentanyl classified ads, 2 with illicit online pharmacies, and 1 could not be classified due to traffic redirection. Importantly, the one illicit online pharmacy detected was still accessible and offered the sale of fentanyls and other controlled substances direct-to-consumers with no prescription required at the time of publication of this study. Overall, we detected a relatively small sample of Tweets promoting illegal online sale of fentanyls. However, the detection of even a few online sellers represents a public health danger and a direct violation of law that demands further study. PMID:29259769
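The Biterm Topic Model itself is beyond a short sketch, but the study's final filtering stages (isolating tweets that both market a sale and carry an external hyperlink) can be illustrated with a simple keyword-and-URL rule. The sale terms and example tweets below are hypothetical stand-ins, not the study's actual keyword list or data:

```python
import re

URL_RE = re.compile(r"https?://\S+")
SALE_TERMS = {"buy", "order", "sale", "shipping", "no prescription"}

def flag_marketing_tweets(tweets):
    # keep tweets that mention a sale term AND contain an external hyperlink
    flagged = []
    for t in tweets:
        text = t.lower()
        if URL_RE.search(text) and any(term in text for term in SALE_TERMS):
            flagged.append(t)
    return flagged

tweets = ["Buy fentanyl online, no prescription http://example.com",
          "fentanyl overdose statistics rise again this year"]
hits = flag_marketing_tweets(tweets)
```

In the study this kind of rule-based narrowing came after topic modeling, reducing 28,711 tweets to 771 marketing tweets and then to nine with hyperlinks; a keyword filter alone would be far noisier, which is why the BTM stage matters.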
Detection of Aspens Using High Resolution Aerial Laser Scanning Data and Digital Aerial Images
Säynäjoki, Raita; Packalén, Petteri; Maltamo, Matti; Vehmas, Mikko; Eerikäinen, Kalle
2008-01-01
The aim was to use high resolution Aerial Laser Scanning (ALS) data and aerial images to detect European aspen (Populus tremula L.) from among other deciduous trees. The field data consisted of 14 sample plots of 30 m × 30 m size located in the Koli National Park in North Karelia, Eastern Finland. A Canopy Height Model (CHM) was interpolated from the ALS data with a pulse density of 3.86/m², low-pass filtered using Height-Based Filtering (HBF) and binarized to create the mask needed to separate the ground pixels from the canopy pixels within individual areas. Watershed segmentation was applied to the low-pass filtered CHM in order to create preliminary canopy segments, from which the non-canopy elements were extracted to obtain the final canopy segmentation, i.e. the ground mask was analysed against the canopy mask. A manual classification of aerial images was employed to separate the canopy segments of deciduous trees from those of coniferous trees. Finally, linear discriminant analysis was applied to the correctly classified canopy segments of deciduous trees to classify them into segments belonging to aspen and those belonging to other deciduous trees. The independent variables used in the classification were obtained from the first-pulse ALS point data. The accuracy of discrimination between aspen and other deciduous trees was 78.6%. The independent variables in the classification function were the proportion of vegetation hits, the standard deviation of pulse heights, the accumulated intensity at the 90th percentile and the proportion of laser points reflected at the 60th height percentile. The accuracy of classification corresponded to the validation results of earlier ALS-based studies on the classification of individual deciduous trees to tree species. PMID:27873799
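In the single-feature case with equal priors and pooled variance, linear discriminant analysis reduces to a midpoint-of-means threshold. The toy sketch below uses one hypothetical feature (a stand-in for "proportion of vegetation hits") and made-up per-segment values; the study's classifier combined four such ALS features:

```python
import statistics

def train_threshold(aspen_vals, other_vals):
    # 1-D linear discriminant with equal priors and pooled variance:
    # the decision boundary is the midpoint of the two class means
    m_aspen = statistics.mean(aspen_vals)
    m_other = statistics.mean(other_vals)
    return (m_aspen + m_other) / 2.0, m_aspen < m_other

def classify(x, threshold, aspen_is_lower):
    return "aspen" if (x < threshold) == aspen_is_lower else "other deciduous"

# hypothetical feature values per canopy segment
thr, aspen_lower = train_threshold([0.55, 0.60, 0.58], [0.80, 0.85, 0.78])
```

With several features, the same idea generalizes to projecting each segment's feature vector onto a discriminant direction before thresholding.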
CRF: detection of CRISPR arrays using random forest.
Wang, Kai; Liang, Chun
2017-01-01
CRISPRs (clustered regularly interspaced short palindromic repeats) are particular repeat sequences found in a wide range of bacterial and archaeal genomes. Several tools are available for detecting CRISPR arrays in the genomes of both domains. Here we developed a new web-based CRISPR detection tool named CRF (CRISPR Finder by Random Forest). Unlike other CRISPR detection tools, CRF uses a random forest classifier to filter out invalid CRISPR arrays from all putative candidates, thereby enhancing detection accuracy. In CRF, triplet elements that combine both sequence content and structure information were extracted from CRISPR repeats for classifier training. The classifier achieved high accuracy and sensitivity. Moreover, CRF offers a highly interactive web interface for robust data visualization that is not available in other CRISPR detection tools. After detection, the query sequence, the CRISPR array architecture, and the sequences and secondary structures of CRISPR repeats and spacers can be displayed for visual examination and validation. CRF is freely available at http://bioinfolab.miamioh.edu/crf/home.php.
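The idea of "triplet elements" combining sequence content and structure can be illustrated as follows. This is a hedged sketch: the exact encoding is an assumption, pairing each nucleotide with the paired/unpaired states (from a dot-bracket secondary structure) of a three-base window; CRF's published encoding may differ in detail:

```python
from collections import Counter

def triplet_features(seq, struct):
    """Count 'triplet elements' for a repeat: each inner base is keyed
    by its nucleotide plus the paired ('p') / unpaired ('u') states of
    itself and its two neighbors, as read from a dot-bracket string."""
    state = ["p" if c in "()" else "u" for c in struct]
    feats = Counter()
    for i in range(1, len(seq) - 1):
        feats[seq[i] + "".join(state[i - 1:i + 2])] += 1  # e.g. 'Tpuu'
    return feats

# Toy repeat with a short stem-loop structure (illustrative values)
f = triplet_features("GTTTCAG", "((...))")
```

Such count vectors are the kind of fixed-length feature a random forest can be trained on to separate valid repeats from spurious candidates.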
Uses and misuses of Bayes' rule and Bayesian classifiers in cybersecurity
NASA Astrophysics Data System (ADS)
Bard, Gregory V.
2017-12-01
This paper will discuss the applications of Bayes' rule and Bayesian classifiers in cybersecurity. While the most elementary form of Bayes' rule occurs in undergraduate coursework, there are more complicated forms as well. As an extended example, Bayesian spam filtering is explored; it is in many ways the most triumphant accomplishment of Bayesian reasoning in computer science, as nearly everyone with an email address has a spam folder. Bayesian classifiers have also been responsible for significant cybersecurity research results; yet, because they are not part of the standard curriculum, few in the mathematics or information-technology communities have seen the exact definitions, requirements, and proofs that comprise the subject. Moreover, numerous errors have been made by researchers (described in this paper), stemming from mathematical misunderstandings of conditional independence or other poorly chosen assumptions. Finally, to provide instructors and researchers with real-world examples, 25 published cybersecurity papers that use Bayesian reasoning are given, with 2-4 sentence summaries of the focus and contributions of each paper.
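The Bayesian spam-filtering example can be made concrete with a toy multinomial naive Bayes classifier. The word-independence assumption below is exactly the conditional-independence step the paper warns is easy to misunderstand; the training corpus is invented for illustration:

```python
import math
from collections import Counter

def train_nb(spam_docs, ham_docs):
    """Count word frequencies per class for a toy naive Bayes filter."""
    spam_counts = Counter(w for d in spam_docs for w in d.split())
    ham_counts = Counter(w for d in ham_docs for w in d.split())
    vocab = set(spam_counts) | set(ham_counts)
    return spam_counts, ham_counts, vocab, len(spam_docs), len(ham_docs)

def p_spam(msg, model):
    """Posterior P(spam | msg) via Bayes' rule, assuming words are
    conditionally independent given the class, with Laplace smoothing."""
    spam_counts, ham_counts, vocab, n_spam, n_ham = model
    log_spam = math.log(n_spam / (n_spam + n_ham))
    log_ham = math.log(n_ham / (n_spam + n_ham))
    v, ts, th = len(vocab), sum(spam_counts.values()), sum(ham_counts.values())
    for w in msg.split():
        log_spam += math.log((spam_counts[w] + 1) / (ts + v))
        log_ham += math.log((ham_counts[w] + 1) / (th + v))
    return 1 / (1 + math.exp(log_ham - log_spam))

model = train_nb(["win money now", "free money"],
                 ["meeting at noon", "lunch at noon"])
```

Real filters add feature selection and careful priors, but the Bayes'-rule core is this posterior computation.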
Wu, Allison Chia-Yi; Rifkin, Scott A
2015-03-27
Recent techniques for tagging and visualizing single molecules in fixed or living organisms and cell lines have been revolutionizing our understanding of the spatial and temporal dynamics of fundamental biological processes. However, fluorescence microscopy images are often noisy, and it can be difficult to distinguish a fluorescently labeled single molecule from background speckle. We present a computational pipeline to distinguish the true signal of fluorescently labeled molecules from background fluorescence and noise. We test our technique using the challenging case of wide-field, epifluorescence microscope image stacks from single molecule fluorescence in situ experiments on nematode embryos where there can be substantial out-of-focus light and structured noise. The software recognizes and classifies individual mRNA spots by measuring several features of local intensity maxima and classifying them with a supervised random forest classifier. A key innovation of this software is that, by estimating the probability that each local maximum is a true spot in a statistically principled way, it makes it possible to estimate the error introduced by image classification. This can be used to assess the quality of the data and to estimate a confidence interval for the molecule count estimate, all of which are important for quantitative interpretations of the results of single-molecule experiments. The software classifies spots in these images well, with >95% AUROC on realistic artificial data and outperforms other commonly used techniques on challenging real data. Its interval estimates provide a unique measure of the quality of an image and confidence in the classification.
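The "statistically principled" step described above, turning per-spot probabilities into a count estimate with a confidence interval, can be sketched as follows. Treating each candidate spot as an independent Bernoulli trial with the classifier's estimated probability is the simplifying assumption here, and the probabilities are invented for illustration:

```python
import math

def count_with_ci(probs, z=1.96):
    """Estimate the molecule count as the sum of per-spot probabilities
    (e.g. from a random forest) and attach a normal-approximation
    confidence interval from the Bernoulli variances p*(1-p)."""
    mean = sum(probs)
    var = sum(p * (1 - p) for p in probs)
    half = z * math.sqrt(var)
    return mean, (mean - half, mean + half)

# Five candidate maxima with classifier probabilities of being true spots
est, (lo, hi) = count_with_ci([0.99, 0.95, 0.8, 0.5, 0.1])
```

A wide interval flags an image dominated by ambiguous maxima, which is the quality measure the abstract emphasizes.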
Estimating local scaling properties for the classification of interstitial lung disease patterns
NASA Astrophysics Data System (ADS)
Huber, Markus B.; Nagarajan, Mahesh B.; Leinsinger, Gerda; Ray, Lawrence A.; Wismueller, Axel
2011-03-01
Local scaling properties of texture regions were compared in their ability to classify morphological patterns known as 'honeycombing' that are considered indicative of the presence of fibrotic interstitial lung diseases in high-resolution computed tomography (HRCT) images. For 14 patients with known occurrence of honeycombing, a stack of 70 axial, lung kernel reconstructed images was acquired from HRCT chest exams. 241 regions of interest containing healthy and pathological (89) lung tissue were identified by an experienced radiologist. Texture features were extracted using six properties calculated from gray-level co-occurrence matrices (GLCM), Minkowski Dimensions (MDs), and the estimation of local scaling properties with the Scaling Index Method (SIM). A k-nearest-neighbor (k-NN) classifier and a Multilayer Radial Basis Functions Network (RBFN) were optimized in a 10-fold cross-validation for each texture vector, and the classification accuracy was calculated on independent test sets as a quantitative measure of automated tissue characterization. A Wilcoxon signed-rank test with Bonferroni correction was used to compare the accuracy distributions. The best classification results were obtained by the set of SIM features, which performed significantly better than all the standard GLCM and MD features (p < 0.005) for both classifiers, with the highest accuracy (94.1% and 93.7% for the k-NN and RBFN classifiers, respectively). The best standard texture features were the GLCM features 'homogeneity' (91.8%, 87.2%) and 'absolute value' (90.2%, 88.5%). The results indicate that advanced texture features using local scaling properties can provide superior classification performance in computer-assisted diagnosis of interstitial lung diseases when compared to standard texture analysis methods.
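The GLCM 'homogeneity' feature mentioned above is easy to compute directly. This minimal sketch assumes a single horizontal offset of one pixel and a small number of gray levels; real pipelines average several offsets and directions:

```python
import numpy as np

def glcm_homogeneity(img, levels=4):
    """Build a (0,1)-offset gray-level co-occurrence matrix and return
    the homogeneity feature sum_ij P(i,j) / (1 + |i - j|)."""
    glcm = np.zeros((levels, levels))
    for r in range(img.shape[0]):
        for c in range(img.shape[1] - 1):
            glcm[img[r, c], img[r, c + 1]] += 1
    glcm /= glcm.sum()                      # normalize to probabilities
    i, j = np.indices(glcm.shape)
    return float((glcm / (1.0 + np.abs(i - j))).sum())

uniform = np.zeros((4, 4), dtype=int)               # flat texture
checker = np.indices((4, 4)).sum(axis=0) % 2        # alternating texture
h_uniform = glcm_homogeneity(uniform)
h_checker = glcm_homogeneity(checker)
```

A perfectly flat region scores 1.0, while the alternating texture scores lower, which matches the intuition that homogeneity rewards co-occurrences near the matrix diagonal.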
Polarizing properties and structure of the cuticle of scarab beetles from the Chrysina genus
NASA Astrophysics Data System (ADS)
Fernández del Río, Lía; Arwin, Hans; Järrendahl, Kenneth
2016-07-01
The optical properties of several scarab beetles have been studied previously, but few attempts have been made to compare beetles within the same genus. To determine whether there is any relation between specimens of the same genus, we have studied and classified seven species from the Chrysina genus. The polarization properties were analyzed with Mueller-matrix spectroscopic ellipsometry and the structural characteristics with optical microscopy and scanning electron microscopy. Most of the Chrysina beetles are green colored or have a metallic look (gold or silver). The results show that the green-colored beetles polarize reflected light mainly at off-specular angles. The gold-colored beetles polarize light in a left-handed, near-circular state at specular reflection. The structure of the exoskeleton is a stack of layers that form a cusp-like structure in the green beetles, whereas the layers are parallel to the surface in the case of the gold-colored beetles. The beetle C. gloriosa is green with gold-colored stripes along the elytra and exhibits both types of effects. The results indicate that Chrysina beetles can be classified according to these two major polarization properties.
NASA Astrophysics Data System (ADS)
Sawada, Hiroshi; Daykin, Tyler; Bauer, Bruno; Beg, Farhat
2017-10-01
We have developed an experimental platform to study the material properties of a magnetically compressed cylinder using the 1 MA pulsed power generator Zebra and the 50 TW subpicosecond short-pulse laser Leopard at UNR's Nevada Terawatt Facility. According to an MHD simulation, the strong magnetic fields generated by the 100 ns rise-time Zebra current can quasi-isentropically compress a material to the strongly coupled plasma regime. Taking advantage of the cylindrical geometry, a metal rod can be brought to higher pressures than in the planar geometry. To diagnose the compressed rod with high-precision x-ray measurements, an initial laser-only experiment was carried out to characterize laser-produced x-rays. The interaction of a high-intensity, short-pulse laser with solids produces broadband and monochromatic x-rays with photon energies high enough to probe dense metal rods. Bremsstrahlung was measured with imaging-plate-based filter stack spectrometers, and monochromatic 8.0 keV Cu K-alpha was recorded with an absolutely calibrated Bragg crystal spectrometer. The broadband x-ray source was applied to radiography of thick metal objects, and different filter materials were tested. The experimental results and a design of a coupled experiment will be presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aseev, P., E-mail: pavel.aseev@isom.upm.es, E-mail: gacevic@isom.upm.es; Gačević, Ž., E-mail: pavel.aseev@isom.upm.es, E-mail: gacevic@isom.upm.es; Calleja, E.
2016-06-20
A series of GaN nanowires (NWs) with controlled diameters (160–500 nm) and heights (420–1100 nm) were homoepitaxially grown on three different templates: GaN/Si(111), GaN/AlN/Si(111), and GaN/sapphire(0001). Transmission electron microscopy reveals a strong influence of the NW diameter on the dislocation filtering effect, whereas photoluminescence measurements further relate this effect to the GaN NWs' near-bandgap emission efficiency. Although the templates' quality has some effect on the GaN NWs' optical and structural properties, the NW diameter reduction drives the dislocation filtering effect to the point where a poor GaN template quality becomes negligible. Thus, by a proper optimization of the homoepitaxial GaN NW growth, the propagation of dislocations into the NWs can be greatly prevented, leading to an exceptional crystal quality and a total dominance of the near-bandgap emission over sub-bandgap, defect-related lines, such as basal stacking faults and the so-called unknown exciton (UX) emission. In addition, a correlation between the presence of polarity inversion domain boundaries and the UX emission lines around 3.45 eV is established.
NASA Technical Reports Server (NTRS)
Zukic, Muamer; Torr, Douglas G.
1993-01-01
The application of thin film technology to the vacuum ultraviolet (VUV) wavelength region from 120 nm to 230 nm has not been fully exploited in the past because of absorption effects which complicate the accurate determination of the optical functions of dielectric materials. The problem therefore reduces to that of determining the real and imaginary parts of a complex optical function, namely the frequency dependent refractive index n and extinction coefficient k. We discuss techniques for the inverse retrieval of n and k for dielectric materials at VUV wavelengths from measurements of their reflectance and transmittance. Suitable substrate and film materials are identified for application in the VUV. Such applications include coatings for the fabrication of narrow and broadband filters and beamsplitters. The availability of such devices opens the VUV regime to high resolution photometry, interferometry and polarimetry for both space based and laboratory applications. This chapter deals with the optics of absorbing multilayers, the determination of the optical functions for several useful materials, and the design of VUV multilayer stacks as applied to the design of narrow and broadband reflection and transmission filters and beamsplitters. Experimental techniques are discussed briefly, and several examples of the optical functions derived for selected materials are presented.
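The multilayer-stack design problem mentioned above is normally attacked with the characteristic-matrix (transfer-matrix) method, which the following normal-incidence sketch illustrates. The ambient and substrate indices and the MgF2 quarter-wave example are illustrative assumptions, not materials from the chapter; complex n = n - ik would capture the VUV absorption discussed:

```python
import numpy as np

def stack_reflectance(n_layers, d_layers, wavelength, n_inc=1.0, n_sub=1.52):
    """Normal-incidence reflectance of a dielectric multilayer via the
    characteristic-matrix method; n_layers may be complex to model
    absorbing films."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / wavelength       # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub], dtype=complex)
    r = (n_inc * B - C) / (n_inc * B + C)
    return float(abs(r) ** 2)

R0 = stack_reflectance([], [], 550.0)                         # bare substrate
R1 = stack_reflectance([1.38], [550.0 / (4 * 1.38)], 550.0)   # quarter-wave MgF2
```

With no layers the formula collapses to the Fresnel reflectance of the bare interface, and the quarter-wave layer reduces it, which is the standard antireflection check for a transfer-matrix implementation.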
Seismic Linear Noise Attenuation with Use of Radial Transform
NASA Astrophysics Data System (ADS)
Szymańska-Małysa, Żaneta
2018-03-01
One of the goals of seismic data processing is to attenuate the recorded noise in order to enable correct interpretation of the image. The radial transform has been used as a very effective tool in the attenuation of various types of linear noise, both numerical and real (such as ground roll, direct waves, head waves, guided waves, etc.). The result of transformation from the offset-time (X-T) domain into the apparent velocity-time (R-T) domain is frequency separation between reflections and linear events. In this article synthetic and real seismic shot gathers were examined. One example targeted the far-offset area of a dataset where reflections and noise had similar apparent velocities and frequency bands. Another example was a result of elastic modelling where linear artefacts were produced. Bandpass filtering and scaling executed in the radial domain attenuated all discussed types of linear noise very effectively. After noise reduction, all further processing steps yielded better results, especially velocity analysis, migration and stacking. In all presented cases the signal-to-noise ratio was significantly increased and reflections previously covered by noise were revealed. Power spectra of the filtered seismic records preserved the real dynamics of reflections.
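The X-T to R-T mapping underlying the radial transform samples the gather along straight lines x = v·t through the origin, one line per apparent velocity. A minimal sketch with linear interpolation across offsets follows; the synthetic gather containing a single linear event is an illustration, not data from the article:

```python
import numpy as np

def radial_transform(gather, offsets, dt, velocities):
    """Map a shot gather u(x, t) (shape: n_offsets x n_samples) into
    the radial domain by sampling along x = v * t for each velocity."""
    n_t = gather.shape[1]
    t = np.arange(n_t) * dt
    out = np.zeros((len(velocities), n_t))
    for k, v in enumerate(velocities):
        x = v * t                    # offset crossed by this radial trace
        for j in range(n_t):
            out[k, j] = np.interp(x[j], offsets, gather[:, j])
    return out

# Synthetic gather: one linear event with apparent velocity 1000 m/s,
# crossing offset 100*i m at time 0.1*i s
offsets = np.arange(0.0, 1001.0, 100.0)
gather = np.eye(11)
R = radial_transform(gather, offsets, dt=0.1, velocities=[500.0, 1000.0])
```

The event maps onto a single flat radial trace at its own velocity, which is why bandpass filtering in R-T separates it cleanly from reflections.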
Model of biological quantum logic in DNA.
Mihelic, F Matthew
2013-08-02
The DNA molecule has properties that allow it to act as a quantum logic processor. It has been demonstrated that there is coherent conduction of electrons longitudinally along the DNA molecule through pi stacking interactions of the aromatic nucleotide bases, and it has also been demonstrated that electrons moving longitudinally along the DNA molecule are subject to a very efficient electron spin filtering effect as the helicity of the DNA molecule interacts with the spin of the electron. This means that, in DNA, electrons are coherently conducted along a very efficient spin filter. Coherent electron spin is held in a logically and thermodynamically reversible chiral symmetry between the C2-endo and C3-endo enantiomers of the deoxyribose moiety in each nucleotide, which enables each nucleotide to function as a quantum gate. The symmetry break that provides for quantum decision in the system is determined by the spin direction of an electron that has an orbital angular momentum that is sufficient to overcome the energy barrier of the double well potential separating the C2-endo and C3-endo enantiomers, and that enantiomeric energy barrier is appropriate to the Landauer limit of the energy necessary to randomize one bit of information.
Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms
Zhang, Zhiwen; Duan, Feng; Zhou, Xin; Meng, Zixuan
2017-01-01
Motor imagery (MI) electroencephalogram (EEG) signals are widely applied in brain-computer interfaces (BCI). However, the MI states that can be classified are limited, and their classification accuracy rates are low because of the nonlinear and nonstationary characteristics of the signals. This study proposes a novel MI pattern recognition system based on complex algorithms for classifying MI EEG signals. In electrooculogram (EOG) artifact preprocessing, band-pass filtering is performed to obtain the frequency band of MI-related signals, and then canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is used for EOG artifact preprocessing. We propose a regularized common spatial pattern (R-CSP) algorithm for EEG feature extraction by incorporating the principle of generic learning. A new classifier combining the K-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four anisomerous states, namely, imaginary movements with the left hand, right foot, and right shoulder and the resting state. The highest classification accuracy rate is 92.5%, and the average classification accuracy rate is 87%. The proposed complex algorithm identification method can significantly improve the identification rate of the minority samples and the overall classification performance. PMID:28874909
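The CSP core of the R-CSP feature extractor can be sketched via a generalized eigendecomposition of the class covariance matrices. The simple ridge term below stands in for regularization; the paper's generic-learning regularizer is more elaborate, and the two-channel toy data is an assumption for illustration:

```python
import numpy as np

def csp_filters(X1, X2, reg=0.0):
    """Common Spatial Patterns for two-class EEG: eigenvectors of
    inv(C1 + C2 + reg*I) @ C1, sorted so the first filter maximizes
    class-1 variance relative to class 2."""
    C1 = sum(np.cov(x) for x in X1) / len(X1)
    C2 = sum(np.cov(x) for x in X2) / len(X2)
    Cc = C1 + C2 + reg * np.eye(C1.shape[0])
    evals, evecs = np.linalg.eig(np.linalg.solve(Cc, C1))
    order = np.argsort(evals.real)[::-1]
    return evecs[:, order].real

rng = np.random.default_rng(0)
# Two channels: class 1 has high variance on channel 0, class 2 on channel 1
X1 = [np.vstack([3 * rng.standard_normal(200), rng.standard_normal(200)])
      for _ in range(10)]
X2 = [np.vstack([rng.standard_normal(200), 3 * rng.standard_normal(200)])
      for _ in range(10)]
W = csp_filters(X1, X2)
```

Log-variances of the spatially filtered trials are the features that would then feed the combined KNN/SVM classifier.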
Kim, Jongin; Lee, Boreom
2018-05-07
Different modalities such as structural MRI, FDG-PET, and CSF have complementary information, which is likely to be very useful for diagnosis of AD and MCI. Therefore, it is possible to develop a more effective and accurate AD/MCI automatic diagnosis method by integrating the complementary information of different modalities. In this paper, we propose the multi-modal sparse hierarchical extreme learning machine (MSH-ELM). We used the volume and mean intensity extracted from 93 regions of interest (ROIs) as features of MRI and FDG-PET, respectively, and used p-tau, t-tau, and Aβ42 as CSF features. In detail, a high-level representation was individually extracted from each of MRI, FDG-PET, and CSF using a stacked sparse extreme learning machine auto-encoder (sELM-AE). Then, another stacked sELM-AE was devised to acquire a joint hierarchical feature representation by fusing the high-level representations obtained from each modality. Finally, we classified the joint hierarchical feature representation using a kernel-based extreme learning machine (KELM). The results of MSH-ELM were compared with those of a conventional ELM, a single-kernel support vector machine (SK-SVM), a multiple-kernel support vector machine (MK-SVM) and a stacked auto-encoder (SAE). Performance was evaluated through 10-fold cross-validation. In the AD vs. HC and MCI vs. HC classification problems, the proposed MSH-ELM method showed mean balanced accuracies of 96.10% and 86.46%, respectively, which is much better than those of the competing methods. In summary, the proposed algorithm exhibits consistently better performance than SK-SVM, ELM, MK-SVM and SAE in the two binary classification problems (AD vs. HC and MCI vs. HC). © 2018 Wiley Periodicals, Inc.
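The extreme learning machine that underlies the sELM-AE and KELM components is simple at its core: a fixed random hidden layer followed by a closed-form linear readout. This sketch shows the basic (kernel-free) variant on toy data; the paper's kernel-based and stacked sparse versions build on the same idea:

```python
import numpy as np

def train_elm(X, y, n_hidden=50, ridge=1e-3, seed=0):
    """Basic ELM: random fixed hidden weights, tanh activation, and a
    ridge-regression solve for the output weights (no backprop)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def predict_elm(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = np.sign(X[:, 0] * X[:, 1])        # XOR-like labels, not linearly separable
model = train_elm(X, y)
acc = np.mean(np.sign(predict_elm(X, model)) == y)
```

Training is a single linear solve, which is why stacking several such modules (as in the sELM-AE) remains cheap compared to gradient-trained deep networks.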
Davila, Juan Carlos; Cretu, Ana-Maria; Zaremba, Marek
2017-06-07
The design of multiple human activity recognition applications in areas such as healthcare, sports and safety relies on wearable sensor technologies. However, when making decisions based on the data acquired by such sensors in practical situations, several factors related to sensor data alignment, data losses, and noise, among other experimental constraints, deteriorate data quality and model accuracy. To tackle these issues, this paper presents a data-driven iterative learning framework to classify human locomotion activities such as walk, stand, lie, and sit, extracted from the Opportunity dataset. Data acquired by twelve 3-axial acceleration sensors and seven inertial measurement units are initially de-noised using a two-stage consecutive filtering approach combining a band-pass Finite Impulse Response (FIR) filter and a wavelet filter. A series of statistical parameters are extracted from the kinematical features, including the principal components and singular value decomposition of roll, pitch, yaw and the norm of the axial components. The novel iterative learning procedure is then applied in order to minimize the number of samples required to classify human locomotion activities. Only those samples that are most distant from the centroids of the data clusters, according to a measure presented in the paper, are selected as candidates for the training dataset. The newly built dataset is then used to train an SVM multi-class classifier that produces the lowest prediction error. The proposed learning framework ensures a high level of robustness to variations in the quality of input data, while using a much lower number of training samples and therefore a much shorter training time, which is an important consideration given the large size of the dataset.
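The centroid-distance sample selection step can be sketched as follows. Euclidean distance to the per-class centroid is an assumption standing in for the paper's distance measure, and the tiny dataset is invented for illustration:

```python
import numpy as np

def select_candidates(X, labels, k):
    """For each class, pick the k samples farthest from the class
    centroid as candidates for the reduced training set."""
    chosen = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = X[idx].mean(axis=0)
        dist = np.linalg.norm(X[idx] - centroid, axis=1)
        chosen.extend(idx[np.argsort(dist)[::-1][:k]])
    return sorted(chosen)

X = np.array([[0.0, 0], [0.1, 0], [5, 0],     # class 0: one outlying sample
              [0, 10], [0, 10.1], [0, 20]])   # class 1: one outlying sample
labels = np.array([0, 0, 0, 1, 1, 1])
picked = select_candidates(X, labels, k=1)
```

Selecting boundary-like samples rather than redundant cluster cores is what lets the SVM reach comparable accuracy with far fewer training examples.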
NASA Astrophysics Data System (ADS)
Luo, Yiping; Jiang, Ting; Gao, Shengli; Wang, Xin
2010-10-01
This paper presents a new approach for detecting building footprints from a combination of a registered aerial image with multispectral bands and airborne laser scanning data synchronously obtained by a Leica-Geosystems ALS40 and an Applanix DACS-301 on the same platform. A two-step method for building detection was presented, consisting of selecting 'building' candidate points and then classifying the candidate points. A digital surface model (DSM) derived from last-pulse laser scanning data was first filtered, and the laser points were classified into the classes 'ground' and 'building or tree' based on a mathematical morphology filter. Then, the 'ground' points were resampled into a digital elevation model (DEM), and a normalized DSM (nDSM) was generated from the DEM and DSM. The candidate points were selected from the 'building or tree' points by height value and an area threshold in the nDSM. The candidate points were further classified into building points and tree points using the support vector machine (SVM) classification method. Two classification tests were carried out, using features only from the laser scanning data and associated features from the two input data sources. The features included height, height finite difference, RGB band values, and so on. The RGB values of the points were acquired by matching the laser scanning data and the image using the collinearity equations. The features of the training points were presented as input data for the SVM classification method, and cross validation was used to select the best classification parameters. The decision function could be constructed from the classification parameters, and the class of each candidate point was determined by the decision function. The results showed that the associated features from the two input data sources were superior to features only from the laser scanning data. An accuracy of more than 90% was achieved for buildings with the first kind of features.
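The morphological ground filtering and nDSM steps above can be shown in miniature on a raster: a grey-level opening (erosion then dilation) of the DSM approximates the DEM, their difference is the nDSM, and a height threshold yields 'building or tree' candidates. The 3×3 window and 2.5 m threshold are illustrative assumptions; real filters adapt the window to the largest expected object:

```python
import numpy as np

def _window_views(a, size):
    p = size // 2
    padded = np.pad(a, p, mode="edge")
    return [padded[i:i + a.shape[0], j:j + a.shape[1]]
            for i in range(size) for j in range(size)]

def grey_erode(a, size=3):
    return np.min(_window_views(a, size), axis=0)

def grey_dilate(a, size=3):
    return np.max(_window_views(a, size), axis=0)

def building_candidates(dsm, height_thresh=2.5, size=3):
    """Opening of the DSM approximates the bare-earth DEM; pixels where
    nDSM = DSM - DEM exceeds the threshold become candidates."""
    dem = grey_dilate(grey_erode(dsm, size), size)
    ndsm = dsm - dem
    return ndsm >= height_thresh

dsm = np.full((7, 7), 100.0)
dsm[3, 3] = 108.0       # one small raised object on flat terrain
mask = building_candidates(dsm)
```

The small raised object survives as the only candidate pixel; the SVM stage would then separate such candidates into buildings and trees using the combined laser and image features.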
Xing, Rong; Zhou, Lijun; Xie, Lin; Hao, Kun; Rao, Tai; Wang, Qian; Ye, Wei; Fu, Hanxu; Wang, Xinwen; Wang, Guangji; Liang, Yan
2015-03-31
The present work contributes to the development of a powerful technical platform to rapidly identify and classify complicated components and metabolites of traditional Chinese medicines. In this process, notoginsenosides, the main active ingredients in Panax notoginseng, were chosen as model compounds. First, the fragmentation patterns, diagnostic product ions and neutral losses of each subfamily of notoginsenosides were summarized by collision-induced dissociation analysis of representative authentic standards. Next, in order to maximally cover low-concentration components which could otherwise be omitted by the previous diagnostic fragment-ion method using only a single product ion of notoginsenosides, a multiple product ions filtering strategy was proposed and utilized to identify and classify both non-target and target notoginsenosides of P. notoginseng extract (in vitro). With this strategy, 13 protopanaxadiol-type notoginsenosides and 30 protopanaxatriol-type notoginsenosides were efficiently extracted. Then, a neutral loss filtering technique was employed to trace prototype components and metabolites in rats (in vivo), since diagnostic product ions might shift and therefore become unpredictable when metabolic reactions occur on the mother skeleton of notoginsenosides. After comparing the constituent profiles in vitro and in vivo, 62 drug-related components were identified from rat feces, and these components were classified into 27 prototype compounds and 35 metabolites. Lastly, all the metabolites were successfully correlated to their parent compounds based on the chemicalome-metabolome matching approach previously built by our group. This study provides a generally applicable approach to global metabolite identification for complicated components in complex matrices. Copyright © 2015 Elsevier B.V. All rights reserved.
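Neutral loss filtering itself is a simple screen: flag every MS/MS spectrum that contains a product ion at (precursor m/z - loss mass) within a tolerance. The 162.05 Da glucose-unit loss and the spectra below are typical ginsenoside-style examples chosen for illustration, not values from the paper:

```python
def neutral_loss_hits(spectra, loss_mass, tol=0.01):
    """Return precursors of spectra showing the given neutral loss:
    a product ion within tol of (precursor - loss_mass)."""
    hits = []
    for precursor, products in spectra:
        target = precursor - loss_mass
        if any(abs(p - target) <= tol for p in products):
            hits.append(precursor)
    return hits

spectra = [
    (945.54, [783.49, 621.44, 459.38]),   # successive 162.05 Da losses
    (500.00, [321.20, 180.10]),           # no such loss
]
found = neutral_loss_hits(spectra, loss_mass=162.05)
```

Because the loss mass is a property of the sugar moiety rather than of the aglycone skeleton, this screen keeps working even when metabolism shifts the diagnostic product-ion masses, which is exactly why the authors switch to it in vivo.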
Supervised Outlier Detection in Large-Scale Mvs Point Clouds for 3d City Modeling Applications
NASA Astrophysics Data System (ADS)
Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.
2018-05-01
We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are the varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results: although most outliers can be identified correctly (high recall), many inliers are erroneously removed, too (low precision). This aggravates 3D object reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roof, ground, vegetation, fields, and water). The performance of the learned filtering is evaluated on several large SfM point clouds of cities. The results confirm our underlying assumption that discriminatively learning inlier-outlier distributions does improve precision over global heuristics, by up to ≈ 12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈ 10 percentage points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
Halder, Sebastian; Bensch, Michael; Mellinger, Jürgen; Bogdan, Martin; Kübler, Andrea; Birbaumer, Niels; Rosenstiel, Wolfgang
2007-01-01
We propose a combination of blind source separation (BSS) and independent component analysis (ICA) (signal decomposition into artifacts and nonartifacts) with support vector machines (SVMs) (automatic classification) that are designed for online usage. In order to select a suitable BSS/ICA method, three ICA algorithms (JADE, Infomax, and FastICA) and one BSS algorithm (AMUSE) are evaluated to determine their ability to isolate electromyographic (EMG) and electrooculographic (EOG) artifacts into individual components. An implementation of the selected BSS/ICA method with SVMs trained to classify EMG and EOG artifacts, which enables the usage of the method as a filter in measurements with online feedback, is described. This filter is evaluated on three BCI datasets as a proof-of-concept of the method. PMID:18288259
NASA Astrophysics Data System (ADS)
Takadama, Keiki; Hirose, Kazuyuki; Matsushima, Hiroyasu; Hattori, Kiyohiko; Nakajima, Nobuo
This paper proposes a sleep stage estimation method that can provide an accurate estimation for each person without attaching any devices to the human body. In particular, our method learns the appropriate multiple band-pass filters to extract the specific wave pattern of the heartbeat, which is required to estimate the sleep stage. For an accurate estimation, this paper employs a Learning Classifier System (LCS) as the data-mining technique and extends it to estimate the sleep stage. Extensive experiments on five subjects in varying states of health confirm the following implications: (1) the proposed method can provide more accurate sleep stage estimation than the conventional method, and (2) the sleep stage estimation calculated by the proposed method is robust regardless of the physical condition of the subject.
A novel deep learning approach for classification of EEG motor imagery signals.
Tabar, Yousef Rezaei; Halici, Ugur
2017-02-01
Signal classification is an important issue in brain computer interface (BCI) systems. Deep learning approaches have been used successfully in many recent studies to learn features and classify different types of data. However, the number of studies that employ these approaches in BCI applications is very limited. In this study we aim to use deep learning methods to improve the classification performance of EEG motor imagery signals. We investigate convolutional neural networks (CNN) and stacked autoencoders (SAE) to classify EEG motor imagery signals. A new form of input is introduced to combine the time, frequency and location information extracted from the EEG signal, and it is used in a CNN with one 1D convolutional layer and one max-pooling layer. We also propose a new deep network combining CNN and SAE: the features extracted by the CNN are classified through the deep network SAE. The classification performance obtained by the proposed method on BCI competition IV dataset 2b in terms of kappa value is 0.547, a 9% improvement over the winning algorithm of the competition. Our results show that deep learning methods provide better classification performance compared to other state-of-the-art approaches. These methods can be applied successfully to BCI systems where the amount of data is large due to daily recording.
Ensemble Semi-supervised Frame-work for Brain Magnetic Resonance Imaging Tissue Segmentation
Azmi, Reza; Pishgoo, Boshra; Norozi, Narges; Yeganeh, Samira
2013-01-01
Brain magnetic resonance image (MRI) tissue segmentation is one of the most important parts of clinical diagnostic tools. Pixel classification methods have frequently been used in image segmentation with both supervised and unsupervised approaches. Supervised segmentation methods lead to high accuracy, but they need a large amount of labeled data, which is hard, expensive, and slow to obtain; moreover, they cannot use unlabeled data to train classifiers. On the other hand, unsupervised segmentation methods have no prior knowledge and lead to a low level of performance. However, semi-supervised learning, which uses a few labeled data together with a large amount of unlabeled data, achieves higher accuracy with less trouble. In this paper, we propose an ensemble semi-supervised framework for segmenting brain MRI tissues that uses the results of several semi-supervised classifiers simultaneously. Selecting appropriate classifiers has a significant role in the performance of this framework. Hence, we present two semi-supervised algorithms, expectation filtering maximization and MCo_Training, which are improved versions of the semi-supervised methods expectation maximization and Co_Training and increase segmentation accuracy. Afterward, we use these improved classifiers together with a graph-based semi-supervised classifier as components of the ensemble framework. Experimental results show that the segmentation performance of this approach is higher than that of both the supervised methods and the individual semi-supervised classifiers. PMID:24098863
NASA Astrophysics Data System (ADS)
Huo, Xiaoming; Elad, Michael; Flesia, Ana G.; Muise, Robert R.; Stanfill, S. Robert; Friedman, Jerome; Popescu, Bogdan; Chen, Jihong; Mahalanobis, Abhijit; Donoho, David L.
2003-09-01
In target recognition applications of discriminant or classification analysis, each 'feature' is the result of convolving an image with a filter, which may be derived from a feature vector. It is important to use relatively few features. We analyze an optimal reduced-rank classifier in the two-class situation, assuming each population is Gaussian with zero mean, so that the classes differ only through their covariance matrices Σ1 and Σ2. The following matrix is considered: Λ = (Σ1+Σ2)^(-1/2) Σ1 (Σ1+Σ2)^(-1/2). We show that the k eigenvectors of this matrix whose eigenvalues are most different from 1/2 offer the best rank-k approximation to the maximum likelihood classifier. The matrix Λ and its eigenvectors were introduced by Fukunaga and Koontz; this analysis therefore gives a new interpretation of the well-known Fukunaga-Koontz transform. The optimality promised by this method holds if the two populations are exactly Gaussian with the same means. To check the applicability of this approach to real data, an experiment was performed in which several 'modern' classifiers were applied to infrared ATR data. In these experiments, a reduced-rank classifier, Tuned Basis Functions, outperformed the others. The competitive performance of the optimal reduced-rank quadratic classifier suggests that, at least for classification purposes, the imagery data behaves in a nearly Gaussian fashion.
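The construction above can be sketched directly: form Λ from the two class covariances and keep the k eigenvectors whose eigenvalues lie farthest from 1/2. The random covariance matrices below are illustrative only:

```python
import numpy as np

def fukunaga_koontz_basis(S1, S2, k):
    """Rank-k discriminant basis from two class covariances S1, S2.
    Builds Lambda = (S1+S2)^(-1/2) S1 (S1+S2)^(-1/2) and keeps the k
    eigenvectors whose eigenvalues are farthest from 1/2."""
    S = S1 + S2
    w, V = np.linalg.eigh(S)                     # S is symmetric positive definite
    S_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    Lam = S_inv_sqrt @ S1 @ S_inv_sqrt
    lam, U = np.linalg.eigh(Lam)
    order = np.argsort(np.abs(lam - 0.5))[::-1]  # most different from 1/2 first
    return U[:, order[:k]], lam[order[:k]]

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); S1 = A @ A.T + np.eye(5)
B = rng.standard_normal((5, 5)); S2 = B @ B.T + np.eye(5)
U, lam = fukunaga_koontz_basis(S1, S2, k=2)
```

Note that all eigenvalues of Λ lie in (0, 1), since both Λ and I−Λ are positive definite; eigenvalues near 0 or 1 indicate directions where one class dominates the other.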
Design and Implementation of an Operations Module for the ARGOS Paperless Ship System
1989-06-01
A. OPERATIONS STACK SCRIPTS
SCRIPTS FOR STACK: Operations
** BACKGROUND #1: Operations **
  on openStack
    hide message box
    show menuBar
    pass openStack
  end openStack
** CARD #1, BUTTON #1: Up **
  on mouseUp
    visual effect zoom out
    go to card id 10931 of stack argos
  end mouseUp
** CARD #1, BUTTON #2 ...
SCRIPTS FOR STACK: Reports
** BACKGROUND #1: Operations **
  on openStack
    hide message box
    show menuBar
    pass openStack
  end openStack
** CARD #1 ...
Mining adverse drug reactions from online healthcare forums using hidden Markov model.
Sampathkumar, Hariprasad; Chen, Xue-wen; Luo, Bo
2014-10-23
Adverse Drug Reactions are one of the leading causes of injury or death among patients undergoing medical treatments. Not all Adverse Drug Reactions are identified before a drug is made available in the market. Current post-marketing drug surveillance methods, which are based purely on voluntary spontaneous reports, are unable to provide the early indications necessary to prevent the occurrence of such injuries or fatalities. The objective of this research is to extract reports of adverse drug side-effects from messages in online healthcare forums and use them as early indicators to assist in post-marketing drug surveillance. We treat the task of extracting adverse side-effects of drugs from healthcare forum messages as a sequence labeling problem and present a Hidden Markov Model (HMM)-based Text Mining system that can be used to classify a message as containing drug side-effect information and then extract the adverse side-effect mentions from it. A manually annotated dataset from http://www.medications.com is used in the training and validation of the HMM-based Text Mining system. A 10-fold cross-validation on the manually annotated dataset yielded on average an F-Score of 0.76 from the HMM Classifier, in comparison to 0.575 from the Baseline classifier. Without the Plain Text Filter component as a part of the Text Processing module, the F-Score of the HMM Classifier was reduced to 0.378 on average, while absence of the HTML Filter component was found to have no impact. Reducing the Drug names dictionary size by half, on average reduced the F-Score of the HMM Classifier to 0.359, while a similar reduction to the side-effects dictionary yielded an F-Score of 0.651 on average. Adverse side-effects mined from http://www.medications.com and http://www.steadyhealth.com were found to match the Adverse Drug Reactions on the Drug Package Labels of several drugs. In addition, some novel adverse side-effects, which can be potential Adverse Drug Reactions, were also identified. The results from the HMM-based Text Miner are encouraging enough to pursue further enhancements to this approach. The mined novel side-effects can act as early indicators for health authorities to help focus their efforts in post-marketing drug surveillance.
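The decoding step of an HMM sequence labeler like the one described can be sketched with a generic Viterbi routine. The two-state "background vs. side-effect mention" model and its probabilities below are toy assumptions, not the paper's trained parameters:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely state sequence for observation indices `obs`,
    given start probabilities pi, transition matrix A (from, to) and
    emission matrix B (state, symbol). Works in log space."""
    n_states = len(pi)
    T = len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)   # (from, to)
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy tagger: state 0 = background word, state 1 = side-effect mention;
# symbol 0 = ordinary word, symbol 1 = symptom-like word.
pi = np.array([0.9, 0.1])
A = np.array([[0.8, 0.2], [0.3, 0.7]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 1, 1, 0], pi, A, B))  # [0, 1, 1, 0]
```

In the actual system, the symbol stream would come from the text-processing and dictionary-lookup stages, and the decoded state sequence would mark candidate side-effect spans.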
SaRAD: a Simple and Robust Abbreviation Dictionary.
Adar, Eytan
2004-03-01
Due to recent interest in the use of textual material to augment traditional experiments, it has become necessary to automatically cluster, classify and filter natural language information. The Simple and Robust Abbreviation Dictionary (SaRAD) provides an easy-to-implement, high-performance tool for the construction of a biomedical symbol dictionary. The algorithms, applied to the MEDLINE document set, result in a high-quality dictionary and toolset to disambiguate abbreviation symbols automatically.
Zhou, Bangyan; Wu, Xiaopei; Lv, Zhao; Zhang, Lei; Guo, Xiaojin
2016-01-01
Independent component analysis (ICA) is a promising spatial filtering method that can separate motor-related independent components (MRICs) from multichannel electroencephalogram (EEG) signals. However, unpredictable burst interferences may significantly degrade the performance of an ICA-based brain-computer interface (BCI) system. In this study, we propose a new algorithmic framework that addresses this issue by combining a single-trial-based ICA filter with a zero-training classifier. We developed a two-round data selection method to automatically identify badly corrupted EEG trials in the training set; the "high quality" training trials were then used to optimize the ICA filter. In addition, we propose an accuracy-matrix method to locate artifact data segments within a single trial, and investigate which types of artifacts influence the performance of ICA-based motor imagery BCIs (MIBCIs). Twenty-six EEG datasets of three-class motor imagery were used to validate the proposed methods, and the classification accuracies were compared with those obtained by the widely used common spatial pattern (CSP) spatial filtering algorithm. The experimental results demonstrate that the proposed optimization strategy effectively improves the stability, practicality and classification performance of ICA-based MIBCIs. The study suggests that rational use of ICA may be crucial in building a practical ICA-based MIBCI system.
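The two-round data selection step is described only at a high level. Its spirit, flagging badly corrupted training trials before fitting the ICA filter, might be sketched as a simple outlier screen on per-trial signal power; the statistic and threshold below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def screen_trials(trials, z_max=3.0):
    """Keep trials whose mean signal power is not an outlier across the
    training set. trials: (n_trials, n_channels, n_samples);
    returns a boolean keep-mask. z_max is an illustrative threshold."""
    power = (trials ** 2).mean(axis=(1, 2))
    z = (power - power.mean()) / power.std()
    return np.abs(z) <= z_max

rng = np.random.default_rng(0)
trials = rng.standard_normal((21, 8, 128))
trials[7] *= 20.0                 # simulate one badly corrupted trial
keep = screen_trials(trials)
print(int(keep.sum()))  # 20
```

Only the surviving trials would then be passed to the ICA decomposition, keeping burst artifacts from distorting the unmixing matrix.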
Scene-Aware Adaptive Updating for Visual Tracking via Correlation Filters
Zhang, Sirou; Qiao, Xiaoya
2017-01-01
In recent years, visual object tracking has been widely used in military guidance, human-computer interaction, road traffic, scene monitoring and many other fields. Tracking algorithms based on correlation filters have shown good performance in terms of accuracy and tracking speed. However, their performance is not satisfactory in scenes with scale variation, deformation, and occlusion. In this paper, we propose a scene-aware adaptive updating mechanism for visual tracking via a kernel correlation filter (KCF). First, a low-complexity scale estimation method is presented, in which weighted responses at five candidate scales determine the final target scale. Then, an adaptive updating mechanism is presented based on scene classification: we classify video scenes into four categories by video content analysis and, according to the classified scene, adaptively update the kernel correlation filter to improve the robustness of the tracker, especially in scenes with scale variation, deformation, and occlusion. We evaluate our tracker on the CVPR2013 benchmark. The results obtained with the proposed algorithm improve on those of the KCF tracker by 33.3%, 15%, 6%, 21.9% and 19.8% in scenes with scale variation, partial or long-term large-area occlusion, deformation, fast motion and out-of-view, respectively. PMID:29140311
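The scale estimation step, weighting correlation responses over a small pool of candidate scales, might look like the following sketch. The five scale factors and the penalty on non-unit scales are illustrative assumptions, not the paper's exact weighting:

```python
import numpy as np

def select_scale(responses, scales, penalty=0.95):
    """Pick the target scale from a scale pool. `responses` is a list of
    correlation-response maps, one per candidate scale; each scale's peak
    response is weighted, with non-unit scales slightly penalized to
    discourage spurious scale jumps (illustrative heuristic)."""
    peaks = np.array([r.max() for r in responses])
    weights = np.where(np.isclose(scales, 1.0), 1.0, penalty)
    return float(scales[int((peaks * weights).argmax())])

scales = np.array([0.95, 0.975, 1.0, 1.025, 1.05])
responses = [np.array([0.50]), np.array([0.60]), np.array([0.70]),
             np.array([0.72]), np.array([0.55])]
print(select_scale(responses, scales))  # 1.0
```

Here the slightly higher raw peak at scale 1.025 loses to the unit scale after weighting, so the tracker keeps its current size unless a scaled response wins clearly.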
NASA Astrophysics Data System (ADS)
Liu, Chanjuan; van Netten, Jaap J.; Klein, Marvin E.; van Baal, Jeff G.; Bus, Sicco A.; van der Heijden, Ferdi
2013-12-01
Early detection of (pre-)signs of ulceration on a diabetic foot is valuable for clinical practice. Hyperspectral imaging is a promising technique for the detection and classification of such (pre-)signs. However, the number of spectral bands should be limited to avoid overfitting, which is critical for pixel classification with hyperspectral image data. The goal was to design a detector/classifier based on spectral imaging (SI) with a small number of optical bandpass filters; the performance and stability of the design were also investigated. The selection of the bandpass filters boils down to a feature selection problem. A dataset was built containing reflectance spectra of 227 skin spots from 64 patients, measured with a spectrometer; each skin spot was annotated manually by clinicians as "healthy" or as a specific (pre-)sign of ulceration. Statistical analysis of the dataset showed that the number of required filters is between 3 and 7, depending on additional constraints on the filter set. The stability analysis revealed that shot noise was the most critical factor affecting classification performance, and indicated that its impact could be avoided in future SI systems with a camera sensor whose saturation level is higher than 10^6, or by post-image processing.
Enzyme-linked immunoassay for dengue virus IgM and IgG antibodies in serum and filter paper blood.
Tran, Thanh Nga T; de Vries, Peter J; Hoang, Lan Phuong; Phan, Giao T; Le, Hung Q; Tran, Binh Q; Vo, Chi Mai T; Nguyen, Nam V; Kager, Piet A; Nagelkerke, Nico; Groen, Jan
2006-01-25
The reproducibility of dengue IgM and IgG ELISA was studied in serum and filter paper blood spots from Vietnamese febrile patients. 781 pairs of acute (t0) and convalescent sera, obtained after three weeks (t3), and 161 corresponding pairs of filter paper blood spots were tested with ELISA for dengue IgG and IgM. 74 serum pairs were tested again in another laboratory with similar methods, after a mean of 252 days. Cases were classified as no dengue (10%), past dengue (55%), acute primary (7%) or secondary (28%) dengue. Significant differences between the two laboratories' results were found, leading to different diagnostic classifications (kappa 0.46, p < 0.001). Filter paper results correlated poorly with serum values, being more variable and lower, with a mean (95% CI) difference of 0.82 (0.36 to 1.28) for IgMt3, 0.94 (0.51 to 1.37) for IgGt0 and 0.26 (-0.20 to 0.71) for IgGt3. This also led to differences in diagnostic classification (kappa 0.44, p < 0.001). The duration of storage of frozen serum and dried filter papers, sealed in nylon bags in an air-conditioned room, had no significant effect on the ELISA results. Dengue virus IgG antibodies in serum and on filter papers were thus not affected by duration of storage, but were subject to inter-laboratory variability. Dengue virus IgM antibody levels measured in serum reconstituted from blood spots on filter papers were lower than in serum, particularly in the acute phase of disease, which limits the value of this method for diagnostic confirmation in individual patients with dengue virus infections. However, dengue virus IgG antibodies eluted from filter paper can be used for cross-sectional sero-prevalence studies.
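The kappa statistics reported above measure chance-corrected agreement between two sets of categorical diagnoses. Cohen's kappa can be computed as follows; this is a generic sketch with toy labels, not tied to this study's data:

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for agreement between two raters' categorical labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.unique(np.concatenate([a, b]))
    p_o = (a == b).mean()                                   # observed agreement
    p_e = sum((a == c).mean() * (b == c).mean() for c in cats)  # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Toy example: two labs agree on 3 of 4 cases.
lab_a = [0, 0, 1, 1]
lab_b = [0, 1, 1, 1]
print(cohens_kappa(lab_a, lab_b))  # 0.5
```

Values around 0.4-0.5, as reported for the inter-laboratory and serum-vs-filter-paper comparisons, correspond to only moderate agreement despite superficially similar raw results.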