NASA Astrophysics Data System (ADS)
Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei
2017-11-01
In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method that combines a binaural sound localization method and blind source separation method is proposed. During a diesel engine noise and vibration test, because a diesel engine has many complex noise sources, a lead covering method was carried out on a diesel engine to isolate other interference noise from the No. 1-5 cylinders. Only the No. 6 cylinder parts were left bare. Two microphones that simulated the human ears were utilized to measure the radiated noise signals 1 m away from the diesel engine. First, a binaural sound localization method was adopted to separate the noise sources that are in different places. Then, for noise sources that are in the same place, a blind source separation method is utilized to further separate and identify the noise sources. Finally, a coherence function method, continuous wavelet time-frequency analysis method, and prior knowledge of the diesel engine are combined to further identify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine. The frequency of the combustion noise and the piston slap noise are respectively concentrated at 4350 Hz and 1988 Hz. Compared with the blind source separation method, the proposed method has superior separation and identification effects, and the separation results have fewer interference components from other noise.
Chen, Haibin; Yang, Yan; Jiang, Wei; Song, Mengjie; Wang, Ying; Xiang, Tiantian
2017-02-01
A case study on the source separation of municipal solid waste (MSW) was performed in Changsha, the capital city of Hunan Province, China. The objective of this study is to analyze the effects of different separation methods and compare their effects with citizens' attitudes and inclination. An effect evaluation method based on accuracy rate and miscellany rate was proposed to study the performance of different separation methods. A large-scale questionnaire survey was conducted to determine citizens' attitudes and inclination toward source separation. Survey result shows that the vast majority of respondents hold consciously positive attitudes toward participation in source separation. Moreover, the respondents ignore the operability of separation methods and would rather choose the complex separation method involving four or more subclassed categories. For the effects of separation methods, the site experiment result demonstrates that the relatively simple separation method involving two categories (food waste and other waste) achieves the best effect with the highest accuracy rate (83.1%) and the lowest miscellany rate (16.9%) among the proposed experimental alternatives. The outcome reflects the inconsistency between people's environmental awareness and behavior. Such inconsistency and conflict may be attributed to the lack of environmental knowledge. Environmental education is assumed to be a fundamental solution to improve the effect of source separation of MSW in Changsha. Important management tips on source separation, including the reformation of the current pay-as-you-throw (PAYT) system, are presented in this work. A case study on the source separation of municipal solid waste was performed in Changsha. An effect evaluation method based on accuracy rate and miscellany rate was proposed to study the performance of different separation methods. The site experiment result demonstrates that the two-category (food waste and other waste) method achieves the best effect. The inconsistency between people's inclination and the effect of source separation exists. The proposed method can be expanded to other cities to determine the most effective separation method during planning stages or to evaluate the performance of running source separation systems.
Disturbance Source Separation in Shear Flows Using Blind Source Separation Methods
NASA Astrophysics Data System (ADS)
Gluzman, Igal; Cohen, Jacob; Oshman, Yaakov
2017-11-01
A novel approach is presented for identifying disturbance sources in wall-bounded shear flows. The method can prove useful for active control of boundary layer transition from laminar to turbulent flow. The underlying idea is to consider the flow state, as measured in sensors, to be a mixture of sources, and to use Blind Source Separation (BSS) techniques to recover the separate sources and their unknown mixing process. We present a BSS method based on the Degenerate Unmixing Estimation Technique. This method can be used to identify any (a priori unknown) number of sources by using the data acquired by only two sensors. The power of the new method is demonstrated via numerical and experimental proofs of concept. Wind tunnel experiments involving boundary layer flow over a flat plate were carried out, in which two hot-wire anemometers were used to separate disturbances generated by disturbance generators such as a single dielectric barrier discharge plasma actuator and a loudspeaker.
NASA Astrophysics Data System (ADS)
Gao, Lingli; Pan, Yudi
2018-05-01
The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng
2016-05-01
In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
Zhou, Guoxu; Yang, Zuyuan; Xie, Shengli; Yang, Jun-Mei
2011-04-01
Online blind source separation (BSS) is proposed to overcome the high computational cost problem, which limits the practical applications of traditional batch BSS algorithms. However, the existing online BSS methods are mainly used to separate independent or uncorrelated sources. Recently, nonnegative matrix factorization (NMF) shows great potential to separate the correlative sources, where some constraints are often imposed to overcome the non-uniqueness of the factorization. In this paper, an incremental NMF with volume constraint is derived and utilized for solving online BSS. The volume constraint to the mixing matrix enhances the identifiability of the sources, while the incremental learning mode reduces the computational cost. The proposed method takes advantage of the natural gradient based multiplication updating rule, and it performs especially well in the recovery of dependent sources. Simulations in BSS for dual-energy X-ray images, online encrypted speech signals, and high correlative face images show the validity of the proposed method.
Active room compensation for sound reinforcement using sound field separation techniques.
Heuchel, Franz M; Fernandez-Grande, Efren; Agerkvist, Finn T; Shabalina, Elena
2018-03-01
This work investigates how the sound field created by a sound reinforcement system can be controlled at low frequencies. An indoor control method is proposed which actively absorbs the sound incident on a reflecting boundary using an array of secondary sources. The sound field is separated into incident and reflected components by a microphone array close to the secondary sources, enabling the minimization of reflected components by means of optimal signals for the secondary sources. The method is purely feed-forward and assumes constant room conditions. Three different sound field separation techniques for the modeling of the reflections are investigated based on plane wave decomposition, equivalent sources, and the Spatial Fourier transform. Simulations and an experimental validation are presented, showing that the control method performs similarly well at enhancing low frequency responses with the three sound separation techniques. Resonances in the entire room are reduced, although the microphone array and secondary sources are confined to a small region close to the reflecting wall. Unlike previous control methods based on the creation of a plane wave sound field, the investigated method works in arbitrary room geometries and primary source positions.
Ion current detector for high pressure ion sources for monitoring separations
Smith, R.D.; Wahl, J.H.; Hofstadler, S.A.
1996-08-13
The present invention relates generally to any application involving the monitoring of signal arising from ions produced by electrospray or other high pressure (>100 torr) ion sources. The present invention relates specifically to an apparatus and method for the detection of ions emitted from a capillary electrophoresis (CE) system, liquid chromatography, or other small-scale separation methods. And further, the invention provides a very simple diagnostic as to the quality of the separation and the operation of an electrospray source. 7 figs.
Ion current detector for high pressure ion sources for monitoring separations
Smith, Richard D.; Wahl, Jon H.; Hofstadler, Steven A.
1996-01-01
The present invention relates generally to any application involving the monitoring of signal arising from ions produced by electrospray or other high pressure (>100 torr) ion sources. The present invention relates specifically to an apparatus and method for the detection of ions emitted from a capillary electrophoresis (CE) system, liquid chromatography, or other small-scale separation methods. And further, the invention provides a very simple diagnostic as to the quality of the separation and the operation of an electrospray source.
Perceptually controlled doping for audio source separation
NASA Astrophysics Data System (ADS)
Mahé, Gaël; Nadalin, Everton Z.; Suyama, Ricardo; Romano, João MT
2014-12-01
The separation of an underdetermined audio mixture can be performed through sparse component analysis (SCA) that relies however on the strong hypothesis that source signals are sparse in some domain. To overcome this difficulty in the case where the original sources are available before the mixing process, the informed source separation (ISS) embeds in the mixture a watermark, which information can help a further separation. Though powerful, this technique is generally specific to a particular mixing setup and may be compromised by an additional bitrate compression stage. Thus, instead of watermarking, we propose a `doping' method that makes the time-frequency representation of each source more sparse, while preserving its audio quality. This method is based on an iterative decrease of the distance between the distribution of the signal and a target sparse distribution, under a perceptual constraint. We aim to show that the proposed approach is robust to audio coding and that the use of the sparsified signals improves the source separation, in comparison with the original sources. In this work, the analysis is made only in instantaneous mixtures and focused on voice sources.
Liang, Yong [Richland, WA; Daschbach, John L [Richland, WA; Su, Yali [Richland, WA; Chambers, Scott A [Kennewick, WA
2006-08-22
A method for producing quantum dots. The method includes cleaning an oxide substrate and separately cleaning a metal source. The substrate is then heated and exposed to the source in an oxygen environment. This causes metal oxide quantum dots to form on the surface of the substrate.
Liang, Yong [Richland, WA; Daschbach, John L [Richland, WA; Su, Yali [Richland, WA; Chambers, Scott A [Kennewick, WA
2003-03-18
A method for producing quantum dots. The method includes cleaning an oxide substrate and separately cleaning a metal source. The substrate is then heated and exposed to the source in an oxygen environment. This causes metal oxide quantum dots to form on the surface of the substrate.
A blind source separation approach for humpback whale song separation.
Zhang, Zhenbin; White, Paul R
2017-04-01
Many marine mammal species are highly social and are frequently encountered in groups or aggregations. When conducting passive acoustic monitoring in such circumstances, recordings commonly contain vocalizations of multiple individuals which overlap in time and frequency. This paper considers the use of blind source separation as a method for processing these recordings to separate the calls of individuals. The example problem considered here is that of the songs of humpback whales. The high levels of noise and long impulse responses can make source separation in underwater contexts a challenging proposition. The approach present here is based on time-frequency masking, allied to a noise reduction process. The technique is assessed using simulated and measured data sets, and the results demonstrate the effectiveness of the method for separating humpback whale songs.
Cohen, Michael X; Gulbinaite, Rasa
2017-02-15
Steady-state evoked potentials (SSEPs) are rhythmic brain responses to rhythmic sensory stimulation, and are often used to study perceptual and attentional processes. We present a data analysis method for maximizing the signal-to-noise ratio of the narrow-band steady-state response in the frequency and time-frequency domains. The method, termed rhythmic entrainment source separation (RESS), is based on denoising source separation approaches that take advantage of the simultaneous but differential projection of neural activity to multiple electrodes or sensors. Our approach is a combination and extension of existing multivariate source separation methods. We demonstrate that RESS performs well on both simulated and empirical data, and outperforms conventional SSEP analysis methods based on selecting electrodes with the strongest SSEP response, as well as several other linear spatial filters. We also discuss the potential confound of overfitting, whereby the filter captures noise in absence of a signal. Matlab scripts are available to replicate and extend our simulations and methods. We conclude with some practical advice for optimizing SSEP data analyses and interpreting the results. Copyright © 2016 Elsevier Inc. All rights reserved.
Method and Device for Extraction of Liquids from a Solid Particle Material
NASA Technical Reports Server (NTRS)
deMayo, Benjamin (Inventor)
2017-01-01
A method, system, and device for separating oil from oil sands or oil shale is disclosed. The method includes heating the oil sands, spinning the heated oil sands, confining the sand particles mechanically, and recovering the oil substantially free of the sand. The method can be used without the addition of chemical extraction agents. The system includes a source of centrifugal force, a heat source, a separation device, and a recovery device. The separation device includes a method of confining the sands while allowing the oil to escape, such as through an aperture.
NASA Astrophysics Data System (ADS)
Wright, L.; Coddington, O.; Pilewskie, P.
2016-12-01
Hyperspectral instruments are a growing class of Earth observing sensors designed to improve remote sensing capabilities beyond discrete multi-band sensors by providing tens to hundreds of continuous spectral channels. Improved spectral resolution, range and radiometric accuracy allow the collection of large amounts of spectral data, facilitating thorough characterization of both atmospheric and surface properties. These new instruments require novel approaches for processing imagery and separating surface and atmospheric signals. One approach is numerical source separation, which allows the determination of the underlying physical causes of observed signals. Improved source separation will enable hyperspectral imagery to better address key science questions relevant to climate change, including land-use changes, trends in clouds and atmospheric water vapor, and aerosol characteristics. We developed an Informed Non-negative Matrix Factorization (INMF) method for separating atmospheric and surface sources. INMF offers marked benefits over other commonly employed techniques including non-negativity, which avoids physically impossible results; and adaptability, which tailors the method to hyperspectral source separation. The INMF algorithm is adapted to separate contributions from physically distinct sources using constraints on spectral and spatial variability, and library spectra to improve the initial guess. We also explore methods to produce an initial guess of the spatial separation patterns. Using this INMF algorithm we decompose hyperspectral imagery from the NASA Hyperspectral Imager for the Coastal Ocean (HICO) with a focus on separating surface and atmospheric signal contributions. HICO's coastal ocean focus provides a dataset with a wide range of atmospheric conditions, including high and low aerosol optical thickness and cloud cover, with only minor contributions from the ocean surfaces in order to isolate the contributions of the multiple atmospheric sources.
Cohen, Michael X
2017-09-27
The number of simultaneously recorded electrodes in neuroscience is steadily increasing, providing new opportunities for understanding brain function, but also new challenges for appropriately dealing with the increase in dimensionality. Multivariate source separation analysis methods have been particularly effective at improving signal-to-noise ratio while reducing the dimensionality of the data and are widely used for cleaning, classifying and source-localizing multichannel neural time series data. Most source separation methods produce a spatial component (that is, a weighted combination of channels to produce one time series); here, this is extended to apply source separation to a time series, with the idea of obtaining a weighted combination of successive time points, such that the weights are optimized to satisfy some criteria. This is achieved via a two-stage source separation procedure, in which an optimal spatial filter is first constructed and then its optimal temporal basis function is computed. This second stage is achieved with a time-delay-embedding matrix, in which additional rows of a matrix are created from time-delayed versions of existing rows. The optimal spatial and temporal weights can be obtained by solving a generalized eigendecomposition of covariance matrices. The method is demonstrated in simulated data and in an empirical electroencephalogram study on theta-band activity during response conflict. Spatiotemporal source separation has several advantages, including defining empirical filters without the need to apply sinusoidal narrowband filters. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Full-Scale Turbofan Engine Noise-Source Separation Using a Four-Signal Method
NASA Technical Reports Server (NTRS)
Hultgren, Lennart S.; Arechiga, Rene O.
2016-01-01
Contributions from the combustor to the overall propulsion noise of civilian transport aircraft are starting to become important due to turbofan design trends and expected advances in mitigation of other noise sources. During on-ground, static-engine acoustic tests, combustor noise is generally sub-dominant to other engine noise sources because of the absence of in-flight effects. Consequently, noise-source separation techniques are needed to extract combustor-noise information from the total noise signature in order to further progress. A novel four-signal source-separation method is applied to data from a static, full-scale engine test and compared to previous methods. The new method is, in a sense, a combination of two- and three-signal techniques and represents an attempt to alleviate some of the weaknesses of each of those approaches. This work is supported by the NASA Advanced Air Vehicles Program, Advanced Air Transport Technology Project, Aircraft Noise Reduction Subproject and the NASA Glenn Faculty Fellowship Program.
Procedure for Separating Noise Sources in Measurements of Turbofan Engine Core Noise
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2006-01-01
The study of core noise from turbofan engines has become more important as noise from other sources like the fan and jet have been reduced. A multiple microphone and acoustic source modeling method to separate correlated and uncorrelated sources has been developed. The auto and cross spectrum in the frequency range below 1000 Hz is fitted with a noise propagation model based on a source couplet consisting of a single incoherent source with a single coherent source or a source triplet consisting of a single incoherent source with two coherent point sources. Examples are presented using data from a Pratt & Whitney PW4098 turbofan engine. The method works well.
Spatiotemporal signal space separation method for rejecting nearby interference in MEG measurements
NASA Astrophysics Data System (ADS)
Taulu, S.; Simola, J.
2006-04-01
Limitations of traditional magnetoencephalography (MEG) exclude some important patient groups from MEG examinations, such as epilepsy patients with a vagus nerve stimulator, patients with magnetic particles on the head or having magnetic dental materials that cause severe movement-related artefact signals. Conventional interference rejection methods are not able to remove the artefacts originating this close to the MEG sensor array. For example, the reference array method is unable to suppress interference generated by sources closer to the sensors than the reference array, about 20-40 cm. The spatiotemporal signal space separation method proposed in this paper recognizes and removes both external interference and the artefacts produced by these nearby sources, even on the scalp. First, the basic separation into brain-related and external interference signals is accomplished with signal space separation based on sensor geometry and Maxwell's equations only. After this, the artefacts from nearby sources are extracted by a simple statistical analysis in the time domain, and projected out. Practical examples with artificial current dipoles and interference sources as well as data from real patients demonstrate that the method removes the artefacts without altering the field patterns of the brain signals.
Separating Turbofan Engine Noise Sources Using Auto and Cross Spectra from Four Microphones
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2008-01-01
The study of core noise from turbofan engines has become more important as noise from other sources such as the fan and jet were reduced. A multiple-microphone and acoustic-source modeling method to separate correlated and uncorrelated sources is discussed. The auto- and cross spectra in the frequency range below 1000 Hz are fitted with a noise propagation model based on a source couplet consisting of a single incoherent monopole source with a single coherent monopole source or a source triplet consisting of a single incoherent monopole source with two coherent monopole point sources. Examples are presented using data from a Pratt& Whitney PW4098 turbofan engine. The method separates the low-frequency jet noise from the core noise at the nozzle exit. It is shown that at low power settings, the core noise is a major contributor to the noise. Even at higher power settings, it can be more important than jet noise. However, at low frequencies, uncorrelated broadband noise and jet noise become the important factors as the engine power setting is increased.
Yu, Huanzhou; Shimakawa, Ann; Hines, Catherine D. G.; McKenzie, Charles A.; Hamilton, Gavin; Sirlin, Claude B.; Brittain, Jean H.; Reeder, Scott B.
2011-01-01
Multipoint water–fat separation techniques rely on different water–fat phase shifts generated at multiple echo times to decompose water and fat. Therefore, these methods require complex source images and allow unambiguous separation of water and fat signals. However, complex-based water–fat separation methods are sensitive to phase errors in the source images, which may lead to clinically important errors. An alternative approach to quantify fat is through “magnitude-based” methods that acquire multiecho magnitude images. Magnitude-based methods are insensitive to phase errors, but cannot estimate fat-fraction greater than 50%. In this work, we introduce a water–fat separation approach that combines the strengths of both complex and magnitude reconstruction algorithms. A magnitude-based reconstruction is applied after complex-based water–fat separation to removes the effect of phase errors. The results from the two reconstructions are then combined. We demonstrate that using this hybrid method, 0–100% fat-fraction can be estimated with improved accuracy at low fat-fractions. PMID:21695724
Sound field separation with sound pressure and particle velocity measurements.
Fernandez-Grande, Efren; Jacobsen, Finn; Leclère, Quentin
2012-12-01
In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array, thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance between the equivalent sources and measurement surfaces and for the difference in magnitude between pressure and velocity. Experimental and numerical studies have been conducted to examine the methods. The double layer velocity method seems to be more robust to noise and flanking sound than the combined pressure-velocity method, although it requires an additional measurement surface. On the whole, the separation methods can be useful when the disturbance of the incoming field is significant. Otherwise the direct reconstruction is more accurate and straightforward.
NASA Astrophysics Data System (ADS)
Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho
2015-01-01
Independent Component Analysis (ICA), one of the blind source separation methods, can be applied for extracting unknown source signals only from received signals. This is accomplished by finding statistical independence of signal mixtures and has been successfully applied to myriad fields such as medical science, image processing, and numerous others. Nevertheless, there are inherent problems that have been reported when using this technique: instability and invalid ordering of separated signals, particularly when using a conventional ICA technique in vibratory source signal identification of complex structures. In this study, a simple iterative algorithm of the conventional ICA has been proposed to mitigate these problems. The proposed method to extract more stable source signals having valid order includes an iterative and reordering process of extracted mixing matrix to reconstruct finally converged source signals, referring to the magnitudes of correlation coefficients between the intermediately separated signals and the signals measured on or nearby sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate applicability of the proposed method to real problem of complex structure, an experiment has been carried out for a scaled submarine mockup. The results show that the proposed method could resolve the inherent problems of a conventional ICA technique.
Time-dependent wave splitting and source separation
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nataf, Frédéric; Assous, Franck
2017-02-01
Starting from classical absorbing boundary conditions, we propose a method for the separation of time-dependent scattered wave fields due to multiple sources or obstacles. In contrast to previous techniques, our method is local in space and time, deterministic, and avoids a priori assumptions on the frequency spectrum of the signal. Numerical examples in two space dimensions illustrate the usefulness of wave splitting for time-dependent scattering problems.
NASA Astrophysics Data System (ADS)
Zhou, Kenneth J.; Chen, Jun
2014-03-01
The fluorophores of malignant human breast cells change their compositions that may be exposed in the fluorescence spectroscopy and blind source separation method. The content of the fluorophores mixture media such as tryptophan, collagen, elastin, NADH, and flavin were varied according to the cancer development. The native fluorescence spectra of these key fluorophores mixture media excited by the selective excitation wavelengths of 300 nm and 340 nm were analyzed using a blind source separation method: Nonnegative Matrix Factorization (NMF). The results show that the contribution from tryptophan, NADH and flavin to the fluorescence spectra of the mixture media is proportional to the content of each fluorophore. These data present a possibility that native fluorescence spectra decomposed by NMF can be used as potential native biomarkers for cancer detection evaluation of the cancer.
Blind source separation and localization using microphone arrays
NASA Astrophysics Data System (ADS)
Sun, Longji
The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure delay mixtures of source signals typically encountered in outdoor environments are considered. Our proposed approach utilizes the subspace methods, including multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally considered broadband, the DOA estimates at frequencies with the large sum of squared amplitude values are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While the subspace methods have been studied for localizing radio frequency signals, audio signals have their special properties. For instance, they are nonstationary, naturally broadband and analog. All of these make the separation and localization for the audio signals more challenging. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses the signals in unwanted directions and only recovers the signals in the estimated DOAs. Several crucial issues related to our algorithm and their solutions have been discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of mixture generation, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm. Unlike the existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
Kurtosis Approach for Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubbemd, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximated polynomials are estimated by the gradient descent method conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation.
López-Pacheco, María G; Sánchez-Fernández, Luis P; Molina-Lozano, Herón
2014-01-15
Noise levels of common sources such as vehicles, whistles, sirens, car horns and crowd sounds are mixed in urban soundscapes. Nowadays, environmental acoustic analysis is performed based on mixture signals recorded by monitoring systems. These mixed signals make it difficult for individual analysis which is useful in taking actions to reduce and control environmental noise. This paper aims at separating, individually, the noise source from recorded mixtures in order to evaluate the noise level of each estimated source. A method based on blind deconvolution and blind source separation in the wavelet domain is proposed. This approach provides a basis to improve results obtained in monitoring and analysis of common noise sources in urban areas. The method validation is through experiments based on knowledge of the predominant noise sources in urban soundscapes. Actual recordings of common noise sources are used to acquire mixture signals using a microphone array in semi-controlled environments. The developed method has demonstrated great performance improvements in identification, analysis and evaluation of common urban sources. © 2013 Elsevier B.V. All rights reserved.
Joint Blind Source Separation by Multi-set Canonical Correlation Analysis
Li, Yi-Ou; Adalı, Tülay; Wang, Wei; Calhoun, Vince D
2009-01-01
In this work, we introduce a simple and effective scheme to achieve joint blind source separation (BSS) of multiple datasets using multi-set canonical correlation analysis (M-CCA) [1]. We first propose a generative model of joint BSS based on the correlation of latent sources within and between datasets. We specify source separability conditions, and show that, when the conditions are satisfied, the group of corresponding sources from each dataset can be jointly extracted by M-CCA through maximization of correlation among the extracted sources. We compare source separation performance of the M-CCA scheme with other joint BSS methods and demonstrate the superior performance of the M-CCA scheme in achieving joint BSS for a large number of datasets, group of corresponding sources with heterogeneous correlation values, and complex-valued sources with circular and non-circular distributions. We apply M-CCA to analysis of functional magnetic resonance imaging (fMRI) data from multiple subjects and show its utility in estimating meaningful brain activations from a visuomotor task. PMID:20221319
Seismoelectric data processing for surface surveys of shallow targets
Haines, S.S.; Guitton, A.; Biondi, B.
2007-01-01
The utility of the seismoelectric method relies on the development of methods to extract the signal of interest from background and source-generated coherent noise that may be several orders-of-magnitude stronger. We compare data processing approaches to develop a sequence of preprocessing and signal/noise separation and to quantify the noise level from which we can extract signal events. Our preferred sequence begins with the removal of power line harmonic noise and the use of frequency filters to minimize random and source-generated noise. Mapping to the linear Radon domain with an inverse process incorporating a sparseness constraint provides good separation of signal from noise, though it is ineffective on noise that shows the same dip as the signal. Similarly, the seismoelectric signal and noise do not separate cleanly in the Fourier domain, so f-k filtering can not remove all of the source-generated noise and it also disrupts signal amplitude patterns. We find that prediction-error filters provide the most effective method to separate signal and noise, while also preserving amplitude information, assuming that adequate pattern models can be determined for the signal and noise. These Radon-domain and prediction-error-filter methods successfully separate signal from <33 dB stronger noise in our test data. ?? 2007 Society of Exploration Geophysicists.
Classical-processing and quantum-processing signal separation methods for qubit uncoupling
NASA Astrophysics Data System (ADS)
Deville, Yannick; Deville, Alain
2012-12-01
The Blind Source Separation problem consists in estimating a set of unknown source signals from their measured combinations. It was only investigated in a non-quantum framework up to now. We propose its first quantum extensions. We thus introduce the Quantum Source Separation field, investigating both its blind and non-blind configurations. More precisely, we show how to retrieve individual quantum bits (qubits) only from the global state resulting from their undesired coupling. We consider cylindrical-symmetry Heisenberg coupling, which e.g. occurs when two electron spins interact through exchange. We first propose several qubit uncoupling methods which typically measure repeatedly the coupled quantum states resulting from individual qubits preparations, and which then statistically process the classical data provided by these measurements. Numerical tests prove the effectiveness of these methods. We then derive a combination of quantum gates for performing qubit uncoupling, thus avoiding repeated qubit preparations and irreversible measurements.
Review of chemical separation techniques applicable to alpha spectrometric measurements
NASA Astrophysics Data System (ADS)
de Regge, P.; Boden, R.
1984-06-01
Prior to alpha-spectrometric measurements several chemical manipulations are usually required to obtain alpha-radiating sources with the desired radiochemical and chemical purity. These include sampling, dissolution or leaching of the elements of interest, conditioning of the solution, chemical separation and preparation of the alpha-emitting source. The choice of a particular method is dependent on different criteria but always involves aspects of the selectivity or the quantitative nature of the separations. The availability of suitable tracers or spikes and modern high resolution instruments resulted in the wide-spread application of isotopic dilution techniques to the problems associated with quantitative chemical separations. This enhanced the development of highly elective methods and reagents which led to important simplifications in the separation schemes. The chemical separation methods commonly used in connection with alpha-spectrometric measurements involve precipitation with selected scavenger elements, solvent extraction, ion exchange and electrodeposition techniques or any combination of them. Depending on the purpose of the final measurement and the type of sample available the chemical separation methods have to be adapted to the particular needs of environment monitoring, nuclear chemistry and metrology, safeguards and safety, waste management and requirements in the nuclear fuel cycle. Against the background of separation methods available in the literature the present paper highlights the current developments and trends in the chemical techniques applicable to alpha spectrometry.
Kurtosis Approach Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubbemd, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximated polynomials are estimated by the gradient descent method conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation Keywords: Independent Component Analysis, Kurtosis, Higher order statistics.
An algorithm for separation of mixed sparse and Gaussian sources
Akkalkotkar, Ameya
2017-01-01
Independent component analysis (ICA) is a ubiquitous method for decomposing complex signal mixtures into a small set of statistically independent source signals. However, in cases in which the signal mixture consists of both nongaussian and Gaussian sources, the Gaussian sources will not be recoverable by ICA and will pollute estimates of the nongaussian sources. Therefore, it is desirable to have methods for mixed ICA/PCA which can separate mixtures of Gaussian and nongaussian sources. For mixtures of purely Gaussian sources, principal component analysis (PCA) can provide a basis for the Gaussian subspace. We introduce a new method for mixed ICA/PCA which we call Mixed ICA/PCA via Reproducibility Stability (MIPReSt). Our method uses a repeated estimations technique to rank sources by reproducibility, combined with decomposition of multiple subsamplings of the original data matrix. These multiple decompositions allow us to assess component stability as the size of the data matrix changes, which can be used to determinine the dimension of the nongaussian subspace in a mixture. We demonstrate the utility of MIPReSt for signal mixtures consisting of simulated sources and real-word (speech) sources, as well as mixture of unknown composition. PMID:28414814
An algorithm for separation of mixed sparse and Gaussian sources.
Akkalkotkar, Ameya; Brown, Kevin Scott
2017-01-01
Independent component analysis (ICA) is a ubiquitous method for decomposing complex signal mixtures into a small set of statistically independent source signals. However, in cases in which the signal mixture consists of both nongaussian and Gaussian sources, the Gaussian sources will not be recoverable by ICA and will pollute estimates of the nongaussian sources. Therefore, it is desirable to have methods for mixed ICA/PCA which can separate mixtures of Gaussian and nongaussian sources. For mixtures of purely Gaussian sources, principal component analysis (PCA) can provide a basis for the Gaussian subspace. We introduce a new method for mixed ICA/PCA which we call Mixed ICA/PCA via Reproducibility Stability (MIPReSt). Our method uses a repeated estimations technique to rank sources by reproducibility, combined with decomposition of multiple subsamplings of the original data matrix. These multiple decompositions allow us to assess component stability as the size of the data matrix changes, which can be used to determinine the dimension of the nongaussian subspace in a mixture. We demonstrate the utility of MIPReSt for signal mixtures consisting of simulated sources and real-word (speech) sources, as well as mixture of unknown composition.
Method of forming emitters for a back-contact solar cell
Li, Bo; Cousins, Peter J.; Smith, David D.
2015-09-29
Methods of forming emitters for back-contact solar cells are described. In one embodiment, a method includes forming a first solid-state dopant source above a substrate. The first solid-state dopant source includes a plurality of regions separated by gaps. Regions of a second solid-state dopant source are formed above the substrate by printing.
Method of forming emitters for a back-contact solar cell
Li, Bo; Cousins, Peter J; Smith, David D
2014-12-16
Methods of forming emitters for back-contact solar cells are described. In one embodiment, a method includes forming a first solid-state dopant source above a substrate. The first solid-state dopant source includes a plurality of regions separated by gaps. Regions of a second solid-state dopant source are formed above the substrate by printing.
Method or forming emitters for a back-contact solar cell
Li, Bo; Cousins, Peter J.; Smith, David D.
2014-08-12
Methods of forming emitters for back-contact solar cells are described. In one embodiment, a method includes forming a first solid-state dopant source above a substrate. The first solid-state dopant source includes a plurality of regions separated by gaps. Regions of a second solid-state dopant source are formed above the substrate by printing.
METHOD OF OPERATING A CALUTRON
Davidson, P.H.
1960-01-12
A method of operating an electromagnetic isotope separator of the calutron class is reported whereby uranium tetrachloride is produced at a controlled rate within the source rather than betng introduced therein as was formerly practiced. This is accomplished by placing a uranium-bearing material, such as uranium metal, uranium trichloride, or uranium carbide in the charge receptacle of the calutron, heating this material to about to produce uranium tetrachloride vapor at a rate controlled by the chlorine gas flow into the source. The vapor is subsequently ionized by an electric arc and mass separated by conventional calutron methods.
Non-destructive component separation using infrared radiant energy
Simandl, Ronald F [Knoxville, TN; Russell, Steven W [Knoxville, TN; Holt, Jerrid S [Knoxville, TN; Brown, John D [Harriman, TN
2011-03-01
A method for separating a first component and a second component from one another at an adhesive bond interface between the first component and second component. Typically the method involves irradiating the first component with infrared radiation from a source that radiates substantially only short wavelengths until the adhesive bond is destabilized, and then separating the first component and the second component from one another. In some embodiments an assembly of components to be debonded is placed inside an enclosure and the assembly is illuminated from an IR source that is external to the enclosure. In some embodiments an assembly of components to be debonded is simultaneously irradiated by a multi-planar array of IR sources. Often the IR radiation is unidirectional. In some embodiments the IR radiation is narrow-band short wavelength infrared radiation.
NASA Astrophysics Data System (ADS)
Wright, L.; Coddington, O.; Pilewskie, P.
2015-12-01
Current challenges in Earth remote sensing require improved instrument spectral resolution, spectral coverage, and radiometric accuracy. Hyperspectral instruments, deployed on both aircraft and spacecraft, are a growing class of Earth observing sensors designed to meet these challenges. They collect large amounts of spectral data, allowing thorough characterization of both atmospheric and surface properties. The higher accuracy and increased spectral and spatial resolutions of new imagers require new numerical approaches for processing imagery and separating surface and atmospheric signals. One potential approach is source separation, which allows us to determine the underlying physical causes of observed changes. Improved signal separation will allow hyperspectral instruments to better address key science questions relevant to climate change, including land-use changes, trends in clouds and atmospheric water vapor, and aerosol characteristics. In this work, we investigate a Non-negative Matrix Factorization (NMF) method for the separation of atmospheric and land surface signal sources. NMF offers marked benefits over other commonly employed techniques, including non-negativity, which avoids physically impossible results, and adaptability, which allows the method to be tailored to hyperspectral source separation. We adapt our NMF algorithm to distinguish between contributions from different physically distinct sources by introducing constraints on spectral and spatial variability and by using library spectra to inform separation. We evaluate our NMF algorithm with simulated hyperspectral images as well as hyperspectral imagery from several instruments including, the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), NASA Hyperspectral Imager for the Coastal Ocean (HICO) and National Ecological Observatory Network (NEON) Imaging Spectrometer.
A NEW METHOD FOR FINDING POINT SOURCES IN HIGH-ENERGY NEUTRINO DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Ke; Miller, M. Coleman
The IceCube collaboration has reported the first detection of high-energy astrophysical neutrinos, including ∼50 high-energy starting events, but no individual sources have been identified. It is therefore important to develop the most sensitive and efficient possible algorithms to identify the point sources of these neutrinos. The most popular current method works by exploring a dense grid of possible directions to individual sources, and identifying the single direction with the maximum probability of having produced multiple detected neutrinos. This method has numerous strengths, but it is computationally intensive and because it focuses on the single best location for a point source,more » additional point sources are not included in the evidence. We propose a new maximum likelihood method that uses the angular separations between all pairs of neutrinos in the data. Unlike existing autocorrelation methods for this type of analysis, which also use angular separations between neutrino pairs, our method incorporates information about the point-spread function and can identify individual point sources. We find that if the angular resolution is a few degrees or better, then this approach reduces both false positive and false negative errors compared to the current method, and is also more computationally efficient up to, potentially, hundreds of thousands of detected neutrinos.« less
Audio visual speech source separation via improved context dependent association model
NASA Astrophysics Data System (ADS)
Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz
2014-12-01
In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on mean square error (MSE) measure between estimated and target visual parameters. This function is minimized for estimation of the de-mixing vector/filters to separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to existing GMM-based model and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.
Source separation of household waste: a case study in China.
Zhuang, Ying; Wu, Song-Wei; Wang, Yun-Long; Wu, Wei-Xiang; Chen, Ying-Xu
2008-01-01
A pilot program concerning source separation of household waste was launched in Hangzhou, capital city of Zhejiang province, China. Detailed investigations on the composition and properties of household waste in the experimental communities revealed that high water content and high percentage of food waste are the main limiting factors in the recovery of recyclables, especially paper from household waste, and the main contributors to the high cost and low efficiency of waste disposal. On the basis of the investigation, a novel source separation method, according to which household waste was classified as food waste, dry waste and harmful waste, was proposed and performed in four selected communities. In addition, a corresponding household waste management system that involves all stakeholders, a recovery system and a mechanical dehydration system for food waste were constituted to promote source separation activity. Performances and the questionnaire survey results showed that the active support and investment of a real estate company and a community residential committee play important roles in enhancing public participation and awareness of the importance of waste source separation. In comparison with the conventional mixed collection and transportation system of household waste, the established source separation and management system is cost-effective. It could be extended to the entire city and used by other cities in China as a source of reference.
Mass transfer apparatus and method for separation of gases
Blount, Gerald C.
2015-10-13
A process and apparatus for separating components of a source gas is provided in which more soluble components of the source gas are dissolved in an aqueous solvent at high pressure. The system can utilize hydrostatic pressure to increase solubility of the components of the source gas. The apparatus includes gas recycle throughout multiple mass transfer stages to improve mass transfer of the targeted components from the liquid to gas phase. Separated components can be recovered for use in a value added application or can be processed for long-term storage, for instance in an underwater reservoir.
Mass transfer apparatus and method for separation of gases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blount, Gerald C.; Gorensek, Maximilian Boris; Hamm, Luther L.
A process and apparatus for separating components of a source gas is provided in which more soluble components of the source gas are dissolved in an aqueous solvent at high pressure. The system can utilize hydrostatic pressure to increase solubility of the components of the source gas. The apparatus includes gas recycle throughout multiple mass transfer stages to improve mass transfer of the targeted components from the liquid to gas phase. Separated components can be recovered for use in a value added application or can be processed for long-term storage, for instance in an underwater reservoir.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pattisahusiwa, Asis; Liong, The Houw; Purqon, Acep
Seismo-Ionospheric is a study of ionosphere disturbances associated with seismic activities. In many previous researches, heliogeomagnetic or strong earthquake activities can caused the disturbances in the ionosphere. However, it is difficult to separate these disturbances based on related sources. In this research, we proposed a method to separate these disturbances/outliers by using nu-SVR with the world-wide GPS data. TEC data related to the 26th December 2004 Sumatra and the 11th March 2011 Honshu earthquakes had been analyzed. After analyzed TEC data in several location around the earthquake epicenter and compared with geomagnetic data, the method shows a good result inmore » the average to detect the source of these outliers. This method is promising to use in the future research.« less
Time-frequency approach to underdetermined blind source separation.
Xie, Shengli; Yang, Liu; Yang, Jun-Mei; Zhou, Guoxu; Xiang, Yong
2012-02-01
This paper presents a new time-frequency (TF) underdetermined blind source separation approach based on Wigner-Ville distribution (WVD) and Khatri-Rao product to separate N non-stationary sources from M(M <; N) mixtures. First, an improved method is proposed for estimating the mixing matrix, where the negative value of the auto WVD of the sources is fully considered. Then after extracting all the auto-term TF points, the auto WVD value of the sources at every auto-term TF point can be found out exactly with the proposed approach no matter how many active sources there are as long as N ≤ 2M-1. Further discussion about the extraction of auto-term TF points is made and finally the numerical simulation results are presented to show the superiority of the proposed algorithm by comparing it with the existing ones.
NASA Astrophysics Data System (ADS)
Yang, Yang; Li, Xiukun
2016-06-01
Separation of the components of rigid acoustic scattering by underwater objects is essential in obtaining the structural characteristics of such objects. To overcome the problem of rigid structures appearing to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects with time-frequency distribution is deduced. Using a morphological filter, different characteristics in a Wigner-Ville Distribution (WVD) observed for single auto term and cross terms can be simplified to remove any cross-term interference. By selecting time and frequency points of the auto terms signal, the accuracy of BSS can be improved. An experimental simulation has been used, with changes in the pulse width of the transmitted signal, the relative amplitude and the time delay parameter, in order to analyzing the feasibility of this new method. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate the components when elastic scattering and rigid scattering exist at the same time. Experimental results confirm that the new method can be used in separating the rigid scattering structure of underwater objects.
Ball, J.W.; Bassett, R.L.
2000-01-01
A method has been developed for separating the Cr dissolved in natural water from matrix elements and determination of its stable isotope ratios using solid-source thermal-ionization mass spectrometry (TIMS). The separation method takes advantage of the existence of the oxidized form of Cr as an oxyanion to separate it from interfering cations using anion-exchange chromatography, and of the reduced form of Cr as a positively charged ion to separate it from interfering anions such as sulfate. Subsequent processing of the separated sample eliminates residual organic material for application to a solid source filament. Ratios for 53Cr/52Cr for National Institute of Standards and Technology Standard Reference Material 979 can be measured using the silica gel-boric acid technique with a filament-to-filament standard deviation in the mean 53Cr/52Cr ratio for 50 replicates of 0.00005 or less. (C) 2000 Elsevier Science B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lo, Y. T.; Yen, H. Y.
2012-04-01
Taiwan is located at a complex juncture between the Eurasian and Philippine Sea plates. The mountains in Taiwan are very young, formed as a result of the collision between an island arc system and the Asian continental margin. To separate sources of gravity field in depth, a method is suggested, based on upward and downward continuation. Both new methods are applied to isolate the contribution of the Moho interface to the total field and to find its 3D topography. At the first stage, we separate near surface and deeper sources. At the next stage, we isolate the effect of very deep sources. After subtracting this field from the total effect of deeper sources, we obtain the contribution of the Moho interface. We make inversion separately for the area. In this study, we use the detail gravity data around this area to investigate the reliable subsurface density structure. First, we combine with land and marine gravity data to obtain gravity anomaly. Second, considering the geology, tomography and other constrains, we simulate the 3D density structure. The main goal of our study is to understand the Moho topography and sediment-crustal boundary in Taiwan area. We expect that our result can consistent with previous studies.
Removal of EOG Artifacts from EEG Recordings Using Stationary Subspace Analysis
Zeng, Hong; Song, Aiguo
2014-01-01
An effective approach is proposed in this paper to remove ocular artifacts from the raw EEG recording. The proposed approach first conducts the blind source separation on the raw EEG recording by the stationary subspace analysis (SSA) algorithm. Unlike the classic blind source separation algorithms, SSA is explicitly tailored to the understanding of distribution changes, where both the mean and the covariance matrix are taken into account. In addition, neither independency nor uncorrelation is required among the sources by SSA. Thereby, it can concentrate artifacts in fewer components than the representative blind source separation methods. Next, the components that are determined to be related to the ocular artifacts are projected back to be subtracted from EEG signals, producing the clean EEG data eventually. The experimental results on both the artificially contaminated EEG data and real EEG data have demonstrated the effectiveness of the proposed method, in particular for the cases where limited number of electrodes are used for the recording, as well as when the artifact contaminated signal is highly nonstationary and the underlying sources cannot be assumed to be independent or uncorrelated. PMID:24550696
METHOD OF SEPARATING ISOTOPES OF URANIUM IN A CALUTRON
Jenkins, F.A.
1958-05-01
Mass separation devices of the calutron type and the use of uranium hexachloride as a charge material in the calutron ion source are described. The method for using this material in a mass separator includes heating the uranium hexachloride to a temperature in the range of 60 to 100 d C in a vacuum and thereby forming a vapor of the material. The vaporized uranium hexachloride is then ionized in a vapor ionizing device for subsequent mass separation processing.
NASA Technical Reports Server (NTRS)
Greenwood, Eric II; Schmitz, Fredric H.
2009-01-01
A new method of separating the contributions of helicopter main and tail rotor noise sources is presented, making use of ground-based acoustic measurements. The method employs time-domain de-Dopplerization to transform the acoustic pressure time-history data collected from an array of ground-based microphones to the equivalent time-history signals observed by an array of virtual inflight microphones traveling with the helicopter. The now-stationary signals observed by the virtual microphones are then periodically averaged with the main and tail rotor once per revolution triggers. The averaging process suppresses noise which is not periodic with the respective rotor, allowing for the separation of main and tail rotor pressure time-histories. The averaged measurements are then interpolated across the range of directivity angles captured by the microphone array in order to generate separate acoustic hemispheres for the main and tail rotor noise sources. The new method is successfully applied to ground-based microphone measurements of a Bell 206B3 helicopter and demonstrates the strong directivity characteristics of harmonic noise radiation from both the main and tail rotors of that helicopter.
Direction of arrival estimation using blind separation of sources
NASA Astrophysics Data System (ADS)
Hirari, Mehrez; Hayakawa, Masashi
1999-05-01
The estimation of direction of arrival (DOA) and polarization of an incident electromagnetic (EM) wave is of great importance in many applications. In this paper we propose a new approach for the estimation of DOA for polarized EM waves using blind separation of sources. In this approach we use a vector sensor, a sensor whose output is a complete set of the EM field components of the irradiating wave, and we reconstruct the waveforms of all the original signals that is, all the EM components of the sources' fields. From the waveform of each source we calculate its amplitude and phase and consequently calculate its DOA and polarization using the field analysis method. The separation of sources is conducted iteratively using a recurrent Hopfield-like single-layer neural network. The simulation results for two sources have been investigated. We have considered coherent and incoherent sources and also the case of varying DOAs vis-ā-vis the sensor and a varying polarization. These are cases seldom treated by other approaches even though they exist in real-world applications. With the proposed method we have obtained almost on-time tracking for the DOA and polarization of any incident sources with a significant reduction of both memory and computation costs.
Developing a system for blind acoustic source localization and separation
NASA Astrophysics Data System (ADS)
Kulkarni, Raghavendra
This dissertation presents innovate methodologies for locating, extracting, and separating multiple incoherent sound sources in three-dimensional (3D) space; and applications of the time reversal (TR) algorithm to pinpoint the hyper active neural activities inside the brain auditory structure that are correlated to the tinnitus pathology. Specifically, an acoustic modeling based method is developed for locating arbitrary and incoherent sound sources in 3D space in real time by using a minimal number of microphones, and the Point Source Separation (PSS) method is developed for extracting target signals from directly measured mixed signals. Combining these two approaches leads to a novel technology known as Blind Sources Localization and Separation (BSLS) that enables one to locate multiple incoherent sound signals in 3D space and separate original individual sources simultaneously, based on the directly measured mixed signals. These technologies have been validated through numerical simulations and experiments conducted in various non-ideal environments where there are non-negligible, unspecified sound reflections and reverberation as well as interferences from random background noise. Another innovation presented in this dissertation is concerned with applications of the TR algorithm to pinpoint the exact locations of hyper-active neurons in the brain auditory structure that are directly correlated to the tinnitus perception. Benchmark tests conducted on normal rats have confirmed the localization results provided by the TR algorithm. Results demonstrate that the spatial resolution of this source localization can be as high as the micrometer level. This high precision localization may lead to a paradigm shift in tinnitus diagnosis, which may in turn produce a more cost-effective treatment for tinnitus than any of the existing ones.
Warmerdam, G; Vullings, R; Van Pul, C; Andriessen, P; Oei, S G; Wijn, P
2013-01-01
Non-invasive fetal electrocardiography (ECG) can be used for prolonged monitoring of the fetal heart rate (FHR). However, the signal-to-noise-ratio (SNR) of non-invasive ECG recordings is often insufficient for reliable detection of the FHR. To overcome this problem, source separation techniques can be used to enhance the fetal ECG. This study uses a physiology-based source separation (PBSS) technique that has already been demonstrated to outperform widely used blind source separation techniques. Despite the relatively good performance of PBSS in enhancing the fetal ECG, PBSS is still susceptible to artifacts. In this study an augmented PBSS technique is developed to reduce the influence of artifacts. The performance of the developed method is compared to PBSS on multi-channel non-invasive fetal ECG recordings. Based on this comparison, the developed method is shown to outperform PBSS for the enhancement of the fetal ECG.
NASA Astrophysics Data System (ADS)
Jany, B. R.; Janas, A.; Krok, F.
2017-11-01
The quantitative composition of metal alloy nanowires on InSb(001) semiconductor surface and gold nanostructures on germanium surface is determined by blind source separation (BSS) machine learning (ML) method using non negative matrix factorization (NMF) from energy dispersive X-ray spectroscopy (EDX) spectrum image maps measured in a scanning electron microscope (SEM). The BSS method blindly decomposes the collected EDX spectrum image into three source components, which correspond directly to the X-ray signals coming from the supported metal nanostructures, bulk semiconductor signal and carbon background. The recovered quantitative composition is validated by detailed Monte Carlo simulations and is confirmed by separate cross-sectional TEM EDX measurements of the nanostructures. This shows that SEM EDX measurements together with machine learning blind source separation processing could be successfully used for the nanostructures quantitative chemical composition determination.
Noise-Source Separation Using Internal and Far-Field Sensors for a Full-Scale Turbofan Engine
NASA Technical Reports Server (NTRS)
Hultgren, Lennart S.; Miles, Jeffrey H.
2009-01-01
Noise-source separation techniques for the extraction of the sub-dominant combustion noise from the total noise signatures obtained in static-engine tests are described. Three methods are applied to data from a static, full-scale engine test. Both 1/3-octave and narrow-band results are discussed. The results are used to assess the combustion-noise prediction capability of the Aircraft Noise Prediction Program (ANOPP). A new additional phase-angle-based discriminator for the three-signal method is also introduced.
White, David J.; Congedo, Marco; Ciorciari, Joseph
2014-01-01
A developing literature explores the use of neurofeedback in the treatment of a range of clinical conditions, particularly ADHD and epilepsy, whilst neurofeedback also provides an experimental tool for studying the functional significance of endogenous brain activity. A critical component of any neurofeedback method is the underlying physiological signal which forms the basis for the feedback. While the past decade has seen the emergence of fMRI-based protocols training spatially confined BOLD activity, traditional neurofeedback has utilized a small number of electrode sites on the scalp. As scalp EEG at a given electrode site reflects a linear mixture of activity from multiple brain sources and artifacts, efforts to successfully acquire some level of control over the signal may be confounded by these extraneous sources. Further, in the event of successful training, these traditional neurofeedback methods are likely influencing multiple brain regions and processes. The present work describes the use of source-based signal processing methods in EEG neurofeedback. The feasibility and potential utility of such methods were explored in an experiment training increased theta oscillatory activity in a source derived from Blind Source Separation (BSS) of EEG data obtained during completion of a complex cognitive task (spatial navigation). Learned increases in theta activity were observed in two of the four participants to complete 20 sessions of neurofeedback targeting this individually defined functional brain source. Source-based EEG neurofeedback methods using BSS may offer important advantages over traditional neurofeedback, by targeting the desired physiological signal in a more functionally and spatially specific manner. Having provided preliminary evidence of the feasibility of these methods, future work may study a range of clinically and experimentally relevant brain processes where individual brain sources may be targeted by source-based EEG neurofeedback. PMID:25374520
Method for the chemical separation of GE-68 from its daughter Ga-68
Fitzsimmons, Jonathan M.; Atcher, Robert W.
2010-06-01
The present invention is directed to a generator apparatus for separating a daughter gallium-68 radioisotope substantially free of impurities from a parent gernanium-68 radioisotope, including a first resin-containing column containing parent gernanium-68 radioisotope and daughter gallium-68 radioisotope, a source of first eluent connected to said first resin-containing column for separating daughter gallium-68 radioisotope from the first resin-containing column, said first eluent including citrate whereby the separated gallium is in the form of gallium citrate, a mixing space connected to said first resin-containing column for admixing a source of hydrochloric acid with said separated gallium citrate whereby gallium citrate is converted to gallium tetrachloride, a second resin-containing column for retention of gallium-68 tetrachloride, and, a source of second eluent connected to said second resin-containing column for eluting the daughter gallium-68 radioisotope from said second resin-containing column.
A source number estimation method for single optical fiber sensor
NASA Astrophysics Data System (ADS)
Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu
2015-10-01
The single-channel blind source separation (SCBSS) technique makes great significance in many fields, such as optical fiber communication, sensor detection, image processing and so on. It is a wide range application to realize blind source separation (BSS) from a single optical fiber sensor received data. The performance of many BSS algorithms and signal process methods will be worsened with inaccurate source number estimation. Many excellent algorithms have been proposed to deal with the source number estimation in array signal process which consists of multiple sensors, but they can not be applied directly to the single sensor condition. This paper presents a source number estimation method dealing with the single optical fiber sensor received data. By delay process, this paper converts the single sensor received data to multi-dimension form. And the data covariance matrix is constructed. Then the estimation algorithms used in array signal processing can be utilized. The information theoretic criteria (ITC) based methods, presented by AIC and MDL, Gerschgorin's disk estimation (GDE) are introduced to estimate the source number of the single optical fiber sensor's received signal. To improve the performance of these estimation methods at low signal noise ratio (SNR), this paper make a smooth process to the data covariance matrix. By the smooth process, the fluctuation and uncertainty of the eigenvalues of the covariance matrix are reduced. Simulation results prove that ITC base methods can not estimate the source number effectively under colored noise. The GDE method, although gets a poor performance at low SNR, but it is able to accurately estimate the number of sources with colored noise. The experiments also show that the proposed method can be applied to estimate the source number of single sensor received data.
NASA Astrophysics Data System (ADS)
Manicke, Nicholas E.; Belford, Michael
2015-05-01
One limitation in the growing field of ambient or direct analysis methods is reduced selectivity caused by the elimination of chromatographic separations prior to mass spectrometric analysis. We explored the use of high-field asymmetric waveform ion mobility spectrometry (FAIMS), an ambient pressure ion mobility technique, to separate the closely related opiate isomers of morphine, hydromorphone, and norcodeine. These isomers cannot be distinguished by tandem mass spectrometry. Separation prior to MS analysis is, therefore, required to distinguish these compounds, which are important in clinical chemistry and toxicology. FAIMS was coupled to a triple quadrupole mass spectrometer, and ionization was performed using either a pneumatically assisted heated electrospray ionization source (H-ESI) or paper spray, a direct analysis method that has been applied to the direct analysis of dried blood spots and other complex samples. We found that FAIMS was capable of separating the three opiate structural isomers using both H-ESI and paper spray as the ionization source.
40 CFR 246.200-5 - Recommended procedures: Methods of separation and collection.
Code of Federal Regulations, 2012 CFR
2012-07-01
... designed to recover high grades of office paper at the source of generation, i.e., the desk, are the... recommended system is the desk-top system because it is designed to maximize recovery of high value material... desk-top system has been designed to minimize these problems. (d) The precise method of separation and...
40 CFR 246.200-5 - Recommended procedures: Methods of separation and collection.
Code of Federal Regulations, 2013 CFR
2013-07-01
... designed to recover high grades of office paper at the source of generation, i.e., the desk, are the... recommended system is the desk-top system because it is designed to maximize recovery of high value material... desk-top system has been designed to minimize these problems. (d) The precise method of separation and...
40 CFR 246.200-5 - Recommended procedures: Methods of separation and collection.
Code of Federal Regulations, 2011 CFR
2011-07-01
... designed to recover high grades of office paper at the source of generation, i.e., the desk, are the... recommended system is the desk-top system because it is designed to maximize recovery of high value material... desk-top system has been designed to minimize these problems. (d) The precise method of separation and...
40 CFR 246.200-5 - Recommended procedures: Methods of separation and collection.
Code of Federal Regulations, 2010 CFR
2010-07-01
... designed to recover high grades of office paper at the source of generation, i.e., the desk, are the... recommended system is the desk-top system because it is designed to maximize recovery of high value material... desk-top system has been designed to minimize these problems. (d) The precise method of separation and...
40 CFR 246.200-5 - Recommended procedures: Methods of separation and collection.
Code of Federal Regulations, 2014 CFR
2014-07-01
... designed to recover high grades of office paper at the source of generation, i.e., the desk, are the... recommended system is the desk-top system because it is designed to maximize recovery of high value material... desk-top system has been designed to minimize these problems. (d) The precise method of separation and...
NASA Astrophysics Data System (ADS)
Han, Guang; Liu, Jin; Liu, Rong; Xu, Kexin
2016-10-01
Position-based reference measurement method is taken as one of the most promising method in non-invasive measurement of blood glucose based on spectroscopic methodology. Selecting an appropriate source-detector separation as the reference position is important for deducting the influence of background change and reducing the loss of useful signals. Our group proposed a special source-detector separation named floating-reference position where the signal contains only background change, that is to say, the signal at this source-detector separation is uncorrelated with glucose concentration. The existence of floating-reference position has been verified in a three layer skin by Monte Carlo simulation and in the in vitro experiment. But it is difficult to verify the existence of floating-reference position on the human body because the interference is more complex during in vivo experiment. Aiming at this situation, this paper studies the determination of the best reference position on human body by collecting signals at several source-detector separations on the palm and measuring the true blood glucose levels during oral glucose tolerance test (OGTT) experiments of 3 volunteers. Partial least square (PLS) calibration model is established between the signals at every source-detector separation and its corresponding blood glucose levels. The results shows that the correlation coefficient (R) between 1.32 mm to 1.88 mm is lowest and they can be used as reference for background correction. The signal of this special position is important for improving the accuracy of near-infrared non-invasive blood glucose measurement.
Rigamonti, L; Grosso, M; Giugliano, M
2009-02-01
This life cycle assessment study analyses material and energy recovery within integrated municipal solid waste (MSW) management systems, and, in particular, the recovery of the source-separated materials (packaging and organic waste) and the energy recovery from the residual waste. The recovery of materials and energy are analysed together, with the final aim to evaluate possible optimum levels of source-separated collection that lead to the most favourable energetic and environmental results; this method allows identification of an optimum configuration of the MSW management system. The results show that the optimum level of source-separated collection is about 60%, when all the materials are recovered with high efficiency; it decreases to about 50%, when the 60% level is reached as a result of a very high recovery efficiency for organic fractions at the expense of the packaging materials, or when this implies an appreciable reduction of the quality of collected materials. The optimum MSW management system is thus characterized by source-separated collection levels as included in the above indicated range, with subsequent recycling of the separated materials and energy recovery of the residual waste in a large-scale incinerator operating in combined heat and power mode.
Saleh, M; Karfoul, A; Kachenoura, A; Senhadji, L; Albera, L
2016-08-01
Improving the execution time and the numerical complexity of the well-known kurtosis-based maximization method, the RobustICA, is investigated in this paper. A Newton-based scheme is proposed and compared to the conventional RobustICA method. A new implementation using the nonlinear Conjugate Gradient one is investigated also. Regarding the Newton approach, an exact computation of the Hessian of the considered cost function is provided. The proposed approaches and the considered implementations inherit the global plane search of the initial RobustICA method for which a better convergence speed for a given direction is still guaranteed. Numerical results on Magnetic Resonance Spectroscopy (MRS) source separation show the efficiency of the proposed approaches notably the quasi-Newton one using the BFGS method.
Celis, R; Romo, D; Romero, E
2015-12-01
Blind source separation methods aim to split information into the original sources. In histology, each dye component attempts to specifically characterize different microscopic structures. In the case of the hematoxylin-eosin stain, universally used for routine examination, quantitative analysis may often require the inspection of different morphological signatures related mainly to nuclei patterns, but also to stroma distribution. Stain separation is usually a preprocessing operation that is transversal to different applications. This paper presents a novel colour separation method that finds the hematoxylin and eosin clusters by projecting the whole (r,g,b) space to a folded surface connecting the distributions of a series of [(r-b),g] planes that divide the cloud of H&E tones. The proposed method produces density maps closer to those obtained with the colour mixing matrices set by an expert, when comparing with the density maps obtained using nonnegative matrix factorization (NMF), independent component analysis (ICA) and a state-of-the-art method. The method has outperformed three baseline methods, NMF, Macenko and ICA, in about 8%, 12% and 52% for the eosin component, whereas this was about 4%, 8% and 26% for the hematoxylin component. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, John Howard; Alvare, Javier
A reactor has two chambers, namely an oil feedstock chamber and a source chamber. An ion separator separates the oil feedstock chamber from the source chamber, wherein the ion separator allows alkali metal ions to pass from the source chamber, through the ion separator, and into the oil feedstock chamber. A cathode is at least partially housed within the oil feedstock chamber and an anode is at least partially housed within the source chamber. A quantity of an oil feedstock is within the oil feedstock chamber, the oil feedstock comprising at least one carbon atom and a heteroatom and/or onemore » or more heavy metals, the oil feedstock further comprising naphthenic acid. When the alkali metal ion enters the oil feedstock chamber, the alkali metal reacts with the heteroatom, the heavy metals and/or the naphthenic acid, wherein the reaction with the alkali metal forms inorganic products.« less
PWC-ICA: A Method for Stationary Ordered Blind Source Separation with Application to EEG.
Ball, Kenneth; Bigdely-Shamlo, Nima; Mullen, Tim; Robbins, Kay
2016-01-01
Independent component analysis (ICA) is a class of algorithms widely applied to separate sources in EEG data. Most ICA approaches use optimization criteria derived from temporal statistical independence and are invariant with respect to the actual ordering of individual observations. We propose a method of mapping real signals into a complex vector space that takes into account the temporal order of signals and enforces certain mixing stationarity constraints. The resulting procedure, which we call Pairwise Complex Independent Component Analysis (PWC-ICA), performs the ICA in a complex setting and then reinterprets the results in the original observation space. We examine the performance of our candidate approach relative to several existing ICA algorithms for the blind source separation (BSS) problem on both real and simulated EEG data. On simulated data, PWC-ICA is often capable of achieving a better solution to the BSS problem than AMICA, Extended Infomax, or FastICA. On real data, the dipole interpretations of the BSS solutions discovered by PWC-ICA are physically plausible, are competitive with existing ICA approaches, and may represent sources undiscovered by other ICA methods. In conjunction with this paper, the authors have released a MATLAB toolbox that performs PWC-ICA on real, vector-valued signals.
PWC-ICA: A Method for Stationary Ordered Blind Source Separation with Application to EEG
Bigdely-Shamlo, Nima; Mullen, Tim; Robbins, Kay
2016-01-01
Independent component analysis (ICA) is a class of algorithms widely applied to separate sources in EEG data. Most ICA approaches use optimization criteria derived from temporal statistical independence and are invariant with respect to the actual ordering of individual observations. We propose a method of mapping real signals into a complex vector space that takes into account the temporal order of signals and enforces certain mixing stationarity constraints. The resulting procedure, which we call Pairwise Complex Independent Component Analysis (PWC-ICA), performs the ICA in a complex setting and then reinterprets the results in the original observation space. We examine the performance of our candidate approach relative to several existing ICA algorithms for the blind source separation (BSS) problem on both real and simulated EEG data. On simulated data, PWC-ICA is often capable of achieving a better solution to the BSS problem than AMICA, Extended Infomax, or FastICA. On real data, the dipole interpretations of the BSS solutions discovered by PWC-ICA are physically plausible, are competitive with existing ICA approaches, and may represent sources undiscovered by other ICA methods. In conjunction with this paper, the authors have released a MATLAB toolbox that performs PWC-ICA on real, vector-valued signals. PMID:27340397
Blind Source Separation of Seismic Events with Independent Component Analysis: CTBT related exercise
NASA Astrophysics Data System (ADS)
Rozhkov, Mikhail; Kitov, Ivan
2015-04-01
Blind Source Separation (BSS) methods used in signal recovery applications are attractive for they use minimal a priori information about the signals they are dealing with. Homomorphic deconvolution and cepstrum estimation are probably the only methods used in certain extent in CTBT applications that can be attributed to the given branch of technology. However Expert Technical Analysis (ETA) conducted in CTBTO to improve the estimated values for the standard signal and event parameters according to the Protocol to the CTBT may face problems which cannot be resolved with certified CTBTO applications and may demand specific techniques not presently used. The problem to be considered within the ETA framework is the unambiguous separation of signals with close arrival times. Here, we examine two scenarios of interest: (1) separation of two almost co-located explosions conducted within fractions of seconds, and (2) extraction of explosion signals merged with wavetrains from strong earthquake. The importance of resolving the problem related to case 1 is connected with the correct explosion yield estimation. Case 2 is a well-known scenario of conducting clandestine nuclear tests. While the first case can be approached somehow with the means of cepstral methods, the second case can hardly be resolved with the conventional methods implemented at the International Data Centre, especially if the signals have close slowness and azimuth. Independent Component Analysis (in its FastICA implementation) implying non-Gaussianity of the underlying processes signal's mixture is a blind source separation method that we apply to resolve the mentioned above problems. We have tested this technique with synthetic waveforms, seismic data from DPRK explosions and mining blasts conducted within East-European platform as well as with signals from strong teleseismic events (Sumatra, April 2012 Mw=8.6, and Tohoku, March 2011 Mw=9.0 earthquakes). The data was recorded by seismic arrays of the International Monitoring System of CTBTO and by small-aperture seismic array Mikhnevo (MHVAR) operated by the Institute of Geosphere Dynamics, Russian Academy of Sciences. Our approach demonstrated a good ability of separation of seismic sources with very close origin times and locations (hundreds of meters), and/or having close arrival times (fractions of seconds), and recovering their waveforms from the mixture. Perspectives and limitations of the method are discussed.
Plasma plume MHD power generator and method
Hammer, J.H.
1993-08-10
A method is described of generating power at a situs exposed to the solar wind which comprises creating at separate sources at the situs discrete plasma plumes extending in opposed directions, providing electrical communication between the plumes at their source and interposing a desired electrical load in the said electrical communication between the plumes.
NASA Astrophysics Data System (ADS)
Subudhi, Sudhakar; Sreenivas, K. R.; Arakeri, Jaywant H.
2013-01-01
This work is concerned with the removal of unwanted fluid through the source-sink pair. The source consists of fluid issuing out of a nozzle in the form of a jet and the sink is a pipe that is kept some distance from the source pipe. Of concern is the percentage of source fluid sucked through the sink. The experiments have been carried in a large glass water tank. The source nozzle diameter is 6 mm and the sink pipe diameter is either 10 or 20 mm. The horizontal and vertical separations and angles between these source and sink pipes are adjustable. The flow was visualized using KMnO4 dye, planer laser induced fluorescence and particle streak photographs. To obtain the effectiveness (that is percentage of source fluid entering the sink pipe), titration method is used. The velocity profiles with and without the sink were obtained using particle image velocimetry. The sink flow rate to obtain a certain effectiveness increase dramatically with lateral separation. The sink diameter and the angle between source and the sink axes don't influence effectiveness as much as the lateral separation.
Method for sequencing DNA base pairs
Sessler, Andrew M.; Dawson, John
1993-01-01
The base pairs of a DNA structure are sequenced with the use of a scanning tunneling microscope (STM). The DNA structure is scanned by the STM probe tip, and, as it is being scanned, the DNA structure is separately subjected to a sequence of infrared radiation from four different sources, each source being selected to preferentially excite one of the four different bases in the DNA structure. Each particular base being scanned is subjected to such sequence of infrared radiation from the four different sources as that particular base is being scanned. The DNA structure as a whole is separately imaged for each subjection thereof to radiation from one only of each source.
Method and apparatus for controlling carrier envelope phase
Chang, Zenghu [Manhattan, KS; Li, Chengquan [Sunnyvale, CA; Moon, Eric [Manhattan, KS
2011-12-06
A chirped pulse amplification laser system. The system generally comprises a laser source, a pulse modification apparatus including first and second pulse modification elements separated by a separation distance, a positioning element, a measurement device, and a feedback controller. The laser source is operable to generate a laser pulse and the pulse modification apparatus operable to modify at least a portion of the laser pulse. The positioning element is operable to reposition at least a portion of the pulse modification apparatus to vary the separation distance. The measurement device is operable to measure the carrier envelope phase of the generated laser pulse and the feedback controller is operable to control the positioning element based on the measured carrier envelope phase to vary the separation distance of the pulse modification elements and control the carrier envelope phase of laser pulses generated by the laser source.
Blind source separation problem in GPS time series
NASA Astrophysics Data System (ADS)
Gualandi, A.; Serpelloni, E.; Belardinelli, M. E.
2016-04-01
A critical point in the analysis of ground displacement time series, as those recorded by space geodetic techniques, is the development of data-driven methods that allow the different sources of deformation to be discerned and characterized in the space and time domains. Multivariate statistic includes several approaches that can be considered as a part of data-driven methods. A widely used technique is the principal component analysis (PCA), which allows us to reduce the dimensionality of the data space while maintaining most of the variance of the dataset explained. However, PCA does not perform well in finding the solution to the so-called blind source separation (BSS) problem, i.e., in recovering and separating the original sources that generate the observed data. This is mainly due to the fact that PCA minimizes the misfit calculated using an L2 norm (χ 2), looking for a new Euclidean space where the projected data are uncorrelated. The independent component analysis (ICA) is a popular technique adopted to approach the BSS problem. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we test the use of a modified variational Bayesian ICA (vbICA) method to recover the multiple sources of ground deformation even in the presence of missing data. The vbICA method models the probability density function (pdf) of each source signal using a mix of Gaussian distributions, allowing for more flexibility in the description of the pdf of the sources with respect to standard ICA, and giving a more reliable estimate of them. Here we present its application to synthetic global positioning system (GPS) position time series, generated by simulating deformation near an active fault, including inter-seismic, co-seismic, and post-seismic signals, plus seasonal signals and noise, and an additional time-dependent volcanic source. We evaluate the ability of the PCA and ICA decomposition techniques in explaining the data and in recovering the original (known) sources. Using the same number of components, we find that the vbICA method fits the data almost as well as a PCA method, since the χ 2 increase is less than 10 % the value calculated using a PCA decomposition. Unlike PCA, the vbICA algorithm is found to correctly separate the sources if the correlation of the dataset is low (<0.67) and the geodetic network is sufficiently dense (ten continuous GPS stations within a box of side equal to two times the locking depth of a fault where an earthquake of Mw >6 occurred). We also provide a cookbook for the use of the vbICA algorithm in analyses of position time series for tectonic and non-tectonic applications.
Feature Vector Construction Method for IRIS Recognition
NASA Astrophysics Data System (ADS)
Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.
2017-05-01
One of the basic stages of iris recognition pipeline is iris feature vector construction procedure. The procedure represents the extraction of iris texture information relevant to its subsequent comparison. Thorough investigation of feature vectors obtained from iris showed that not all the vector elements are equally relevant. There are two characteristics which determine the vector element utility: fragility and discriminability. Conventional iris feature extraction methods consider the concept of fragility as the feature vector instability without respect to the nature of such instability appearance. This work separates sources of the instability into natural and encodinginduced which helps deeply investigate each source of instability independently. According to the separation concept, a novel approach of iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters and quantization with separated preliminary optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all the methods considered as a prior art by recognition accuracy on both datasets.
A method for evaluating the relation between sound source segregation and masking
Lutfi, Robert A.; Liu, Ching-Ju
2011-01-01
Sound source segregation refers to the ability to hear as separate entities two or more sound sources comprising a mixture. Masking refers to the ability of one sound to make another sound difficult to hear. Often in studies, masking is assumed to result from a failure of segregation, but this assumption may not always be correct. Here a method is offered to identify the relation between masking and sound source segregation in studies and an example is given of its application. PMID:21302979
Halder, Sebastian; Bensch, Michael; Mellinger, Jürgen; Bogdan, Martin; Kübler, Andrea; Birbaumer, Niels; Rosenstiel, Wolfgang
2007-01-01
We propose a combination of blind source separation (BSS) and independent component analysis (ICA) (signal decomposition into artifacts and nonartifacts) with support vector machines (SVMs) (automatic classification) that are designed for online usage. In order to select a suitable BSS/ICA method, three ICA algorithms (JADE, Infomax, and FastICA) and one BSS algorithm (AMUSE) are evaluated to determine their ability to isolate electromyographic (EMG) and electrooculographic (EOG) artifacts into individual components. An implementation of the selected BSS/ICA method with SVMs trained to classify EMG and EOG artifacts, which enables the usage of the method as a filter in measurements with online feedback, is described. This filter is evaluated on three BCI datasets as a proof-of-concept of the method. PMID:18288259
Halder, Sebastian; Bensch, Michael; Mellinger, Jürgen; Bogdan, Martin; Kübler, Andrea; Birbaumer, Niels; Rosenstiel, Wolfgang
2007-01-01
We propose a combination of blind source separation (BSS) and independent component analysis (ICA) (signal decomposition into artifacts and nonartifacts) with support vector machines (SVMs) (automatic classification) that are designed for online usage. In order to select a suitable BSS/ICA method, three ICA algorithms (JADE, Infomax, and FastICA) and one BSS algorithm (AMUSE) are evaluated to determine their ability to isolate electromyographic (EMG) and electrooculographic (EOG) artifacts into individual components. An implementation of the selected BSS/ICA method with SVMs trained to classify EMG and EOG artifacts, which enables the usage of the method as a filter in measurements with online feedback, is described. This filter is evaluated on three BCI datasets as a proof-of-concept of the method.
Method for isotope separation by photodeflection
Bernhardt, Anthony F.
1977-01-01
In the method of separating isotopes wherein a desired isotope species is selectively deflected out of a beam of mixed isotopes by irradiating the beam with a directed beam of light of narrowly defined frequency which is selectively absorbed by the desired species, the improvement comprising irradiating the deflected beam with light from other light sources whose frequencies are selected to cause the depopulation of any metastable excited states.
Jia, Mengyu; Chen, Xueying; Zhao, Huijuan; Cui, Shanshan; Liu, Ming; Liu, Lingling; Gao, Feng
2015-01-26
Most analytical methods for describing light propagation in turbid medium exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as the established discrete source based modeling, we herein report on an improved explicit model for a semi-infinite geometry, referred to as "Virtual Source" (VS) diffuse approximation (DA), to fit for low-albedo medium and short source-detector separation. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on close-form derivations of the VS parameters for the typical ranges of the optical parameters. This parameterized scheme is proved to inherit the mathematical simplicity of the DA approximation while considerably extending its validity in modeling the near-field photon migration in low-albedo medium. The superiority of the proposed VS-DA method to the established ones is demonstrated in comparison with Monte-Carlo simulations over wide ranges of the source-detector separation and the medium optical properties.
Stanaćević, Milutin; Li, Shuo; Cauwenberghs, Gert
2016-07-01
A parallel micro-power mixed-signal VLSI implementation of independent component analysis (ICA) with reconfigurable outer-product learning rules is presented. With the gradient sensing of the acoustic field over a miniature microphone array as a pre-processing method, the proposed ICA implementation can separate and localize up to 3 sources in mild reverberant environment. The ICA processor is implemented in 0.5 µm CMOS technology and occupies 3 mm × 3 mm area. At 16 kHz sampling rate, ASIC consumes 195 µW power from a 3 V supply. The outer-product implementation of natural gradient and Herault-Jutten ICA update rules demonstrates comparable performance to benchmark FastICA algorithm in ideal conditions and more robust performance in noisy and reverberant environment. Experiments demonstrate perceptually clear separation and precise localization over wide range of separation angles of two speech sources presented through speakers positioned at 1.5 m from the array on a conference room table. The presented ASIC leads to a extreme small form factor and low power consumption microsystem for source separation and localization required in applications like intelligent hearing aids and wireless distributed acoustic sensor arrays.
Seismoelectric imaging of shallow targets
Haines, S.S.; Pride, S.R.; Klemperer, S.L.; Biondi, B.
2007-01-01
We have undertaken a series of controlled field experiments to develop seismoelectric experimental methods for near-surface applications and to improve our understanding of seismoelectric phenomena. In a set of off-line geometry surveys (source separated from the receiver line), we place seismic sources and electrode array receivers on opposite sides of a man-made target (two sand-filled trenches) to record separately two previously documented seismoelectric modes: (1) the electromagnetic interface response signal created at the target and (2) the coseismic electric fields located within a compressional seismic wave. With the seismic source point in the center of a linear electrode array, we identify the previously undocumented seismoelectric direct field, and the Lorentz field of the metal hammer plate moving in the earth's magnetic field. We place the seismic source in the center of a circular array of electrodes (radial and circumferential orientations) to analyze the source-related direct and Lorentz fields and to establish that these fields can be understood in terms of simple analytical models. Using an off-line geometry, we create a multifold, 2D image of our trenches as dipping layers, and we also produce a complementary synthetic image through numerical modeling. These images demonstrate that off-line geometry (e.g., crosswell) surveys offer a particularly promising application of the seismoelectric method because they effectively separate the interface response signal from the (generally much stronger) coseismic and source-related fields. ?? 2007 Society of Exploration Geophysicists.
Reduced-Stress Mounting for Thermocouples
NASA Technical Reports Server (NTRS)
Wood, C.
1986-01-01
Mounting accommodates widely different coefficients of thermal expansion. In new method, legs of thermocouple placed in separate n- and p-type arrays. Two arrays contact common heat pipe as source but have separate heatpipe sinks. Net expansion (or contraction) taken up by spring mounting on heat-pipe sinks.
Interferometric superlocalization of two incoherent optical point sources.
Nair, Ranjith; Tsang, Mankei
2016-02-22
A novel interferometric method - SLIVER (Super Localization by Image inVERsion interferometry) - is proposed for estimating the separation of two incoherent point sources with a mean squared error that does not deteriorate as the sources are brought closer. The essential component of the interferometer is an image inversion device that inverts the field in the transverse plane about the optical axis, assumed to pass through the centroid of the sources. The performance of the device is analyzed using the Cramér-Rao bound applied to the statistics of spatially-unresolved photon counting using photon number-resolving and on-off detectors. The analysis is supported by Monte-Carlo simulations of the maximum likelihood estimator for the source separation, demonstrating the superlocalization effect for separations well below that set by the Rayleigh criterion. Simulations indicating the robustness of SLIVER to mismatch between the optical axis and the centroid are also presented. The results are valid for any imaging system with a circularly symmetric point-spread function.
Iterative algorithm for joint zero diagonalization with application in blind source separation.
Zhang, Wei-Tao; Lou, Shun-Tian
2011-07-01
A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.
Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology
NASA Astrophysics Data System (ADS)
Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya
A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed to a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After estimating the locations and the signals of the virtual sources by convolving the controlled acoustic transfer functions with each signal, the spatial sound is constructed at the selected point. In experiments, a sound field made by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources and are used, the effectiveness of the decomposing algorithm as well as the virtual source representation are confirmed.
Method for sequencing DNA base pairs
Sessler, A.M.; Dawson, J.
1993-12-14
The base pairs of a DNA structure are sequenced with the use of a scanning tunneling microscope (STM). The DNA structure is scanned by the STM probe tip, and, as it is being scanned, the DNA structure is separately subjected to a sequence of infrared radiation from four different sources, each source being selected to preferentially excite one of the four different bases in the DNA structure. Each particular base being scanned is subjected to such sequence of infrared radiation from the four different sources as that particular base is being scanned. The DNA structure as a whole is separately imaged for each subjection thereof to radiation from one only of each source. 6 figures.
Oosugi, Naoya; Kitajo, Keiichi; Hasegawa, Naomi; Nagasaka, Yasuo; Okanoya, Kazuo; Fujii, Naotaka
2017-09-01
Blind source separation (BSS) algorithms extract neural signals from electroencephalography (EEG) data. However, it is difficult to quantify source separation performance because there is no criterion to dissociate neural signals and noise in EEG signals. This study develops a method for evaluating BSS performance. The idea is neural signals in EEG can be estimated by comparison with simultaneously measured electrocorticography (ECoG). Because the ECoG electrodes cover the majority of the lateral cortical surface and should capture most of the original neural sources in the EEG signals. We measured real EEG and ECoG data and developed an algorithm for evaluating BSS performance. First, EEG signals are separated into EEG components using the BSS algorithm. Second, the EEG components are ranked using the correlation coefficients of the ECoG regression and the components are grouped into subsets based on their ranks. Third, canonical correlation analysis estimates how much information is shared between the subsets of the EEG components and the ECoG signals. We used our algorithm to compare the performance of BSS algorithms (PCA, AMUSE, SOBI, JADE, fastICA) via the EEG and ECoG data of anesthetized nonhuman primates. The results (Best case >JADE = fastICA >AMUSE = SOBI ≥ PCA >random separation) were common to the two subjects. To encourage the further development of better BSS algorithms, our EEG and ECoG data are available on our Web site (http://neurotycho.org/) as a common testing platform. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.
Auto white balance method using a pigmentation separation technique for human skin color
NASA Astrophysics Data System (ADS)
Tanaka, Satomi; Kakinuma, Akihiro; Kamijo, Naohiro; Takahashi, Hiroshi; Tsumura, Norimichi
2017-02-01
The human visual system maintains the perception of colors of an object across various light sources. Similarly, current digital cameras feature an auto white balance function, which estimates the illuminant color and corrects the color of a photograph as if the photograph were taken under a certain light source. The main subject in a photograph is often a person's face, which could be used to estimate the illuminant color. However, such estimation is adversely affected by differences in facial colors among individuals. The present paper proposes an auto white balance algorithm based on a pigmentation separation method that separates the human skin color image into the components of melanin, hemoglobin and shading. Pigment densities, which can be calculated from the melanin and hemoglobin components of the face, have a uniform property within the same race. We thus propose a method that uses the subject's facial color in an image and is unaffected by individual differences in facial color among Japanese people.
An analysis method for multi-component airfoils in separated flow
NASA Technical Reports Server (NTRS)
Rao, B. M.; Duorak, F. A.; Maskew, B.
1980-01-01
The multi-component airfoil program (Langley-MCARF) for attached flow is modified to accept the free vortex sheet separation-flow model program (Analytical Methods, Inc.-CLMAX). The viscous effects are incorporated into the calculation by representing the boundary layer displacement thickness with an appropriate source distribution. The separation flow model incorporated into MCARF was applied to single component airfoils. Calculated pressure distributions for angles of attack up to the stall are in close agreement with experimental measurements. Even at higher angles of attack beyond the stall, correct trends of separation, decrease in lift coefficients, and increase in pitching moment coefficients are predicted.
Using Model Point Spread Functions to Identify Binary Brown Dwarf Systems
NASA Astrophysics Data System (ADS)
Matt, Kyle; Stephens, Denise C.; Lunsford, Leanne T.
2017-01-01
A Brown Dwarf (BD) is a celestial object that is not massive enough to undergo hydrogen fusion in its core. BDs can form in pairs called binaries. Due to the great distances between Earth and these BDs, they act as point sources of light, and the angular separation between binary BDs can be small enough to appear as a single, unresolved object in images, according to the Rayleigh criterion. It is not currently possible to resolve some of these objects into separate light sources. Stephens and Noll (2006) developed a method that used model point spread functions (PSFs) to identify binary Trans-Neptunian Objects; we will use this method to identify binary BD systems in the Hubble Space Telescope archive. This method works by comparing model PSFs of single and binary sources to the observed PSFs. We also use a method to compare model spectral data for single and binary fits to determine the best parameter values for each component of the system. We describe these methods, their challenges and other possible uses in this poster.
Kronholm, Scott C.; Capel, Paul D.
2015-01-01
Quantifying the relative contributions of different sources of water to a stream hydrograph is important for understanding the hydrology and water quality dynamics of a given watershed. To compare the performance of two methods of hydrograph separation, a graphical program [baseflow index (BFI)] and an end-member mixing analysis that used high-resolution specific conductance measurements (SC-EMMA) were used to estimate daily and average long-term slowflow additions of water to four small, primarily agricultural streams with different dominant sources of water (natural groundwater, overland flow, subsurface drain outflow, and groundwater from irrigation). Because the result of hydrograph separation by SC-EMMA is strongly related to the choice of slowflow and fastflow end-member values, a sensitivity analysis was conducted based on the various approaches reported in the literature to inform the selection of end-members. There were substantial discrepancies between BFI and SC-EMMA, and neither method produced reasonable results for all four streams. Streams that had a small difference in the SC of slowflow compared with fastflow or did not have a monotonic relationship between streamflow and stream SC posed a challenge to the SC-EMMA method. The utility of the graphical BFI program was limited in the stream that had only gradual changes in streamflow. The results of this comparison suggest that the two methods may be quantifying different sources of water. Even though both methods are easy to apply, they should be applied with consideration of the streamflow and/or SC characteristics of a stream, especially where anthropogenic water sources (irrigation and subsurface drainage) are present.
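For readers unfamiliar with conductivity-based end-member mixing, the core of a two-component SC-EMMA separation is a single mixing equation. The sketch below is a generic illustration, not the study's implementation; the end-member values `sc_slow` and `sc_fast` are assumed to be supplied by the analyst, which is exactly the choice the paper's sensitivity analysis examines.

```python
# Two-component mixing: the slowflow fraction follows from how close the
# stream specific conductance (SC) is to each end-member value.
import numpy as np

def sc_emma_separation(q, sc_stream, sc_slow, sc_fast):
    """q, sc_stream: daily streamflow and stream SC arrays (same length)."""
    frac_slow = (sc_stream - sc_fast) / (sc_slow - sc_fast)
    frac_slow = np.clip(frac_slow, 0.0, 1.0)   # keep fractions physical
    slowflow = frac_slow * q
    fastflow = q - slowflow
    return slowflow, fastflow

# Example usage (units assumed: q in m3/s, SC in uS/cm, sc_slow > sc_fast):
# slow, fast = sc_emma_separation(q, sc_stream, sc_slow=650.0, sc_fast=80.0)
```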
Hepburn, Emily; Northway, Anne; Bekele, Dawit; Liu, Gang-Jun; Currell, Matthew
2018-06-11
Determining sources of heavy metals in soils, sediments and groundwater is important for understanding their fate and transport and mitigating human and environmental exposures. Artificially imported fill, natural sediments and groundwater from 240 ha of reclaimed land at Fishermans Bend in Australia were analysed for heavy metals and other parameters to determine the relative contributions from different possible sources. Fishermans Bend is Australia's largest urban re-development project; however, its complicated land-use history, geology, and multiple contamination sources pose challenges to successful re-development. We developed a method for heavy metal source separation in groundwater using statistical categorisation of the data, analysis of soil leaching values and fill/sediment XRF profiling. The method identified two major sources of heavy metals in groundwater: (1) point sources from local or up-gradient groundwater contaminated by industrial activities and/or legacy landfills; and (2) contaminated fill, where leaching of Cu, Mn, Pb and Zn was observed. Across the precinct, metals were most commonly sourced from a combination of these sources; however, eight locations indicated at least one metal sourced solely from fill leaching, and 23 locations indicated at least one metal sourced solely from impacted groundwater. Concentrations of heavy metals in groundwater ranged from 0.0001 to 0.003 mg/L (Cd), 0.001-0.1 mg/L (Cr), 0.001-0.2 mg/L (Cu), 0.001-0.5 mg/L (Ni), 0.001-0.01 mg/L (Pb), and 0.005-1.2 mg/L (Zn). Our method can determine the likely contribution of different metal sources to groundwater, helping inform more detailed contamination assessments and precinct-wide management and remediation strategies. Copyright © 2018 Elsevier Ltd. All rights reserved.
[Identification of Dens Draconis and Os Draconis by XRD method].
Chen, Guang-Yun; Wu, Qi-Nan; Shen, Bei; Chen, Rong
2012-04-01
To establish an XRD method for evaluating the quality of Os Draconis and Dens Draconis and for judging counterfeits. Dens Draconis, Os Draconis and a counterfeit of Os Draconis were analyzed by XRD. Their diffraction patterns were subjected to cluster analysis and their degree of similarity was evaluated. An analytical method for Dens Draconis and Os Draconis was established based on the fingerprint information of the 10 common peaks in the XRD patterns, and the XRD pattern of the counterfeit of Os Draconis was obtained. The similarity among separate sources of Dens Draconis was high, while the similarity among separate sources of Os Draconis differed significantly. This method can be used for the identification and evaluation of Os Draconis and Dens Draconis, and it can also be used to identify counterfeits of Os Draconis effectively.
NASA Astrophysics Data System (ADS)
Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.
2018-06-01
The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.
Methods of producing cesium-131
Meikrantz, David H; Snyder, John R
2012-09-18
Methods of producing cesium-131. The method comprises dissolving at least one non-irradiated barium source in water or a nitric acid solution to produce a barium target solution. The barium target solution is irradiated with neutron radiation to produce cesium-131, which is removed from the barium target solution. The cesium-131 is complexed with a calixarene compound to separate the cesium-131 from the barium target solution. A liquid:liquid extraction device or extraction column is used to separate the cesium-131 from the barium target solution.
Method of separation of yttrium-90 from strontium-90
Bray, Lane A.; Wester, Dennis W.
1996-01-01
A method for purifying Y-90 from a Sr-90/Y-90 "cow" wherein raw Sr-90/Y-90 source containing impurities is obtained from nuclear material reprocessing. Raw Sr-90/Y-90 source is purified to a fresh Sr-90/Y-90 source "cow" by removing impurities by addition of sodium hydroxide and by removing Cs-137 by further addition of sodium carbonate. The "cow" is set aside to allow ingrowth. An HDEHP organic extractant is obtained from a commercial supplier and further purified by saturation with Cu(II), precipitation with acetone, and washing with nitric acid. The "cow" is then dissolved in nitric acid and the purified HDEHP is washed with nitric acid and scrubbed with either nitric or hydrochloric acid. The dissolved "cow" and scrubbed HDEHP are combined in an organic extraction, separating Y-90 from Sr-90, resulting in a Sr-90/Y-90 concentration ratio of not more than 10^-7, and a metal impurity concentration of not more than 10 ppm per curie of Y-90. The separated Y-90 may then be prepared for delivery.
Method of separation of yttrium-90 from strontium-90
Bray, L.A.; Wester, D.W.
1996-04-30
A method is described for purifying Y-90 from a Sr-90/Y-90 "cow" wherein raw Sr-90/Y-90 source containing impurities is obtained from nuclear material reprocessing. Raw Sr-90/Y-90 source is purified to a fresh Sr-90/Y-90 source "cow" by removing impurities by addition of sodium hydroxide and by removing Cs-137 by further addition of sodium carbonate. The "cow" is set aside to allow ingrowth. An HDEHP organic extractant is obtained from a commercial supplier and further purified by saturation with Cu(II), precipitation with acetone, and washing with nitric acid. The "cow" is then dissolved in nitric acid and the purified HDEHP is washed with nitric acid and scrubbed with either nitric or hydrochloric acid. The dissolved "cow" and scrubbed HDEHP are combined in an organic extraction, separating Y-90 from Sr-90, resulting in a Sr-90/Y-90 concentration ratio of not more than 10^-7, and a metal impurity concentration of not more than 10 ppm per curie of Y-90. The separated Y-90 may then be prepared for delivery. 1 fig.
NASA Astrophysics Data System (ADS)
Tanioka, Yuichiro
2017-04-01
After the tsunami disaster caused by the 2011 Tohoku-oki great earthquake, improving tsunami forecasts has been an urgent issue in Japan. The National Institute of Disaster Prevention is installing a cable network system for earthquake and tsunami observation (S-NET) on the ocean bottom along the Japan and Kurile trenches. This cable system includes 125 pressure sensors (tsunami meters) which are separated by 30 km. Along the Nankai trough, JAMSTEC already installed and operates the cable network systems of seismometers and pressure sensors (DONET and DONET2). These are the densest observation networks in the world located above the source areas of great underthrust earthquakes. Real-time tsunami forecasting has depended on the estimation of earthquake parameters, such as epicenter, depth, and magnitude. Recently, a tsunami forecast method has been developed that estimates the tsunami source from tsunami waveforms observed at the ocean-bottom pressure sensors. However, when we have many pressure sensors separated by 30 km on top of the source area, we do not need to estimate the tsunami source or earthquake source to compute the tsunami. Instead, we can initiate a tsunami simulation directly from those dense tsunami observations. Observed tsunami height differences over a time interval at the ocean-bottom pressure sensors separated by 30 km were used to estimate the tsunami height distribution at a particular time. In our new method, the tsunami numerical simulation was initiated from this estimated tsunami height distribution. In this paper, the above method is improved and applied to the tsunami generated by the 2011 Tohoku-oki great earthquake. The tsunami source model of the 2011 Tohoku-oki great earthquake estimated by Gusman et al. (2012), using observed tsunami waveforms and coseismic deformation observed by GPS and ocean-bottom sensors, is used in this study. The ocean surface deformation is computed from the source model and used as an initial condition of the tsunami simulation. By assuming that this computed tsunami is a real tsunami observed at the ocean-bottom sensors, a new tsunami simulation is carried out using the above method. Stations in the assumed distribution (each separated by 15 min of arc, about 30 km) observed tsunami waveforms that were actually computed from the source model. Tsunami height distributions are estimated with the above method at 40, 80, and 120 seconds after the origin time of the earthquake. The Near-field Tsunami Inundation forecast method (Gusman et al. 2014) was used to estimate the tsunami inundation along the Sanriku coast. The result shows that the observed tsunami inundation was well explained by the estimated inundation. It also shows that it takes about 10 minutes from the origin time of the earthquake to estimate the tsunami inundation. The new method developed in this paper is very effective for real-time tsunami forecasting.
Ishii, Stephanie K L; Boyer, Treavor H
2015-08-01
Alternative approaches to wastewater management including urine source separation have the potential to simultaneously improve multiple aspects of wastewater treatment, including reduced use of potable water for waste conveyance and improved contaminant removal, especially nutrients. In order to pursue such radical changes, system-level evaluations of urine source separation in community contexts are required. The focus of this life cycle assessment (LCA) is managing nutrients from urine produced in a residential setting with urine source separation and struvite precipitation, as compared with a centralized wastewater treatment approach. The life cycle impacts evaluated in this study pertain to construction of the urine source separation system and operation of drinking water treatment, decentralized urine treatment, and centralized wastewater treatment. System boundaries include fertilizer offsets resulting from the production of urine based struvite fertilizer. As calculated by the Tool for the Reduction and Assessment of Chemical and Other Environmental Impacts (TRACI), urine source separation with MgO addition for subsequent struvite precipitation with high P recovery (Scenario B) has the smallest environmental cost relative to existing centralized wastewater treatment (Scenario A) and urine source separation with MgO and Na3PO4 addition for subsequent struvite precipitation with concurrent high P and N recovery (Scenario C). Preliminary economic evaluations show that the three urine management scenarios are relatively equal on a monetary basis (<13% difference). The impacts of each urine management scenario are most sensitive to the assumed urine composition, the selected urine storage time, and the assumed electricity required to treat influent urine and toilet water used to convey urine at the centralized wastewater treatment plant. The importance of full nutrient recovery from urine in combination with the substantial chemical inputs required for N recovery via struvite precipitation indicate the need for alternative methods of N recovery. Copyright © 2015 Elsevier Ltd. All rights reserved.
Detection of Partial Discharge Sources Using UHF Sensors and Blind Signal Separation
Boya, Carlos; Parrado-Hernández, Emilio
2017-01-01
The measurement of the emitted electromagnetic energy in the UHF region of the spectrum allows the detection of partial discharges and, thus, the on-line monitoring of the condition of the insulation of electrical equipment. Unfortunately, determining the affected asset is difficult when there are several simultaneous insulation defects. This paper proposes the use of an independent component analysis (ICA) algorithm to separate the signals coming from different partial discharge (PD) sources. The performance of the algorithm has been tested using UHF signals generated by test objects. The results are validated by two automatic classification techniques: support vector machines and similarity with class mean. Both methods corroborate the suitability of the algorithm to separate the signals emitted by each PD source even when they are generated by the same type of insulation defect. PMID:29140267
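As a rough illustration of the separation step described above (not the paper's code), scikit-learn's FastICA can be applied to synchronized multi-sensor UHF records; the array `sensors` and the number of sources are assumed inputs for the sketch.

```python
# Hedged sketch: unmix simultaneously recorded UHF signals from several
# partial-discharge sources with independent component analysis.
import numpy as np
from sklearn.decomposition import FastICA

def separate_pd_sources(sensors, n_sources):
    """sensors: (n_sensors, n_samples) synchronized UHF records."""
    ica = FastICA(n_components=n_sources, random_state=0)
    estimated = ica.fit_transform(sensors.T)   # (n_samples, n_sources)
    mixing = ica.mixing_                       # estimated mixing matrix
    return estimated.T, mixing                 # separated PD waveforms
```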
The current status of the MASHA setup
NASA Astrophysics Data System (ADS)
Vedeneev, V. Yu.; Rodin, A. M.; Krupa, L.; Belozerov, A. V.; Chernysheva, E. V.; Dmitriev, S. N.; Gulyaev, A. V.; Gulyaeva, A. V.; Kamas, D.; Kliman, J.; Komarov, A. B.; Motycak, S.; Novoselov, A. S.; Salamatin, V. S.; Stepantsov, S. V.; Podshibyakin, A. V.; Yukhimchuk, S. A.; Granja, C.; Pospisil, S.
2017-11-01
The MASHA setup, designed as a mass separator with a resolving power of about 1700 that allows mass identification of superheavy nuclides, is described. The setup uses the solid ISOL (Isotope Separation On-Line) method. In the present article the upgrade of several parts of MASHA is described: the target box (rotating target + hot catcher), the ion source based on electron cyclotron resonance, and the data acquisition, beam diagnostics and control systems. The upgrade is undertaken in order to increase the total separation efficiency, reduce the separation time, improve the working stability of the installation and make continuous measurements at high beam currents possible. The ion source efficiency was measured in an autonomous regime using calibrated gas leaks of Kr and Xe injected directly into the ion source. Some results of the first experiments on the production of radon isotopes using the multi-nucleon transfer reaction 48Ca+242Pu are also described in the present article. The use of a TIMEPIX detector with the MASHA setup for the identification of neutron-rich Rn isotopes is described as well.
Constrained Null Space Component Analysis for Semiblind Source Separation Problem.
Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn
2018-02-01
The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the well-known ICA framework. We introduced an alternative approach based on the null space component analysis (NCA) framework and referred to it as the c-NCA approach. We also presented the c-NCA algorithm, which uses signal-dependent semidefinite operators, which are bilinear mappings, as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators on the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrated that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.
Method for Monitored Separation and Collection of Biological Materials
NASA Technical Reports Server (NTRS)
Fox, George Edward (Inventor); Jackson, George William (Inventor); Willson, Richard Coale (Inventor)
2014-01-01
A device for separating and purifying useful quantities of particles comprises: (a) an anolyte reservoir connected to an anode, the anolyte reservoir containing an electrophoresis buffer; (b) a catholyte reservoir connected to a cathode, the catholyte reservoir also containing the electrophoresis buffer; (c) a power supply connected to the anode and to the cathode; (d) a column having a first end inserted into the anolyte reservoir, a second end inserted into the catholyte reservoir, and containing a separation medium; (e) a light source; (f) a first optical fiber having a first fiber end inserted into the separation medium, and having a second fiber end connected to the light source; (g) a photo detector; (h) a second optical fiber having a third fiber end inserted into the separation medium, and having a fourth fiber end connected to the photo detector; and (i) an ion-exchange membrane in the anolyte reservoir.
Apparatus And Methods For Launching And Receiving A Broad Wavelength Range Source
Von Drasek, William A.; Sonnenfroh, David; Allen, Mark G.; Stafford-Evans, Joy
2006-02-28
An apparatus and method for simultaneous detection of N gas species through laser radiation attenuation techniques is disclosed. Each of the N species has a spectral absorption band. N laser sources operate at a wavelength λN in a spectral absorption band separated by the cutoff wavelength for single-mode transmission. Each laser source corresponds to a gas species and transmits radiation through an optical fiber constructed and arranged to provide single-mode transmission with minimal power loss.
Imaging method for monitoring delivery of high dose rate brachytherapy
Weisenberger, Andrew G; Majewski, Stanislaw
2012-10-23
A method for in-situ monitoring of both the balloon/cavity and the radioactive source in brachytherapy treatment, utilizing at least one pair of miniature gamma cameras to acquire separate images of: 1) the radioactive source as it is moved in the tumor volume during brachytherapy; and 2) a relatively low intensity radiation source produced either by an injected radiopharmaceutical rendering cancerous tissue visible or by a radioactive solution filling a balloon surgically implanted into the cavity formed by the surgical resection of a tumor.
NASA Astrophysics Data System (ADS)
Milej, Daniel; Janusek, Dariusz; Gerega, Anna; Wojtkiewicz, Stanislaw; Sawosz, Piotr; Treszczanowicz, Joanna; Weigl, Wojciech; Liebert, Adam
2015-10-01
The aim of the study was to determine optimal measurement conditions for assessment of brain perfusion with the use of an optical contrast agent and time-resolved diffuse reflectometry in the near-infrared wavelength range. The source-detector separation at which the distribution of times of flight (DTOF) of photons provided useful information on the inflow of the contrast agent to the intracerebral brain tissue compartments was determined. A series of Monte Carlo simulations was performed in which the inflow and washout of the dye in extra- and intracerebral tissue compartments were modeled and the DTOFs were obtained at different source-detector separations. Furthermore, tests on diffuse phantoms were carried out using a time-resolved setup allowing the measurement of DTOFs at 16 source-detector separations. Finally, the setup was applied in experiments carried out on the heads of adult volunteers during intravenous injection of indocyanine green. Analysis of statistical moments of the measured DTOFs showed that a source-detector separation of 6 cm is recommended for monitoring of the inflow of optical contrast to the intracerebral brain tissue compartments with the use of continuous wave reflectometry, whereas a separation of 4 cm is enough when the higher-order moments of DTOFs are available.
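The moment analysis mentioned above reduces to computing the low-order statistical moments of each measured DTOF. Below is a minimal sketch, assuming `t` is the time-of-flight axis (e.g., in ns) and `dtof` the corresponding photon counts; it is an illustration, not the study's processing chain.

```python
# Low-order moments of a distribution of times of flight (DTOF):
# total photon count, mean time of flight, and variance.
import numpy as np

def dtof_moments(t, dtof):
    n_total = dtof.sum()                                  # 0th moment
    mean_tof = (t * dtof).sum() / n_total                 # 1st moment
    var_tof = ((t - mean_tof) ** 2 * dtof).sum() / n_total  # 2nd central moment
    return n_total, mean_tof, var_tof
```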
Method for separating mono- and di-octylphenyl phosphoric acid esters
Arnold, Jr., Wesley D.
1977-01-01
A method for separating mono-octylphenyl phosphoric acid ester and di-octylphenyl phosphoric acid ester from a mixture thereof comprises reacting the ester mixture with a source of lithium or sodium ions to form a mixture of the phosphate salts; contacting the salt mixture with an organic solvent which causes the dioctylphenyl phosphate salt to be dissolved in the organic solvent phase and the mono-octylphenyl phosphate salt to exist in a solid phase; separating the phases; recovering the phosphate salts from their respective phases; and acidifying the recovered salts to form the original phosphoric acid esters.
Dual x-ray fluorescence spectrometer and method for fluid analysis
Wilson, Bary W.; Shepard, Chester L.
2005-02-22
Disclosed are an X-ray fluorescence (XRF) spectrometer and method for on-site and in-line determination of contaminant elements in lubricating oils and in fuel oils on board a marine vessel. An XRF source block 13 contains two radionuclide sources 16, 17 (e.g. Cd 109 and Fe 55), each oriented 180 degrees from the other to excite separate targets. The Cd 109 source 16 excites sample lube oil flowing through a low molecular weight sample line 18. The Fe 55 source 17 excites fuel oil manually presented to the source beam inside a low molecular weight vial 26 or other container. Two separate detectors A and B are arranged to detect the fluorescent x-rays from the targets, for example photons from the analyte atoms in the lube oil and sulfur-identifying x-rays from bunker fuel oil. The system allows both automated in-line and manual on-site analysis using one set of signal processing and multi-channel analyzer electronics 34, 37 as well as one computer 39 and user interface 43.
Computation of viscous flows over airfoils, including separation, with a coupling approach
NASA Technical Reports Server (NTRS)
Leballeur, J. C.
1983-01-01
Viscous incompressible flows over single or multiple airfoils, with or without separation, were computed using an inviscid flow calculation, with modified boundary conditions, and by a method providing calculation and coupling for boundary layers and wakes, within conditions of strong viscous interaction. The inviscid flow is calculated with a method of singularities, the numerics of which were improved by using both source and vortex distributions over the profiles, associated with regularity conditions for the fictitious flows inside the airfoils. The viscous calculation estimates the difference between viscous flow and inviscid interacting flow, with a direct or inverse integral method, laminar or turbulent, with or without reverse flow. The numerical method for coupling determines iteratively the boundary conditions for the inviscid flow. For attached viscous layer regions, an underrelaxation factor is locally calculated to ensure stability. For separated or separating regions, a special semi-inverse algorithm is used. Comparisons with experiments are presented.
Borchardt, M A; Spencer, S K; Bertz, P D; Ware, M W; Dubey, J P; Alan Lindquist, H D
2009-10-01
To evaluate the effectiveness of continuous separation channel centrifugation for concentrating Toxoplasma gondii and Cyclospora cayetanensis from drinking water and environmental waters. Ready-to-seed vials with known quantities of T. gondii and C. cayetanensis oocysts were prepared by flow cytometry. Oocysts were seeded at densities ranging from 1 to 1000 oocysts l⁻¹ into 10 to 100 l test volumes of finished drinking water, water with manipulated turbidity, and the source waters from nine drinking water utilities. Oocysts were recovered using continuous separation channel centrifugation and counted on membrane filters using epifluorescent microscopy. Recovery efficiencies of both parasites were ≥84% in 10 l volumes of drinking water. In source waters, recoveries ranged from 64% to 100%, with the lowest recoveries in the most turbid waters. Method precision was between 10% and 20% coefficient of variation. Toxoplasma gondii and C. cayetanensis are effectively concentrated from various water matrices by continuous separation channel centrifugation. Waterborne transmission of T. gondii and C. cayetanensis presents another challenge in producing clean drinking water and protecting public health. Detection of these parasites relies on effectively concentrating oocysts from ambient water; otherwise, false negatives may result. Validation data specific to T. gondii and C. cayetanensis concentration methods are limited. Continuous separation channel centrifugation recovers oocysts with high efficiency and precision, the method attributes required to accurately assess the risk of waterborne transmission.
A review of multivariate methods in brain imaging data fusion
NASA Astrophysics Data System (ADS)
Sui, Jing; Adali, Tülay; Li, Yi-Ou; Yang, Honghui; Calhoun, Vince D.
2010-03-01
On joint analysis of multi-task brain imaging data sets, a variety of multivariate methods have shown their strengths and been applied to achieve different purposes based on their respective assumptions. In this paper, we provide a comprehensive review on optimization assumptions of six data fusion models, including 1) four blind methods: joint independent component analysis (jICA), multimodal canonical correlation analysis (mCCA), CCA on blind source separation (sCCA) and partial least squares (PLS); 2) two semi-blind methods: parallel ICA and coefficient-constrained ICA (CC-ICA). We also propose a novel model for joint blind source separation (BSS) of two datasets using a combination of sCCA and jICA, i.e., 'CCA+ICA', which, compared with other joint BSS methods, can achieve higher decomposition accuracy as well as the correct automatic source link. Applications of the proposed model to real multitask fMRI data are compared to joint ICA and mCCA; CCA+ICA further shows its advantages in capturing both shared and distinct information, differentiating groups, and interpreting duration of illness in schizophrenia patients, hence promising applicability to a wide variety of medical imaging problems.
Blind separation of positive sources by globally convergent gradient search.
Oja, Erkki; Plumbley, Mark
2004-09-01
The instantaneous noise-free linear mixing model in independent component analysis is largely a solved problem under the usual assumption of independent nongaussian sources and full column rank mixing matrix. However, with some prior information on the sources, like positivity, new analysis and perhaps simplified solution methods may yet become possible. In this letter, we consider the task of independent component analysis when the independent sources are known to be nonnegative and well grounded, which means that they have a nonzero pdf in the region of zero. It can be shown that in this case, the solution method is basically very simple: an orthogonal rotation of the whitened observation vector into nonnegative outputs will give a positive permutation of the original sources. We propose a cost function whose minimum coincides with nonnegativity and derive the gradient algorithm under the whitening constraint, under which the separating matrix is orthogonal. We further prove that in the Stiefel manifold of orthogonal matrices, the cost function is a Lyapunov function for the matrix gradient flow, implying global convergence. Thus, this algorithm is guaranteed to find the nonnegative well-grounded independent sources. The analysis is complemented by a numerical simulation, which illustrates the algorithm.
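A compact numerical sketch of this idea (whiten, then rotate with an orthogonal matrix driven by a gradient that penalizes negative outputs) is given below. It is an illustration under stated assumptions, not the authors' exact algorithm; the step size and iteration count are arbitrary choices.

```python
# Hedged sketch of nonnegative ICA on whitened data: gradient descent on the
# "negative part" cost with re-orthogonalization after each step.
import numpy as np
from scipy.linalg import sqrtm, inv

def whiten(x):
    """x: (n_sources, n_samples) mixed observations."""
    x = x - x.mean(axis=1, keepdims=True)
    cov = np.cov(x)
    return inv(sqrtm(cov)).real @ x

def nonnegative_ica(x, n_iter=500, lr=0.1):
    z = whiten(x)                                # whitened observations
    n = z.shape[0]
    w = np.linalg.qr(np.random.randn(n, n))[0]   # random orthogonal start
    for _ in range(n_iter):
        y = w @ z
        y_neg = np.minimum(y, 0.0)               # cost penalizes negative outputs
        grad = y_neg @ z.T / z.shape[1]
        w = w - lr * grad
        w = inv(sqrtm(w @ w.T)).real @ w         # project back onto orthogonal matrices
    return w @ z, w                              # nonnegative source estimates, rotation
```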
Darnaude, Audrey M.
2016-01-01
Background Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some/all nursery-signatures, may need to be also estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011), from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering five different sampling scenarios, where 0–4 lagoons were excluded from the nursery-source dataset, and six nursery-signature separation scenarios that simulated data separated by 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery-sources were sampled but exhibited large variability among cohorts and increased with the number of non-sampled sources up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but these estimates tended to be less biased and more uncertain than the mixing proportion ones across all sampling scenarios (BI < 0.13, SE < 0.29). Increasing separation among nursery signatures improved the reliability of mixing proportion estimates, but led to non-linear responses in baseline signature parameters. Low uncertainty but a consistent underestimation bias affected the estimated number of nursery sources across all incomplete sampling scenarios. Discussion ML-MM produced reliable estimates of mixing proportions and nursery-signatures under an important range of incomplete sampling and nursery-signature separation scenarios. This method failed, however, in estimating the true number of nursery sources, reflecting a pervasive issue affecting mixture models, within and beyond the ML framework. Large differences in bias and uncertainty found among cohorts were linked to differences in separation of chemical signatures among nursery habitats. Simulation approaches, such as those presented here, could be useful to evaluate the sensitivity of MM results to separation and variability in nursery-signatures for other species, habitats or cohorts. PMID:27761305
Recent developments in optical detection methods for microchip separations.
Götz, Sebastian; Karst, Uwe
2007-01-01
This paper summarizes the features and performances of optical detection systems currently applied in order to monitor separations on microchip devices. Fluorescence detection, which delivers very high sensitivity and selectivity, is still the most widely applied method of detection. Instruments utilizing laser-induced fluorescence (LIF) and lamp-based fluorescence along with recent applications of light-emitting diodes (LED) as excitation sources are also covered in this paper. Since chemiluminescence detection can be achieved using extremely simple devices which no longer require light sources and optical components for focusing and collimation, interesting approaches based on this technique are presented, too. Although UV/vis absorbance is a detection method that is commonly used in standard desktop electrophoresis and liquid chromatography instruments, it has not yet reached the same level of popularity for microchip applications. Current applications of UV/vis absorbance detection to microchip separations and innovative approaches that increase sensitivity are described. This article, which contains 85 references, focuses on developments and applications published within the last three years, points out exciting new approaches, and provides future perspectives on this field.
Efficient image enhancement using sparse source separation in the Retinex theory
NASA Astrophysics Data System (ADS)
Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik
2017-11-01
Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves the image edge and can very nearly replicate the original image without any special operation.
Solid Phase Extraction (SPE) for Biodiesel Processing and Analysis
2017-12-13
METHODS ... sources. There are several methods that can be applied to the development of separation techniques that may replace necessary water wash steps in biodiesel refinement. Unfortunately, the most common methods are poorly suited or face high costs when applied to diesel purification. Distillation is ...
Systematic study of target localization for bioluminescence tomography guided radiation therapy
Yu, Jingjing; Zhang, Bin; Iordachita, Iulian I.; Reyes, Juvenal; Lu, Zhihao; Brock, Malcolm V.; Patterson, Michael S.; Wong, John W.
2016-01-01
Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulations, a tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with sizes from 0.5 to 3 mm radius at depths of 3–12 mm. The same configuration was also applied for the double-source simulation with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at a depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: For the simulation study, approximately 1 mm accuracy can be achieved in localizing the center of mass (CoM) for single-source cases and the grouped CoM for double-source cases. For the case of a 1.5 mm radius source, a common tumor size used in preclinical studies, their simulation shows that for all the source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm. Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish two sources, but BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging: accuracies of 1 and 1.7 mm were attained for the single-source case at 6 and 9 mm depth, respectively. For the two-source in vivo study, both sources can be distinguished at 3 and 5 mm separations, and approximately 1 mm localization accuracy can also be achieved. Conclusions: This study demonstrated that their multispectral BLT/CBCT system could be potentially applied to localize and resolve multiple sources over a wide range of source sizes, depths, and separations. The average accuracy of localizing the CoM for single sources and the grouped CoM for double sources is approximately 1 mm, except for deep-seated targets. The information provided in this study can be instructive to devise treatment margins for BLT-guided irradiation. These results also suggest that the 3D BLT system could guide radiation in situations with multiple targets, such as metastatic tumor models. PMID:27147371
ERP denoising in multichannel EEG data using contrasts between signal and noise subspaces.
Ivannikov, Andriy; Kalyakin, Igor; Hämäläinen, Jarmo; Leppänen, Paavo H T; Ristaniemi, Tapani; Lyytinen, Heikki; Kärkkäinen, Tommi
2009-06-15
In this paper, a new method intended for ERP denoising in multichannel EEG data is discussed. The denoising is done by separating ERP/noise subspaces in multidimensional EEG data with a linear transformation, followed by dimension reduction in which noise components are ignored during the inverse transformation. The separation matrix is found based on the assumption that ERP sources are deterministic for all repetitions of the same type of stimulus within the experiment, while the other noise sources do not obey this determinacy property. A detailed derivation of the technique is given together with the analysis of the results of its application to a real high-density EEG data set. The interpretation of the results and the performance of the proposed method under conditions when the basic assumptions are violated (e.g., when the problem is underdetermined) are also discussed. Moreover, we study how the number of channels and trials used by the method influences the effectiveness of ERP/noise subspace separation. In addition, we also explore the impact of different data resampling strategies on the performance of the considered algorithm. The results can help in determining the optimal parameters of the equipment/methods used to elicit and reliably estimate ERPs.
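One common way to exploit the trial-determinacy assumption, shown here only as a hedged sketch and not necessarily the authors' exact transformation, is to contrast the covariance of the trial-averaged data (where the ERP survives and uncorrelated noise averages out) with the covariance of the single trials via a generalized eigenvalue decomposition, and then reconstruct the data from the leading components.

```python
# Generic DSS-style ERP/noise subspace separation sketch.
import numpy as np
from scipy.linalg import eigh

def erp_subspace_filter(trials, n_keep):
    """trials: (n_trials, n_channels, n_samples); returns denoised trials."""
    avg = trials.mean(axis=0)                          # ERP emphasised here
    c_avg = np.cov(avg)                                # covariance of the average
    c_all = np.cov(np.concatenate(list(trials), axis=1))  # covariance of raw trials
    evals, vecs = eigh(c_avg, c_all)                   # generalized eigenvectors
    order = np.argsort(evals)[::-1]
    unmix = vecs[:, order].T                           # components sorted by ERP content
    mix = np.linalg.pinv(unmix)
    keep = np.zeros_like(evals)
    keep[:n_keep] = 1.0                                # ignore the noise components
    return np.einsum('ck,k,kd,ndt->nct', mix, keep, unmix, trials)
```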
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afroz, Rafia, E-mail: rafia_afroz@yahoo.com; Masud, Muhammad Mehedi
2011-04-15
This study employed the contingent valuation method to estimate the willingness to pay (WTP) of households to improve the waste collection system in Kuala Lumpur, Malaysia. The objective of this study is to evaluate how household WTP changes when recycling and waste separation at source are made mandatory. The methodology consisted of asking people directly about their WTP for an additional waste collection service charge to cover the costs of a new waste management project. The new waste management project consisted of two versions: version A (recycling and waste separation are mandatory) and version B (recycling and waste separation are not mandatory). The households lowered their WTP for version A when they were asked to separate the waste at source, although all the facilities for waste separation would be provided to them. The result of this study indicates that the households were not conscious of the benefits of recycling and waste separation. Concerted efforts should be taken to raise the environmental consciousness of households through education and more publicity regarding waste separation, reduction and recycling.
NASA Astrophysics Data System (ADS)
Araújo, Iván Gómez; Sánchez, Jesús Antonio García; Andersen, Palle
2018-05-01
Transmissibility-based operational modal analysis is a recent and alternative approach used to identify the modal parameters of structures under operational conditions. This approach is advantageous compared with traditional operational modal analysis because it does not make any assumptions about the excitation spectrum (i.e., white noise with a flat spectrum). However, common methodologies do not include a procedure to extract closely spaced modes with low signal-to-noise ratios. This issue is relevant when considering that engineering structures generally have closely spaced modes and that their measured responses present high levels of noise. Therefore, to overcome these problems, a new combined method for modal parameter identification is proposed in this work. The proposed method combines blind source separation (BSS) techniques and transmissibility-based methods. Here, BSS techniques were used to recover source signals, and transmissibility-based methods were applied to estimate modal information from the recovered source signals. To achieve this combination, a new method to define a transmissibility function was proposed. The suggested transmissibility function is based on the relationship between the power spectral density (PSD) of mixed signals and the PSD of signals from a single source. The numerical responses of a truss structure with high levels of added noise and very closely spaced modes were processed using the proposed combined method to evaluate its ability to identify modal parameters in these conditions. Colored and white noise excitations were used for the numerical example. The proposed combined method was also used to evaluate the modal parameters of an experimental test on a structure containing closely spaced modes. The results showed that the proposed combined method is capable of identifying very closely spaced modes in the presence of noise and, thus, may be potentially applied to improve the identification of damping ratios.
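For orientation, a classical PSD-based transmissibility estimate between two measured responses can be written as below; this is a generic sketch and does not reproduce the paper's combined definition built on BSS-recovered single-source signals. The sampling rate and segment length are placeholder values.

```python
# Transmissibility T_ij(f) = S_ij(f) / S_jj(f) estimated from spectral densities.
import numpy as np
from scipy.signal import csd, welch

def transmissibility(x_i, x_j, fs=1024.0, nperseg=2048):
    """x_i: response channel, x_j: reference channel (1-D arrays)."""
    f, s_ij = csd(x_i, x_j, fs=fs, nperseg=nperseg)   # cross-spectral density
    _, s_jj = welch(x_j, fs=fs, nperseg=nperseg)      # auto-spectral density
    return f, s_ij / s_jj
```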
Low-fat Milk Consumption among Children and Adolescents in the United States, 2007-2008
... separately reported for this group. Data source and methods: Data from the National Health and Nutrition Examination ... percentages were estimated using Taylor series linearization, a method that incorporates the sample weights and sample design. ...
Iterative deblending of simultaneous-source data using a coherency-pass shaping operator
NASA Astrophysics Data System (ADS)
Zu, Shaohuan; Zhou, Hui; Mao, Weijian; Zhang, Dong; Li, Chao; Pan, Xiao; Chen, Yangkang
2017-10-01
Simultaneous-source acquisition offers great economic savings, but it brings an unprecedented challenge: removing the crosstalk interference in the recorded seismic data. In this paper, we propose a novel iterative method to separate simultaneous-source data based on a coherency-pass shaping operator. The coherency-pass filter is used to constrain the model, that is, the unblended data to be estimated, in the shaping regularization framework. In a simultaneous-source survey, the incoherent interference from adjacent shots greatly increases the rank of the frequency-domain Hankel matrix that is formed from the blended record. Thus, a method based on rank reduction is capable of separating the blended record to some extent. However, the shortcoming is that it may leave residual noise when there is strong blending interference. We propose to cascade the rank reduction and thresholding operators to deal with this issue. In the initial iterations, we adopt a small rank to strongly suppress the blending interference and a large thresholding value as a strong constraint to remove the residual noise in the time domain. In the later iterations, since more and more events have been recovered, we weaken the constraint by increasing the rank and shrinking the threshold to recover weak events and to guarantee convergence. In this way, the combined rank reduction and thresholding strategy acts as a coherency-pass filter, which only passes the coherent high-amplitude component after rank reduction instead of passing both signal and noise, as traditional rank-reduction-based approaches do. Two synthetic examples are tested to demonstrate the performance of the proposed method. In addition, the application to two field data sets (common receiver gathers and stacked profiles) further validates the effectiveness of the proposed method.
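A simplified sketch of the coherency-pass shaping operator described above (frequency-domain Hankel rank reduction followed by time-domain thresholding) is given below; it omits the blending and pseudo-deblending operators and the iteration schedule of the actual method, and the rank and threshold are fixed illustration values that would normally be relaxed over iterations.

```python
# Hedged sketch of a coherency-pass operator: SVD rank reduction of the Hankel
# matrix built from each frequency slice, then soft thresholding in time.
import numpy as np
from scipy.linalg import hankel

def rank_reduce_slice(v, rank):
    """Reduce the rank of the Hankel matrix built from one frequency slice v."""
    n = len(v)
    m = n // 2 + 1
    h = hankel(v[:m], v[m - 1:])
    u, s, vt = np.linalg.svd(h, full_matrices=False)
    h_low = (u[:, :rank] * s[:rank]) @ vt[:rank]
    # average the anti-diagonals to map the low-rank matrix back to a 1-D slice
    out = np.zeros(n, dtype=complex)
    cnt = np.zeros(n)
    for i in range(h_low.shape[0]):
        for j in range(h_low.shape[1]):
            out[i + j] += h_low[i, j]
            cnt[i + j] += 1
    return out / cnt

def coherency_pass(gather, rank=3, thresh=0.1):
    """gather: (n_time, n_traces) common-receiver gather."""
    spec = np.fft.rfft(gather, axis=0)
    for k in range(spec.shape[0]):
        spec[k] = rank_reduce_slice(spec[k], rank)
    filtered = np.fft.irfft(spec, n=gather.shape[0], axis=0)
    # pass only the coherent, high-amplitude energy (soft threshold in time)
    scale = thresh * np.max(np.abs(filtered))
    return np.sign(filtered) * np.maximum(np.abs(filtered) - scale, 0.0)
```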
Quantitative identification of riverine nitrogen from point, direct runoff and base flow sources.
Huang, Hong; Zhang, Baifa; Lu, Jun
2014-01-01
We present a methodological example for quantifying the contributions of riverine total nitrogen (TN) from point, direct runoff and base flow sources by combining a recursive digital filter technique and statistical methods. First, we separated daily riverine flow into direct runoff and base flow using a recursive digital filter technique; then, a statistical model was established using daily simultaneous data for TN load, direct runoff rate, base flow rate, and temperature; and finally, the TN loading from direct runoff and base flow sources could be inversely estimated. As a case study, this approach was adopted to identify the TN source contributions in Changle River, eastern China. Results showed that, during 2005-2009, the total annual TN input to the river was 1,700.4±250.2 ton, and the contributions of point, direct runoff and base flow sources were 17.8±2.8%, 45.0±3.6%, and 37.2±3.9%, respectively. The innovation of the approach is that the nitrogen from direct runoff and base flow sources could be separately quantified. The approach is simple but detailed enough to take the major factors into account, providing an effective and reliable method for riverine nitrogen loading estimation and source apportionment.
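The flow-separation step can be illustrated with a widely used one-parameter recursive digital filter (the Lyne-Hollick form); the specific filter and parameter used in the study may differ, and alpha = 0.925 is only a conventional example value.

```python
# Hedged sketch: split daily streamflow into direct runoff (quickflow) and base flow.
import numpy as np

def baseflow_filter(q, alpha=0.925):
    """q: daily streamflow array; returns (base flow, quickflow)."""
    quick = np.zeros_like(q, dtype=float)
    for t in range(1, len(q)):
        f = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = min(max(f, 0.0), q[t])   # keep quickflow physical
    base = q - quick
    return base, quick
```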
DEEP ATTRACTOR NETWORK FOR SINGLE-MICROPHONE SPEAKER SEPARATION.
Chen, Zhuo; Luo, Yi; Mesgarani, Nima
2017-03-01
Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and the unknown number of sources in the mixture. We propose a novel deep learning framework for single-channel speech separation by creating attractor points in a high-dimensional embedding space of the acoustic signals which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model is different from prior works in that it implements end-to-end training, and it does not depend on the number of sources in the mixture. Two strategies are explored at test time, K-means and fixed attractor points, where the latter requires no post-processing and can be implemented in real time. We evaluated our system on the Wall Street Journal dataset and show a 5.49% improvement over the previous state-of-the-art methods.
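The attractor-and-mask step (leaving aside the training of the embedding network itself) can be sketched with plain numpy; the array shapes and the use of ideal one-hot assignments to form the attractors are assumptions for illustration only.

```python
# Hedged sketch: attractors as centroids of per-source embeddings, masks as
# a softmax over the similarity of every time-frequency bin to each attractor.
import numpy as np

def attractor_masks(embeddings, assignments):
    """embeddings: (n_bins, emb_dim); assignments: (n_bins, n_sources) one-hot."""
    counts = assignments.sum(axis=0, keepdims=True)          # bins per source
    attractors = (assignments.T @ embeddings) / counts.T      # (n_sources, emb_dim)
    logits = embeddings @ attractors.T                        # similarity to each attractor
    logits -= logits.max(axis=1, keepdims=True)
    masks = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return attractors, masks                                  # masks: (n_bins, n_sources)
```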
Wang, Huaqing; Li, Ruitong; Tang, Gang; Yuan, Hongfang; Zhao, Qingliang; Cao, Xi
2014-01-01
A compound fault signal usually contains multiple characteristic signals and strong confusion noise, which makes it difficult to separate weak fault signals from it through conventional approaches such as FFT-based envelope detection, wavelet transform or empirical mode decomposition used individually. In order to improve the compound fault diagnosis of rolling bearings via signal separation, the present paper proposes a new method to identify compound faults from measured mixed signals, based on the ensemble empirical mode decomposition (EEMD) method and the independent component analysis (ICA) technique. With this approach, a vibration signal is first decomposed into intrinsic mode functions (IMFs) by the EEMD method to obtain multichannel signals. Then, according to a cross-correlation criterion, the corresponding IMFs are selected as the input matrix of ICA. Finally, the compound faults can be separated effectively by executing the ICA method, which makes the fault features more easily extracted and more clearly identified. Experimental results validate the effectiveness of the proposed method in compound fault separation, which works not only for the outer race defect, but also for the roller defect and the unbalance fault of the experimental system. PMID:25289644
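A hedged outline of this EEMD-plus-ICA pipeline is sketched below, assuming the third-party PyEMD package for EEMD and scikit-learn for ICA; the correlation threshold and number of sources are illustrative choices, not values from the paper.

```python
# Hedged sketch: decompose a single vibration signal into IMFs, keep the IMFs
# most correlated with the raw signal, and feed them to ICA as a virtual
# multichannel input.
import numpy as np
from PyEMD import EEMD                      # pip install EMD-signal (assumed dependency)
from sklearn.decomposition import FastICA

def eemd_ica_separation(signal, n_sources=2, corr_thresh=0.3):
    imfs = EEMD().eemd(signal)                                   # (n_imfs, n_samples)
    corr = np.array([abs(np.corrcoef(imf, signal)[0, 1]) for imf in imfs])
    selected = imfs[corr >= corr_thresh]                         # cross-correlation criterion
    ica = FastICA(n_components=n_sources, random_state=0)
    return ica.fit_transform(selected.T).T                       # separated fault components
```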
Radtke, Robert P; Stokes, Robert H; Glowka, David A
2014-12-02
A method for operating an impulsive type seismic energy source in a firing sequence having at least two actuations for each seismic impulse to be generated by the source. The actuations have a time delay between them related to a selected energy frequency peak of the source output. One example of the method is used for generating seismic signals in a wellbore and includes discharging electric current through a spark gap disposed in the wellbore in at least one firing sequence. The sequence includes at least two actuations of the spark gap separated by an amount of time selected to cause acoustic energy resulting from the actuations to have peak amplitude at a selected frequency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Chaochao; Duan, Jicheng; Liu, Tao
Human biofluids, especially blood plasma or serum, hold great potential as the sources of candidate biomarkers for various diseases; however, the enormous dynamic range of protein concentrations in biofluids represents a significant analytical challenge for detecting promising low-abundance proteins. Over the last decade, various immunoaffinity chromatographic methods have been developed and routinely applied for separating low-abundance proteins from the high- and moderate-abundance proteins, thus enabling much more effective detection of low-abundance proteins. Herein, we review the advances of immunoaffinity separation methods and their contributions to the proteomic applications in human biofluids. The limitations and future perspectives of immunoaffinity separation methods are also discussed.
Isotope separation by photoselective dissociative electron capture
Stevens, C.G.
1978-08-29
Disclosed is a method of separating isotopes based on photoselective electron capture dissociation of molecules having an electron capture cross section dependence on the vibrational state of the molecule. A molecular isotope source material is irradiated to selectively excite those molecules containing a desired isotope to a predetermined vibrational state having associated therewith an electron capture energy region substantially non-overlapping with the electron capture energy ranges associated with the lowest vibration states of the molecules. The isotope source is also subjected to electrons having an energy corresponding to the non-overlapping electron capture region whereby the selectively excited molecules preferentially capture electrons and dissociate into negative ions and neutrals. The desired isotope may be in the negative ion product or in the neutral product depending upon the mechanism of dissociation of the particular isotope source used. The dissociation product enriched in the desired isotope is then separated from the reaction system by conventional means. Specifically, 235UF6 is separated from a UF6 mixture by selective excitation followed by dissociative electron capture into 235UF5- and F. 2 figs.
Isotope separation by photoselective dissociative electron capture
Stevens, Charles G. [Pleasanton, CA]
1978-08-29
A method of separating isotopes based on photoselective electron capture dissociation of molecules having an electron capture cross section dependence on the vibrational state of the molecule. A molecular isotope source material is irradiated to selectively excite those molecules containing a desired isotope to a predetermined vibrational state having associated therewith an electron capture energy region substantially non-overlapping with the electron capture energy ranges associated with the lowest vibration states of the molecules. The isotope source is also subjected to electrons having an energy corresponding to the non-overlapping electron capture region whereby the selectively excited molecules preferentially capture electrons and dissociate into negative ions and neutrals. The desired isotope may be in the negative ion product or in the neutral product depending upon the mechanism of dissociation of the particular isotope source used. The dissociation product enriched in the desired isotope is then separated from the reaction system by conventional means. Specifically, ²³⁵UF₆ is separated from a UF₆ mixture by selective excitation followed by dissociative electron capture into ²³⁵UF₅⁻ and F.
NASA Astrophysics Data System (ADS)
Pires, Carlos A. L.; Ribeiro, Andreia F. S.
2017-02-01
We develop an expansion of space-distributed time series into statistically independent uncorrelated subspaces (statistical sources) of low dimension that exhibit enhanced non-Gaussian probability distributions with geometrically simple chosen shapes (projection pursuit rationale). The method relies upon a generalization of principal component analysis, which is optimal for Gaussian mixed signals, and of independent component analysis (ICA), which is optimized to split non-Gaussian scalar sources. The proposed method, supported by information theory concepts and methods, is independent subspace analysis (ISA), which looks for multi-dimensional, intrinsically synergetic subspaces such as dyads (2D) and triads (3D) that are not separable by ICA. Basically, we optimize rotated variables maximizing certain nonlinear correlations (contrast functions) coming from the non-Gaussianity of the joint distribution. As a by-product, it provides nonlinear variable changes 'unfolding' the subspaces into nearly Gaussian scalars of easier post-processing. Moreover, the new variables still work as nonlinear data exploratory indices of the non-Gaussian variability of the analysed climatic and geophysical fields. The method (ISA, followed by nonlinear unfolding) is tested on three datasets. The first comes from the Lorenz '63 three-dimensional chaotic model, showing a clear separation into a non-Gaussian dyad plus an independent scalar. The second is a mixture of propagating waves of random correlated phases in which the emergence of triadic wave resonances imprints a statistical signature in terms of a non-Gaussian, non-separable triad. Finally, the method is applied to the monthly variability of a high-dimensional quasi-geostrophic (QG) atmospheric model of the Northern Hemispheric winter. We find that quite enhanced non-Gaussian dyads of parabolic shape perform much better than the unrotated variables as regards the separation of the model's four centroid regimes (positive and negative phases of the Arctic Oscillation and of the North Atlantic Oscillation). Triads are also likely in the QG model but are of weaker expression than dyads due to the imposed shape and dimension. The study emphasizes the existence of nonlinear dyadic and triadic teleconnections.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, B; Reyes, J; Wong, J
Purpose: To overcome the limitation of CT/CBCT in guiding radiation for soft tissue targets, we developed a bioluminescence tomography (BLT) system for preclinical radiation research. We systematically assessed the system performance in target localization and the ability to resolve two sources in simulations, phantom and in vivo environments. Methods: Multispectral images acquired in a single projection were used for the BLT reconstruction. Simulation studies were conducted for a single spherical source with radius from 0.5 to 3 mm at depths of 3 to 12 mm. The same configuration was also applied for the double-source simulation with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, 2 sources with 3 and 5 mm separation at a depth of 5 mm, or 3 sources in the abdomen were also used to illustrate the in vivo localization capability of the BLT system. Results: Simulation and phantom results illustrate that our BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging in that 1 and 1.7 mm accuracy can be attained for the single-source case at 6 and 9 mm depth, respectively. For the 2-source study, both sources can be distinguished at 3 and 5 mm separations with approximately 1 mm accuracy using 3D BLT but not the 2D bioluminescence image. Conclusion: Our BLT/CBCT system can be potentially applied to localize and resolve targets at a wide range of target sizes, depths and separations. The information provided in this study can be instructive to devise margins for BLT-guided irradiation and suggests that BLT could guide radiation for multiple targets, such as metastases. Drs. John W. Wong and Iulian I. Iordachita receive royalty payment from a licensing agreement between Xstrahl Ltd and Johns Hopkins University.
Source separation on hyperspectral cube applied to dermatology
NASA Astrophysics Data System (ADS)
Mitra, J.; Jolivot, R.; Vabres, P.; Marzani, F. S.
2010-03-01
This paper proposes a method for quantifying the components underlying human skin that are assumed to be responsible for the effective reflectance spectrum of the skin over the visible wavelength range. The method is based on independent component analysis, assuming that the epidermal melanin and the dermal haemoglobin absorbance spectra are independent of each other. The method extracts the source spectra that correspond to the ideal absorbance spectra of melanin and haemoglobin. The noisy melanin spectrum is corrected using a polynomial fit, and the quantifications associated with it are re-estimated. The results produce feasible quantifications of each source component in the examined skin patch.
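As a rough Python sketch of the quantification scheme described above, ICA is applied with wavelengths treated as samples so that the recovered sources are absorbance spectra, the noisier recovered spectrum is smoothed with a polynomial fit, and per-pixel quantities are then re-estimated by least squares against the smoothed spectra. The data layout, polynomial degree, and the decision to smooth both sources are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA   # assumes scikit-learn

def quantify_chromophores(absorbance, poly_degree=6):
    """absorbance: (n_pixels, n_wavelengths) skin absorbance spectra."""
    ica = FastICA(n_components=2, random_state=0)
    # treat wavelengths as samples so the recovered sources are spectra
    spectra = ica.fit_transform(absorbance.T)            # (n_wavelengths, 2)
    wl = np.arange(spectra.shape[0])
    # smooth the noisy source spectra with a polynomial fit over wavelength index
    smoothed = np.column_stack([np.polyval(np.polyfit(wl, s, poly_degree), wl)
                                for s in spectra.T])
    # re-estimate quantifications: least-squares mixing coefficients per pixel
    quant, *_ = np.linalg.lstsq(smoothed, absorbance.T, rcond=None)
    return smoothed.T, quant.T                           # source spectra, (n_pixels, 2)
```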
NASA Astrophysics Data System (ADS)
Jacobson, Abram R.; Shao, Xuan-Min
2001-07-01
The Earth's ionosphere is magnetized by the geomagnetic field and imposes birefringent modulation on VHF radio signals propagating through the ionosphere. Satellites viewing VHF emissions from terrestrial sources receive ordinary and extraordinary modes successively from each broadband pulse emitted by the source. The birefringent intermode frequency separation can be used to determine the value of ƒce cos β, where ƒce is the electron gyrofrequency and β is the angle between the wave vector k and the geomagnetic field B at the point where the VHF ray path intersects the ionosphere. Successive receptions of multiple signals (from the same source) cause variation in ƒce cos β, and from the resulting variation in the signal intermode frequency separation the source location on Earth can be inferred. We test the method with signals emitted by the Los Alamos Portable Pulser and received by the FORTE satellite.
Probing interferometric parallax with interplanetary spacecraft
NASA Astrophysics Data System (ADS)
Rodeghiero, G.; Gini, F.; Marchili, N.; Jain, P.; Ralston, J. P.; Dallacasa, D.; Naletto, G.; Possenti, A.; Barbieri, C.; Franceschini, A.; Zampieri, L.
2017-07-01
We describe an experimental scenario for testing a novel method to measure the distance and proper motion of astronomical sources. The method is based on multi-epoch observations of amplitude or intensity correlations between separate receiving systems. This technique, called interferometric parallax, efficiently exploits phase information that has traditionally been overlooked. The test case we discuss combines amplitude correlations of signals from deep space interplanetary spacecraft with those from distant galactic and extragalactic radio sources, with the goal of estimating the interplanetary spacecraft distance. Interferometric parallax relies on the detection of wavefront curvature effects in signals collected by pairs of separate receiving systems. The method shows promising potential over current techniques when the target is unresolved from the background reference sources. Developments in this field might lead to the construction of an independent, geometrical cosmic distance ladder using a dedicated project and future generation instruments. We present a conceptual overview supported by numerical estimates of its performance applied to a spacecraft orbiting within the Solar System. Simulations support the feasibility of measurements with a simple and time-saving observational scheme using current facilities.
Enantioselective separation of all-E-astaxanthin and its determination in microbial sources.
Grewe, Claudia; Menge, Sieglinde; Griehl, Carola
2007-09-28
A method for the enantioselective separation of all-E-astaxanthin (3,3'-dihydroxy-beta,beta-carotene-4,4'-dione), an important colorant in the feed industry, was developed. Different chiral stationary phases (CSPs), such as Pirkle phases (R,R Ulmo and l-leucine), modified polysaccharides and a beta-cyclodextrin, were investigated for their ability to separate astaxanthin enantiomers. Direct resolution was only achieved employing the Chiralcel OD-RH (cellulose-tris-3,5-dimethylphenyl-carbamate) under reversed phase conditions. The chiral separation of the enantiomeric forms of astaxanthin produced in microalgae and yeasts is reported. The yeast Xanthophyllomyces sp. produces astaxanthin predominantly in the R,R configuration, whereas in the green microalga Scenedesmus sp. astaxanthin is formed primarily in the S,S form. The separation method for the identification of astaxanthin enantiomers is of great interest since astaxanthin is used as a functional food additive in human nutrition. Moreover, the method may be used as a food chain indicator in farmed salmon.
Surface acoustical intensity measurements on a diesel engine
NASA Technical Reports Server (NTRS)
Mcgary, M. C.; Crocker, M. J.
1980-01-01
The use of surface intensity measurements as an alternative to the conventional selective wrapping technique of noise source identification and ranking on diesel engines was investigated. A six-cylinder, in-line, turbocharged, 350 horsepower diesel engine was used. Sound power was measured under anechoic conditions for eight separate parts of the engine at steady state operating conditions using the conventional technique. Sound power measurements were repeated on five separate parts of the engine using the surface intensity method at the same steady state operating conditions. The results were compared by plotting sound power level against frequency and noise source rankings for the two methods.
Rapid determination of 210Po in water samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maxwell, Sherrod L.; Culligan, Brian K.; Hutchison, Jay B.
2013-08-02
A new rapid method for the determination of 210Po in water samples has been developed at the Savannah River National Laboratory (SRNL) that can be used for emergency response or routine water analyses. If a radiological dispersive device (RDD) event or a radiological attack associated with drinking water supplies occurs, there will be an urgent need for rapid analyses of water samples, including drinking water, ground water and other water effluents. Current analytical methods for the assay of 210Po in water samples have typically involved spontaneous auto-deposition of 210Po onto silver or other metal disks followed by counting by alpha spectrometry. The auto-deposition times range from 90 minutes to 24 hours or more, at times with yields that may be less than desirable. If sample interferences are present, decreased yields and degraded alpha spectra can occur due to unpredictable thickening in the deposited layer. Separation methods have focused on the use of Sr Resin, often in combination with 210Pb analysis. A new rapid method for 210Po in water samples has been developed at SRNL that utilizes a rapid calcium phosphate co-precipitation method, separation using DGA Resin (N,N,N',N'-tetraoctyldiglycolamide extractant-coated resin, Eichrom Technologies or Triskem-International), followed by rapid microprecipitation of 210Po using bismuth phosphate for counting by alpha spectrometry. This new method can be performed quickly with excellent removal of interferences, high chemical yields and very good alpha peak resolution, eliminating any potential problems with the alpha source preparation for emergency or routine samples. A rapid sequential separation method to separate 210Po and actinide isotopes was also developed. This new approach, rapid separation with DGA Resin plus microprecipitation for alpha source preparation, is a significant advance in radiochemistry for the rapid determination of 210Po.
Assessing and measuring wetland hydrology
Rosenberry, Donald O.; Hayashi, Masaki; Anderson, James T.; Davis, Craig A.
2013-01-01
Virtually all ecological processes that occur in wetlands are influenced by the water that flows to, from, and within these wetlands. This chapter provides the “how-to” information for quantifying the various source and loss terms associated with wetland hydrology. The chapter is organized from a water-budget perspective, with sections associated with each of the water-budget components that are common in most wetland settings. Methods for quantifying the water contained within the wetland are presented first, followed by discussion of each separate component. Measurement accuracy and sources of error are discussed for each of the methods presented, and a separate section discusses the cumulative error associated with determining a water budget for a wetland. Exercises and field activities will provide hands-on experience that will facilitate greater understanding of these processes.
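A minimal generic statement of the water-budget framework such a chapter is organized around (the symbols here are illustrative, not necessarily those used by the authors) is

\[
\frac{dV}{dt} = P + SW_{in} + GW_{in} - ET - SW_{out} - GW_{out},
\]

where \(V\) is the volume of water stored in the wetland, \(P\) is precipitation, \(ET\) is evapotranspiration, and \(SW\) and \(GW\) denote surface-water and groundwater exchanges; the cumulative error of the budget is then commonly assessed from the residual left after all independently measured terms are summed.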
System and method for treatment of a flue gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spiry, Irina Pavlovna; Wood, Benjamin Rue; Singh, Surinder Prabhjot
A method for treatment of a flue gas involves feeding the flue gas and a lean solvent to an absorber. The method further involves reacting the flue gas with the lean solvent within the absorber to generate a clean flue gas and a rich solvent. The method also involves feeding the clean flue gas from the absorber, and water from a source, to a wash tower to separate a stripped portion of the lean solvent from the clean flue gas, generating a washed clean flue gas and a mixture of the water and the stripped portion of the lean solvent. The method further involves treating at least a portion of the mixture of the water and the stripped portion of the lean solvent via a separation system to separate the water from the stripped portion of the lean solvent.
Systematic study of target localization for bioluminescence tomography guided radiation therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Jingjing; Zhang, Bin; Reyes, Juvenal
Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulations, a tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with sizes from 0.5 to 3 mm radius at depths of 3–12 mm. The same configuration was also applied for the double-source simulation with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at a depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: For the simulation study, approximately 1 mm accuracy can be achieved in localizing the center of mass (CoM) for single-source cases and the grouped CoM for double-source cases. For the case of a 1.5 mm radius source, a common tumor size used in preclinical studies, the simulations show that for all the source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm. Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish the two sources, but BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging in that 1 and 1.7 mm accuracy can be attained for the single-source case at 6 and 9 mm depth, respectively. For the two-source in vivo study, both sources can be distinguished at 3 and 5 mm separations, and approximately 1 mm localization accuracy can also be achieved. Conclusions: This study demonstrated that the multispectral BLT/CBCT system could be potentially applied to localize and resolve multiple sources at a wide range of source sizes, depths, and separations. The average accuracy of localizing the CoM for a single source and the grouped CoM for double sources is approximately 1 mm except for deep-seated targets. The information provided in this study can be instructive to devise treatment margins for BLT-guided irradiation. These results also suggest that the 3D BLT system could guide radiation for situations with multiple targets, such as metastatic tumor models.
Concerning the Video Drift Method to Measure Double Stars
NASA Astrophysics Data System (ADS)
Nugent, Richard L.; Iverson, Ernest W.
2015-05-01
Classical methods to measure the position angles and separations of double stars rely on just a few measurements, either from visual observations or photographic means. Visual and photographic CCD observations are subject to errors from the following sources: misalignments of the eyepiece/camera/Barlow lens/micrometer/focal reducers, systematic errors from uncorrected optical distortions, aberrations from the telescope system, camera tilt, and magnitude and color effects. Conventional video methods rely on calibration doubles and graphically calculating the east-west direction, plus careful choice of select video frames stacked for measurement. Atmospheric motion, on the order of 0.5-1.5 arcseconds, is one of the larger sources of error in any exposure/measurement method. Ideally, if a data set from a short video can be used to derive position angle and separation, with each data set self-calibrating independently of any calibration doubles or star catalogues, this would provide measurements of high systematic accuracy. These aims are achieved by the video drift method first proposed by the authors in 2011. This self-calibrating video method automatically analyzes thousands of measurements from a short video clip.
Simmons, Blake A.; Hill, Vincent R.; Fintschenko, Yolanda; Cummings, Eric B.
2012-09-04
Disclosed is a method for monitoring sources of public water supply for a variety of pathogens by using a combination of ultrafiltration techniques together with dielectrophoretic separation techniques. Because water-borne pathogens, whether present due to "natural" contamination or intentional introduction, would likely be present in drinking water at low concentrations when samples are collected for monitoring or outbreak investigations, an approach is needed to quickly and efficiently concentrate and separate particles such as viruses, bacteria, and parasites in large volumes of water (e.g., 100 L or more) while simultaneously reducing the sample volume to levels sufficient for detecting low concentrations of microbes (e.g., <10 mL). The technique is also designed to screen the separated microbes based on specific conductivity and size.
Microorganism mediated liquid fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Troiano, Richard
Herein disclosed is a method for producing liquid hydrocarbon product, the method comprising disintegrating a hydrocarbon source; pretreating the disintegrated hydrocarbon source; solubilizing the disintegrated hydrocarbon source to form a slurry comprising a reactant molecule of the hydrocarbon source; admixing a biochemical liquor into the slurry, wherein the biochemical liquor comprises at least one conversion enzyme configured to facilitate bond selective photo-fragmentation of said reactant molecule of the hydrocarbon source, to form liquid hydrocarbons via enzyme assisted bond selective photo-fragmentation, wherein said conversion enzyme comprises reactive sites configured to restrict said reactant molecule such that photo-fragmentation favorably targets a preselected internal bond of said reactant molecule; separating the liquid hydrocarbons from the slurry, wherein contaminants remain in the slurry; and enriching the liquid hydrocarbons to form a liquid hydrocarbon product. Various aspects of such method/process are also discussed.
Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors
Pan, Jin; Ma, Boyuan
2018-01-01
This paper focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) estimation and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources, into separated groups and separately estimate the DOA associated with each source. Motivated by joint parameter estimation, we propose to adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are iteratively and alternatively executed to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also realize an optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramer-Rao lower bound is derived for understanding the estimation accuracy and for performance comparison. The proposed method is verified with simulations. PMID:29617323
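A hedged toy illustration (in Python) of the E-step/M-step alternation described above: measurements are softly assigned to emitting sources (E-step) and each source parameter is re-estimated by maximum likelihood (M-step). A scalar Gaussian model with a fixed spread stands in for the per-source likelihood; the rotating two-sensor geometry and the paper's closed-form 2-D DOA formulae are not reproduced.

```python
import numpy as np

def em_sort(measurements, n_sources=2, n_iter=50, sigma=0.1):
    """Group scalar phase-like measurements by source and estimate one parameter each."""
    rng = np.random.default_rng(0)
    mu = rng.choice(measurements, n_sources, replace=False)   # initial source parameters
    for _ in range(n_iter):
        # E-step: responsibility of each source for each measurement
        d2 = (measurements[:, None] - mu[None, :]) ** 2
        resp = np.exp(-d2 / (2 * sigma ** 2))
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: maximum-likelihood (responsibility-weighted mean) update
        mu = (resp * measurements[:, None]).sum(axis=0) / resp.sum(axis=0)
    return mu, resp

# usage: samples from two interleaved emitters with distinct "phase" parameters
data = np.concatenate([0.3 + 0.05 * np.random.randn(100),
                       1.1 + 0.05 * np.random.randn(100)])
mu, resp = em_sort(data)
```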
Maksimovic, Svetolik; Tadic, Vanja; Skala, Dejan; Zizovic, Irena
2017-06-01
Helichrysum italicum presents a valuable source of natural bioactive compounds. In this work, a literature review of terpenes, phenolic compounds, and other less common phytochemicals from H. italicum, with regard to the application of different separation methods, is presented. Data including the extraction/separation methods and experimental conditions applied, the obtained yields, the number of identified compounds, the content of different compound groups, and the analytical techniques applied are shown in corresponding tables. Numerous biological activities of both isolates and individual compounds are emphasized. In addition, the data reported are discussed, and directions for further investigations are proposed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Contributions of immunoaffinity chromatography to deep proteome profiling of human biofluids
Wu, Chaochao; Duan, Jicheng; Liu, Tao; ...
2016-01-12
Human biofluids, especially blood plasma or serum, hold great potential as the sources of candidate biomarkers for various diseases; however, the enormous dynamic range of protein concentrations in biofluids represents a significant analytical challenge for detecting promising low-abundance proteins. Over the last decade, various immunoaffinity chromatographic methods have been developed and routinely applied for separating low-abundance proteins from the high- and moderate-abundance proteins, thus enabling much more effective detection of low-abundance proteins. Herein, we review the advances of immunoaffinity separation methods and their contributions to the proteomic applications in human biofluids. The limitations and future perspectives of immunoaffinity separation methods are also discussed.
Source Separation of Heartbeat Sounds for Effective E-Auscultation
NASA Astrophysics Data System (ADS)
Geethu, R. S.; Krishnakumar, M.; Pramod, K. V.; George, Sudhish N.
2016-03-01
This paper proposes a cost effective solution for improving the effectiveness of e-auscultation. Auscultation is the most difficult skill for a doctor, since it can be acquired only through experience. The heart sound mixtures are captured by placing four sensors at the appropriate auscultation areas on the body. These sound mixtures are separated into their relevant components by a statistical method, independent component analysis. The separated heartbeat sounds can be further processed or stored for future reference. This idea can be used to make a low cost, easy to use portable instrument that will benefit people who live in remote areas and are unable to take advantage of advanced diagnostic methods.
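A minimal Python sketch of the separation stage described above, assuming four synchronously sampled chest-sensor channels stacked into a matrix; filtering, sensor hardware and any alignment steps are outside the scope of this illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA   # assumes scikit-learn

def separate_heart_sounds(mixtures):
    """mixtures: (4, n_samples) array, one row per auscultation-area sensor."""
    ica = FastICA(n_components=4, random_state=0)
    sources = ica.fit_transform(mixtures.T)   # (n_samples, 4) separated components
    return sources.T

# usage: components = separate_heart_sounds(np.vstack([s1, s2, s3, s4]))
```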
Resistive loads powered by separate or by common electrical sources
NASA Technical Reports Server (NTRS)
Appelbaum, J.
1989-01-01
In designing a multiple load electrical system, the designer may wish to compare the performance of two setups: a common electrical source powering all loads, or separate electrical sources powering individual loads. Three types of electrical sources, an ideal voltage source, an ideal current source, and a solar cell source, powering resistive loads were analyzed for their performance in separate and common source systems. A mathematical proof is given for each case, indicating the merit of the separate or common source system. The main conclusions are: (1) identical resistive loads powered by ideal voltage sources perform the same in both system setups, (2) nonidentical resistive loads powered by ideal voltage sources perform the same in both system setups, (3) nonidentical resistive loads powered by ideal current sources have higher performance in separate source systems, and (4) nonidentical resistive loads powered by solar cells have higher performance in a common source system for a wide range of load resistances.
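As a hedged check of conclusion (3), not taken from the report and assuming the common source supplies the sum of the separate source currents, consider two resistive loads \(R_1 \neq R_2\) driven by ideal current sources. Separate sources of current \(I\) each deliver

\[
P_{sep} = I^2 R_1 + I^2 R_2 ,
\]

while a single common source of current \(2I\) feeding the two loads in parallel delivers

\[
P_{com} = (2I)^2 \,\frac{R_1 R_2}{R_1 + R_2} .
\]

Since \((R_1 + R_2)^2 \ge 4 R_1 R_2\), it follows that \(P_{sep} \ge P_{com}\), with equality only for identical loads, consistent with the separate-source system performing better for nonidentical resistive loads.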
Xu, Yi-Fan; Lu, Wenyun; Rabinowitz, Joshua D.
2015-01-15
Liquid chromatography–mass spectrometry (LC-MS) technology allows for rapid quantitation of cellular metabolites, with metabolites identified by mass spectrometry and chromatographic retention time. Recently, with the development of rapid-scanning, high-resolution, high-accuracy mass spectrometers and the desire for high throughput screening, minimal or no chromatographic separation has become increasingly popular. When analyzing complex cellular extracts, however, the lack of chromatographic separation can result in misannotation of structurally related metabolites. Here, we show that, even using electrospray ionization, a soft ionization method, in-source fragmentation generates unwanted byproducts of identical mass to common metabolites. For example, nucleotide triphosphates generate nucleotide diphosphates, and hexose phosphates generate triose phosphates. We also evaluated yeast intracellular metabolite extracts and found more than 20 cases of in-source fragments that mimic common metabolites. Accordingly, chromatographic separation is required for accurate quantitation of many common cellular metabolites.
System identification through nonstationary data using Time-Frequency Blind Source Separation
NASA Astrophysics Data System (ADS)
Guo, Yanlin; Kareem, Ahsan
2016-06-01
Classical output-only system identification (SI) methods are based on the assumption of stationarity of the system response. However, the measured response of buildings and bridges is usually non-stationary due to strong winds (e.g., typhoons and thunderstorms), earthquakes and time-varying vehicle motions. Accordingly, the response data may have time-varying frequency contents and/or overlapping of modal frequencies due to non-stationary colored excitation. This renders traditional methods problematic for modal separation and identification. To address these challenges, a new SI technique based on Time-Frequency Blind Source Separation (TFBSS) is proposed. By selectively utilizing "effective" information in local regions of the time-frequency plane, where only one mode contributes to energy, the proposed technique can successfully identify mode shapes and recover modal responses from non-stationary response data where traditional SI methods often encounter difficulties. This technique can also handle response with closely spaced modes, which is a well-known challenge for the identification of large-scale structures. Based on the separated modal responses, frequency and damping can be easily identified using SI methods based on a single degree of freedom (SDOF) system. In addition to the exclusive advantage of handling non-stationary data and closely spaced modes, the proposed technique also benefits from the absence of end effects and low sensitivity to noise in modal separation. The efficacy of the proposed technique is demonstrated using several simulation-based studies and compared to the popular Second-Order Blind Identification (SOBI) scheme. It is also noted that even some non-stationary response data can be analyzed by the stationary method SOBI. This paper also delineates non-stationary cases where SOBI and the proposed scheme perform comparably and highlights cases where the proposed approach is more advantageous. Finally, the performance of the proposed method is evaluated using the full-scale non-stationary response of a tall building during an earthquake and is found to perform satisfactorily.
The Other Side of Method Bias: The Perils of Distinct Source Research Designs
ERIC Educational Resources Information Center
Kammeyer-Mueller, John; Steel, Piers D. G.; Rubenstein, Alex
2010-01-01
Common source bias has been the focus of much attention. To minimize the problem, researchers have sometimes been advised to take measurements of predictors from one observer and measurements of outcomes from another observer or to use separate occasions of measurement. We propose that these efforts to eliminate biases due to common source…
Efficient growth of HTS films with volatile elements
Siegal, M.P.; Overmyer, D.L.; Dominguez, F.
1998-12-22
A system is disclosed for applying a volatile element-HTS layer, such as Tl-HTS, to a substrate in a multiple zone furnace. The method includes heating at higher temperature, in one zone of the furnace, a substrate and an adjacent first source of Tl-HTS material, to sublimate Tl-oxide from the source to the substrate; and heating at lower temperature, in a separate zone of the furnace, a second source of Tl-oxide to replenish the first source of Tl-oxide from the second source. 3 figs.
A Cabin Air Separator for EVA Oxygen
NASA Technical Reports Server (NTRS)
Graf, John C.
2011-01-01
Presently, the Extra-Vehicular Activities (EVAs) conducted from the Quest Joint Airlock on the International Space Station use high pressure, high purity oxygen that is delivered to the Space Station by the Space Shuttle. When the Space Shuttle retires, a new method of delivering high pressure, high purity oxygen to the High Pressure Gas Tanks (HPGTs) is needed. One method is to use a cabin air separator to sweep oxygen from the cabin air, generate a low pressure/high purity oxygen stream, and compress the oxygen with a multistage mechanical compressor. A main advantage of this type of system is that the existing low pressure oxygen supply infrastructure can be used as the source of cabin oxygen. ISS has two water electrolysis systems that deliver low pressure oxygen to the cabin, as well as chlorate candles and compressed gas tanks on cargo vehicles. Each of these systems can feed low pressure oxygen into the cabin, and any low pressure oxygen source can be used as an on-board source of oxygen. Three different oxygen separator systems were evaluated, and a two-stage Pressure Swing Adsorption system was selected for reasons of technical maturity. Two different compressor designs were subjected to long term testing, and the compressor with better life performance and more favorable oxygen safety characteristics was selected. These technologies have been used as the basis of a design for a flight system located in the Equipment Lock, and taken to the Preliminary Design Review level of maturity. This paper describes the Cabin Air Separator for EVA Oxygen (CASEO) concept, describes the separator and compressor technology trades, highlights key technology risks, and describes the flight hardware concept as presented at the Preliminary Design Review (PDR).
Peterman, Dean R [Idaho Falls, ID; Klaehn, John R [Idaho Falls, ID; Harrup, Mason K [Idaho Falls, ID; Tillotson, Richard D [Moore, ID; Law, Jack D [Pocatello, ID
2010-09-21
Methods of separating actinides from lanthanides are disclosed. A regio-specific/stereo-specific dithiophosphinic acid having organic moieties is provided in an organic solvent that is then contacted with an acidic medium containing an actinide and a lanthanide. The method can extend to separating actinides from one another. Actinides are extracted as a complex with the dithiophosphinic acid. Separation compositions include an aqueous phase, an organic phase, dithiophosphinic acid, and at least one actinide. The compositions may include additional actinides and/or lanthanides. A method of producing a dithiophosphinic acid comprising at least two organic moieties selected from aromatics and alkyls, each moiety having at least one functional group is also disclosed. A source of sulfur is reacted with a halophosphine. An ammonium salt of the dithiophosphinic acid product is precipitated out of the reaction mixture. The precipitated salt is dissolved in ether. The ether is removed to yield the dithiophosphinic acid.
Sources and concentrations of aldehydes and ketones in indoor environments in the UK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crump, D.R.; Gardiner, D.
1989-01-01
Individual aldehydes and ketones can be separated, identified and quantitatively estimated by trapping them as 2,4-dinitrophenylhydrazine (DNPH) derivatives and analyzing the derivatives by HPLC. Appropriate methods and detection limits are reported. Many sources of formaldehyde have been identified by this means, and some are found to emit other aldehydes and ketones. The application of this method to determine the concentration of these compounds in the atmospheres of buildings is described, and the results are compared with those obtained using chromotropic acid or MBTH.
Method of treating waste water
Deininger, J. Paul; Chatfield, Linda K.
1991-01-01
A process of treating water to remove transuranic elements contained therein by adjusting the pH of a transuranic element-containing water source to within the range of about 6.5 to about 14.0, admixing the water source with an alkali or alkaline earth ferrate in an amount sufficient to form a precipitate within the water source, the amount of ferrate effective to reduce the transuranic element concentration in the water source, permitting the precipitate in the admixture to separate and thereby yield a supernatant liquid having a reduced transuranic element concentration, and separating the supernatant liquid having the reduced transuranic element concentration from the admixture is provided. Additionally, a water soluble salt, e.g., a zirconium salt, can be added with the alkali or alkaline earth ferrate in the process to provide greater removal efficiencies. A composition of matter including an alkali or alkaline earth ferrate and a water soluble salt, e.g., a zirconium salt, is also provided.
Rapid fusion method for the determination of Pu, Np, and Am in large soil samples
Maxwell, Sherrod L.; Culligan, Brian; Hutchison, Jay B.; ...
2015-02-14
A new rapid sodium hydroxide fusion method for the preparation of 10-20 g soil samples has been developed by the Savannah River National Laboratory (SRNL). The method enables lower detection limits for plutonium, neptunium, and americium in environmental soil samples. The method also significantly reduces sample processing time and acid fume generation compared to traditional soil digestion techniques using hydrofluoric acid. Ten gram soil aliquots can be ashed and fused using the new method in 1-2 hours, completely dissolving samples, including refractory particles. Pu, Np and Am are separated using stacked 2 mL cartridges of TEVA and DGA Resin and measured using alpha spectrometry. The method can be adapted for measurement by inductively-coupled plasma mass spectrometry (ICP-MS). Two 10 g soil aliquots of fused soil may be combined prior to chromatographic separations to further improve detection limits. Total sample preparation time, including chromatographic separations and alpha spectrometry source preparation, is less than 8 hours.
Spatiotemporal patterns of ERP based on combined ICA-LORETA analysis
NASA Astrophysics Data System (ADS)
Zhang, Jiacai; Guo, Taomei; Xu, Yaqin; Zhao, Xiaojie; Yao, Li
2007-03-01
In contrast to the fMRI methods widely used up to now, this method tries to understand more profoundly how brain systems work under a sentence processing task, and to accurately map the spatiotemporal patterns of activity of large neuronal populations in the human brain from the analysis of ERP data recorded on the scalp. In this study, an event-related brain potential (ERP) paradigm recording the on-line responses to the processing of sentences is chosen as an example. In order to both exploit the millisecond temporal resolution of ERPs and overcome their insensitivity to the cerebral location of ERP sources, we separate these sources in space and time based on a combined method of independent component analysis (ICA) and low-resolution tomography (LORETA) algorithms. ICA blindly separates the input ERP data into a sum of temporally independent and spatially fixed components arising from distinct or overlapping brain or extra-brain sources. The spatial map associated with each ICA component is then analyzed with LORETA to locate its cerebral sources throughout the full brain, according to the assumption that neighboring neurons are simultaneously and synchronously activated. Our results show that the cerebral computation mechanism underlying content word reading is mediated by the orchestrated activity of several spatially distributed brain sources located in the temporal, frontal, and parietal areas, which activate at distinct time intervals and are grouped into different statistically independent components. Thus, the ICA-LORETA analysis provides an encouraging and effective method to study brain dynamics from ERP.
Independent EEG Sources Are Dipolar
Delorme, Arnaud; Palmer, Jason; Onton, Julie; Oostenveld, Robert; Makeig, Scott
2012-01-01
Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition ‘dipolarity’ defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison). PMID:22355308
A study of numerical methods for hyperbolic conservation laws with stiff source terms
NASA Technical Reports Server (NTRS)
Leveque, R. J.; Yee, H. C.
1988-01-01
The proper modeling of nonequilibrium gas dynamics is required in certain regimes of hypersonic flow. For inviscid flow this gives a system of conservation laws coupled with source terms representing the chemistry. Often a wide range of time scales is present in the problem, leading to numerical difficulties as in stiff systems of ordinary differential equations. Stability can be achieved by using implicit methods, but other numerical difficulties are observed. The behavior of typical numerical methods on a simple advection equation with a parameter-dependent source term was studied. Two approaches to incorporate the source term were utilized: MacCormack type predictor-corrector methods with flux limiters, and splitting methods in which the fluid dynamics and chemistry are handled in separate steps. Various comparisons over a wide range of parameter values were made. In the stiff case where the solution contains discontinuities, incorrect numerical propagation speeds are observed with all of the methods considered. This phenomenon is studied and explained.
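A small Python sketch of the splitting approach mentioned above, applied to a model advection equation with a parameter-dependent stiff source term (the cubic source used here is only representative of the class of problems discussed, not necessarily the exact equation studied): the advection step uses first-order upwinding and the source step uses implicit Euler with a few Newton iterations per cell.

```python
import numpy as np

def upwind_step(u, dt, dx):
    # linear advection with unit speed and periodic boundary conditions
    return u - dt / dx * (u - np.roll(u, 1))

def source_step(u, dt, mu, newton_iters=8):
    # implicit Euler for u' = s(u) with s(u) = -mu*u*(u-1)*(u-0.5), one Newton solve per cell
    v = u.copy()
    for _ in range(newton_iters):
        s = -mu * v * (v - 1.0) * (v - 0.5)
        ds = -mu * (3.0 * v ** 2 - 3.0 * v + 0.5)
        v -= (v - u - dt * s) / (1.0 - dt * ds)
    return v

def solve(mu=1000.0, nx=200, cfl=0.9, t_end=0.3):
    dx = 1.0 / nx
    dt = cfl * dx
    x = (np.arange(nx) + 0.5) * dx
    u = np.where(x < 0.3, 1.0, 0.0)        # step initial data
    t = 0.0
    while t < t_end:
        u = upwind_step(u, dt, dx)          # fluid-dynamics (advection) step
        u = source_step(u, dt, mu)          # stiff chemistry-like source step
        t += dt
    return x, u

x, u = solve()
# For large mu the computed front can travel at a wrong speed even though the
# scheme is stable, illustrating the spurious propagation noted in the abstract.
```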
Laser-assisted isotope separation of tritium
Herman, Irving P.; Marling, Jack B.
1983-01-01
Methods for laser-assisted isotope separation of tritium, using infrared multiple photon dissociation of tritium-bearing products in the gas phase. One such process involves the steps of (1) catalytic exchange of a deuterium-bearing molecule XYD with tritiated water DTO from sources such as a heavy water fission reactor, to produce the tritium-bearing working molecules XYT and (2) photoselective dissociation of XYT to form a tritium-rich product. By an analogous procedure, tritium is separated from tritium-bearing materials that contain predominately hydrogen such as a light water coolant from fission or fusion reactors.
Samadi, Samareh; Amini, Ladan; Cosandier-Rimélé, Delphine; Soltanian-Zadeh, Hamid; Jutten, Christian
2013-01-01
In this paper, we present a fast method to extract the sources related to the interictal epileptiform state. The method is based on generalized eigenvalue decomposition using two correlation matrices estimated during: 1) periods including interictal epileptiform discharges (IEDs), as a reference activation model, and 2) periods excluding IEDs or abnormal physiological signals, as background activity. After extracting the sources most similar to the reference or IED state, the IED regions are estimated using multiobjective optimization. The method is evaluated using both realistic simulated data and actual intracerebral electroencephalography recordings of patients suffering from focal epilepsy. These patients are seizure-free after resective surgery. Quantitative comparisons of the proposed IED regions with the ictal onset zones visually inspected by the epileptologist and with another method of identification of IED regions reveal good performance. PMID:23428609
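The core extraction step described above can be sketched in Python as a generalized eigenvalue decomposition of two covariance/correlation matrices, one estimated on reference (IED) epochs and one on background epochs; epoch selection, channel counts and the subsequent multiobjective region estimation are not addressed here, and the variable names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def gevd_ied_filters(x_ied, x_bkg, n_sources=3):
    """x_ied, x_bkg: (n_channels, n_samples) arrays for IED and background periods."""
    r_ied = np.cov(x_ied)          # correlation matrix of the reference (IED) state
    r_bkg = np.cov(x_bkg)          # correlation matrix of background activity
    # Solve r_ied w = lambda r_bkg w; eigenvectors with the largest eigenvalues
    # give spatial filters whose outputs are most similar to the IED state.
    evals, evecs = eigh(r_ied, r_bkg)
    order = np.argsort(evals)[::-1]
    return evecs[:, order[:n_sources]], evals[order]

# usage: w, lam = gevd_ied_filters(x_ied, x_bkg); ied_sources = w.T @ x_full
```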
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanemoto, S.; Andoh, Y.; Sandoz, S.A.
1984-10-01
A method for evaluating reactor stability in boiling water reactors has been developed. The method is based on multivariate autoregressive (M-AR) modeling of steady-state neutron and process noise signals. In this method, two kinds of power spectral densities (PSDs), for the measured neutron signal and for the corresponding noise source signal, are separately identified by the M-AR modeling. The closed- and open-loop stability parameters are evaluated from these PSDs. The method is applied to actual plant noise data that were measured together with artificial perturbation test data. Stability parameters identified from noise data are compared to those from perturbation test data, and it is shown that both results are in good agreement. In addition to these stability estimations, driving noise sources for the neutron signal are evaluated by the M-AR modeling. Contributions from void, core flow, and pressure noise sources are quantitatively evaluated, and the void noise source is shown to be the most dominant.
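One way to realize the M-AR step described above in Python is to fit a vector autoregressive model to the steady-state noise records and evaluate the model power spectral density S(f) = H(f) Sigma H(f)^H from the fitted coefficients; the statsmodels VAR class, the model order, and the channel layout are assumptions for illustration, and the closed/open-loop decomposition of the paper is not reproduced.

```python
import numpy as np
from statsmodels.tsa.api import VAR   # assumes statsmodels

def mar_psd(data, order=10, fs=1.0, nfreq=256):
    """data: (n_samples, n_channels) steady-state neutron and process noise signals."""
    res = VAR(data).fit(order)
    coefs = res.coefs                  # (order, k, k) AR coefficient matrices
    sigma = res.sigma_u                # covariance of the driving noise sources
    k = data.shape[1]
    freqs = np.linspace(0.0, fs / 2.0, nfreq)
    psd = np.empty((nfreq, k, k), dtype=complex)
    for i, f in enumerate(freqs):
        a_f = np.eye(k, dtype=complex)
        for lag in range(coefs.shape[0]):
            a_f -= coefs[lag] * np.exp(-2j * np.pi * f * (lag + 1) / fs)
        h = np.linalg.inv(a_f)         # transfer function H(f) of the fitted model
        psd[i] = h @ sigma @ h.conj().T / fs
    return freqs, psd
```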
Status of flow separation prediction in liquid propellant rocket nozzles
NASA Technical Reports Server (NTRS)
Schmucker, R. H.
1974-01-01
Flow separation which plays an important role in the design of a rocket engine nozzle is discussed. For a given ambient pressure, the condition of no flow separation limits the area ratio and, therefore, the vacuum performance. Avoidance of performance loss due to area ratio limitation requires a correct prediction of the flow separation conditions. To provide a better understanding of the flow separation process, the principal behavior of flow separation in a supersonic overexpanded rocket nozzle is described. The hot firing separation tests from various sources are summarized, and the applicability and accuracy of the measurements are described. A comparison of the different data points allows an evaluation of the parameters that affect flow separation. The pertinent flow separation predicting methods, which are divided into theoretical and empirical correlations, are summarized and the numerical results are compared with the experimental points.
Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li
2009-02-01
Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The well accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, which makes sICA difficult to apply to data in which there are interdependent sources and confounding factors. This interdependency can arise, for instance, from fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its use as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer data and real resting-state fMRI data. Both simulated and real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than the pure model-based method when estimating activation induced by each task as well as by both tasks.
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
Californium purification and electrodeposition
Burns, Jonathan D.; Van Cleve, Shelley M.; Smith, Edward Hamilton; ...
2014-11-30
The staff at the Radiochemical Engineering Development Center, located at Oak Ridge National Laboratory, produced a 6.3 ± 0.4 GBq (1.7 ± 0.1 Ci) 252Cf source for the Californium Rare Isotope Breeder Upgrade (CARIBU) project at Argonne National Laboratory's Argonne Tandem Linac Accelerator System. The source was produced by electrodeposition of a 252Cf sample onto a stainless steel substrate, which required material free from excess mass for efficient deposition. The resulting deposition was the largest reported 252Cf electrodeposition source ever produced. Several different chromatographic purification methods were investigated to determine which would be most effective for final purification of the feed material used for the CARIBU source. The separation of lanthanides from the Cf was of special concern. Furthermore, the separation, using 145Sm, 153Gd, and 249Cf as tracers, was investigated using BioRad AG 50X8 in α-hydroxyisobutyric acid, Eichrom LN resin in both HNO3 and HCl, and Eichrom TEVA resin in NH4SCN. The TEVA NH4SCN system was found to completely separate 145Sm and 153Gd from 249Cf and was adopted into the purification process used in purifying the 252Cf.
Three-Dimensional Passive-Source Reverse-Time Migration of Converted Waves: The Method
NASA Astrophysics Data System (ADS)
Li, Jiahang; Shen, Yang; Zhang, Wei
2018-02-01
At seismic discontinuities in the crust and mantle, part of the compressional wave energy converts to shear waves, and vice versa. These converted waves have been widely used in receiver function (RF) studies to image discontinuity structures in the Earth. While generally successful, the conventional RF method has its limitations and is suited mostly to flat or gently dipping structures. Among the efforts to overcome the limitations of the conventional RF method is the development of the wave-theory-based, passive-source reverse-time migration (PS-RTM) for imaging complex seismic discontinuities and scatterers. To date, PS-RTM has been implemented only in 2D in Cartesian coordinates for local problems and thus has limited applicability. In this paper, we introduce a 3D PS-RTM approach in spherical coordinates, which is better suited for regional and global problems. New computational procedures are developed to reduce artifacts and enhance migrated images, including back-propagating the main arrival and the coda containing the converted waves separately, using a modified Helmholtz decomposition operator to separate the P and S modes in the back-propagated wavefields, and applying an imaging condition that maintains a consistent polarity for a given velocity contrast. Our new approach allows us to use migration velocity models with realistic velocity discontinuities, improving the accuracy of the migrated images. We present several synthetic experiments to demonstrate the method, using regional and teleseismic sources. The results show that both regional and teleseismic sources can illuminate complex structures and that this method is well suited for imaging dipping interfaces and sharp lateral changes in discontinuity structures.
Grinding and cooking dry-mill germ to optimize aqueous enzymatic oil extraction
USDA-ARS?s Scientific Manuscript database
The many recent dry grind plants that convert corn to ethanol are potential sources of substantial amounts of corn oil. This report describes an aqueous enzymatic extraction (AEE) method to separate oil from dry-mill corn germ (DMG). The method is an extension of AEE previously developed for wet...
Magnetic anomalies in east Pacific using MAGSAT data
NASA Technical Reports Server (NTRS)
Harrison, C. G. A. (Principal Investigator)
1983-01-01
Methods for solving problems encountered in separating the core field from the crustal field are summarized as well as those methods developed for inverting total magnetic field data to obtain source functions for oceanic areas. Accounting for magnetization contrasts and the magnetization values measured in rocks of marine origin are also discussed.
Nanoscale wicking methods and devices
NASA Technical Reports Server (NTRS)
Zhou, Jijie (Inventor); Bronikowski, Michael (Inventor); Noca, Flavio (Inventor); Sansom, Elijah B. (Inventor)
2011-01-01
A fluid transport method and fluid transport device are disclosed. Nanoscale fibers disposed in a patterned configuration allow transport of a fluid in absence of an external power source. The device may include two or more fluid transport components having different fluid transport efficiencies. The components may be separated by additional fluid transport components, to control fluid flow.
Solar quiet day ionospheric source current in the West African region.
Obiekezie, Theresa N; Okeke, Francisca N
2013-05-01
The Solar Quiet (Sq) day source current was calculated using the magnetic data obtained from a chain of 10 magnetotelluric stations installed in the African sector during the French participation in the International Equatorial Electrojet Year (IEEY) experiment in Africa. The components of the geomagnetic field recorded at the stations from January to December 1993 during the experiment were separated into the source and induced components of Sq using the Spherical Harmonic Analysis (SHA) method. The range of the source current was calculated, and this enabled the viewing of a full year's change in the source current system of Sq.
Unmixing Magnetic Hysteresis Loops
NASA Astrophysics Data System (ADS)
Heslop, D.; Roberts, A. P.
2012-04-01
Magnetic hysteresis loops provide important information in rock and environmental magnetic studies. Natural samples often contain an assemblage of magnetic particles composed of components with different origins. Each component potentially carries important environmental information. Hysteresis loops, however, provide information concerning the bulk magnetic assemblage, which makes it difficult to isolate the specific contributions from different sources. For complex mineral assemblages an unmixing strategy with which to separate hysteresis loops into their component parts is therefore essential. Previous methods to unmix hysteresis data have aimed at separating individual loops into their constituent parts using libraries of type-curves thought to correspond to specific mineral types. We demonstrate an alternative approach, which rather than decomposing a single loop into monomineralic contributions, examines a collection of loops to determine their constituent source materials. These source materials may themselves be mineral mixtures, but they provide a genetically meaningful decomposition of a magnetic assemblage in terms of the processes that controlled its formation. We show how an empirically derived hysteresis mixing space can be created, without resorting to type-curves, based on the co-variation within a collection of measured loops. Physically realistic end-members, which respect the expected behaviour and symmetries of hysteresis loops, can then be extracted from the mixing space. These end-members allow the measured loops to be described as a combination of invariant parts that are assumed to represent the different sources in the mixing model. Particular attention is paid to model selection and estimating the complexity of the mixing model, specifically, how many end-members should be included. We demonstrate application of this approach using lake sediments from Butte Valley, northern California. Our method successfully separates the hysteresis loops into sources with a variety of terrigenous and authigenic origins.
Quantum Theory of Superresolution for Incoherent Optical Imaging
NASA Astrophysics Data System (ADS)
Tsang, Mankei
Rayleigh's criterion for resolving two incoherent point sources has been the most influential measure of optical imaging resolution for over a century. In the context of statistical image processing, violation of the criterion is especially detrimental to the estimation of the separation between the sources, and modern far-field superresolution techniques rely on suppressing the emission of close sources to enhance the localization precision. Using quantum optics, quantum metrology, and statistical analysis, here we show that, even if two close incoherent sources emit simultaneously, measurements with linear optics and photon counting can estimate their separation from the far field almost as precisely as conventional methods do for isolated sources, rendering Rayleigh's criterion irrelevant to the problem. Our results demonstrate that superresolution can be achieved not only for fluorophores but also for stars. Recent progress in generalizing our theory for multiple sources and spectroscopy will also be discussed. This work is supported by the Singapore National Research Foundation under NRF Grant No. NRF-NRFF2011-07 and the Singapore Ministry of Education Academic Research Fund Tier 1 Project R-263-000-C06-112.
NASA Astrophysics Data System (ADS)
Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza
2012-12-01
In this paper, speech-music separation using blind source separation is discussed. The separation algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. This requires estimating the score function from samples of the observed signals (a combination of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian-mixture-based kernel density estimation method. Experimental results on speech-music separation, compared with a separation algorithm based on the minimum mean square error estimator, indicate that the presented algorithm achieves better performance and shorter processing time.
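As a rough illustration of the natural-gradient update that underlies mutual-information minimization, the sketch below uses a fixed tanh nonlinearity in place of the Gaussian-mixture kernel density score estimator described in the abstract; the learning rate, iteration count and stand-in score function are assumptions, not the authors' settings.

```python
import numpy as np

def natural_gradient_ica(X, n_iter=500, lr=0.01, seed=0):
    """X: (n_sources, n_samples) zero-mean mixed observations (e.g. speech + music)."""
    rng = np.random.default_rng(seed)
    n, T = X.shape
    W = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # initial unmixing matrix
    for _ in range(n_iter):
        Y = W @ X                                        # current source estimates
        phi = np.tanh(Y)                                 # stand-in score function
        W += lr * (np.eye(n) - phi @ Y.T / T) @ W        # natural-gradient step
    return W @ X, W
```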
NASA Astrophysics Data System (ADS)
Imfeld, A.; Ouellet, A.; Gelinas, Y.
2016-12-01
Crude oil and petroleum products are continually being introduced into the environment during transportation, production, consumption and storage. Source identification of these organic contaminants proves challenging due to a variety of factors: samples tend to be convoluted, compounds need to be separated from an unresolved complex mixture of highly altered aliphatic and aromatic compounds, and chemical composition and biomarker distributions can be altered by weathering, aging, and degradation processes. The aim of our research is to optimize a molecular and isotopic (δ13C, δ2H) method to fingerprint and identify petroleum contaminants in soil and sediment matrices, and to trace the temporal and spatial extent of the contamination event. This method includes the extraction, separation and analysis of the petroleum-derived hydrocarbons. Sample extraction and separation are achieved using sonication, column chromatography and urea adduction. Compound identification and molecular/isotopic fingerprinting are obtained by gas chromatography with flame ionization (GC-FID) and mass spectrometric (GC-MS) detection, as well as gas chromatography coupled to an isotope ratio mass spectrometer (GC-IRMS). This method will be used to assist the Centre d'Expertise en Analyse Environnementale du Québec in determining the nature, sources and timing of contamination events as well as in investigating residual contamination involving petroleum products.
An Operationally Simple Method for Separating the Rare-Earth Elements Neodymium and Dysprosium.
Bogart, Justin A; Lippincott, Connor A; Carroll, Patrick J; Schelter, Eric J
2015-07-06
Rare-earth metals are critical components of electronic materials and permanent magnets. Recycling of consumer materials is a promising new source of rare earths. To incentivize recycling there is a clear need for simple methods for targeted separations of mixtures of rare-earth metal salts. Metal complexes of a tripodal nitroxide ligand, [{2-(tBuNO)C6H4CH2}3N]3- (TriNOx3-), feature a size-sensitive aperture formed of its three η2-(N,O) ligand arms. Exposure of metal cations in the aperture induces a self-associative equilibrium comprising [M(TriNOx)thf]/[M(TriNOx)]2 (M = rare-earth metal). Differences in the equilibrium constants (Keq) for early and late metals enable simple Nd/Dy separations through leaching, with a separation ratio SNd/Dy = 359. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Buttler, W. T.; Hixson, R. S.; King, N. S. P.; Olson, R. T.; Rigg, P. A.; Zellner, M. B.; Routley, N.; Rimmer, A.
2007-04-01
The authors consider a mathematical method to separate and determine the amount of ejecta produced in a second-shock material-fragmentation process. The technique is theoretical and assumes that a material undergoing a shock release at a vacuum interface ejects particulate material or fragments as the initial shock unloads and reflects at the vacuum-surface interface. In this case it is thought that the reflected shock may reflect again at the source of the shock and return to the vacuum-surface interface and eject another amount of fragments or particulate material.
Thibodeau, C; Monette, F; Glaus, M; Laflamme, C B
2011-01-01
The black water and grey water source-separation sanitation system aims at efficient use of energy (biogas), water and nutrients, but currently lacks evidence of economic viability to be considered a credible alternative to the conventional system. This study intends to demonstrate economic viability, identify the main cost contributors and assess critical influencing factors. A technico-economic model was built based on a new neighbourhood in a Canadian context. Three implementation scales of the source-separation system are defined: 500, 5,000 and 50,000 inhabitants. The results show that the source-separation system is 33% to 118% more costly than the conventional system, with the larger cost differential occurring at the smaller implementation scales. A sensitivity analysis demonstrates that reducing the vacuum toilet flow from 1.0 to 0.25 L/flush decreases the source-separation system cost by 23 to 27%. It also shows that high resource costs can be beneficial or unfavourable to the source-separation system depending on whether the vacuum toilet flow is low or normal. Therefore, the future of this configuration of the source-separation system lies mainly in vacuum toilet flow reduction or the introduction of new efficient effluent volume reduction processes (e.g. reverse osmosis).
Removal of EMG and ECG artifacts from EEG based on wavelet transform and ICA.
Zhou, Weidong; Gotman, Jean
2004-01-01
In this study, the methods of wavelet threshold de-noising and independent component analysis (ICA) are introduced. ICA is a novel signal processing technique based on higher-order statistics, and is used to separate independent components from measurements. The extended ICA algorithm does not need to calculate the higher-order statistics, converges fast, and can be used to separate sub-Gaussian and super-Gaussian sources. A pre-whitening procedure is performed to de-correlate the mixed signals before extracting sources. The experimental results indicate that electromyogram (EMG) and electrocardiogram (ECG) artifacts in the electroencephalogram (EEG) can be removed by a combination of wavelet threshold de-noising and ICA.
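A minimal sketch of the wavelet-plus-ICA idea follows, assuming PyWavelets and scikit-learn are available; FastICA is substituted for the extended ICA algorithm mentioned above purely for illustration, and the universal soft threshold is one common choice rather than the authors' exact setting.

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate from the finest scale
    thr = sigma * np.sqrt(2 * np.log(len(signal)))            # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def clean_eeg(eeg, artifact_components):
    """eeg: (n_channels, n_samples); artifact_components: indices of ICs judged to be EMG/ECG."""
    denoised = np.vstack([wavelet_denoise(ch) for ch in eeg])
    ica = FastICA(random_state=0)                             # whitening is performed internally
    sources = ica.fit_transform(denoised.T)                   # (n_samples, n_components)
    sources[:, artifact_components] = 0.0                     # discard artifact components
    return ica.inverse_transform(sources).T                   # reconstructed, artifact-reduced EEG
```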
A new DOD and DOA estimation method for MIMO radar
NASA Astrophysics Data System (ADS)
Gong, Jian; Lou, Shuntian; Guo, Yiduo
2018-04-01
The battlefield electromagnetic environment is becoming more and more complex, and MIMO radar will inevitably be affected by coherent and non-stationary noise. To solve this problem, an angle estimation method based on the oblique projection operator and Toeplitz matrix reconstruction is proposed. Through Toeplitz matrix reconstruction, non-stationary noise is transformed into Gaussian white noise, and then the oblique projection operator is used to separate independent and correlated sources. Finally, simulations are carried out to verify the performance of the proposed algorithm in terms of angle estimation performance and source overload.
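For illustration only, the fragment below sketches one common form of Toeplitz matrix reconstruction: replacing each diagonal of a sample covariance matrix by its average, which smooths non-stationary noise contributions toward a stationary structure. The oblique projection step of the proposed method is not shown, and the real-valued treatment is an assumption.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_reconstruct(R):
    """R: (n, n) sample covariance matrix of the array outputs."""
    n = R.shape[0]
    col = np.array([np.diag(R, -k).mean() for k in range(n)])   # averaged lower diagonals
    row = np.array([np.diag(R, k).mean() for k in range(n)])    # averaged upper diagonals
    return toeplitz(col, row)                                    # reconstructed Toeplitz matrix
```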
2015-10-30
pressure values onto the SD card. The addition of free and open-source Arduino libraries allowed for the seamless integration of the shield into the...alert the user when replacing the separator is necessary. Methods: A sensor was built to measure and record differential pressure values within the...from the transducers during simulated blockages were transformed into pressure values using linear regression equations from the calibration data
Contributions of immunoaffinity chromatography to deep proteome profiling of human biofluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Chaochao; Duan, Jicheng; Liu, Tao
2016-05-01
Human biofluids, especially blood plasma or serum, hold great potential as the sources of potential biomarkers for various diseases; however, the enormous dynamic range of protein concentrations in biofluids represents a significant analytical challenge to detect promising low-abundance protein biomarkers. Over the last decade, various immunoaffinity chromatographic methods have been developed and routinely applied for separating low-abundance proteins from the high and moderate-abundance proteins, thus enabling more effective detection of low-abundance proteins. Herein, we review the advances of immunoaffinity separation methods and their contributions to the proteomics applications of different human biofluids. The limitations and future perspective of immunoaffinity separation methods are also discussed.
Common source-multiple load vs. separate source-individual load photovoltaic system
NASA Technical Reports Server (NTRS)
Appelbaum, Joseph
1989-01-01
A comparison of system performance is made for two possible system setups: (1) individual loads powered by separate solar cell sources; and (2) multiple loads powered by a common solar cell source. A proof for resistive loads is given that shows the advantage of a common source over a separate source photovoltaic system for a large range of loads. For identical loads, both systems perform the same.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gencaga, Deniz; Knuth, Kevin H.; Carbon, Duane F.
Understanding the origins of life has been one of the greatest dreams throughout history. It is now known that star-forming regions contain complex organic molecules, known as Polycyclic Aromatic Hydrocarbons (PAHs), each of which has particular infrared spectral characteristics. By understanding which PAH species are found in specific star-forming regions, we can better understand the biochemistry that takes place in interstellar clouds. Identifying and classifying PAHs is not an easy task: we can only observe a single superposition of PAH spectra at any given astrophysical site, with the PAH species perhaps numbering in the hundreds or even thousands. This is a challenging source separation problem since we have only one observation composed of numerous mixed sources. However, it is made easier with the help of a library of hundreds of PAH spectra. In order to separate PAH molecules from their mixture, we need to identify the specific species and their unique concentrations that would provide the given mixture. We develop a Bayesian approach for this problem where sources are separated from their mixture by Metropolis Hastings algorithm. Separated PAH concentrations are provided with their error bars, illustrating the uncertainties involved in the estimation process. The approach is demonstrated on synthetic spectral mixtures using spectral resolutions from the Infrared Space Observatory (ISO). Performance of the method is tested for different noise levels.
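The fragment below sketches the core sampling idea, assuming a library matrix of PAH spectra, a Gaussian noise model, and a flat non-negativity prior; the noise level, step size, and priors are placeholders rather than the values used in the published work.

```python
import numpy as np

def mh_unmix(y, library, sigma=0.01, n_steps=20000, step=0.005, seed=0):
    """y: (n_wavelengths,) mixed spectrum; library: (n_wavelengths, n_species) PAH spectra."""
    rng = np.random.default_rng(seed)
    n_species = library.shape[1]

    def log_post(conc):
        if np.any(conc < 0):                       # non-negativity prior on concentrations
            return -np.inf
        resid = y - library @ conc
        return -0.5 * np.sum(resid ** 2) / sigma ** 2

    c = np.full(n_species, 1.0 / n_species)        # initial guess
    lp = log_post(c)
    samples = []
    for _ in range(n_steps):
        proposal = c + rng.normal(scale=step, size=n_species)
        lp_new = log_post(proposal)
        if np.log(rng.random()) < lp_new - lp:     # Metropolis-Hastings accept/reject
            c, lp = proposal, lp_new
        samples.append(c.copy())
    samples = np.array(samples[n_steps // 2:])     # discard burn-in
    return samples.mean(axis=0), samples.std(axis=0)   # concentration estimates with error bars
```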
Rifai Chai; Naik, Ganesh R; Tran, Yvonne; Sai Ho Ling; Craig, Ashley; Nguyen, Hung T
2015-08-01
An electroencephalography (EEG)-based countermeasure device could be used for fatigue detection during driving. This paper explores the classification of fatigue and alert states using power spectral density (PSD) as a feature extractor and a fuzzy swarm-based artificial neural network (ANN) as a classifier. Independent component analysis by entropy rate bound minimization (ICA-ERBM) is investigated as a novel source separation technique for fatigue classification using EEG analysis. A comparison of the classification accuracy with and without the source separator is presented. Classification performance based on 43 participants without the inclusion of the source separator resulted in an overall sensitivity of 71.67%, a specificity of 75.63% and an accuracy of 73.65%. However, these results were improved after the inclusion of a source separator module, resulting in an overall sensitivity of 78.16%, a specificity of 79.60% and an accuracy of 78.88% (p < 0.05).
Bulk and surface event identification in p-type germanium detectors
NASA Astrophysics Data System (ADS)
Yang, L. T.; Li, H. B.; Wong, H. T.; Agartioglu, M.; Chen, J. H.; Jia, L. P.; Jiang, H.; Li, J.; Lin, F. K.; Lin, S. T.; Liu, S. K.; Ma, J. L.; Sevda, B.; Sharma, V.; Singh, L.; Singh, M. K.; Singh, M. K.; Soma, A. K.; Sonay, A.; Yang, S. W.; Wang, L.; Wang, Q.; Yue, Q.; Zhao, W.
2018-04-01
P-type point-contact germanium detectors have been adopted for light dark matter WIMP searches and studies of low-energy neutrino physics. These detectors exhibit anomalous behavior for events located in the surface layer. The previous spectral shape method of identifying these surface events among the bulk signals relies on spectral shape assumptions and the use of external calibration sources. We report an improved method of separating them by taking the ratios among different categories of in situ event samples as calibration sources. Data from the CDEX-1 and TEXONO experiments are re-examined using the ratio method. Results are shown to be consistent with the spectral shape method.
Sekihara, Kensuke; Adachi, Yoshiaki; Kubota, Hiroshi K; Cai, Chang; Nagarajan, Srikantan S
2018-06-01
Magnetoencephalography (MEG) has a well-recognized weakness at detecting deeper brain activities. This paper proposes a novel algorithm for selective detection of deep sources by suppressing interference signals from superficial sources in MEG measurements. The proposed algorithm combines the beamspace preprocessing method with the dual signal space projection (DSSP) interference suppression method. A prerequisite of the proposed algorithm is prior knowledge of the location of the deep sources. The proposed algorithm first derives the basis vectors that span a local region just covering the locations of the deep sources. It then estimates the time-domain signal subspace of the superficial sources by using the projector composed of these basis vectors. Signals from the deep sources are extracted by projecting the row space of the data matrix onto the direction orthogonal to the signal subspace of the superficial sources. Compared with the previously proposed beamspace signal space separation (SSS) method, the proposed algorithm is capable of suppressing much stronger interference from superficial sources. This capability is demonstrated in our computer simulation as well as experiments using phantom data. The proposed bDSSP algorithm can be a powerful tool in studies of physiological functions of midbrain and deep brain structures.
NASA Astrophysics Data System (ADS)
Sekihara, Kensuke; Adachi, Yoshiaki; Kubota, Hiroshi K.; Cai, Chang; Nagarajan, Srikantan S.
2018-06-01
Objective. Magnetoencephalography (MEG) has a well-recognized weakness at detecting deeper brain activities. This paper proposes a novel algorithm for selective detection of deep sources by suppressing interference signals from superficial sources in MEG measurements. Approach. The proposed algorithm combines the beamspace preprocessing method with the dual signal space projection (DSSP) interference suppression method. A prerequisite of the proposed algorithm is prior knowledge of the location of the deep sources. The proposed algorithm first derives the basis vectors that span a local region just covering the locations of the deep sources. It then estimates the time-domain signal subspace of the superficial sources by using the projector composed of these basis vectors. Signals from the deep sources are extracted by projecting the row space of the data matrix onto the direction orthogonal to the signal subspace of the superficial sources. Main results. Compared with the previously proposed beamspace signal space separation (SSS) method, the proposed algorithm is capable of suppressing much stronger interference from superficial sources. This capability is demonstrated in our computer simulation as well as experiments using phantom data. Significance. The proposed bDSSP algorithm can be a powerful tool in studies of physiological functions of midbrain and deep brain structures.
NASA Technical Reports Server (NTRS)
Hisamoto, Chuck (Inventor); Arzoumanian, Zaven (Inventor); Sheikh, Suneel I. (Inventor)
2015-01-01
A method and system for spacecraft navigation using distant celestial gamma-ray bursts which offer detectable, bright, high-energy events that provide well-defined characteristics conducive to accurate time-alignment among spatially separated spacecraft. Utilizing assemblages of photons from distant gamma-ray bursts, relative range between two spacecraft can be accurately computed along the direction to each burst's source based upon the difference in arrival time of the burst emission at each spacecraft's location. Correlation methods used to time-align the high-energy burst profiles are provided. The spacecraft navigation may be carried out autonomously or in a central control mode of operation.
Bogart, Justin A.; Cole, Bren E.; Boreen, Michael A.; Lippincott, Connor A.; Manor, Brian C.; Carroll, Patrick J.; Schelter, Eric J.
2016-01-01
Rare earth (RE) metals are critical components of electronic materials and permanent magnets. Recycling of consumer materials is a promising new source of REs. To incentivize recycling, there is a clear need for the development of simple methods for targeted separations of mixtures of RE metal salts. Metal complexes of a tripodal hydroxylaminato ligand, TriNOx3–, featured a size-sensitive aperture formed of its three η2-(N,O) ligand arms. Exposure of cations in the aperture induced a self-associative equilibrium comprising RE(TriNOx)THF and [RE(TriNOx)]2 species. Differences in the equilibrium constants Kdimer for early and late metals enabled simple separations through leaching. Separations were performed on RE1/RE2 mixtures, where RE1 = La–Sm and RE2 = Gd–Lu, with emphasis on Eu/Y separations for potential applications in the recycling of phosphor waste from compact fluorescent light bulbs. Using the leaching method, separation factors approaching 2,000 were obtained for early–late RE combinations. Following solvent optimization, >95% pure samples of Eu were obtained with a 67% recovery for the technologically relevant Eu/Y separation. PMID:27956636
Bogart, Justin A; Cole, Bren E; Boreen, Michael A; Lippincott, Connor A; Manor, Brian C; Carroll, Patrick J; Schelter, Eric J
2016-12-27
Rare earth (RE) metals are critical components of electronic materials and permanent magnets. Recycling of consumer materials is a promising new source of REs. To incentivize recycling, there is a clear need for the development of simple methods for targeted separations of mixtures of RE metal salts. Metal complexes of a tripodal hydroxylaminato ligand, TriNOx3-, featured a size-sensitive aperture formed of its three η2-(N,O) ligand arms. Exposure of cations in the aperture induced a self-associative equilibrium comprising RE(TriNOx)THF and [RE(TriNOx)]2 species. Differences in the equilibrium constants Kdimer for early and late metals enabled simple separations through leaching. Separations were performed on RE1/RE2 mixtures, where RE1 = La-Sm and RE2 = Gd-Lu, with emphasis on Eu/Y separations for potential applications in the recycling of phosphor waste from compact fluorescent light bulbs. Using the leaching method, separation factors approaching 2,000 were obtained for early-late RE combinations. Following solvent optimization, >95% pure samples of Eu were obtained with a 67% recovery for the technologically relevant Eu/Y separation.
Informed Source Separation: A Bayesian Tutorial
NASA Technical Reports Server (NTRS)
Knuth, Kevin H.
2005-01-01
Source separation problems are ubiquitous in the physical sciences; any situation where signals are superimposed calls for source separation to estimate the original signals. In this tutorial I will discuss the Bayesian approach to the source separation problem. This approach has a specific advantage in that it requires the designer to explicitly describe the signal model in addition to any other information or assumptions that go into the problem description. This leads naturally to the idea of informed source separation, where the algorithm design incorporates relevant information about the specific problem. This approach promises to enable researchers to design their own high-quality algorithms that are specifically tailored to the problem at hand.
Research on preventive technologies for bed-separation water hazard in China coal mines
NASA Astrophysics Data System (ADS)
Gui, Herong; Tong, Shijie; Qiu, Weizhong; Lin, Manli
2018-03-01
Bed-separation water is one of the major water hazards in coal mines. Targeted research on preventive technologies is of paramount importance for safe mining. This article studied the restrictive effect of geological and mining factors, such as the lithological properties of roof strata, coal seam inclination, water source to bed separations, roof management method, dimensions of the mining working face, and mining progress, on the formation of bed-separation water hazards. The key techniques to prevent bed-separation water-related accidents include interception, diversion, destruction of the buffer layer, grouting and backfilling, etc. The operation and efficiency of each technique are corroborated in field engineering cases. The results of this study will offer a reference for countries with similar mining conditions in research on bed-separation water bursts and hazard control in coal mines.
Solar quiet day ionospheric source current in the West African region
Obiekezie, Theresa N.; Okeke, Francisca N.
2012-01-01
The Solar Quiet (Sq) day source current was calculated using the magnetic data obtained from a chain of 10 magnetotelluric stations installed in the African sector during the French participation in the International Equatorial Electrojet Year (IEEY) experiment in Africa. The components of the geomagnetic field recorded at the stations from January to December 1993 during the experiment were separated into the source and induced components of Sq using the Spherical Harmonic Analysis (SHA) method. The range of the source current was calculated, which enabled the viewing of a full year's change in the Sq source current system. PMID:25685434
Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming
2011-01-01
This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots' search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot's detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection-diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method.
Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming
2011-01-01
This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots’ search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot’s detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection–diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method. PMID:22346650
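As an illustration of coordinating the search with particle swarm optimization, the sketch below reads the fitness of each candidate position from an estimated odor-source probability map; the grid map, inertia and acceleration constants are illustrative placeholders, not the parameters of the published system.

```python
import numpy as np

def pso_search(prob_map, n_robots=5, n_iter=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """prob_map: 2-D array of estimated odor-source probabilities over a grid."""
    rng = np.random.default_rng(seed)
    upper = np.array(prob_map.shape, dtype=float) - 1.0

    def fitness(p):
        i, j = np.clip(p, 0, upper).astype(int)    # look up the probability map at this position
        return prob_map[i, j]

    pos = rng.uniform(0, upper, size=(n_robots, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_robots, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, upper)
        vals = np.array([fitness(p) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest    # most probable source location found by the swarm
```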
NASA Astrophysics Data System (ADS)
Liu, Luyao; Feng, Minquan
2018-03-01
[Objective] This study quantitatively evaluated risk probabilities of sudden water pollution accidents under the influence of risk sources, thus providing an important guarantee for risk source identification during water diversion from the Hanjiang River to the Weihe River. [Methods] The research used Bayesian networks to represent the correlation between accidental risk sources. It also adopted the sequential Monte Carlo algorithm to combine water quality simulation with state simulation of risk sources, thereby determining standard-exceeding probabilities of sudden water pollution accidents. [Results] When the upstream inflow was 138.15 m3/s and the average accident duration was 48 h, the probabilities were 0.0416 and 0.0056, respectively. When the upstream inflow was 55.29 m3/s and the average accident duration was 48 h, the probabilities were 0.0225 and 0.0028, respectively. [Conclusions] The research conducted a risk assessment on sudden water pollution accidents, thereby providing an important guarantee for the smooth implementation, operation, and water quality of the Hanjiang-to-Weihe River Diversion Project.
NASA Astrophysics Data System (ADS)
Geddes, Earl Russell
The details of the low frequency sound field for a rectangular room can be studied by the use of an established analytic technique, separation of variables. The solution is straightforward and the results are well known. A non-rectangular room has boundary conditions which are not separable and therefore other solution techniques must be used. This study shows that the finite element method can be adapted for use in the study of sound fields in arbitrarily shaped enclosures. The finite element acoustics problem is formulated and the modification of a standard program, which is necessary for solving acoustic field problems, is examined. The solution of the semi-non-rectangular room problem (one where the floor and ceiling remain parallel) is carried out by a combined finite element/separation of variables approach. The solution results are used to construct the Green's function for the low frequency sound field in five rooms (or data cases): (1) a rectangular (Louden) room; (2) the smallest wall of the Louden room canted 20 degrees from normal; (3) the largest wall of the Louden room canted 20 degrees from normal; (4) both the largest and the smallest walls canted 20 degrees; and (5) a five-sided room variation of Case 4. Case 1, the rectangular room, was calculated using both the finite element method and the separation of variables technique. The results for the two methods are compared in order to assess the accuracy of the finite element models. The modal damping coefficients are calculated and the results examined. The statistics of the source- and receiver-averaged normalized RMS P² responses in the 80 Hz, 100 Hz, and 125 Hz one-third octave bands are developed. The receiver-averaged pressure response is developed to determine the effect of the source locations on the response. Twelve source locations are examined and the results tabulated for comparison. The effect of a finite-sized source is examined briefly. Finally, the standard deviation of the spatial pressure response is studied. The results for this characteristic show that it is not significantly different in any of the rooms. The conclusions of the study are that only the frequency variations of the pressure response are affected by a room's shape. Further, in general, the simplest modification of a rectangular room (i.e., changing the angle of only one of the smallest walls) produces the most pronounced decrease of the pressure response variations in the low frequency region.
NASA Astrophysics Data System (ADS)
Wright, L.; Coddington, O.; Pilewskie, P.
2017-12-01
Hyperspectral instruments are a growing class of Earth observing sensors designed to improve remote sensing capabilities beyond discrete multi-band sensors by providing tens to hundreds of continuous spectral channels. Improved spectral resolution, range and radiometric accuracy allow the collection of large amounts of spectral data, facilitating thorough characterization of both atmospheric and surface properties. We describe the development of an Informed Non-Negative Matrix Factorization (INMF) spectral unmixing method to exploit this spectral information and separate atmospheric and surface signals based on their physical sources. INMF offers marked benefits over other commonly employed techniques including non-negativity, which avoids physically impossible results; and adaptability, which tailors the method to hyperspectral source separation. The INMF algorithm is adapted to separate contributions from physically distinct sources using constraints on spectral and spatial variability, and library spectra to improve the initial guess. Using this INMF algorithm we decompose hyperspectral imagery from the NASA Hyperspectral Imager for the Coastal Ocean (HICO), with a focus on separating surface and atmospheric signal contributions. HICO's coastal ocean focus provides a dataset with a wide range of atmospheric and surface conditions. These include atmospheres with varying aerosol optical thicknesses and cloud cover. HICO images also provide a range of surface conditions including deep ocean regions, with only minor contributions from the ocean surfaces; and more complex shallow coastal regions with contributions from the seafloor or suspended sediments. We provide extensive comparison of INMF decomposition results against independent measurements of physical properties. These include comparison against traditional model-based retrievals of water-leaving, aerosol, and molecular scattering radiances and other satellite products, such as aerosol optical thickness from the Moderate Resolution Imaging Spectroradiometer (MODIS).
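A minimal sketch of plain non-negative matrix factorization with multiplicative updates (Euclidean cost) is given below as a stand-in for the INMF described above; the library-based initialization and the spectral/spatial variability constraints that make the method "informed" are omitted, and the rank and iteration count are assumptions.

```python
import numpy as np

def nmf(X, rank, n_iter=500, eps=1e-9, seed=0):
    """X: (n_pixels, n_bands) non-negative hyperspectral data matrix."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], rank))            # per-pixel abundances
    H = rng.random((rank, X.shape[1]))            # source spectra (surface, atmosphere, ...)
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)      # multiplicative update for spectra
        W *= (X @ H.T) / (W @ H @ H.T + eps)      # multiplicative update for abundances
    return W, H
```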
Single-channel mixed signal blind source separation algorithm based on multiple ICA processing
NASA Astrophysics Data System (ADS)
Cheng, Xiefeng; Li, Ji
2017-01-01
Taking the separation of the fetal heart sound signal from the mixed signal obtained with an electronic stethoscope as the research background, this paper puts forward a single-channel mixed signal blind source separation algorithm based on multiple ICA processing. Firstly, using empirical mode decomposition (EMD), the single-channel mixed signal is decomposed into multiple orthogonal signal components, which are then processed by ICA. The resulting independent signal components are called independent sub-components of the mixed signal. Then, by combining the multiple independent sub-components with the single-channel mixed signal, the single-channel signal is expanded into multi-channel signals, which turns the under-determined blind source separation problem into a well-posed blind source separation problem. Further, an estimate of the source signal is obtained by ICA processing. Finally, if the separation effect is not ideal, the previous separation result is combined with the single-channel mixed signal and the ICA processing is repeated until the desired estimate of the source signal is obtained. The simulation results show that the algorithm has a good separation effect for single-channel mixed physiological signals.
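The channel-expansion idea can be sketched as follows, assuming the PyEMD package and scikit-learn are available; FastICA stands in for the ICA stage, and the iterative re-combination loop described above is omitted for brevity.

```python
import numpy as np
from PyEMD import EMD
from sklearn.decomposition import FastICA

def single_channel_separation(x, n_sources=2):
    """x: 1-D single-channel mixture (e.g. a stethoscope recording)."""
    imfs = EMD().emd(x)                           # (n_imfs, n_samples) orthogonal components
    multichannel = np.vstack([x, imfs])           # expand the single channel into many channels
    ica = FastICA(n_components=n_sources, random_state=0)
    sources = ica.fit_transform(multichannel.T)   # (n_samples, n_sources) estimated sources
    return sources.T
```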
Cegłowski, Michał; Smoluch, Marek; Reszke, Edward; Silberring, Jerzy; Schroeder, Grzegorz
2016-01-01
A thin-layer chromatography-mass spectrometry (TLC-MS) setup for characterization of low molecular weight compounds separated on standard TLC plates has been constructed. This new approach successfully combines TLC separation, laser ablation, and ionization using flowing atmospheric pressure afterglow (FAPA) source. For the laser ablation, a low-priced 445-nm continuous-wave diode laser pointer, with a power of 1 W, was used. The combination of the simple, low-budget laser pointer and the FAPA ion source has made this experimental arrangement broadly available, also for small laboratories. The approach was successfully applied for the characterization of low molecular weight compounds separated on TLC plates, such as a mixture of pyrazole derivatives, alkaloids (nicotine and sparteine), and an extract from a drug tablet consisting of paracetamol, propyphenazone, and caffeine. The laser pointer used was capable of ablating organic compounds without the need of application of any additional substances (matrices, staining, etc.) on the TLC spots. The detection limit of the proposed method was estimated to be 35 ng/cm² of a pyrazole derivative.
Isobar separation at very low energy for AMS
NASA Astrophysics Data System (ADS)
Litherland, A. E.; Tomski, I.; Zhao, X.-L.; Cousins, Lisa M.; Doupé, J. P.; Javahery, G.; Kieser, W. E.
2007-06-01
The separation of atomic and molecular isobars, prior to injection into a tandem accelerator for Accelerator Mass Spectrometry (AMS), is discussed. To accomplish this separation, the anions from a standard sputter ion source are retarded to eV energy. The advantages of using very low energy (eV) for this purpose are twofold. The ionic reactions in gases can be isobar specific and the multiple scattering of the eV ions, unlike that at higher energy, can be controlled in linear radio-frequency multipoles. An example of current interest to AMS practice, the suppression of the S- isobar ions from negative ion sources generating mainly Cl- ions, will be described. It will be argued that this is a universal method for isobar separation prior to AMS, which is applicable to atomic anions and cations as well as their molecular counterparts. This procedure should be applicable to the AMS analysis of most rare radioactive species, as atomic or molecular ions, starting with either anions or cations, with appropriate charge changing. In some cases the ions may be analysable without AMS.
Recycling of high purity selenium from CIGS solar cell waste materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustafsson, Anna M.K., E-mail: anna.gustafsson@chalmers.se; Foreman, Mark R.StJ.; Ekberg, Christian
Highlights: • A new method for recycling of selenium from CIGS solar cell materials is presented. • Separation of selenium as selenium dioxide after heating in oxygen atmosphere. • Complete selenium separation after oxidation of <63 μm particles at 800 °C for 1 h. • After reduction of selenium dioxide the selenium purity was higher than 99.999 wt%. - Abstract: Copper indium gallium diselenide (CIGS) is a promising material in thin film solar cell production. To make CIGS solar cells more competitive, both economically and environmentally, in comparison to other energy sources, methods for recycling are needed. In addition to the generally high price of the material, significant amounts of the metals are lost in the manufacturing process. The feasibility of recycling selenium from CIGS through oxidation at elevated temperatures was therefore examined. During oxidation gaseous selenium dioxide was formed and could be separated from the other elements, which remained in solid state. Upon cooling, the selenium dioxide sublimes and can be collected as crystals. After oxidation for 1 h at 800 °C all of the selenium was separated from the CIGS material. Two different reduction methods for reduction of the selenium dioxide to selenium were tested. In the first reduction method an organic molecule was used as the reducing agent in a Riley reaction. In the second reduction method sulphur dioxide gas was used. Both methods resulted in high purity selenium. This proves that the studied selenium separation method could be the first step in a recycling process aimed at the complete separation and recovery of high purity elements from CIGS.
NASA Technical Reports Server (NTRS)
Mcgary, Michael C.
1988-01-01
The anticipated application of advanced turboprop propulsion systems is expected to increase the interior noise of future aircraft to unacceptably high levels. The absence of technically and economically feasible noise source-path diagnostic tools has been a prime obstacle in the development of efficient noise control treatments for propeller-driven aircraft. A new diagnostic method that permits the separation and prediction of the fully coherent airborne and structureborne components of the sound radiated by plates or thin shells has been developed. Analytical and experimental studies of the proposed method were performed on an aluminum plate. The results of the study indicate that the proposed method could be used in flight, and has fewer encumbrances than the other diagnostic tools currently available.
Advances in audio source separation and multisource audio content retrieval
NASA Astrophysics Data System (ADS)
Vincent, Emmanuel
2012-06-01
Audio source separation aims to extract the signals of individual sound sources from a given recording. In this paper, we review three recent advances which improve the robustness of source separation in real-world challenging scenarios and enable its use for multisource content retrieval tasks, such as automatic speech recognition (ASR) or acoustic event detection (AED) in noisy environments. We present a Flexible Audio Source Separation Toolkit (FASST) and discuss its advantages compared to earlier approaches such as independent component analysis (ICA) and sparse component analysis (SCA). We explain how cues as diverse as harmonicity, spectral envelope, temporal fine structure or spatial location can be jointly exploited by this toolkit. We subsequently present the uncertainty decoding (UD) framework for the integration of audio source separation and audio content retrieval. We show how the uncertainty about the separated source signals can be accurately estimated and propagated to the features. Finally, we explain how this uncertainty can be efficiently exploited by a classifier, both at the training and the decoding stage. We illustrate the resulting performance improvements in terms of speech separation quality and speaker recognition accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaduchak, Gregory; Ward, Michael D.
An apparatus for separating particles from a medium includes a capillary defining a flow path therein that is in fluid communication with a medium source. The medium source includes engineered acoustic contrast capture particles having a predetermined acoustic contrast. The apparatus includes a vibration generator that is operable to produce at least one acoustic field within the flow path. The acoustic field produces a force potential minimum for positive acoustic contrast particles and a force potential minimum for negative acoustic contrast particles in the flow path and drives the engineered acoustic contrast capture particles to either the force potential minimum for positive acoustic contrast particles or the force potential minimum for negative acoustic contrast particles.
Household food waste separation behavior and the importance of convenience.
Bernstad, Anna
2014-07-01
Two different strategies aimed at increasing household source-separation of food waste were assessed through a case study in a Swedish residential area: (a) use of written information, distributed as leaflets amongst households, and (b) installation of equipment for source-segregation of waste, with the aim of increasing the convenience of food waste sorting in kitchens. Weighing of separately collected food waste before and after distribution of the written information suggests that it resulted in neither a significantly increased amount of separately collected food waste nor an increased source-separation ratio. After installation of sorting equipment in households, both the amount of separately collected food waste and the source-separation ratio increased markedly. Long-term monitoring shows that the results were long-lasting. The results emphasize the importance of convenience and the existence of the infrastructure necessary for source-segregation of waste as important factors for household waste recycling, but also highlight the need to address these aspects where waste is generated, i.e. inside the household. Copyright © 2014 Elsevier Ltd. All rights reserved.
Separation of the Magnetic Field into Parts Produced by Internal and External Sources
NASA Astrophysics Data System (ADS)
Lazanja, David
2005-10-01
Given the total magnetic field on a toroidal plasma surface, a method for decomposing the field into a part due to internal currents (often the plasma) and a part due to external currents is presented. The decomposition exploits Laplace theory which is valid in the vacuum region between the plasma surface and the chamber walls. The method does not assume toroidal symmetry, and it is partly based on Merkel's 1986 work on vacuum field computations. A change in the plasma shape is produced by the total normal field perturbation on the plasma surface. This method allows a separation of the total normal field perturbation into a part produced by external currents and a part produced by the plasma response.
Bouridane, Ahmed; Ling, Bingo Wing-Kuen
2018-01-01
This paper presents an unsupervised learning algorithm for sparse nonnegative matrix factor time–frequency deconvolution with optimized fractional β-divergence. The β-divergence is a group of cost functions parametrized by a single parameter β. The Itakura–Saito divergence, Kullback–Leibler divergence and Least Square distance are special cases that correspond to β=0, 1, 2, respectively. This paper presents a generalized algorithm that uses a flexible range of β that includes fractional values. It describes a maximization–minimization (MM) algorithm leading to the development of a fast convergence multiplicative update algorithm with guaranteed convergence. The proposed model operates in the time–frequency domain and decomposes an information-bearing matrix into two-dimensional deconvolution of factor matrices that represent the spectral dictionary and temporal codes. The deconvolution process has been optimized to yield sparse temporal codes through maximizing the likelihood of the observations. The paper also presents a method to estimate the fractional β value. The method is demonstrated on separating audio mixtures recorded from a single channel. The paper shows that the extraction of the spectral dictionary and temporal codes is significantly more efficient by using the proposed algorithm and subsequently leads to better source separation performance. Experimental tests and comparisons with other factorization methods have been conducted to verify its efficacy. PMID:29702629
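The core β-divergence multiplicative update can be sketched as follows for plain NMF; the two-dimensional deconvolution and the sparsity penalty described above are omitted, and the fractional β value shown is only an example.

```python
import numpy as np

def beta_nmf(V, rank, beta=0.5, n_iter=300, eps=1e-12, seed=0):
    """V: non-negative magnitude spectrogram, shape (n_freqs, n_frames)."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + eps      # spectral dictionary
    H = rng.random((rank, V.shape[1])) + eps      # temporal codes
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + eps)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H
```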
Shashilov, Victor A; Sikirzhytski, Vitali; Popova, Ludmila A; Lednev, Igor K
2010-09-01
Here we report on novel quantitative approaches for protein structural characterization using deep UV resonance Raman (DUVRR) spectroscopy. Specifically, we propose a new method combining hydrogen-deuterium (HD) exchange and Bayesian source separation for extracting the DUVRR signatures of various structural elements of aggregated proteins including the cross-beta core and unordered parts of amyloid fibrils. The proposed method is demonstrated using the set of DUVRR spectra of hen egg white lysozyme acquired at various stages of HD exchange. Prior information about the concentration matrix and the spectral features of the individual components was incorporated into the Bayesian equation to eliminate the ill-conditioning of the problem caused by 100% correlation of the concentration profiles of protonated and deuterated species. Secondary structure fractions obtained by partial least squares (PLS) and least squares support vector machines (LS-SVMs) were used as the initial guess for the Bayesian source separation. Advantages of the PLS and LS-SVMs methods over the classical least squares calibration (CLSC) are discussed and illustrated using the DUVRR data of the prion protein in its native and aggregated forms. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Kou, Hwang-Shang; Lin, Tsai-Pei; Chung, Tang-Chia; Wu, Hsin-Lung
2006-06-01
A simple MEKC method is described for the separation and quantification of seven widely used uricosuric and antigout drugs, including allopurinol (AP), benzbromarone (BZB), colchicine (COL), orotic acid (OA), oxypurinol (OP), probenecid (PB), and sulfinpyrazone (SPZ). The drugs were separated in a BGE of borate buffer (45 mM; pH 9.00) with SDS (20 mM) as the micellar source and the separated drugs were directly monitored with a UV detector (214 nm). Several parameters affecting the separation and analysis of the drugs were studied. Based on the normalized peak-area ratios of the drugs to an internal standard versus the concentration of the drugs, the method is applicable to quantify BZB, COL, and SPZ (each 5-200 microM), AP, OA, OP, and PB (each 10-200 microM) with detection limits (S/N = 3, 0.5 psi, 5 s injection) in the range of 0.6-4.0 microM. The precision (RSD; n = 5) and accuracy (relative error; n = 5) of the method for intraday and interday analyses of the analytes at three levels (30, 120, and 180 microM) are below 4% (n = 3). The method was demonstrated to be suitable for the analysis of AP and COL in commercial tablets with speed and simplicity.
Xu, Wanying; Zhou, Chuanbin; Lan, Yajun; Jin, Jiasheng; Cao, Aixin
2015-05-01
Municipal solid waste (MSW) management (MSWM) is particularly important and challenging in large urban communities. Sound community-based waste management systems normally include waste reduction and material recycling elements, often entailing the separation of recyclable materials by the residents. To increase the efficiency of source separation and recycling, an incentive-based source separation model was designed, and this model was tested in 76 households in Guiyang, a city of almost three million people in southwest China. This model embraced the concepts of rewarding households for sorting organic waste, government funds for waste reduction, and introducing small recycling enterprises for promoting source separation. Results show that after one year of operation, the waste reduction rate was 87.3%, and the comprehensive net benefit under the incentive-based source separation model increased by 18.3 CNY per tonne (2.4 Euros per tonne), compared to that under the normal model. The stakeholder analysis (SA) shows that the centralized MSW disposal enterprises had minimum interest and may oppose the start-up of a new recycling system, while small recycling enterprises had a primary interest in promoting the incentive-based source separation model, but they had the least ability to make any change to the current recycling system. The strategies for promoting this incentive-based source separation model are also discussed in this study. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Sadhu, A.; Narasimhan, S.; Antoni, J.
2017-09-01
Output-only modal identification has seen significant activity in recent years, especially in large-scale structures where controlled input force generation is often difficult to achieve. This has led to the development of new system identification methods which do not require controlled input. They often work satisfactorily if they satisfy some general assumptions - not overly restrictive - regarding the stochasticity of the input. Hundreds of papers covering a wide range of applications appear every year related to the extraction of modal properties from output measurement data in more than two dozen mechanical, aerospace and civil engineering journals. In little more than a decade, concepts of blind source separation (BSS) from the field of acoustic signal processing have been adopted by several researchers, who have shown that they can be attractive tools for output-only modal identification. Originally intended to separate distinct audio sources from a mixture of recordings, BSS methods have since been shown to be mathematically equivalent to problems in linear structural dynamics. This has enabled many of the developments in the field of BSS to be modified and applied to output-only modal identification problems. This paper reviews over a hundred articles related to the application of BSS and its variants to output-only modal identification. The main contribution of the paper is to present a literature review of the papers which have appeared on the subject. While a brief treatment of the basic ideas is presented where relevant, a comprehensive and critical explanation of their contents is not attempted. Specific issues related to output-only modal identification and the relative advantages and limitations of BSS methods, both from theoretical and application standpoints, are discussed. Gap areas requiring additional work are also summarized, and the paper concludes with possible future trends in this area.
Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B
2016-05-01
The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by the classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. This was demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of the HPSAG and the autocollimator, detailed investigations of error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method offers the unique opportunity to characterize other error sources such as errors due to temperature drift in long-term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.
SEPHYDRO: An Integrated Multi-Filter Web-Based Tool for Baseflow Separation
NASA Astrophysics Data System (ADS)
Serban, D.; MacQuarrie, K. T. B.; Popa, A.
2017-12-01
Knowledge of baseflow contributions to streamflow is important for understanding watershed scale hydrology, including groundwater-surface water interactions, impact of geology and landforms on baseflow, estimation of groundwater recharge rates, etc. Baseflow (or hydrograph) separation methods can be used as supporting tools in many areas of environmental research, such as the assessment of the impact of agricultural practices, urbanization and climate change on surface water and groundwater. Over the past few decades various digital filtering and graphically-based methods have been developed in an attempt to improve the assessment of the dynamics of the various sources of streamflow (e.g. groundwater, surface runoff, subsurface flow); however, these methods are not available under an integrated platform and, individually, often require significant effort for implementation. Here we introduce SEPHYDRO, an open access, customizable web-based tool, which integrates 11 algorithms allowing for separation of streamflow hydrographs. The streamlined interface incorporates a reference guide as well as additional information that allows users to import their own data, customize the algorithms, and compare, visualise and export results. The tool includes one-, two- and three-parameter digital filters as well as graphical separation methods and has been successfully applied in Atlantic Canada, in studies dealing with nutrient loading to fresh water and coastal water ecosystems. Future developments include integration of additional separation algorithms as well as incorporation of geochemical separation methods. SEPHYDRO has been developed through a collaborative research effort between the Canadian Rivers Institute, University of New Brunswick (Fredericton, New Brunswick, Canada), Agriculture and Agri-Food Canada and Environment and Climate Change Canada and is currently available at http://canadianriversinstitute.com/tool/
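As an example of the one-parameter digital filters integrated in such tools, the sketch below implements a Lyne-Hollick-type recursive filter; the filter parameter and the single forward pass are illustrative, whereas practical workflows typically use several forward and backward passes.

```python
import numpy as np

def baseflow_filter(Q, alpha=0.925):
    """Q: streamflow time series. Returns (baseflow, quickflow) of the same length."""
    Q = np.asarray(Q, dtype=float)
    quick = np.zeros_like(Q)
    for t in range(1, len(Q)):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (Q[t] - Q[t - 1])
        quick[t] = min(max(quick[t], 0.0), Q[t])   # keep quickflow within [0, Q]
    return Q - quick, quick
```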
Concentration and separation of biological organisms by ultrafiltration and dielectrophoresis
Simmons, Blake A.; Hill, Vincent R.; Fintschenko, Yolanda; Cummings, Eric B.
2010-10-12
Disclosed is a method for monitoring sources of public water supply for a variety of pathogens by using a combination of ultrafiltration techniques together with dielectrophoretic separation techniques. Because water-borne pathogens, whether present due to "natural" contamination or intentional introduction, would likely be present in drinking water at low concentrations when samples are collected for monitoring or outbreak investigations, an approach is needed to quickly and efficiently concentrate and separate particles such as viruses, bacteria, and parasites in large volumes of water (e.g., 100 L or more) while simultaneously reducing the sample volume to levels sufficient for detecting low concentrations of microbes (e.g., <10 mL). The technique is also designed to screen the separated microbes based on specific conductivity and size.
Sour pressure swing adsorption process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhadra, Shubhra Jyoti; Wright, Andrew David; Hufton, Jeffrey Raymond
Methods and apparatuses for separating CO.sub.2 and sulfur-containing compounds from a synthesis gas obtained from gasification of a carbonaceous feedstock. The primary separating steps are performed using a sour pressure swing adsorption (SPSA) system, followed by an acid gas enrichment system and a sulfur removal unit. The SPSA system includes multiple pressure equalization steps and a rinse step using a rinse gas that is supplied from a source other than directly from one of the adsorber beds of the SPSA system.
High-performance liquid chromatography of oligoguanylates at high pH
NASA Technical Reports Server (NTRS)
Stribling, R.; Deamer, D. (Principal Investigator)
1991-01-01
Because of the stable self-structures formed by oligomers of guanosine, standard high-performance liquid chromatography techniques for oligonucleotide fractionation are not applicable. Previously, oligoguanylate separations have been carried out at pH 12 using RPC-5 as the packing material. While RPC-5 provides excellent separations, there are several limitations, including the lack of a commercially available source. This report describes a new anion-exchange high-performance liquid chromatography method using HEMA-IEC BIO Q, which successfully separates different forms of the guanosine monomer as well as longer oligoguanylates. The reproducibility and stability at high pH suggest a versatile role for this material.
Blind separation of incoherent and spatially disjoint sound sources
NASA Astrophysics Data System (ADS)
Dong, Bin; Antoni, Jérôme; Pereira, Antonio; Kellermann, Walter
2016-11-01
Blind separation of sound sources aims at reconstructing the individual sources which contribute to the overall radiation of an acoustical field. The challenge is to reach this goal using distant measurements when all sources are operating concurrently. The working assumption is usually that the sources of interest are incoherent - i.e. statistically orthogonal - so that their separation can be approached by decorrelating a set of simultaneous measurements, which amounts to diagonalizing the cross-spectral matrix. Principal Component Analysis (PCA) is traditionally used to this end. This paper reports two new findings in this context. First, a sufficient condition is established under which "virtual" sources returned by PCA coincide with true sources; it stipulates that the sources of interest should be not only incoherent but also spatially orthogonal. A particular case of this instance is met by spatially disjoint sources - i.e. with non-overlapping support sets. Second, based on this finding, a criterion that enforces both statistical and spatial orthogonality is proposed to blindly separate incoherent sound sources which radiate from disjoint domains. This criterion can be easily incorporated into acoustic imaging algorithms such as beamforming or acoustical holography to identify sound sources of different origins. The proposed methodology is validated on laboratory experiments. In particular, the separation of aeroacoustic sources is demonstrated in a wind tunnel.
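A minimal sketch of the decorrelation step described above: diagonalizing a cross-spectral matrix of multi-microphone spectra via eigendecomposition (PCA). The array shapes and synthetic data are assumptions for illustration, not the authors' experimental setup.

```python
import numpy as np

# X: complex spectra of M microphone signals at one frequency bin,
# shape (M, n_snapshots) -- assumed layout for this sketch
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200))

# cross-spectral matrix (Hermitian), averaged over snapshots
S = X @ X.conj().T / X.shape[1]

# PCA step: diagonalize S; eigenvectors give the "virtual" incoherent sources
eigvals, eigvecs = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]           # sort by decreasing power
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# project the measurements onto the principal directions (decorrelated components)
virtual_sources = eigvecs.conj().T @ X
print(eigvals[:3])                           # powers of the strongest virtual sources
```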
NASA Astrophysics Data System (ADS)
Liu, Lingling; Li, Chenxi; Zhao, Huijuan; Yi, Xi; Gao, Feng; Meng, Wei; Lu, Yiming
2014-03-01
Radiance is sensitive to variations in tissue optical parameters, such as the absorption coefficient μa, the scattering coefficient μs, and the anisotropy factor g. Therefore, like fluence, radiance can be used for tissue characterization. Compared with fluence, radiance has the advantage of offering directional information about the light intensity. Taking this advantage, the optical parameters can be determined by rotating the detector through 360 deg with only a single optode pair. Instead of the translation mode used in fluence-based technologies, the rotation mode is less invasive in clinical diagnosis. This paper explores a new method to obtain the optical properties by measuring the distribution of light intensity in a liquid phantom with only a single optode pair and the detector rotated through 360 deg. The angular radiance and distance-dependent radiance are verified by comparing experimental measurements with Monte Carlo (MC) simulation for short source-detector separations and with the diffusion approximation for large source-detector separations. Detecting angular radiance with only a single optode pair at a given source-detector separation provides a way toward prostate diagnosis and light dose calculation during photodynamic therapy (PDT).
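For reference, a sketch of the standard infinite-medium diffusion-approximation fluence used as the large-separation comparison mentioned above; the optical parameter values are assumed, illustrative numbers, not those of the paper's phantom.

```python
import numpy as np

def diffusion_fluence(r_cm, mu_a, mu_s_prime):
    """Infinite-medium diffusion-approximation fluence rate (per unit source power).

    r_cm        : source-detector separation in cm
    mu_a        : absorption coefficient (1/cm)
    mu_s_prime  : reduced scattering coefficient mu_s*(1-g) (1/cm)
    """
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))    # diffusion coefficient
    mu_eff = np.sqrt(mu_a / D)               # effective attenuation coefficient
    return np.exp(-mu_eff * r_cm) / (4.0 * np.pi * D * r_cm)

# illustrative tissue-like parameters (assumed values)
for r in (0.5, 1.0, 2.0):
    print(r, diffusion_fluence(r, mu_a=0.1, mu_s_prime=10.0))
```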
SPRINKLER IRRIGATION AS A VOC SEPARATION AND DISPOSAL METHOD
Sprinkler irrigation is a common farming practice in those states where the semi-arid climate and lack of sufficient rainfall during critical growing periods necessitate the use of supplemental water. The source of most irrigation water is groundwater which can be contaminated wi...
NASA Astrophysics Data System (ADS)
Dong, Shaojiang; Sun, Dihua; Xu, Xiangyang; Tang, Baoping
2017-06-01
Extracting feature information from the vibration signal of a space bearing is difficult because of several types of noise, such as the running trend, high-frequency noise and, in particular, strong power line interference (50 Hz) and its harmonics from the ground-based space-simulation equipment. This article proposes a combination method to eliminate them. First, empirical mode decomposition (EMD) is used to remove the running trend of the signal, eliminating the trend component that degrades processing accuracy. Then a morphological filter is used to eliminate high-frequency noise. Finally, the components and characteristics of the power line interference are analyzed and, based on these characteristics, a revised blind source separation model is used to remove the interference. Analysis of simulated data and a practical application suggests that the proposed method can effectively eliminate these noise components.
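To illustrate the interference-removal idea (this is not the authors' revised blind source separation model), the sketch below removes a 50 Hz tone and its harmonics from a vibration signal by least-squares sinusoid fitting and subtraction; the sampling rate, harmonic count and signal are placeholders.

```python
import numpy as np

def remove_line_interference(x, fs, f0=50.0, n_harmonics=3):
    """Remove a power-line tone and its harmonics by least-squares sinusoid fitting.

    A simple illustration of interference removal, not the revised
    blind-source-separation model described in the abstract.
    """
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x)) / fs
    cols = []
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * f0 * t))
        cols.append(np.cos(2 * np.pi * k * f0 * t))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)   # amplitudes of each tone
    return x - A @ coef                            # signal with the fitted tones removed

# example: bearing-like component buried in 50 Hz interference (synthetic)
fs = 2000.0
t = np.arange(0, 1.0, 1 / fs)
signal = 0.3 * np.sin(2 * np.pi * 130 * t) + 1.0 * np.sin(2 * np.pi * 50 * t)
clean = remove_line_interference(signal, fs)
```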
Chai, Rifai; Naik, Ganesh R; Nguyen, Tuan Nghia; Ling, Sai Ho; Tran, Yvonne; Craig, Ashley; Nguyen, Hung T
2017-05-01
This paper presents a two-class electroencephalography-based classification of driver fatigue (fatigue state versus alert state) from 43 healthy participants. The system uses independent component analysis by entropy rate bound minimization (ERBM-ICA) for source separation, autoregressive (AR) modeling for feature extraction, and a Bayesian neural network as the classification algorithm. The classification results demonstrate a sensitivity of 89.7%, a specificity of 86.8%, and an accuracy of 88.2%. The combination of ERBM-ICA (source separator), AR modeling (feature extractor), and Bayesian neural network (classifier) provides the best outcome (p < 0.05), with the highest area under the receiver operating curve (AUC-ROC = 0.93), compared with other methods such as power spectral density as the feature extractor (AUC-ROC = 0.81). The results of this study suggest the method could be used effectively in a countermeasure device for driver fatigue identification and other adverse event applications.
Guo, Yan; Chen, Xinguang; Gong, Jie; Li, Fang; Zhu, Chaoyang; Yan, Yaqiong; Wang, Liang
2016-01-01
Background Millions of people move from rural areas to urban areas in China to pursue new opportunities while leaving their spouses and children at their rural homes. Little is known about the impact of migration-related separation on the mental health of these rural migrants in urban China. Methods Survey data from a random sample of rural-to-urban migrants (n = 1113, aged 18-45) from Wuhan were analyzed. The Domestic Migration Stress Questionnaire (DMSQ), an instrument with four subconstructs, was used to measure migration-related stress. The relationship between spouse/child separation and stress was assessed using survey estimation methods to account for the multi-level sampling design. Results 16.46% of couples were separated from their spouses (spouse-separation only), and 25.81% of parents were separated from their children (child-separation only). Among the participants who were married and had children, 5.97% were separated from both their spouses and children (double separation). Participants with spouse-separation only or double separation did not score significantly higher on the DMSQ than those with no separation. Compared to parents without child separation, parents with child separation scored significantly higher on the DMSQ (mean score = 2.88, 95% CI: [2.81, 2.95] vs. 2.60 [2.53, 2.67], p < .05). Stratified analysis by separation type and by gender indicated that the association was stronger for child-separation only and for female participants. Conclusion Child separation is an important source of migration-related stress, and the effect is particularly strong for migrant women. Public policies and intervention programs should consider these factors and encourage and facilitate the co-migration of parents with their children to mitigate migration-related stress. PMID:27124768
Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J
2013-05-01
Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Kim, Hakil; Swain, Philip H.
1990-01-01
An axiomatic approach to interval-valued (IV) probabilities is presented, in which the IV probability is defined by a pair of set-theoretic functions that satisfy pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated and call for more intelligent decision-making strategies. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case a set of multiple sources is obtained by dividing the dimensionally large data into smaller and more manageable pieces based on global statistical correlation information. By this divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
Billeci, Lucia; Varanini, Maurizio
2017-01-01
The non-invasive fetal electrocardiogram (fECG) technique has recently received considerable interest in monitoring fetal health. The aim of our paper is to propose a novel fECG algorithm based on the combination of the criteria of independent source separation and of a quality index optimization (ICAQIO-based). The algorithm was compared with two methods applying the two different criteria independently—the ICA-based and the QIO-based methods—which were previously developed by our group. All three methods were tested on the recently implemented Fetal ECG Synthetic Database (FECGSYNDB). Moreover, the performance of the algorithm was tested on real data from the PhysioNet fetal ECG Challenge 2013 Database. The proposed combined method outperformed the other two algorithms on the FECGSYNDB (ICAQIO-based: 98.78%, QIO-based: 97.77%, ICA-based: 97.61%). Significant differences were obtained in particular in the conditions when uterine contractions and maternal and fetal ectopic beats occurred. On the real data, all three methods obtained very high performances, with the QIO-based method proving slightly better than the other two (ICAQIO-based: 99.38%, QIO-based: 99.76%, ICA-based: 99.37%). The findings from this study suggest that the proposed method could potentially be applied as a novel algorithm for accurate extraction of fECG, especially in critical recording conditions. PMID:28509860
Improved FastICA algorithm in fMRI data analysis using the sparsity property of the sources.
Ge, Ruiyang; Wang, Yubao; Zhang, Jipeng; Yao, Li; Zhang, Hang; Long, Zhiying
2016-04-01
As a blind source separation technique, independent component analysis (ICA) has many applications in functional magnetic resonance imaging (fMRI). Although either temporal or spatial prior information has been introduced into constrained ICA and semi-blind ICA methods to improve the performance of ICA in fMRI data analysis, certain types of additional prior information, such as sparsity, have seldom been added to ICA algorithms as constraints. In this study, we proposed a SparseFastICA method by adding source sparsity as a constraint to the FastICA algorithm to improve the performance of the widely used FastICA. The source sparsity is estimated through a smoothed ℓ0 norm method. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of SparseFastICA and made a performance comparison between SparseFastICA, FastICA, and Infomax ICA. Results on the simulated and real fMRI data demonstrated the feasibility and robustness of SparseFastICA for source separation in fMRI data. Both the simulated and real fMRI experimental results showed that SparseFastICA has better robustness to noise and better spatial detection power than FastICA. Although the spatial detection power of SparseFastICA and Infomax did not show a significant difference, SparseFastICA had a faster computation speed than Infomax. More importantly, SparseFastICA outperformed FastICA in robustness and spatial detection power and can be used to identify more accurate brain networks than the FastICA algorithm. Copyright © 2016 Elsevier B.V. All rights reserved.
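A minimal sketch of a smoothed ℓ0 sparsity measure of the kind the abstract describes as a constraint; the smoothing parameter sigma and the test vectors are assumptions, and this is not the authors' full SparseFastICA update.

```python
import numpy as np

def smoothed_l0(s, sigma=0.1):
    """Smoothed l0 measure: approximates the number of nonzero entries of s.

    Each entry contributes 1 - exp(-s^2 / (2*sigma^2)); as sigma -> 0 the sum
    approaches the true l0 norm.  sigma is an assumed smoothing parameter.
    """
    s = np.asarray(s, dtype=float)
    return np.sum(1.0 - np.exp(-(s ** 2) / (2.0 * sigma ** 2)))

sparse_map = np.array([0.0, 0.0, 2.5, 0.0, -1.8, 0.0])
dense_map = np.array([0.4, -0.5, 0.6, -0.3, 0.5, -0.4])
print(smoothed_l0(sparse_map), smoothed_l0(dense_map))  # the sparse map scores lower
```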
The radioisotope complex project “RIC-80” at the Petersburg Nuclear Physics Institute
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panteleev, V. N., E-mail: vnp@pnpi.spb.ru; Barzakh, A. E.; Batist, L. Kh.
2015-12-15
The high current cyclotron C-80, capable of producing 40-80 MeV proton beams with a current of up to 200 μA, has been constructed at the Petersburg Nuclear Physics Institute. One of the main goals of the C-80 is the production of a wide spectrum of medical radionuclides for diagnostics and therapy. The project development of the radioisotope complex RIC-80 (radioisotopes at the cyclotron C-80) at the beam of the C-80 has been completed. The RIC-80 complex is briefly discussed in this paper. The combination of the mass-separator with the target-ion source device, available at one of the new target stations for on-line or semi on-line production of high-purity separated radioisotopes, is explored in greater detail. The results of target and ion source tests for a mass-separator method for the production of the high-purity radioisotopes 82Sr and 223,224Ra are also presented.
The radioisotope complex project "RIC-80" at the Petersburg Nuclear Physics Institute.
Panteleev, V N; Barzakh, A E; Batist, L Kh; Fedorov, D V; Ivanov, V S; Moroz, F V; Molkanov, P L; Orlov, S Yu; Volkov, Yu M
2015-12-01
The high current cyclotron C-80, capable of producing 40-80 MeV proton beams with a current of up to 200 μA, has been constructed at the Petersburg Nuclear Physics Institute. One of the main goals of the C-80 is the production of a wide spectrum of medical radionuclides for diagnostics and therapy. The project development of the radioisotope complex RIC-80 (radioisotopes at the cyclotron C-80) at the beam of the C-80 has been completed. The RIC-80 complex is briefly discussed in this paper. The combination of the mass-separator with the target-ion source device, available at one of the new target stations for on-line or semi on-line production of high-purity separated radioisotopes, is explored in greater detail. The results of target and ion source tests for a mass-separator method for the production of the high-purity radioisotopes (82)Sr and (223,224)Ra are also presented.
Bonoli, Matteo; Montanucci, Marina; Gallina Toschi, Tullia; Lercker, Giovanni
2003-09-05
Olive oil is the main source of fat in the Mediterranean diet, and its consumption has been related to a low incidence of coronary heart disease and certain cancers. Recent findings demonstrate that olive oil phenolics are powerful in vitro and in vivo antioxidants and display other biological activities that could partially account for the observed healthful effects of the Mediterranean diet. A detailed method optimization plan was carried out to separate the most common phenols in olive oil, covering four separation parameters: buffer concentration, buffer pH, applied voltage and temperature. Consequently, an analytical method capable of separating 21 different phenols and polyphenols by capillary zone electrophoresis was developed; the separation was performed within 10 min, using a 40 cm × 50 µm capillary, with a 45 mM sodium tetraborate buffer (pH 9.60), at 27 kV and 30 °C. The optimized method was applied to methanolic extracts of several Italian extra-virgin olive oils obtained by different technologies in order to characterize and compare their antioxidant profiles. Positive correlations between the phenolic compounds determined by capillary zone electrophoresis (CZE) and two colorimetric indexes (total polyphenols and o-diphenols) were found and discussed.
Reconstruction and separation of vibratory field using structural holography
NASA Astrophysics Data System (ADS)
Chesnais, C.; Totaro, N.; Thomas, J.-H.; Guyader, J.-L.
2017-02-01
A method for reconstructing and separating the vibratory field on a plate-like structure is presented. The method, called "Structural Holography", is derived from classical Near-field Acoustic Holography (NAH) but operates in the vibratory domain. In this case, the plate displacement is measured on one-dimensional lines (the holograms) and used to reconstruct the entire two-dimensional displacement field. As a consequence, remote measurements on zones that are not directly accessible are possible with Structural Holography. Moreover, because it is based on the decomposition of the field into forward- and backward-traveling waves, Structural Holography makes it possible to separate the contributions of individual forces in the case of multi-source excitation. The theoretical background of the Structural Holography method is described first. Then, to illustrate the process and the possibilities of Structural Holography, the academic test case of an infinite plate excited by a few point forces is presented. With the principle of vibratory field separation, the displacement fields produced by each point force separately are reconstructed. However, the displacement field is not always meaningful by itself, and some additional treatment is required, for example to localize the positions of the point forces. Starting from the simple example of an infinite plate, a post-processing step based on the reconstruction of the structural intensity field is thus proposed. Finally, Structural Holography is generalized to finite plates and applied to real experimental measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, T
Purpose: Since 2008 the Physikalisch-Technische Bundesanstalt (PTB) has been offering the calibration of 125I brachytherapy sources in terms of the reference air-kerma rate (RAKR). The primary standard is a large air-filled parallel-plate extrapolation chamber. The measurement principle is based on the fact that the air-kerma rate is proportional to the increment of ionization per increment of chamber volume at chamber depths greater than the range of secondary electrons originating from the electrode, x0. Methods: Two methods for deriving the RAKR from the measured ionization charges are: (1) to determine the RAKR from the slope of the linear fit to the so-called 'extrapolation curve', the measured ionization charges Q vs. plate separations x, or (2) to differentiate Q(x) and to derive the RAKR by a linear extrapolation towards zero plate separation. For both methods, correcting the measured data for all known influencing effects before the evaluation method is applied is a precondition. However, the discrepancy between their results is larger than the uncertainty given for the determination of the RAKR with either method. Results: A new approach to derive the RAKR from the measurements is investigated as an alternative. The method was developed from the ground up, based on radiation transport theory. A conversion factor C(x1, x2) is applied to the difference of the charges measured at two plate separations x1 and x2. This factor is composed of quotients of three air-kerma values calculated for different plate separations in the chamber: the air kerma Ka(0) for plate separation zero, and the mean air kermas at the plate separations x1 and x2, respectively. The RAKR determined with method (1) yields 4.877 µGy/h, and with method (2) 4.596 µGy/h. The application of the alternative approach results in 4.810 µGy/h. Conclusion: The alternative method shall be established in the future.
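A sketch of evaluation method (1): fitting a straight line to corrected charge readings versus plate separation and taking the slope. The numbers are illustrative only, and the physical conversion from dQ/dx to an air-kerma rate (W/e, air density, collecting area, distance correction) is omitted.

```python
import numpy as np

# plate separations x (mm) and corrected ionization charges Q (arbitrary units)
# -- illustrative numbers, not measurement data
x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
Q = np.array([1.02, 2.05, 3.01, 4.04, 5.00])

# method (1): slope of the linear fit to the extrapolation curve Q(x)
slope, intercept = np.polyfit(x, Q, 1)

# the RAKR is proportional to dQ/dx; the physical conversion factor is omitted here
print("dQ/dx =", slope)
```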
Illuminant color estimation based on pigmentation separation from human skin color
NASA Astrophysics Data System (ADS)
Tanaka, Satomi; Kakinuma, Akihiro; Kamijo, Naohiro; Takahashi, Hiroshi; Tsumura, Norimichi
2015-03-01
Humans have a visual system property called "color constancy" that maintains the perceived colors of an object across various light sources. An effective color constancy algorithm that uses human facial color in a digital color image has been proposed; however, that method can produce erroneous estimates because of differences among individual facial colors. In this paper, we present a novel color constancy algorithm based on skin color analysis. Skin color analysis is a method that separates skin color into melanin, hemoglobin and shading components. We use the stationary property of Japanese facial color, which is calculated from the melanin and hemoglobin components. As a result, the proposed method uses the subject's facial color in the image without depending on the individual differences among Japanese facial colors.
NASA Astrophysics Data System (ADS)
Yuan, Bin; Shao, Min; de Gouw, Joost; Parrish, David D.; Lu, Sihua; Wang, Ming; Zeng, Limin; Zhang, Qian; Song, Yu; Zhang, Jianbo; Hu, Min
2012-12-01
Volatile organic compounds (VOCs) were measured online at an urban site in Beijing in August-September 2010. Diurnal variations of various VOC species indicate that VOC concentrations were influenced by photochemical removal by OH radicals for reactive species and by secondary formation for oxygenated VOCs (OVOCs). A photochemical age-based parameterization method was applied to characterize VOC chemistry. A large part of the variability in concentrations of both hydrocarbons and OVOCs was explained by this method. The determined emission ratios of hydrocarbons to acetylene agreed within a factor of two between the 2005 and 2010 measurements. However, large differences were found for the emission ratios of some alkanes and C8 aromatics between Beijing and the northeastern United States. Secondary formation from anthropogenic VOCs generally contributed higher percentages to concentrations of reactive aldehydes than to those of inert ketones and alcohols; anthropogenic primary emissions accounted for the majority of ketone and alcohol concentrations. Positive matrix factorization (PMF) was also used to identify emission sources from this VOC data set. The four resolved factors were three anthropogenic factors and a biogenic factor. However, the anthropogenic factors are attributed here to a common source at different stages of photochemical processing rather than to three independent sources. Anthropogenic and biogenic sources of VOCs were not separated completely in PMF. This study indicates that photochemistry of VOCs in the atmosphere complicates the information about separated sources that can be extracted from PMF, and the influence of photochemical processing must be carefully considered in the interpretation of source apportionment studies based upon PMF.
Improvements to Passive Acoustic Tracking Methods for Marine Mammal Monitoring
2016-05-02
Keywords: marine mammal; passive acoustic monitoring; localization; tracking; multiple source; sparse array. The improved methods separate and associate calls from individual animals and estimate hydrophone position and hydrophone timing offset in addition to animal position, whereas almost all marine mammal tracking methods treat animal position as the only unknown (Workshop on Detection, Classification and Localization (DCL) of Marine Mammals). The animals were expected to be relatively close to the surface.
VizieR Online Data Catalog: Wide binaries in Tycho-Gaia: search method (Andrews+, 2017)
NASA Astrophysics Data System (ADS)
Andrews, J. J.; Chaname, J.; Agueros, M. A.
2017-11-01
Our catalogue of wide binaries identified in the Tycho-Gaia Astrometric Solution catalogue. The Gaia source IDs, Tycho IDs, astrometry, posterior probabilities for both the log-flat prior and power-law prior models, and angular separation are presented. (1 data file).
NASA Astrophysics Data System (ADS)
Gyalay, S.; Vogt, M.; Withers, P.
2015-12-01
Previous studies have mapped locations from the magnetic equator to the ionosphere in order to understand how auroral features relate to magnetospheric sources. Vogt et al. (2011) in particular mapped equatorial regions to the ionosphere by using a method of flux equivalence—requiring that the magnetic flux in a specified region at the equator is equal to the magnetic flux in the region to which it maps in the ionosphere. This is preferred to methods relying on tracing field lines from global Jovian magnetic field models, which are inaccurate beyond 30 Jupiter radii from the planet. That previous study produced a two-dimensional model—accounting for changes with radial distance and local time—of the normal component of the magnetic field in the equatorial region. However, this two-dimensional fit—which aggregated all equatorial data from Pioneer 10, Pioneer 11, Voyager 1, Voyager 2, Ulysses, and Galileo—did not account for temporal variability resulting from changing solar wind conditions. Building off of that project, this study aims to map the Jovian aurora to the magnetosphere for two separate cases: with a nominal magnetosphere, and with a magnetosphere compressed by high solar wind dynamic pressure. Using the Michigan Solar Wind Model (mSWiM) to predict the solar wind conditions upstream of Jupiter, intervals of high solar wind dynamic pressure were separated from intervals of low solar wind dynamic pressure—thus creating two datasets of magnetometer measurements to be used for two separate 2D fits, and two separate mappings.
Blind source separation of ex-vivo aorta tissue multispectral images
Galeano, July; Perez, Sandra; Montoya, Yonatan; Botina, Deivid; Garzón, Johnson
2015-01-01
Blind Source Separation (BSS) methods aim to decompose a given signal into its main components or source signals. These techniques have been widely used in the literature for the analysis of biomedical images, in order to extract the main components of an organ or tissue under study. The analysis of skin images for the extraction of melanin and hemoglobin is an example of the use of BSS. This paper presents a proof of concept of the use of source separation on ex-vivo aorta tissue multispectral images. The images are acquired with an interference filter-based imaging system and processed by means of two algorithms: Independent Component Analysis and Non-negative Matrix Factorization. In both cases, it is possible to obtain maps that quantify the concentration of the main chromophores present in aortic tissue. The algorithms also recover the spectral absorbance of the main tissue components. These spectral signatures were compared against the theoretical ones using correlation coefficients, which reached values close to 0.9, a good indicator of the method's performance. The correlation coefficients also allow each concentration map to be assigned to the corresponding chromophore. The results suggest that multi/hyperspectral systems together with image processing techniques are a potential tool for the analysis of cardiovascular tissue. PMID:26137366
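A sketch of the Non-negative Matrix Factorization step on a reshaped multispectral cube using scikit-learn; the cube dimensions, number of components and synthetic data are assumptions, not the authors' acquisition settings.

```python
import numpy as np
from sklearn.decomposition import NMF

# absorbance cube: (rows, cols, n_bands); synthetic stand-in for the aorta images
rng = np.random.default_rng(1)
cube = rng.random((64, 64, 10))

X = cube.reshape(-1, cube.shape[-1])        # pixels x spectral bands, non-negative

# two components, e.g. two dominant chromophores assumed for this sketch
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
concentrations = model.fit_transform(X)     # per-pixel concentration values
spectra = model.components_                 # estimated component absorbance spectra

conc_maps = concentrations.reshape(64, 64, 2)   # concentration maps per chromophore
print(conc_maps.shape, spectra.shape)
```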
Method and apparatus for separating mixtures of gases using an acoustic wave
Geller, Drew A.; Swift, Gregory W.; Backhaus, Scott N.
2004-05-11
A thermoacoustic device separates a mixture of gases. An elongated duct is provided with first and second ends and has a length that is greater than the wavelength of sound in the mixture of gases at a selected frequency, and a diameter that is greater than a thermal penetration depth in the mixture of gases. A first acoustic source is located at the first end of the duct to generate acoustic power at the selected frequency. A plurality of side branch acoustic sources are spaced along the length of the duct and are configured to introduce acoustic power into the mixture of gases so that a first gas is concentrated at the first end of the duct and a second gas is concentrated at the second end of the duct.
METHOD FOR THE RECOVERY OF CESIUM VALUES
Rimshaw, S.J.
1960-02-16
A method is given for recovering Cs-137 from radioactive waste solutions together with extraneous impurities. Ammonium alum is precipitated in the waste solution. The alum, which carries the cesium, is separated from the supernatant liquid and then dissolved in water. The resulting aqueous solution is then provided with a source of hydroxyl ions, which precipitates aluminum as the hydroxide, and the aluminum hydroxide is separated from the resulting liquid. This liquid, which contains anionic impurities together with ammonium and cesium, is passed through an anion exchange resin bed which removes the anionic impurities. The ammonium in the effluent is removed by destructive distillation, leaving a substantially pure cesium salt in the effluent.
Production and study of radionuclides at the research institute of atomic reactors (NIIAR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karelin, E.A.; Gordeev, Y.N.; Filimonov, V.T.
1995-01-01
The main works of the Radionuclide Sources and Preparations Department (ORIP) of the Research Institute of Atomic Reactors (NIIAR) are summarized. The major activity of ORIP is the production of radioactive preparations of transplutonium elements (TPE) and of lighter elements (from P to Ir), the manufacture of ionizing radiation sources from them, and scientific research to develop new technologies. One of the radionuclides that has recently received major attention is gadolinium-153. Photon sources based on it are used in densitometers for the diagnosis of bone diseases. The procedure for separating gadolinium and europium currently used at NIIAR is based on europium cementation with sodium amalgam. The method, though efficient, did not until recently permit an exhaustive removal of radioactive europium from 153Gd. The authors have thoroughly studied the separation process in a semi-countercurrent mode, using citrate solutions. Special attention was given to the composition of the europium complex species.
Hybrid Weighted Minimum Norm Method A new method based LORETA to solve EEG inverse problem.
Song, C; Zhuang, T; Wu, Q
2005-01-01
This paper presents a new method to solve the EEG inverse problem. It is based on the following physiological characteristics of neural electrical activity sources: first, neighboring neurons tend to activate synchronously; second, the distribution of sources in source space is sparse; third, the activity of the sources is highly concentrated. We take this prior knowledge as the only prerequisite for developing the EEG inverse solution, without assuming other characteristics of the solution, to realize the most common 3D EEG reconstruction map. The proposed algorithm combines the advantages of LORETA, a low-resolution method that emphasizes localization, and FOCUSS, a high-resolution method that emphasizes separability, while remaining within the framework of the weighted minimum norm method. The key step is to construct a weighting matrix that draws on existing smoothness operators, a competition mechanism and a learning algorithm. The basic procedure is to obtain an initial estimate of the solution, construct a new estimate using information from the previous one, and repeat this process until the solutions of the last two iterations remain unchanged.
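A minimal sketch of an iteratively reweighted (FOCUSS-style) weighted minimum-norm solver, assuming a generic lead-field matrix; the paper's weighting matrix additionally encodes smoothness and competition terms, which are not reproduced here, and all data below are synthetic.

```python
import numpy as np

def reweighted_min_norm(A, y, n_iter=10, lam=1e-3):
    """Iteratively reweighted minimum-norm estimate (FOCUSS-style sketch).

    A : lead-field matrix (n_sensors x n_sources)
    y : measurement vector (n_sensors,)
    The weighting here is built only from the previous estimate.
    """
    n_sources = A.shape[1]
    s = np.ones(n_sources)                        # initial estimate
    for _ in range(n_iter):
        W = np.diag(np.abs(s))                    # reweighting from the previous solution
        AW = A @ W
        G = AW @ AW.T + lam * np.eye(A.shape[0])  # regularized Gram matrix
        s = W @ AW.T @ np.linalg.solve(G, y)      # weighted minimum-norm solution
    return s

# tiny synthetic example: 8 sensors, 30 candidate sources, 2 truly active
rng = np.random.default_rng(2)
A = rng.standard_normal((8, 30))
s_true = np.zeros(30)
s_true[[5, 17]] = [1.0, -0.7]
y = A @ s_true
print(np.round(reweighted_min_norm(A, y), 2))
```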
NASA Astrophysics Data System (ADS)
Sturtz, Timothy M.
Source apportionment models attempt to untangle the relationship between pollution sources and the impacts at downwind receptors. Two frameworks of source apportionment models exist: source-oriented and receptor-oriented. Source based apportionment models use presumed emissions and atmospheric processes to estimate the downwind source contributions. Conversely, receptor based models leverage speciated concentration data from downwind receptors and apply statistical methods to predict source contributions. Integration of both source-oriented and receptor-oriented models could lead to a better understanding of the implications sources have on the environment and society. The research presented here investigated three different types of constraints applied to the Positive Matrix Factorization (PMF) receptor model within the framework of the Multilinear Engine (ME-2): element ratio constraints, spatial separation constraints, and chemical transport model (CTM) source attribution constraints. PM10-2.5 mass and trace element concentrations were measured in Winston-Salem, Chicago, and St. Paul at up to 60 sites per city during two different seasons in 2010. PMF was used to explore the underlying sources of variability. Information on previously reported PM10-2.5 tire and brake wear profiles were used to constrain these features in PMF by prior specification of selected species ratios. We also modified PMF to allow for combining the measurements from all three cities into a single model while preserving city-specific soil features. Relatively minor differences were observed between model predictions with and without the prior ratio constraints, increasing confidence in our ability to identify separate brake wear and tire wear features. Using separate data, source contributions to total fine particle carbon predicted by a CTM were incorporated into the PMF receptor model to form a receptor-oriented hybrid model. The level of influence of the CTM versus traditional PMF was varied using a weighting parameter applied to an object function as implemented in ME-2. The resulting hybrid model was used to quantify the contributions of total carbon from both wildfires and biogenic sources at two Interagency Monitoring of Protected Visual Environment monitoring sites, Monture and Sula Peak, Montana, from 2006 through 2008.
Optical element for full spectral purity from IR-generated EUV light sources
NASA Astrophysics Data System (ADS)
van den Boogaard, A. J. R.; Louis, E.; van Goor, F. A.; Bijkerk, F.
2009-03-01
Laser produced plasma (LPP) sources are generally considered attractive for high power EUV production in next generation lithography equipment. Such plasmas are most efficiently excited by the relatively long, infrared wavelengths of CO2 lasers, but a significant part of the rotational-vibrational excitation lines of the CO2 radiation will be backscattered by the plasma's critical density surface and consequently will be present as parasitic radiation in the spectrum of such sources. Since most optical elements in the EUV collecting and imaging train have a high reflection coefficient for IR radiation, undesirable heating phenomena at the resist level are likely to occur. In this study a completely new principle is employed to obtain full separation of EUV and IR radiation from the source by a single optical component. While the application of a transmission filter would come at the expense of EUV throughput, this technique potentially enables wavelength separation without losing reflectance compared to a conventional Mo/Si multilayer coated element. As a result this method provides full spectral purity from the source without loss in EUV throughput. Detailed calculations on the principle of operation are presented.
Vehicle routing for the eco-efficient collection of household plastic waste.
Bing, Xiaoyun; de Keizer, Marlies; Bloemhof-Ruwaard, Jacqueline M; van der Vorst, Jack G A J
2014-04-01
Plastic waste is a special category of municipal solid waste. Plastic waste collection features various alternative collection methods (curbside/drop-off) and separation methods (source-/post-separation). In the Netherlands, the collection routes for plastic waste are the same as those for other waste, although plastic differs from other waste in its volume-to-weight ratio. This paper aims to redesign the collection routes and compare the collection options for plastic waste using eco-efficiency as the performance indicator. Eco-efficiency concerns the trade-off between environmental impacts, social issues and costs. The collection problem is modeled as a vehicle routing problem. A tabu search heuristic is used to improve the routes. Collection alternatives are compared in a scenario study. Real distances between locations are calculated with MapPoint. The scenario study is based on real case data from the Dutch municipality of Wageningen. Scenarios are designed according to the collection alternatives with different assumptions about collection method, vehicle type, collection frequency, collection points, etc. Results show that the current collection routes can be improved in terms of eco-efficiency by using our method. The source-separation drop-off collection scenario performs best for plastic collection, assuming householders take the waste to the drop-off points in a sustainable manner. The model also proves to be an efficient decision support tool to investigate the impacts of future changes such as alternative vehicle types and different response rates. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
Feature selection from hyperspectral imaging for guava fruit defects detection
NASA Astrophysics Data System (ADS)
Mat Jafri, Mohd. Zubir; Tan, Sou Ching
2017-06-01
Advances in technology have made hyperspectral imaging common for defect detection. In this research, a hyperspectral imaging system was set up in the laboratory to detect defects in guava fruit. Guava was selected because, to our knowledge, fewer attempts have been made at guava defect detection based on hyperspectral imaging. A common fluorescent light source was used to represent uncontrolled lighting conditions in the laboratory, and analysis was carried out in a specific wavelength range owing to the inefficiency of this particular light source. Based on the data, the reflectance intensity of this setup could be categorized into two groups. Sequential feature selection with linear discriminant (LD) and quadratic discriminant (QD) functions was used to select features that could potentially be used in defect detection. Besides the ordinary training method, the discriminant training dataset was separated into two parts, corresponding to the brighter and dimmer areas, to cater for the uncontrolled lighting condition. Four configurations were evaluated: LD with the common training method, QD with the common training method, LD with the two-part training method and QD with the two-part training method. They were evaluated using the F1-score on a total of 48 defected areas. Experiments showed that the F1-score of the linear discriminant with the two-part (compensated) training method reached 0.8, the highest score among all configurations.
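A sketch of sequential (forward) feature selection wrapped around a linear discriminant classifier using scikit-learn; the data, labels and number of selected bands are placeholders, not the study's measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

# X: mean reflectance per spectral band for each region of interest (synthetic stand-in)
rng = np.random.default_rng(3)
X = rng.random((120, 40))                  # 120 samples, 40 candidate bands
y = rng.integers(0, 2, size=120)           # 0 = sound skin, 1 = defect (placeholder labels)

lda = LinearDiscriminantAnalysis()
selector = SequentialFeatureSelector(lda, n_features_to_select=5,
                                     direction="forward", cv=5)
selector.fit(X, y)
selected_bands = np.flatnonzero(selector.get_support())

score = cross_val_score(lda, X[:, selected_bands], y, cv=5).mean()
print("selected bands:", selected_bands, "CV accuracy:", round(score, 3))
```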
Pre-stack separation of PP and split PS waves in HTI media
NASA Astrophysics Data System (ADS)
Lu, Jun; Wang, Yun; Yang, Yuyong; Chen, Jingyi
2017-07-01
Separation of PP and split PS waves in transversely isotropic media with a horizontal axis of symmetry is crucial for imaging subsurface targets and for fracture prediction in a multicomponent seismic survey using P-wave sources. In conventional multicomponent processing, when a low velocity zone is present near the surface, it is often assumed that the vertical Z-component mainly records P modes and that the horizontal X- and Y-components record S modes, including split PS waves. However, this assumption does not hold when the ubiquitous presence of azimuthal anisotropy makes near surface velocity structures more complicated. Seismic wavefields recorded in each component therefore generally represent a complex waveform formed by PP and split PS waves, seriously distorting velocity analysis and seismic imaging. Most previous studies on wave separation have tended to separate P and S modes using pre-stack data and to separate split S modes using post-stack sections, under the assumption of orthogonal polarization. However, split S modes can hardly maintain their original orthogonal polarizations during propagation to the surface due to stratigraphic heterogeneity. Here, without assuming orthogonal polarization, we present a method for pre-stack separation of PP, PS1 and PS2 waves using all three components. The core of our method is the rotation of wave vectors from the Cartesian coordinate system established by Z-, R- and T-axes to a coordinate system established by the true PP-, PS1- and PS2-wave vector directions. Further, we propose a three-component superposition approach to obtain base wave vectors for the coordinate system transformation. Synthetic data testing results confirm that the performance of our wave separation method is stable under different noise levels. Application to field data from Southwest China reveals the potential of our proposed method.
Delayed bunching for multi-reflection time-of-flight mass separation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenbusch, M.; Marx, G.; Schweikhard, L.
2015-06-29
Many experiments are handicapped when the ion sources deliver not only the ions of interest but also contaminations, i.e., unwanted ions of similar mass. In recent years, multi-reflection time-of-flight mass separation has become a promising method to isolate the ions of interest from the contaminants, in particular for measurements with low-energy short-lived nuclides. To further improve the performance of multi-reflection mass separators with respect to the limitations imposed by space-charge effects, the simultaneously trapped ions are spatially widely distributed in the apparatus. Thus, the ions can propagate with reduced Coulomb interactions until, finally, they are bunched by a change in the trapping conditions for high-resolution mass separation. Proof-of-principle measurements are presented.
Riedewald, Frank; Goode, Kieran; Sexton, Aidan; Sousa-Gallagher, Maria J
2016-01-01
Every year about 1.5 billion tyres are discarded worldwide, representing a large amount of solid waste but also a largely untapped source of raw materials. The objective was to prove the concept of a novel scrap tyre recycling process which uses molten zinc as the direct heat transfer fluid and, simultaneously, uses this medium to separate the solid products (i.e. steel and rCB) in a sink-float separation at an operating temperature of 450-470 °C. The methodology involved: construction of the laboratory scale batch reactor; separation of the floating rCB from the zinc; and recovery of the steel from the bottom of the reactor following pyrolysis.
Gent, Malcolm; Sierra, Héctor Muñiz; Menéndez, Mario; de Cos Juez, Francisco Javier
2018-01-01
Viable recycled residual plastic (RP) products must be of sufficient quality to be reusable as a plastic or as a source of hydrocarbons or fuel. The varied composition and large volumes of such wastes usually require low-cost, high-throughput recycling methods to eliminate contaminants. Cyclone separation of plastics by density is proposed as a potential method of achieving separations of specific types of plastics. Three ground calcite separation media of different grain size distributions were tested in a cylindrical cyclone to evaluate density separations at 1.09, 1.18 and 1.27 g/cm3. The differences in separation recoveries obtained with these media, caused by density offsets produced by displacement of separation-media solid particles within the cyclone due to centrifugal settling, are evaluated. The separation density at which 50% of the material of that density is recovered was found to increase from 0.010 to 0.026 g/cm3 as the separation media density increased from 1.09 to 1.27 g/cm3. All separation media were found to have significantly low Ep95 values of 0.012-0.033 g/cm3. It is also demonstrated that the presence of an excess content (>75%) of <10 µm calcite media particles resulted in reduced separation efficiencies. It is shown that the optimum separations were achieved when the media density offset was 0.03-0.04 g/cm3. It is shown that effective heavy-media cyclone separations of RP denser than 1.0 g/cm3 can produce three sets of mixed plastics containing: PS and ABS/SAN at densities of >1.0-1.09 g/cm3; PC and PMMA at densities of 1.09-1.18 g/cm3; and PVC and PET at densities of >1.27 g/cm3. Copyright © 2017 Elsevier Ltd. All rights reserved.
Panayotou, Nicholas F.; Green, Donald R.; Price, Larry S.
1985-01-01
A method of and apparatus for heating test specimens to desired elevated temperatures for irradiation by a high energy neutron source. A furnace assembly is provided for heating two separate groups of specimens to substantially different, elevated, isothermal temperatures in a high vacuum environment while positioning the two specimen groups symmetrically at equivalent neutron irradiating positions.
Panayotou, N.F.; Green, D.R.; Price, L.S.
A method of and apparatus for heating test specimens to desired elevated temperatures for irradiation by a high energy neutron source. A furnace assembly is provided for heating two separate groups of specimens to substantially different, elevated, isothermal temperatures in a high vacuum environment while positioning the two specimen groups symmetrically at equivalent neutron irradiating positions.
Treatment of gas from an in situ conversion process
Diaz, Zaida [Katy, TX; Del Paggio, Alan Anthony [Spring, TX; Nair, Vijay [Katy, TX; Roes, Augustinus Wilhelmus Maria [Houston, TX
2011-12-06
A method of producing methane is described. The method includes providing formation fluid from a subsurface in situ conversion process. The formation fluid is separated to produce a liquid stream and a first gas stream. The first gas stream includes olefins. At least the olefins in the first gas stream are contacted with a hydrogen source in the presence of one or more catalysts and steam to produce a second gas stream. The second gas stream is contacted with a hydrogen source in the presence of one or more additional catalysts to produce a third gas stream. The third gas stream includes methane.
Phansalkar, Rasika S; Nam, Joo-Won; Chen, Shao-Nong; McAlpine, James B; Leme, Ariene A; Aydin, Berdan; Bedran-Russo, Ana-Karina; Pauli, Guido F
2018-02-02
Proanthocyanidins (PACs) find wide applications for human use including food, cosmetics, dietary supplements, and pharmaceuticals. The chemical complexity associated with PACs has triggered the development of various chromatographic techniques, with countercurrent separation (CCS) gaining in popularity. This study applied the recently developed DESIGNER (Depletion and Enrichment of Select Ingredients Generating Normalized Extract Resources) approach for the selective enrichment of trimeric and tetrameric PACs using centrifugal partition chromatography (CPC). This CPC method aims at developing PAC based biomaterials, particularly for their application in restoring and repairing dental hard tissue. A general separation scheme beginning with the depletion of polymeric PACs, followed by the removal of monomeric flavan-3-ols and a final enrichment step produced PAC trimer and tetramer enriched fractions. A successful application of this separation scheme is demonstrated for four polyphenol rich plant sources: grape seeds, pine bark, cinnamon bark, and cocoa seeds. Minor modifications to the generic DESIGNER CCS method were sufficient to accommodate the varying chemical complexities of the individual source materials. The step-wise enrichment of PAC trimers and tetramers was monitored using normal phase TLC and Diol-HPLC-UV analyses. CPC proved to be a reliable tool for the selective enrichment of medium size oligomeric PACs (OPACs). This method plays a key role in the development of dental biomaterials considering its reliability and reproducibility, as well as its scale-up capabilities for possible larger-scale manufacturing. Copyright © 2017 Elsevier B.V. All rights reserved.
ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.
Earthquake source parameters underpin several aspects of nuclear explosion monitoring. Such aspects are: calibration of moment magnitudes (including coda magnitudes) and magnitude and distance amplitude corrections (MDAC); source depths; discrimination by isotropic moment tensor components; and waveform modeling for structure (including waveform tomography). This project seeks to improve methods for and broaden the applicability of estimating source parameters from broadband waveforms using the Cut-and-Paste (CAP) methodology. The CAP method uses a library of Green's functions for a one-dimensional (1D, depth-varying) seismic velocity model. The method separates the main arrivals of the regional waveform into 5 windows: Pnl (vertical and radial components), Rayleigh (vertical and radial components) and Love (transverse component). Source parameters are estimated by grid search over strike, dip, rake and depth, and the seismic moment or equivalently moment magnitude, MW, is adjusted to fit the amplitudes. Key to the CAP method is allowing the synthetic seismograms to shift in time relative to the data in order to account for path-propagation errors (delays) in the 1D seismic velocity model used to compute the Green's functions. The CAP method has been shown to improve estimates of source parameters, especially when delay and amplitude biases are calibrated using high signal-to-noise data from moderate earthquakes, CAP+.
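A sketch of the time-shift idea that is key to CAP: aligning a synthetic waveform window to data by searching for the lag that maximizes normalized cross-correlation. The waveforms and the allowed shift are illustrative, and the full grid search over strike, dip, rake and depth is not shown.

```python
import numpy as np

def best_shift(data, synth, max_shift):
    """Find the lag (in samples) that best aligns synth with data.

    Searches |lag| <= max_shift and returns the lag maximizing the
    normalized cross-correlation -- the "allowed shift" idea used by CAP.
    """
    best_lag, best_cc = 0, -np.inf
    for lag in range(-max_shift, max_shift + 1):
        shifted = np.roll(synth, lag)
        cc = np.dot(data, shifted) / (np.linalg.norm(data) * np.linalg.norm(shifted))
        if cc > best_cc:
            best_lag, best_cc = lag, cc
    return best_lag, best_cc

# synthetic example: the "synthetic" waveform arrives 12 samples early
t = np.linspace(0, 1, 400)
data = np.exp(-((t - 0.5) ** 2) / 0.002)
synth = np.roll(data, -12)
print(best_shift(data, synth, max_shift=30))   # should recover a lag of about 12
```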
Chiral perturbation theory and nucleon-pion-state contaminations in lattice QCD
NASA Astrophysics Data System (ADS)
Bär, Oliver
2017-05-01
Multiparticle states with additional pions are expected to be a non-negligible source of excited-state contamination in lattice simulations at the physical point. It is shown that baryon chiral perturbation theory can be employed to calculate the contamination due to two-particle nucleon-pion-states in various nucleon observables. Leading order results are presented for the nucleon axial, tensor and scalar charge and three Mellin moments of parton distribution functions (quark momentum fraction, helicity and transversity moment). Taking into account phenomenological results for the charges and moments the impact of the nucleon-pion-states on lattice estimates for these observables can be estimated. The nucleon-pion-state contribution results in an overestimation of all charges and moments obtained with the plateau method. The overestimation is at the 5-10% level for source-sink separations of about 2 fm. The source-sink separations accessible in contemporary lattice simulations are found to be too small for chiral perturbation theory to be directly applicable.
NASA Technical Reports Server (NTRS)
Padial, N.; Csanak, G.; Mckoy, B. V.; Langhoff, P. W.
1981-01-01
Vertical-electronic static-exchange photoexcitation and ionization cross sections are reported which provide a first approximation to the complete dipole spectrum of CO2. Separated-channel static-exchange calculations of vertical-electronic transition energies and oscillator strengths, and Stieltjes-Chebyshev moment methods were used in the development. Detailed comparisons were made of the static-exchange excitation and ionization spectra with photoabsorption, electron-impact excitation, and quantum-defect estimates of discrete transition energies and intensities, and with partial-channel photoionization cross sections obtained from fluorescence measurements and from tunable-source and (e, 2e) photoelectron spectroscopy. Results show that the separate-channel static-exchange approximation is generally satisfactory in CO2.
Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I
2017-08-15
Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying the surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provides an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm has been shown to be one of the most promising algorithms among these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties in separating close sources, and it has a high computational complexity due to its implementation using second order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which permits to delineate the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients. Copyright © 2017 Elsevier Inc. All rights reserved.
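A minimal sketch of the soft-thresholding proximal step that an alternating direction method of multipliers (ADMM) solver applies to impose the added sparsity term; it is not the full SISSY algorithm, which also handles the structured (variation) term, the automatic thresholding and the temporal structure of the data.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1 -- the sparsity step inside an ADMM iteration."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# illustrative use: shrink a noisy source-amplitude vector toward a sparse one
rng = np.random.default_rng(4)
s_noisy = np.concatenate([rng.normal(0, 0.05, 95), [1.2, -0.9, 0.8, 1.0, -1.1]])
s_sparse = soft_threshold(s_noisy, tau=0.15)
print("nonzero before:", np.count_nonzero(s_noisy),
      "after:", np.count_nonzero(s_sparse))
```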
Powerline noise elimination in biomedical signals via blind source separation and wavelet analysis.
Akwei-Sekyere, Samuel
2015-01-01
The distortion of biomedical signals by powerline noise from recording biomedical devices has the potential to reduce the quality and convolute the interpretation of the data. Usually, powerline noise in biomedical recordings is removed with band-stop filters. However, due to the instability of biomedical signals, the distribution of the signals filtered out may not be centered at 50/60 Hz. As a result, self-correction methods are needed to optimize the performance of these filters. Since powerline noise is additive in nature, it is intuitive to model the powerline noise in a raw recording and subtract it from the raw data in order to obtain a relatively clean signal. This paper proposes a method that uses this approach by decomposing the recorded signal and extracting the powerline noise via blind source separation and wavelet analysis. The performance of the algorithm was compared with that of a 4th-order band-stop Butterworth filter, empirical mode decomposition, independent component analysis and a combination of empirical mode decomposition with independent component analysis. The proposed method was able to remove sinusoidal components within the powerline noise frequency range with higher fidelity than the mentioned techniques, especially at low signal-to-noise ratios.
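For comparison purposes only, a sketch of the 4th-order band-stop Butterworth filter used as the baseline method above (not the proposed blind source separation plus wavelet algorithm); the sampling rate and stop-band edges are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                                   # assumed sampling rate (Hz)
low, high = 48.0, 52.0                       # stop-band around 50 Hz line noise

# 4th-order band-stop Butterworth, applied forward-backward to avoid phase shift
b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="bandstop")

t = np.arange(0, 2.0, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t)       # slow biomedical-like component (synthetic)
noisy = ecg_like + 0.5 * np.sin(2 * np.pi * 50 * t)
filtered = filtfilt(b, a, noisy)
print(np.max(np.abs(filtered - ecg_like)))   # residual after band-stop filtering
```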
Barricklow, Jason; Ryder, Tim F; Furlong, Michael T
2009-08-01
During LC-MS/MS quantification of a small molecule in human urine samples from a clinical study, an unexpected peak was observed to nearly co-elute with the analyte of interest in many study samples. Improved chromatographic resolution revealed the presence of at least 3 non-analyte peaks, which were identified as cysteine metabolites and N-acetyl (mercapturic acid) derivatives thereof. These metabolites produced artifact responses in the parent compound MRM channel due to decomposition in the ionization source of the mass spectrometer. Quantitative comparison of the analyte concentrations in study samples using the original chromatographic method and the improved chromatographic separation method demonstrated that the original method substantially over-estimated the analyte concentration in many cases. The substitution of electrospray ionization (ESI) for atmospheric pressure chemical ionization (APCI) nearly eliminated the source instability of these metabolites, which would have mitigated their interference in the quantification of the analyte, even without chromatographic separation. These results 1) demonstrate the potential for thiol metabolite interferences during the quantification of small molecules in pharmacokinetic samples, and 2) underscore the need to carefully evaluate LC-MS/MS methods for molecules that can undergo metabolism to thiol adducts to ensure that they are not susceptible to such interferences during quantification.
Lin, Lei; Liu, Xinyue; Zhang, Fuming; Chi, Lianli; Amster, I Jonathan; Leach, Franklyn E; Xia, Qiangwei; Linhardt, Robert J
2017-01-01
Most hyphenated analytical approaches that rely on liquid chromatography-MS require relatively long separation times, produce incomplete resolution of oligosaccharide mixtures, use eluents that are incompatible with electrospray ionization, or require oligosaccharide derivatization. Here we demonstrate the analysis of heparin oligosaccharides, including disaccharides, ultralow molecular weight heparin, and a low molecular weight heparin, using a novel electrokinetic pump-based CE-MS coupling with an electrospray ion source. Reverse polarity CE separation and negative-mode electrospray ionization were optimized using a volatile methanolic ammonium acetate electrolyte and sheath fluid. The online CE hyphenated negative-ion electrospray ionization MS on an LTQ Orbitrap mass spectrometer was useful in disaccharide compositional analysis and bottom-up and top-down analysis of low molecular weight heparin. The application of this CE-MS method to ultralow molecular weight heparin suggests that the charge state distribution and the low level of sulfate group loss achieved make this method useful for online tandem MS analysis of heparins.
A small-plane heat source method for measuring the thermal conductivities of anisotropic materials
NASA Astrophysics Data System (ADS)
Cheng, Liang; Yue, Kai; Wang, Jun; Zhang, Xinxin
2017-07-01
A new small-plane heat source method was proposed in this study to simultaneously measure the in-plane and cross-plane thermal conductivities of anisotropic insulating materials. In this method, the heat source element is smaller than the sample, and the boundary condition is thermal insulation because no heat flux crosses the edge of the sample during the experiment. A three-dimensional model in a rectangular coordinate system was established to exactly describe the heat transfer process of the measurement system. Using the Laplace transform, variable separation, and inverse Laplace transform methods, the analytical solution for the temperature rise of the sample was derived. The temperature rises calculated by the analytical solution agree well with the results of numerical calculation. The sensitivity analysis shows that the sensitivity coefficients of the estimated thermal conductivities are high and uncorrelated with each other. Experimental measurements of anisotropic silica aerogel were carried out at room temperature and in a high-temperature environment using the traditional one-dimensional plane heat source method and the proposed method, respectively. The results demonstrate that the measurement method developed in this study is effective and feasible for simultaneously obtaining the in-plane and cross-plane thermal conductivities of anisotropic materials.
NASA Astrophysics Data System (ADS)
Feigin, A. M.; Mukhin, D.; Volodin, E. M.; Gavrilov, A.; Loskutov, E. M.
2013-12-01
A new method for decomposing the Earth's climate system into well-separated spatial-temporal patterns ('climatic modes') is discussed. The method is based on: (i) a generalization of MSSA (Multichannel Singular Spectral Analysis) [1] for expanding vector (space-distributed) time series in a basis of spatial-temporal empirical orthogonal functions (STEOF), which makes allowance for delayed correlations of the processes recorded at spatially separated points; (ii) expanding both real SST data and several-times-longer numerically generated SST data in the STEOF basis; (iii) use of the numerically produced STEOF basis for exclusion of 'too slow' (and thus not correctly represented) processes from the real data. By means of vector time series generated numerically by the INM RAS Coupled Climate Model [2], the method allows two climatic modes with noticeably different time scales (3-5 and 9-11 years) to be separated from the real SST anomaly data [3]. Relations of the separated modes to ENSO and PDO are investigated. Possible applications of the spatial-temporal climatic pattern concept to prognosis of climate system evolution are discussed. 1. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 2. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm 3. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
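A toy illustration of the multichannel SSA step (delay-embedding a space-distributed series and extracting leading spatial-temporal modes by SVD) is sketched below, assuming synthetic two-channel data and an arbitrary embedding window; it is not the authors' STEOF pipeline.

```python
import numpy as np

def mssa_modes(X, window, n_modes=3):
    """Toy multichannel SSA: build the joint delay-embedded (trajectory) matrix of a
    (n_time, n_channels) series and return its leading singular vectors, which play
    the role of spatial-temporal EOFs."""
    n_time, n_chan = X.shape
    n_rows = n_time - window + 1
    # Stack the lagged copies of every channel side by side.
    traj = np.hstack([
        np.column_stack([X[i:i + n_rows, c] for i in range(window)])
        for c in range(n_chan)
    ])  # shape (n_rows, window * n_chan)
    U, s, Vt = np.linalg.svd(traj - traj.mean(axis=0), full_matrices=False)
    return U[:, :n_modes] * s[:n_modes], Vt[:n_modes]  # temporal PCs, ST-EOFs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(600)
    slow = np.sin(2 * np.pi * t / 120)   # longer-period mode
    fast = np.sin(2 * np.pi * t / 40)    # shorter-period mode
    X = np.column_stack([slow + 0.5 * fast, 0.7 * slow - fast])
    X += 0.1 * rng.standard_normal(X.shape)
    pcs, eofs = mssa_modes(X, window=60)
    print("temporal PCs:", pcs.shape, "  ST-EOFs:", eofs.shape)
```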
A method for monitoring nuclear absorption coefficients of aviation fuels
NASA Technical Reports Server (NTRS)
Sprinkle, Danny R.; Shen, Chih-Ping
1989-01-01
A technique for monitoring variability in the nuclear absorption characteristics of aviation fuels has been developed. It is based on a highly collimated low energy gamma radiation source and a sodium iodide counter. The source and the counter assembly are separated by a geometrically well-defined test fuel cell. A computer program for determining the mass attenuation coefficient of the test fuel sample, based on the data acquired for a preset counting period, has been developed and tested on several types of aviation fuel.
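A minimal sketch of the underlying calculation, assuming simple Beer-Lambert attenuation and made-up count rates, cell length and fuel density (the program's actual corrections, such as dead-time or background subtraction, are not reproduced):

```python
import math

def mass_attenuation_coefficient(counts_empty, counts_fuel, path_length_cm,
                                 density_g_cm3, live_time_s=1.0):
    """Estimate the mass attenuation coefficient mu/rho (cm^2/g) of a fuel sample from
    gamma counts through an empty cell and through the fuel-filled cell, using the
    Beer-Lambert law I = I0 * exp(-(mu/rho) * rho * x)."""
    rate_empty = counts_empty / live_time_s
    rate_fuel = counts_fuel / live_time_s
    mu_linear = math.log(rate_empty / rate_fuel) / path_length_cm   # 1/cm
    return mu_linear / density_g_cm3                                # cm^2/g

if __name__ == "__main__":
    # Hypothetical numbers for a kerosene-type fuel in a 5 cm test cell.
    print(mass_attenuation_coefficient(counts_empty=1.00e6, counts_fuel=4.1e5,
                                       path_length_cm=5.0, density_g_cm3=0.80,
                                       live_time_s=60.0))
```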
Tasci, Tonguc O; Johnson, William P; Fernandez, Diego P; Manangon, Eliana; Gale, Bruce K
2014-10-24
Compared to other sub-techniques of field flow fractionation (FFF), cyclical electrical field flow fractionation (CyElFFF) is a relatively new method with many opportunities remaining for improvement. One of the most important limitations of this method is the separation of particles smaller than 100 nm. For such small particles, the diffusion rate becomes very high, resulting in severe reductions in the CyElFFF separation efficiency. To address this limitation, we modified the electrical circuitry of the ElFFF system. In all earlier ElFFF reports, electrical power sources have been directly connected to the ElFFF channel electrodes, and no alteration has been made in the electrical circuitry of the system. In this work, by using discrete electrical components, such as resistors and diodes, we improved the effective electric field in the system to allow high resolution separations. By modifying the electrical circuitry of the ElFFF system, high resolution separations of 15 and 40 nm gold nanoparticles were achieved. The effects of applying different frequencies, amplitudes and voltage shapes have been investigated and analyzed through experiments. Copyright © 2014 Elsevier B.V. All rights reserved.
Turboprop IDEAL: a motion-resistant fat-water separation technique.
Huo, Donglai; Li, Zhiqiang; Aboussouan, Eric; Karis, John P; Pipe, James G
2009-01-01
Suppression of the fat signal in MRI is very important for many clinical applications. Multi-point water-fat separation methods, such as IDEAL (Iterative Decomposition of water and fat with Echo Asymmetry and Least-squares estimation), can robustly separate water and fat signal, but inevitably increase scan time, making separated images more easily affected by patient motions. PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) and Turboprop techniques offer an effective approach to correct for motion artifacts. By combining these techniques together, we demonstrate that the new TP-IDEAL method can provide reliable water-fat separation with robust motion correction. The Turboprop sequence was modified to acquire source images, and motion correction algorithms were adjusted to assure the registration between different echo images. Theoretical calculations were performed to predict the optimal shift and spacing of the gradient echoes. Phantom images were acquired, and results were compared with regular FSE-IDEAL. Both T1- and T2-weighted images of the human brain were used to demonstrate the effectiveness of motion correction. TP-IDEAL images were also acquired for pelvis, knee, and foot, showing great potential of this technique for general clinical applications.
Bohn, Justin; Eddings, Wesley; Schneeweiss, Sebastian
2017-03-15
Distributed networks of health-care data sources are increasingly being utilized to conduct pharmacoepidemiologic database studies. Such networks may contain data that are not physically pooled but instead are distributed horizontally (separate patients within each data source) or vertically (separate measures within each data source) in order to preserve patient privacy. While multivariable methods for the analysis of horizontally distributed data are frequently employed, few practical approaches have been put forth to deal with vertically distributed health-care databases. In this paper, we propose 2 propensity score-based approaches to vertically distributed data analysis and test their performance using 5 example studies. We found that these approaches produced point estimates close to what could be achieved without partitioning. We further found a performance benefit (i.e., lower mean squared error) for sequentially passing a propensity score through each data domain (called the "sequential approach") as compared with fitting separate domain-specific propensity scores (called the "parallel approach"). These results were validated in a small simulation study. This proof-of-concept study suggests a new multivariable analysis approach to vertically distributed health-care databases that is practical, preserves patient privacy, and warrants further investigation for use in clinical research applications that rely on health-care databases. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
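A minimal sketch of the "sequential approach" as described, assuming two hypothetical covariate domains and synthetic data: a propensity score fitted in the first domain is carried into the second domain as an additional covariate. A real deployment would keep the domains physically separated; here everything runs in one process purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sequential_propensity_score(treatment, domains):
    """Sequentially pass a propensity score through vertically partitioned covariate
    domains: each domain sees only its own covariates plus the score carried over
    from the previous domain."""
    carried_score = None
    for X_domain in domains:
        if carried_score is None:
            features = X_domain
        else:
            features = np.column_stack([X_domain, carried_score])
        model = LogisticRegression(max_iter=1000).fit(features, treatment)
        carried_score = model.predict_proba(features)[:, 1]
    return carried_score  # final propensity score after the last domain

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n = 2000
    X1 = rng.standard_normal((n, 3))   # covariates held by data source 1 (hypothetical)
    X2 = rng.standard_normal((n, 2))   # covariates held by data source 2 (hypothetical)
    logits = 0.8 * X1[:, 0] - 0.5 * X2[:, 1]
    treatment = rng.binomial(1, 1 / (1 + np.exp(-logits)))
    ps = sequential_propensity_score(treatment, [X1, X2])
    print("propensity score range:", ps.min().round(3), "to", ps.max().round(3))
```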
Back-trajectory modeling of high time-resolution air measurement data to separate nearby sources
Strategies to isolate air pollution contributions from sources are of interest as voluntary or regulatory measures are undertaken to reduce air pollution. When different sources are located in close proximity to one another and have similar emissions, separating source emissions ...
Source splitting via the point source method
NASA Astrophysics Data System (ADS)
Potthast, Roland; Fazi, Filippo M.; Nelson, Philip A.
2010-04-01
We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119-40, Potthast R 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731-42). The task is to separate the sound fields $u_j$, $j = 1, \ldots, n$, of $n \in \mathbb{N}$ sound sources supported in different bounded domains $G_1, \ldots, G_n$ in $\mathbb{R}^3$ from measurements of the field on some microphone array, mathematically speaking from the knowledge of the sum of the fields $u = u_1 + \cdots + u_n$ on some open subset $\Lambda$ of a plane. The main idea of the scheme is to calculate filter functions $g_1, \ldots, g_n$, $n \in \mathbb{N}$, to construct $u_\ell$ for $\ell = 1, \ldots, n$ from $u|_\Lambda$ in the form $$u_{\ell}(x) = \int_{\Lambda} g_{\ell,x}(y)\, u(y)\, \mathrm{d}s(y), \qquad \ell = 1, \ldots, n.$$ We will provide the complete mathematical theory for the field splitting via the point source method. In particular, we describe uniqueness, solvability of the problem and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real data measurements carried out at the Institute for Sound and Vibration Research at Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online.
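Once the filter functions are available, the splitting formula is just a linear functional of the measured field; a minimal sketch of its discretization as a quadrature sum over microphone positions is given below. Computing the filters themselves requires the regularization machinery of the point source method and is not shown here; the filter values, measurements and quadrature weights are placeholders.

```python
import numpy as np

def split_field(u_on_lambda, g_filter, quad_weights):
    """Discretized splitting integral:
    u_l(x) = ∫_Λ g_{l,x}(y) u(y) ds(y)  ≈  Σ_j w_j * g_{l,x}(y_j) * u(y_j)."""
    return np.sum(quad_weights * g_filter * u_on_lambda)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n_mics = 64
    u = rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics)  # measured total field on Λ
    g = rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics)  # placeholder filter g_{l,x}(y_j)
    w = np.full(n_mics, 0.25)                                           # uniform grid-cell area (assumed)
    print("reconstructed field value at x:", split_field(u, g, w))
```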
Improved definition of crustal anomalies for Magsat data
NASA Technical Reports Server (NTRS)
1981-01-01
A scheme was developed for separating the portions of the magnetic field measured by the Magsat 1 satellite that arise from internal and external sources. To test this method, a set of sample coefficients was used to compute the field values along a simulated satellite orbit. These data were then used to try to recover the original coefficients. Matrix inversion and recursive least squares methods were used to solve for the input coefficients. The accuracy of the two methods is compared.
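The recovery test described above is, at its core, a linear parameter-estimation problem. The sketch below shows both a direct (matrix-inversion style) least-squares solve and a simple recursive least squares update on synthetic data; the design matrix stands in for the field-model basis evaluated along the orbit and is randomly generated here.

```python
import numpy as np

def lstsq_recover(A, b):
    """Direct least-squares recovery of the coefficients (matrix-inversion approach)."""
    return np.linalg.lstsq(A, b, rcond=None)[0]

def rls_recover(A, b, delta=1e3):
    """Recursive least squares: process one simulated observation at a time."""
    n = A.shape[1]
    c = np.zeros(n)
    P = delta * np.eye(n)
    for a, y in zip(A, b):
        Pa = P @ a
        k = Pa / (1.0 + a @ Pa)
        c = c + k * (y - a @ c)
        P = P - np.outer(k, Pa)
    return c

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n_obs, n_coef = 500, 24                    # hypothetical orbit samples / coefficient count
    A = rng.standard_normal((n_obs, n_coef))   # stand-in for the basis values along the orbit
    c_true = rng.standard_normal(n_coef)
    b = A @ c_true + 1e-3 * rng.standard_normal(n_obs)
    print("direct max error   :", np.abs(lstsq_recover(A, b) - c_true).max())
    print("recursive max error:", np.abs(rls_recover(A, b) - c_true).max())
```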
NASA Astrophysics Data System (ADS)
Titov, O. A.; Lopez, Yu. R.
2018-03-01
We consider a method of reconstructing the structure delay of extended radio sources without constructing their radio images. The residuals derived after the adjustment of geodetic VLBI observations are used for this purpose. We show that the simplest model of a radio source consisting of two point components can be represented by four parameters (the angular separation of the components, the mutual orientation relative to the poleward direction, the flux-density ratio, and the spectral index difference) that are determined for each baseline of a multi-baseline VLBI network. The efficiency of this approach is demonstrated by estimating the coordinates of the radio source 0014+813 observed during the two-week CONT14 program organized by the International VLBI Service (IVS) in May 2014. Large systematic deviations have been detected in the residuals of the observations for the radio source 0014+813. The averaged characteristics of the radio structure of 0014+813 at a frequency of 8.4 GHz can be calculated from these deviations. Our modeling using four parameters has confirmed that the source consists of two components at an angular separation of 0.5 mas in the north-south direction. Using the structure delay when adjusting the CONT14 observations leads to a correction of the average declination estimate for the radio source 0014+813 by 0.070 mas.
Characterization and identification of Na-Cl sources in ground water
Panno, S.V.; Hackley, Keith C.; Hwang, H.-H.; Greenberg, S.E.; Krapac, I.G.; Landsberger, S.; O'Kelly, D. J.
2006-01-01
Elevated concentrations of sodium (Na+) and chloride (Cl-) in surface and ground water are common in the United States and other countries, and can serve as indicators of, or may constitute, a water quality problem. We have characterized the most prevalent natural and anthropogenic sources of Na+ and Cl- in ground water, primarily in Illinois, and explored techniques that could be used to identify their source. We considered seven potential sources that included agricultural chemicals, septic effluent, animal waste, municipal landfill leachate, sea water, basin brines, and road deicers. The halides Cl-, bromide (Br-), and iodide (I-) were useful indicators of the sources of Na+-Cl- contamination. Iodide enrichment (relative to Cl-) was greatest in precipitation, followed by uncontaminated soil water and ground water, and landfill leachate. The mass ratios of the halides among themselves, with total nitrogen (N), and with Na+ provided diagnostic methods for graphically distinguishing among sources of Na+ and Cl- in contaminated water. Cl/Br ratios relative to Cl- revealed a clear, although overlapping, separation of sample groups. Samples of landfill leachate and ground water known to be contaminated by leachate were enriched in I- and Br-; this provided an excellent fingerprint for identifying leachate contamination. In addition, total N, when plotted against Cl/Br ratios, successfully separated water contaminated by road salt from water contaminated by other sources. Copyright © 2005 National Ground Water Association.
Multi-distance diffuse optical spectroscopy with a single optode via hypotrochoidal scanning.
Applegate, Matthew B; Roblyer, Darren
2018-02-15
Frequency-domain diffuse optical spectroscopy (FD-DOS) is an established technique capable of determining optical properties and chromophore concentrations in biological tissue. Most FD-DOS systems use either manually positioned, handheld probes or complex arrays of source and detector fibers to acquire data from many tissue locations, allowing for the generation of 2D or 3D maps of tissue. Here, we present a new method to rapidly acquire a wide range of source-detector (SD) separations by mechanically scanning a single SD pair. The source and detector fibers are mounted on a scan head that traces a hypotrochoidal pattern over the sample that, when coupled with a high-speed FD-DOS system, enables the rapid collection of dozens of SD separations for depth-resolved imaging. We demonstrate that this system has an average error of 4±2.6% in absorption and 2±1.8% in scattering across all SD separations. Additionally, by linearly translating the device, the size and location of an absorbing inhomogeneity can be determined through the generation of B-scan images in a manner conceptually analogous to ultrasound imaging. This work demonstrates the potential of single optode diffuse optical scanning for depth resolved visualization of heterogeneous biological tissues at near real-time rates.
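A hypotrochoid is easy to generate parametrically; the toy sketch below traces such a scan path and, purely for illustration, computes the varying separation between the moving optode and a hypothetical stationary optode at the scan-head centre. The radii and offsets are invented and do not describe the instrument's actual geometry.

```python
import numpy as np

def hypotrochoid(R, r, d, n_points=2000):
    """Sample a hypotrochoid traced by a point at distance d from the centre of a
    circle of radius r rolling inside a circle of radius R (R, r integers here, so
    the curve closes after r / gcd(R, r) revolutions)."""
    n_turns = r // np.gcd(int(R), int(r))
    t = np.linspace(0, 2 * np.pi * n_turns, n_points)
    x = (R - r) * np.cos(t) + d * np.cos((R - r) / r * t)
    y = (R - r) * np.sin(t) - d * np.sin((R - r) / r * t)
    return x, y

if __name__ == "__main__":
    # Purely illustrative geometry (mm): the detector fibre traces the hypotrochoid,
    # while the source fibre is assumed fixed at the scan-head centre.
    x, y = hypotrochoid(R=12, r=5, d=3)
    sd_separation = np.hypot(x, y)
    print("SD separation sweeps %.1f to %.1f mm" % (sd_separation.min(), sd_separation.max()))
```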
Non-overlapped P- and S-wave Poynting vectors and their solution by the grid method
NASA Astrophysics Data System (ADS)
Lu, Yongming; Liu, Qiancheng
2018-06-01
The Poynting vector represents the local directional energy flux density of seismic waves in geophysics. It is widely used in elastic reverse time migration to analyze source illumination, suppress low-wavenumber noise, correct for image polarity and extract angle-domain common-image gathers. However, the P- and S-waves are mixed together during wavefield propagation so that the P and S energy fluxes are not clean everywhere, especially at the overlapped points. In this paper, we use a modified elastic-wave equation in which the P and S vector wavefields are naturally separated. Then, we develop an efficient method to evaluate the separable P and S Poynting vectors, respectively, based on the view that the group velocity and phase velocity have the same direction in isotropic elastic media. We furthermore formulate our method using an unstructured mesh-based modeling method named the grid method. Finally, we verify our method using two numerical examples.
Mixture Modeling for Background and Sources Separation in x-ray Astronomical Images
NASA Astrophysics Data System (ADS)
Guglielmetti, Fabrizia; Fischer, Rainer; Dose, Volker
2004-11-01
A probabilistic technique for the joint estimation of background and sources in high-energy astrophysics is described. Bayesian probability theory is applied to gain insight into the coexistence of background and sources through a probabilistic two-component mixture model, which provides consistent uncertainties of background and sources. The present analysis is applied to ROSAT PSPC data (0.1-2.4 keV) in Survey Mode. A background map is modelled using a Thin-Plate spline. Source probability maps are obtained for each pixel (45 arcsec) independently and for larger correlation lengths, revealing faint and extended sources. We will demonstrate that the described probabilistic method allows for detection improvement of faint extended celestial sources compared to the Standard Analysis Software System (SASS) used for the production of the ROSAT All-Sky Survey (RASS) catalogues.
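A stripped-down version of the two-component mixture idea, assuming a flat, known background rate and Poisson counting statistics rather than the paper's thin-plate-spline background model; all rates and priors are placeholders.

```python
import numpy as np
from scipy.stats import poisson

def source_probability(counts, background_rate, source_rate, prior_source=0.1):
    """Posterior probability that a pixel contains source emission, under a
    two-component Poisson mixture: counts ~ Pois(b) for background only, or
    counts ~ Pois(b + s) for background plus source."""
    like_src = poisson.pmf(counts, background_rate + source_rate)
    like_bkg = poisson.pmf(counts, background_rate)
    num = prior_source * like_src
    return num / (num + (1 - prior_source) * like_bkg)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    background = 2.0                                  # counts/pixel; would come from the background map
    image = rng.poisson(background, size=(64, 64))
    image[30:33, 30:33] += rng.poisson(6.0, size=(3, 3))   # implant a faint source
    prob_map = source_probability(image, background, source_rate=6.0)
    print("peak source probability:", prob_map.max().round(3))
```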
Imaging of neural oscillations with embedded inferential and group prevalence statistics.
Donhauser, Peter W; Florin, Esther; Baillet, Sylvain
2018-02-01
Magnetoencephalography and electroencephalography (MEG, EEG) are essential techniques for studying distributed signal dynamics in the human brain. In particular, the functional role of neural oscillations remains to be clarified. For that reason, imaging methods need to identify distinct brain regions that concurrently generate oscillatory activity, with adequate separation in space and time. Yet, spatial smearing and inhomogeneous signal-to-noise are challenging factors to source reconstruction from external sensor data. The detection of weak sources in the presence of stronger regional activity nearby is a typical complication of MEG/EEG source imaging. We propose a novel, hypothesis-driven source reconstruction approach to address these methodological challenges. The imaging with embedded statistics (iES) method is a subspace scanning technique that constrains the mapping problem to the actual experimental design. A major benefit is that, regardless of signal strength, the contributions from all oscillatory sources whose activity is consistent with the tested hypothesis are equalized in the statistical maps produced. We present extensive evaluations of iES on group MEG data, for mapping 1) induced oscillations using experimental contrasts, 2) ongoing narrow-band oscillations in the resting-state, 3) co-modulation of brain-wide oscillatory power with a seed region, and 4) co-modulation of oscillatory power with peripheral signals (pupil dilation). Along the way, we demonstrate several advantages of iES over standard source imaging approaches. These include the detection of oscillatory coupling without rejection of zero-phase coupling, and detection of ongoing oscillations in deeper brain regions, where signal-to-noise conditions are unfavorable. We also show that iES provides a separate evaluation of oscillatory synchronization and desynchronization in experimental contrasts, which has important statistical advantages. The flexibility of iES allows it to be adjusted to many experimental questions in systems neuroscience.
Anderson, Kim A.; Szelewski, Michael J.; Wilson, Glenn; Quimby, Bruce D.; Hoffman, Peter D.
2015-01-01
We describe modified gas chromatography electron-impact/triple-quadrupole mass spectrometry (GC–EI/MS/MS) utilizing a newly developed hydrogen-injected self-cleaning ion source and modified 9 mm extractor lens. This instrument, with optimized parameters, achieves quantitative separation of 62 polycyclic aromatic hydrocarbons (PAHs). Existing methods historically limited rigorous identification and quantification to a small subset, such as the 16 PAHs the US EPA has defined as priority pollutants. Without the critical source and extractor lens modifications, the off-the-shelf GC–EI/MS/MS system was unsuitable for complex PAH analysis. Separations were enhanced by increased gas flow, a complex GC temperature profile incorporating multiple isothermal periods, specific ramp rates, and a PAH-optimized column. Typical determinations with our refined GC–EI/MS/MS have a large linear range of 1–10,000 pg μl⁻¹ and detection limits of <2 pg μl⁻¹. Among the 62 PAHs, multiple-reaction-monitoring (MRM) mode enabled GC–EI/MS/MS identification and quantitation of several constituents of the MW 302 PAH isomers. Using calibration standards, values determined were within 5% of true values over many months. Standard curve r² values were typically >0.998, exceptional for compounds which are archetypally difficult. With this method benzo[a]fluorene, benzo[b]fluorene, and benzo[c]fluorene were fully separated, as were benzo[b]fluoranthene, benzo[k]fluoranthene, and benzo[j]fluoranthene. Chrysene and triphenylene were sufficiently separated to allow accurate quantitation. Mean limits of detection (LODs) across all PAHs were 1.02 ± 0.84 pg μl⁻¹, with indeno[1,2,3-c,d]pyrene having the lowest LOD at 0.26 pg μl⁻¹ and only two analytes above 2.0 pg μl⁻¹: acenaphthalene (2.33 pg μl⁻¹) and dibenzo[a,e]pyrene (6.44 pg μl⁻¹). PMID:26454790
NASA Astrophysics Data System (ADS)
Zhang, Bao-Ji; Zhang, Zhu-Xin
2015-09-01
To obtain a low-resistance, high-efficiency, energy-saving ship, a minimum total resistance hull form design method is studied based on the potential flow theory of wave-making resistance, taking into account the effects of tail viscous separation. With the sum of wave resistance and viscous resistance as the objective function and the parameters of a B-spline function as design variables, mathematical models are built using the Nonlinear Programming Method (NLP), ensuring the basic displacement limit and considering rear viscous separation. We developed ship lines optimization procedures with independent intellectual property rights. Series60 is used as the parent ship in the optimization design to obtain a theoretically improved ship (Series60-1). Drag tests of the improved ship (Series60-1) are then carried out to verify the actual minimum total resistance hull form.
scarlet: Source separation in multi-band images by Constrained Matrix Factorization
NASA Astrophysics Data System (ADS)
Melchior, Peter; Moolekamp, Fred; Jerdee, Maximilian; Armstrong, Robert; Sun, Ai-Lei; Bosch, James; Lupton, Robert
2018-03-01
SCARLET performs source separation (aka "deblending") on multi-band images. It is geared towards optical astronomy, where scenes are composed of stars and galaxies, but it is straightforward to apply it to other imaging data. Separation is achieved through a constrained matrix factorization, which models each source with a Spectral Energy Distribution (SED) and a non-parametric morphology, or multiple such components per source. The code performs forced photometry (with PSF matching if needed) using an optimal weight function given by the signal-to-noise weighted morphology across bands. The approach works well if the sources in the scene have different colors and can be further strengthened by imposing various additional constraints/priors on each source. Because of its generic utility, this package provides a stand-alone implementation that contains the core components of the source separation algorithm. However, the development of this package is part of the LSST Science Pipeline; the meas_deblender package contains a wrapper to implement the algorithms here for the LSST stack.
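Conceptually, the model is a non-negative factorization of the band-by-pixel data matrix into per-source SEDs and morphologies. The toy sketch below uses plain multiplicative NMF updates on a synthetic two-source blend; it is not the scarlet package or its API, and it omits the PSF matching, priors and additional constraints described above.

```python
import numpy as np

def nmf_deblend(cube, n_sources, n_iter=500, eps=1e-9):
    """Toy constrained matrix factorization for multi-band deblending: flatten the
    (bands, ny, nx) cube to Y (bands x pixels) and factor Y ≈ A S, with A >= 0 the
    per-source SEDs and S >= 0 the morphologies, using multiplicative updates."""
    bands, ny, nx = cube.shape
    Y = cube.reshape(bands, ny * nx)
    rng = np.random.default_rng(0)
    A = rng.random((bands, n_sources))
    S = rng.random((n_sources, ny * nx))
    for _ in range(n_iter):
        S *= (A.T @ Y) / (A.T @ A @ S + eps)
        A *= (Y @ S.T) / (A @ S @ S.T + eps)
    return A, S.reshape(n_sources, ny, nx)

if __name__ == "__main__":
    # Tiny synthetic scene: two blended Gaussian sources with different colours.
    yy, xx = np.mgrid[0:32, 0:32]
    g1 = np.exp(-((xx - 13) ** 2 + (yy - 16) ** 2) / 8.0)
    g2 = np.exp(-((xx - 19) ** 2 + (yy - 16) ** 2) / 8.0)
    sed1, sed2 = np.array([1.0, 0.6, 0.2]), np.array([0.2, 0.7, 1.0])
    cube = sed1[:, None, None] * g1 + sed2[:, None, None] * g2
    A, S = nmf_deblend(cube, n_sources=2)
    print("recovered SEDs (normalized columns):\n", (A / A.sum(axis=0)).round(2))
```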
Supported liquid inorganic membranes for nuclear waste separation
Bhave, Ramesh R; DeBusk, Melanie M; DelCul, Guillermo D; Delmau, Laetitia H; Narula, Chaitanya K
2015-04-07
A system and method for the extraction of americium from radioactive waste solutions. The method includes the transfer of highly oxidized americium from an acidic aqueous feed solution through an immobilized liquid membrane to an organic receiving solvent, for example tributyl phosphate. The immobilized liquid membrane includes porous support and separating layers loaded with tributyl phosphate. The extracted solution is subsequently stripped of americium and recycled at the immobilized liquid membrane as neat tributyl phosphate for the continuous extraction of americium. The sequestered americium can be used as a nuclear fuel, a nuclear fuel component or a radiation source, and the remaining constituent elements in the aqueous feed solution can be stored in glassified waste forms substantially free of americium.
Dual-energy x-ray image decomposition by independent component analysis
NASA Astrophysics Data System (ADS)
Jiang, Yifeng; Jiang, Dazong; Zhang, Feng; Zhang, Dengfu; Lin, Gang
2001-09-01
The spatial distributions of bone and soft tissue in the human body are separated by independent component analysis (ICA) of dual-energy x-ray images. This method can be applied because the dual-energy imaging model conforms to the ICA model: (1) the absorption in the body is mainly caused by photoelectric absorption and Compton scattering; (2) these take place simultaneously but are mutually independent; and (3) for monochromatic x-ray sources the total attenuation is a linear combination of these two absorptions. Compared with the conventional method, the proposed one needs no a priori information about the exact x-ray energy used for imaging, while the results of the separation agree well with those of the conventional method.
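A minimal sketch of the same idea on synthetic data, treating two "energy" images as linear mixtures of two independent material maps and unmixing them with scikit-learn's FastICA; the mixing weights and images are invented and no x-ray physics is modelled.

```python
import numpy as np
from sklearn.decomposition import FastICA

def unmix_dual_energy(img_low, img_high):
    """Treat the two dual-energy images as pixelwise linear mixtures of two
    independent material distributions and unmix them with FastICA."""
    X = np.column_stack([img_low.ravel(), img_high.ravel()])
    ica = FastICA(n_components=2, random_state=0)
    S = ica.fit_transform(X)                        # estimated independent components
    return S[:, 0].reshape(img_low.shape), S[:, 1].reshape(img_low.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    bone = rng.random((64, 64)) ** 3                # stand-in "bone" map
    soft = rng.random((64, 64))                     # stand-in "soft tissue" map
    low = 0.8 * bone + 0.3 * soft                   # hypothetical low-energy mixture
    high = 0.4 * bone + 0.5 * soft                  # hypothetical high-energy mixture
    comp1, comp2 = unmix_dual_energy(low, high)
    print("recovered component maps:", comp1.shape, comp2.shape)
```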
Vibration Based Sun Gear Damage Detection
NASA Technical Reports Server (NTRS)
Hood, Adrian; LaBerge, Kelsen; Lewicki, David; Pines, Darryll
2013-01-01
Seeded fault experiments were conducted on the planetary stage of an OH-58C helicopter transmission. Two vibration based methods are discussed that isolate the dynamics of the sun gear from those of the planet gears, bearings, input spiral bevel stage, and other components in and around the gearbox. Three damaged sun gears, two spalled and one cracked, serve as the focus of the current work. A non-sequential vibration separation algorithm was developed and the resulting signals analyzed. The second method uses only the time synchronously averaged data but takes advantage of the signal/source mapping required for vibration separation. Both algorithms were successful in identifying the spall damage. Sun gear damage was confirmed by the presence of sun mesh groups. The sun tooth crack condition was inconclusive.
Miller, Alexander; Hess, Julia Meredith; Bybee, Deborah; Goodkind, Jessica R
2018-01-01
Consistent evidence documents the negative impacts of family separation on refugee mental health and concerns for the welfare of distant family members and desire to reunite with family members as priorities for refugees postmigration. Less is known about refugees' emic perspectives on their experiences of family separation. Using mixed methods data from a community-based mental health intervention study, we found that family separation was a major source of distress for refugees and that it was experienced in a range of ways: as fear for family still in harm's way, as a feeling of helplessness, as cultural disruption, as the greatest source of distress since resettlement, and contributing to mixed emotions around resettlement. In addition to these qualitative findings, we used quantitative data to test the relative contribution of family separation to refugees' depression/anxiety symptoms, posttraumatic stress disorder (PTSD) symptoms, and psychological quality of life. Separation from a family member was significantly related to all 3 measures of mental health, and it explained significant additional variance in all 3 measures even after accounting for participants' overall level of trauma exposure. Relative to 26 other types of trauma exposure, family separation was 1 of only 2 traumatic experiences that explained additional variance in all 3 measures of mental health. Given the current global refugee crisis and the need for policies to address this large and growing issue, this research highlights the importance of considering the ways in which family separation impacts refugee mental health and policies and practices that could help ameliorate this ongoing stressor. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
A Mixed Methods Study of Student College Experiences That Construct Racism
ERIC Educational Resources Information Center
Cash, Sheri F.
2017-01-01
Hardie and Tyson (2013) claim that the education institution has become a foundational source of social and political racism. Colleges and universities are microcosms of society with the potential to institute behavioral reform. Bonilla-Silva (2015) claims that Blacks and Whites continue a condition of separation while the inequality between the…
NASA Technical Reports Server (NTRS)
Pelevin, V. N.; Kozlyaninov, M. V.
1981-01-01
The problem of light fields in the ocean is a basic one in ocean optics. Twenty-six separate studies discuss: (1) the field of solar radiation in the ocean; (2) stationary and nonstationary light fields created in the sea by artificial sources; and (3) the use of optical methods to study biological and hydrodynamic characteristics of the sea.
Liao, Yu-Kai; Tseng, Sheng-Hao
2014-01-01
Accurately determining the optical properties of multi-layer turbid media using a layered diffusion model is often a difficult task and could be an ill-posed problem. In this study, an iterative algorithm was proposed for solving such problems. This algorithm employed a layered diffusion model to calculate the optical properties of a layered sample at several source-detector separations (SDSs). The optical properties determined at various SDSs were mutually referenced to complete one round of iteration and the optical properties were gradually revised in further iterations until a set of stable optical properties was obtained. We evaluated the performance of the proposed method using frequency domain Monte Carlo simulations and found that the method could robustly recover the layered sample properties with various layer thickness and optical property settings. It is expected that this algorithm can work with photon transport models in frequency and time domain for various applications, such as determination of subcutaneous fat or muscle optical properties and monitoring the hemodynamics of muscle. PMID:24688828
Life cycle assessment of a household solid waste source separation programme: a Swedish case study.
Bernstad, Anna; la Cour Jansen, Jes; Aspegren, Henrik
2011-10-01
The environmental impact of an extended property-close source-separation system for solid household waste (i.e., a system for collection of recyclables from domestic properties) is investigated in a residential area in southern Sweden. Since 2001, households have been able to source-separate waste into six fractions of dry recyclables in addition to food waste. The current system was evaluated using the EASEWASTE life cycle assessment tool. Current status is compared with an ideal scenario in which households display perfect source-separation behaviour and a scenario without any material recycling. Results show that current recycling provides substantial environmental benefits compared to a non-recycling alternative. The environmental benefit varies greatly between recyclable fractions, and the recyclables currently most frequently source-separated by households are often not the most beneficial from an environmental perspective. With optimal source-separation of all recyclables, the current net contribution to global warming could be changed to a net avoidance, while current avoidance of nutrient enrichment, acidification and photochemical ozone formation could be doubled. Sensitivity analyses show that the type of energy substituted by incineration of non-recycled waste, as well as the energy used in recycling processes and in the production of materials substituted by waste recycling, is of high relevance for the attained results.
Pacaci, Anil; Gonul, Suat; Sinaci, A Anil; Yuksel, Mustafa; Laleci Erturkmen, Gokce B
2018-01-01
Background: Utilization of the available observational healthcare datasets is key to complement and strengthen the postmarketing safety studies. Use of common data models (CDM) is the predominant approach in order to enable large scale systematic analyses on disparate data models and vocabularies. Current CDM transformation practices depend on proprietarily developed Extract-Transform-Load (ETL) procedures, which require knowledge both on the semantics and technical characteristics of the source datasets and target CDM. Purpose: In this study, our aim is to develop a modular but coordinated transformation approach in order to separate semantic and technical steps of transformation processes, which do not have a strict separation in traditional ETL approaches. Such an approach would discretize the operations to extract data from source electronic health record systems, alignment of the source, and target models on the semantic level and the operations to populate target common data repositories. Approach: In order to separate the activities that are required to transform heterogeneous data sources to a target CDM, we introduce a semantic transformation approach composed of three steps: (1) transformation of source datasets to Resource Description Framework (RDF) format, (2) application of semantic conversion rules to get the data as instances of ontological model of the target CDM, and (3) population of repositories, which comply with the specifications of the CDM, by processing the RDF instances from step 2. The proposed approach has been implemented on real healthcare settings where Observational Medical Outcomes Partnership (OMOP) CDM has been chosen as the common data model and a comprehensive comparative analysis between the native and transformed data has been conducted. Results: Health records of ~1 million patients have been successfully transformed to an OMOP CDM based database from the source database. Descriptive statistics obtained from the source and target databases present analogous and consistent results. Discussion and Conclusion: Our method goes beyond the traditional ETL approaches by being more declarative and rigorous. Declarative because the use of RDF based mapping rules makes each mapping more transparent and understandable to humans while retaining logic-based computability. Rigorous because the mappings would be based on computer readable semantics which are amenable to validation through logic-based inference methods.
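A minimal sketch of step (2), assuming rdflib and entirely made-up source and target namespaces, classes and properties (they are not actual OMOP CDM or EHR vocabulary terms): a declarative mapping is expressed as a SPARQL CONSTRUCT rule and applied to RDF-ized source records.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

# Hypothetical namespaces for the source EHR export and the target CDM ontology.
SRC = Namespace("http://example.org/source-ehr#")
CDM = Namespace("http://example.org/target-cdm#")

# Step 1 (done upstream): source records already expressed as RDF triples.
source = Graph()
patient = URIRef("http://example.org/source-ehr/patient/123")
source.add((patient, RDF.type, SRC.Patient))
source.add((patient, SRC.birthYear, Literal(1980)))
source.add((patient, SRC.gender, Literal("F")))

# Step 2: a declarative semantic conversion rule as a SPARQL CONSTRUCT query.
RULE = """
PREFIX src: <http://example.org/source-ehr#>
PREFIX cdm: <http://example.org/target-cdm#>
CONSTRUCT {
    ?p a cdm:Person ;
       cdm:yearOfBirth ?year ;
       cdm:genderSourceValue ?gender .
}
WHERE {
    ?p a src:Patient ;
       src:birthYear ?year ;
       src:gender ?gender .
}
"""

target = Graph()
for triple in source.query(RULE):
    target.add(triple)

# Step 3 would load `target` into a repository that matches the CDM specification.
print(target.serialize(format="turtle"))
```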
Stripping ethanol from ethanol-blended fuels for use in NO.sub.x SCR
Kass, Michael Delos [Oak Ridge, TN; Graves, Ronald Lee [Knoxville, TN; Storey, John Morse Elliot [Oak Ridge, TN; Lewis, Sr., Samuel Arthur; Sluder, Charles Scott [Knoxville, TN; Thomas, John Foster [Powell, TN
2007-08-21
A method to use diesel fuel-alcohol microemulsions (E-diesel) to provide a source of reductant to lower NO.sub.x emissions using selective catalytic reduction. Ethanol is stripped from the microemulsion and entered into the exhaust gases upstream of the reducing catalyst. The method allows diesel (and other lean-burn) engines to meet new, lower emission standards without having to carry separate fuel and reductant tanks.
The Calculation of VOCs Diffusion Coefficient for Building Materials
NASA Astrophysics Data System (ADS)
Zhang, Xin; Deng, Quancai; Chen, Haijiang; Wu, Xiaoyun
2018-05-01
Volatile Organic Compounds (VOCs), as one of the major sources of air contamination, have an important bearing on one's general health. The adsorption capacity and the emission velocity of a material for VOCs can be described separately using the partition coefficient and the diffusion coefficient. In this paper, the detailed process and method for obtaining the VOC diffusion and partition coefficients of building materials by a genetic algorithm are introduced; the algorithm is easily implemented as a computer program, and the results obtained by the method are precise and practical.
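The abstract does not give the emission model or the genetic-algorithm settings; the sketch below only illustrates the inverse-problem workflow, fitting two parameters of a stand-in emission curve with SciPy's differential evolution, a related population-based optimizer. In the paper these fitted parameters would be the diffusion and partition coefficients of the actual VOC emission solution.

```python
import numpy as np
from scipy.optimize import differential_evolution

def forward_model(t, params):
    """Stand-in emission model C(t) = a * (1 - exp(-b t)); in the paper this would be
    the analytical VOC emission solution parameterized by the diffusion and
    partition coefficients."""
    a, b = params
    return a * (1.0 - np.exp(-b * t))

def fit_parameters(t, c_measured, bounds):
    """Recover the model parameters by minimizing the squared misfit with an
    evolutionary optimizer."""
    objective = lambda p: np.sum((forward_model(t, p) - c_measured) ** 2)
    result = differential_evolution(objective, bounds, seed=0, tol=1e-10)
    return result.x

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    t = np.linspace(0, 48, 25)                       # hours (illustrative)
    true_params = (120.0, 0.15)                      # hypothetical "true" values
    c_meas = forward_model(t, true_params) + rng.normal(0, 1.0, t.size)
    estimate = fit_parameters(t, c_meas, bounds=[(0, 500), (0, 1)])
    print("estimated parameters:", estimate.round(3))
```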
Proceedings of the international meeting on thermal nuclear reactor safety. Vol. 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Separate abstracts are included for each of the papers presented concerning current issues in nuclear power plant safety; national programs in nuclear power plant safety; radiological source terms; probabilistic risk assessment methods and techniques; non LOCA and small-break-LOCA transients; safety goals; pressurized thermal shocks; applications of reliability and risk methods to probabilistic risk assessment; human factors and man-machine interface; and data bases and special applications.
Minnehaha Creek Watershed SWMM5 Model Data Analysis and Future Recommendations
2013-07-01
comprehensive inventory of data inconsistencies without a source data inventory. To solve this problem, MCWD needs to develop a detailed, georeferenced, GIS...LMCW models, USACE recommends that MCWD keep the SWMM5 models separated instead of combining them into one comprehensive SWMM5 model for the entire...SWMM5 geometry. SWMM5 offers three routing methods: steady flow, kinematic wave, and dynamic wave. Each method offers advantages and disadvantages and
An EEG blind source separation algorithm based on a weak exclusion principle.
Lan Ma; Blu, Thierry; Wang, William S-Y
2016-08-01
The question of how to separate individual brain and non-brain signals, mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings, is a significant problem in contemporary neuroscience. This study proposes and evaluates a novel EEG Blind Source Separation (BSS) algorithm based on a weak exclusion principle (WEP). The chief point in which it differs from most previous EEG BSS algorithms is that the proposed algorithm is not based upon the hypothesis that the sources are statistically independent. Our first step was to investigate algorithm performance on simulated signals which have ground truth. The purpose of this simulation is to illustrate the proposed algorithm's efficacy. The results show that the proposed algorithm has good separation performance. Then, we used the proposed algorithm to separate real EEG signals from a memory study using a revised version of Sternberg Task. The results show that the proposed algorithm can effectively separate the non-brain and brain sources.
Separation of GRACE geoid time-variations using Independent Component Analysis
NASA Astrophysics Data System (ADS)
Frappart, F.; Ramillien, G.; Maisongrande, P.; Bonnet, M.
2009-12-01
Independent Component Analysis (ICA) is a blind separation method based on the simple assumptions of the independence of the sources and the non-Gaussianity of the observations. An approach based on this numerical method is used here to extract hydrological signals over land and oceans from the polluting striping noise, due to orbit repetitiveness, present in the GRACE global mass anomalies. We took advantage of the availability of monthly Level-2 solutions from three official providers (i.e., CSR, JPL and GFZ) that can be considered as different observations of the same phenomenon. The efficiency of the methodology is first demonstrated on a synthetic case. Applied to one month of GRACE solutions, it clearly separates the total water storage change from the meridional-oriented spurious gravity signals on the continents, but not on the oceans. For continental water storage, this technique gives results equivalent to those of the destriping method, recovering the hydrological patterns with less smoothing. This methodology is then used to filter the complete series of the 2002-2009 GRACE solutions.
Si, Weijian; Zhao, Pinjiao; Qu, Zhiyu
2016-01-01
This paper presents an L-shaped sparsely-distributed vector sensor (SD-VS) array with four different antenna compositions. With the proposed SD-VS array, a novel two-dimensional (2-D) direction of arrival (DOA) and polarization estimation method is proposed to handle the scenario where uncorrelated and coherent sources coexist. The uncorrelated and coherent sources are separated based on the moduli of the eigenvalues. For the uncorrelated sources, coarse estimates are acquired by extracting the DOA information embedded in the steering vectors from estimated array response matrix of the uncorrelated sources, and they serve as coarse references to disambiguate fine estimates with cyclical ambiguity obtained from the spatial phase factors. For the coherent sources, four Hankel matrices are constructed, with which the coherent sources are resolved in a similar way as for the uncorrelated sources. The proposed SD-VS array requires only two collocated antennas for each vector sensor, thus the mutual coupling effects across the collocated antennas are reduced greatly. Moreover, the inter-sensor spacings are allowed beyond a half-wavelength, which results in an extended array aperture. Simulation results demonstrate the effectiveness and favorable performance of the proposed method. PMID:27258271
Carotenoids from Foods of Plant, Animal and Marine Origin: An Efficient HPLC-DAD Separation Method.
Strati, Irini F; Sinanoglou, Vassilia J; Kora, Lintita; Miniadis-Meimaroglou, Sofia; Oreopoulou, Vassiliki
2012-12-19
Carotenoids are important antioxidant compounds, present in many foods of plant, animal and marine origin. The aim of the present study was to describe the carotenoid composition of tomato waste, prawn muscle and cephalothorax and avian (duck and goose) egg yolks through the use of a modified gradient elution HPLC method with a C30 reversed-phase column for the efficient separation and analysis of carotenoids and their cis-isomers. Elution time was reduced from 60 to 45 min without affecting the separation efficiency. All-trans-lycopene predominated in tomato waste, followed by all-trans-β-carotene, 13-cis-lutein and all-trans-lutein, while minor amounts of 9-cis-lutein, 13-cis-β-carotene and 9-cis-β-carotene were also detected. Considering the above findings, tomato waste is confirmed to be an excellent source of recovering carotenoids, especially all-trans-lycopene, for commercial use. Xanthophylls were the major carotenoids of avian egg yolks, all-trans-lutein and all-trans-zeaxanthin in duck and goose egg yolk, respectively. In the Penaeus kerathurus prawn, several carotenoids (zeaxanthin, all-trans-lutein, canthaxanthin, cryptoxanthin, optical and geometrical astaxanthin isomers) were identified in considerable amounts by the same method. A major advantage of this HPLC method was the efficient separation of carotenoids and their cis-isomers, originating from a wide range of matrices.
NASA Astrophysics Data System (ADS)
Weichert, Christoph; Köchert, Paul; Schötka, Eugen; Flügge, Jens; Manske, Eberhard
2018-06-01
The uncertainty of a straightness interferometer is independent of the component used to introduce the divergence angle between the two probing beams, and is limited by three main error sources, which are linked to each other: their resolution, the influence of refractive index gradients and the topography of the straightness reflector. To identify the configuration with minimal uncertainties under laboratory conditions, a fully fibre-coupled heterodyne interferometer was successively equipped with three different wedge prisms, resulting in three different divergence angles (4°, 8° and 20°). To separate the error sources an independent reference with a smaller reproducibility is needed. Therefore, the straightness measurement capability of the Nanometer Comparator, based on a multisensor error separation method, was improved to provide measurements with a reproducibility of 0.2 nm. The comparison results revealed that the influence of the refractive index gradients of air did not increase with interspaces between the probing beams of more than 11.3 mm. Therefore, over a movement range of 220 mm, the lowest uncertainty was achieved with the largest divergence angle. The dominant uncertainty contribution arose from the mirror topography, which was additionally determined with a Fizeau interferometer. The measured topography agreed within ±1.3 nm with the systematic deviations revealed in the straightness comparison, resulting in an uncertainty contribution of 2.6 nm for the straightness interferometer.
Meng, Ying-ying; Feng, Cang; Li, Tian; Wang, Ling
2009-12-01
Dry-weather flow quantity and quality of three representative separate storm sewer systems in Shanghai (H, G and N) were studied. Based on a survey of the operating status of the pumping stations as well as the characteristics of the drainage systems, the interception sewage volumes per unit area in the three systems were found to be 3610 m3/(km2·d), 1550 m3/(km2·d) and 2970 m3/(km2·d), respectively, with sanitary wastewater accounting for 25%, 85% and 71% of these volumes, respectively. The interception volume of system H was mainly composed of infiltrated underground water, so its dry-weather flow pollution was slight, whereas the interception volumes of G and N were both mainly composed of sanitary wastewater, so their dry-weather flow pollution was relatively serious. The water characteristics of the potential illicit discharge sources of dry-weather flow (grey water, black water and underground water) were preliminarily explored, and three parameter ratios (LAS/NH4+-N, NH4+-N/K and Mg/K) were proposed as tracer parameters for grey water, black water and underground water, respectively. Moreover, the water characteristics of grey water and of sanitary wastewater including black water were summarized: grey water was characterized by LAS/NH4+-N > 0.2 and NH4+-N/K < 1, while sanitary wastewater showed LAS/NH4+-N < 0.2 and NH4+-N/K > 1. On this basis, the applications of the flow chart method and the CMBM method in dry-weather flow detection of the monitored storm systems were preliminarily discussed, and the results were basically the same as those obtained from the comprehensive flow quantity and quality analysis. The research results and methods can provide guidance for the analysis and diagnosis of dry-weather flow sources and for subsequent reconstruction projects in similar separate storm sewer systems in China.
Kawai, Kosuke; Huong, Luong Thi Mai
2017-03-01
Proper management of food waste, a major component of municipal solid waste (MSW), is needed, especially in developing Asian countries where most MSW is disposed of in landfill sites without any pretreatment. Source separation can contribute to solving problems derived from the disposal of food waste. An organic waste source separation and collection programme has been operated in model areas in Hanoi, Vietnam, since 2007. This study proposed three key parameters (participation rate, proper separation rate and proper discharge rate) for behaviour related to source separation of household organic waste, and monitored the progress of the programme based on the physical composition of household waste sampled from 558 households in model programme areas of Hanoi. The results showed that 13.8% of 558 households separated organic waste, and 33.0% discharged mixed (unseparated) waste improperly. About 41.5% (by weight) of the waste collected as organic waste was contaminated by inorganic waste, and one-third of the waste disposed of as organic waste by separators was inorganic waste. We proposed six hypothetical future household behaviour scenarios to help local officials identify a final or midterm goal for the programme. We also suggested that the city government take further actions to increase the number of people participating in separating organic waste, improve the accuracy of separation and prevent non-separators from discharging mixed waste improperly.
Extending compile-time reverse mode and exploiting partial separability in ADIFOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; El-Khadiri, M.
1992-10-01
The numerical methods employed in the solution of many scientific computing problems require the computation of the gradient of a function f: R^n → R. ADIFOR is a source translator that, given a collection of subroutines to compute f, generates Fortran 77 code for computing the derivative of this function. Using the so-called torsion problem from the MINPACK-2 test collection as an example, this paper explores two issues in automatic differentiation: the efficient computation of derivatives for partially separable functions and the use of the compile-time reverse mode for the generation of derivatives. We show that orders of magnitude of improvement are possible when exploiting partial separability and maximizing use of the reverse mode.
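The reverse mode itself is easiest to see in a tiny tape-based implementation; the sketch below is a generic Python illustration of reverse-mode automatic differentiation, not ADIFOR's compile-time source transformation, and it does not exploit partial separability.

```python
import math

class Var:
    """Minimal reverse-mode automatic differentiation: each Var records its parents
    and the local partial derivatives; backward() accumulates adjoints over the graph."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents      # tuples of (parent Var, local partial derivative)
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value, [(self, other.value), (other, self.value)])

    def sin(self):
        return Var(math.sin(self.value), [(self, math.cos(self.value))])

    def backward(self):
        # Topological order of the computation graph, then a reverse sweep.
        order, seen = [], set()
        def visit(v):
            if id(v) not in seen:
                seen.add(id(v))
                for p, _ in v.parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            for p, local in v.parents:
                p.grad += local * v.grad

if __name__ == "__main__":
    # f(x, y) = sin(x)*y + x*y ;  df/dx = cos(x)*y + y,  df/dy = sin(x) + x
    x, y = Var(1.2), Var(0.7)
    f = x.sin() * y + x * y
    f.backward()
    print("reverse-mode gradients:", x.grad, y.grad)
    print("analytical check      :", math.cos(1.2) * 0.7 + 0.7, math.sin(1.2) + 1.2)
```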
Separation negatives from Kodak film types SO-368 and SO-242
NASA Technical Reports Server (NTRS)
Weinstein, M. S.
1972-01-01
Two master resolution friskets were produced on Kodak film types SO-368 and SO-242. These target masters consisted of 21 density steps with three-bar resolution targets at five modulation levels within each step. The target masters were contact printed onto Kodak separation negative film, type 4131, using both a contact printing frame and enlarger as one method of exposure, and a Miller-Holzwarth contact printer as the other exposing device. Red, green, and blue Wratten filters were used to filter the exposing source. Tray processing was done with DK-50 developer diluted 1:2 at a temperature of 70 F. The resolution values were read for the SO-368 and SO-242 target masters, and the red, green, and blue separation negatives.
Method of treating waste water
Deininger, James P.; Chatfield, Linda K.
1995-01-01
A process of treating water to remove metal ion contaminants contained therein, said metal ion contaminants selected from the group consisting of metals in Groups 8, 1b, 2b, 4a, 5a, or 6a of the periodic table, lanthanide metals, and actinide metals including transuranic element metals, by adjusting the pH of a metal ion contaminant-containing water source to within the range of about 6.5 to about 14.0, admixing the water source with a mixture of an alkali or alkaline earth ferrate and a water soluble salt, e.g., a zirconium salt, in an amount sufficient to form a precipitate within the water source, the amount the mixture of ferrate and water soluble salt effective to reduce the metal ion contaminant concentration in the water source, permitting the precipitate in the admixture to separate and thereby yield a supernatant liquid having a reduced metal ion contaminant concentration, and separating the supernatant liquid having the reduced metal ion contaminant concentration from the admixture is provided. A composition of matter including an alkali or alkaline earth ferrate and a water soluble salt, e.g., a zirconium salt, is also provided.
Induced natural convection thermal cycling device
Heung, Leung Kit [Aiken, SC
2002-08-13
A device for separating gases, especially isotopes, by thermal cycling of a separation column, using a pressure vessel mounted vertically and having baffled sources for cold and heat. Coils at the top are cooled with a fluid such as liquid nitrogen. Coils at the bottom are either electrical resistance coils or a tubular heat exchanger. The sources are shrouded with an insulated "top hat" and simultaneously opened and closed at the outlets to cool or heat the separation column. Alternatively, the sources for cold and heat are mounted separately outside the vessel and an external loop is provided for each circuit.
46 CFR 129.395 - Radio installations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... INSTALLATIONS Power Sources and Distribution Systems § 129.395 Radio installations. A separate circuit, with... radios, if installed, may be powered from a local lighting power source, such as the pilothouse lighting panel, provided each radio power source has a separate overcurrent protection device. ...
46 CFR 129.395 - Radio installations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... INSTALLATIONS Power Sources and Distribution Systems § 129.395 Radio installations. A separate circuit, with... radios, if installed, may be powered from a local lighting power source, such as the pilothouse lighting panel, provided each radio power source has a separate overcurrent protection device. ...
46 CFR 129.395 - Radio installations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... INSTALLATIONS Power Sources and Distribution Systems § 129.395 Radio installations. A separate circuit, with... radios, if installed, may be powered from a local lighting power source, such as the pilothouse lighting panel, provided each radio power source has a separate overcurrent protection device. ...
46 CFR 129.395 - Radio installations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... INSTALLATIONS Power Sources and Distribution Systems § 129.395 Radio installations. A separate circuit, with... radios, if installed, may be powered from a local lighting power source, such as the pilothouse lighting panel, provided each radio power source has a separate overcurrent protection device. ...
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Alexandrov, B.
2014-12-01
The identification of the physical sources causing spatial and temporal fluctuations of state variables such as river stage levels and aquifer hydraulic heads is challenging. The fluctuations can be caused by variations in natural and anthropogenic sources such as precipitation events, infiltration, groundwater pumping, barometric pressures, etc. The source identification and separation can be crucial for conceptualization of the hydrological conditions and characterization of system properties. If the original signals that cause the observed state-variable transients can be successfully "unmixed", decoupled physics models may then be applied to analyze the propagation of each signal independently. We propose a new model-free inverse analysis of transient data based on the Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS) coupled with the k-means clustering algorithm, which we call NMFk. NMFk is capable of identifying a set of unique sources from a set of experimentally measured mixed signals, without any information about the sources, their transients, and the physical mechanisms and properties controlling the signal propagation through the system. A classical BSS conundrum is the so-called "cocktail-party" problem, where several microphones are recording the sounds in a ballroom (music, conversations, noise, etc.). Each of the microphones is recording a mixture of the sounds. The goal of BSS is to "unmix" and reconstruct the original sounds from the microphone records. Similarly to the "cocktail-party" problem, our model-free analysis only requires information about state-variable transients at a number of observation points, m, where m > r, and r is the number of unknown unique sources causing the observed fluctuations. We apply the analysis to a dataset from the Los Alamos National Laboratory (LANL) site. We identify the sources as barometric pressure and water-supply pumping effects and estimate their impacts. We also estimate the location of the water-supply pumping wells based on the available data. The possible applications of the NMFk algorithm are not limited to hydrology problems; NMFk can be applied to any problem where temporal system behavior is observed at multiple locations and an unknown number of physical sources are causing these fluctuations.
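As a rough illustration of the NMFk idea (not the LANL implementation), the sketch below factorizes a set of non-negative mixed signals with NMF for several trial numbers of sources and uses k-means clustering of the factors obtained from random restarts to judge which rank yields reproducible sources; the toy data, well count, and all variable names are hypothetical.
```python
# Illustrative NMFk-style sketch: NMF restarts + k-means stability check.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
true_sources = np.vstack([np.abs(np.sin(2 * np.pi * 0.3 * t)),   # e.g. a barometric-like signal
                          np.abs(np.sin(2 * np.pi * 1.1 * t))])  # e.g. a pumping-like signal
mixing = rng.uniform(0.2, 1.0, size=(6, 2))                      # six hypothetical observation wells
X = mixing @ true_sources                                        # non-negative mixed transients

def stability_for_rank(X, r, n_restarts=10):
    """Run NMF n_restarts times and measure how tightly the learned sources cluster."""
    candidates = []
    for seed in range(n_restarts):
        model = NMF(n_components=r, init="random", random_state=seed, max_iter=1000)
        model.fit(X)
        H = model.components_
        H = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)
        candidates.append(H)
    stacked = np.vstack(candidates)
    labels = KMeans(n_clusters=r, n_init=10, random_state=0).fit_predict(stacked)
    return silhouette_score(stacked, labels)      # high score -> reproducible sources

for r in (2, 3, 4):
    print(r, round(stability_for_rank(X, r), 3))  # the best-supported rank scores highest
```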
NASA Astrophysics Data System (ADS)
Brewick, P. T.; Smyth, A. W.
2014-12-01
The accurate and reliable estimation of modal damping from output-only vibration measurements of structural systems is a continuing challenge in the fields of operational modal analysis (OMA) and system identification. In this paper a modified version of the blind source separation (BSS)-based Second-Order Blind Identification (SOBI) method was used to perform modal damping identification on a model bridge structure under varying loading conditions. The bridge model was created with finite elements and consisted of a series of stringer beams supported by a larger girder. The excitation was separated into two categories: ambient noise and traffic loads, with noise modeled with random forcing vectors and traffic simulated with moving loads for cars and partially distributed moving masses for trains. The acceleration responses were treated as the mixed output signals for the BSS algorithm. The modified SOBI method used a windowing technique to maximize the amount of information used for blind identification from the responses. The modified SOBI method successfully found the mode shapes for both types of excitation with strong accuracy, but power spectral densities (PSDs) of the recovered modal responses showed signs of distortion for the traffic simulations. The distortion had an adverse effect on the damping ratio estimates for some of the modes, but no correlation could be found between the accuracy of the damping estimates and the accuracy of the recovered mode shapes. The responses and their PSDs were compared to real-world collected data, and patterns similar to distortion were observed, implying that this issue likely affects real-world estimates.
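For readers unfamiliar with second-order blind identification, the sketch below implements the one-lag variant usually called AMUSE (full SOBI jointly diagonalizes lagged covariances at many lags, and the windowing modification described above is not reproduced); the two-channel toy signals and mixing matrix are illustrative assumptions.
```python
# Minimal AMUSE-style second-order BSS sketch: whiten, then diagonalize one lagged covariance.
import numpy as np

def amuse(X, lag=1):
    """X: (n_channels, n_samples) mixed signals. Returns source estimates (up to order/scale)."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T                  # whitening matrix
    Z = W @ X
    C_tau = Z[:, :-lag] @ Z[:, lag:].T / (Z.shape[1] - lag)  # lagged covariance of whitened data
    C_tau = 0.5 * (C_tau + C_tau.T)                          # symmetrize
    _, V = np.linalg.eigh(C_tau)                             # rotation that separates the sources
    return V.T @ Z

# toy example: two sinusoidal "modal responses" mixed into two sensors
t = np.linspace(0, 5, 2000)
S = np.vstack([np.sin(2 * np.pi * 3 * t), np.sin(2 * np.pi * 7 * t + 0.5)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
estimates = amuse(A @ S)
```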
Acoustophoretic separation of airborne millimeter-size particles by a Fresnel lens.
Cicek, Ahmet; Korozlu, Nurettin; Adem Kaya, Olgun; Ulug, Bulent
2017-03-02
We numerically demonstrate acoustophoretic separation of spherical solid particles in air by means of an acoustic Fresnel lens. Besides gravitational and drag forces, freely-falling millimeter-size particles experience large acoustic radiation forces around the focus of the lens, where the interplay of forces leads to differentiation of particle trajectories with respect to either size or material properties. Due to the strong acoustic field at the focus, the radiation force can divert particles at source intensities significantly smaller than those required for acoustic levitation in a standing field. When the lens is designed to have a focal length of 100 mm at 25 kHz, finite-element method simulations reveal a sharp focus with a full-width at half-maximum of 0.5 wavelengths and a field enhancement of 18 dB. Through numerical calculation of forces and simulation of particle trajectories, we demonstrate size-based separation of acrylic particles at a source sound pressure level of 153 dB such that particles with diameters larger than 0.5 mm are admitted into the central hole, whereas smaller particles are rejected. Besides, efficient separation of particles with similar acoustic properties, such as polyethylene, polystyrene and acrylic particles of the same size, is also demonstrated.
Acoustophoretic separation of airborne millimeter-size particles by a Fresnel lens
NASA Astrophysics Data System (ADS)
Cicek, Ahmet; Korozlu, Nurettin; Adem Kaya, Olgun; Ulug, Bulent
2017-03-01
We numerically demonstrate acoustophoretic separation of spherical solid particles in air by means of an acoustic Fresnel lens. Besides gravitational and drag forces, freely-falling millimeter-size particles experience large acoustic radiation forces around the focus of the lens, where the interplay of forces leads to differentiation of particle trajectories with respect to either size or material properties. Due to the strong acoustic field at the focus, the radiation force can divert particles at source intensities significantly smaller than those required for acoustic levitation in a standing field. When the lens is designed to have a focal length of 100 mm at 25 kHz, finite-element method simulations reveal a sharp focus with a full-width at half-maximum of 0.5 wavelengths and a field enhancement of 18 dB. Through numerical calculation of forces and simulation of particle trajectories, we demonstrate size-based separation of acrylic particles at a source sound pressure level of 153 dB such that particles with diameters larger than 0.5 mm are admitted into the central hole, whereas smaller particles are rejected. Besides, efficient separation of particles with similar acoustic properties, such as polyethylene, polystyrene and acrylic particles of the same size, is also demonstrated.
Acoustophoretic separation of airborne millimeter-size particles by a Fresnel lens
Cicek, Ahmet; Korozlu, Nurettin; Adem Kaya, Olgun; Ulug, Bulent
2017-01-01
We numerically demonstrate acoustophoretic separation of spherical solid particles in air by means of an acoustic Fresnel lens. Besides gravitational and drag forces, freely-falling millimeter-size particles experience large acoustic radiation forces around the focus of the lens, where the interplay of forces leads to differentiation of particle trajectories with respect to either size or material properties. Due to the strong acoustic field at the focus, the radiation force can divert particles at source intensities significantly smaller than those required for acoustic levitation in a standing field. When the lens is designed to have a focal length of 100 mm at 25 kHz, finite-element method simulations reveal a sharp focus with a full-width at half-maximum of 0.5 wavelengths and a field enhancement of 18 dB. Through numerical calculation of forces and simulation of particle trajectories, we demonstrate size-based separation of acrylic particles at a source sound pressure level of 153 dB such that particles with diameters larger than 0.5 mm are admitted into the central hole, whereas smaller particles are rejected. Besides, efficient separation of particles with similar acoustic properties, such as polyethylene, polystyrene and acrylic particles of the same size, is also demonstrated. PMID:28252033
Bi-Xian, N I; Ming-Xue, S; Xiang-Zhen, X U; Xiao-Ting, W; Yang, D; Xiao-Lin, J
2017-05-17
Objective To understand the contamination status of Giardia lamblia and Cryptosporidium in the drinking water of Jiangsu Province, so as to provide evidence for producing hygienic and safe drinking water. Methods A total of 28 water plants in 13 cities of Jiangsu Province were selected, and the source water (10 L), chlorinated water (100 L) and tap water (100 L) were collected separately at each site. The water samples were then treated by filtration, washing, centrifugal concentration, immunomagnetic separation, and immunofluorescence assay to detect the numbers of Giardia cysts and Cryptosporidium oocysts. Results In total, 84 samples from 13 cities were collected, including 28 source water, 28 chlorinated water and 28 tap water samples. Among the chlorinated water and tap water samples, no Giardia cysts or Cryptosporidium oocysts were found. However, Giardia cysts were detected in 3 (10.71%, 3/28) source water samples (Yancheng, Lianyungang and Changzhou cities), each with a density of 1 cyst/10 L. Cryptosporidium oocysts were also detected in 3 (10.71%, 3/28) source water samples (Nanjing, Zhenjiang and Yangzhou cities), each with a density of 1 oocyst/10 L. Conclusions The source water in some areas of Jiangsu Province has been contaminated by Giardia and Cryptosporidium. To ensure the safety of drinking water, the regulation of source water and the surveillance of drinking water should be strengthened.
Improved Multiple-Species Cyclotron Ion Source
NASA Technical Reports Server (NTRS)
Soli, George A.; Nichols, Donald K.
1990-01-01
Use of the pure isotope 86Kr instead of natural krypton in a multiple-species ion source enables the source to produce krypton ions separated from argon ions by tuning the cyclotron with which the source is used. This added capability to produce and separate krypton ions at kinetic energies of 150 to 400 MeV is necessary for simulation of the worst-case ions occurring in outer space.
Sukholthaman, Pitchayanin; Sharp, Alice
2016-06-01
Municipal solid waste has been considered one of the most immediate and serious problems confronting urban governments in most developing and transitional economies. The performance of solid waste services depends strongly on the effectiveness of the waste collection and transportation process. Generally, this process involves a large amount of expenditure and has very complex and dynamic operational problems. Source separation has a major impact on the effectiveness of a waste management system, as it causes significant changes in the quantity and quality of waste reaching final disposal. To evaluate the impact of effective source separation on waste collection and transportation, this study adopts a decision support tool to comprehend cause-and-effect interactions of different variables in the waste management system. A system dynamics model that envisages the relationships between source separation and the effectiveness of waste management in Bangkok, Thailand is presented. Influential factors that affect waste separation attitudes are addressed, and the result of a change in perception of waste separation is explained. The impacts of different separation rates on the effectiveness of the provided collection service are compared in six scenarios. 'Scenario 5' gives the most promising opportunities, as 40% of residents are willing to conduct organic and recyclable waste separation. The results show that better waste collection and transportation service, lower monthly expenses, extended landfill life, and a satisfactory efficiency of the provided service of 60.48% will be achieved at the end of the simulation period. Implications for how to get the public involved in conducting source separation are proposed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Faust, James J; Doudrick, Kyle; Yang, Yu; Capco, David G; Westerhoff, Paul
2016-01-01
Recent studies indicate the presence of nano-scale titanium dioxide (TiO2) as an additive in human foodstuffs, but a practical protocol to isolate and separate nano-fractions from soluble foodstuffs as a source of material remains elusive. As such, we developed a method for separating the nano and submicron fractions found in commercial-grade TiO2 (E171) and E171 extracted from soluble foodstuffs and pharmaceutical products (e.g., chewing gum, pain reliever, and allergy medicine). Primary particle analysis of commercial-grade E171 indicated that 54% of particles were nano-sized (i.e., < 100 nm). Isolation and primary particle analysis of five consumer goods intended to be ingested revealed differences in the percent of nano-sized particles from 32%‒58%. Separation and enrichment of nano- and submicron-sized particles from commercial-grade E171 and E171 isolated from foodstuffs and pharmaceuticals was accomplished using rate-zonal centrifugation. Commercial-grade E171 was separated into nano- and submicron-enriched fractions consisting of a nano:submicron fraction of approximately 0.45:1 and 3.2:1, respectively. E171 extracted from gum had nano:submicron fractions of 1.4:1 and 0.19:1 for nano- and submicron-enriched, respectively. We show a difference in particle adhesion to the cell surface, which was found to be dependent on particle size and epithelial orientation. Finally, we provide evidence that E171 particles are not immediately cytotoxic to the Caco-2 human intestinal epithelium model. These data suggest that this separation method is appropriate for studies interested in isolating the nano-sized particle fraction taken directly from consumer products, in order to study separately the effects of nano and submicron particles.
Yang, Yu; Capco, David G.; Westerhoff, Paul
2016-01-01
Recent studies indicate the presence of nano-scale titanium dioxide (TiO2) as an additive in human foodstuffs, but a practical protocol to isolate and separate nano-fractions from soluble foodstuffs as a source of material remains elusive. As such, we developed a method for separating the nano and submicron fractions found in commercial-grade TiO2 (E171) and E171 extracted from soluble foodstuffs and pharmaceutical products (e.g., chewing gum, pain reliever, and allergy medicine). Primary particle analysis of commercial-grade E171 indicated that 54% of particles were nano-sized (i.e., < 100 nm). Isolation and primary particle analysis of five consumer goods intended to be ingested revealed differences in the percent of nano-sized particles from 32%‒58%. Separation and enrichment of nano- and submicron-sized particles from commercial-grade E171 and E171 isolated from foodstuffs and pharmaceuticals was accomplished using rate-zonal centrifugation. Commercial-grade E171 was separated into nano- and submicron-enriched fractions consisting of a nano:submicron fraction of approximately 0.45:1 and 3.2:1, respectively. E171 extracted from gum had nano:submicron fractions of 1.4:1 and 0.19:1 for nano- and submicron-enriched, respectively. We show a difference in particle adhesion to the cell surface, which was found to be dependent on particle size and epithelial orientation. Finally, we provide evidence that E171 particles are not immediately cytotoxic to the Caco-2 human intestinal epithelium model. These data suggest that this separation method is appropriate for studies interested in isolating the nano-sized particle fraction taken directly from consumer products, in order to study separately the effects of nano and submicron particles. PMID:27798677
[Detection of Heart Rate of Fetal ECG Based on STFT and BSS].
Wang, Xu; Cai, Kun
2016-01-01
Changes in fetal heart rate reflect the regulating function of the circulatory system and the central nervous system, so it is important to detect the fetal heart rate during the perinatal period. This paper puts forward a fetal heart rate detection method based on the short-time Fourier transform (STFT) and blind source separation. First, the mixed ECG signal was preprocessed, and the wavelet transform technique was used to separate the noisy fetal ECG signal from the mixed ECG signal; after that, the short-time Fourier transform and blind separation were applied, and the correlation coefficients were calculated. Finally, the independent component with the strongest correlation with the original signal was selected for FECG R-peak detection and calculation of the instantaneous fetal heart rate. The experimental results show that the method can improve the detection rate of the FECG R peaks and locate them accurately even at a low signal-to-noise ratio.
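A simplified sketch of the last steps of such a pipeline is given below, with ICA standing in for the paper's STFT-based blind separation; the sampling rate, channel layout, and placeholder data are assumptions rather than values from the study.
```python
# Simplified fetal-heart-rate sketch: ICA separation, component selection by correlation,
# R-peak detection, instantaneous heart rate.  Placeholder data only.
import numpy as np
from sklearn.decomposition import FastICA
from scipy.signal import find_peaks

fs = 500.0                                          # assumed sampling rate, Hz
rng = np.random.default_rng(1)
mixed_ecg = rng.standard_normal((10 * int(fs), 4))  # stand-in for 4 abdominal channels

ica = FastICA(n_components=4, random_state=0)
components = ica.fit_transform(mixed_ecg)           # independent components, one per column

reference = mixed_ecg[:, 0]                         # channel assumed closest to the fetal ECG
corr = [abs(np.corrcoef(components[:, k], reference)[0, 1]) for k in range(4)]
fetal = components[:, int(np.argmax(corr))]         # component most correlated with the reference

peaks, _ = find_peaks(np.abs(fetal), distance=int(0.25 * fs))  # at least 0.25 s between beats
rr_intervals = np.diff(peaks) / fs
heart_rate_bpm = 60.0 / rr_intervals                # instantaneous fetal heart rate estimate
```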
System for recovery of daughter isotopes from a source material
Tranter, Troy J [Idaho Falls, ID; Todd, Terry A [Aberdeen, ID; Lewis, Leroy C [Idaho Falls, ID; Henscheid, Joseph P [Idaho Falls, ID
2009-08-04
A method of separating isotopes from a mixture containing at least two isotopes in a solution is disclosed. A first isotope is precipitated and is collected from the solution. A daughter isotope is generated and collected from the first isotope. The invention includes a method of producing an actinium-225/bismuth-213 product from a material containing thorium-229 and thorium-232. A solution is formed containing nitric acid and the material containing thorium-229 and thorium-232, and iodate is added to form a thorium iodate precipitate. A supernatant is separated from the thorium iodate precipitate and a second volume of nitric acid is added to the thorium iodate precipitate. The thorium iodate precipitate is stored and a decay product comprising actinium-225 and bismuth-213 is generated in the second volume of nitric acid, which is then separated from the thorium iodate precipitate, filtered, and treated using at least one chromatographic procedure. A system for producing an actinium-225/bismuth-213 product is also disclosed.
NASA Astrophysics Data System (ADS)
Heleno, S.; Matias, M.; Pina, P.; Sousa, A. J.
2015-09-01
A method for semi-automatic landslide detection, with the ability to separate source and run-out areas, is presented in this paper. It combines object-based image analysis and a Support Vector Machine classifier on a GeoEye-1 multispectral image, sensed 3 days after the major damaging landslide event that occurred on Madeira island (20 February 2010), with a pre-event LIDAR Digital Elevation Model. The testing is developed in a 15 km2 study area, where 95% of the landslide scars are detected by this supervised approach. The classifier presents a good performance in the delineation of the overall landslide area. In addition, fair results are achieved in the separation of the source from the run-out landslide areas, although on less illuminated slopes this discrimination is less effective than on sunnier, east-facing slopes.
Automated lung sound analysis for detecting pulmonary abnormalities.
Datta, Shreyasi; Dutta Choudhury, Anirban; Deshpande, Parijat; Bhattacharya, Sakyajit; Pal, Arpan
2017-07-01
Identification of pulmonary diseases requires accurate auscultation as well as elaborate and expensive pulmonary function tests. Prior art has shown that pulmonary diseases lead to abnormal lung sounds such as wheezes and crackles. This paper introduces novel spectral and spectrogram features, which are further refined by the Maximal Information Coefficient, leading to the classification of healthy and abnormal lung sounds. A balanced lung sound dataset, consisting of publicly available data and data collected with a low-cost in-house digital stethoscope, is used. The performance of the classifier is validated over several randomly selected non-overlapping training and validation samples and tested on separate subjects for two separate test cases: (a) overlapping and (b) non-overlapping data sources in training and testing. The results reveal that the proposed method sustains an accuracy of 80% even for non-overlapping data sources in training and testing.
METHOD OF PRODUCING HAFNIUM-FREE "CRYSTAL-BAR" ZIRCONIUM FROM A CRUDE SOURCE OF ZIRCONIUM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newnham, I.E.
1959-05-01
Production of Hf-free crystal-bar Zr is described. The crude source of Zr is carbided with carbon in a graphite resistance furnace, then treated with iodide to form ZrI4. The ZrI4 plus iodide impurities is heated with Zr powder in an evacuated vessel to 500 C. This causes ZrI3 to form and be separated from the more volatile HfI4 and AlI3, which deposit on the condenser lid. The ZrI3 is further separated from the iodides of Fe and V by heating to 350 C to re-form ZrI4. The more volatile ZrI4 then is collected in pure form on another condenser lid, from which it can be removed and reduced to Zr. (T.R.H.)
Analysis and enhancement of astaxanthin accumulation in Haematococcus pluvialis.
Orosa, M; Franqueira, D; Cid, A; Abalde, J
2005-02-01
The green microalga Haematococcus pluvialis was cultured with different concentrations of NaNO(3) to determine the effect on cell growth and astaxanthin accumulation. The optimum nitrate concentration to obtain astaxanthin and to avoid the cessation of cell division was 0.15 g/l NaNO(3). The ratio of chlorophyll a to total carotenoids proved a good physiological indicator of nitrogen deficiency in the cell. The effect of different carbon sources, malonate and acetate, on astaxanthin accumulation was also studied; up to 13 times more carotenoids per cell were accumulated in cultures with malonate than in cultures without this compound. The pigment analysis was performed by a new low-toxicity HPLC method capable of separating chlorophylls a and b, carotenes and xanthophylls in a short period of time, using low volumes of solvents and at an economical price. With this method even echinenone was separated, which had not been achieved by any other method.
A method for monitoring the variability in nuclear absorption characteristics of aviation fuels
NASA Technical Reports Server (NTRS)
Sprinkle, Danny R.; Shen, Chih-Ping
1988-01-01
A technique for monitoring variability in the nuclear absorption characteristics of aviation fuels has been developed. It is based on a highly collimated low energy gamma radiation source and a sodium iodide counter. The source and the counter assembly are separated by a geometrically well-defined test fuel cell. A computer program for determining the mass attenuation coefficient of the test fuel sample, based on the data acquired for a preset counting period, has been developed and tested on several types of aviation fuel.
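A minimal sketch of the attenuation calculation implied by this setup, assuming the usual Beer-Lambert relation I = I0 exp(-(mu/rho) rho x); the counts, density, and path length below are placeholder values, not data from the report.
```python
# Mass attenuation coefficient from transmission counts (placeholder numbers).
import numpy as np

counts_empty = 1.50e6     # counts through the empty cell for the preset counting period
counts_fuel = 9.80e5      # counts with the test fuel sample in the beam
path_length_cm = 5.0      # geometric thickness of the fuel cell
density_g_cm3 = 0.80      # assumed fuel density

mu_mass = np.log(counts_empty / counts_fuel) / (density_g_cm3 * path_length_cm)
print(f"mass attenuation coefficient ~ {mu_mass:.4f} cm^2/g")
```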
Explosion localization via infrasound.
Szuberla, Curt A L; Olson, John V; Arnoult, Kenneth M
2009-11-01
Two acoustic source localization techniques were applied to infrasonic data and their relative performance was assessed. The standard approach for low-frequency localization uses an ensemble of small arrays to separately estimate far-field source bearings, resulting in a solution from the various back azimuths. This method was compared to one developed by the authors that treats the smaller subarrays as a single, meta-array. In numerical simulation and a field experiment, the latter technique was found to provide improved localization precision everywhere in the vicinity of a 3-km-aperture meta-array, often by an order of magnitude.
Inverting Monotonic Nonlinearities by Entropy Maximization
Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.
2016-01-01
This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such kinds of mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of our proposed method is based on the fact that it permits to decouple the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based either in a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that gives a guarantee for the MaxEnt method to succeed compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions outperforming other methods in terms of the obtained Signal to Noise Ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability for compensating nonlinearities, MaxEnt is very robust, i.e. showing small variability in the results. PMID:27780261
Inverting Monotonic Nonlinearities by Entropy Maximization.
Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F
2016-01-01
This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such kinds of mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of our proposed method is based on the fact that it permits to decouple the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based either in a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that gives a guarantee for the MaxEnt method to succeed compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions outperforming other methods in terms of the obtained Signal to Noise Ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability for compensating nonlinearities, MaxEnt is very robust, i.e. showing small variability in the results.
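The Gaussianization baseline that MaxEnt generalizes can be sketched in a few lines: pass the observed nonlinear mixture through its empirical CDF and then through the inverse Gaussian CDF, which compensates a monotonic distortion when the underlying linear mixture is roughly Gaussian. The cubic distortion below is an illustrative assumption, not one of the paper's test cases.
```python
# Gaussianization sketch for blind compensation of a monotonic nonlinearity.
import numpy as np
from scipy.stats import rankdata, norm

rng = np.random.default_rng(0)
linear_mixture = rng.standard_normal(5000) + 0.5 * rng.standard_normal(5000)  # roughly Gaussian
observed = linear_mixture + 0.3 * linear_mixture ** 3                          # monotonic distortion

u = rankdata(observed) / (len(observed) + 1)    # empirical CDF values in (0, 1)
compensated = norm.ppf(u)                       # map to a Gaussian-shaped estimate

# the compensated signal should correlate strongly with the undistorted mixture
print(np.corrcoef(compensated, linear_mixture)[0, 1])
```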
Isotope separation by photodissociation of Van der Waal's molecules
Lee, Yuan T.
1977-01-01
A method of separating isotopes based on the dissociation of a Van der Waal's complex. A beam of molecules of a Van der Waal's complex containing, as one partner of the complex, a molecular species in which an element is present in a plurality of isotopes is subjected to radiation from a source tuned to a frequency which will selectively excite vibrational motion by a vibrational transition or through electronic transition of those complexed molecules of the molecular species which contain a desired isotope. Since the Van der Waal's binding energy is much smaller than the excitational energy of vibrational motion, the thus-excited Van der Waal's complexes dissociate into molecular components enriched in the desired isotope. The recoil velocity associated with vibrational to translational and rotational relaxation will send the separated molecules away from the beam, whereupon the product enriched in the desired isotope can be separated from the constituents of the beam.
Liao, Wei; Hua, Xue-Ming; Zhang, Wang; Li, Fang
2014-05-01
In the present paper, the authors calculated the plasma's peak electron temperatures under different heat source separation distances in laser-pulse GMAW hybrid welding based on Boltzmann spectrometry. The plasma's peak electron densities under the corresponding conditions were also calculated by using the Stark width of the plasma spectrum. Combined with high-speed photography, the effect of heat source separation distance on electron temperature and electron density was studied. The results show that with the increase in heat source separation distance, the electron temperatures and electron densities of the laser plasma did not change significantly. However, the electron temperatures of the arc plasma decreased, and the electron densities of the arc plasma first increased and then decreased.
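For context, the Boltzmann-plot calculation referred to above fits ln(I*lambda/(g*A)) against the upper-level energies of several lines of one species; the slope is -1/(k_B*T_e). The line data in this sketch are placeholders, not the lines actually used in the welding study.
```python
# Boltzmann-plot electron temperature sketch with hypothetical line data.
import numpy as np

k_B = 8.617e-5                                        # Boltzmann constant, eV/K
E_upper = np.array([3.2, 3.6, 4.1, 4.5])              # upper-level energies, eV
intensity = np.array([1200.0, 800.0, 350.0, 180.0])   # measured line intensities
wavelength = np.array([430.8, 427.2, 438.3, 440.5])   # nm
gA = np.array([3.5e8, 2.8e8, 1.9e8, 1.2e8])           # statistical weight x transition probability

y = np.log(intensity * wavelength / gA)
slope, intercept = np.polyfit(E_upper, y, 1)
T_e = -1.0 / (k_B * slope)
print(f"estimated electron temperature ~ {T_e:.0f} K")
```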
NASA Astrophysics Data System (ADS)
Clayton, Steven; Chupp, Tim; Cude-Woods, Christopher; Currie, Scott; Ito, Takeyasu; Liu, Chen-Yu; Long, Joshua; MacDonald, Stephen; Makela, Mark; O'Shaughnessy, Christopher; Plaster, Brad; Ramsey, John; Saunders, Andy; LANL nEDM Collaboration
2017-09-01
The Los Alamos National Laboratory ultracold neutron (UCN) source was recently upgraded for a factor of 5 improvement in stored density, providing the statistical precision needed for a room temperature neutron electric dipole moment measurement with a sensitivity of 3 × 10^-27 e·cm, a factor of 10 better than the limit set by the Sussex-RAL-ILL experiment. Here, we show results of a demonstration of Ramsey's separated oscillatory fields method on stored UCNs at the LANL UCN source and in a geometry relevant for a nEDM measurement. We argue a world-leading nEDM experiment could be performed at LANL with existing technology and a short lead time, providing a physics result with sensitivity intermediate between the current limit set by Sussex-RAL-ILL and the anticipated limit from the complex, cryogenic nEDM experiment planned for the next decade at the ORNL Spallation Neutron Source (SNS-nEDM). This work was supported by the Los Alamos LDRD Program, Project 20140015DR.
External control of electron energy distributions in a dual tandem inductively coupled plasma
NASA Astrophysics Data System (ADS)
Liu, Lei; Sridhar, Shyam; Zhu, Weiye; Donnelly, Vincent M.; Economou, Demetre J.; Logue, Michael D.; Kushner, Mark J.
2015-08-01
The control of electron energy probability functions (EEPFs) in low pressure partially ionized plasmas is typically accomplished through the format of the applied power. For example, through the use of pulse power, the EEPF can be modulated to produce shapes not possible under continuous wave excitation. This technique uses internal control. In this paper, we discuss a method for external control of EEPFs by transport of electrons between separately powered inductively coupled plasmas (ICPs). The reactor incorporates dual ICP sources (main and auxiliary) in a tandem geometry whose plasma volumes are separated by a grid. The auxiliary ICP is continuously powered while the main ICP is pulsed. Langmuir probe measurements of the EEPFs during the afterglow of the main ICP suggests that transport of hot electrons from the auxiliary plasma provided what is effectively an external source of energetic electrons. The tail of the EEPF and bulk electron temperature were then elevated in the afterglow of the main ICP by this external source of power. Results from a computer simulation for the evolution of the EEPFs concur with measured trends.
NASA Astrophysics Data System (ADS)
Ghannadpour, Seyyed Saeed; Hezarkhani, Ardeshir
2016-03-01
The U-statistic method is one of the most important structural methods to separate anomalies from the background. It considers the locations of samples and carries out a statistical analysis of the data without judging from a geochemical point of view, attempting to separate subpopulations and determine anomalous areas. In the present study, to apply the U-statistic method in three-dimensional (3D) conditions, it is applied to the grades of two ideal test examples, taking the sample Z values (elevation) into account. So far, this is the first time that this method has been applied in a 3D setting. To evaluate the performance of the 3D U-statistic method and to compare it with a non-structural method, the method of threshold assessment based on the median and standard deviation (MSD method) is applied to the two test examples. Results show that the samples indicated by the U-statistic method as anomalous are more regular and involve less dispersion than those indicated by the MSD method, so that, according to the locations of the anomalous samples, denser clusters of them can be identified as promising zones. Moreover, results show that at a threshold of U = 0, the total error of misclassification for the U-statistic method is much smaller than the total error of the x̄ + n × s criterion. Finally, a 3D model of the two test examples for separating anomaly from background using the 3D U-statistic method is provided. The source code for a software program, which was developed in the MATLAB programming language in order to perform the calculations of the 3D U-spatial statistic method, is additionally provided. This software is compatible with all geochemical varieties and can be used in similar exploration projects.
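For reference, the non-structural MSD comparison method amounts to a one-line threshold, as in the sketch below; the grades and the choice n = 1 are illustrative assumptions.
```python
# MSD-style thresholding: flag samples whose grade exceeds mean + n * standard deviation.
import numpy as np

grades = np.array([0.8, 1.1, 0.9, 1.3, 5.2, 1.0, 4.8, 0.7, 1.2])  # hypothetical grades
n = 1
threshold = grades.mean() + n * grades.std()
anomalous = np.flatnonzero(grades > threshold)
print(threshold, anomalous)   # indices of samples treated as anomalies
```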
Yuan, Yalin; Yabe, Mitsuyasu
2014-01-01
A source separation program for household kitchen waste has been in place in Beijing since 2010. However, the participation rate of residents is far from satisfactory. This study was carried out to identify residents' preferences for an improved management strategy for household kitchen waste source separation. We determine the preferences of residents in an ad hoc sample, according to their age level, for source separation services and their marginal willingness to accept compensation for the service attributes. We used a multinomial logit model to analyze the data, collected from 394 residents in the Haidian and Dongcheng districts of Beijing City through a choice experiment. The results show that there are differences in preferences for the service attributes among young, middle-aged, and old residents. Low compensation is not a major factor in persuading young and middle-aged residents to accept the proposed separation services. However, on average, most of them prefer services with the frequent-collection, evening, and plastic bag attributes and without an instructor. This study indicates that there is potential for the local government to improve the current separation services accordingly. PMID:25546279
Improving the Nulling Beamformer Using Subspace Suppression.
Rana, Kunjan D; Hämäläinen, Matti S; Vaina, Lucia M
2018-01-01
Magnetoencephalography (MEG) captures the magnetic fields generated by neuronal current sources with sensors outside the head. In MEG analysis these current sources are estimated from the measured data to identify the locations and time courses of neural activity. Since there is no unique solution to this so-called inverse problem, multiple source estimation techniques have been developed. The nulling beamformer (NB), a modified form of the linearly constrained minimum variance (LCMV) beamformer, is specifically used in the process of inferring interregional interactions and is designed to eliminate shared signal contributions, or cross-talk, between regions of interest (ROIs) that would otherwise interfere with the connectivity analyses. The nulling beamformer applies the truncated singular value decomposition (TSVD) to remove small signal contributions from a ROI to the sensor signals. However, ROIs with strong crosstalk will have high separating power in the weaker components, which may be removed by the TSVD operation. To address this issue we propose a new method, the nulling beamformer with subspace suppression (NBSS). This method, controlled by a tuning parameter, reweights the singular values of the gain matrix mapping from source to sensor space such that components with high overlap are reduced. By doing so, we are able to measure signals between nearby source locations with limited cross-talk interference, allowing for reliable cortical connectivity analysis between them. In two simulations, we demonstrated that NBSS reduces cross-talk while retaining ROIs' signal power, and has higher separating power than both the minimum norm estimate (MNE) and the nulling beamformer without subspace suppression. We also showed that NBSS successfully localized the auditory M100 event-related field in primary auditory cortex, measured from a subject undergoing an auditory localizer task, and suppressed cross-talk in a nearby region in the superior temporal sulcus.
Aligned and Unaligned Coherence: A New Diagnostic Tool
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2006-01-01
The study of combustion noise from turbofan engines has become important again as the noise from other sources, such as the fan and jet, is reduced. A method has been developed to help identify combustion noise spectra using an aligned and unaligned coherence technique. When used with the well-known three-signal coherent power method and the coherent power method, it provides new information by separating tonal information from random-process information. Examples are presented showing the underlying tonal structure that is buried under broadband noise and jet noise. The method is applied to data from a Pratt and Whitney PW4098 turbofan engine.
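The ordinary magnitude-squared coherence on which such diagnostics build can be sketched as follows (the aligned/unaligned refinement itself is not reproduced here); the tone frequency, noise level, and sampling rate are assumptions.
```python
# Coherence between two sensors sharing a tonal component buried in independent noise.
import numpy as np
from scipy.signal import coherence

fs = 4096.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
shared_tone = np.sin(2 * np.pi * 120.0 * t)            # common tonal source
x = shared_tone + 0.8 * rng.standard_normal(t.size)    # sensor 1
y = shared_tone + 0.8 * rng.standard_normal(t.size)    # sensor 2

f, Cxy = coherence(x, y, fs=fs, nperseg=4096)
print(f[np.argmax(Cxy)], Cxy.max())                    # coherence peaks near 120 Hz
```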
Blind source separation by sparse decomposition
NASA Astrophysics Data System (ADS)
Zibulevsky, Michael; Pearlmutter, Barak A.
2000-04-01
The blind source separation problem is to extract the underlying source signals from a set of their linear mixtures, where the mixing matrix is unknown. This situation is common, e.g., in acoustics, radio, and medical signal processing. We exploit the property of the sources of having a sparse representation in a corresponding signal dictionary. Such a dictionary may consist of wavelets, wavelet packets, etc., or be obtained by learning from a given family of signals. Starting from the maximum a posteriori framework, which is applicable to the case of more sources than mixtures, we derive a few other categories of objective functions, which provide faster and more robust computations when there are an equal number of sources and mixtures. Our experiments with artificial signals and with musical sounds demonstrate significantly better separation than other known techniques.
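A toy sketch of the sparse-decomposition idea for the determined two-mixture case: when the sources are sparse, the scatter of the mixtures concentrates along the mixing directions, which can be estimated by clustering the data angles and inverting the resulting mixing matrix. For simplicity the sketch works directly in the time domain rather than in a wavelet dictionary, and it is purely illustrative.
```python
# Sparse-BSS sketch: estimate mixing directions from angle clusters, then invert.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 5000
S = rng.standard_normal((2, n)) * (rng.random((2, n)) < 0.05)   # sparse sources (mostly zero)
A = np.array([[np.cos(0.3), np.cos(1.2)],
              [np.sin(0.3), np.sin(1.2)]])                      # columns = mixing directions
X = A @ S

active = np.linalg.norm(X, axis=0) > 1e-3                       # samples where a source fired
angles = np.arctan2(X[1, active], X[0, active]) % np.pi
centers = np.sort(KMeans(n_clusters=2, n_init=10, random_state=0)
                  .fit(angles.reshape(-1, 1)).cluster_centers_.ravel())
A_hat = np.vstack([np.cos(centers), np.sin(centers)])           # estimated mixing matrix
S_hat = np.linalg.solve(A_hat, X)                               # separated sources (up to order/scale)
```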
Fehr, M
2014-09-01
Business opportunities in the household waste sector in emerging economies still evolve around the activities of bulk collection and tipping with an open material balance. This research, conducted in Brazil, pursued the objective of shifting opportunities from tipping to reverse logistics in order to close the balance. To do this, it illustrated how specific knowledge of sorted waste composition and reverse logistics operations can be used to determine realistic temporal and quantitative landfill diversion targets in an emerging economy context. Experimentation constructed and confirmed the recycling trilogy that consists of source separation, collection infrastructure and reverse logistics. The study on source separation demonstrated the vital difference between raw and sorted waste compositions. Raw waste contained 70% biodegradable and 30% inert matter. Source separation produced 47% biodegradable, 20% inert and 33% mixed material. The study on collection infrastructure developed the necessary receiving facilities. The study on reverse logistics identified private operators capable of collecting and processing all separated inert items. Recycling activities for biodegradable material were scarce and erratic. Only farmers would take the material as animal feed. No composting initiatives existed. The management challenge was identified as stimulating these activities in order to complete the trilogy and divert the 47% source-separated biodegradable discards from the landfills. © The Author(s) 2014.
Yan, Xia; Wang, Li-Juan; Wu, Zhen; Wu, Yun-Long; Liu, Xiu-Xiu; Chang, Fang-Rong; Fang, Mei-Juan; Qiu, Ying-Kun
2016-10-15
Microbial metabolites represent an important source of bioactive natural products, but they often exhibit diverse chemical structures or complicated chemical compositions with low contents of active ingredients. Traditional separation methods rely mainly on an off-line combination of open-column chromatography and preparative high performance liquid chromatography (HPLC). However, the multi-step and prolonged separation procedure might lead to exposure to oxygen and structural transformation of metabolites. In the present work, a new two-dimensional separation workflow was developed for fast isolation and analysis of microbial metabolites from Chaetomium globosum SNSHI-5, a cytotoxic fungus derived from an extreme environment. The advantage of this analytical comprehensive two-dimensional liquid chromatography (2D-LC) lies in its ability to analyze the composition of the metabolites and to optimize the separation conditions for the preparative 2D-LC. Furthermore, gram-scale preparative 2D-LC separation of the crude fungus extract could be performed on a medium-pressure liquid chromatography × preparative high-performance liquid chromatography system under the optimized conditions. Interestingly, 12 cytochalasan derivatives, including two new compounds named cytoglobosin Ab (3) and isochaetoglobosin Db (8), were successfully obtained with high purity in a short period of time. The structures of the isolated metabolites were comprehensively characterized by HR ESI-MS and NMR. To be highlighted, this is the first report on the combination of analytical and preparative 2D-LC for the separation of microbial metabolites. The new workflow exhibited apparent advantages in separation efficiency and sample treatment capacity compared with conventional methods. Copyright © 2016 Elsevier B.V. All rights reserved.
Development of a test method for carbonyl compounds from stationary source emissions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhihua Fan; Peterson, M.R.; Jayanty, R.K.M.
1997-12-31
Carbonyl compounds have received increasing attention because of their important role in ground-level ozone formation. The common method used for the measurement of aldehydes and ketones is 2,4-dinitrophenylhydrazine (DNPH) derivatization followed by high performance liquid chromatography with ultraviolet detection (HPLC-UV). One of the problems associated with this method is the low recovery for certain compounds such as acrolein. This paper presents a study on the development of a test method for the collection and measurement of carbonyl compounds from stationary source emissions. This method involves collection of carbonyl compounds in impingers, conversion of the carbonyl compounds to a stable derivative with O-2,3,4,5,6-pentafluorobenzyl hydroxylamine hydrochloride (PFBHA), and separation and measurement by electron capture gas chromatography (GC-ECD). Eight compounds were selected for the evaluation of this method: formaldehyde, acetaldehyde, acrolein, acetone, butanal, methyl ethyl ketone (MEK), methyl isobutyl ketone (MIBK), and hexanal.
Quantification of sewer system infiltration using delta(18)O hydrograph separation.
Prigiobbe, V; Giulianelli, M
2009-01-01
The infiltration of parasitical water into two sewer systems in Rome (Italy) was quantified during a dry weather period. Infiltration was estimated using the hydrograph separation method with two water components and delta(18)O as a conservative tracer. The two water components were groundwater, the possible source of parasitical water within the sewer, and drinking water discharged into the sewer system. This method was applied at an urban catchment scale in order to test the effective water-tightness of two different sewer networks. The sampling strategy was based on an uncertainty analysis and the errors have been propagated using Monte Carlo random sampling. Our field applications showed that the method can be applied easily and quickly, but the error in the estimated infiltration rate can be up to 20%. The estimated infiltration into the recent sewer in Torraccia is 14% and can be considered negligible given the precision of the method, while the old sewer in Infernetto has an estimated infiltration of 50%.
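The underlying two-component mixing calculation can be written in a few lines; the delta(18)O values below are placeholders, not measurements from the Rome catchments.
```python
# Two-component isotope hydrograph separation: groundwater (infiltration) fraction.
d_sewage = -6.8     # delta(18)O of the dry-weather sewage flow, per mil (placeholder)
d_drinking = -6.2   # delta(18)O of the drinking-water component, per mil (placeholder)
d_gw = -7.5         # delta(18)O of local groundwater, per mil (placeholder)

f_gw = (d_sewage - d_drinking) / (d_gw - d_drinking)
print(f"parasitical (groundwater) fraction ~ {100 * f_gw:.0f}%")
```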
[Evoked Potential Blind Extraction Based on Fractional Lower Order Spatial Time-Frequency Matrix].
Long, Junbo; Wang, Haibin; Zha, Daifeng
2015-04-01
The impulsive electroencephalographic (EEG) noise in evoked potential (EP) signals is very strong, usually with heavy-tailed, infinite-variance characteristics, as observed under conditions such as acceleration impact and hypoxia in special tests. Such noise can be described by a stable distribution model. In this paper, the Wigner-Ville distribution (WVD) and pseudo Wigner-Ville distribution (PWVD) time-frequency distributions are improved based on fractional lower order moments, yielding the fractional lower order WVD (FLO-WVD) and fractional lower order PWVD (FLO-PWVD) time-frequency distributions, which are suitable for stable distribution processes. We also propose the concept of the fractional lower order spatial time-frequency distribution matrix (FLO-STFM). Therefore, combining this with time-frequency underdetermined blind source separation (TF-UBSS), we propose a new fractional lower order spatial time-frequency underdetermined blind source separation (FLO-TF-UBSS) method that can work in a stable distribution environment. We used the FLO-TF-UBSS algorithm to extract EPs. Simulations showed that the proposed method could effectively extract EPs from EEG noise, and the separated EP and EEG signals based on FLO-TF-UBSS were almost the same as the original signals, whereas blind separation based on TF-UBSS showed a certain deviation. The correlation coefficient of the FLO-TF-UBSS algorithm was higher than that of the TF-UBSS algorithm when the generalized signal-to-noise ratio (GSNR) changed from 10 dB to 30 dB and α varied from 1.06 to 1.94, and was approximately equal to 1. Hence, the proposed FLO-TF-UBSS method might be better than the second-order TF-UBSS algorithm for extracting EP signals in an EEG noise environment.
A Markov model for blind image separation by a mean-field EM algorithm.
Tonazzini, Anna; Bedini, Luigi; Salerno, Emanuele
2006-02-01
This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have been proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited) and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even space variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated, as well.
Porcaro, Camillo; Cottone, Carlo; Cancelli, Andrea; Salustri, Carlo; Tecchio, Franca
2018-04-01
High time resolution techniques are crucial for investigating the brain in action. Here, we propose a method to identify a section of the upper-limb motor area representation (FS_M1) by means of electroencephalographic (EEG) signals recorded during a completely passive condition (FS_M1bySS). We delivered a galvanic stimulation to the median nerve and applied to the EEG the semi-Blind Source Separation (s-BSS) algorithm named Functional Source Separation (FSS). In order to prove that FS_M1bySS is part of FS_M1, we also collected EEG in a motor condition, i.e. during a voluntary movement task (isometric handgrip), and in a rest condition, i.e. at rest with eyes open and closed. In the motor condition, we show that the cortico-muscular coherence (CMC) of FS_M1bySS does not differ from the FS_M1 CMC (0.04 for both sources). Moreover, we show that the ongoing whole-band activity of FS_M1bySS during the motor and both rest conditions displays high mutual information and time correlation with FS_M1 (above 0.900 and 0.800, respectively), whereas much smaller values with the primary somatosensory cortex [Formula: see text] (about 0.300 and 0.500, [Formula: see text]). FS_M1bySS as a marker of the upper-limb FS_M1 representation obtainable without the execution of an active motor task is a great achievement of the FSS algorithm, relevant in most experimental, neurological and psychiatric protocols.
Electrospray ion source with reduced analyte electrochemistry
Kertesz, Vilmos [Knoxville, TN; Van Berkel, Gary [Clinton, TN
2011-08-23
An electrospray ion (ESI) source and method capable of ionizing an analyte molecule without oxidizing or reducing the analyte of interest. The ESI source can include an emitter having a liquid conduit, a working electrode having a liquid contacting surface, a spray tip, a secondary working electrode, and a charge storage coating covering partially or fully the liquid contacting surface of the working electrode. The liquid conduit, the working electrode and the secondary working electrode can be in liquid communication. The electrospray ion source can also include a counter electrode proximate to, but separated from, said spray tip. The electrospray ion source can also include a power system for applying a voltage difference between the working electrodes and a counter-electrode. The power system can deliver pulsed voltage changes to the working electrodes during operation of said electrospray ion source to minimize the surface potential of the charge storage coating.
Electrospray ion source with reduced analyte electrochemistry
Kertesz, Vilmos; Van Berkel, Gary J
2013-07-30
An electrospray ion (ESI) source and method capable of ionizing an analyte molecule without oxidizing or reducing the analyte of interest. The ESI source can include an emitter having a liquid conduit, a working electrode having a liquid contacting surface, a spray tip, a secondary working electrode, and a charge storage coating covering partially or fully the liquid contacting surface of the working electrode. The liquid conduit, the working electrode and the secondary working electrode can be in liquid communication. The electrospray ion source can also include a counter electrode proximate to, but separated from, said spray tip. The electrospray ion source can also include a power system for applying a voltage difference between the working electrodes and a counter-electrode. The power system can deliver pulsed voltage changes to the working electrodes during operation of said electrospray ion source to minimize the surface potential of the charge storage coating.
NASA Astrophysics Data System (ADS)
Bylyku, Elida
2009-04-01
In Albania in recent years it has been of increasing interest to determine various pollutants in the environment and their possible effects on human health. The radiochemical procedure used to identify Pu, Am, U, Th, and Sr radioisotopes in soil, sediment, water, coal, and milk samples is described. The analysis is carried out in the presence of respective tracer solutions and combines the procedure for Pu analysis based on anion exchange, the selective method for Sr isolation based on extraction chromatography using Sr-Spec resin, and the application of the TRU-Spec column for separation of Am fraction. An acid digestion method has been applied for the decomposition of samples. The radiochemical procedure involves the separation of Pu from Th, Am, and Sr by anion exchange, followed by the preconcentration of Am and Sr by coprecipitation with calcium oxalate. Am is separated from Sr by extraction chromatography. Uranium is separated from the bulk elements by liquid-liquid extraction using UTEVA® resin. Thin sources for alpha spectrometric measurements are prepared by microprecipitation with NdF3. Two International Atomic Energy Agency reference materials were analyzed in parallel with the samples.
Analysis of spectrally resolved autofluorescence images by support vector machines
NASA Astrophysics Data System (ADS)
Mateasik, A.; Chorvat, D.; Chorvatova, A.
2013-02-01
Spectral analysis of autofluorescence images of isolated cardiac cells was performed to evaluate and classify the metabolic state of the cells with respect to their responses to metabolic modulators. The classification was done using a machine learning approach based on a support vector machine with a set of automatically calculated features from the recorded spectral profiles of the autofluorescence images. This classification method was compared with the classical approach, in which the individual spectral components contributing to cell autofluorescence were estimated by spectral analysis, namely by blind source separation using non-negative matrix factorization. Comparison of both methods showed that machine learning can effectively classify the spectrally resolved autofluorescence images without the need for detailed knowledge about the sources of autofluorescence and their spectral properties.
Coincident Detection Significance in Multimessenger Astronomy
NASA Astrophysics Data System (ADS)
Ashton, G.; Burns, E.; Dal Canton, T.; Dent, T.; Eggenstein, H.-B.; Nielsen, A. B.; Prix, R.; Was, M.; Zhu, S. J.
2018-06-01
We derive a Bayesian criterion for assessing whether signals observed in two separate data sets originate from a common source. The Bayes factor for a common versus unrelated origin of signals includes an overlap integral of the posterior distributions over the common-source parameters. Focusing on multimessenger gravitational-wave astronomy, we apply the method to the spatial and temporal association of independent gravitational-wave and electromagnetic (or neutrino) observations. As an example, we consider the coincidence between the recently discovered gravitational-wave signal GW170817 from a binary neutron star merger and the gamma-ray burst GRB 170817A: we find that the common-source model is enormously favored over a model describing them as unrelated signals.
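Schematically, and with notation assumed here rather than quoted from the paper, the Bayes factor for a common versus an unrelated origin can be written as an overlap integral of the two single-messenger posteriors over the shared parameters (for example, sky position and event time), with the common prior in the denominator:

```latex
\mathcal{B}_{\mathrm{C/U}}
  = \frac{P(d_{\mathrm{GW}}, d_{\mathrm{EM}} \mid \mathcal{H}_{\mathrm{common}})}
         {P(d_{\mathrm{GW}}, d_{\mathrm{EM}} \mid \mathcal{H}_{\mathrm{unrelated}})}
  = \int \frac{p(\theta \mid d_{\mathrm{GW}})\, p(\theta \mid d_{\mathrm{EM}})}{\pi(\theta)}\,
    \mathrm{d}\theta .
```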
Dong, Jun; Ni, Mingjiang; Chi, Yong; Zou, Daoan; Fu, Chao
2013-08-01
In China, the continuously increasing amount of municipal solid waste (MSW) has resulted in an urgent need for changing the current municipal solid waste management (MSWM) system based on mixed collection. A pilot program focusing on source-separated MSW collection was thus launched (2010) in Hangzhou, China, to lessen the related environmental loads. And greenhouse gas (GHG) emissions (Kyoto Protocol) are singled out in particular. This paper uses life cycle assessment modeling to evaluate the potential environmental improvement with regard to GHG emissions. The pre-existing MSWM system is assessed as baseline, while the source separation scenario is compared internally. Results show that 23 % GHG emissions can be decreased by source-separated collection compared with the base scenario. In addition, the use of composting and anaerobic digestion (AD) is suggested for further optimizing the management of food waste. 260.79, 82.21, and -86.21 thousand tonnes of GHG emissions are emitted from food waste landfill, composting, and AD, respectively, proving the emission reduction potential brought by advanced food waste treatment technologies. Realizing the fact, a modified MSWM system is proposed by taking AD as food waste substitution option, with additional 44 % GHG emissions saved than current source separation scenario. Moreover, a preliminary economic assessment is implemented. It is demonstrated that both source separation scenarios have a good cost reduction potential than mixed collection, with the proposed new system the most cost-effective one.
Stormflow-hydrograph separation based on isotopes: the thrill is gone--what's next?
Burns, Douglas A.
2002-01-01
Beginning in the 1970s, the promise of a new method for separating stormflow hydrographs using 18O, 2H, and 3H proved an irresistible temptation, and was a vast improvement over graphical separation and solute tracer methods that were prevalent at the time. Eventually, hydrologists realized that this new method entailed a plethora of assumptions about temporal and spatial homogeneity of isotopic composition (many of which were commonly violated). Nevertheless, hydrologists forged ahead with dozens of isotope-based hydrograph-separation studies that were published in the 1970s and 1980s. Hortonian overland flow was presumed dead. By the late 1980s, the new isotope-based hydrograph separation technique had moved into adolescence, accompanied by typical adolescent problems such as confusion and a search for identity. As experienced hydrologists continued to use the isotope technique to study stormflow hydrology in forested catchments in humid climates, their younger peers followed obligingly, again and again. Was Hortonian overland flow really dead and forgotten, though? What about catchments in which people live and work? And what about catchments in dry climates and the tropics? How useful were study results when several of the assumptions about the homogeneity of source waters were commonly violated? What if two components could not explain the variation of isotopic composition measured in the stream during stormflow? And what about uncertainty? As with many new tools, once the initial shine wore off, the limitations of the method became a concern, one of which was that isotope-based hydrograph separations alone could not reveal much about the flow paths by which water arrives at a stream channel during storms.
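For reference, the two-component separation at the heart of these studies is a simple tracer mass balance; with Q the water flux and delta the isotopic composition of the stream, the pre-event ("old") water, and the event ("new") water, the old-water contribution is

```latex
Q_{\mathrm{old}} = Q_{\mathrm{stream}}\,
  \frac{\delta_{\mathrm{stream}} - \delta_{\mathrm{new}}}{\delta_{\mathrm{old}} - \delta_{\mathrm{new}}},
\qquad
Q_{\mathrm{stream}} = Q_{\mathrm{old}} + Q_{\mathrm{new}},
```

which presumes exactly the constant, well-separated end-member compositions whose violation the essay describes.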
Classifying the embedded young stellar population in Perseus and Taurus and the LOMASS database
NASA Astrophysics Data System (ADS)
Carney, M. T.; Yıldız, U. A.; Mottram, J. C.; van Dishoeck, E. F.; Ramchandani, J.; Jørgensen, J. K.
2016-02-01
Context. The classification of young stellar objects (YSOs) is typically done using the infrared spectral slope or bolometric temperature, but either can result in contamination of samples. More accurate methods to determine the evolutionary stage of YSOs will improve the reliability of statistics for the embedded YSO population and provide more robust stage lifetimes. Aims: We aim to separate the truly embedded YSOs from more evolved sources. Methods: Maps of HCO+ J = 4-3 and C18O J = 3-2 were observed with HARP on the James Clerk Maxwell Telescope (JCMT) for a sample of 56 candidate YSOs in Perseus and Taurus in order to characterize the presence and morphology of emission from high density (n_crit > 10^6 cm^-3) and high column density gas, respectively. These are supplemented with archival dust continuum maps observed with SCUBA on the JCMT and Herschel PACS to compare the morphology of the gas and dust in the protostellar envelopes. The spatial concentration of HCO+ J = 4-3 and 850 μm dust emission is used to classify the embedded nature of YSOs. Results: Approximately 30% of Class 0+I sources in Perseus and Taurus are not Stage I, but are likely to be more evolved Stage II pre-main sequence (PMS) stars with disks. An additional 16% are confused sources with an uncertain evolutionary stage. Outflows are found to make a negligible contribution to the integrated HCO+ intensity for the majority of sources in this study. Conclusions: Separating classifications by cloud reveals that a high percentage of the Class 0+I sources in the Perseus star forming region are truly embedded Stage I sources (71%), while the Taurus cloud hosts a majority of evolved PMS stars with disks (68%). The concentration factor method is useful to correct misidentified embedded YSOs, yielding higher accuracy for YSO population statistics and Stage timescales. Current estimates (0.54 Myr) may overpredict the Stage I lifetime on the order of 30%, resulting in timescales down to 0.38 Myr for the embedded phase.
Unidentified Gamma-Ray Sources: Hunting Gamma-Ray Blazars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massaro, F.; D'Abrusco, R.; Tosti, G.
2012-04-02
One of the main scientific objectives of the ongoing Fermi mission is unveiling the nature of the unidentified {gamma}-ray sources (UGSs). Despite the large improvements of Fermi in the localization of {gamma}-ray sources with respect to the past {gamma}-ray missions, about one third of the Fermi-detected objects are still not associated to low energy counterparts. Recently, using the Wide-field Infrared Survey Explorer (WISE) survey, we discovered that blazars, the rarest class of Active Galactic Nuclei and the largest population of {gamma}-ray sources, can be recognized and separated from other extragalactic sources on the basis of their infrared (IR) colors. Based on this result, we designed an association method for the {gamma}-ray sources to recognize if there is a blazar candidate within the positional uncertainty region of a generic {gamma}-ray source. With this new IR diagnostic tool, we searched for {gamma}-ray blazar candidates associated to the UGS sample of the second Fermi {gamma}-ray catalog (2FGL). We found that our method associates at least one {gamma}-ray blazar candidate as a counterpart to each of 156 out of 313 UGSs analyzed. These new low-energy candidates have the same IR properties as the blazars associated to {gamma}-ray sources in the 2FGL catalog.
UNIDENTIFIED {gamma}-RAY SOURCES: HUNTING {gamma}-RAY BLAZARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massaro, F.; Ajello, M.; D'Abrusco, R.
2012-06-10
One of the main scientific objectives of the ongoing Fermi mission is unveiling the nature of unidentified {gamma}-ray sources (UGSs). Despite the major improvements of Fermi in the localization of {gamma}-ray sources with respect to the past {gamma}-ray missions, about one-third of the Fermi-detected objects are still not associated with low-energy counterparts. Recently, using the Wide-field Infrared Survey Explorer survey, we discovered that blazars, the rarest class of active galactic nuclei and the largest population of {gamma}-ray sources, can be recognized and separated from other extragalactic sources on the basis of their infrared (IR) colors. Based on this result, we designed an association method for the {gamma}-ray sources to recognize if there is a blazar candidate within the positional uncertainty region of a generic {gamma}-ray source. With this new IR diagnostic tool, we searched for {gamma}-ray blazar candidates associated with the UGS sample of the second Fermi {gamma}-ray LAT catalog (2FGL). We found that our method associates at least one {gamma}-ray blazar candidate as a counterpart to each of 156 out of 313 UGSs analyzed. These new low-energy candidates have the same IR properties as the blazars associated with {gamma}-ray sources in the 2FGL catalog.
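The color-based selection can be sketched as follows; the color cuts below are placeholders standing in for the published blazar locus, not the boundaries actually used in the association method.

```python
import numpy as np

# Hypothetical WISE magnitudes for a handful of sources: columns are the
# W1 (3.4 um), W2 (4.6 um) and W3 (12 um) bands.
mags = np.array([
    [14.2, 13.4, 10.9],
    [15.1, 14.9, 13.8],
    [13.7, 12.9, 10.3],
])

c12 = mags[:, 0] - mags[:, 1]        # W1 - W2 color
c23 = mags[:, 1] - mags[:, 2]        # W2 - W3 color

# Placeholder color box standing in for the blazar locus; the real selection
# region must be taken from the published analysis.
is_candidate = (c12 > 0.5) & (c12 < 1.3) & (c23 > 1.8) & (c23 < 3.5)
print(list(zip(c12.round(2), c23.round(2), is_candidate)))
```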
Techniques for the conversion to carbon dioxide of oxygen from dissolved sulfate in thermal waters
Nehring, N.L.; Bowen, P.A.; Truesdell, A.H.
1977-01-01
The fractionation of oxygen isotopes between dissolved sulfate ions and water provides a useful geothermometer for geothermal waters. The oxygen isotope composition of dissolved sulfate may also be used to indicate the source of the sulfate and processes of formation. The methods described here for separation, purification and reduction of sulfate to prepare carbon dioxide for mass spectrometric analysis are modifications of methods by Rafter (1967), Mizutani (1971), Sakai and Krouse (1971), and Mizutani and Rafter (1969). © 1976.
NASA Astrophysics Data System (ADS)
Trugman, Daniel T.; Shearer, Peter M.
2017-04-01
Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self-similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function-based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune-type stress drop with seismic moment but that this observed deviation from self-similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
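A toy numpy sketch of the alternating partition into source and receiver terms is given below; the travel-time-dependent path term, the weighting, and the uncertainty estimation of the actual method are omitted, and all arrays are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_src, n_rec, n_freq = 50, 20, 30

# Synthetic "true" log-spectra (placeholders, not real waveform data).
true_src = rng.normal(size=(n_src, n_freq))
true_rec = rng.normal(size=(n_rec, n_freq))
logspec = (true_src[:, None, :] + true_rec[None, :, :]
           + 0.1 * rng.normal(size=(n_src, n_rec, n_freq)))

# Alternating estimation of source and receiver terms.
src = np.zeros((n_src, n_freq))
rec = np.zeros((n_rec, n_freq))
for _ in range(20):
    rec = (logspec - src[:, None, :]).mean(axis=0)
    rec -= rec.mean(axis=0)              # fix the additive degeneracy
    src = (logspec - rec[None, :, :]).mean(axis=1)

resid = logspec - src[:, None, :] - rec[None, :, :]
print("rms residual:", np.sqrt((resid ** 2).mean()))
```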
Identifying organic nitrogen compounds in Rocky Mountain National Park aerosols
NASA Astrophysics Data System (ADS)
Beem, K. B.; Desyaterik, Y.; Ozel, M. Z.; Hamilton, J. F.; Collett, J. L.
2010-12-01
Nitrogen deposition is an important issue in Rocky Mountain National Park (RMNP). While inorganic nitrogen contributions to the ecosystems in this area have been studied, the sources of organic nitrogen are still largely unknown. To better understand the potential sources of organic nitrogen, filter samples were collected and analyzed for organic nitrogen species. Samples were collected in RMNP using a Thermo Fisher Scientific TSP (total suspended particulate) high-volume sampler with a PM2.5 impactor plate from April to November 2008. The samples presented the opportunity to compare two different methods for identification of individual organic nitrogen species. The first type of analysis was performed with a comprehensive two-dimensional gas chromatography (GCxGC) system using a nitrogen chemiluminescence detector (NCD). The filter samples were spiked with propanil in dichloromethane as an internal standard and were then extracted in water followed by solid phase extraction. The GCxGC system was comprised of a volatility-based separation (DB5 column) followed by a polarity-based separation (RXI-17 column). A NCD was used to specifically detect nitrogen compounds and remove the complex background matrix. Individual standards were used to identify peaks by comparing retention times. This method has the added benefit of an equimolar response for nitrogen, so only a single calibration is needed for all species. In the second analysis, a portion of the same filter samples was extracted in DI water and analyzed with liquid chromatography coupled with mass spectrometry (LC/MS). The separation was performed using a C18 column and a water-methanol gradient elution. Electrospray ionization into a time-of-flight mass spectrometer was used for detection. High accuracy mass measurement allowed unambiguous assignment of the elemental composition of the resulting ions. Positive and negative polarities were used since amines tend to show up in positive mode and nitrates in negative mode. The differences between these two methods in the number and identity of the species detected are important for planning future analyses of organic nitrogen compounds. In addition, these data provide new insight into the potential sources of organic nitrogen in RMNP. Using the GCxGC method, 39 organic nitrogen species were detected and 20 were identified. Identified species include several types of amines and phenols. The LC/MS method identified several types of cresols, amines, and nitrates.
Ammonia producing engine utilizing oxygen separation
Easley, Jr., William Lanier; Coleman, Gerald Nelson [Petersborough, GB]; Robel, Wade James [Peoria, IL]
2008-12-16
A power system is provided having a power source, a first power source section with a first intake passage and a first exhaust passage, a second power source section with a second intake passage and a second exhaust passage, and an oxygen separator. The second intake passage may be fluidly isolated from the first intake passage.
NASA Technical Reports Server (NTRS)
Wolgemuth, D. J.; Gizang-Ginsberg, E.; Engelmyer, E.; Gavin, B. J.; Ponzetto, C.
1985-01-01
The use of a self-contained unit-gravity cell separation apparatus for separation of populations of mouse testicular cells is described. The apparatus, a Celsep (TM), maximizes the unit area over which sedimentation occurs, reduces the amount of separation medium employed, and is quite reproducible. Cells thus isolated have been good sources for isolation of DNA, and notably, high molecular weight RNA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karim Ghani, Wan Azlina Wan Ab., E-mail: wanaz@eng.upm.edu.my; Rusli, Iffah Farizan, E-mail: iffahrusli@yahoo.com; Biak, Dayang Radiah Awang, E-mail: dayang@eng.upm.edu.my
Highlights: ► The theory of planned behaviour (TPB) was applied to identify the factors influencing participation in source separation of food waste, using self-administered questionnaires. ► The findings suggest several implications for the development and implementation of a waste separation at home programme. ► The analysis indicates that attitude towards waste separation is the main predictor, which in turn could be a significant predictor of the respondents' actual food waste separation behaviour. ► To date, no similar study has been reported elsewhere, and the findings will be beneficial to local authorities as an indicator in designing campaigns that promote the use of waste separation programmes and reinforce positive attitudes. - Abstract: Tremendous increases in biodegradable (food) waste generation significantly impact the local authorities, who are responsible for managing, treating and disposing of this waste. Separation of food waste at its generation source is identified as an effective means of reducing the amount of food waste sent to landfill; the separated waste can be reused as feedstock for downstream treatment processes, namely composting or anaerobic digestion. However, these efforts will only succeed with positive attitudes and high participation rates among the public. Thus, a social survey (using questionnaires) to analyse the public's views and the factors influencing participation in source separation of food waste in households, based on the theory of planned behaviour (TPB), was performed in June and July 2011 among selected staff in Universiti Putra Malaysia, Serdang, Selangor. The survey demonstrates that the public has positive intentions to participate provided that the opportunities, facilities and knowledge on waste separation at source are adequately prepared by the respective local authorities. Furthermore, good moral values and situational factors such as storage convenience and collection times also encourage the public's involvement and, consequently, the participation rate. The findings from this study may provide useful indicators to the waste management authorities in Malaysia in identifying mechanisms for the future development and implementation of food waste source separation activities in household programmes and communication campaigns advocating the use of these programmes.
Core-shifts and proper-motion constraints in the S5 polar cap sample at the 15 and 43 GHz bands
NASA Astrophysics Data System (ADS)
Abellán, F. J.; Martí-Vidal, I.; Marcaide, J. M.; Guirado, J. C.
2018-06-01
We have studied a complete radio sample of active galactic nuclei with the very-long-baseline-interferometry (VLBI) technique and for the first time successfully obtained high-precision phase-delay astrometry at Q band (43 GHz) from observations acquired in 2010. We have compared our astrometric results with those obtained with the same technique at U band (15 GHz) from data collected in 2000. The differences in source separations among all the source pairs observed in common at the two epochs are compatible at the 1σ level between U and Q bands. With the benefit of quasi-simultaneous U and Q band observations in 2010, we have studied chromatic effects (core-shift) at the radio source cores with three different methods. The magnitudes of the core-shifts are of the same order (about 0.1 mas) for all methods. However, some discrepancies arise in the orientation of the core-shifts determined through the different methods. In some cases these discrepancies are due to insufficient signal for the method used. In others, the discrepancies reflect assumptions of the methods and could be explained by curvatures in the jets and departures from conical jets.
Probabilistic drug connectivity mapping
2014-01-01
Background The aim of connectivity mapping is to match drugs using drug-treatment gene expression profiles from multiple cell lines. This can be viewed as an information retrieval task, with the goal of finding the most relevant profiles for a given query drug. We infer the relevance for retrieval by data-driven probabilistic modeling of the drug responses, resulting in probabilistic connectivity mapping, and further consider the available cell lines as different data sources. We use a special type of probabilistic model to separate what is shared and specific between the sources, in contrast to earlier connectivity mapping methods that have intentionally aggregated all available data, neglecting information about the differences between the cell lines. Results We show that the probabilistic multi-source connectivity mapping method is superior to alternatives in finding functionally and chemically similar drugs from the Connectivity Map data set. We also demonstrate that an extension of the method is capable of retrieving combinations of drugs that match different relevant parts of the query drug response profile. Conclusions The probabilistic modeling-based connectivity mapping method provides a promising alternative to earlier methods. Principled integration of data from different cell lines helps to identify relevant responses for specific drug repositioning applications. PMID:24742351
Development and evaluation of modified envelope correlation method for deep tectonic tremor
NASA Astrophysics Data System (ADS)
Mizuno, N.; Ide, S.
2017-12-01
We develop a new location method for deep tectonic tremors, as an improvement of the widely used envelope correlation method, and apply it to construct a tremor catalog in western Japan. Using the cross-correlation functions as objective functions and weighting the components of the data by the inverse of the error variances, the envelope cross-correlation method is redefined as a maximum likelihood method. This method is also capable of multiple source detection, because when several events occur almost simultaneously, they appear as local maxima of the likelihood. The average of the weighted cross-correlation functions, defined as ACC, is a nonlinear function whose variable is the position of a deep tectonic tremor. The optimization method has two steps. First, we fix the source depth to 30 km and use a grid search with 0.2 degree intervals to find the maxima of ACC, which are candidate event locations. Then, using each of the candidate locations as an initial value, we apply a gradient method to determine the horizontal and vertical components of the hypocenter. Sometimes, several source locations are determined in a time window of 5 minutes. We estimate that the resolution, defined as the minimum distance at which sources can be detected separately by the location method, is about 100 km. The validity of this estimation is confirmed by a numerical test using synthetic waveforms. Applied to continuous seismograms in western Japan for over 10 years, the new method detected 27% more tremors than the previous method, owing to the multiple detection and the improvement of accuracy from the appropriate weighting scheme.
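The two-step search can be sketched generically; the objective below is a smooth toy stand-in for ACC (no envelopes or station geometry), intended only to show the coarse 0.2-degree grid search for local maxima followed by gradient refinement of each candidate, with two maxima mimicking two simultaneously active sources.

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective standing in for ACC: two well-separated peaks mimic two tremor
# sources active in the same 5-minute window. Coordinates are degrees.
def acc(p):
    peaks = [np.array([133.0, 33.0]), np.array([135.5, 34.2])]
    return sum(np.exp(-np.sum((p - c) ** 2) / 0.05) for c in peaks)

# Step 1: coarse grid search (0.2 degree spacing) for local maxima of ACC.
lons = np.arange(132.0, 137.0, 0.2)
lats = np.arange(32.0, 36.0, 0.2)
grid = np.array([[acc(np.array([lo, la])) for la in lats] for lo in lons])
candidates = []
for i in range(1, len(lons) - 1):
    for j in range(1, len(lats) - 1):
        if grid[i, j] > 0.1 and grid[i, j] == grid[i-1:i+2, j-1:j+2].max():
            candidates.append(np.array([lons[i], lats[j]]))

# Step 2: gradient-based refinement from each candidate (depth fixed in this toy).
for p0 in candidates:
    res = minimize(lambda p: -acc(p), p0, method="L-BFGS-B")
    print("refined source location:", res.x.round(3))
```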
Cultured Cortical Neurons Can Perform Blind Source Separation According to the Free-Energy Principle
Isomura, Takuya; Kotani, Kiyoshi; Jimbo, Yasuhiko
2015-01-01
Blind source separation is the computation underlying the cocktail party effect––a partygoer can distinguish a particular talker’s voice from the ambient noise. Early studies indicated that the brain might use blind source separation as a signal processing strategy for sensory perception and numerous mathematical models have been proposed; however, it remains unclear how the neural networks extract particular sources from a complex mixture of inputs. We discovered that neurons in cultures of dissociated rat cortical cells could learn to represent particular sources while filtering out other signals. Specifically, the distinct classes of neurons in the culture learned to respond to the distinct sources after repeating training stimulation. Moreover, the neural network structures changed to reduce free energy, as predicted by the free-energy principle, a candidate unified theory of learning and memory, and by Jaynes’ principle of maximum entropy. This implicit learning can only be explained by some form of Hebbian plasticity. These results are the first in vitro (as opposed to in silico) demonstration of neural networks performing blind source separation, and the first formal demonstration of neuronal self-organization under the free energy principle. PMID:26690814
Spectral analysis methods for vehicle interior vibro-acoustics identification
NASA Astrophysics Data System (ADS)
Hosseini Fouladi, Mohammad; Nor, Mohd. Jailani Mohd.; Ariffin, Ahmad Kamal
2009-02-01
Noise has various effects on the comfort, performance and health of humans. Sounds are analysed by the human brain based on their frequencies and amplitudes. In a dynamic system, the transmission of sound and vibration depends on the frequency and direction of the input motion and the characteristics of the output. Automotive manufacturers therefore invest considerable effort and money to improve and enhance the vibro-acoustic performance of their products. The enhancement effort may be very difficult and time-consuming if one relies only on the 'trial and error' method without prior knowledge about the sources themselves. Complex noise inside a vehicle cabin originates from various sources and travels through many pathways. The first stage of sound quality refinement is to find the source. It is vital for automotive engineers to identify the dominant noise sources, such as engine noise, exhaust noise and noise due to vibration transmission inside the vehicle. The purpose of this paper is to find the vibro-acoustic sources of noise in a passenger vehicle compartment. The implementation of the spectral analysis method is much faster than 'trial and error' methods, in which parts must be separated to measure the transfer functions. Also, by using the spectral analysis method, signals can be recorded in real operational conditions, which leads to more consistent results. A multi-channel analyser is utilised to measure and record the vibro-acoustic signals. Computational algorithms are also employed to identify the contribution of various sources towards the measured interior signal. These achievements can be utilised to detect, control and optimise the interior noise performance of road transport vehicles.
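One common spectral-analysis tool for this kind of contribution ranking is the ordinary coherence function between a reference signal at each candidate source and the interior microphone; the sketch below illustrates that idea with synthetic signals and is not necessarily the algorithm used in the paper.

```python
import numpy as np
from scipy.signal import coherence

fs = 2048                         # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)

# Hypothetical reference signals measured at two sources (e.g. engine mount and
# exhaust hanger) and an interior microphone that mixes them with extra noise.
engine = np.sin(2 * np.pi * 120 * t) + 0.2 * rng.normal(size=t.size)
exhaust = np.sin(2 * np.pi * 85 * t) + 0.2 * rng.normal(size=t.size)
interior = 1.0 * engine + 0.3 * exhaust + 0.5 * rng.normal(size=t.size)

# Ordinary coherence between each candidate source and the interior signal
# indicates how much of the interior spectrum each pathway can explain.
for name, ref in [("engine", engine), ("exhaust", exhaust)]:
    f, coh = coherence(ref, interior, fs=fs, nperseg=4096)
    print(name, "peak coherence %.2f at %.0f Hz" % (coh.max(), f[np.argmax(coh)]))
```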
Zou, Yonghong; Wang, Lixia; Christensen, Erik R
2015-10-01
This work was intended to explain the challenges of the fingerprint-based source apportionment method for polycyclic aromatic hydrocarbons (PAH) in the aquatic environment, and to illustrate a practical and robust solution. The PAH data detected in the sediment cores from the Illinois River provide the basis of this study. Principal component analysis (PCA) separates PAH compounds into two groups reflecting their possible airborne transport patterns, but it is not able to suggest specific sources. Not all positive matrix factorization (PMF) determined sources are distinguishable due to the variability of source fingerprints. However, they constitute useful suggestions for inputs for a Bayesian chemical mass balance (CMB) analysis. The Bayesian CMB analysis takes into account the measurement errors as well as the variations of source fingerprints, and provides a credible source apportionment. Major PAH sources for Illinois River sediments are traffic (35%), coke oven (24%), coal combustion (18%), and wood combustion (14%). Copyright © 2015. Published by Elsevier Ltd.
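The PCA step can be illustrated with a short sketch on a placeholder concentration matrix; as noted above, the resulting components group compounds with similar behaviour but are not themselves source fingerprints.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
# Hypothetical sediment-core data: rows are core sections, columns are
# concentrations of individual PAH compounds (placeholder values).
n_sections, n_pah = 40, 16
profiles = np.abs(rng.normal(size=(n_sections, n_pah)))

# PCA on standardized profiles groups compounds with similar down-core
# behaviour; it does not by itself identify specific emission sources.
z = (profiles - profiles.mean(axis=0)) / profiles.std(axis=0)
pca = PCA(n_components=2).fit(z)
print("explained variance ratios:", pca.explained_variance_ratio_.round(2))
print("loadings of first component:", pca.components_[0].round(2))
```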
Dodds, W. K.; Collins, S. M.; Hamilton, S. K.; ...
2014-10-01
Analyses of 21 15N stable isotope tracer experiments, designed to examine food web dynamics in streams around the world, indicated that the isotopic composition of food resources assimilated by primary consumers (mostly invertebrates) poorly reflected the presumed food sources. Modeling indicated that consumers assimilated only 33–50% of the N available in sampled food sources such as decomposing leaves, epilithon, and fine particulate detritus over feeding periods of weeks or more. Thus, common methods of sampling food sources consumed by animals in streams do not sufficiently reflect the pool of N they assimilate. Lastly, isotope tracer studies, combined with modeling and food separation techniques, can improve estimation of N pools in food sources that are assimilated by consumers.
NASA Astrophysics Data System (ADS)
Xu, B.
2017-12-01
Interferometric Synthetic Aperture Radar (InSAR) has the advantage of high spatial resolution, which enables measuring line-of-sight (LOS) surface displacements with nearly complete spatial continuity, and a satellite's perspective that permits viewing large areas of Earth's surface quickly and efficiently. However, using InSAR to observe long-wavelength and small-magnitude deformation signals is still significantly limited by various unmodeled error sources, i.e., atmospheric delays, orbit-induced errors, and Digital Elevation Model (DEM) errors. Independent component analysis (ICA) is a probabilistic method for separating linearly mixed signals generated by different underlying physical processes. The signal sources which form the interferograms are statistically independent both in space and in time; thus, they can be separated by the ICA approach. The seismic behavior in the Los Angeles Basin is active, and the basin has experienced numerous moderate to large earthquakes since the early Pliocene. Hence, understanding the seismotectonic deformation in the Los Angeles Basin is important for analyzing seismic behavior. Compared with tectonic deformation, nontectonic deformation due to groundwater and oil extraction may be mainly responsible for the surface deformation in the Los Angeles Basin. Using the small baseline subset (SBAS) InSAR method, we extracted the surface deformation time series in the Los Angeles Basin with a time span of 7 years (September 27, 2003-September 25, 2010). Then, we successfully separated the atmospheric noise from the InSAR time series and detected different processes caused by different mechanisms.
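A toy sketch of the ICA separation idea is shown below with synthetic pixel time series (not the SBAS products used in the study): a seasonal, atmosphere-like oscillation and a steady subsidence-like trend are mixed across pixels and then recovered blindly.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
t = np.linspace(0, 7, 85)                      # ~monthly epochs over 7 years

# Synthetic temporal sources standing in for the processes discussed above.
seasonal = np.sin(2 * np.pi * t)               # atmosphere-like seasonal signal
trend = -0.8 * t                               # subsidence-like trend
sources = np.c_[seasonal, trend]

# Each of 200 pixels mixes the two sources with its own weights, plus noise.
mixing = rng.normal(size=(2, 200))
pixels = sources @ mixing + 0.05 * rng.normal(size=(85, 200))

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(pixels)          # columns ~ seasonal and trend, up to sign/scale
print(recovered.shape)
```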
The study of the plasma jets of lead and silver simulating spent nuclear fuel components
NASA Astrophysics Data System (ADS)
Antonov, N. N.; Gavrikov, A. V.; Smirnov, V. P.; Liziakin, G. D.; Usmanov, R. A.; Vorona, N. A.; Timirkhanov, R. A.
2018-01-01
One of the tasks that must be solved to develop a spent nuclear fuel (SNF) plasma separation method is the creation of a plasma source of substances simulating SNF components. Plasma of a diffuse arc discharge in a magnetic field with an incandescent cathode was considered in this paper as such a source. The discharge was initiated in the vapor of model substances (lead and silver). Evaporation was carried out by crucible induction heating. Current-voltage characteristics of the discharge were obtained. Spectral analysis of the plasma jet radiation and double-probe characteristic measurements in the area behind the anode were carried out. The minimum potential difference between the anode and cathode reached a value of about 7 V at a current of about 1 A. When the potential difference in the discharge gap was close to 30 V (4.5 A) and 10 V (5.2 A), the electron temperature in the plasma jet was 5-7 eV and 1-3 eV, respectively. The plasma density in the jets took values from 10^11 cm^-3 to 10^12 cm^-3. The obtained results indicate the possibility of using this type of discharge for approbation of the SNF plasma separation method.
HEALTH TECHNOLOGY ASSESSMENT OF MEDICAL DEVICES IN EUROPE: PROCESSES, PRACTICES, AND METHODS.
Fuchs, Sabine; Olberg, Britta; Panteli, Dimitra; Busse, Reinhard
2016-01-01
To review and compare current Health Technology Assessment (HTA) activities for medical devices (MDs) across European HTA institutions. A comprehensive approach was adopted to identify institutions involved in HTA in European countries. We systematically searched institutional Web sites and other online sources by using a structured tool to extract information on the role and link to decision making, structure, scope, process, methodological approach, and available HTA reports for each included institution. Information was obtained from eighty-four institutions, forty-seven of which were analyzed. Fifty-four methodological documents from twenty-three agencies in eighteen countries were identified. Only five agencies had separate documents for the assessment of MDs. A few agencies made separate provisions for the assessment of MDs in their general methods. The amount of publicly available HTA reports on MDs varied by device category and agency remit. Despite growing consensus on their importance and international initiatives, such as the EUnetHTA Core Model®, specific tools for the assessment of MDs are rarely developed and implemented at the national level. Separate additional signposts incorporated in existing general methods guides may be sufficient for the evaluation of MDs.
NASA Astrophysics Data System (ADS)
Panin, V. Y.; Aykac, M.; Casey, M. E.
2013-06-01
The simultaneous PET data reconstruction of emission activity and attenuation coefficient distribution is presented, where the attenuation image is constrained by exploiting an external transmission source. Data are acquired in time-of-flight (TOF) mode, allowing in principle for separation of emission and transmission data. Nevertheless, here all data are reconstructed at once, eliminating the need to trace the position of the transmission source in sinogram space. Contamination of emission data by the transmission source and vice versa is naturally modeled. Attenuated emission activity data also provide additional information about object attenuation coefficient values. The algorithm alternates between attenuation and emission activity image updates. We also proposed a method of estimation of spatial scatter distribution from the transmission source by incorporating knowledge about the expected range of attenuation map values. The reconstruction of experimental data from the Siemens mCT scanner suggests that simultaneous reconstruction improves attenuation map image quality, as compared to when data are separated. In the presented example, the attenuation map image noise was reduced and non-uniformity artifacts that occurred due to scatter estimation were suppressed. On the other hand, the use of transmission data stabilizes attenuation coefficient distribution reconstruction from TOF emission data alone. The example of improving emission images by refining a CT-based patient attenuation map is presented, revealing potential benefits of simultaneous CT and PET data reconstruction.
No-search algorithm for direction of arrival estimation
NASA Astrophysics Data System (ADS)
Tuncer, T. Engin; Ã-Zgen, M. Tankut
2009-10-01
Direction of arrival (DOA) estimation is an important problem in ionospheric research and electromagnetics as well as many other fields. When superresolution techniques are used, a computationally expensive search must be performed in general. In this paper, a no-search algorithm is presented. The idea is to separate the source signals in the time-frequency plane by using the Short-Time Fourier Transform. The direction vector for each source is found by coherent summation over the instantaneous frequency (IF) tracks of the individual sources, which are found automatically by employing morphological image processing. Both overlapping and nonoverlapping source IF tracks can be processed and identified by the proposed approach. The CLEAN algorithm is adopted in order to isolate the IF tracks of overlapping sources with different powers. The proposed method is very effective in finding the IF tracks and can be applied to signals with arbitrary IF characteristics. While the proposed method can be applied to any sensor geometry, planar uniform circular arrays (UCA) bring additional advantages. Different properties of the UCA are presented, and it is shown that the DOA angles can be found as the mean-square error optimum solution of a linear matrix equation. Several simulations are done, and it is shown that the proposed approach performs significantly better than the conventional methods.
Improving Reliability of a Residency Interview Process
Serres, Michelle L.; Gundrum, Todd E.
2013-01-01
Objective. To improve the reliability and discrimination of a pharmacy resident interview evaluation form, and thereby improve the reliability of the interview process. Methods. In phase 1 of the study, authors used a Many-Facet Rasch Measurement model to optimize an existing evaluation form for reliability and discrimination. In phase 2, interviewer pairs used the modified evaluation form within 4 separate interview stations. In phase 3, 8 interviewers individually evaluated each candidate in one-on-one interviews. Results. In phase 1, the evaluation form had a reliability of 0.98 with person separation of 6.56; reproducibly, the form separated applicants into 6 distinct groups. Using that form in phases 2 and 3, our largest variation source was candidates, while content specificity was the next largest variation source. The phase 2 g-coefficient was 0.787, while confirmatory phase 3 was 0.922. Process reliability improved with more stations despite fewer interviewers per station; the impact of content specificity was greatly reduced with more interview stations. Conclusion. A more reliable, discriminating evaluation form was developed to evaluate candidates during resident interviews, and a process was designed that reduced the impact from content specificity. PMID:24159209
Single-trial event-related potential extraction through one-unit ICA-with-reference
NASA Astrophysics Data System (ADS)
Lih Lee, Wee; Tan, Tele; Falkmer, Torbjörn; Leung, Yee Hong
2016-12-01
Objective. In recent years, ICA has been one of the more popular methods for extracting event-related potential (ERP) at the single-trial level. It is a blind source separation technique that allows the extraction of an ERP without making strong assumptions on the temporal and spatial characteristics of an ERP. However, the problem with traditional ICA is that the extraction is not direct and is time-consuming due to the need for source selection processing. In this paper, the application of a one-unit ICA-with-Reference (ICA-R), a constrained ICA method, is proposed. Approach. In cases where the time-region of the desired ERP is known a priori, this time information is utilized to generate a reference signal, which is then used for guiding the one-unit ICA-R to extract the source signal of the desired ERP directly. Main results. Our results showed that, as compared to traditional ICA, ICA-R is a more effective method for analysing ERP because it avoids manual source selection and it requires less computation, thus resulting in faster ERP extraction. Significance. In addition, since the method is automated, it reduces the risks of any subjective bias in the ERP analysis. It is also a potential tool for extracting the ERP in online applications.
Single-trial event-related potential extraction through one-unit ICA-with-reference.
Lee, Wee Lih; Tan, Tele; Falkmer, Torbjörn; Leung, Yee Hong
2016-12-01
In recent years, ICA has been one of the more popular methods for extracting event-related potential (ERP) at the single-trial level. It is a blind source separation technique that allows the extraction of an ERP without making strong assumptions on the temporal and spatial characteristics of an ERP. However, the problem with traditional ICA is that the extraction is not direct and is time-consuming due to the need for source selection processing. In this paper, the application of a one-unit ICA-with-Reference (ICA-R), a constrained ICA method, is proposed. In cases where the time-region of the desired ERP is known a priori, this time information is utilized to generate a reference signal, which is then used for guiding the one-unit ICA-R to extract the source signal of the desired ERP directly. Our results showed that, as compared to traditional ICA, ICA-R is a more effective method for analysing ERP because it avoids manual source selection and it requires less computation, thus resulting in faster ERP extraction. In addition, since the method is automated, it reduces the risks of any subjective bias in the ERP analysis. It is also a potential tool for extracting the ERP in online applications.
Floating-point scaling technique for sources separation automatic gain control
NASA Astrophysics Data System (ADS)
Fermas, A.; Belouchrani, A.; Ait-Mohamed, O.
2012-07-01
Based on the floating-point representation and taking advantage of scaling factor indetermination in blind source separation (BSS) processing, we propose a scaling technique applied to the separation matrix, to avoid the saturation or the weakness in the recovered source signals. This technique performs an automatic gain control in an on-line BSS environment. We demonstrate the effectiveness of this technique by using the implementation of a division-free BSS algorithm with two inputs, two outputs. The proposed technique is computationally cheaper and efficient for a hardware implementation compared to the Euclidean normalisation.
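The general idea of exponent-only rescaling can be sketched as follows; this is an illustration of power-of-two scaling of a separation matrix, not the specific division-free algorithm or hardware datapath evaluated in the paper.

```python
import math
import numpy as np

def pow2_rescale(W):
    """Rescale each row of a separation matrix by a power of two so that its
    largest magnitude lies in [0.5, 1). Only exponents change, so the scaling
    is exact in binary floating point and cheap in hardware. Illustrative
    sketch only, not the paper's algorithm."""
    W = np.asarray(W, dtype=float).copy()
    for i, row in enumerate(W):
        peak = np.max(np.abs(row))
        if peak > 0.0:
            _, exponent = math.frexp(peak)     # peak = mantissa * 2**exponent
            W[i] = row * 2.0 ** (-exponent)    # exponent-only change
    return W

W = np.array([[1500.0, -40.0], [0.0007, 0.0002]])   # one huge row, one tiny row
print(pow2_rescale(W))
```

Because a blind source separation output is only defined up to a per-source scale, rescaling rows of the separation matrix this way changes neither the separation quality nor the waveform shapes, only their amplitude range.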
Development of target ion source systems for radioactive beams at GANIL
NASA Astrophysics Data System (ADS)
Bajeat, O.; Delahaye, P.; Couratin, C.; Dubois, M.; Franberg-Delahaye, H.; Henares, J. L.; Huguet, Y.; Jardin, P.; Lecesne, N.; Lecomte, P.; Leroy, R.; Maunoury, L.; Osmond, B.; Sjodin, M.
2013-12-01
The GANIL facility (Caen, France) is dedicated to the acceleration of heavy ion beams, including radioactive beams produced by the Isotope Separation On-Line (ISOL) method at the SPIRAL1 facility. To extend the range of radioactive ion beams available at GANIL using the ISOL method, two projects are underway: the SPIRAL1 upgrade and the construction of SPIRAL2. For SPIRAL1, a new target ion source system (TISS) using the VADIS FEBIAD ion source coupled to the SPIRAL1 carbon target will be tested on-line by the end of 2013 and installed in the cave of SPIRAL1 for operation in 2015. The SPIRAL2 project is under construction and is being designed to use different production methods, such as fission, fusion or spallation reactions, to cover a large area of the chart of nuclei. It will produce, among others, neutron-rich beams obtained by the fission of uranium induced by fast neutrons. The production target, made from uranium carbide and heated to 2000 °C, will be associated with several types of ion sources. Developments currently in progress at GANIL for each of these projects are presented.
Light distribution modulated diffuse reflectance spectroscopy.
Huang, Pin-Yuan; Chien, Chun-Yu; Sheu, Chia-Rong; Chen, Yu-Wen; Tseng, Sheng-Hao
2016-06-01
Typically, a diffuse reflectance spectroscopy (DRS) system employing a continuous wave light source would need to acquire diffuse reflectances measured at multiple source-detector separations for determining the absorption and reduced scattering coefficients of turbid samples. This results in a multi-fiber probe structure and an indefinite probing depth. Here we present a novel DRS method that can utilize a few diffuse reflectances measured at one source-detector separation for recovering the optical properties of samples. The core of the innovation is a liquid crystal (LC) cell whose scattering property can be modulated by the bias voltage. By placing the LC cell between the light source and the sample, the spatial distribution of light in the sample can be varied as the scattering property of the LC cell is modulated by the bias voltage, and this would induce intensity variation of the collected diffuse reflectance. From a series of Monte Carlo simulations and phantom measurements, we found that this new light distribution modulated DRS (LDM DRS) system was capable of accurately recovering the absorption and scattering coefficients of turbid samples and its probing depth only varied by less than 3% over the full bias voltage variation range. Our results suggest that this LDM DRS platform could be developed into various low-cost, efficient, and compact systems for in-vivo superficial tissue investigation.
Light distribution modulated diffuse reflectance spectroscopy
Huang, Pin-Yuan; Chien, Chun-Yu; Sheu, Chia-Rong; Chen, Yu-Wen; Tseng, Sheng-Hao
2016-01-01
Typically, a diffuse reflectance spectroscopy (DRS) system employing a continuous wave light source would need to acquire diffuse reflectances measured at multiple source-detector separations for determining the absorption and reduced scattering coefficients of turbid samples. This results in a multi-fiber probe structure and an indefinite probing depth. Here we present a novel DRS method that can utilize a few diffuse reflectances measured at one source-detector separation for recovering the optical properties of samples. The core of the innovation is a liquid crystal (LC) cell whose scattering property can be modulated by the bias voltage. By placing the LC cell between the light source and the sample, the spatial distribution of light in the sample can be varied as the scattering property of the LC cell is modulated by the bias voltage, and this would induce intensity variation of the collected diffuse reflectance. From a series of Monte Carlo simulations and phantom measurements, we found that this new light distribution modulated DRS (LDM DRS) system was capable of accurately recovering the absorption and scattering coefficients of turbid samples and its probing depth only varied by less than 3% over the full bias voltage variation range. Our results suggest that this LDM DRS platform could be developed into various low-cost, efficient, and compact systems for in-vivo superficial tissue investigation. PMID:27375931
NASA Astrophysics Data System (ADS)
Watanabe, Yuuki; Kawase, Kodo; Ikari, Tomofumi; Ito, Hiromasa; Ishikawa, Youichi; Minamide, Hiroaki
2003-10-01
We separated the component spatial patterns of frequency-dependent absorption in chemicals and frequency-independent components such as plastic, paper, and measurement noise in terahertz (THz) spectroscopic images, using known spectral curves. Our measurement system, which uses a widely tunable coherent THz-wave parametric oscillator source, can image at a specific frequency in the range 1-2 THz. The component patterns of the chemicals can easily be extracted by use of the frequency-independent components. This method could be successfully used for nondestructive inspection for the detection of illegal drugs and bioterrorism devices concealed, e.g., inside mail and packages.
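Per-pixel unmixing against known spectral curves can be sketched as a small non-negative least-squares problem; the spectra and mixture below are placeholders, and applying the same fit to every pixel yields the component spatial patterns.

```python
import numpy as np
from scipy.optimize import nnls

# Assumed known absorption spectra sampled at the measured THz frequencies:
# two frequency-dependent chemicals plus a flat, frequency-independent term
# (plastic/paper/noise). Values are placeholders.
freqs = np.linspace(1.0, 2.0, 6)                         # THz
chem_a = np.exp(-((freqs - 1.3) / 0.1) ** 2)
chem_b = np.exp(-((freqs - 1.7) / 0.1) ** 2)
flat = np.ones_like(freqs)
library = np.c_[chem_a, chem_b, flat]                    # (n_freq, n_components)

# One pixel's measured absorbance across the frequencies (synthetic mixture).
pixel = 0.8 * chem_a + 0.1 * chem_b + 0.3 * flat

weights, resid = nnls(library, pixel)                    # non-negative unmixing
print("component weights:", weights.round(3))            # ~[0.8, 0.1, 0.3]
```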
DOE Office of Scientific and Technical Information (OSTI.GOV)
Consonni, Stefano, E-mail: stefano.consonni@polimi.it; Giugliano, Michele; Massarutto, Antonio
Highlights: ► The source separation level (SSL) of a waste management system does not adequately qualify the system. ► Separately collecting organic waste gives fewer advantages than separately collecting packaging materials. ► Recycling packaging materials (metals, glass, plastics, paper) is always attractive. ► Composting and anaerobic digestion of organic waste give questionable outcomes. ► The critical threshold for optimal recycling seems to be an SSL of 50%. - Abstract: This paper describes the context, the basic assumptions and the main findings of a joint research project aimed at identifying the optimal breakdown between material recovery and energy recovery from municipal solid waste (MSW) in the framework of integrated waste management systems (IWMS). The project was carried out from 2007 to 2009 by five research groups at Politecnico di Milano, the Universities of Bologna and Trento, and the Bocconi University (Milan), with funding from the Italian Ministry of Education, University and Research (MIUR). Since the optimization of IWMSs by analytical methods is practically impossible, the search for the most attractive strategy was carried out by comparing a number of relevant recovery paths from the point of view of mass and energy flows, technological features, environmental impact and economics. The main focus has been on mature processes applicable to MSW in Italy and Europe. Results show that, contrary to a rather widespread opinion, increasing the source separation level (SSL) has a very marginal effect on energy efficiency. What does generate very significant variations in energy efficiency is scale, i.e. the size of the waste-to-energy (WTE) plant. The mere value of the SSL is inadequate to qualify the recovery system. The energy and environmental outcome of recovery depends not only on 'how much' source separation is carried out, but rather on 'how' a given SSL is reached.
Transverse Dimensions of Chorus in the Source Region
NASA Technical Reports Server (NTRS)
Santolik, O.; Gurnett, D. A.
2003-01-01
We report measurements of whistler-mode chorus by the four Cluster spacecraft at close separations. We focus our analysis on the generation region close to the magnetic equatorial plane at a radial distance of 4.4 Earth radii. We use both linear and rank correlation analysis to define the perpendicular dimensions of the sources of chorus elements below one half of the electron cyclotron frequency. Correlation is significant throughout the range of separation distances of 60-260 km parallel to the field line and 7-100 km in the perpendicular plane. At these scales, the correlation coefficient is independent of parallel separation and decreases with perpendicular separation. The observations are consistent with a statistical model of the source region assuming individual sources as gaussian peaks of radiated power with a common half-width of 35 km perpendicular to the magnetic field. This characteristic scale is comparable to the wavelength of the observed waves.
Statistics of natural reverberation enable perceptual separation of sound and space
Traer, James; McDermott, Josh H.
2016-01-01
In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us. PMID:27834730
Statistics of natural reverberation enable perceptual separation of sound and space.
Traer, James; McDermott, Josh H
2016-11-29
In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us.
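The reported decay statistics can be mimicked with a short synthesis sketch: band-limited noise with a different exponential decay rate in each octave band, using illustrative reverberation times rather than the measured values.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(5)

# Assumed reverberation times per octave band (seconds): mid frequencies decay
# slowest, lows and highs faster, loosely mimicking the measured trend.
bands = [(125, 250, 0.5), (250, 500, 0.8), (500, 1000, 1.0),
         (1000, 2000, 0.9), (2000, 4000, 0.5), (4000, 7000, 0.3)]

ir = np.zeros_like(t)
for lo, hi, rt60 in bands:
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band_noise = sosfilt(sos, rng.normal(size=t.size))
    ir += band_noise * 10 ** (-3 * t / rt60)     # -60 dB after rt60 seconds
ir /= np.max(np.abs(ir))

# Convolving a dry recording with `ir` simulates the corresponding room.
```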
Ion-exchange chromatography purification of extracellular vesicles.
Kosanović, Maja; Milutinović, Bojana; Goč, Sanja; Mitić, Ninoslav; Janković, Miroslava
2017-08-01
Despite numerous studies, isolating pure preparations of extracellular vesicles (EVs) has proven challenging. Here, we compared ion-exchange chromatography (IEC) to the widely used sucrose density gradient (SDG) centrifugation method for the purification of EVs. EVs in bulk were isolated from pooled normal human amniotic fluid (AF) by differential centrifugation followed by IEC or sucrose density gradient separation. The purity of the isolated EVs was evaluated by electrophoresis and lectin blotting/immuno blotting to monitor the distribution of total proteins, different EVs markers, and selected N-glycans. Our data showed efficient separation of negatively charged EVs from other differently charged molecules, while comparative profiling of EVs using SDG centrifugation confirmed anion-exchange chromatography is advantageous for EV purification. Finally, although this IEC-based method was validated using AF, the approach should be readily applicable to isolation of EVs from other sources as well.
Nutaro, James; Kuruganti, Teja
2017-02-24
Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
Rapid fusion method for the determination of refractory thorium and uranium isotopes in soil samples
Maxwell, Sherrod L.; Hutchison, Jay B.; McAlister, Daniel R.
2015-02-14
Recently, approximately 80% of participating laboratories failed to accurately determine uranium isotopes in soil samples in the U.S. Department of Energy Mixed Analyte Performance Evaluation Program (MAPEP) Session 30, due to incomplete dissolution of refractory particles in the samples. Failing laboratories employed acid dissolution methods, including hydrofluoric acid, to recover uranium from the soil matrix. The failures illustrate the importance of rugged soil dissolution methods for the accurate measurement of analytes in the sample matrix. A new rapid fusion method has been developed by the Savannah River National Laboratory (SRNL) to prepare 1-2 g soil sample aliquots very quickly, with total dissolution of refractory particles. Soil samples are fused with sodium hydroxide at 600 ºC in zirconium crucibles to enable complete dissolution of the sample. Uranium and thorium are separated on stacked TEVA and TRU extraction chromatographic resin cartridges, prior to isotopic measurements by alpha spectrometry on cerium fluoride microprecipitation sources. Plutonium can also be separated and measured using this method. Batches of 12 samples can be prepared for measurement in <5 hours.
Hydrogen isotope separation utilizing bulk getters
Knize, R.J.; Cecchi, J.L.
1991-08-20
Tritium and deuterium are separated from a gaseous mixture thereof, derived from a nuclear fusion reactor or some other source, by providing a casing with a bulk getter therein for absorbing the gaseous mixture to produce an initial loading of the getter, partially desorbing the getter to produce a desorbed mixture which is tritium-enriched, pumping the desorbed mixture into a separate container, the remaining gaseous loading in the getter being deuterium-enriched, desorbing the getter to a substantially greater extent to produce a deuterium-enriched gaseous mixture, and removing the deuterium-enriched mixture into another container. The bulk getter may comprise a zirconium-aluminum alloy, or a zirconium-vanadium-iron alloy. The partial desorption may reduce the loading by approximately fifty percent. The basic procedure may be extended to produce a multistage isotope separator, including at least one additional bulk getter into which the tritium-enriched mixture is absorbed. The second getter is then partially desorbed to produce a desorbed mixture which is further tritium-enriched. The last-mentioned mixture is then removed from the container for the second getter, which is then desorbed to a substantially greater extent to produce a desorbed mixture which is deuterium-enriched. The last-mentioned mixture is then removed so that the cycle can be continued and repeated. The method of isotope separation is also applicable to other hydrogen isotopes, in that the method can be employed for separating either deuterium or tritium from normal hydrogen. 4 figures.
Hydrogen isotope separation utilizing bulk getters
Knize, Randall J.; Cecchi, Joseph L.
1991-01-01
Tritium and deuterium are separated from a gaseous mixture thereof, derived from a nuclear fusion reactor or some other source, by providing a casing with a bulk getter therein for absorbing the gaseous mixture to produce an initial loading of the getter, partially desorbing the getter to produce a desorbed mixture which is tritium-enriched, pumping the desorbed mixture into a separate container, the remaining gaseous loading in the getter being deuterium-enriched, desorbing the getter to a substantially greater extent to produce a deuterium-enriched gaseous mixture, and removing the deuterium-enriched mixture into another container. The bulk getter may comprise a zirconium-aluminum alloy, or a zirconium-vanadium-iron alloy. The partial desorption may reduce the loading by approximately fifty percent. The basic procedure may be extended to produce a multistage isotope separator, including at least one additional bulk getter into which the tritium-enriched mixture is absorbed. The second getter is then partially desorbed to produce a desorbed mixture which is further tritium-enriched. The last-mentioned mixture is then removed from the container for the second getter, which is then desorbed to a substantially greater extent to produce a desorbed mixture which is deuterium-enriched. The last-mentioned mixture is then removed so that the cycle can be continued and repeated. The method of isotope separation is also applicable to other hydrogen isotopes, in that the method can be employed for separating either deuterium or tritium from normal hydrogen.
Hydrogen isotope separation utilizing bulk getters
Knize, Randall J.; Cecchi, Joseph L.
1990-01-01
Tritium and deuterium are separated from a gaseous mixture thereof, derived from a nuclear fusion reactor or some other source, by providing a casing with a bulk getter therein for absorbing the gaseous mixture to produce an initial loading of the getter, partially desorbing the getter to produce a desorbed mixture which is tritium-enriched, pumping the desorbed mixture into a separate container, the remaining gaseous loading in the getter being deuterium-enriched, desorbing the getter to a substantially greater extent to produce a deuterium-enriched gaseous mixture, and removing the deuterium-enriched mixture into another container. The bulk getter may comprise a zirconium-aluminum alloy, or a zirconium-vanadium-iron alloy. The partial desorption may reduce the loading by approximately fifty percent. The basic procedure may be extended to produce a multistage isotope separator, including at least one additional bulk getter into which the tritium-enriched mixture is absorbed. The second getter is then partially desorbed to produce a desorbed mixture which is further tritium-enriched. The last-mentioned mixture is then removed from the container for the second getter, which is then desorbed to a substantially greater extent to produce a desorbed mixture which is deuterium-enriched. The last-mentioned mixture is then removed so that the cycle can be continued and repeated. The method of isotope separation is also applicable to other hydrogen isotopes, in that the method can be employed for separating either deuterium or tritium from normal hydrogen.
Decay pattern of the Pygmy Dipole Resonance in 130Te
NASA Astrophysics Data System (ADS)
Isaak, J.; Beller, J.; Fiori, E.; Krtička, M.; Löher, B.; Pietralla, N.; Romig, C.; Rusev, G.; Savran, D.; Scheck, M.; Silva, J.; Sonnabend, K.; Tonchev, A.; Tornow, W.; Weller, H.; Zweidinger, M.
2014-03-01
The electric dipole strength distribution in 130Te has been investigated using the method of Nuclear Resonance Fluorescence. The experiments were performed at the Darmstadt High Intensity Photon Setup, using bremsstrahlung as photon source, and at the High Intensity γ-Ray Source, where quasi-monochromatic and polarized photon beams are provided. Average decay properties of 130Te below the neutron separation energy are determined. Comparing the experimental data with the predictions of the statistical model indicates that nuclear structure effects play an important role even at sufficiently high excitation energies. Preliminary results will be presented.
STARBLADE: STar and Artefact Removal with a Bayesian Lightweight Algorithm from Diffuse Emission
NASA Astrophysics Data System (ADS)
Knollmüller, Jakob; Frank, Philipp; Ensslin, Torsten A.
2018-05-01
STARBLADE (STar and Artefact Removal with a Bayesian Lightweight Algorithm from Diffuse Emission) separates superimposed point-like sources from a diffuse background by imposing physically motivated models as prior knowledge. The algorithm can also be used on noisy and convolved data, though performing a proper reconstruction including a deconvolution prior to the application of the algorithm is advised; the algorithm could also be used within a denoising imaging method. STARBLADE learns the correlation structure of the diffuse emission and takes it into account to determine the occurrence and strength of a superimposed point source.
A Simple Picaxe Microcontroller Pulse Source for Juxtacellular Neuronal Labelling.
Verberne, Anthony J M
2016-10-19
Juxtacellular neuronal labelling is a method which allows neurophysiologists to fill physiologically-identified neurons with small positively-charged marker molecules. Labelled neurons are identified by histochemical processing of brain sections along with immunohistochemical identification of neuropeptides, neurotransmitters, neurotransmitter transporters or biosynthetic enzymes. A microcontroller-based pulser circuit and associated BASIC software script is described for incorporation into the design of a commercially-available intracellular electrometer for use in juxtacellular neuronal labelling. Printed circuit board construction has been used for reliability and reproducibility. The current design obviates the need for a separate digital pulse source and simplifies the juxtacellular neuronal labelling procedure.
Method of enhancing cyclotron beam intensity
Hudson, Ed D.; Mallory, Merrit L.
1977-01-01
When an easily ionized support gas such as xenon is added to the cold cathode ion sources of the Oak Ridge Isochronous Cyclotron, large beam enhancements are produced. For example, 20Ne7+ is increased from 0.05 enA to 27 enA, and 16O5+ intensities in excess of 35 eμA have been extracted for periods up to 30 minutes. Approximately 0.15 cc/min of the easily ionized support gas is supplied to the ion source through a separate gas feed line, and the primary gas flow is reduced by about 30%.
Multi-wavelength time-coincident optical communications system and methods thereof
NASA Technical Reports Server (NTRS)
Lekki, John (Inventor); Nguyen, Quang-Viet (Inventor)
2009-01-01
An optical communications transmitter includes an oscillator source producing a clock signal, a data source producing a data signal, a modulating circuit for modulating the clock signal with the data signal to produce modulating signals, optical drivers receiving the modulating signals and producing optical driving signals based on them, and optical emitters producing small numbers of photons based on the optical driving signals. The small numbers of photons are time-correlated between at least two separate optical transmission wavelengths and quantum states, and the small number of photons can be detected by a receiver to reform the data signal.
Luyckx, Kim; Luyten, Léon; Daelemans, Walter; Van den Bulcke, Tim
2016-01-01
Objective Enormous amounts of healthcare data are becoming increasingly accessible through the large-scale adoption of electronic health records. In this work, structured and unstructured (textual) data are combined to assign clinical diagnostic and procedural codes (specifically ICD-9-CM) to patient stays. We investigate whether integrating these heterogeneous data types improves prediction strength compared to using the data types in isolation. Methods Two separate data integration approaches were evaluated. Early data integration combines features of several sources within a single model, and late data integration learns a separate model per data source and combines these predictions with a meta-learner. This is evaluated on data sources and clinical codes from a broad set of medical specialties. Results When compared with the best individual prediction source, late data integration leads to improvements in predictive power (eg, overall F-measure increased from 30.6% to 38.3% for International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnostic codes), while early data integration is less consistent. The predictive strength strongly differs between medical specialties, both for ICD-9-CM diagnostic and procedural codes. Discussion Structured data provides complementary information to unstructured data (and vice versa) for predicting ICD-9-CM codes. This can be captured most effectively by the proposed late data integration approach. Conclusions We demonstrated that models using multiple electronic health record data sources systematically outperform models using data sources in isolation in the task of predicting ICD-9-CM codes over a broad range of medical specialties. PMID:26316458
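A minimal sketch of the late-integration idea is given below: one first-level model per data source, with out-of-fold predictions fed to a logistic-regression meta-learner. The feature matrices, labels and model choices are hypothetical placeholders, not the study's actual pipeline.

# Late data integration ("stacking") sketch with scikit-learn
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X_struct = rng.normal(size=(500, 20))     # structured EHR features (placeholder)
X_text = rng.normal(size=(500, 100))      # textual features, e.g. bag-of-words (placeholder)
y = rng.integers(0, 2, size=500)          # one ICD-9-CM code: present / absent

# First level: a separate model per source, predicted out-of-fold to avoid leakage
p_struct = cross_val_predict(LogisticRegression(max_iter=1000), X_struct, y,
                             cv=5, method="predict_proba")[:, 1]
p_text = cross_val_predict(LogisticRegression(max_iter=1000), X_text, y,
                           cv=5, method="predict_proba")[:, 1]

# Second level: the meta-learner combines the per-source predictions
meta = LogisticRegression().fit(np.column_stack([p_struct, p_text]), y)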
A CMB foreground study in WMAP data: Extragalactic point sources and zodiacal light emission
NASA Astrophysics Data System (ADS)
Chen, Xi
The Cosmic Microwave Background (CMB) radiation is the remnant heat from the Big Bang. It serves as a primary tool to understand the global properties, content and evolution of the universe. Since 2001, NASA's Wilkinson Microwave Anisotropy Probe (WMAP) satellite has been mapping the full sky anisotropy with unprecedented accuracy, precision and reliability. The CMB angular power spectrum calculated from the WMAP full sky maps not only enables accurate testing of cosmological models, but also places significant constraints on model parameters. The CMB signal in the WMAP sky maps is contaminated by microwave emission from the Milky Way and from extragalactic sources. Therefore, in order to use the maps reliably for cosmological studies, the foreground signals must be well understood and removed from the maps. This thesis focuses on the separation of two foreground contaminants from the WMAP maps: extragalactic point sources and zodiacal light emission. Extragalactic point sources constitute the most important foreground on small angular scales. Various methods have been applied to the WMAP single frequency maps to extract sources. However, due to the limited angular resolution of WMAP, it is possible to confuse positive CMB excursions with point sources or miss sources that are embedded in negative CMB fluctuations. We present a novel CMB-free source finding technique that utilizes the spectrum difference of point sources and CMB to form internal linear combinations of multifrequency maps to suppress the CMB and better reveal sources. When applied to the WMAP 41, 61 and 94 GHz maps, this technique has not only enabled detection of sources that were previously cataloged by independent methods, but also revealed new sources. Without the noise contribution from the CMB, the sensitivity of this method improves rapidly with integration time. The number of detections varies as t^0.72 in the two-band search and t^0.70 in the three-band search from one year to five years, respectively, in comparison to t^0.40 from the WMAP catalogs. Our source catalogs are a good supplement to the existing WMAP source catalogs, and the method itself is proven to be both complementary to and competitive with all the current source finding techniques in WMAP maps. Scattered light and thermal emission from the interplanetary dust (IPD) within our Solar System are major contributors to the diffuse sky brightness at most infrared wavelengths. For wavelengths longer than 3.5 μm, the thermal emission of the IPD dominates over scattering, and the emission is often referred to as the Zodiacal Light Emission (ZLE). To set a limit on the ZLE contribution to the WMAP data, we have performed a simultaneous fit of the yearly WMAP time-ordered data to the time variation of ZLE predicted by the DIRBE IPD model (Kelsall et al. 1998) evaluated at 240 μm, plus ℓ = 1-4 CMB components. It is found that although this fitting procedure can successfully recover the CMB dipole to 0.5% accuracy, it is not sensitive enough to determine the ZLE signal or the other multipole moments very accurately.
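The CMB-free combination can be illustrated with a small constrained least-squares sketch: choose per-band weights that null a spectrally flat CMB component while keeping unit response to an assumed point-source spectrum, minimizing the noise variance. The frequencies, spectral index and noise levels below are illustrative assumptions, not the thesis values.

# CMB-free internal linear combination of multifrequency maps (sketch)
import numpy as np

freqs = np.array([41.0, 61.0, 94.0])           # GHz bands (illustrative)
a_cmb = np.ones(3)                             # CMB: same amplitude in all bands (thermodynamic units)
a_src = (freqs / 41.0) ** -2.1                 # assumed point-source spectral scaling
N = np.diag([1.0, 1.2, 2.0])                   # assumed per-band noise covariance

# Minimise w^T N w subject to [a_cmb a_src]^T w = [0, 1] (null CMB, unit source response)
A = np.column_stack([a_cmb, a_src])
Ninv_A = np.linalg.solve(N, A)
w = Ninv_A @ np.linalg.solve(A.T @ Ninv_A, np.array([0.0, 1.0]))

maps = np.random.default_rng(1).normal(size=(3, 10000))   # placeholder band maps
cmb_free = np.tensordot(w, maps, axes=1)       # CMB-suppressed combination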
Dose rate calculations around 192Ir brachytherapy sources using a Sievert integration model
NASA Astrophysics Data System (ADS)
Karaiskos, P.; Angelopoulos, A.; Baras, P.; Rozaki-Mavrouli, H.; Sandilos, P.; Vlachos, L.; Sakelliou, L.
2000-02-01
The classical Sievert integral method is a valuable tool for dose rate calculations around brachytherapy sources, combining simplicity with reasonable computational times. However, its accuracy in predicting dose rate anisotropy around 192Ir brachytherapy sources has been repeatedly put into question. In this work, we used a primary and scatter separation technique to improve an existing modification of the Sievert integral (Williamson's isotropic scatter model) that determines dose rate anisotropy around commercially available 192Ir brachytherapy sources. The proposed Sievert formalism provides increased accuracy while maintaining the simplicity and computational time efficiency of the Sievert integral method. To describe transmission within the materials encountered, the formalism makes use of narrow beam attenuation coefficients which can be directly and easily calculated from the initially emitted 192Ir spectrum. The other numerical parameters required for its implementation, once calculated with the aid of our home-made Monte Carlo simulation code, can be used for any 192Ir source design. Calculations of dose rate and anisotropy functions with the proposed Sievert expression, around commonly used 192Ir high dose rate sources and other 192Ir elongated source designs, are in good agreement with corresponding accurate Monte Carlo results which have been reported by our group and other authors.
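For reference, the classical Sievert line-source integral that the paper modifies can be evaluated numerically as sketched below: the dose rate at a point is proportional to an angular integral of the oblique-filtration factor, divided by the active length and the perpendicular distance. The source length, wall thickness and attenuation coefficient are illustrative assumptions, and this is the unmodified classical form, not the authors' improved formalism.

# Classical Sievert integral for a filtered line source (numerical sketch)
import numpy as np

def sievert_dose(x, y, L=0.35, mu=10.0, t=0.05):
    # Relative dose at (x, y) in cm for a source lying on the x-axis from 0 to L;
    # mu: capsule attenuation coefficient (1/cm), t: capsule wall thickness (cm).
    h = abs(y)                                   # perpendicular distance to the source axis (y != 0)
    th1 = np.arctan2(0.0 - x, h)                 # angle to one end of the active length
    th2 = np.arctan2(L - x, h)                   # angle to the other end
    theta = np.linspace(th1, th2, 501)
    integrand = np.exp(-mu * t / np.cos(theta))  # oblique filtration along each ray
    return np.trapz(integrand, theta) / (L * h)

print(sievert_dose(0.175, 1.0), sievert_dose(0.175, 2.0))   # roughly inverse-square falloff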
Source-Type Identification Analysis Using Regional Seismic Moment Tensors
NASA Astrophysics Data System (ADS)
Chiang, A.; Dreger, D. S.; Ford, S. R.; Walter, W. R.
2012-12-01
Waveform inversion to determine the seismic moment tensor is a standard approach in determining the source mechanism of natural and manmade seismicity, and may be used to identify, or discriminate, different types of seismic sources. The successful applications of the regional moment tensor method at the Nevada Test Site (NTS) and the 2006 and 2009 North Korean nuclear tests (Ford et al., 2009a, 2009b, 2010) show that the method is robust and capable of source-type discrimination at regional distances. The well-separated populations of explosions, earthquakes and collapses on a Hudson et al. (1989) source-type diagram enable source-type discrimination; however, the question remains whether or not the separation of events is universal in other regions, where we have limited station coverage and knowledge of Earth structure. Ford et al. (2012) have shown that combining regional waveform data and P-wave first motions removes the CLVD-isotropic tradeoff and uniquely discriminates the 2009 North Korean test as an explosion. Therefore, including additional constraints from regional and teleseismic P-wave first motions enables source-type discrimination in regions with limited station coverage. We present moment tensor analysis of earthquakes and explosions (M6) from the Lop Nor and Semipalatinsk test sites for station paths crossing Kazakhstan and Western China. We also present analyses of smaller events from industrial sites. In these sparse coverage situations we combine regional long-period waveforms and high-frequency P-wave polarity from the same stations, as well as from teleseismic arrays, to constrain the source type. Discrimination capability with respect to velocity model and station coverage is examined, and additionally we investigate the velocity model dependence of vanishing free-surface traction effects on seismic moment tensor inversion of shallow sources and recovery of explosive scalar moment. Our synthetic data tests indicate that biases in scalar seismic moment and discrimination for shallow sources are small and can be understood in a systematic manner. We are presently investigating the frequency dependence of vanishing traction of a very shallow (10 m depth) M2+ chemical explosion recorded at several kilometer distances, and preliminary results indicate that at the typical frequency passband we employ, the bias does not affect our ability to retrieve the correct source mechanism but may affect the retrieval of the correct scalar seismic moment. Finally, we assess discrimination capability in a composite P-value statistical framework.
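As background to the source-type analysis, a standard decomposition of a moment tensor into isotropic, double-couple and CLVD parts can be sketched as below. The tensor values, the sign convention for the CLVD measure and the percentage definitions follow common practice in time-domain moment tensor codes and are assumptions here, not the authors' exact implementation.

# Isotropic / DC / CLVD decomposition of a moment tensor (sketch)
import numpy as np

M = np.array([[1.0, 0.2, 0.0],                 # illustrative symmetric moment tensor
              [0.2, 0.8, 0.1],
              [0.0, 0.1, 1.1]])

iso = np.trace(M) / 3.0                        # isotropic (volumetric) part
dev = M - iso * np.eye(3)                      # deviatoric part
eig = np.linalg.eigvalsh(dev)
m_min = eig[np.argmin(np.abs(eig))]            # smallest-magnitude deviatoric eigenvalue
m_max = eig[np.argmax(np.abs(eig))]            # largest-magnitude deviatoric eigenvalue
eps = -m_min / abs(m_max)                      # CLVD measure (0 for a pure double couple)

pct_clvd = 200.0 * abs(eps)                    # common percentage conventions
pct_dc = 100.0 - pct_clvd
print(f"ISO = {iso:.3f}, %DC = {pct_dc:.1f}, %CLVD = {pct_clvd:.1f}")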
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Scope. 246.100 Section 246.100 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES SOURCE SEPARATION FOR... source separation of residential, commercial, and institutional solid wastes. Explicitly excluded are...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Scope. 246.100 Section 246.100 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES SOURCE SEPARATION FOR... source separation of residential, commercial, and institutional solid wastes. Explicitly excluded are...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Scope. 246.100 Section 246.100 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES SOURCE SEPARATION FOR... source separation of residential, commercial, and institutional solid wastes. Explicitly excluded are...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Scope. 246.100 Section 246.100 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES SOURCE SEPARATION FOR... source separation of residential, commercial, and institutional solid wastes. Explicitly excluded are...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Scope. 246.100 Section 246.100 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES SOURCE SEPARATION FOR... source separation of residential, commercial, and institutional solid wastes. Explicitly excluded are...
Versatile plasma ion source with an internal evaporator
NASA Astrophysics Data System (ADS)
Turek, M.; Prucnal, S.; Drozdziel, A.; Pyszniak, K.
2011-04-01
A novel construction of an ion source with an evaporator placed inside a plasma chamber is presented. The crucible is heated to high temperatures directly by the arc discharge, which makes the ion source suitable for substances with high melting points. The compact ion source enables production of intense ion beams for a wide spectrum of solid elements, with typical separated beam currents of ~100-150 μA for Al+, Mn+, As+ (corresponding to emission current densities of 15-25 mA/cm2) for an extraction voltage of 25 kV. The ion source works for approximately 50-70 h at 100% duty cycle, which enables high ion dose implantation. The typical power consumption of the ion source is 350-400 W. The paper presents detailed experimental data (e.g. dependences of ion currents and anode voltages on discharge and filament currents and magnetic flux densities) for Cr, Fe, Al, As, Mn and In. The discussion is supported by results of a numerical simulation of ionisation in the ion source based on the Monte Carlo method.
EGR distribution and fluctuation probe based on CO2 measurements
Parks, II, James E; Partridge, Jr., William P; Yoo, Ji Hyung
2015-04-07
A diagnostic system having a single-port EGR probe and a method for using the same. The system includes a light source, an EGR probe, a detector and a processor. The light source may provide a combined light beam composed of light from a mid-infrared signal source and a mid-infrared reference source. The signal source may be centered at 4.2 μm and the reference source may be centered at 3.8 μm. The EGR probe may be a single-port probe with internal optics and a sampling chamber with two flow cells arranged along the light path in series. The optics may include a lens for focusing the light beam and a mirror for reflecting the light beam received from a pitch optical cable to a catch optical cable. The signal and reference sources are modulated at different frequencies, thereby allowing them to be separated and the signal normalized by the processor.
Indirect current control with separate IZ drop compensation for voltage source converters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanetkar, V.R.; Dawande, M.S.; Dubey, G.K.
1995-12-31
Indirect Current Control (ICC) of boost-type Voltage Source Converters (VSCs) using separate compensation of the line IZ voltage drop is presented. A separate bi-directional VSC is used to produce the compensation voltage. This simplifies the ICC regulator scheme, as the power flow is controlled through a single modulation index. Experimental verification is provided for bi-directional control of the power flow.
Independent component analysis algorithm FPGA design to perform real-time blind source separation
NASA Astrophysics Data System (ADS)
Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke
2015-05-01
The conditions that arise in the Cocktail Party Problem prevail across many fields, creating a need for Blind Source Separation (BSS). BSS has become prevalent in array processing, communications, medical signal processing, speech processing, wireless communication, audio, acoustics, and biomedical engineering. The concept of the cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms. ICA proves useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study of the ability and efficiency of Independent Component Analysis algorithms to perform blind source separation on mixed signals in software, and to implement them in hardware with a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), Fast ICA, and Equivariant Adaptive Separation via Independence (EASI) ICA were examined and compared. The best algorithm was the one that required the least complexity and fewest resources while effectively separating mixed sources; by this criterion, the EASI algorithm performed best. The EASI ICA was implemented on FPGA hardware to perform blind source separation and to analyze its performance in real time.
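The per-sample structure of the EASI rule, which is what makes it attractive for streaming hardware implementations, can be sketched in a few lines. The sketch below uses the normalized serial update in the style of Cardoso and Laheld; the sources, mixing matrix, nonlinearity and step size are illustrative assumptions, and convergence depends on the source statistics. It is not the authors' FPGA implementation.

# Normalized EASI serial update (sketch)
import numpy as np

rng = np.random.default_rng(0)
n_src, n_samples = 2, 20000
s = rng.laplace(size=(n_src, n_samples))       # two independent super-Gaussian sources
A = rng.normal(size=(n_src, n_src))            # unknown mixing matrix
x = A @ s                                      # observed mixtures

W = np.eye(n_src)                              # separating matrix estimate
lam = 1e-3                                     # adaptation step size
g = lambda v: v ** 3                           # classic cubic nonlinearity

for k in range(n_samples):
    y = W @ x[:, k]
    gy = g(y)
    grad = ((np.outer(y, y) - np.eye(n_src)) / (1.0 + lam * (y @ y))
            + (np.outer(gy, y) - np.outer(y, gy)) / (1.0 + lam * abs(y @ gy)))
    W -= lam * grad @ W                        # one EASI update per incoming sample

y_sep = W @ x                                  # separated source estimates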
A rapid phospholipase A2 bioassay using 14C-oleate-labelled E. coli bacteria.
Meyer, T; von Wichert, P; Weins, D
1989-02-01
Two methods of phospholipase A2 determination using 14C-labelled E. coli bacteria as substrate were compared. One method works with a filter membrane for separation of cleaved 14C-oleate from the remaining phospholipids; the other uses the well-known thin-layer chromatography for lipid analysis. Some features of human serum phospholipase A2 regarding pH and Ca2+ dependency were investigated. Possible sources of error were discussed. It was shown that either method can differentiate between normal and pathologically elevated phospholipase A2 levels, but that the filter method is superior in terms of sensitivity and workload.
Separation of concurrent broadband sound sources by human listeners
NASA Astrophysics Data System (ADS)
Best, Virginia; van Schaik, André; Carlile, Simon
2004-01-01
The effect of spatial separation on the ability of human listeners to resolve a pair of concurrent broadband sounds was examined. Stimuli were presented in a virtual auditory environment using individualized outer ear filter functions. Subjects were presented with two simultaneous noise bursts that were either spatially coincident or separated (horizontally or vertically), and responded as to whether they perceived one or two source locations. Testing was carried out at five reference locations on the audiovisual horizon (0°, 22.5°, 45°, 67.5°, and 90° azimuth). Results from experiment 1 showed that at more lateral locations, a larger horizontal separation was required for the perception of two sounds. The reverse was true for vertical separation. Furthermore, it was observed that subjects were unable to separate stimulus pairs if they delivered the same interaural differences in time (ITD) and level (ILD). These findings suggested that the auditory system exploited differences in one or both of the binaural cues to resolve the sources, and could not use monaural spectral cues effectively for the task. In experiments 2 and 3, separation of concurrent noise sources was examined upon removal of low-frequency content (and ITDs), onset/offset ITDs, both of these in conjunction, and all ITD information. While onset and offset ITDs did not appear to play a major role, differences in ongoing ITDs were robust cues for separation under these conditions, including those in the envelopes of high-frequency channels.
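The ongoing interaural time differences highlighted above can be estimated from a binaural recording by cross-correlating the two ear signals; the short sketch below illustrates this on synthetic noise with an assumed 0.3 ms lag and is not part of the study.

# ITD estimation by cross-correlation of left and right ear signals (sketch)
import numpy as np

fs = 44100
rng = np.random.default_rng(3)
src = rng.standard_normal(fs // 10)                   # 100 ms of broadband noise
lag_true = int(0.0003 * fs)                           # assumed ~0.3 ms interaural delay
left = src
right = np.concatenate([np.zeros(lag_true), src[:-lag_true]])

corr = np.correlate(right, left, mode="full")
lags = np.arange(-len(left) + 1, len(left))
itd_ms = lags[np.argmax(corr)] / fs * 1e3
print(f"estimated ITD = {itd_ms:.2f} ms")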
Numerical simulation of tonal fan noise of computers and air conditioning systems
NASA Astrophysics Data System (ADS)
Aksenov, A. A.; Gavrilyuk, V. N.; Timushev, S. F.
2016-07-01
Current approaches to fan noise simulation are mainly based on the Lighthill equation and the so-called aeroacoustic analogies, which are likewise derived from the transformed Lighthill equation, such as the well-known FW-H equation or the Kirchhoff theorem. A disadvantage of such methods, leading to significant modeling errors, is the incorrect solution of the decomposition problem, i.e., the separation of acoustic and vortex (pseudosound) modes in the region of the oscillation source. In this paper, we propose a method for tonal noise simulation based on the mesh solution of the Helmholtz equation for the Fourier transform of the pressure perturbation, with boundary conditions in the form of a complex impedance. A noise source is placed on a surface surrounding each fan rotor. The acoustic fan power is determined by the acoustic-vortex method, which ensures more accurate decomposition and determination of the pressure pulsation amplitudes in the near field of the fan.
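The mesh solution of the Helmholtz equation referred to above can be illustrated in one dimension: a finite-difference system for the complex pressure amplitude with a point source and radiating (impedance-type) end conditions. This is a generic sketch with illustrative parameters, not the authors' three-dimensional acoustic-vortex implementation.

# 1-D finite-difference Helmholtz solve: p'' + k^2 p = -s with radiating ends
import numpy as np

c, f = 343.0, 500.0                  # sound speed (m/s), tonal frequency (Hz)
k = 2 * np.pi * f / c                # wavenumber
L, n = 4.0, 400
h = L / (n - 1)

A = np.zeros((n, n), dtype=complex)
b = np.zeros(n, dtype=complex)
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
    A[i, i] = -2.0 / h**2 + k**2
b[n // 2] = -1.0 / h                 # discretized point source at the domain centre

# One-sided radiation (Sommerfeld) conditions, a special case of an impedance boundary
A[0, 0], A[0, 1] = -1.0 / h + 1j * k, 1.0 / h       # dp/dx = -i k p at the left end
A[-1, -1], A[-1, -2] = 1.0 / h - 1j * k, -1.0 / h   # dp/dx = +i k p at the right end
p = np.linalg.solve(A, b)            # complex pressure amplitude on the mesh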
Constraining cosmic curvature by using age of galaxies and gravitational lenses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, Akshay; Mahajan, Shobhit; Mukherjee, Amitabha
We use two model-independent methods to constrain the curvature of the universe. In the first method, we study the evolution of the curvature parameter (Ω_k^0) with redshift by using observations of the Hubble parameter and transverse comoving distances obtained from the ages of galaxies. Secondly, we also use an indirect method based on the mean image separation statistics of gravitationally lensed quasars. The basis of this methodology is that the average image separation of lensed images will show a positive, negative or zero correlation with the source redshift in a closed, open or flat universe, respectively. In order to smooth the datasets used in both methods, we use a non-parametric technique, namely Gaussian Process (GP) regression. Finally, from the first method we obtain Ω_k^0 = 0.025 ± 0.57 for a presumed flat universe, while the cosmic curvature remains constant throughout the redshift region 0 < z < 1.37, which indicates that the universe may be homogeneous. Moreover, the combined result from both methods suggests that the universe is marginally closed. However, a flat universe can be accommodated at the 3σ level.
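The Gaussian Process smoothing step can be sketched with scikit-learn as below, applied to a hypothetical H(z) compilation; the data points, kernel and hyperparameters are placeholders, not those used in the paper.

# GP smoothing of H(z) data (sketch)
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

z = np.array([0.07, 0.2, 0.4, 0.6, 0.9, 1.3])[:, None]     # redshifts (placeholder)
H = np.array([69.0, 72.9, 82.0, 87.9, 104.0, 132.0])        # H(z) in km/s/Mpc (placeholder)
sigma = np.array([19.6, 29.6, 8.9, 6.1, 13.0, 14.0])        # 1-sigma errors (placeholder)

kernel = ConstantKernel(1e4) * RBF(length_scale=1.0)
gp = GaussianProcessRegressor(kernel=kernel, alpha=sigma**2,   # heteroscedastic noise
                              normalize_y=True).fit(z, H)

z_grid = np.linspace(0.0, 1.37, 100)[:, None]
H_smooth, H_err = gp.predict(z_grid, return_std=True)          # smoothed H(z) and its uncertainty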
NASA Astrophysics Data System (ADS)
Bai, Yang; Wan, Xiaohong; Zeng, Ke; Ni, Yinmei; Qiu, Lirong; Li, Xiaoli
2016-12-01
Objective. When prefrontal transcranial magnetic stimulation (p-TMS) is performed, it may evoke hybrid artifacts in EEG recordings that mix muscle activity and blink activity. Reducing this kind of hybrid artifact challenges traditional preprocessing methods. We aim to explore a method for removing the p-TMS evoked hybrid artifact. Approach. We propose a novel method, used as a post-processing step after independent component analysis (ICA), to reduce the p-TMS evoked hybrid artifact. Ensemble empirical mode decomposition (EEMD) was used to decompose the signal into multiple components; the components were then separated, and the artifact reduced, by a blind source separation (BSS) method. Three standard BSS methods, ICA, independent vector analysis, and canonical correlation analysis (CCA), were tested. Main results. Synthetic results showed that EEMD-CCA outperformed the others as an ICA post-processing step in hybrid artifact reduction. Its superiority was clearer when the signal-to-noise ratio (SNR) was lower. In application to a real experiment, the SNR could be significantly increased and the p-TMS evoked potential could be recovered from the hybrid-artifact-contaminated signal. Our proposed method can effectively reduce the p-TMS evoked hybrid artifacts. Significance. Our proposed method may facilitate future prefrontal TMS-EEG research.
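A conceptual sketch of the EEMD-plus-BSS idea is given below: an artifact-contaminated component is decomposed into intrinsic mode functions, CCA is then applied between the IMF matrix and a one-sample-delayed copy (a common BSS-by-CCA construction), and the least autocorrelated canonical components are discarded before reconstruction. The artifact criterion, library choices (PyEMD, scikit-learn) and signal model are assumptions for illustration, not the authors' exact pipeline.

# EEMD decomposition followed by CCA-based artifact reduction (conceptual sketch)
import numpy as np
from PyEMD import EEMD                        # pip install EMD-signal
from sklearn.cross_decomposition import CCA

fs = 1000
t = np.arange(2 * fs) / fs
signal = np.sin(2 * np.pi * 10 * t)                                                  # "neural" rhythm
signal = signal + 2.0 * np.exp(-((t - 1.0) ** 2) / 0.001) * np.random.randn(t.size)  # artifact burst

imfs = EEMD(trials=50).eemd(signal)           # intrinsic mode functions, shape (n_imfs, n_samples)
X = imfs[:, 1:].T                             # IMFs
Y = imfs[:, :-1].T                            # one-sample-delayed IMFs
n_comp = min(X.shape[1], 8)
cca = CCA(n_components=n_comp).fit(X, Y)
U = cca.transform(X)                          # canonical components, ordered by autocorrelation

U_clean = np.zeros_like(U)
U_clean[:, : n_comp - 2] = U[:, : n_comp - 2]          # drop the least autocorrelated (artifact-like)
X_clean = U_clean @ np.linalg.pinv(cca.x_rotations_)   # approximate back-projection to IMF space
cleaned = X_clean.sum(axis=1)                          # artifact-reduced signal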
NASA Astrophysics Data System (ADS)
Klosterhalfen, Anne; Moene, Arnold; Schmidt, Marius; Ney, Patrizia; Graf, Alexander
2017-04-01
Source partitioning of eddy covariance (EC) measurements of CO2 into respiration and photosynthesis is routinely used for a better understanding of the exchange of greenhouse gases, especially between terrestrial ecosystems and the atmosphere. The most frequently used methods are usually based either on relations of fluxes to environmental drivers or on chamber measurements. However, they often depend strongly on assumptions or invasive measurements and do not usually offer partitioning estimates for latent heat fluxes into evaporation and transpiration. SCANLON and SAHU (2008) and SCANLON and KUSTAS (2010) proposed a promising method to estimate the contributions of transpiration and evaporation using measured high frequency time series of CO2 and H2O fluxes - no extra instrumentation necessary. This method (SK10 in the following) is based on the spatial separation and relative strength of sources and sinks of CO2 and water vapor between the sub-canopy and canopy. Assuming that air from those sources and sinks is not yet perfectly mixed before reaching EC sensors, partitioning is estimated based on the separate application of the flux-variance similarity theory to the stomatal and non-stomatal components of the regarded fluxes, as well as on additional assumptions on stomatal water use efficiency (WUE). The CO2 partitioning method after THOMAS et al. (2008) (TH08 in the following) also follows the argument that the dissimilarities of sources and sinks in and below a canopy affect the relation between H2O and CO2 fluctuations. Instead of involving assumptions on WUE, TH08 directly screens their scattergram for signals of joint respiration and evaporation events and applies a conditional sampling methodology. In spite of their different main targets (H2O vs. CO2), both methods can yield partitioning estimates on both fluxes. We therefore compare various sub-methods of SK10 and TH08, including our own modifications (e.g., cluster analysis), to each other, to established source partitioning methods, and to chamber measurements at various agroecosystems. Further, profile measurements and a canopy-resolving Large Eddy Simulation model are used to test the assumptions involved in SK10. Scanlon, T.M., Kustas, W.P., 2010. Partitioning carbon dioxide and water vapor fluxes using correlation analysis. Agricultural and Forest Meteorology 150 (1), 89-99. Scanlon, T.M., Sahu, P., 2008. On the correlation structure of water vapor and carbon dioxide in the atmospheric surface layer: A basis for flux partitioning. Water Resources Research 44 (10), W10418, 15 pp. Thomas, C., Martin, J.G., Goeckede, M., Siqueira, M.B., Foken, T., Law, B.E., Loescher, H.W., Katul, G., 2008. Estimating daytime subcanopy respiration from conditional sampling methods applied to multi-scalar high frequency turbulence time series. Agricultural and Forest Meteorology 148 (8-9), 1210-1229.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandrosov, V I
2015-10-31
This paper analyses low-coherence tomography of absorbing media with the use of spatially separated counterpropagating object and reference beams. A probe radiation source based on a broadband terahertz (THz) generator that emits sufficiently intense THz waves in the spectral range 90 – 350 μm and a prism spectroscope that separates out eight narrow intervals from this range are proposed for implementing this method. This allows media of interest to be examined by low-coherence tomography with counterpropagating beams in each interval. It is shown that, according to the Rayleigh criterion, the method is capable of resolving inhomogeneities with a size near one quarter of the coherence length of the probe radiation. In addition, the proposed tomograph configuration allows one to determine the average surface asperity slope and the refractive index and absorption coefficient of inhomogeneities 180 to 700 mm in size, and obtain spectra of such inhomogeneities in order to determine their chemical composition. (laser applications and other topics in quantum electronics)
NASA Astrophysics Data System (ADS)
Perton, Mathieu; Contreras-Zazueta, Marcial A.; Sánchez-Sesma, Francisco J.
2016-06-01
A new implementation of indirect boundary element method allows simulating the elastic wave propagation in complex configurations made of embedded regions that are homogeneous with irregular boundaries or flat layers. In an older implementation, each layer of a flat layered region would have been treated as a separated homogeneous region without taking into account the flat boundary information. For both types of regions, the scattered field results from fictitious sources positioned along their boundaries. For the homogeneous regions, the fictitious sources emit as in a full-space and the wave field is given by analytical Green's functions. For flat layered regions, fictitious sources emit as in an unbounded flat layered region and the wave field is given by Green's functions obtained from the discrete wavenumber (DWN) method. The new implementation allows then reducing the length of the discretized boundaries but DWN Green's functions require much more computation time than the full-space Green's functions. Several optimization steps are then implemented and commented. Validations are presented for 2-D and 3-D problems. Higher efficiency is achieved in 3-D.
Valdes, Claudia P.; Varma, Hari M.; Kristoffersen, Anna K.; Dragojevic, Tanja; Culver, Joseph P.; Durduran, Turgut
2014-01-01
We introduce a new, non-invasive, diffuse optical technique, speckle contrast optical spectroscopy (SCOS), for probing deep tissue blood flow using the statistical properties of laser speckle contrast and the photon diffusion model for a point source. The feasibility of the method is tested using liquid phantoms which demonstrate that SCOS is capable of measuring the dynamic properties of turbid media non-invasively. We further present an in vivo measurement in a human forearm muscle using SCOS in two modalities: one with the dependence of the speckle contrast on the source-detector separation and another on the exposure time. In doing so, we also introduce crucial corrections to the speckle contrast that account for the variance of the shot and sensor dark noises. PMID:25136500
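The quantity at the heart of SCOS, the speckle contrast, is simply the ratio of the standard deviation to the mean of the detected intensity over small windows; a minimal sketch on a toy frame is given below. The window size and the synthetic image are illustrative, and this is not the authors' processing chain.

# Local speckle contrast K = std(I) / mean(I) over non-overlapping windows (sketch)
import numpy as np

def speckle_contrast(image, win=7):
    h, w = (np.array(image.shape) // win) * win
    blocks = image[:h, :w].reshape(h // win, win, w // win, win).swapaxes(1, 2)
    return blocks.std(axis=(2, 3)) / blocks.mean(axis=(2, 3))

raw = np.random.default_rng(7).exponential(scale=100.0, size=(256, 256))  # toy speckle frame
K = speckle_contrast(raw)
print(K.mean())   # ~1 for fully developed static speckle; lower values indicate faster dynamics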
Ada Software Design Methods Formulation.
1982-10-01
cycle organization is also appropriate for another reason. The source material for the case studies is the work of the two contractors who participated in... working version of the system exist. The integration phase takes the pieces developed and combines them into a single working system. Interfaces...hardware, developed separately from the software, is united with the software, and further testing is performed until the system is a working whole
Viscous and Interacting Flow Field Effects.
1980-06-01
in the inviscid flow analysis using free vortex sheets whose shapes are determined by iteration. The outer iteration employs boundary layer...Methods, Inc. which replaces the source distribution in the separation zone by a vortex wake model. This model is described in some detail in (2), but...in the potential flow is obtained using linearly varying vortex singularities distributed on planar panels. The wake is represented by sheets of
USDA-ARS?s Scientific Manuscript database
Noroviruses (NoV) annually cause millions of cases of gastrointestinal disease in the United States. Although NoV outbreaks are generally associated with raw shellfish, particularly oysters, outbreaks have also been known to occur from other common-source food-borne vehicles such as lettuce, frozen...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henderson, C. B.; Gould, A.; Gaudi, B. S.
The mass of the lenses giving rise to Galactic microlensing events can be constrained by measuring the relative lens-source proper motion and lens flux. The flux of the lens can be separated from that of the source, companions to the source, and unrelated nearby stars with high-resolution images taken when the lens and source are spatially resolved. For typical ground-based adaptive optics (AO) or space-based observations, this requires either inordinately long time baselines or high relative proper motions. We provide a list of microlensing events toward the Galactic bulge with high relative lens-source proper motion that are therefore good candidates for constraining the lens mass with future high-resolution imaging. We investigate all events from 2004 to 2013 that display detectable finite-source effects, a feature that allows us to measure the proper motion. In total, we present 20 events with μ ≳ 8 mas yr⁻¹. Of these, 14 were culled from previous analyses while 6 are new, including OGLE-2004-BLG-368, MOA-2005-BLG-36, OGLE-2012-BLG-0211, OGLE-2012-BLG-0456, MOA-2012-BLG-532, and MOA-2013-BLG-029. In ≲12 yr from the time of each event the lens and source of each event will be sufficiently separated for ground-based telescopes with AO systems or space telescopes to resolve each component and further characterize the lens system. Furthermore, for the most recent events, comparison of the lens flux estimates from images taken immediately to those estimated from images taken when the lens and source are resolved can be used to empirically check the robustness of the single-epoch method currently being used to estimate lens masses for many events.
Wang, Chunlei; Armstrong, Daniel W; Chang, Chau-Dung
2008-06-20
Astaxanthin (3,3'-dihydroxy-beta,beta-carotene-4,4'-dione) is widely used as an important colorant in the aquaculture feed industry and as a nutraceutical in human health products. Synthetic all-trans-astaxanthin consists of a mixture of a pair of enantiomers (3R,3'R and 3S,3'S) and a meso form (3R,3'S). A high-performance liquid chromatography (HPLC) method for the direct, rapid, and baseline separation of the three stereoisomers of all-trans-astaxanthin is described for the first time on an immobilized cellulosic column (Chiralpak IC). Enantiomers of two important precursors in the biosynthetic pathway of astaxanthin, adonirubin and adonixanthin, were also directly separated. In addition, the major cis form of astaxanthin (13-cis-astaxanthin) resulting from isomerization was isolated by preparative C18 separation, and the separation of all four stereoisomers of 13-cis-astaxanthin was achieved. Finally, a stereoisomeric purity test of commercial astaxanthin supplements confirmed that they were from a natural source, although their levels were quite low.
Bernstad, Anna; la Cour Jansen, Jes; Aspegren, Henrik
2011-03-01
Through an agreement with EEE producers, Swedish municipalities are responsible for collection of hazardous waste and waste electrical and electronic equipment (WEEE). In most Swedish municipalities, collection of these waste fractions is concentrated to waste recycling centres where households can source-separate and deposit hazardous waste and WEEE free of charge. However, the centres are often located on the outskirts of city centres and cars are needed in order to use the facilities in most cases. A full-scale experiment was performed in a residential area in southern Sweden to evaluate effects of a system for property-close source separation of hazardous waste and WEEE. After the system was introduced, results show a clear reduction in the amount of hazardous waste and WEEE disposed of incorrectly amongst residual waste or dry recyclables. The systems resulted in a source separation ratio of 70 wt% for hazardous waste and 76 wt% in the case of WEEE. Results show that households in the study area were willing to increase source separation of hazardous waste and WEEE when accessibility was improved and that this and similar collection systems can play an important role in building up increasingly sustainable solid waste management systems.
Geometry of illumination, luminance contrast, and gloss perception.
Leloup, Frédéric B; Pointer, Michael R; Dutré, Philip; Hanselaer, Peter
2010-09-01
The influence of both the geometry of illumination and luminance contrast on gloss perception has been examined using the method of paired comparison. Six achromatic glass samples having different lightness were illuminated by two light sources. Only one of these light sources was visible in reflection by the observer. By separate adjustment of the intensity of both light sources, the luminance of both the reflected image and the adjacent off-specular surroundings could be individually varied. It was found that visual gloss appraisal did not correlate with instrumentally measured specular gloss; however, psychometric contrast seemed to be a much better correlate. It has become clear that not only the sample surface characteristics determine gloss perception: the illumination geometry could be an even more important factor.
Parameter estimation for slit-type scanning sensors
NASA Technical Reports Server (NTRS)
Fowler, J. W.; Rolfe, E. G.
1981-01-01
The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources and combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results which are quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.
Silva, Rogers F.; Plis, Sergey M.; Sui, Jing; Pattichis, Marios S.; Adalı, Tülay; Calhoun, Vince D.
2016-01-01
In the past decade, numerous advances in the study of the human brain were fostered by successful applications of blind source separation (BSS) methods to a wide range of imaging modalities. The main focus has been on extracting “networks” represented as the underlying latent sources. While the broad success in learning latent representations from multiple datasets has promoted the wide presence of BSS in modern neuroscience, it also introduced a wide variety of objective functions, underlying graphical structures, and parameter constraints for each method. Such diversity, combined with a host of datatype-specific know-how, can cause a sense of disorder and confusion, hampering a practitioner’s judgment and impeding further development. We organize the diverse landscape of BSS models by exposing its key features and combining them to establish a novel unifying view of the area. In the process, we unveil important connections among models according to their properties and subspace structures. Consequently, a high-level descriptive structure is exposed, ultimately helping practitioners select the right model for their applications. Equipped with that knowledge, we review the current state of BSS applications to neuroimaging. The gained insight into model connections elicits a broader sense of generalization, highlighting several directions for model development. In light of that, we discuss emerging multi-dataset multidimensional (MDM) models and summarize their benefits for the study of the healthy brain and disease-related changes. PMID:28461840
Simulating variable source problems via post processing of individual particle tallies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.
2000-10-20
Monte Carlo is an extremely powerful method of simulating complex, three-dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
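The post-processing idea can be sketched as an importance re-weighting of recorded per-particle tallies: each particle's contribution is scaled by the ratio of a candidate source distribution to the distribution actually sampled during tracking. The arrays, distributions and tally model below are hypothetical placeholders, not the authors' implementation.

# Re-weighting recorded tallies for a new source spectrum (sketch)
import numpy as np

rng = np.random.default_rng(42)
n = 100000
E_grid = np.linspace(0.01, 2.0, 2000)
source_energy = rng.uniform(0.01, 2.0, size=n)              # energies sampled from the original (uniform) source
tally = rng.exponential(scale=1.0, size=n) * source_energy  # per-particle tally contributions (placeholder)

def old_pdf(E):                                             # source PDF used during tracking
    return np.full_like(E, 1.0 / 1.99)

gauss = np.exp(-(E_grid - 0.5) ** 2 / (2 * 0.2 ** 2))
def new_pdf(E):                                             # candidate source spectrum to evaluate
    return np.exp(-(E - 0.5) ** 2 / (2 * 0.2 ** 2)) / np.trapz(gauss, E_grid)

weights = new_pdf(source_energy) / old_pdf(source_energy)
print(np.mean(weights * tally))   # tally estimate as if particles were born from the new source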
Multi-hadron-state contamination in nucleon observables from chiral perturbation theory
NASA Astrophysics Data System (ADS)
Bär, Oliver
2018-03-01
Multi-particle states with additional pions are expected to be a non-negligible source of the excited-state contamination in lattice simulations at the physical point. It is shown that baryon chiral perturbation theory (ChPT) can be employed to calculate the contamination due to two-particle nucleon-pion states in various nucleon observables. Results to leading order are presented for the nucleon axial, tensor and scalar charges and three Mellin moments of parton distribution functions: the average quark momentum fraction, the helicity and the transversity moment. Taking into account experimental and phenomenological results for the charges and moments, the impact of the nucleon-pion states on lattice estimates for these observables can be estimated. The nucleon-pion-state contribution leads to an overestimation of all charges and moments obtained with the plateau method. The overestimation is at the 5-10% level for source-sink separations of about 2 fm. Existing lattice data is not in conflict with the ChPT predictions, but the comparison suggests that significantly larger source-sink separations are needed to compute the charges and moments with few-percent precision. Talk given at the 35th International Symposium on Lattice Field Theory, 18 - 24 June 2017, Granada, Spain.
Ishikawa, Masayori; Nagase, Naomi; Matsuura, Taeko; Hiratsuka, Junichi; Suzuki, Ryusuke; Miyamoto, Naoki; Sutherland, Kenneth Lee; Fujita, Katsuhisa; Shirato, Hiroki
2015-01-01
The scintillator with optical fiber (SOF) dosimeter consists of a miniature scintillator mounted on the tip of an optical fiber. The scintillator of the current SOF dosimeter is a 1-mm diameter hemisphere. For a scintillation dosimeter coupled with an optical fiber, measurement accuracy is influenced by signals due to Cerenkov radiation in the optical fiber. We have implemented a spectral filtering technique for compensating for the Cerenkov radiation effect, specifically for our plastic scintillator-based dosimeter, using a wavelength-separated counting method. A dichroic mirror was used for separating input light signals. Individual signal counting was performed for high- and low-wavelength light signals. To confirm the accuracy, measurements with various amounts of Cerenkov radiation were performed by changing the incident direction while keeping the Ir-192 source-to-dosimeter distance constant, resulting in a fluctuation of <5%. Optical fiber bending was also addressed; no bending effect was observed for our wavelength-separated SOF dosimeter. PMID:25618136
NASA Astrophysics Data System (ADS)
Iinuma, Takeshi; Hino, Ryota; Uchida, Naoki; Nakamura, Wataru; Kido, Motoyuki; Osada, Yukihito; Miura, Satoshi
2016-11-01
Large interplate earthquakes are often followed by postseismic slip that is considered to occur in areas surrounding the coseismic ruptures. Such spatial separation is expected from the difference in frictional and material properties in and around the faults. However, even though the 2011 Tohoku Earthquake ruptured a vast area on the plate interface, the estimation of high-resolution slip is usually difficult because of the lack of seafloor geodetic data. Here, using seafloor and terrestrial geodetic data, we investigated the postseismic slip to examine whether it was spatially separated from the coseismic slip, by applying a comprehensive finite-element method model to subtract the viscoelastic components from the observed postseismic displacements. The high-resolution co- and postseismic slip distributions clarified the spatial separation, which also agreed with the activities of interplate and repeating earthquakes. These findings suggest that the conventional frictional property model is valid for the source region of gigantic earthquakes.
Improved Method for the Qualitative Analyses of Palm Oil Carotenes Using UPLC.
Ng, Mei Han; Choo, Yuen May
2016-04-01
Palm oil is the richest natural source of carotenes, containing 500-700 ppm in crude palm oil (CPO). The concentration is much higher in oil extracted from palm-pressed fiber, a by-product of the milling of oil palm fruits. There are 11 types of carotenes in palm oil, excluding the cis/trans isomers of some of the carotenes. Qualitative separation of these individual carotenes is particularly useful for the identification and confirmation of different types of oil, as the carotene profile is unique to each type of vegetable oil. Previous studies on HPLC separation of the individual palm carotenes reported total analysis times of up to 100 min using a C30 stationary phase. In this study, the separation was completed in <5 min. The qualitative separation was successfully carried out using a commonly used stationary phase, C18.
Iinuma, Takeshi; Hino, Ryota; Uchida, Naoki; Nakamura, Wataru; Kido, Motoyuki; Osada, Yukihito; Miura, Satoshi
2016-01-01
Large interplate earthquakes are often followed by postseismic slip that is considered to occur in areas surrounding the coseismic ruptures. Such spatial separation is expected from the difference in frictional and material properties in and around the faults. However, even though the 2011 Tohoku Earthquake ruptured a vast area on the plate interface, the estimation of high-resolution slip is usually difficult because of the lack of seafloor geodetic data. Here, using seafloor and terrestrial geodetic data, we investigated the postseismic slip to examine whether it was spatially separated from the coseismic slip, by applying a comprehensive finite-element method model to subtract the viscoelastic components from the observed postseismic displacements. The high-resolution co- and postseismic slip distributions clarified the spatial separation, which also agreed with the activities of interplate and repeating earthquakes. These findings suggest that the conventional frictional property model is valid for the source region of gigantic earthquakes. PMID:27853138
Ceccuzzi, Silvio; Jandieri, Vakhtang; Baccarelli, Paolo; Ponti, Cristina; Schettini, Giuseppe
2016-04-01
The beam-shaping effect on the field radiated by a line source has been compared, by means of two different methods, for an ideal infinite structure constituted by two photonic crystals and for an actual finite one. The lattice sums technique combined with the generalized reflection matrix method is used to rigorously investigate the radiation from the infinite photonic crystals, whereas radiation from crystals composed of a finite number of rods along the layers is analyzed using the cylindrical-wave approach. A directive radiation is observed with the line source embedded in the structure. With an increased separation distance between the crystals, a significant edge diffraction appears that provides the main radiation mechanism in the finite layout. Suitable absorbers are implemented to reduce the above-mentioned diffraction and the reflections at the boundaries, thus obtaining good agreement between the radiation patterns of a localized line source coupled to finite and infinite photonic crystals, when the number of periods of the finite structure is properly chosen.
NASA Astrophysics Data System (ADS)
Heleno, Sandra; Matias, Magda; Pina, Pedro; Sousa, António Jorge
2016-04-01
A method for semiautomated landslide detection and mapping, with the ability to separate source and run-out areas, is presented in this paper. It combines object-based image analysis and a support vector machine classifier and is tested using a GeoEye-1 multispectral image, sensed 3 days after a major damaging landslide event that occurred on Madeira Island (20 February 2010), and a pre-event lidar digital terrain model. The testing is developed in a 15 km2 wide study area, where 95 % of the number of landslides scars are detected by this supervised approach. The classifier presents a good performance in the delineation of the overall landslide area, with commission errors below 26 % and omission errors below 24 %. In addition, fair results are achieved in the separation of the source from the run-out landslide areas, although in less illuminated slopes this discrimination is less effective than in sunnier, east-facing slopes.
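The classification stage can be sketched with scikit-learn: per-segment features from the object-based segmentation are standardized and fed to an SVM that labels each segment as background, source area or run-out. The feature table and class labels below are hypothetical placeholders, not the study's variables.

# SVM classification of image-object features (sketch)
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = np.random.default_rng(5).normal(size=(300, 4))        # e.g. mean NDVI, brightness, slope, texture
y = np.random.default_rng(6).integers(0, 3, size=300)     # 0 = background, 1 = source, 2 = run-out

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)
labels = clf.predict(X)                                    # class label per segment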
Blind source separation in retinal videos
NASA Astrophysics Data System (ADS)
Barriga, Eduardo S.; Truitt, Paul W.; Pattichis, Marios S.; Tüso, Dan; Kwon, Young H.; Kardon, Randy H.; Soliz, Peter
2003-05-01
An optical imaging device of retina function (OID-RF) has been developed to measure changes in blood oxygen saturation due to neural activity resulting from visual stimulation of the photoreceptors in the human retina. The video data that are collected represent a mixture of the functional signal in response to the retinal activation and other signals from undetermined physiological activity. Measured changes in reflectance in response to the visual stimulus are on the order of 0.1% to 1.0% of the total reflected intensity level, which makes the functional signal difficult to detect by standard methods since it is masked by the other signals that are present. In this paper, we apply principal component analysis (PCA), blind source separation (BSS) using Extended Spatial Decorrelation (ESD), and independent component analysis (ICA) using the Fast-ICA algorithm to extract the functional signal from the retinal videos. The results revealed that the functional signal in a stimulated retina can be detected through the application of some of these techniques.
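As a rough illustration of the blind source separation step, the following sketch applies FastICA (via scikit-learn) to a synthetic mixture in which a weak stimulus-locked signal is buried under a stronger physiological oscillation and noise; the signals, mixing matrix, and parameters are invented for the example and are not the OID-RF data.

```python
# Minimal sketch of recovering a weak, stimulus-locked component from mixed
# time series with FastICA, in the spirit of the approach described above.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2000)

functional = 0.01 * np.sign(np.sin(2 * np.pi * 0.5 * t))   # weak stimulus-locked response
physiology = np.sin(2 * np.pi * 1.2 * t)                   # e.g. cardiac/respiratory signal
noise = 0.2 * rng.standard_normal(t.size)

sources = np.c_[functional, physiology, noise]
mixing = rng.uniform(0.5, 1.5, size=(3, 3))                # unknown mixing of the sources
observed = sources @ mixing.T                              # what the camera pixels record

ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(observed)                    # estimated independent components

# Pick the component most correlated with the (known) stimulus timing.
corr = [abs(np.corrcoef(recovered[:, i], functional)[0, 1]) for i in range(3)]
print("stimulus-locked component index:", int(np.argmax(corr)))
```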
Acoustic emission testing on an F/A-18 E/F titanium bulkhead
NASA Astrophysics Data System (ADS)
Martin, Christopher A.; Van Way, Craig B.; Lockyer, Allen J.; Kudva, Jayanth N.; Ziola, Steve M.
1995-04-01
An important opportunity recently transpired at Northrop Grumman Corporation to instrument an F/A-18 E/F titanium bulkhead with broad band acoustic emission sensors during a scheduled structural fatigue test. The overall intention of this effort was to investigate the potential for detecting crack propagation using acoustic emission signals for a large structural component. Key areas of experimentation and experience included (1) acoustic noise characterization, (2) separation of crack signals from extraneous noise, (3) source location accuracy, and (4) methods of acoustic transducer attachment. Fatigue cracking was observed and monitored by strategically placed acoustic emission sensors. The outcome of the testing indicated that accurate source location still remains enigmatic for non-specialist engineering personnel, especially at this level of structural complexity. However, contrary to preconceived expectations, crack events could be readily separated from extraneous noise. A further dividend from the investigation materialized in the form of close correspondence between frequency domain waveforms of the bulkhead test specimen and earlier work with thick plates.
Spectral method for the static electric potential of a charge density in a composite medium
NASA Astrophysics Data System (ADS)
Bergman, David J.; Farhi, Asaf
2018-04-01
A spectral representation for the static electric potential field in a two-constituent composite medium is presented. A theory is developed for calculating the quasistatic eigenstates of Maxwell's equations for such a composite. The local physical potential field produced in the system by a given source charge density is expanded in this set of orthogonal eigenstates for any position r. The source charges can be located anywhere, i.e., inside any of the constituents. This is shown to work even if the eigenfunctions are normalized in an infinite volume. If the microstructure consists of a cluster of separate inclusions in a uniform host medium, then the quasistatic eigenstates of all the separate isolated inclusions can be used to calculate the eigenstates of the total structure as well as the local potential field. Once the eigenstates are known for a given host and a given microstructure, then calculation of the local field only involves calculating three-dimensional integrals of known functions and solving sets of linear algebraic equations.
An efficient unstructured WENO method for supersonic reactive flows
NASA Astrophysics Data System (ADS)
Zhao, Wen-Geng; Zheng, Hong-Wei; Liu, Feng-Jun; Shi, Xiao-Tian; Gao, Jun; Hu, Ning; Lv, Meng; Chen, Si-Cong; Zhao, Hong-Da
2018-03-01
An efficient high-order numerical method for supersonic reactive flows is proposed in this article. The reactive source term and convection term are solved separately by a splitting scheme. In the reaction step, an adaptive time-step method is presented, which can improve the efficiency greatly. In the convection step, a third-order accurate weighted essentially non-oscillatory (WENO) method is adopted to reconstruct the solution on the unstructured grids. Numerical results show that our new method can capture the correct propagation speed of the detonation wave exactly even on coarse grids, while high-order accuracy can be achieved in the smooth region. In addition, the proposed adaptive splitting method can reduce the computational cost greatly compared with the traditional splitting method.
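A minimal one-dimensional sketch of the splitting idea is given below: convection and a stiff reaction source term are advanced separately, with adaptive sub-stepping in the reaction step. First-order upwind stands in for the authors' third-order WENO reconstruction, and the single-species consumption law is purely illustrative.

```python
# Minimal 1-D sketch of operator splitting: advance convection and the stiff
# reaction source term separately, sub-stepping the reaction adaptively.
import numpy as np

nx, L, u = 200, 1.0, 1.0
dx = L / nx
cfl_dt = 0.5 * dx / u
Y = np.where(np.linspace(0.0, L, nx) < 0.1, 1.0, 0.0)   # mass fraction of unburnt fuel

def reaction_rate(Y, k=500.0):
    return -k * Y                       # simple first-order consumption (stiff if k*dt >> 1)

def reaction_substep(Y, dt, tol=1e-3):
    """Integrate dY/dt = R(Y) over dt with adaptive explicit Euler sub-steps."""
    t = 0.0
    while t < dt:
        rate = reaction_rate(Y)
        max_rate = np.max(np.abs(rate)) + 1e-30
        # largest sub-step keeping the absolute change per sub-step below tol
        sub = min(dt - t, tol / max_rate)
        Y = Y + sub * rate
        t += sub
    return Y

for _ in range(100):
    Y = Y - u * cfl_dt / dx * (Y - np.roll(Y, 1))   # convection step (upwind, periodic)
    Y = reaction_substep(Y, cfl_dt)                 # reaction step with adaptive sub-steps
print(float(Y.max()))
```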
Assessments of the contribution of land use change to the dust emission in Central Asia
NASA Astrophysics Data System (ADS)
Xi, X.; Sokolik, I. N.
2015-12-01
While dust emission from arid and semi-arid regions is known as a natural process induced by wind erosion, humans may affect dust emission directly through land use disturbances and indirectly through climate change. There has been much debate on the relative importance of climate change and land use to the global dust budget, as past estimates of the proportion of dust contributed by land use, in particular agricultural practices, remain very uncertain. This largely stems from how human-made dust sources are identified and how they are treated in models. This study attempts to assess the land use contribution to dust emission in Central Asia during 2000-2014 by conducting multiple experiments on the total emission in the WRF-Chem-DuMo model and applying two methods to separate the natural and anthropogenic sources. The model experiments include realistic treatments of agriculture (e.g., expansion and abandonment) and water body changes (e.g., Aral Sea desiccation) in the land cover map of WRF-Chem-DuMo, but impose no arbitrary labeling of dust source type or adjustment to the erosion threshold. Intercomparison of the model experiments will be focused on the magnitude, interannual variability, and climate sensitivity of dust fluxes resulting from the selections of surface input data and dust flux parameterizations. Based on annual land use intensity maps, the sensitivity of the anthropogenic dust proportion to the selection of the threshold value will be evaluated. In conjunction with the empirical method, satellite-derived annual land classifications will be used to track the land cover dynamics and separate potential human-made source areas.
Guo, Yan; Chen, Xinguang; Gong, Jie; Li, Fang; Zhu, Chaoyang; Yan, Yaqiong; Wang, Liang
2016-01-01
Millions of people move from rural areas to urban areas in China to pursue new opportunities while leaving their spouses and children at rural homes. Little is known about the impact of migration-related separation on the mental health of these rural migrants in urban China. Survey data from a random sample of rural-to-urban migrants (n = 1113, aged 18-45) from Wuhan were analyzed. The Domestic Migration Stress Questionnaire (DMSQ), an instrument with four subconstructs, was used to measure migration-related stress. The relationship between spouse/child separation and stress was assessed using survey estimation methods to account for the multi-level sampling design. 16.46% of couples were separated from their spouses (spouse separation only), and 25.81% of parents were separated from their children (child separation only). Among the participants who were married and had children, 5.97% were separated from both their spouses and children (double separation). Participants with spouse separation only or double separation did not score significantly higher on the DMSQ than those with no separation. Compared to parents without child separation, parents with child separation scored significantly higher on the DMSQ (mean score = 2.88, 95% CI: [2.81, 2.95] vs. 2.60 [2.53, 2.67], p < .05). Stratified analysis by separation type and by gender indicated that the association was stronger for child separation only and for female participants. Child separation is an important source of migration-related stress, and the effect is particularly strong for migrant women. Public policies and intervention programs should consider these factors to encourage and facilitate the co-migration of parents with their children to mitigate migration-related stress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radchenko, Valery; Engle, Jonathan Ward; Medvedev, Dmitri G.
Scandium-44g (half-life 3.97 h) shows promise for application in positron emission tomography (PET) due to favorable decay parameters. One of the sources of 44gSc is the 44Ti/44gSc generator, which can conveniently provide this radioisotope on a daily basis at a diagnostic facility. Titanium-44 (half-life 60.0 a), in turn, can be obtained via proton irradiation of scandium metal targets. A substantial 44Ti product batch, however, requires high beam currents, long irradiation times and an elaborate chemical procedure for 44Ti isolation and purification. This study describes the production of a combined 175 MBq (4.7 mCi) batch yield of 44Ti in week-long proton irradiations at the Los Alamos Isotope Production Facility (LANL-IPF) and the Brookhaven Linac Isotope Producer (BNL-BLIP). A two-step ion exchange chromatography based chemical separation method is introduced: first, a coarse separation of 44Ti via anion exchange sorption in concentrated HCl results in a Ti/Sc separation factor of 10²–10³. A second, cation exchange based step in HCl media is then applied for 44Ti fine purification from residual Sc mass. In conclusion, this method yields a 90–97% 44Ti recovery with an overall Ti/Sc separation factor of ≥10⁶.
Feasibility of Higher-Order Differential Ion Mobility Separations Using New Asymmetric Waveforms
Shvartsburg, Alexandre A.; Mashkevich, Stefan V.; Smith, Richard D.
2011-01-01
Technologies for separating and characterizing ions based on their transport properties in gases have been around for three decades. The early method of ion mobility spectrometry (IMS) distinguished ions by absolute mobility that depends on the collision cross section with buffer gas atoms. The more recent technique of field asymmetric waveform IMS (FAIMS) measures the difference between mobilities at high and low electric fields. Coupling IMS and FAIMS to soft ionization sources and mass spectrometry (MS) has greatly expanded their utility, enabling new applications in biomedical and nanomaterials research. Here, we show that time-dependent electric fields comprising more than two intensity levels could, in principle, effect an infinite number of distinct differential separations based on the higher-order terms of expression for ion mobility. These analyses could employ the hardware and operational procedures similar to those utilized in FAIMS. Methods up to the 4th or 5th order (where conventional IMS is 1st order and FAIMS is 2nd order) should be practical at field intensities accessible in ambient air, with still higher orders potentially achievable in insulating gases. Available experimental data suggest that higher-order separations should be largely orthogonal to each other and to FAIMS, IMS, and MS. PMID:16494377
NASA Astrophysics Data System (ADS)
Vlachou, Athanasia; Daellenbach, Kaspar R.; Bozzetti, Carlo; Chazeau, Benjamin; Salazar, Gary A.; Szidat, Soenke; Jaffrezo, Jean-Luc; Hueglin, Christoph; Baltensperger, Urs; El Haddad, Imad; Prévôt, André S. H.
2018-05-01
Carbonaceous aerosols are related to adverse human health effects. Therefore, identification of their sources and analysis of their chemical composition is important. The offline AMS (aerosol mass spectrometer) technique offers quantitative separation of organic aerosol (OA) factors which can be related to major OA sources, either primary or secondary. While primary OA can be more clearly separated into sources, secondary (SOA) source apportionment is more challenging because different sources - anthropogenic or natural, fossil or non-fossil - can yield similar highly oxygenated mass spectra. Radiocarbon measurements provide unequivocal separation between fossil and non-fossil sources of carbon. Here we coupled these two offline methods and analysed the OA and organic carbon (OC) of different size fractions (particulate matter below 10 and 2.5 µm - PM10 and PM2.5, respectively) from the Alpine valley of Magadino (Switzerland) during the years 2013 and 2014 (219 samples). The combination of the techniques gave further insight into the characteristics of secondary OC (SOC) which was rather based on the type of SOC precursor and not on the volatility or the oxidation state of OC, as typically considered. Out of the primary sources separated in this study, biomass burning OC was the dominant one in winter, with average concentrations of 5.36 ± 2.64 µg m-3 for PM10 and 3.83 ± 1.81 µg m-3 for PM2.5, indicating that wood combustion particles were predominantly generated in the fine mode. The additional information from the size-segregated measurements revealed a primary sulfur-containing factor, mainly fossil, detected in the coarse size fraction and related to non-exhaust traffic emissions with a yearly average PM10 (PM2.5) concentration of 0.20 ± 0.24 µg m-3 (0.05 ± 0.04 µg m-3). A primary biological OC (PBOC) was also detected in the coarse mode peaking in spring and summer with a yearly average PM10 (PM2.5) concentration of 0.79 ± 0.31 µg m-3 (0.24 ± 0.20 µg m-3). The secondary OC was separated into two oxygenated, non-fossil OC factors which were identified based on their seasonal variability (i.e. summer and winter oxygenated organic carbon, OOC) and a third anthropogenic OOC factor which correlated with fossil OC mainly peaking in winter and spring, contributing on average 13 % ± 7 % (10 % ± 9 %) to the total OC in PM10 (PM2.5). The winter OOC was also connected to anthropogenic sources, contributing on average 13 % ± 13 % (6 % ± 6 %) to the total OC in PM10 (PM2.5). The summer OOC (SOOC), stemming from oxidation of biogenic emissions, was more pronounced in the fine mode, contributing on average 43 % ± 12 % (75 % ± 44 %) to the total OC in PM10 (PM2.5). In total the non-fossil OC significantly dominated the fossil OC throughout all seasons, by contributing on average 75 % ± 24 % to the total OC. The results also suggested that during the cold period the prevailing source was residential biomass burning while during the warm period primary biological sources and secondary organic aerosol from the oxidation of biogenic emissions became important. However, SOC was also formed by aged fossil fuel combustion emissions not only in summer but also during the rest of the year.
Husfeldt, A W; Endres, M I; Salfer, J A; Janni, K A
2012-04-01
Interest in using recycled manure solids (RMS) as a bedding material for dairy cows has grown in the US Midwest. The cost of common bedding materials has increased in recent years and availability has decreased. Information regarding the composition of RMS and its use as a bedding material for dairy cows in the Midwest is very limited. The objectives of this study were to characterize RMS as a bedding material, observe bedding management practices, document methods of obtaining RMS, and describe housing facilities. We visited 38 Midwest dairy operations bedding freestalls with RMS to collect data. Methods of obtaining RMS for bedding included separation of anaerobically digested manure, separation of raw manure, and separation of raw manure followed by mechanical drum-composting for 18 to 24 h. Average bedding moisture of unused RMS was 72.4% with a pH of 9.16. Unused samples contained (on a dry basis) 1.4% N, 44.9% C, a C:N ratio of 32.7, 0.44% P, 0.70% K, 76.5% neutral detergent fiber, 9.4% ash, 4.4% nonfiber carbohydrates, and 1.1% fat. Moisture was lowest for drum-composted solids before and after use as freestall bedding. After use in the stalls, digested solids had lower neutral detergent fiber content (70.5%) than drum-composted (75.0%) and separated raw (73.1%) solids. Total N content was greater in digested solids (2.0%) than in separated raw (1.7%) solids. Total bacterial populations in unused bedding were greatest in separated raw manure solids but were similar between digested and drum-composted manure solids. Drum-composted manure solids had no coliform bacteria before use as freestall bedding. After use as bedding, digested manure solids had lower total bacteria counts compared with drum-composted and separated raw manure solids, which had similar counts. Used bedding samples of digested solids contained fewer environmental streptococci than drum-composted and separated raw solids and had reduced Bacillus counts compared with separated raw solids. Coliform counts were similar for all 3 bedding sources. Addition of a mechanical blower post-separation and use of a shelter for storage were associated with reduced fresh-bedding moisture but not associated with bacterial counts. This was the first survey of herds using RMS for bedding in the Midwest. We learned that RMS was being used successfully as a source of bedding for dairy cows. For most farms in the study, somatic cell count was comparable to the average in the region and not excessively high.
Apparatus for the liquefaction of natural gas and methods relating to same
Wilding, Bruce M [Idaho Falls, ID; McKellar, Michael G [Idaho Falls, ID; Turner, Terry D [Ammon, ID; Carney, Francis H [Idaho Falls, ID
2009-09-29
An apparatus and method for producing liquefied natural gas. A liquefaction plant may be coupled to a source of unpurified natural gas, such as a natural gas pipeline at a pressure letdown station. A portion of the gas is drawn off and split into a process stream and a cooling stream. The cooling stream passes through an expander creating work output. A compressor may be driven by the work output and compresses the process stream. The compressed process stream is cooled, such as by the expanded cooling stream. The cooled, compressed process stream is divided into first and second portions with the first portion being expanded to liquefy the natural gas. A gas-liquid separator separates the vapor from the liquid natural gas. The second portion of the cooled, compressed process stream is also expanded and used to cool the compressed process stream.
Apparatus for the liquefaction of natural gas and methods relating to same
Turner, Terry D [Ammon, ID; Wilding, Bruce M [Idaho Falls, ID; McKellar, Michael G [Idaho Falls, ID
2009-09-22
An apparatus and method for producing liquefied natural gas. A liquefaction plant may be coupled to a source of unpurified natural gas, such as a natural gas pipeline at a pressure letdown station. A portion of the gas is drawn off and split into a process stream and a cooling stream. The cooling stream passes through an expander creating work output. A compressor may be driven by the work output and compresses the process stream. The compressed process stream is cooled, such as by the expanded cooling stream. The cooled, compressed process stream is expanded to liquefy the natural gas. A gas-liquid separator separates a vapor from the liquid natural gas. A portion of the liquid gas is used for additional cooling. Gas produced within the system may be recompressed for reintroduction into a receiving line or recirculation within the system for further processing.
Apparatus for the liquefaction of a gas and methods relating to same
Turner, Terry D [Idaho Falls, ID; Wilding, Bruce M [Idaho Falls, ID; McKellar, Michael G [Idaho Falls, ID
2009-12-29
Apparatuses and methods are provided for producing liquefied gas, such as liquefied natural gas. In one embodiment, a liquefaction plant may be coupled to a source of unpurified natural gas, such as a natural gas pipeline at a pressure letdown station. A portion of the gas is drawn off and split into a process stream and a cooling stream. The cooling stream may sequentially pass through a compressor and an expander. The process stream may also pass through a compressor. The compressed process stream is cooled, such as by the expanded cooling stream. The cooled, compressed process stream is expanded to liquefy the natural gas. A gas-liquid separator separates the vapor from the liquid natural gas. A portion of the liquid gas may be used for additional cooling. Gas produced within the system may be recompressed for reintroduction into a receiving line.
NASA Astrophysics Data System (ADS)
de Vries, Diemer; Hörchens, Lars; Grond, Peter
2007-12-01
The state of the art of wave field synthesis (WFS) systems is that they can reproduce sound sources and secondary (mirror image) sources with natural spaciousness in a horizontal plane, and thus perform satisfactory 2D auralization of an enclosed space, based on multitrace impulse response data measured or simulated along a 2D microphone array. However, waves propagating with a nonzero elevation angle are also reproduced in the horizontal plane, which is neither physically nor perceptually correct. In most listening environments to be auralized, the floor is highly absorptive since it is covered with upholstered seats, occupied during performances by a well-dressed audience. A first-order ceiling reflection, reaching the floor directly or via a wall, will be severely damped and will not play a significant role in the room response anymore. This means that a spatially correct WFS reproduction of first-order ceiling reflections, by means of a loudspeaker array at the ceiling of the auralization reproduction room, is necessary and probably sufficient to create the desired 3D spatial perception. To determine the driving signals for the loudspeakers in the ceiling array, it is necessary to identify the relevant ceiling reflection(s) in the multichannel impulse response data and separate those events from the data set. Two methods are examined to identify, separate, and reproduce the relevant reflections: application of the Radon transform, and decomposition of the data into cylindrical harmonics. Application to synthesized and measured data shows that both methods in principle are able to identify, separate, and reproduce the relevant events.
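As a rough illustration of the first identification approach, the sketch below applies a Radon transform (via scikit-image) to a synthetic multichannel record containing one dipping, plane-wave-like event; the geometry and amplitudes are invented and this is not the authors' WFS processing chain.

```python
# Minimal sketch of using a Radon (slant-stack style) transform to locate a
# dominant plane-wave event in a synthetic multichannel record.
import numpy as np
from skimage.transform import radon

nt, nx = 256, 256
gather = np.zeros((nt, nx))
for ix in range(nx):                       # a single dipping (plane-wave-like) event
    it = int(40 + 0.35 * ix)
    if it < nt:
        gather[it, ix] = 1.0

angles = np.linspace(0.0, 180.0, 181)
sinogram = radon(gather, theta=angles, circle=False)

# A linear event maps to a focused maximum at one projection angle and offset.
peak = np.unravel_index(np.argmax(sinogram), sinogram.shape)
print("peak at projection angle ~ %.1f deg" % angles[peak[1]])
```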
Cassava: a basic energy source in the tropics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cock, J.H.
1982-11-19
Cassava (Manihot esculenta) is the fourth most important source of food energy in the tropics. More than two-thirds of the total production of this crop is used as food for humans, with lesser amounts being used for animal feed and industrial purposes. The ingestion of high levels of cassava has been associated with chronic cyanide toxicity in parts of Africa, but this appears to be related to inadequate processing of the root and poor overall nutrition. Although cassava is not a complete food it is important as a cheap source of calories. The crop has a high yield potential under good conditions, and compared to other crops it excels under suboptimal conditions, thus offering the possibility of using marginal land to increase total agricultural production. Breeding programs that bring together germ plasm from different regions coupled with improved agronomic practices can markedly increase yields. The future demand for fresh cassava may depend on improved storage methods. The markets for cassava as a substitute for cereal flours in bakery products and as an energy source in animal feed rations are likely to expand. The use of cassava as a source of ethanol for fuel depends on finding an efficient source of energy for distillation or an improved method of separating ethanol from water. 7 figures, 8 tables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haeberli, W.
1981-04-01
This paper presents a survey of methods, commonly in use or under development, to produce beams of polarized negative ions for injection into accelerators. A short summary recalls how the hyperfine interaction is used to obtain nuclear polarization in beams of atoms. Atomic-beam sources for light ions are discussed. If the best presently known techniques are incorporated in all stages of the source, polarized H⁻ and D⁻ beams in excess of 10 µA can probably be achieved. Production of polarized ions from fast (keV) beams of polarized atoms is treated separately for atoms in the H(2S) excited state (Lamb-Shift source) and atoms in the H(1S) ground state. The negative ion beam from Lamb-Shift sources has reached a plateau just above 1 µA, but this beam current is adequate for many applications and the somewhat lower beam current is compensated by other desirable characteristics. Sources using fast polarized ground state atoms are in a stage of intense development. The next sections summarize production of polarized heavy ions by the atomic beam method, which is well established, and by optical pumping, which has recently been demonstrated to yield very large nuclear polarization. A short discussion of proposed ion sources for polarized ³He⁻ ions is followed by some concluding remarks.
Benefits of rotational ground motions for planetary seismology
NASA Astrophysics Data System (ADS)
Donner, S.; Joshi, R.; Hadziioannou, C.; Nunn, C.; van Driel, M.; Schmelzbach, C.; Wassermann, J. M.; Igel, H.
2017-12-01
Exploring the internal structure of planetary objects is fundamental to understand the evolution of our solar system. In contrast to Earth, planetary seismology is hampered by the limited number of stations available, often just a single one. Classic seismology is based on the measurement of three components of translational ground motion. Its methods are mainly developed for a larger number of available stations. Therefore, the application of classical seismological methods to other planets is very limited. Here, we show that the additional measurement of three components of rotational ground motion could substantially improve the situation. From sparse or single station networks measuring translational and rotational ground motions it is possible to obtain additional information on structure and source. This includes direct information on local subsurface seismic velocities, separation of seismic phases, propagation direction of seismic energy, crustal scattering properties, as well as moment tensor source parameters for regional sources. The potential of this methodology will be highlighted through synthetic forward and inverse modeling experiments.
Pulsed voltage electrospray ion source and method for preventing analyte electrolysis
Kertesz, Vilmos [Knoxville, TN; Van Berkel, Gary [Clinton, TN
2011-12-27
An electrospray ion source and method of operation includes the application of pulsed voltage to prevent electrolysis of analytes with a low electrochemical potential. The electrospray ion source can include an emitter, a counter electrode, and a power supply. The emitter can include a liquid conduit, a primary working electrode having a liquid contacting surface, and a spray tip, where the liquid conduit and the working electrode are in liquid communication. The counter electrode can be proximate to, but separated from, the spray tip. The power system can supply voltage to the working electrode in the form of a pulse wave, where the pulse wave oscillates between at least an energized voltage and a relaxation voltage. The relaxation duration of the relaxation voltage can range from 1 millisecond to 35 milliseconds. The pulse duration of the energized voltage can be less than 1 millisecond and the frequency of the pulse wave can range from 30 to 800 Hz.
Hsu, John S.
2010-05-18
A method and apparatus in which a stator (11) and a rotor (12) define a primary air gap (20) for receiving AC flux and at least one source (23, 40), and preferably two sources (23, 24, 40) of DC excitation are positioned for inducing DC flux at opposite ends of the rotor (12). Portions of PM material (17, 17a) are provided as boundaries separating PM rotor pole portions from each other and from reluctance poles. The PM poles (18) and the reluctance poles (19) can be formed with poles of one polarity having enlarged flux paths in relation to flux paths for pole portions of an opposite polarity, the enlarged flux paths communicating with a core of the rotor (12) so as to increase reluctance torque produced by the electric machine. Reluctance torque is increased by providing asymmetrical pole faces. The DC excitation can also use asymmetric poles and asymmetric excitation sources. Several embodiments are disclosed with additional variations.
The effect of barriers on wave propagation phenomena: With application for aircraft noise shielding
NASA Technical Reports Server (NTRS)
Mgana, C. V. M.; Chang, I. D.
1982-01-01
The frequency spectrum was divided into high and low frequency regimes, and two separate methods were developed and applied to account for physical factors associated with flight conditions. For long wave propagation, the acoustic field due to a point source near a solid obstacle was treated in terms of an inner region, where the fluid motion is essentially incompressible, and an outer region, which is a linear acoustic field generated by hydrodynamic disturbances in the inner region. This method was applied to the case of a finite slotted plate modelled to represent a wing with extended flap, for both stationary and moving media. Ray acoustics, the Kirchhoff integral formulation, and the stationary phase approximation were combined to study short wavelength propagation in many limiting cases as well as in the case of a semi-infinite plate in a uniform flow velocity with a point source above the plate and embedded in a different flow velocity to simulate an engine exhaust jet stream surrounding the source.
Wang, Rui; Jin, Xin; Wang, Ziyuan; Gu, Wantao; Wei, Zhechao; Huang, Yuanjie; Qiu, Zhuang; Jin, Pengkang
2018-01-01
This paper proposes a new system of multilevel reuse with source separation in printing and dyeing wastewater (PDWW) treatment in order to dramatically improve the water reuse rate to 35%. By analysing the characteristics of the sources and concentrations of pollutants produced in different printing and dyeing processes, special, highly, and less contaminated wastewaters (SCW, HCW, and LCW, respectively) were collected and treated separately. Specifically, a large quantity of LCW was sequentially reused at multiple levels to meet the water quality requirements for different production processes. Based on this concept, a multilevel reuse system with a source separation process was established in a typical printing and dyeing enterprise. The water reuse rate increased dramatically to 62%, and the reclaimed water was reused in different printing and dyeing processes based on the water quality. This study provides promising leads in water management for wastewater reclamation.
A review of vacuum ARC ion source research at ANSTO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, P.J.; Noorman, J.T.; Watt, G.C.
1996-08-01
The author's talk briefly describes the history and current status of vacuum arc ion source research at the Australian Nuclear Science and Technology Organization (ANSTO). In addition, the author makes some mention of the important role of previous Vacuum Arc Ion Source Workshops in fostering the development of this research field internationally. During the period 1986-89, a type of plasma centrifuge known as a vacuum arc centrifuge was developed at ANSTO as part of a research project on stable isotope separation. In this device, a high current vacuum arc discharge was used to produce a metal plasma which was subsequently rotated in an axial magnetic field. The high rotational speeds (10⁵-10⁶ rad sec⁻¹) achievable with this method produce centrifugal separation of ions with different mass:charge ratios, such as isotopic species. The first portent of things to come occurred in 1985 when Dr. Ian Brown visited ANSTO's Lucas Heights Research Laboratories and presented a talk on the metal vapour vacuum arc (MEVVA) ion source, which had only recently been invented by Brown and co-workers, J. Galvin and R. MacGill, at Lawrence Berkeley Laboratory. For those of us involved in vacuum arc centrifuge research, this was an exciting development, primarily because the metal vapour vacuum arc plasma source was common to both devices. Thus, a type of arc which had since the 1930s been extensively investigated as a means of switching high current loads had found wider application as a useful plasma source.
A survey of the determination of the platinum group elements.
Kallmann, S
1987-08-01
The platinum-group metals (PGMs), Ru, Rh, Pd, Os, Ir and Pt, are widely used as catalysts in petroleum and chemical processes. They find wide applications in automotive exhaust-gas control converters and are of immense importance to the electronics industry. They are found in many items of jewellery and serve to an increasing extent as a form of investment. The PGMs are extracted in minute quantities from a limited number of ores, found mainly in S. Africa and the USSR. They are concentrated and separated from each other by elaborate chemical processes. Because of their great intrinsic value (Pt $650 per oz; Rh $1400 per oz), the recycling of the PGMs from literally hundreds of different forms of scrap is an essential factor in the overall management of the PGM economy. In this survey emphasis is placed on the need to tailor the analytical method according to (a) the environment in which the PGMs occur, (b) the individual PGM concentrations, and (c) the desired sensitivity and precision. The factors which determine the choice of chemical, physicochemical and/or instrumental approaches are discussed. They are further commented on in extensive presentations of dissolution and separation techniques and methods for the final measurement of individual PGMs. Appendices are provided which present the compositions and sources of the products most frequently encountered in PGM analysis, along with information on methods of decomposition, separations required, type of separation, and final determination.
OGLE-2016-BLG-1469L: Microlensing Binary Composed of Brown Dwarfs
NASA Astrophysics Data System (ADS)
Han, C.; Udalski, A.; Sumi, T.; Gould, A.; Albrow, M. D.; Chung, S.-J.; Jung, Y. K.; Ryu, Y.-H.; Shin, I.-G.; Yee, J. C.; Zhu, W.; Cha, S.-M.; Kim, S.-L.; Kim, D.-J.; Lee, C.-U.; Lee, Y.; Park, B.-G.; KMTNet Collaboration; Soszyński, I.; Mróz, P.; Pietrukowicz, P.; Szymański, M. K.; Skowron, J.; Poleski, R.; Kozłowski, S.; Ulaczyk, K.; Pawlak, M.; OGLE Collaboration; Abe, F.; Asakura, Y.; Bennett, D. P.; Bond, I. A.; Bhattacharya, A.; Donachie, M.; Freeman, M.; Fukui, A.; Hirao, Y.; Itow, Y.; Koshimoto, N.; Li, M. C. A.; Ling, C. H.; Masuda, K.; Matsubara, Y.; Muraki, Y.; Nagakane, M.; Ohnishi, K.; Oyokawa, H.; Rattenbury, N. J.; Saito, To.; Sharan, A.; Sullivan, D. J.; Suzuki, D.; Tristram, P. J.; Yamada, T.; Yamada, T.; Yonehara, A.; Barry, R.; MOA Collaboration
2017-07-01
We report the discovery of a binary composed of two brown dwarfs, based on the analysis of the microlensing event OGLE-2016-BLG-1469. Thanks to the detection of both finite-source and microlens-parallax effects, we are able to measure both the masses M1 ~ 0.05 M⊙ and M2 ~ 0.01 M⊙, and the distance DL ~ 4.5 kpc, as well as the projected separation a⊥ ~ 0.33 au. This is the third brown-dwarf binary detected using the microlensing method, demonstrating the usefulness of microlensing in detecting field brown-dwarf binaries with separations of less than 1 au.
Mulert, C; Juckel, G; Augustin, H; Hegerl, U
2002-10-01
The loudness dependency of the auditory evoked potentials (LDAEP) is used as an indicator of the central serotonergic system and predicts clinical response to serotonin agonists. So far, LDAEP has typically been investigated with dipole source analysis, because with this method the primary and secondary auditory cortex (with high versus low serotonergic innervation) can be separated at least in part. We have developed a new analysis procedure that uses an MRI probabilistic map of the primary auditory cortex in Talairach space and analyzed the current density in this region of interest with low resolution electromagnetic tomography (LORETA). LORETA is a tomographic localization method that calculates the current density distribution in Talairach space. In a group of patients with major depression (n=15), this new method can predict the response to a selective serotonin reuptake inhibitor (citalopram) to at least the same degree as the traditional dipole source analysis method (P=0.019 vs. P=0.028). The improvement on the Hamilton Scale correlates significantly with the LORETA LDAEP values (0.56; P=0.031) but not with the dipole source analysis LDAEP values (0.43; P=0.11). The new tomographic LDAEP analysis is a promising tool in the analysis of the central serotonergic system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yagi, Mamiko; Ito, Mitsuki; Shirakashi, Jun-ichi, E-mail: shrakash@cc.tuat.ac.jp
We report a new method for fabrication of Ni nanogaps based on electromigration induced by a field emission current. This method is called “activation” and is demonstrated here using a current source with alternately reversing polarities. The activation procedure with alternating current bias, in which the current source polarity alternates between positive and negative bias conditions, is performed with planar Ni nanogaps defined on SiO2/Si substrates at room temperature. During negative biasing, a Fowler-Nordheim field emission current flows from the source (cathode) to the drain (anode) electrode. The Ni atoms at the tip of the drain electrode are thus activated and then migrate across the gap from the drain to the source electrode. In contrast, in the positive bias case, the field emission current moves the activated atoms from the source to the drain electrode. These two procedures are repeated until the tunnel resistance of the nanogaps is successively reduced from 100 TΩ to 48 kΩ. Scanning electron microscopy and atomic force microscopy studies showed that the gap separation narrowed from approximately 95 nm to less than 10 nm because of the Ni atoms that accumulated at the tips of both the source and drain electrodes. These results show that the alternately biased activation process, which is a newly proposed atom transfer technique, can successfully control the tunnel resistance of the Ni nanogaps and is a suitable method for formation of ultrasmall nanogap structures.
Directly imaging steeply-dipping fault zones in geothermal fields with multicomponent seismic data
Chen, Ting; Huang, Lianjie
2015-07-30
For characterizing geothermal systems, it is important to have clear images of steeply-dipping fault zones because they may confine the boundaries of geothermal reservoirs and influence hydrothermal flow. Elastic reverse-time migration (ERTM) is the most promising tool for subsurface imaging with multicomponent seismic data. However, conventional ERTM usually generates significant artifacts caused by the cross correlation of undesired wavefields and the polarity reversal of shear waves. In addition, it is difficult for conventional ERTM to directly image steeply-dipping fault zones. We develop a new ERTM imaging method in this paper to reduce these artifacts and directly image steeply-dipping fault zones. In our new ERTM method, forward-propagated source wavefields and backward-propagated receiver wavefields are decomposed into compressional (P) and shear (S) components. Furthermore, each component of these wavefields is separated into left- and right-going, or downgoing and upgoing waves. The cross correlation imaging condition is applied to the separated wavefields along opposite propagation directions. For converted waves (P-to-S or S-to-P), the polarity correction is applied to the separated wavefields based on the analysis of Poynting vectors. Numerical imaging examples of synthetic seismic data demonstrate that our new ERTM method produces high-resolution images of steeply-dipping fault zones.
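The sketch below illustrates one standard way to realize the directional separation used in such an imaging condition: splitting a synthetic two-dimensional wavefield section into up- and down-going parts in the frequency-wavenumber domain. It is a simplified stand-in, not the authors' implementation, and the quadrant assignment depends on the FFT sign convention.

```python
# Minimal sketch of splitting a (time, depth) wavefield section into up- and
# down-going parts in the f-k domain. Data are synthetic plane waves.
import numpy as np

nt, nz, dt, dz = 256, 256, 0.002, 5.0
t = np.arange(nt) * dt
z = np.arange(nz) * dz
v = 2000.0
# One down-going (amplitude 1) and one up-going (amplitude 0.5) plane wave.
field = (np.sin(2 * np.pi * 25.0 * (t[:, None] - z[None, :] / v)) +
         0.5 * np.sin(2 * np.pi * 25.0 * (t[:, None] + z[None, :] / v)))

F = np.fft.fft2(field)
f = np.fft.fftfreq(nt, dt)[:, None]       # temporal frequencies
kz = np.fft.fftfreq(nz, dz)[None, :]      # vertical wavenumbers

# With numpy's FFT convention, phases of the form (t - z/v) fall where f*kz < 0.
down_mask = (f * kz) < 0
up_mask = (f * kz) > 0

down = np.real(np.fft.ifft2(F * down_mask))
up = np.real(np.fft.ifft2(F * up_mask))
print(np.std(down), np.std(up))           # roughly a 2:1 amplitude ratio expected
```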
NASA Astrophysics Data System (ADS)
Bowers, W.; Mercer, J.; Pleasants, M.; Williams, D. G.
2017-12-01
Isotopic partitioning of water within soil into tightly and loosely bound fractions has been proposed to explain differences between the isotopic water sources used by plants and those that contribute to streams and groundwater, the basis for the "two water worlds" hypothesis. We examined the isotope ratio values of water in trees, bulk soil, mobile water collected from soil lysimeters, stream water, and groundwater at three different hillslopes in a mixed conifer forest in southeastern Wyoming, USA. Hillslopes differed in aspect and topographic position, with corresponding differences in surface energy balance, snowmelt timing, and duration of soil moisture during the dry summer. The isotopic results support the partitioning of water within the soil; trees apparently used a different pool of water for transpiration than that recovered from soil lysimeters, and the source was not resolved with the isotopic signature of the water extracted from bulk soil via cryogenic vacuum distillation. Separating and measuring the isotope ratio values in these pools would test the assumption that the tightly bound water within the soil has the same isotopic signature as the water transpired by the trees. We employed a centrifugation approach to separate water within the soil held at different tensions by applying stepwise increases in rotational velocity and pressures to the bulk soil samples. Effluent and the remaining water (cryogenically extracted) at each step were compared. We first applied the centrifugation method in a simple lab experiment using sandy loam soil and separate introductions of two isotopically distinct waters. We then applied the method to soil collected from the montane hillslopes. For the lab experiment, we predicted that effluents would have distinct isotopic signatures, with the last effluent and extracted water more closely representing the isotopic signature of the first water applied. For our field samples, we predicted that the isotopic signature of the water discharged in the last centrifuge step and final extraction would more closely represent the isotopic signature of water extracted from trees. Understanding the isotopic partitioning of water within soil is important for interpreting plant water isotope values within the context of the "two water worlds" hypothesis.
Chen, Chih-Chung; Chen, Yu-An; Liu, Yi-Ju; Yao, Da-Jeng
2014-04-21
Microalgae species have great economic importance; they are a source of medicines, health foods, animal feeds, industrial pigments, cosmetic additives and biodiesel. Specific microalgae species collected from the environment must be isolated for examination and further application, but their varied size and culture conditions make their isolation using conventional methods, such as filtration, streaking plate and flow cytometric sorting, labour-intensive and costly. A separation device based on size is one of the most rapid, simple and inexpensive methods to separate microalgae, but this approach encounters the major disadvantages of clogging and multiple filtration steps when the size of the microalgae varies over a wide range. In this work, we propose a multilayer concentric filter device with varied pore sizes that is driven by centrifugal force. The device, which includes multiple filter layers, was employed to separate a heterogeneous population of microparticles into several subpopulations by filtration in one step. A cross-flow to attenuate prospective clogging was generated by altering the rate of rotation instantly through the relative motion between the fluid and the filter, according to the structural design of the device. Mixed microparticles of varied size were tested to demonstrate that clogging was significantly suppressed due to a highly efficient separation. Microalgae in a heterogeneous population collected from an environmental soil collection were separated and enriched into four subpopulations according to size in a one-step filtration process. A microalgae sample contaminated with bacteria and insect eggs was also tested to prove the decontamination capability of the device.
Zhan, Lu; Xu, Zhenming
2014-12-01
Vacuum metallurgy separation (VMS) is a technically feasible method to recover Pb, Cd and other heavy metals from crushed e-wastes. To further determine the environmental impacts and safety of this method, heavy metals exposure, noise and thermal safety in the ambiance of a vacuum metallurgy separation system are evaluated in this article. The mass concentrations of total suspended particulate (TSP) and PM10 are 0.1503 and 0.0973 mg m⁻³ near the facilities. The concentrations of Pb, Cd and Sn in TSP samples are 0.0104, 0.1283 and 0.0961 μg m⁻³, respectively. Health risk assessments show that the hazard index of Pb is 3.25 × 10⁻¹ and that of Cd is 1.09 × 10⁻¹. The carcinogenic risk of Cd through inhalation is 1.08 × 10⁻⁵. The values of the hazard index and risk indicate that Pb and Cd will not cause non-cancerous effects or carcinogenic risk to workers. The noise sources are mainly the mechanical vacuum pump and the water cooling pump; both have noise levels below 80 dB(A). The thermal safety assessment shows that the temperatures of the vacuum metallurgy separation system surface are all below 303 K after adopting the circulated water cooling and heat insulation measures. This study provides environmental information on the vacuum metallurgy separation system, which helps to promote the industrialisation of vacuum metallurgy separation for recovering heavy metals from e-wastes.
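For readers unfamiliar with how figures like the hazard index and carcinogenic risk above are obtained, the following sketch shows the standard inhalation-exposure arithmetic (chronic daily intake, hazard quotient, and slope-factor risk). All exposure assumptions and toxicity values in the code are illustrative placeholders, not the values used in this study.

```python
# Minimal sketch of the standard inhalation risk arithmetic. All parameter
# values below are illustrative placeholders, not the study's inputs.
C_pb = 0.0104e-3      # Pb concentration in air, mg/m^3 (converted from ug/m^3)
C_cd = 0.1283e-3      # Cd concentration in air, mg/m^3

# Hypothetical exposure assumptions for a worker scenario.
inhalation_rate = 20.0    # m^3/day
exposure_days = 250       # days/year
exposure_years = 25
body_weight = 70.0        # kg
avg_time_noncancer = exposure_years * 365
avg_time_cancer = 70 * 365

def average_daily_dose(C, avg_time_days):
    """Chronic daily intake via inhalation, mg/(kg*day)."""
    return (C * inhalation_rate * exposure_days * exposure_years) / (body_weight * avg_time_days)

# Illustrative reference doses and slope factor; look up authoritative values for real use.
rfd_pb, rfd_cd = 3.5e-3, 1.0e-3          # mg/(kg*day)
slope_factor_cd = 6.3                    # (mg/(kg*day))^-1

hq_pb = average_daily_dose(C_pb, avg_time_noncancer) / rfd_pb
hq_cd = average_daily_dose(C_cd, avg_time_noncancer) / rfd_cd
cancer_risk_cd = average_daily_dose(C_cd, avg_time_cancer) * slope_factor_cd

print(f"HQ(Pb)={hq_pb:.2e}  HQ(Cd)={hq_cd:.2e}  risk(Cd)={cancer_risk_cd:.2e}")
# Hazard quotients below 1 and risks below ~1e-4 are commonly read as acceptable.
```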
Quantification of source uncertainties in Seismic Probabilistic Tsunami Hazard Analysis (SPTHA)
NASA Astrophysics Data System (ADS)
Selva, J.; Tonini, R.; Molinari, I.; Tiberti, M. M.; Romano, F.; Grezio, A.; Melini, D.; Piatanesi, A.; Basili, R.; Lorito, S.
2016-06-01
We propose a procedure for uncertainty quantification in Probabilistic Tsunami Hazard Analysis (PTHA), with a special emphasis on the uncertainty related to statistical modelling of the earthquake source in Seismic PTHA (SPTHA), and on the separate treatment of subduction and crustal earthquakes (treated as background seismicity). An event tree approach and ensemble modelling are used in place of more classical approaches, such as the hazard integral and the logic tree. This procedure consists of four steps: (1) exploration of aleatory uncertainty through an event tree, with alternative implementations for exploring epistemic uncertainty; (2) numerical computation of tsunami generation and propagation up to a given offshore isobath; (3) (optional) site-specific quantification of inundation; (4) simultaneous quantification of aleatory and epistemic uncertainty through ensemble modelling. The proposed procedure is general and independent of the kind of tsunami source considered; however, we implement step 1, the event tree, specifically for SPTHA, focusing on seismic source uncertainty. To exemplify the procedure, we develop a case study considering seismic sources in the Ionian Sea (central-eastern Mediterranean Sea), using the coasts of Southern Italy as a target zone. The results show that an efficient and complete quantification of all the uncertainties is feasible even when treating a large number of potential sources and a large set of alternative model formulations. We also find that (i) treating subduction and background (crustal) earthquakes separately allows for optimal use of available information and for avoiding significant biases; (ii) both subduction interface and crustal faults contribute to the SPTHA, with different proportions that depend on source-target position and tsunami intensity; (iii) the proposed framework allows sensitivity and deaggregation analyses, demonstrating the applicability of the method for operational assessments.
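A minimal sketch of the ensemble-modelling step (step 4) is given below: hazard curves from alternative model formulations, each carrying a weight, are combined into a weighted mean curve and epistemic percentiles. The curves, weights, and intensity thresholds are invented for illustration.

```python
# Minimal sketch of combining alternative hazard curves (epistemic branches)
# into a weighted mean curve and epistemic percentiles. Numbers are made up.
import numpy as np

intensities = np.array([0.5, 1.0, 2.0, 4.0])        # e.g. tsunami height thresholds (m)

# Annual exceedance probabilities from three alternative model formulations.
curves = np.array([
    [1e-2, 4e-3, 8e-4, 1e-4],
    [2e-2, 6e-3, 1e-3, 2e-4],
    [8e-3, 3e-3, 5e-4, 6e-5],
])
weights = np.array([0.5, 0.3, 0.2])                  # must sum to 1

mean_curve = weights @ curves                        # weighted mean hazard curve

def weighted_percentile(values, weights, q):
    order = np.argsort(values)
    cum = np.cumsum(weights[order])
    return values[order][np.searchsorted(cum, q)]

p16 = [weighted_percentile(curves[:, j], weights, 0.16) for j in range(len(intensities))]
p84 = [weighted_percentile(curves[:, j], weights, 0.84) for j in range(len(intensities))]
print(mean_curve, p16, p84, sep="\n")
```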
NASA Technical Reports Server (NTRS)
Boggs, S. E.; Lin, R. P.; Coburn, W.; Feffer, P.; Pelling, R. M.; Schroeder, P.; Slassi-Sennou, S.
1997-01-01
The balloon-borne high resolution gamma ray and X-ray germanium spectrometer (HIREGS) was used to observe the Galactic center and two positions along the Galactic plane from Antarctica in January 1995. For its flight, the collimators were configured to measure the Galactic diffuse hard X-ray continuum between 20 and 200 keV by directly measuring the point source contributions to the wide field of view flux for subtraction. The hard X-ray spectra of GX 1+4 and GRO J1655-40 were measured with the diffuse continuum subtracted off. The analysis technique for source separation is discussed and the preliminary separated spectra for these point sources and the Galactic diffuse emission are presented.
Source Separation and Treatment of Anthropogenic Urine (WERF Report INFR4SG09b)
Abstract: Anthropogenic urine, although only 1% of domestic wastewater flow, is responsible for 50-80% of the nutrients and a substantial portion of the pharmaceuticals and hormones present in the influent to wastewater treatment plants. Source separation and treatment of urine...
Aeroacoustic model of a modulation fan with pitching blades as a sound generator.
Du, Lin; Jing, Xiaodong; Sun, Xiaofeng; Song, Weihua
2014-10-01
This paper develops an aeroacoustic model for a type of modulation fan, termed a rotary subwoofer, that is capable of radiating low-frequency sound at high sound pressure levels. The rotary subwoofer is modeled as a baffled monopole whose source strength is specified by the fluctuating mass flow rate produced by the pitching blades that rotate at constant speed. An immersed boundary method is established to simulate the detailed unsteady flow around the blades and also to estimate the source strength for the prediction of the far-field sound pressure level (SPL). The numerical simulation shows that the rotary subwoofer can output an oscillating air flow that is in phase with the pitching motion of the blades. It is found that flow separation is more likely to occur on the pitching blades at higher modulation frequency, resulting in a reduction of the radiated SPL. Increasing the maximum blade excursion is one of the most effective means to enhance the sound radiation, but this effect can also be compromised by the flow separation. As the modulation frequency increases, correspondingly increasing the rotational speed or using larger blade solidity is beneficial for suppressing the flow separation and thus improving the acoustic performance of the rotary subwoofer.
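As a back-of-the-envelope companion to the model above, the sketch below estimates the far-field SPL of a baffled monopole driven by a sinusoidally modulated volume flow; the modulation frequency, flow amplitude, and distance are assumed values, not those of the paper.

```python
# Minimal sketch of the far-field estimate for a baffled monopole whose source
# strength is a sinusoidally modulated volume flow. All inputs are illustrative.
import numpy as np

rho0 = 1.21          # air density, kg/m^3
p_ref = 20e-6        # reference pressure, Pa
f = 20.0             # modulation frequency, Hz
Q0 = 0.05            # peak volume-flow amplitude, m^3/s (hypothetical)
r = 1.0              # observation distance, m

omega = 2 * np.pi * f
# Baffled monopole radiates into a half space, so 2*pi*r instead of 4*pi*r.
p_amp = rho0 * omega * Q0 / (2 * np.pi * r)
spl = 20 * np.log10(p_amp / np.sqrt(2) / p_ref)
print(f"SPL at {r} m ~ {spl:.1f} dB")
```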
Eriksson, Ola; Bisaillon, Mattias; Haraldsson, Mårten; Sundberg, Johan
2016-06-15
Management of municipal solid waste is an efficient way to increase resource efficiency and to replace fossil fuels with renewable energy sources, because (1) waste is to a large extent renewable, as it consists of food waste, paper, wood, etc., and (2) when energy and materials are recovered from waste treatment, fossil fuels can be substituted. In this paper, results from a comprehensive system study of future biological treatment of readily degradable waste in two Swedish regions are presented. Different collection and separation systems for food waste in households have been applied, as well as technical improvements of the biogas process to reduce environmental impact. The results show that central sorting of a mixed fraction into recyclables, combustibles, biowaste and inert material is a competitive option compared to source separation. Use of pellets is beneficial compared to direct spreading as fertiliser. Fuel pellets seem to be the most favourable option, which to a large extent depends on the circumstances in the energy system. Separation and utilisation of nitrogen in the wet part of the digestion residue is made possible with a number of technologies, which decreases environmental impact drastically, though at a substantial cost in some cases.
Separation of Migration and Tomography Modes of Full-Waveform Inversion in the Plane Wave Domain
NASA Astrophysics Data System (ADS)
Yao, Gang; da Silva, Nuno V.; Warner, Michael; Kalinicheva, Tatiana
2018-02-01
Full-waveform inversion (FWI) includes both migration and tomography modes. The migration mode acts like a nonlinear least-squares migration to map model interfaces with reflections, while the tomography mode behaves like tomography to build a background velocity model. The migration mode is the main response to inverting reflections, while the tomography mode arises in response to inverting both reflections and refractions. To emphasize one of the two modes in FWI, especially when inverting reflections, the separation of the two modes in the gradient of FWI is required. Here we present a new method to achieve this separation with an angle-dependent filtering technique in the plane wave domain. We first transform the source and residual wavefields into the plane wave domain with the Fourier transform and then decompose them into the migration and tomography components using the opening angles between the transformed source and residual plane waves. The opening angles close to 180° contribute to the tomography component, while the others correspond to the migration component. We find that this approach is very effective and robust even when the medium is relatively complicated, with strong lateral heterogeneities, highly dipping reflectors, and strong anisotropy. This is well demonstrated by theoretical analysis and numerical tests with a synthetic data set and a field data set.
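The sketch below conveys the idea of the separation using local propagation directions estimated from a Poynting-vector proxy rather than the plane-wave-domain filter the authors describe; contributions with opening angles near 180° are routed to the tomography part and the rest to the migration part. The synthetic wavefields and cutoff angle are illustrative.

```python
# Simplified stand-in for the mode separation: split one time-slice of the
# zero-lag cross-correlation gradient by the opening angle between source and
# residual propagation directions, estimated from a Poynting-vector proxy.
import numpy as np

def poynting_direction(wf_prev, wf_curr, dt):
    """Unit propagation directions from the acoustic proxy -u_t * grad(u)."""
    dudt = (wf_curr - wf_prev) / dt
    gz, gx = np.gradient(wf_curr)
    pz, px = -dudt * gz, -dudt * gx
    norm = np.sqrt(pz**2 + px**2) + 1e-12
    return pz / norm, px / norm

def split_kernel(src_prev, src_curr, res_prev, res_curr, dt, angle_cut_deg=160.0):
    """Split one time-slice contribution into (tomography, migration) parts."""
    sz, sx = poynting_direction(src_prev, src_curr, dt)
    rz, rx = poynting_direction(res_prev, res_curr, dt)
    cos_open = sz * rz + sx * rx                   # cosine of the opening angle
    kernel = src_curr * res_curr
    tomo = np.where(cos_open < np.cos(np.radians(angle_cut_deg)), kernel, 0.0)
    return tomo, kernel - tomo

# Tiny synthetic check: source and residual plane waves travelling in opposite
# directions should land almost entirely in the tomography part.
z, x = np.meshgrid(np.arange(64, dtype=float), np.arange(64, dtype=float), indexing="ij")
k, omega, dt = 0.2, 2 * np.pi * 10.0, 1e-3
src = [np.sin(k * (z + x) - omega * t) for t in (0.0, dt)]
res = [np.sin(k * (z + x) + omega * t) for t in (0.0, dt)]
tomo, mig = split_kernel(src[0], src[1], res[0], res[1], dt)
print(np.abs(tomo).sum(), np.abs(mig).sum())
```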
THE DETERMINATION OF TRACES OF BORON IN ZIRCONIUM METAL AND ZIRCONIUM ALLOYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayes, M.R.; Metcalfe, J.
1962-01-01
A general procedure is given for the determination of B, down to 0.2 ppm, in Zr and Zr alloys. Separation of the B is not necessary, the B-curcumin complex being formed directly in an aliquot of the metal sulfate solution. An interference effect has been noted when analyzing Zr alloys containing Sn. The interference is caused by an insoluble compound of curcumin which separates and has similar properties to the B-curcumin complex. This source of interference is, however, readily eliminated during the procedure for the determination of B. The procedure has been applied to the determination of B in pure Zr, Zr-0.5% Cu-0.5% Mo, and Zr-1.5% Sn-0.1% Fe-0.1% Cr-0.05% Ni alloys. Results are comparable with those obtained by methods requiring the separation of the B as methyl borate. (auth)
Classical field configurations and infrared slavery
NASA Astrophysics Data System (ADS)
Swanson, Mark S.
1987-09-01
The problem of determining the energy of two spinor particles interacting through massless-particle exchange is analyzed using the path-integral method. A form for the long-range interaction energy is obtained by analyzing an abridged vertex derived from the parent theory. This abridged vertex describes the radiation of zero-momentum particles by pointlike sources. A path-integral formalism for calculating the energy of the radiation field associated with this abridged vertex is developed and applications are made to determine the energy necessary for adiabatic separation of two sources in quantum electrodynamics and for an SU(2) Yang-Mills theory. The latter theory is shown to be consistent with confinement via infrared slavery.
A Simple Picaxe Microcontroller Pulse Source for Juxtacellular Neuronal Labelling †
Verberne, Anthony J. M.
2016-01-01
Juxtacellular neuronal labelling is a method that allows neurophysiologists to fill physiologically identified neurons with small positively charged marker molecules. Labelled neurons are identified by histochemical processing of brain sections along with immunohistochemical identification of neuropeptides, neurotransmitters, neurotransmitter transporters or biosynthetic enzymes. A microcontroller-based pulser circuit and an associated BASIC software script are described for incorporation into the design of a commercially available intracellular electrometer for use in juxtacellular neuronal labelling. Printed circuit board construction has been used for reliability and reproducibility. The current design obviates the need for a separate digital pulse source and simplifies the juxtacellular neuronal labelling procedure. PMID:28952589
An alternate approach to the production of radioisotopes for nuclear medicine applications
NASA Astrophysics Data System (ADS)
D'Auria, John M.; Keller, Roderich; Ladouceur, Keith; Lapi, Suzanne E.; Ruth, Thomas J.; Schmor, Paul
2013-03-01
There is a growing need for the production of radioisotopes for both diagnostic and therapeutic medical applications. Radioisotopes that are produced using the (n,γ) or (γ,n) reactions, however, typically result in samples with low specific activity (radioactivity/gram) due to the high abundance of target material of the same element. One method to effectively remove the isotopic impurity is electromagnetic mass separation. An Ion Source Test Facility has been constructed at TRIUMF to develop high-intensity, high-efficiency, reliable ion sources for the purification of radioactive isotopes, particularly those used in nuclear medicine. Studies in progress are presented.
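For readers unfamiliar with electromagnetic mass separation, the sketch below evaluates the dipole bend radius r = sqrt(2mV/q)/B for two neighbouring isotope masses, showing how a small mass difference becomes a spatial separation downstream. The charge state, accelerating voltage, field strength, and masses are illustrative assumptions, not parameters of the TRIUMF Ion Source Test Facility.

```python
import numpy as np

AMU = 1.66053906660e-27      # kg per atomic mass unit
E_CHARGE = 1.602176634e-19   # elementary charge, C

def bend_radius(mass_amu, charge_state, accel_voltage, b_field):
    """Radius of curvature of an ion in a dipole magnet.

    For an ion accelerated through accel_voltage (V) and bent by b_field (T):
        r = sqrt(2 m V / q) / B
    (from q V = m v^2 / 2 and r = m v / (q B)).
    """
    m = mass_amu * AMU
    q = charge_state * E_CHARGE
    return np.sqrt(2.0 * m * accel_voltage / q) / b_field

# Example: separation of two neighbouring isotope masses (hypothetical values)
r_a = bend_radius(98.0, 1, 30e3, 0.5)
r_b = bend_radius(99.0, 1, 30e3, 0.5)
print(f"bend-radius difference: {abs(r_b - r_a) * 1e3:.2f} mm")
```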
Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; ...
2015-09-01
The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient's tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.
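The point-source comparison above follows the familiar inverse-square relation, dose rate = Γ·A/d², which ignores tissue attenuation and the time-dependent biodistribution of 131I; that is why it overestimates at small separations. A minimal sketch, with the gamma-ray dose constant deliberately left as a user-supplied parameter rather than a quoted value:

```python
def point_source_dose_rate(activity_mbq, distance_m, gamma_constant):
    """Inverse-square point-source estimate of external dose rate.

    dose_rate = Gamma * A / d**2
    gamma_constant is the specific gamma-ray dose constant for the nuclide
    (e.g. in uSv.m^2/(MBq.h)); a tabulated value for 131I should be supplied
    by the user. No attenuation or biokinetics is modelled.
    """
    return gamma_constant * activity_mbq / distance_m ** 2

# Relative comparison across separation distances (unit activity, unit constant)
for d in (0.1, 0.3, 1.0, 3.0):
    print(f"{d:4.1f} m -> {point_source_dose_rate(1.0, d, gamma_constant=1.0):.2f} (relative)")
```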
Chen, L-W Antony; Watson, John G; Chow, Judith C; DuBois, Dave W; Herschberger, Lisa
2011-11-01
Chemical mass balance (CMB) and trajectory receptor models were applied to speciated measurements of particulate matter with aerodynamic diameter ≤2.5 μm (PM2.5) from the Speciation Trends Network (STN; part of the Chemical Speciation Network [CSN]) and the Interagency Monitoring of Protected Visual Environments (IMPROVE) network across the state of Minnesota as part of the Minnesota PM2.5 Source Apportionment Study (MPSAS). CMB equations were solved by the Unmix, positive matrix factorization (PMF), and effective variance (EV) methods, giving collective source contribution and uncertainty estimates. Geological source profiles developed from local dust materials were either incorporated into the EV-CMB model or used to verify factors derived from Unmix and PMF. Common sources include soil dust, calcium (Ca)-rich dust, diesel and gasoline vehicle exhausts, biomass burning, secondary sulfate, and secondary nitrate. Secondary sulfate and nitrate aerosols dominate PM2.5 mass (50-69%). Owing to traffic emissions, mobile sources outweigh area sources at urban sites, and vice versa at rural sites. Gasoline and diesel contributions can be separated using data from the STN, despite significant uncertainties. Major differences between MPSAS and earlier studies of similar environments appear to be the type and magnitude of stationary sources, but these sources are generally minor (<7%) in this and other studies. Ensemble back-trajectory analysis shows that the lower Midwestern states are the predominant source region for secondary ammoniated sulfate in Minnesota. It also suggests occasional substantial contributions of biomass burning and soil dust from out of state, although a quantitative separation of local and regional contributions was not achieved in the current study. Supplemental materials are available for this article. Go to the publisher's online edition of the Journal of the Air & Waste Management Association for a summary of input data, Unmix and PMF factor profiles, and additional maps.
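For orientation, the effective-variance CMB solution mentioned above can be written as an iteratively reweighted least-squares problem in which each species' weight combines its ambient measurement uncertainty with the profile uncertainties scaled by the current source-contribution estimates. The sketch below is a generic illustration of that scheme, not the EV-CMB software used in MPSAS.

```python
import numpy as np

def cmb_effective_variance(c, sigma_c, F, sigma_F, n_iter=20):
    """Minimal effective-variance CMB solver (illustrative sketch).

    c       : (n_species,) ambient concentrations
    sigma_c : (n_species,) their uncertainties
    F       : (n_species, n_sources) source profile abundances
    sigma_F : (n_species, n_sources) profile uncertainties
    Returns source-contribution estimates s with shape (n_sources,).
    """
    s = np.zeros(F.shape[1])
    for _ in range(n_iter):
        # Effective variance: measurement variance plus profile variance
        # scaled by the squared current contribution estimates.
        v_eff = sigma_c**2 + (sigma_F**2) @ (s**2)
        w = 1.0 / v_eff
        # Weighted least squares: (F^T W F) s = F^T W c
        A = F.T @ (w[:, None] * F)
        b = F.T @ (w * c)
        s = np.linalg.solve(A, b)
    return s
```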
Plasma separation process. Betacell (BCELL) code, user's manual
NASA Astrophysics Data System (ADS)
Taherzadeh, M.
1987-11-01
The emergence of clearly defined applications for (small or large) amounts of long-life and reliable power sources has given the design and production of betavoltaic systems a new life. Moreover, because of the availability of the Plasma Separation Program (PSP) at TRW, it is now possible to separate the most desirable radioisotopes for betacell power generating devices. A computer code, named BCELL, has been developed to model the betavoltaic concept by utilizing the available up-to-date source/cell parameters. In this program, attempts have been made to determine the betacell energy device maximum efficiency, the degradation due to the emitting source radiation, and source/cell lifetime power reduction processes. Additionally, a comparison is made between Schottky and PN junction devices for betacell battery design purposes. Computer code runs have been made to determine the J-V distribution function and the upper limit of the betacell generated power for specified energy sources. A Ni beta-emitting radioisotope was used for the energy source, and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a promethium source are also given here for comparison.
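The J-V and maximum-power quantities referred to above can be illustrated with a simple ideal-diode model of a betavoltaic junction in which the beta-generated current is treated as constant. This is only a sketch of the general concept: the parameter values are placeholders, and the BCELL code itself additionally models source degradation and lifetime effects that are omitted here.

```python
import numpy as np

K_B_T_Q = 0.02585  # thermal voltage kT/q at ~300 K, in volts

def betacell_jv(voltage, j_beta, j0):
    """Ideal-diode sketch of a betavoltaic J-V curve (not the BCELL model).

    j_beta : beta-generated current density, assumed constant (illustrative)
    j0     : junction saturation current density
    """
    return j_beta - j0 * (np.exp(voltage / K_B_T_Q) - 1.0)

# Scan the J-V curve for the maximum power point (hypothetical parameters, A/cm^2)
v = np.linspace(0.0, 0.4, 400)
j = betacell_jv(v, j_beta=1e-6, j0=1e-12)
p = v * j
print("approximate maximum power density:", p.max(), "W/cm^2")
```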
Dual initiation strip charge apparatus and methods for making and implementing the same
Jakaboski, Juan-Carlos [Albuquerque, NM]; Todd, Steven N [Rio Rancho, NM]; Polisar, Stephen [Albuquerque, NM]; Hughs, Chance [Tijeras, NM]
2011-03-22
A Dual Initiation Strip Charge (DISC) apparatus is initiated by a single initiation source and detonates a strip of explosive charge at two separate contacts. The reflections of the explosively induced stresses meet, creating a fracture that breaches a target along a generally single fracture contour while producing generally fragment-free scattering and no spallation. Methods for making and implementing a DISC apparatus provide numerous advantages over previous methods of creating explosive charges: they utilize steps for rapid prototyping; they implement efficient steps and designs for metering consistent, repeatable, and controlled amounts of high explosive; and they utilize readily available materials.
NASA Astrophysics Data System (ADS)
Bantcev, Dmitrii; Ganushkin, Dmitriy; Ekaykin, Alexey; Chistyakov, Kirill
2017-04-01
Stable isotope investigations were carried out during fieldwork in glacier basins of the Mongun-Taiga (southwestern Tuva) and Tsambagarav (northwestern Mongolia) mountain massifs in July 2016. These arid highlands are problematic in terms of the provision of water resources, and glaciers here play a large part in the nourishment of the rivers. Concentrations of oxygen-18 and deuterium, together with mineralization, were measured in samples of meltwater, precipitation, stream water, ice and snow. The stable isotope method was used to separate the glacier runoff: average isotopic characteristics for different water sources, such as glacier ice, snow patches and precipitation, were calculated, and the contribution of these sources to total runoff was estimated. The isotopic method was also used to estimate the contribution of meltwater from the ice cores of rock glaciers.
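The runoff separation described above rests on simple isotopic mass balance. A minimal two-component version is sketched below; the end-member values are hypothetical placeholders standing in for the measured averages of glacier ice, snow and precipitation.

```python
def two_component_mixing(delta_stream, delta_glacier, delta_snow):
    """Classic two-component isotopic hydrograph separation (generic sketch,
    not the authors' exact workflow).

    Solves  delta_stream = f * delta_glacier + (1 - f) * delta_snow  for f,
    where the deltas are end-member delta-18O (or delta-D) values in per mil.
    """
    return (delta_stream - delta_snow) / (delta_glacier - delta_snow)

# Illustrative values only (per mil); real end members come from field samples.
f_glacier = two_component_mixing(delta_stream=-14.2,
                                 delta_glacier=-15.5,
                                 delta_snow=-12.0)
print(f"glacier-melt fraction of runoff: {f_glacier:.2f}")
```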
The two major sources of arsenic exposure used in an arsenic risk assessment are water and diet. The extraction, separation and quantification of individual arsenic species from dietary sources is considered an area of uncertainty within the arsenic risk assessment. The uncertain...
Separation of overlapping dental arch objects using digital records of illuminated plaster casts.
Yadollahi, Mohammadreza; Procházka, Aleš; Kašparová, Magdaléna; Vyšata, Oldřich; Mařík, Vladimír
2015-07-11
Plaster casts of individual patients are important for orthodontic specialists during the treatment process, and their analysis is still a standard diagnostic tool. However, the growing capabilities of information technology enable their replacement by digital models obtained by complex scanning systems. This paper presents the possibility of using a digital camera as a simple instrument to obtain a set of digital images for analysis and evaluation of the treatment using appropriate mathematical tools of image processing. The methods studied in this paper include the segmentation of overlapping dental bodies and the use of different illumination sources to increase the reliability of the separation process. The circular Hough transform, region growing with multiple seed points, and the convex hull detection method are applied to the segmentation of orthodontic plaster cast images to identify dental arch objects and their sizes. The proposed algorithm presents a methodology for improving the accuracy of segmentation of dental arch components using combined illumination sources. Dental arch parameters and distances between the canines and premolars for different segmentation methods were used as a measure to compare the results obtained. A new method of segmentation of overlapping dental arch components using digital records of illuminated plaster casts provides information with the precision required for orthodontic treatment. The distance between corresponding teeth was evaluated with a mean error of 1.38%, and the Dice similarity coefficient of the evaluated dental body boundaries reached 0.9436 with a false positive rate [Formula: see text] and false negative rate [Formula: see text].
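Two of the segmentation steps named in the abstract, the circular Hough transform and convex hull detection, can be sketched with standard OpenCV calls as below. The preprocessing choices and radius limits are illustrative assumptions, not the parameters of the cited study, and the region-growing stage with multiple seed points is omitted.

```python
import cv2

def segment_dental_arch_objects(image_path, min_radius=20, max_radius=80):
    """Rough sketch of circular-Hough plus convex-hull segmentation of a
    plaster cast photograph (illustrative parameters only)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.medianBlur(gray, 5)

    # Circular Hough transform to locate approximately circular tooth crowns
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=2 * min_radius,
                               param1=100, param2=40,
                               minRadius=min_radius, maxRadius=max_radius)

    # Convex hulls of thresholded components approximate individual outlines
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hulls = [cv2.convexHull(c) for c in contours]
    return circles, hulls
```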
Lovley, Derek R; Nevin, Kelly
2015-11-03
The invention provides systems and methods for generating organic compounds using carbon dioxide as a source of carbon and electrical current as an energy source. In one embodiment, a reaction cell is provided having a cathode electrode and an anode electrode that are connected to a source of electrical power, and which are separated by a permeable membrane. A biological film is provided on the cathode. The biological film comprises a bacterium that can accept electrons and that can convert carbon dioxide to a carbon-bearing compound and water in a cathode half-reaction. At the anode, water is decomposed to free molecular oxygen and solvated protons in an anode half-reaction. The half-reactions are driven by the application of electrical current from an external source. Compounds that have been produced include acetate, butanol, 2-oxobutyrate, propanol, ethanol, and formate.
Electron energy recovery system for negative ion sources
Dagenhart, W.K.; Stirling, W.L.
1979-10-25
An electron energy recovery system for negative ion sources is provided. The system, employing crossed electric and magnetic fields, separates the electrons from the ions as they are extracted from the ion source plasma generator and before the ions are accelerated to their full energy. With the electric and magnetic fields oriented 90° to each other, the electrons remain at approximately the electrical potential at which they were generated. The electromagnetic forces cause the ions to be accelerated to the full accelerating supply voltage energy while being deflected through an angle of less than 90°. The electrons precess out of the accelerating field region into an electron recovery region where they are collected at a small fraction of the full accelerating supply energy. It is possible, by this method, to collect >90% of the electrons extracted along with the negative ions from a negative ion source beam at <4% of full energy.
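The physics behind the crossed-field separation can be illustrated with two textbook quantities: the E×B drift speed, which is mass independent, and the gyroradius, which at fixed kinetic energy scales with the square root of the particle mass and therefore differs by a factor of roughly forty between electrons and a light negative ion such as H⁻. The numbers in the sketch below are illustrative, not parameters of the described recovery system.

```python
import numpy as np

E_CHARGE = 1.602176634e-19     # C
M_ELECTRON = 9.1093837015e-31  # kg

def exb_drift_speed(e_field, b_field):
    """E x B drift speed |v| = E/B (same for electrons and ions)."""
    return e_field / b_field

def gyroradius(mass, charge, kinetic_energy_ev, b_field):
    """Larmor radius r = m*v/(q*B) for a particle of the given kinetic energy.
    Electrons gyrate on a much smaller radius than heavy negative ions, which
    is what lets the crossed-field region deflect the species differently."""
    v = np.sqrt(2.0 * kinetic_energy_ev * E_CHARGE / mass)
    return mass * v / (charge * b_field)

# Electron vs. H- ion (~1836 electron masses) at the same energy and field (assumed values)
m_hminus = 1836.0 * M_ELECTRON
print("ExB drift speed:", exb_drift_speed(1e4, 0.1), "m/s")
print("electron gyroradius:", gyroradius(M_ELECTRON, E_CHARGE, 100.0, 0.1), "m")
print("H- ion   gyroradius:", gyroradius(m_hminus, E_CHARGE, 100.0, 0.1), "m")
```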
NASA Astrophysics Data System (ADS)
Wan, Bo; Zhang, Xue-Ying; Chen, Liang; Ge, Hong-Lin; Ma, Fei; Zhang, Hong-Bin; Ju, Yong-Qin; Zhang, Yan-Bin; Li, Yan-Yan; Xu, Xiao-Wei
2015-11-01
A digital pulse shape discrimination system based on a programmable NI-5772 module has been established and tested with an EJ-301 liquid scintillation detector. The module was operated by programs developed in LabVIEW, with a sampling frequency of up to 1.6 GS/s. Standard gamma sources 22Na, 137Cs and 60Co were used to calibrate the EJ-301 liquid scintillation detector, and the gamma response function was obtained. Digital algorithms for the charge comparison method and the zero-crossing method have been developed. The experimental results show that both digital signal processing (DSP) algorithms can discriminate neutrons from γ-rays. Moreover, the zero-crossing method shows better n-γ discrimination at 80 keVee and lower, whereas the charge comparison method gives better results at higher thresholds. In addition, figures of merit (FOM) for detectors of two different dimensions were extracted at nine energy thresholds, and it was found that the smaller detector presented better n-γ separation for fission neutrons. Supported by National Natural Science Foundation of China (91226107, 11305229) and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03030300)
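As a generic illustration of the charge-comparison discrimination and the figure of merit referred to above, the sketch below computes a tail-to-total charge ratio for a baseline-subtracted digitized pulse and the standard FOM from the separation and widths of the neutron and gamma PSD peaks. The integration windows and the assumed positive pulse polarity are placeholders, not the settings used with the NI-5772 system.

```python
import numpy as np

def charge_comparison(pulse, baseline_samples=20, tail_start=30):
    """Charge-comparison PSD parameter for one digitized pulse (generic sketch).

    PSD = Q_tail / Q_total: neutron (proton-recoil) pulses carry a larger
    fraction of their light in the slow tail than gamma (electron-recoil)
    pulses. Assumes positive pulse polarity after baseline subtraction.
    """
    p = pulse - np.mean(pulse[:baseline_samples])   # baseline subtraction
    q_total = np.sum(p)
    q_tail = np.sum(p[tail_start:])
    return q_tail / q_total

def figure_of_merit(peak_separation, fwhm_neutron, fwhm_gamma):
    """Standard n-gamma discrimination FOM = D / (FWHM_n + FWHM_gamma)."""
    return peak_separation / (fwhm_neutron + fwhm_gamma)
```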